



Security: Beef Up Your IT Infrastructure With Tripwire


Volume: 01 | Issue: 04 | Pages: 112 | January 2013

Build A Home-Grown NAS With An Old Computer
Set Up Your Own Network Storage Using FreeNAS


FOSS Guides Obama To Victory

What's Hot In Android

Embedded Android Gets Popular!
Explore PHP On Your Android Device

Node.js Ruby Python PHP

India ` 100 | US $ 12 | Singapore S$ 9.5 | Malaysia MYR 19

Java EE





What Makes OpenShift A Hit?


30 Embedded Android Gets Popular!
32 Use Emscripten to Compile Code Into JavaScript
37 An Introduction to Socket Programming in C (UDP)
40 Explore PHP on Your Android Device
44 What A Native Developer Should Know About Android Security
47 What are OSGi Web Applications?

51 Build Your Own Home-Grown NAS with an Old Computer
56 Set Up Your Own Network Storage Using FreeNAS
59 A Peek Into Tripwire, The Open Source Security Tool

62 Axigen: An Advanced Mail Server That's Simple to Set Up

08 You Said It...
11 Offers of the Month
12 Q&A Powered By LFY Facebook


Start Your Own E-Preneur Venture Using Open Source Tools

14 New Products

18 Open Gadgets
22 FOSSBytes
105 FOSS Jobs
108 Tips & Tricks
110 Events & Editorial

Mageia 2.0
x86_64 installation DVD


A stable, secure GNU/Linux-based free operating system for laptops, desktops and servers. Mageia was created in 2010 as a fork of Mandriva Linux. Unlike Mandriva, the Mageia project is a community project whose goal is to develop a free Linux-based operating system. This DVD can be used to install Mageia 2, or to run it in live mode.


Workforce Development with

CompTIA Certification
The CompTIA Cloud Essentials certification demonstrates the knowledge of IT professionals in these subject areas:
- Business analysis (understanding the benefits of various forms of cloud computing)
- Participating in or selling services related to cloud computing projects and deployment
- Cloud operations
- Cloud governance
- Cloud security
Training Duration: 2 Days Full Time

The CompTIA Storage+ Powered by SNIA professional certification demonstrates the knowledge of IT professionals in these subject areas:
- Storage Components
- Connectivity
- Storage Management
- Data Protection
- Storage Performance
Training Duration: 5 Days Full Time

CompTIA Linux+ Powered by LPI Certification Deliver Linux Solutions with Linux Qualified Staff CompTIA Linux+ Powered by LPI is the only high-stakes, vendor neutral certification available to validate the knowledge and skills required of Linux administrators.
Training Duration: 5 Days Full Time

An associate of Linux Learning Centre Private Limited


#2, 1 E Cross, 20 Main Road BTM 1st Stage, Bangalore-560029 Tel: 9449257731, 08042068082

All certification programs and education related to such programs are operated exclusively by CompTIA Certifications, LLC. Cloud Essentials, Linux+ and Storage+ are trademarks of CompTIA Properties, LLC in the U.S. and internationally. Other brands and company names mentioned herein may be trademarks or service marks of CompTIA Properties, LLC or of their respective owners.

The Only IT Magazine That's Read By Software Developers & IT Administrators


Open Source For You

(Formerly LINUX For You)

World's #1 Open Source Magazine

Get Noticed! Advertise Now! Contact Priyanka @ +91 11 4059 6614
EFY Enterprises Pvt Ltd D-87/1, Okhla Industrial Area, Phase 1, New Delhi 110 020

A wish list for OSFY!
I have been a long-time subscriber of LFY (now OSFY). While in service, I benefited immensely from your magazine, as I was able to maintain Linux servers ably, assisted by a private vendor. After retirement too, I work off and on, and Linux keeps me engaged always. I have introduced Linux where I work now, and my colleagues found Ubuntu useful. However, the Linux Mint 13 carried by you recently has found more takers because of its Cinnamon desktop. (There is some problem with MATE; some error message appears.) I recently installed Wine on CentOS from the source code, but its functionality was limited. However, when I installed Wine on Mint 13, I was in for a pleasant surprise. When I clicked Wine, it gave four options. I mounted a borrowed MS Office back-up copy (I work with OpenOffice only), and it got installed without any hitch. (I did know that software such as CrossOver or Bordeaux was available for this, but I was not comfortable paying in dollars using a credit card.) Now, I am sure my colleagues will switch over to Linux but use MS Office at the same time, thus enjoying the best of both worlds. Thank you, OSFY! I also have a few suggestions for the OSFY team:
1. Wine takes many hours to install through the download. Is it not possible to include it in the OSFY DVD, ready for installation?
2. I think you have stopped carrying articles in the print issue on the distros bundled with the DVD, for want of space. Can you not include such articles on the LFY website? I am sure an article on Wine will benefit many readers of your magazine.
3. In the September 2012 issue, articles such as 'Remote Access with SSH' and 'Installing Linux on a Mac' were quite interesting and useful for average readers like me. I would like to see more such articles.
4. Thank you for carrying the news item on the Raspberry Pi. You could have mentioned that it is available from KitsnSpares. I came to know about this only from Google.
5. Recently, I configured DHCP, Squid, Vsftpd and Samba successfully, mainly with the help of the Internet and a book by Christopher Negus. It took many frustrating days, as there is no single resource for a particular distro. For newcomers, articles on configuring CentOS, for instance, will be helpful.
V S Nagasayanam

ED: We are indeed happy to receive such insightful feedback from you. We appreciate the fact that you have spent your valuable time giving us an array of wonderful suggestions for OSFY. Thanks a lot. Your suggestions are most welcome and we will try our best to incorporate them in our future editions. Till then, keep reading OSFY and continue sending us your candid views.

8 | January 2013 | Open Source For You

Kudos to the author of Cyber Attacks Explained

The article series 'Cyber Attacks Explained' by Prashant Phatak was simply amazing! I must admit that he has done a fantastic job of enlightening students like me who are keen on the security sector. Presently, I am doing the RHS333 course, after having done the RHCE and RHCSA, and these articles have indeed helped me a lot. His articles in the August edition ('The Botnet Army') and the November issue ('Device Evasions') deserve special mention, as these were very informative. I hope he comes up with yet another good series of articles in the forthcoming issues, and I wish him all the luck. And finally, a big thanks to the OSFY team for providing a platform to authors like him.
Shivam Kotwalia

ED: We are indeed pleased to know that you have been a keen follower of the 'Cyber Attacks Explained' series and found it informative. We will certainly convey your words of appreciation to Prashant Phatak. We are sure he will come up with more such enriching write-ups for his fans in the future too. Thanks for sharing your experience with us. Please do keep reading the magazine; we look forward to your valuable feedback.

On changing the name of your magazine

I am indeed happy to know that LFY has changed its name to Open Source For You. It might not be so important for those of us who already use Linux or are involved in the FOSS ecosystem, but it definitely makes a lot of sense for people still not so aware of FOSS, who perhaps keep away from journals like LFY (now OSFY) because of the 'Linux' on the magazine's masthead. I have experienced this with fellow students in my college. I hope OSFY's readership increases considerably with this move, which will help FOSS spread further in India and elsewhere. All the best!
Vaidik Kapoor

ED: Thanks a lot for the feedback; we feel the same as you do. Linux is undoubtedly indispensable in the open source world, but the scope of open source technology has widened

beyond Linux. To keep pace with the changing times, we decided to rename our magazine.

Kudos to OSFY!
This is to congratulate you on the new topics and technologies being discussed in every issue, which reflect your team's hard work. Keep up the good work.
Saket Sinha

ED: It's great to get such encouraging words of appreciation from you! We are really glad to know that you feel this way and have taken the time to write to us about it. Our authors too deserve the accolades, for they come up with new and interesting topics for OSFY readers. Keep reading our magazine and do let us know what you feel about its content.

Where do I get OSFY in Kerala?
Manu M Das: The OSFY issue for November is still not available on the stands at Kollam, Kerala. Please let me know how I can get a copy.
Linux For You: Hi Manu! There is a distributor of our magazine in Thiruvananthapuram, Kerala. You can get in touch with him: Hari Kumar of India Book House, Thiruvananthapuram, at 0471-2475443. Hope this helps.

Manu M Das: I know the India Book House at Thiruvananthapuram, as I visit it regularly, and so will collect my copy from there. Thanks a lot for your reply.

Android rules!

Amit Mittal: I am really happy that you are carrying a lot of Android-based articles in your magazine. These have been very helpful for me and all those who want to make a career in Android app development.

Linux For You: Thanks a lot, Amit! Android is the open source champion and, in our future editions too, you will continue to see a number of articles on Android. All the best for your future endeavours!

Include articles on Virtualisation

Magimai Prakash: I follow OSFY on a regular basis, and so far the articles covered in your magazine have been really good. I expect more content on virtualisation, as it is one of the hottest and fastest growing areas of IT. Articles on this subject will definitely be of use to those who want to make a career in this fast-evolving domain. It would be wonderful if you could publish interesting articles on it on a regular basis.

Linux For You: It's great to get feedback from our readers on what kind of content they wish to see in our magazine. We do agree with you, and you will definitely see more content on virtualisation in our forthcoming issues.

Loved reading 'System Programming Using POSIX Threads'

Gaurav Sinha: I loved reading the article 'System Programming Using POSIX Threads' published in the November 2012 issue of OSFY. I got to know about yet another role of GDB. Thanks, OSFY!

Linux For You: We are happy to know that you liked the article. We will convey your words of appreciation to the author, Swati Mukhopadhay. We will also try our best to include more such articles.

Articles on Android Apps Development

Chaitanya Kumar: OSFY's team effort in coming out with good articles is commendable. Please publish more content on Android apps development and its security.

Linux For You: Thanks for the feedback. Android apps development is indeed a hot topic, and so is the security of Android devices. You can expect more articles on both these topics in future issues.

Please send your comments or suggestions to:

The Editor
D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020 Phone: 011-26810601/02/03, Fax: 011-26817563 Email:




Ridwan Fauzanullah: Can you tell me the steps on how to block sites on a Debian 5/6 server, with Windows machines as the clients?

Samantha de Lucio: I assume you are running a router. Here are some generic pointers. The bad, lazy netadmin way: edit the hosts file to point the IPs to nowhere or to a local site. The good netadmin way: configure network traffic filtering in the kernel, recompile and install the new kernel/modules, then install and configure ipfilter. Do a Google search for an ipfilter how-to; there are a couple of them (e.g., the Gentoo ipfilter how-to) that also list which options to enable in the kernel.

Lourence Joseph Navarro: I am a beginner in Linux. What kind of Linux distro is the best?

Prashant Dawar: You should go for Ubuntu 12.04 or Mint. Zorin OS is great if you are migrating from Windows.
Gaurav ChowMean: Linux Mint.
Lourence Joseph Navarro: Thanks a lot, friends!
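Samantha's two approaches to blocking can be sketched in a couple of commands. This is a minimal sketch, assuming root on the Debian box and that the Windows clients resolve DNS through it; example.com and 93.184.216.34 are placeholder names and addresses, and an iptables rule stands in here for the heavier ipfilter/kernel-rebuild route she describes:

```shell
# Hosts-file approach: resolve the unwanted site to the loopback address.
# Only works if the clients use this box for name resolution.
echo "127.0.0.1 example.com www.example.com" >> /etc/hosts

# Firewall approach: drop any forwarded traffic headed for the site's IP.
iptables -A FORWARD -d 93.184.216.34 -j DROP
```

Both commands need root, and the firewall rule must be re-applied at boot (e.g., from /etc/rc.local on Debian 5/6).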

Khagendra Padhy: How do I set up a modem connection from a Samsung mobile phone in Fedora?

Mithun Nair: Connect your mobile to the PC, then type the following in a terminal: nm-connection-editor. Go to the 'Mobile Broadband' tab in the network connections window that opens, click the 'Add' button, fill in the details of your service provider (APN and so on) and save them. Then click the network icon in the panel and choose 'Connect using mobile broadband'. Hurray, you are connected to the Net! Alternatively, install wvdial on your machine, look up the dialler defaults for your modem and service provider on Google, add them to wvdial.conf and get connected.

Sandeep Kane: Can I keep any two Linux-based operating systems, like Fedora and Linux 5.01, or Ubuntu and Linux 5.01? Please let me know if this is feasible.

Samantha de Lucio: Do you want to install two Linux distributions on one PC and choose which one to run at startup? This is possible if you install each on a separate partition and use a boot manager (e.g., GRUB or LILO) to select which one to start. Most Linux distros can't be installed on the same partition, due to different library versions, dependencies and directory/file hierarchies. Google for 'multiboot Linux howto' to find guides on this subject.
Gaurav ChowMean: This is quite simple. Just install them side by side. You don't have to make any modifications apart from the disk partitioning part.
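Mithun's wvdial route is driven by /etc/wvdial.conf. A minimal sketch of what the 'dialler defaults' he mentions might look like — the device node, APN and dial string below are placeholders; yours come from your handset and service provider:

```ini
[Dialer Defaults]
Modem = /dev/ttyACM0              ; device node the phone exposes (varies)
Baud = 460800
Init1 = ATZ
Init2 = AT+CGDCONT=1,"IP","www"   ; "www" is a placeholder APN
Phone = *99#                      ; a common GPRS dial string
Username = " "
Password = " "
Stupid Mode = on
```

After saving this, run wvdial as root to bring the connection up.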

Vipendra Verma: Which Linux distribution is best for Web development, on both the server and the desktop?

Vishal Shinde: Ubuntu.
Prashant Dawar: Both Ubuntu and Debian are great choices. Slackware is also preferred.
Mithun Nair: Red Hat and Fedora.

Ahmed Bhd: I cannot remove Red Hat Linux and install Windows XP. My computer has 128 MB of RAM and freezes all the time with Linux. I try to reinstall Windows and boot from the disk, but it says "Checking Hardware Configuration" for two seconds and then the screen goes black. The disk is fine; I just need to know how to format the drive.

Prashant Dawar: Method 1: With Red Hat, you can change the partition table and write it. Method 2: Debug your hard drive. Method 3: Download Partition Magic and format with it.
Yogesh Jadhav: Method 4: Format your computer with Windows 2000 Pro and then install Windows XP.
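A blunter variant of Prashant's Method 1, run from any Linux live CD, is to zero the partition table so the Windows XP installer sees a blank disk. This is a destructive sketch: /dev/sda is an assumption, and everything on that drive is lost:

```shell
# Confirm which disk you are about to wipe before running this.
fdisk -l

# Zero the first sector (MBR plus partition table) of the disk.
dd if=/dev/zero of=/dev/sda bs=512 count=1

# Now reboot from the XP CD and let its installer create fresh partitions.
```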

Image quality is poor as the photos have been directly taken from

Arush Salil: Can you please help me with how to connect to a WEP wireless network from the shell? PS: My wireless card is wlan0.

Chetan Ghorpade: Try searching for 'how to connect wifi from terminal linux'.

Gourav Dwivedi: How do I play an MP3 song in Fedora 14, Fedora 16 and other Linux-based OSs?

Avinash Mali: Use VLC, it'll play any media!
Azad Ansari: Installing VLC will enable you to play MP3s. 1. Open the Ubuntu Software Center. 2. Search for VLC. 3. Install it.
Neil Desai: You may be needing plug-ins for the
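For Arush's WEP question, the classic wireless-tools sequence goes roughly as follows. Run it as root; the ESSID and key are placeholders (the s: prefix tells iwconfig the key is an ASCII string):

```shell
ifconfig wlan0 up                                  # bring the interface up
iwconfig wlan0 essid "MyNetwork" key s:wepkey123456   # placeholder name/13-char key
dhclient wlan0                                     # obtain an IP address via DHCP
```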

Pradeep Singh: Can anyone tell me why we have different filesystems?

Azad Ansari: Because it gives flexibility, and lets Linux coexist with many other OSs.
Shreyas Joshi: The file system is a hierarchical collection of information, i.e., it represents how the different types of data are stored on your system. When you view it in a GUI, this division of data shows up as folders. Moreover, the file system acts as an index containing the physical location of all the data on the drive, and provides access to this data via addressing through memory blocks. The various file systems exist to cater to different storage parameters, such as performance, stability, reliability and easy addressing of data in a particular hierarchical model.
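Shreyas's point is easy to see on a running system: every mounted volume reports the filesystem type backing it, and the kernel advertises the types it supports. A quick look, assuming a Linux machine:

```shell
# Show which filesystem type backs each mounted volume.
df -T

# List every filesystem type the running kernel knows about.
cat /proc/filesystems
```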

Sai Prasad: What is the difference between UNIX and Linux? Help me, I'm just confused!

Aboobacker Mk: Unix was one of the popular OSs, and Linux is based on Unix.
Sachidananda Sahu: UNIX is the main operating system kernel, which contains the main parts like file management, memory management and process management. When you add some extra features to UNIX, it becomes Linux. In simple terms: UNIX + some extra features = Linux; UNIX + some other features = Ubuntu; UNIX + some extra features = Fedora.
Azad Ansari: UNIX requires the licensee to pay, while Linux is freely redistributable.

Ibrahim Musa: Is Solaris 11 useful as a desktop OS for Web browsing, chatting, playing audio, reading e-books, etc?

Pranavam Siddharthan: You would not need a fully loaded monster to do a child's work. Better stick to the desktop flavours of Linux. My preference goes to the Debian-based Mint and Ubuntu.
Ibrahim Musa: But I want to experience a 'real' UNIX; the others are just UNIX-'like'. I use Lubuntu Linux, by the way.

Karthigeyan Kith: What are the system requirements of Linux?

Aditya Magadi: Linux distributions run on literally any system configuration. You just need to download the one that matches your architecture (x86 for Intel processors).
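One quick way to see the Unix/Linux split Sai asked about is uname, which reports the kernel name: Linux distros all say 'Linux', while a 'real' UNIX such as Ibrahim's Solaris reports 'SunOS':

```shell
uname -s    # kernel name: 'Linux' on Linux, 'SunOS' on Solaris
uname -m    # machine architecture, e.g., x86_64 (relevant to Karthigeyan's question too)
```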








India's first network security education provider, now available at four different locations

SAVE MONEY & TIME: Get yourself CERTIFIED ONLINE. For details, call us.

result in RHCE/RHCSS exam

VPS Servers Email Marketing Solutions Java Hosting Shared Hosting Server Management Domain Registration

Special Offer Only For Online Training

Free original courseware. Hurry: take a demo (totally free). For more queries, call 09887789124
NAGPUR: GRRAS Linux Training and Development Center 53 Gokulpeth, Suvrna Building, Opp. Ram Nagar Bus Stand and Karnatka sangh Building, Ram Nagar Square, Nagpur- 440010, Phone: 0712-3224935, M: +91-9975998226, Email:

JAIPUR : GRRAS Linux Training and Development Center 219, Himmat Nagar, Behind Kiran Sweets, Gopalpura Turn, Tonk Road, Jaipur(Raj.) Tel: +91-141-3136868, +91- 9887789124, +91- 9785598711, Email:

PUNE: GRRAS Linux Training and Development Center 18, Sarvadarshan, Nal-stop, karve Road, Opposite Sarswat-co-op Bank, Pune 411004 M: +91-9975998226, +91-7798814786 Email:
Swipe Legend Tab

Wishtel PrithV

Go Tech FunTab Class


Google Nexus 7

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP:

Android 4.1.1 (Jelly Bean)

Launch Date: MRP:

December 2012 ` 3,300

December 2012 ` 11,999 ` 11,999


December 2012 ` 7,999

November 2012 ` 22,412


` 3,300
7-inch TFT (LCD) capacitive display, 800 MHz processor, 2800 mAh battery, 0.3 MP camera, supports up to 32 GB MicroSD card, 3G, WiFi



` 7,999



` 16,599
17.78-cm IPS display, 1.2 GHz processor, 4325 mAh battery, 1.2 MP camera, 16 GB internal memory, 3G, WiFi


7-inch TFT LCD capacitive touchscreen, 800 x 400 pixels screen resolution, 1.5GHz processor, 2 MP rear and 1.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

23.1-cm (9.1-inch) capacitive touchscreen, 1.5 GHz processor, 5,000 mAh battery, 300K-pixel front-facing camera, 8 GB in-built memory, expandable up to 32 GB, 3G, WiFi

Swipe Velocity Tab




Penta T-Pad WS802C-2G



Android 4.1.1 (Jelly Bean)

Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

November 2012 ` 13,999 ` 11,490


Android 4.0
Launch Date: MRP: ESP:

November 2012 ` 8,299

Android 4.0
Launch Date: MRP: ESP:

November 2012 ` 12,999

October 2012 ` 14,999

8 inches (20.3 cms) capacitive touchscreen, 1024 x 768 pixels screen resolution, dual core Cortex A9 processor, 4500 mAH battery, 2 MP rear camera, 4 GB internal storage, expandable up to 32 GB, WiFi


` 12,999


` 8,299

8-inch TFT (LCD), multi-touch capacitive touchscreen, 1024 x 768 pixels screen resolution, 1.2 GHz processor, 6000 mAh battery, 2 MP rear camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

8 inches multi-touch capacitive screen, 800 x 600 pixels screen resolution, 5000 mAH battery, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi


` 14,999
9.7 Inches multi-touch capacitive touchscreen display, 1.2 GHz Dual processor, 7000 mAh battery, 2 MP rear camera, 16 GB internal memory, expandable upto 32 GB, 3G, WiFi Retailer/Website:

Zync Z1000

iBall Slide 3G 7334


Penta T-Pad WS702C


Micromax A101

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 10,990 ` 10,990

9.7 Inches capacitive touchscreen, 1024 x 768 pixels screen resolution, 1.5 GHz processor, 7000 mAh battery, 2.0 MP rear camera, 8 GB internal memory, expandable up to 32 GB, 3G, Wifi Retailer/Website:

October 2012 ` 13,999 ` 9890

7- inch capacitive touchscreen, 1024x600 pixels screen resolution, 1 GHz processor, 4400 mAh 2MP rear camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 7,499 ` 6,899

7" capacitive multi touch screen, 1024 x 600 pixels screen resolution, 1.2 GHz processor, 2 MP rear camera, 8GB internal storage memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 14,999 ` 9,999

12.7-cm (5-inch) TFT WVGA touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 2000 mAh battery, 5 MP rear camera, 1.2 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Karbonn Smart Tab 3 Blade


Croma CRXT1075

EKEN Leopard C70


ZenFocus myZenTAB 708BH


Android 4.1
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 5,999 ` 5,999

17.8-cm (7-inch) capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 2 MP rear camera, 4 GB internal memory, expandable up to 32GB, 3G Retailer/Website:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 5,990 ` 4,990

17.7-cm (7-inch) capacitive touchscreen, 480 x 800 pixels screen resolution, 1.2 GHz processor, 2600 mAh battery, 2 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 6,990 ` 6,450

17.8-cm (7-inch) capacitive touchscreen, 1.2 GHz processor, 8 GB internal storage, expandable up to 32 GB, WiFi Retailer/Website:

October 2012 ` 7,899 ` 7,899

17.7-cm (7-inch) capacitive touchscreen, 800 x 400 pixels screen resolution, 1.2 Ghz processor, 3,200 mAh battery, 0.3 MP front camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Zen Ultratab A900

Zync Z930

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 7,999 ` 7,999

22.9-cm (9-inch) capacitive touchscreen, 800 x 480 pixels screen resolution, 1.5 GHz processor, 4,000 mAh battery, 1.3 MP front camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 5,499 ` 4,499

17.8-cm (7-inch) TFT capacitive touchscreen, 800 x 480 pixels screen resolution, 1.2 GHz processor, 3600 mAh battery, 0.3 MP front-facing camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Mercury mTAB7

Penta T-Pad WS703C


Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 6,499 ` 6,499

17.8-cm (7-inch) capacitive touch display, 480 x 800 pixels screen resolution, 1.2 GHz Cortex A8 processor, 2300 mAh battery, 0.3 MP front-facing camera for video calling, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 6,999 ` 6,899

7.0-inch multi-touch capacitive touchscreen, 800 x 480 pixels screen resolution, 0.3 MP front camera, 4 GB of internal memory, expandable to 32 GB, 3G, WiFi Retailer/Website:

RELAX!! No Need To Pay For Software.

Your Window to FREE professional Software

Adcom Tablet PC-APad 721C


Android 4.0
Launch Date: MRP: ESP:

Laptops Ambrane Mini


October 2012 ` 12,899 ` 9,250

7-inch (17.78-cm) capacitive touchscreen, 3500 mAh battery, 3.2 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Android 4.0
Launch Date: MRP: ESP:

November 2012 ` 5,499 ` 5,499


7 inches TFT capacitive touch screen, 800 x 480 pixel screen resolution, 1.2 GHz processor, 3000 mAh battery, Built-in 0.3 MP camera, WiFi

Netbooks Samsung N100




Launch Date: MRP: ESP:

Launch Date: MRP: ESP:

August 2011 ` 12,290 ` 11,840

25.7 cm WSVGA anti-reflective LED, 1024 x 600 pixel screen resolution, 1.33 GHz Intel Atom processor, 1 GB DDR3 memory, Intel GMA 3150 graphics, 250 GB HDD, 3-cell (40 W) battery, 4-in-1 card reader, 1.03 kg. Retailer/Website: Croma Store, Saket, New Delhi, +91 64643610

August 2011 ` 12,499 ` 12,000

25.7 cm LED-backlit screen, Intel Atom N455 processor, 1 GB DDR3 RAM expandable up to 2 GB, 220 GB storage, Bluetooth 3.0, Wi-Fi 802.11 b/g/n, 17.6 mm thick, 920 g. Retailer/Website: Eurotech Infosys, Nehru Place, Delhi, 9873679321

Free Download




Contact us: 080-4242-5042, E-mail:

Samsung Galaxy Music Duos

Lava Iris N400


iBall Andi 4.5H


Karbonn A30

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

December 2012 ` 8,999 ` 8,999


December 2012 ` 6,399

Android 4.0
Launch Date: MRP:

Android 4.0
Launch Date: MRP:

December 2012

December 2012


` 6,399
4-inch TFT capacitive touchscreen, 400 x 800 pixels screen resolution, 1 GHz processor, 1500 mAh battery, 5 MP camera, 127 MB internal memory, expandable up to 32 GB, 3G, WiFi


` 14,995

` 12,490
4.5-inch (11.43-cm) capacitive touchscreen, 960 x 540 pixels screen resolution, 1 GHz dual-core processor, 1600 mAh battery, 8 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi


` 12,990

` 11,500
5.9-inch capacitive touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 2500 mAh battery, 8 MP camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi


7.6-cm (3-inch) QVGA display touchscreen, 850 MHz processor, 1,300 mAh battery, 3 MP rear camera, 512 MB RAM, 4 GB internal memory, expandable up to 32 GB, WiFi

Lava Xolo A800



Lava Iris N320


Lava Xolo A700


Android 4.0
Launch Date: MRP: ESP:

December 2012 ` 11,999 ` 11,999

4.5-inch IPS LCD capacitive touchscreen, 960 x 540 pixels screen resolution, 1600 mAh battery, 8 MP camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Android 4.0
Launch Date: MRP: ESP:

Android 2.3
Launch Date: MRP:

December 2012

November 2012 ` 4,499

Android 4.0
Launch Date: MRP: ESP:

November 2012


` 15,999 ` 15,999
5.3-inch qHD touchscreen, 1 GHz dual-core processor, 8 MP camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi



` 3,999


` 9,999 ` 9,999
11.4-cm (4.5-inch) IPS capacitive touchscreen, 960 x 540 pixels screen resolution, 1 GHz dual core processor, 5 MP rear camera, 4 GB internal memory, expandable up to 32GB, 3G, WiFi


8.12-cm (3.2-inch) capacitive touchscreen, 240 x 320 pixels screen resolution, 1 GHz processor, 1400 mAh battery, 2 MP rear camera, 100 MB internal memory, expandable up to 32 GB, WiFi

Karbonn A15

Karbonn A5+

Intex Aqua 3.2


Lava XOLO X700


Android 4.0
Launch Date: MRP: ESP:

Android 2.3
Launch Date: MRP: ESP:

November 2012 ` 5,899 ` 5,899


November 2012 ` 5,990

Android 2.3
Launch Date: MRP:

Android 4.0
Launch Date: MRP:

October 2012

October 2012 ` 17,400



` 4,894
3.5-inch capacitive touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 1420 mAh battery, 3 MP camera, microSD card slot supporting up to 32 GB of expandable memory, 3G, WiFi


` 3790

` 3790
3.2-inch capacitive touchscreen, 320 x 240 pixels screen resolution, 1 GHz processor, 1,200 mAH battery, 2 MP rear camera, 512 MB RAM, expandable up to 32 GB, WiFi Retailer/Website:

` 14,000
10.9-cm (4.3-inch) qHD touchscreen, 960 x 540 pixels screen resolution, 1.2 GHz processor, 2000 mAh battery, 5 MP rear camera, memory expandable up to 32 GB, 3G, WiFi Retailer/Website:

10.2-cm (4-inch) LCD capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 1,420 mAh battery, 3 MP rear camera, microSD card slot supporting up to 32 GB of expandable memory, 3G, WiFi

Reliance Smart V6700


Sony Xperia J

Zync Z5

LG Optimus Vu

Android 2.3
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 6,777 ` 6,777

8.9-cm (3.5-inch) HVGA capacitive touchscreen, 320 x 480 pixels screen resolution, 800 MHz processor, 1400 mAh battery, 3 MP rear camera, WiFi. Retailer: All Reliance outlets

October 2012 ` 16,299 ` 15,840

4-inch TFT touch screen, 854 x 480 pixels, 1 GHz processor, 5 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 9,490 ` 9,490

5-inch TFT touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 2500 mAh battery, 8 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 34,500 ` 29,999

5-inch capacitive touchscreen, 1024 x 768 pixels screen resolution, 1 GHz processor, 2080 mAh battery, 8 MP rear camera, 3G, WiFi Retailer/Website:

Micromax A110 Superfone Canvas 2

Micromax A90S Superfone PIXEL


HTC Desire X

Karbonn A11

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 14,999 ` 9,999

5-inch TFT capacitive touch screen, 480 x 854 pixels screen resolution, 1 GHz processor, 2000 mAh battery, 8 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 12,990 ` 9,999

4.3-inch AMOLED touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 1600 mAh battery, 8 MP rear camera, 512 MB of built-in storage, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

October 2012 ` 19,799 ` 19,799

10.16-cm (4-inch) Super LCD WVGA display, 1GHz Qualcomm MSM8225 Snapdragon processor, 1650 mAh battery, 5 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 9,990 ` 8,499

10.2-cm (4-inch) capacitive touchscreen, 480 x 800 pixels screen resolution, 1500 mAh battery, 5 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

iBall Andi 4.3j


Spice Stellar Horizon Mi 500


Karbonn A21

Karbonn A9+

Android 2.3
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP:

October 2012 ` 9,499 ` 9,499

4.3-inch capacitive touch display, 1 GHz processor, 1,630 mAh and 900 mAh dual battery, 5 MP camera, 2 GB internal memory, 3G, WiFi Retailer/Website: Reliance Digital outlets

October 2012 ` 15,999 ` 12,499

12.7-cm (5-inch) multi-touch capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz dual-core processor, 2400 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 10,490


Android 4.0
Launch Date: MRP:

October 2012 ` 9,290


` 10,490
11.4 cm (4.5 inch) capacitive touchscreen, 1.2 GHz processor, 1800 mAh battery, 5 MP camera, 4 GB of internal memory expandable to 32 GB, 3G, WiFi Retailer/Website:

` 9,290
10.1-cm (4-inch) IPS WVGA display touchscreen, 1.2 GHz processor, 1420 mAh battery, 5 MP camera, 3G, WiFi Retailer/Website: www.

Karbonn A7+

Karbonn A1+

Idea Aurus

Android 2.3
Launch Date: MRP: ESP:

Android 2.3
Launch Date: MRP: ESP:

Android 2.3
Launch Date: MRP: ESP:

Samsung Galaxy Chat GT-B5330


October 2012 ` 6,990 ` 6,990

3.5-inch capacitive touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 1420 mAh battery, 5 MP camera, 157 MB internal memory, expandable to 32 GB, 3G, WiFi Retailer/Website:

October 2012 ` 4,290 ` 4,290

8.89-cm (3.5-inch) capacitive screen, 1 GHz processor, 1500 mAh battery, 3 MP rear camera, 3G, WiFi Retailer/Website: www.

September 2012 ` 7,190 ` 7,190

8.9-cm (3.5-inch) capacitive display, 800MHz processor, 5 MP rear camera, a front camera, 1300 mAh battery, 256 MB RAM, expandable up to 32 GB, 3G Retailer/Website:

Android 4.0
Launch Date: MRP: ESP:

September 2012 ` 10,000 ` 8,500

7.6-cm (3-inch) TFT LCD display, 320 x 240 pixels screen resolution, 850 MHz processor, 2 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Sony Xperia Tipo


Sony Xperia Tipo Dual


Samsung Galaxy Note 2


Sony Xperia Miro


Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

September 2012 ` 9,999 ` 9,399

8.1-cm (3.2-inch) TFT capacitive touchscreen, 320 x 480 pixels screen resolution, 800 MHz processor, 3.2 MP camera, 2.5 GB internal storage, expandable up to 32 GB, 3G, WiFi Retailer/Website:

September 2012 ` 10,499 ` 10,299

8.1-cm (3.2-inch) TFT capacitive touchscreen, 320 x 480 pixels screen resolution, 800 MHz processor, 3.2 MP camera, 2.5 GB internal storage, expandable up to 32 GB, 3G, WiFi Retailer/Website:

Android 4.0
Launch Date: MRP: ESP:

Android 4.0
Launch Date: MRP: ESP:

September 2012 ` 39,990 ` 38,990

5.5-inch HD Super AMOLED screen, 1280 x 720 pixels screen resolution, 1.6 GHz processor, 3100 mAh battery, 8 MP camera, 3G, WiFi Retailer/Website:

September 2012 ` 14,499 ` 14,499

3.5-inch TFT capacitive touchscreen, 320 x 480 pixels screen resolution, 800 MHz processor, 1500 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website:
Open Source For You | January 2013

WebOS may now function as an Android app
If you thought WebOS had become obsolete, here is some news for you. Phoenix International Communications is putting in efforts to make the 'dead' operating system functional as a standalone Android app. This will allow Android enthusiasts to use WebOS as an app on their Android smartphones without having to modify the device. It means that users will be able to run both WebOS and Android simultaneously on their smartphones, without having to reboot from one OS into the other. A glimpse of the project was first seen a couple of weeks back, when WebOS was seen operating as an Android app. However, at that time, the app was not able to get past the lock screen without crashing. The people working on the project have since released a new video, which shows the progress of the app.

Mozilla gets Facebook Messenger for Firefox

If you are a user of the Firefox browser, you can now chat away with your Facebook friends even when you are not actually on the site, thanks to the Mozilla developers, who have officially announced Facebook Messenger for the Firefox Web browser. The messenger is built on a new social API. According to the Mozilla blog, "Once you enable the feature, you'll get a social sidebar with your Facebook chat and updates, like new comments and photo tags. You'll also get notifications for messages, friend requests and more that you can respond to right from your Firefox toolbar." The posting elaborated on the new service by stating, "Facebook Messenger for Firefox lets you chat with friends and stay connected with their updates wherever you go on the Web, without needing to switch between tabs or open a new one. You can chat with your friends and family while doing anything from shopping online for the perfect gift, cheering your team on in the big game, watching a video or just surfing the Web. Of course, if you're not feeling social, you can easily hide the sidebar or even disable the feature." To experience this new feature, just upgrade to the latest Firefox Web browser, visit the Facebook Messenger for Firefox page and click 'Turn On'. Recently, Mozilla had added an MSN-customised version of its Firefox Web browser. Users can now have Microsoft's Bing search engine and MSN as their home page.

Gmail users can now send files up to 10 GB!

Sending a large attachment via e-mail has always been a pipedream of sorts. But Google's new plan to do away with the 25 MB cap and raise the limit to 10 GB via Google Drive integration has made this feasible. There is a big 'but', however: you obviously cannot send a 10 GB file as a conventional attachment. After the large file is uploaded, Google sends the recipients of the e-mail a link, which allows them to download the file from the sender's Google Drive account. The Google Drive integration will be available soon. Google has also updated its Gmail e-mail service with new search capabilities, letting users filter e-mails by size and date. The new update has a more flexible date

option, exact match, and other parameters. Google sources explain that, for example, to find e-mails larger than 5 MB, you can search for size:5m or larger:5m; or to find e-mails sent over a year ago, you can search for older_than:1y. Google has also introduced advanced search operators, which are query words or symbols that perform special actions in Gmail search. Users can get results from Google Drive, Google Calendar and other services along with e-mails when they type search queries in the Gmail search box.
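For the curious, the same operators can also be used programmatically. The sketch below is a minimal illustration, not an official Google example: it assumes IMAP access is enabled on the account, and passes a raw Gmail query over IMAP via Google's X-GM-RAW search extension (the helper names and account details are hypothetical).

```python
import imaplib

def gmail_query(*terms):
    # Join Gmail search operators (e.g. 'larger:5m', 'older_than:1y')
    # into the raw query string Gmail's search box would accept.
    return ' '.join(terms)

def search_gmail(user, password, *terms):
    # Run the same operators over IMAP using Google's X-GM-RAW extension,
    # which passes the raw Gmail query through to Gmail's search engine.
    imap = imaplib.IMAP4_SSL('imap.gmail.com')
    try:
        imap.login(user, password)
        imap.select('INBOX')
        typ, data = imap.search(None, 'X-GM-RAW', '"%s"' % gmail_query(*terms))
        return data[0].split()  # list of matching message IDs
    finally:
        imap.logout()
```

Calling, say, `search_gmail(user, password, 'larger:5m', 'older_than:1y')` would return the IDs of messages over 5 MB sent more than a year ago.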


India armed with its first 1 Gbps Internet connection

From downloading a two-hour high-definition movie in a mere 30 seconds to watching your favourite video on YouTube at a breathtaking pace, you can do it all now, thanks to Google, which recently launched its 1 Gbps (1000 Mbps) Internet connection at Startup Village in Kerala, the world's largest tech incubator. Startup Village is only the second place, after Kansas City, US, to be blessed with this super-fast connectivity. The facility was formally introduced on November 17, 2012, by the state's chief minister, Oommen Chandy, giving a shot in the arm to Startup Village in its mission to churn out world-class start-ups from Indian campuses. "Startup Village aims to build the elements of a world-class tech ecosystem to realise the dream of a Silicon Coast in India. The vision, at this grand scale, is driven by one of India's most successful IT entrepreneurs, Kris Gopalakrishnan, co-founder of Infosys, and is powered by the Department of Science and Technology, Government of India. The Startup Village at Kochi is the first location for this national pilot programme, which would be replicated at other parts of India in the coming months," said Sanjay Vijayakumar, chairman, Startup Village, in a statement.


Splashtop Streamer brings good news for Ubuntu users!

Wish to log on to your Ubuntu desktop from your Android or Apple device? Splashtop Streamer for Linux lets you do just that. Ubuntu users can now access their desktops from anywhere via a smartphone or a tablet with the beta version of


Splashtop Streamer, which supports the Linux-based operating system. Ubuntu users can install the Ubuntu package on their desktops and use Splashtop's mobile apps to connect to their computers remotely. The apps are available for Android and iOS smartphones. A nominal subscription fee of $1 or a yearly fee of $10 has to be paid; the price may vary for the tablet variants of the mobile app. According to Splashtop, this is a good alternative to standard virtual network computing and other remote desktop technologies. Cliff Miller, chief marketing officer, Splashtop, said, "Technologically savvy users will like the feature more than the general public will." In fact, this was the factor that motivated Splashtop's Ubuntu release. "We have been working on the Linux version for some time. We just needed to productise it to bring this Ubuntu release." The new streamer should be compatible with other Linux distributions as well, said Miller. However, the company is not supporting them officially, and users of other Linux distros may need to tweak the software a bit. The company may come up with versions for other Linux-based distributions apart from Ubuntu, depending upon market demand. According to a statement from Splashtop's CEO, Mark Lee, the company is giving users a few ways to tweak the configuration files to stream at different frame rates, which it thinks Linux folks will appreciate.

Enjoy Minecraft on Raspberry Pi

If you own a Raspberry Pi and have a passion for video games, you'll be happy to hear that one of the biggest game developers, Mojang, has introduced its Minecraft game for the ARM-powered, Linux-based Raspberry Pi mini

computer. The Raspberry Pi Foundation had given it the green light and had sent device samples to developers at Mojang to tweak it. The developers came up with a Raspberry Pi version of the game named Minecraft: Pi Edition. The game has support for many programming languages, and users can access its code too. The game is available for free. "Users can create a LAN set-up and start gaming while learning the basics of programming on the Raspberry Pi. Users can modify the game world with code," said Owen Hill of Mojang. A few weeks back, there was news about a game named BerryBots, which helps users learn the basics of programming while playing. Patrick Cupka, the proprietor of The Void, has developed BerryBots 1.0.0 for the Raspberry Pi, Linux and Mac OS X platforms. The BerryBots game requires a player to code a ship to sail across and complete a simple stage. The ship has to be programmed to shoot at other ships and to do other simple jobs. The BerryBots game is written in Lua. It seems the idea of learning while gaming is slowly picking up on the Raspberry Pi.

New improved Aakash-2 Android ICS tablet launched

The president of India, Pranab Mukherjee, recently launched the much-awaited Aakash 2 tablet in New Delhi. Packed with new features, the new version is based on a dual-core Cortex-A9 processor clocked at 1 GHz, along with 512 MB of RAM. Compared to the first Aakash tablet, which was based on a 366 MHz processor and an older ARM architecture, this new tablet is definitely more advanced. The new Aakash runs Android 4.0 Ice Cream Sandwich, and we expect it to be smooth, as the hardware backing it seems good enough for Android 4.0. The cost of the device is Rs 2,263. The Indian government will purchase it at this price and then distribute it to 220 million students across India in the coming five to six years at a subsidised price of Rs 1,130. "The government subsidises it by 50 per cent and it will be distributed to students at Rs 1,130. The government is also trying to motivate the state governments to further subsidise it, so that the Aakash 2 can be distributed to students for free," said Suneet Tuli, CEO, Datawind. Aakash 2 can run Linux distributions along with the Android 4.0 Ice Cream Sandwich operating system. The device can also do Aadhaar authentication and can remotely control a robot, according to HRD Ministry officials. The tablet will be primarily used for classroom teaching, and over 15,000 teachers at 250 colleges across the country have been provided training.

You can now boot any Linux distro securely

Linux users now have reason to cheer. Linux developer Matthew Garrett has reportedly resolved the Unified Extensible Firmware Interface (UEFI) Secure Boot issue for Linux users. He has released a version of his Shim Secure Boot bootloader, with which users can launch any Linux distribution on Secure Boot systems without the need to disable UEFI Secure Boot. The highlight is that Garrett's Shim binary has been duly signed by Microsoft, thereby making it executable by any type of UEFI firmware. Wondering how it works? Well, to start with, Shim will ask you for a key at the time of launch; once the key is provided, it will start any bootloader that has been signed with this key. Garrett explains on his blog that Linux distributors simply need to sign their UEFI bootloader (grubx64.efi) with a separate key, include this key on their installation medium and tell their users where to find the key when Shim asks for it. Anything else is up to the individual distributor; for example, it is possible to use signed kernel images and modules to implement a chain of trust for the entire boot process. The signed version of Shim saves Linux distributors the effort of having to get their own bootloaders signed by Microsoft, according to reports. Garrett had flagged the possible Secure Boot issues with Linux earlier, and worked towards developing an easy way of installing Linux on Secure Boot systems. The Linux Foundation, meanwhile, is still struggling with Microsoft's Secure Boot signing service. Interested users can download Shim 0.2 as source code and as a signed binary.


Rootkit malware haunts Linux users

Security concerns continue to plague the open source world! A malware sample has been detected that mainly targets the latest 64-bit Debian Squeeze kernel (2.6.32-5). Kaspersky Lab has named the malware Rootkit.Linux.Snakso.a, and Kaspersky Lab expert Marta Janus has described it as an outstanding sample. According to Janus' official posting, the rootkit targets 64-bit Linux platforms and uses advanced techniques to hide itself. The malware is still in the development phase: the binary is more than 500 KB, but its size is due to the fact that it hasn't been stripped, and some of the functions don't seem to be working optimally or are not fully implemented yet, said Janus in the posting. The threat is still active. "We weren't able to connect to the C&C server on the port used by the malware, but the malicious server is still active," added Janus. The threat appears to be complicated and sophisticated; the experts are expecting a kernel-mode binary component with advanced hooking capabilities. The malware keeps a really low profile, and it makes the attack look more transparent and low-level than before. Janus concluded by saying, "The rootkit shows a new approach to the drive-by download schema, and we can certainly expect more such malware in the future."

Facebook pushes employees to use Android smartphones

The craze for Android smartphones is not just confined to the geeks. The most 'liked' social networking website, Facebook, has also joined the bandwagon. If reports are to be believed, Facebook is asking its employees to say 'no' to Apple and switch to Android smartphones. Recently, Facebook designed a poster that indicated the growth of Android smartphones over iPhones, and is asking people to switch to Android-powered handsets. The strange part is that Facebook used to give out iPhones to its employees earlier. The aim of the Android effort is apparently to improve functionality for the Facebook app, which has been criticised as slow, thin on useful features, and a drain on resources. The aim of this so-called 'dog-fooding', in which employees are encouraged to use their own products, would be to improve the Facebook experience on the smartphone platform with the largest market share.

Skype 3.0 arrives on Android tablets!

This piece of news is sure to give all Android users a high. Skype introduced its Android app for the modern UI when Google released Android 4.0 (ICS). The app, however, was not optimised for bigger screens until now. Skype 3.0 for Android supports the tablet user interface and is for those who want to do voice chat and video conferencing from their Android tablets. Skype for Android phones worked perfectly, but there is nothing like having the option of running the software on tablets as well. The UI of Skype for tablets is a cross between Android's Holo UI and Skype's own style; both come together to offer a great experience in terms of look and feel. The new Skype also enables users to sign in with a Microsoft account. It even has improved audio and a couple of bug fixes, though Skype has not revealed too many details about the fixes. The new Skype shows how far it has progressed from its first Android app version. The development is a welcome change for Android fans. You can download Skype for your Android tablet from the Google Play Store.

Red Hat Enterprise Linux 6.4 Beta goes well with Microsoft

Red Hat has come up with a beta release of version 6.4 of Red Hat Enterprise Linux (RHEL). This version has improved security along with several new Microsoft-enabling features. Previously, version 6.3 was released with enhanced virtualisation scalability. Version 6.4 includes support for Microsoft Hyper-V Linux drivers, along with interoperability improvements with Microsoft Exchange, which Red Hat includes within the Evolution e-mail system in RHEL. "We had previously planned to integrate Microsoft Hyper-V drivers," said Ron Pacheco, senior manager, Product Marketing, Red Hat, in a report. "As is our practice with all aspects of Red Hat Enterprise Linux, we wait for the code (including drivers) to be accepted into the upstream community before they can be introduced into Red Hat Enterprise Linux 5.9, and now in Red Hat Enterprise Linux 6.4." The next major release of Red Hat Enterprise Linux, Version 7, will reportedly be out in the second half of 2013. The company aims to release a major new version of its OS every three years, along with updates about every six months, said Jim Totten, vice

president and general manager of Red Hat's Platform business unit. In a webcast, he said, "While we are not at a place where we are making announcements ... our general target is the second half of 2013 to see RHEL 7 enter the marketplace." While Red Hat is not revealing much about the release date, Totten said the OS will have improvements across its more than 2,000 packages, and that the key focus areas are supporting new hardware, file systems, security and performance.

Android tablets closing in on Apple's iPad

If a recent survey by IDC is to be believed, Android-powered devices are gaining ground at a fast pace on the market-leading iPads. IDC has raised its 2013 forecast number to 172.4 million, up from 165.9 million, and said that by 2016, worldwide shipments should reach 282.7 million. "Tablets continue to captivate consumers, and as the market shifts toward smaller, more mobile screen sizes and lower price points, we expect demand to accelerate in the fourth quarter and beyond," said Tom Mainelli, research director, Tablets, IDC. IDC's report stated, "Android tablets are gaining traction in the market thanks to solid products from Google, Amazon, Samsung, and others. And Apple's November iPad mini launch, along with its surprise refresh of the full-sized iPad, positions the company well for a strong holiday season." IDC now expects Android's worldwide tablet share to increase to 42.7 per cent for 2012, from 39.8 per cent in 2011. Apple's share is expected to slip to 53.8 per cent, from 56.3 per cent in 2011. "The breadth and depth of Android has taken full effect on the tablet market, as it has for the smartphone space," said Ryan Reith, an IDC analyst. "Android tablet shipments will certainly act as the catalyst for growth in the low-cost segment in emerging markets, given the platform's low barrier to entry on manufacturing. At the same time, top-tier companies like Samsung, Lenovo and Asus are all launching Android tablets that are similar to premium products, but are offered at much lower price points," IDC sources said.

Now, an open source alternative to Skype

Ubuntu users may know the open source Ekiga as a default Voice-over-Internet Protocol (VoIP) client in that popular Linux distribution. But now it has made a comeback as an alternative to none other than Skype. Arriving some three years after the previous release, Ekiga 4.0, also known as 'The Victory Release', is now available as a fresh new Skype alternative for users of Linux and Windows alike. "This is a major release with many major improvements," wrote the software's developers in the recent announcement on the project's site.

Microsoft invites Android horror stories from users!

As a strategy to promote its Windows Phone platform, Microsoft has asked users to narrate their Android horror stories. If you have one that can impress Microsoft, the company will reward you with a Windows Phone for free! It is not the first time Microsoft has resorted to such a strategy. At almost the same time last year, Microsoft launched a similar campaign to make annoyed Android users switch over to Windows Phone. The company declared that it got around 3,200 submissions from disgruntled Android users who had phones that were 'infected with malware'. Microsoft is playing the same strategy again, and this time it is using its Windows Phone Twitter account. The company has asked annoyed Android users to use the hashtag #DroidRage with their Android malware 'horror story'. That's not all! Microsoft has also shared some tips with Android users suffering from malware-infected devices. It has suggested the following:
1. Wait for your Android phone to get infected with malware.
2. Recover from SMS scam bill shock.
3. Skip Steps 1 and 2; buy a Windows Phone and connect with people you care about instead of some hacker plotting in a dank basement.
Will this strategy work again, particularly when Android has grown and Google is being extra-cautious about the malware factor? Let's wait and watch!



Embedded Android Gets Popular!

This article explains why Android is the preferred platform for modern embedded solutions. It also covers the extra market edge that you get with Android.


The Android mobile platform is being adopted by a broad range of embedded devices that span multiple industries and segments. Android-enabled custom solutions, such as the Kindle Fire by Amazon, have proven to be game-changing in the industry. According to Amazon, the Kindle Fire captured 22 per cent of the US tablet market, and increased the company's e-book sales by 175 per cent in 2011.

The rise of modern embedded systems

As we know, embedded systems control many devices that are in common use today. They range from portable devices such as digital watches and MP3 players, to large stationary installations like ATMs and vending machines. However, embedded systems have changed dramatically in recent years. Today, they are largely media-rich, connected and highly integrated. Many include graphical user interfaces with high-resolution 2D and 3D graphics. Additionally, nearly all embedded systems include IP networking stacks, and link connectivity via a combination of wired and wireless network interfaces. The core feature sets often rely upon connectivity. And last of all, for reasons such as power efficiency, size and performance, chipsets for embedded systems are designed to be highly integrated. This drastic change in the characteristics of modern embedded systems has given rise to advanced functionality and user experience needs, and Android helps address these needs.

Android's characteristics

Here are some of the key characteristics of the Android platform that make it suitable for modern embedded systems.

Dalvik Virtual Machine
Android's Dalvik Virtual Machine is specifically designed to support a diverse set of devices, where the applications must be sand-boxed for security, performance and reliability. Also, it works very well with limited processor speeds and RAM.

Hardware platforms
Android supports a variety of hardware platforms. Apart from ARM-based Android development phones, it also supports hardware platforms for prototyping and benchmarking Android systems, such as tablets and automotive SoCs.

Native Development Kit
Android supports a Native Development Kit (NDK) to embed components that make use of native C/C++ code in Android applications. The NDK helps address the needs of performance- and graphics-sensitive applications.

Optimised graphics and media support
Android provides support for a wide range of media formats through Stagefright, its custom media framework. It also provides its own 2D graphics library, but relies on OpenGL ES for its 3D capabilities. This makes it feasible to create small-sized embedded systems with high-end audio and video capabilities.

Telephony support
Android supports telephony, which is dependent on hardware. For this feature to work, device manufacturers need to create a HAL module to interface with their hardware, which is integrated as part of the Android build system.

Wireless connectivity
Android supports most wireless connectivity options, like Bluetooth, EDGE, 3G and Wi-Fi. This enables the embedded device to have a wide range of connectivity options to any third-party system.

Finally, Android has well-defined interfacing between the framework and its components, which helps to enhance or replace components as per the desired functionality. For example, the default launcher application can be enhanced or replaced with a different launcher application code base. The platform can be enhanced to support additional features and hardware, as desired.

The Android advantage

So let's understand how Android can help give an extra edge to your embedded system in the market.

A well accepted and recognised UX
Android user experience (UX) designs such as Frog's Feel UX, HTC Sense and many others have raised UX standards to quite a high level. This means that your system can enjoy the high-end UX capabilities that Android supports. With this edge, the learning curve for end users of your embedded system is no longer a big concern.

Forward-compatible apps
All APIs provided in the application framework are meant to be forward-compatible. Hence, apps developed for one version will continue working in future Android versions. However, platform-level changes will need to be ported when planning to upgrade the Android version of your custom solution.

Wide choice of hardware configurations
Android has wide support from the OEM and SoC community. Though it primarily supports ARM-based SoCs, Android now also supports x86-based SoCs. This provides a wide range of hardware configurations to choose from, depending on what fits your budget and system requirements.

Hassle-free supply chain commoditisation
Android's licensing terms are pretty friendly for both commercial and free open source applications. This is because all its core packages are open sourced under the terms of the Apache 2.0 licence.

No doubt, Android, with its solid foundation on Linux, sound ecosystem and developer community, together with its cutting-edge advantages, has truly proven itself as the platform of choice for any modern embedded system.

By: Pooja Maheshwari

The author is a technical architect at Impetus Technologies. She has about 12 years of IT experience, with wide exposure in the design and development of enterprise mobility solutions, mobile device management solutions and Android-based custom device solutions for enterprises. She is currently involved in R&D on the applicability of embedded Android for enterprises in various segments, to enhance their employee productivity and customer SLA levels.


This article gives an introduction to Emscripten, an LLVM-to-JavaScript compiler.

Use Emscripten to Compile Code Into JavaScript

Let's Try

Imagine that you have a very large project written in C/C++, and you now want it in JavaScript, so as to port it to the Web. You might ask, "Why JavaScript?" Well, you can run applications written in other high-level languages in the browser, but you need to install plugins for that, and it doesn't always work as expected. JavaScript, on the other hand, is currently supported in almost all Web browsers. With the coming of Node.js and MongoDB, and an array of other JavaScript-based front-ends and back-ends, it makes sense to port your favourite app. Rewriting the code in JavaScript is too tedious, so Emscripten is the tool that you are looking for. Emscripten is an open source LLVM (low-level virtual machine) to JavaScript compiler, using which you can compile C and C++ code into JavaScript! This avoids the need to rewrite code.

$ mkdir /path/to/clang-build
$ cd /path/to/clang-build
$ svn co llvm
$ cd llvm/tools
$ svn co clang
$ cd ../..
$ mkdir build
$ cd build
$ ../llvm/configure --enable-optimized --disable-assertions
$ make && sudo make install

Node.js (0.5.5 or above): To run the back-end JavaScript, it needs to be interpreted and executed, which is what Node.js does. We can clone Node.js with Git:
$ git clone

Getting started

Some of the requirements are the Emscripten code, LLVM with Clang, Node.js and Python 2.6. You need to clone the Emscripten source code, available at github.com/kripken. You can use your favourite Git client or the CLI, with git clone git:// LLVM with Clang (3.1 is the officially supported version): The LLVM Project is a collection of compiler and tool-chain technologies, and Clang is a new C/C++ compiler being developed on top of LLVM. So why don't we use GCC? Well, Clang can be a good replacement for GCC. Clang offers good compilation speed, nearly twice that of GCC. Also, Clang provides better diagnostics if your code fails at some point, so it doesn't take much time to navigate through the code and fix it. Here are the steps to build and install Clang:
32 | January 2013 | OPEn SOurCE FOr yOu

After cloning, you need to build it, as follows:

$ cd node $ ./configure $ make $ sudo make install

Now that you are ready with the prerequisites, set up Emscripten. Check if Clang and Node work perfectly: go to the Emscripten directory, and run the following commands:
$ clang tests/hello_world.cpp $ ./a.out


This should output "Hello World!" if all is working well. Next, try the command given below:
$ node tests/hello_world.js


This should also show the same output, "Hello World!", if all is well. Now open the file ~/.emscripten (the settings file) to make a few changes. First, make the Emscripten root your current Emscripten directory:
JAVA = 'java'
EMSCRIPTEN_ROOT = os.path.expanduser('/home/user/porting/emscripten') # this helps projects using Emscripten find it

Figure 1: A flowchart explaining the role of Emscripten: the Clang + LLVM front end turns high-level C/C++ code into machine-readable bitcode (or human-readable assembly), and the Emscripten back end emits JavaScript instead of actual machine code.

LLVM root: Just find in which directory clang++ is located, and make that directory your LLVM root:
LLVM_ROOT = os.path.expanduser('/home/user/porting/clang-build/build/Release/bin')
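Putting these settings together, a minimal ~/.emscripten might look like the following sketch. The file is read as plain Python by emcc; all paths here are illustrative and depend on where you built Clang and cloned Emscripten:

```python
# ~/.emscripten -- Emscripten settings file (read as Python by emcc)
import os

# Where the Emscripten checkout lives (illustrative path)
EMSCRIPTEN_ROOT = os.path.expanduser('/home/user/porting/emscripten')

# Directory containing the clang/clang++ binaries built earlier (illustrative path)
LLVM_ROOT = os.path.expanduser('/home/user/porting/clang-build/build/Release/bin')

# Interpreters used by the tool chain
NODE_JS = 'node'
JAVA = 'java'

TEMP_DIR = '/tmp'
```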

files for every source file in the build process, which are then converted to JavaScript (see Figure 1). So basically, the entire code is built using emcc, and once built, it is converted to JS. If you pass a .c/.cpp file to emcc, it outputs a .bc file that contains the bitcode; and if you pass .bc files to it, it outputs a .js file.

Applications ported with Emscripten

Also set NODE_JS = 'node' in the same file. Now, try running Emscripten:
$ ./emcc tests/hello_world.cpp
$ node a.out.js

This should print Hello World!. Using Emscripten, you have now transformed hello_world.cpp into an a.out.js file.

How Emscripten works


Let's take a look at the sample code in hello_world.cpp:

#include <stdio.h>

class Test {}; // This will fail in C mode

int main() {
    printf("hello, world!\n");
    return 1;
}

Here is a list of a few applications ported to the Web using Emscripten.
1. BananaBread engine: Cube 2 is a 3D game engine. It has been compiled into JavaScript and WebGL, and is known as the BananaBread engine. You can run it in your Web browser itself, without any plug-ins.
2. Sql.js: Porting SQLite to JavaScript resulted in sql.js. This is obtained by compiling the SQLite C code with Emscripten.
3. Speak.js: It uses eSpeak, which is an open source speech synthesiser. eSpeak was compiled from C++ to JavaScript as speak.js. This converts text to speech.
There are many more applications compiled from C/C++ to JS using Emscripten. The list can be found at https://github.com/kripken/emscripten/wiki. Most Web applications use JavaScript, and the task of porting becomes much easier with Emscripten. So try it out if you need it!

The output JavaScript code is a bit lengthy. You can find the whole code in /tests/hello_world.js in the Emscripten directory, but here is a small part of it:
if (typeof print === 'undefined') {
    this['print'] = printErr;
}
// *** Environment setup code ***
print('hello, world!');

By: Sowmya Ravidas

The author is currently pursuing her Bachelor of Technology degree in Computer Science at Amrita University, India. She has been actively contributing to open source technologies for the past year. Currently, she is working on security-related issues in embedded systems. You can reach her at


For U & Me

Start Your Own E-Preneur Venture Using Open Source Tools


For people interested in e-commerce, this article explains the basic concepts for starting an e-preneur venture (also known as an e-commerce website). It also lists various open source tools that help build such a business from scratch.

An e-preneur is someone who uses an opportunity in the digital world to start a venture, which contributes solely to the digital economy. So how is an e-preneur different from an entrepreneur? Entrepreneurs contribute to the economy (physical, digital, or both) through their ventures, but an e-preneur contributes only to the digital economy. Let's look at the case of Mark Zuckerberg. He saw that students at Harvard needed a common platform from where they could communicate with each other, or get updates about their friends without actually having to interact with them. He grabbed the opportunity and developed Facebook. But do you know what went into the making of Facebook? It was developed using PHP as the scripting language, MySQL as the back-end database, Linux as the OS platform and a few hundred dollars. So did you notice that all these are open source platforms (apart from the money, of course)? This article will show you the possibility of starting an e-preneur venture using open source tools, but before taking the plunge, let's warm up a bit and look at the key ingredients that go into making an e-preneur.

What does it take to be an e-preneur?

The essential requirement for an e-preneur is an idea, and the essentials of a successful idea are its uniqueness, originality, and the extent to which it satisfies the current as well as future needs of people. Beyond the idea itself:
To be a successful e-preneur, it is good to have an active interest in current and upcoming technologies, because it enables you to understand the nitty-gritty of the e-commerce platform. If you have an idea, and are technically competent enough to either develop an e-commerce portal or modify one according to your needs, there's very little left to be funded.
A strong and vast network of people helps. Such a network not only serves as a medium of advertising, but also provides you with valuable feedback on how to improve the offerings, serviceability, user experience, etc.
Skills like innovative thinking and having a long-term vision come in very handy to brighten the long-term prospects of any e-preneur venture.

Open-sourcing an e-preneur venture


An e-preneur venture goes through various stages of planning and development. These include the idea stage, developing the concept and then the platform, marketing, planning for expansion and growth, innovation and possibly even a redesign stage. All of these stages can be implemented or enhanced using open source tools.
The idea stage: Think of starting a venture, and millions of ideas bloom. However, only some of these ideas are actually feasible or practical. Even among the feasible ones, finding that one unique and original idea is like looking for a needle in a haystack. Idea-management tools can prove to be a blessing at this stage. Writing down all your ideas (even the stupid ones), sharing them with your partners and grouping them are some of the features that these tools provide. Ideatorrent (http://drupal.org/project/ideatorrent) and BBYIDX are two great open source idea-management tools.
The concept stage: Once an idea has evolved from the idea stage, it enters the concept stage. At this stage, it is further refined and improved using techniques such as brainstorming, focus groups, etc, and is converted into a concept. Using brain-mapping and collaboration tools can help speed up this stage. The best open source tool for brain mapping is FreeMind. Add to this the power of collaboration with Collabtive and create a


Figure 1: The funnel model for e-preneur evaluation

concept out of your idea in no time.
The platform development stage: A concept, once approved, enters the next stage of platform development. Selecting the technology to build the online platform and developing it is an important element of this phase. HTML5, CSS3, jQuery, PHP and MySQL are some of the best open source technologies available to develop your platform. Open source PHP frameworks like CodeIgniter and CakePHP can be used to build robust e-commerce portals. User interfaces, or the look and feel of the platform, can be designed using the GIMP and the Bluefish editor. If you wish to go completely open source, use any of the open source Linux flavours (Ubuntu, Mint, etc) as the base operating system.






Figure 2: The planning and development process as well as the evolution cycle of an e-preneur venture

In case you're not interested in developing the platform in-house, there are some excellent open source e-commerce platforms available. The best among them is OpenCart. It has all the features of a premium platform, like payment gateway integration, inventory management, catalogue management, etc. It is also supported by a vast community of developers, who publish plug-ins to enhance the platform's functionality. A list of some other open source e-commerce platforms is available online.
One of the key deciding factors in the success of any business is customer support. Customers are more demanding than kids, throw more tantrums than a four-year-old, and are hard to satisfy. The key to a customer's heart is the service and support a business provides once the transaction has been completed. Some great tools in this area are CitrusDB, RT: Request Tracker (http://bestpractical.com/rt/), Help Desk Software, etc.
The marketing stage: The next stage in this process is spreading the word about the venture. This requires using various marketing media like emails, media advertisements, newsletters, etc. Though the biggest boost to a marketing effort is happy customers, should you want to take advantage of other media, there are some great open source tools waiting for you, like OpenEMM. It provides features like newsletter management and email marketing, and has a rich interface and excellent reporting features. Besides this, you can go for advertisements in social media like Facebook, Twitter, etc. Google AdWords is another great tool, provided you are willing to shell out a few bucks.
Search Engine Optimisation (SEO) is another area an e-preneur should look into. SEO decides the fate of a website: if your website isn't among the top 10 results of a search engine, you might as well not run it.
A great open source tool to handle all your SEO troubles is SEO Panel. It provides a vast variety of SEO plug-ins, and has the capability to optimise a website for all the popular search engines.
The expansion and growth stage: Success in the marketing stage indicates that the venture is now ready for expansion and growth. It can be extended to markets that haven't been touched

yet, but promise a greater number of hits or users. Business Intelligence (BI) tools can provide great insights in this area. Some great open source tools for BI are Jaspersoft, Pentaho and Jedox. All three provide excellent reporting, dashboard and analytics capabilities. The latest release of Jaspersoft uses HTML5 for data visualisations.
The innovation and redesign stage: A major challenge that e-preneurs face is keeping their ventures in line with the latest developments in technology. Changes in technology take place at a very fast pace; what seems attractive today will look ugly tomorrow. Thus, e-preneurs have to come up with innovative ways to surprise their consumers. I personally feel that the only tools available for this stage are creativity and your imagination. Coupled with the GIMP and the Bluefish editor, they can really enhance the user experience.
The evolution of an e-preneur venture is an ongoing process, and does not end here. Rather, it keeps going through these stages cyclically. Figure 2 shows the stages of planning and developing an e-preneur venture, and the flow of its evolution.

The challenges

Let's now look at some of the risks faced by an e-preneur.
Risk of failure: The chances of an e-preneur venture failing are much higher than for any other type of business, because of various factors like competition, the threat of substitutes, the threat of new entrants, etc.
Technical risks: The technical risk in an e-preneur venture is very high, as it is highly technology-dependent. For example, the website may not be accessible due to servers being down, and hence faces a loss of credibility. The Internet is full of crackers and malicious attackers; it is hard and expensive to safeguard the website from them. Though they can be traced, much of the damage to your venture's credibility would already have been done by then.
Failure to innovate: An e-preneur might not be able to foresee upcoming trends in technology, and hence may fail to make timely innovations. After a certain period, consumers expect to see something new. Failure to surprise them results in a loss of customer loyalty.
E-commerce is growing at a rapid pace in our country because people have started accepting the trend of online shopping. Online stores are fast becoming the new hangouts for all kinds of shoppers. In this article, I have attempted to familiarise readers with the basic concepts of e-preneuring, and how they can use open source tools to reduce the investment and time-to-market for their venture.

By Uday Mittal
The author is an open source enthusiast who likes to experiment with new technologies. He provides SME and personalised solutions and can be reached at
Let's Try


An Introduction to Socket Programming in C (UDP)

This article explores the basics of writing programs using sockets in C, with BSD sockets and the IP protocol version 4.

Network applications, like file transfer programs, need to conform to some communication protocol. The most common protocol suite is TCP/IP, which is layered: the application, transport, network and link layers. You can actually look at a socket as an extension of inter-process communication techniques like pipes, FIFOs, message queues, etc. These are limited to a single machine. There are also UNIX domain sockets, which can be used to communicate only between processes on the same machine. However, I am talking about TCP, UDP or SCTP inter-machine network sockets, which use the services of the transport layer and can also directly get the services of the network layer (raw sockets). With these sockets, in addition to local communication on a single machine, you can communicate between different machines running different operating systems, and each program can be written in a different programming language from the other. Here, we look at the

basics of using UDP sockets.
What does a socket program look like? A typical socket program might have two source files, one for the client and one for the server. For instance, my source files are lfy_udp_client.c and lfy_udp_server.c, which can be downloaded from The two code snippets below represent two terminal windows, with each program compiled and the executables run; the output is shown below:
[localhost udp]$ gcc lfy_udp_server.c -o myserver
[localhost udp]$ ./myserver
UDP Server - Waiting for Client data
Data Received from
I have received Testing
UDP Server - Waiting for Client data


[localhost udp]$ gcc lfy_udp_client.c -o myclient
[localhost udp]$ ./myclient 11710
Client Starting service
Enter Data For the server
Testing
Data Sent To Server
Data Received from server Testing
Enter Data For the server

The source code

Though client and server generally mean processes running on different machines, in my example, both client and server are running on the same Fedora 15 machine. The server process typically listens on a particular port number; in this case, I have randomly chosen 11710. The client should know the IP (Internet Protocol) address of the machine on which the server is running, as well as the port number. (Here the address is 127.0.0.1, the loop-back address, since I am using the same machine to run the client and the server.) The client requests some service from the server, which serves the client and typically waits for another client request. In this example, the service is simply returning the string received from the client, which is then displayed by the client. After running the server, you can use the netstat command to see the socket opened by the server:
[localhost udp]$ netstat -a | grep 11710
udp 0 0 *:11710 *:*

Now please refer to the downloaded source code. The socket function returns a socket descriptor (IPv4, SOCK_DGRAM (UDP)), referred to by the variable sd. An address structure, struct sockaddr_in (whose declaration is in the file /usr/include/netinet/in.h in Linux), is needed. Its declaration is made available in the source code by the inclusion of <netinet/in.h>. In the client code, you need to populate this address structure (the serveraddress variable) with the address family, the server's IP address, and the port number. Note that this needs to be populated in the big-endian (network byte) order, so use functions/macros like htons and inet_addr. htons(atoi(argv[2])) transforms the string the user passes as the command-line argument into a big-endian short integer. Similarly, inet_addr(argv[1]) transforms the IP address command-line argument into a 32-bit IPv4 address in the big-endian order.
serveraddress.sin_family = AF_INET;
serveraddress.sin_port = htons(atoi(argv[2])); // PORT NO
serveraddress.sin_addr.s_addr = inet_addr(argv[1]); // ADDRESS
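That population step can be wrapped in a small helper, sketched below. The function name fill_ipv4_address is our own, not from the article's source files; note that inet_addr reports a bad address string by returning INADDR_NONE:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>

/* Fill an IPv4 socket address the way the client code does, putting
 * the port and the dotted-decimal IP into network (big-endian) order.
 * Returns 0 on success, -1 if the IP string cannot be parsed.
 * (Caveat: inet_addr cannot distinguish "255.255.255.255" from an
 * error, since both yield INADDR_NONE.) */
int fill_ipv4_address(struct sockaddr_in *sa, const char *ip, uint16_t port)
{
    memset(sa, 0, sizeof(*sa));
    sa->sin_family = AF_INET;
    sa->sin_port = htons(port);           /* PORT NO */
    sa->sin_addr.s_addr = inet_addr(ip);  /* ADDRESS */
    return sa->sin_addr.s_addr == INADDR_NONE ? -1 : 0;
}
```

Zeroing the whole structure first is good practice, since some systems have extra padding fields in struct sockaddr_in.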

The output shows that one UDP socket is available, bound to port 11710. Please remember that the netstat command is very useful when you work with sockets. Now let us go over the simple algorithms for our UDP client and server.

I used the functions recvfrom and sendto to receive and send UDP datagrams, respectively. The sendto function prototype is as follows:
ssize_t sendto(int sockfd, const void *buf, size_t len, int flags,
               const struct sockaddr *dest_addr, socklen_t addrlen);

The algorithm for a simple UDP client

Get the socket descriptor (using the socket function).
Update a socket address structure with the server's IP address and port.
Get data for sending in the datagram.
Send the datagram to the server, using the address of the server.
Receive the datagram from the server (here, the process waits, that is, gets blocked, until the recvfrom function returns).
Close the socket descriptor.

The first argument is the socket descriptor, the second is the array from where to send data, and the third is the number of bytes to send. We have the fourth argument (flags) as 0. This supports a few options, which you can learn from the manual page. The fifth argument is the address structure, which needs to be populated (in the client, it is populated with the servers IP address and port number). The sixth argument is the size of this address structure. Let us now understand the recvfrom function. Its prototype is as shown below:
ssize_t recvfrom(int s, void *buf, size_t len, int flags,
                 struct sockaddr *from, socklen_t *fromlen);

The algorithm for a simple UDP server

Get the socket descriptor (using the socket function).
Update a socket address structure with the local address and port number.
Associate (bind) the above address with the socket.
Repeat forever {
    Receive a datagram from a client (here, the process waits, that is, gets blocked, until the recvfrom function returns).
    Process the received data.
    Send a datagram to the client from whom the datagram was received.
}

The first three arguments are, respectively, the socket descriptor, the array where the received data is to be stored, and the length of that array. The fourth is the flags argument, set to 0. The fifth is the socket address structure but, unlike in sendto, its purpose is to store the IP address and the port number of the sender. Thus, when recvfrom returns after receiving a datagram, you know who has sent the datagram by checking the contents of that structure. Also, the size of the address stored in that structure is returned via the sixth argument, socklen_t *fromlen, a pointer to this value. Please note the two lines of code from the server source code, which are shown below:
numbytes = recvfrom(sd, datareceived, BUFSIZE, 0,
                    (struct sockaddr *)&cliaddr, &length);
bytessent = sendto(sd, datareceived, numbytes, 0,
                   (struct sockaddr *)&cliaddr, length);
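These two calls are the heart of the echo server. As a self-contained sketch (a helper of our own, not the article's lfy_udp_* sources), the whole exchange can even be exercised inside one process, using two sockets on the loop-back interface; UDP buffers datagrams, so no second process or thread is needed:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Round-trip one datagram through a loop-back "server" socket, the
 * way myserver echoes data back to myclient. Returns the number of
 * bytes the client reads back, or -1 on error. */
ssize_t udp_echo_roundtrip(const char *msg, char *out, size_t outlen)
{
    int server = socket(AF_INET, SOCK_DGRAM, 0);
    int client = socket(AF_INET, SOCK_DGRAM, 0);
    if (server < 0 || client < 0)
        return -1;

    /* Bind the server to the loop-back address; port 0 lets the OS
     * pick a free port, which we read back with getsockname. */
    struct sockaddr_in srvaddr;
    memset(&srvaddr, 0, sizeof(srvaddr));
    srvaddr.sin_family = AF_INET;
    srvaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    srvaddr.sin_port = htons(0);
    if (bind(server, (struct sockaddr *)&srvaddr, sizeof(srvaddr)) < 0)
        return -1;
    socklen_t len = sizeof(srvaddr);
    getsockname(server, (struct sockaddr *)&srvaddr, &len);

    /* Client side: send the datagram to the server's address. */
    sendto(client, msg, strlen(msg), 0,
           (struct sockaddr *)&srvaddr, sizeof(srvaddr));

    /* Server side: recvfrom fills cliaddr with the sender's address... */
    char buf[512];
    struct sockaddr_in cliaddr;
    socklen_t clilen = sizeof(cliaddr);
    ssize_t n = recvfrom(server, buf, sizeof(buf), 0,
                         (struct sockaddr *)&cliaddr, &clilen);
    if (n < 0)
        return -1;

    /* ...which is exactly what sendto needs to echo the bytes back. */
    sendto(server, buf, (size_t)n, 0, (struct sockaddr *)&cliaddr, clilen);
    ssize_t got = recvfrom(client, out, outlen, 0, NULL, NULL);
    close(server);
    close(client);
    return got;
}
```

The sketch mirrors the server's logic: whatever lands in cliaddr after recvfrom is reused, untouched, as the destination for sendto.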
udp port 11710 tells tcpdump to capture UDP packets with the port number 11710. Here is tcpdump's captured output:
[root@localhost udp]# tcpdump -i lo -t udp port 11710
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
IP localhost.49485 > localhost.11710: UDP, length 6
IP localhost.11710 > localhost.49485: UDP, length 6
IP localhost.49485 > localhost.11710: UDP, length 5
IP localhost.11710 > localhost.49485: UDP, length 5

The address structure cliaddr is populated by recvfrom, so after that function returns successfully, the structure contains the IP address and port number of the client, which is then used in the sendto function to direct the server's response to the requesting client. I hope the basic logic is now clear.
The bind function associates a local address with the socket. Normally, it is used on the server side. Please note the following fragment of code:
serveraddress.sin_family = AF_INET;
serveraddress.sin_port = htons(MYPORT); // PORT NO
serveraddress.sin_addr.s_addr = htonl(INADDR_ANY); // IP ADDRESS
ret = bind(sd, (struct sockaddr *)&serveraddress, sizeof(serveraddress));

Here, in two runs, the strings hello and test were sent from the client side to the server and echoed back. The first and third rows are data sent from the client to the server; the other two are the server's responses to the client. The string lengths are 6 and 5 because the newline character, too, is accepted by the fgets function used to get keyboard input in the client. The number 49485 is the port number of the client (and 11710, of course, of the server). For the client, the port number is chosen by the OS when sendto is called. I hope from here on you will be able to extend your socket programming. In the next article, we take on the basics of socket programming in TCP.
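That OS-assigned (ephemeral) port is easy to observe from code: bind to port 0 and ask the kernel what it picked with getsockname. The helper below is a sketch of our own, not part of the article's sources:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a UDP socket to port 0 and ask the kernel, via getsockname,
 * which ephemeral port it actually handed out. Returns the port in
 * host byte order, or 0 on error. */
uint16_t ephemeral_udp_port(void)
{
    int sd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sd < 0)
        return 0;
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = htons(0);              /* 0 means "you choose" */
    if (bind(sd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(sd);
        return 0;
    }
    socklen_t len = sizeof(sa);
    getsockname(sd, (struct sockaddr *)&sa, &len);
    close(sd);
    return ntohs(sa.sin_port);
}
```

In the client shown in this article, the same implicit binding happens on the first sendto, which is why tcpdump saw port 49485.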
References
[1] W. Richard Stevens and Stephen A. Rago, Advanced Programming in the UNIX Environment, 2nd Edition, Pearson Education
[2] W. Richard Stevens, Bill Fenner and Andrew M. Rudoff, Unix Network Programming, Volume 1: The Sockets Networking API, 3rd Edition, Pearson Education

The bind function associates the server's port number and IP address with the socket after the structure is populated. Here, htonl(INADDR_ANY) is used to bind to all IP addresses on all network interfaces; the INADDR_ANY constant is a wildcard address. Also note the use of the inet_ntop function shown below:
printf("Data Received from %s", inet_ntop(AF_INET, &cliaddr.sin_addr, clientname, sizeof(clientname)));


This converts the 32-bit IPv4 address (which is in the network byte order) into dotted decimal notation (presentation format), and stores it in the clientname array, whose size is specified by sizeof(clientname). Please note that the server code runs in an infinite loop; to terminate it, use Ctrl-C. (While writing production code, it's better to write a signal handler for Ctrl-C and do any clean-up that may be needed. I have omitted this for simplicity.) Now let's try and analyse the running of the program, using the very useful packet analyser tcpdump (you need to be the root user to run it). I ran the server and client in one terminal each, and tcpdump in a third. The option -i is for specifying the capture interface, and -t for printing without any time stamps.

I would like to thank Tanmoy Bandypadhyay for his help in reviewing this article.

By: Swati Mukhopadhyay

The author has more than 12 years of experience in academics and corporate training. Her interest areas are digital logic, computer architecture, computer networking, data structures, Linux, programming languages like C and C++, shell scripting and Python scripting. For any queries or feedback, she can be reached at


If you are working with an emulator, you can do the same thing by downloading the respective files to the location Android/android-sdk/platform-tools/ and installing them from the command line with the adb install filename.apk command.
How To


Running the test program

By now, your Android device is ready to run your PHP scripts: just open the app menu and launch SL4A.

Figure 1: Demo scripts of PFA

Figure 2: Run the changed Hello script

So to all developers, happy coding!


By: Yatharth A Khatri

The author is a FOSS lover who enjoys working on all types of FOSS projects. He is currently doing research on human-computer interaction and is an Android developer too. You can reach him regarding any software issues at

Figure 3: The FIX ME error

There you will find a number of demo scripts provided by PFA, as you can see in Figure 1. Let's run hello_world.php: just tap it, and then the pencil icon to edit the code, which is shown below:
<?php

require_once('Android.php');
$droid = new Android();
$name = $droid->dialogGetInput('Hi!', 'What is your name?');
$droid->makeToast('Hello, ' . $name['result']);

As getInput has been deprecated by Android, and will cause an error, you must use dialogGetInput instead. Once done, save and run the code. The result is shown in Figures 2 and 3. As you run the code, you will see a FIX ME error message (Figure 3) that reads:
FIX ME! implement getprotobyname() bionic/libc/stubs.c:378

This occurs because Bionic (the Android libc) lacks getprotobyname. However, it does not affect the output of your script, and when you pack your PHP script into an APK file, the message goes away. For further writing of your own scripts, you can refer to the SL4A API reference at code.google.com/p/android-scripting/wiki/ApiReference. SL4A with PFA opens up Android development to the next level for Web developers, with definitely better programming opportunities, while offering users more flexible and interactive applications.




What A Native Developer Should Know About Android Security

The first version of Android was built over the Linux 2.6 kernel, customised to embedded (smartphone or tablet) needs. This article provides a basic overview of the Android security architecture and mentions some important changes that may have an impact when you port your native code to Android.


Android applications are written in Java. However, the Android framework for executing applications does not take the traditional JVM/JRE approach. Instead, the Java code is compiled into byte-code (as in a pure JVM environment), which is then compiled into another Android-specific format known as the Dalvik EXecutable (DEX) format. This is run in an Android-specific virtual machine, known as the Dalvik VM, which is the heart of Android, and the biggest divergence from the pure Java-JVM approach. The Dalvik VM is highly optimised, both in size and execution efficiency. The different approach is because of three primary factors:
Size: A .dex file is composed of several .class files; this gives an opportunity for duplicate data or code to be merged and shared among multiple .class files, which results in a smaller DEX file than is possible with multiple .class files.
Run-time efficiency: The Dalvik VM is highly optimised to execute DEX code, a major factor in the market's considerations of usable operating systems. We will later look at how this is achieved with Zygote.
An extra layer of security: DEX code is like the next level of abstraction over Java byte-code.
The Dalvik VM and the Androidised Linux kernel constitute the Android framework for applications, providing full separation between various applications: every application has its own unique instance of the Dalvik VM. This constitutes a very secure design. It means that for every application, Android starts a new instance of Dalvik. At first, this design may appear inefficient. However, a very important component known as Zygote is primarily meant to address this problem. Zygote, which is always running in the background, is responsible for speeding up Dalvik instance creation, as it has a pre-initialised footprint of the Dalvik core components always ready in memory. Whenever a new instance of Dalvik needs to be created, Zygote speeds it up by using the pre-initialised core.
Android end-user applications and exposed libraries are written in Java. The application developer selects suitable Java classes that expose the required functionality or service.
The Android security framework


From the developer's perspective, Android provides two levels of security to applications. Before discussing the details, let's look at how Android applications are shipped and installed on the device. An Android application package is a file with the extension .apk. When a user installs such a file, the Android Package Manager (PM) identifies all the native libraries, along with the other components in the package, and puts these libraries in some package-specific location. Also, this application is assigned a unique UID (Linux user ID) and GID (group ID), based on the developer who signed the application package. From then onwards, this application is identified by the Linux kernel as a unique user, a one-app-one-user philosophy. Thus, all the separation provided to one user from another is now applicable to the Android application! Also, permissions to the location where all application data (including native libraries) resides are as per the assigned UID/GID. Now let's come back to the security framework.


The Native Development Kit (NDK)

Native code is written in C. So how can native code be integrated into the Android framework, which is Java-dominant? Android has a solution for this: native development. All code written in C (or, in other words, all code that can directly run on Linux, outside Dalvik) is called native code. You compile your native code with the NDK provided by Android, and generate a Linux shared library (not in the DEX format). This shared library can be used in Java applications through the Java Native Interface (JNI) mechanism. It does not run in the Dalvik VM, of course, but directly on the Linux kernel.

Application sand-boxing

In Android, this term is used to describe the encapsulation provided to an application, with the help of Linux user-level (unique UID/GID) security. Each application runs in its own sandbox, and the Linux kernel maintains that separation, making every application's code and data accessible to that application only. Since this sand-boxing is done directly by the kernel, it is very secure. However, there is an exception to this rule: application developers can choose to share some data or native libraries between multiple applications. The restriction is that such applications should be created and signed by the same developer. In this case, more than one application (owned by the same vendor or developer) is assigned the same UID/GID by the Android package manager. This means that the default application sand-boxing can be broken by the developers, making everything shared among their applications. Let's look at how this model affects native code. Suppose an application is assigned a user ID <X>, and the application data storage location is <PATH>. Now, only <PATH> has the desired Linux permissions for the user ID <X>. So, if native code running as a part of this application tries to access anything at some other location, the Linux kernel will ensure that this access is denied!

Android permission framework

At the top level, which has the Android (Java) APIs, Android has another security layer. Android defines a large set of permissions (also known as capabilities), which an application must hold in order to use a specific service from the Android runtime. For example, a permission with the name INTERNET is necessary if the application wants access to network-related resources. If an application tries to use a service or resource for which it has no permission, then the service or resource is denied. The particular permission or capability is declared by the application developer in the application's Manifest file. This level of security is over and above application sand-boxing. The important point here is that the Android security framework works evenly for Java code and native code, even though the native code runs outside the Dalvik VM. Let's look at an example of how this model influences native code. When native code makes any calls to the Linux socket API, it needs to have the Android permission INTERNET. Similarly, if native code tries to write to the SD card, it should have the permission known as WRITE_EXTERNAL_STORAGE, and so the list goes on.
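The permissions named above are declared in the application's AndroidManifest.xml. For instance, an application whose native code opens sockets and writes to the SD card would declare:

```xml
<!-- Snippet from AndroidManifest.xml: permissions required by
     native code that uses sockets and writes to external storage. -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```

These entries go inside the manifest's root element; without them, the corresponding system calls made from native code fail at run time.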


The Android filesystem hierarchy

Traditional UNIX distributions follow a filesystem standard known as the Filesystem Hierarchy Standard (FHS). However, Android does not follow this, or any other existing standard, as far as filesystem hierarchy is concerned. Various Android-specific file systems are mounted by the kernel at start-up, at not-very-standard mount points. So, native code should avoid making any assumptions about the availability of a specific mount point. Even if a particular mount point (or a path like /dev, /tmp, etc.) exists on the Android device, we still cannot be sure it is used in the same way it is used in Linux. As mentioned earlier, Android provides application sand-boxing at the kernel level, which is implicitly applicable to the filesystem as well. For example, only that part of the filesystem is accessible to the application that has appropriate permissions, as determined by the application developer. However, there exists a concept of 'rooting' an Android device. This mechanism allows the user to escalate to root user privileges. Once this is done, the user can access or modify anything on the filesystem. The reason for this is that the root user is not subject to any constraints or restrictions, even at the kernel level, in line with traditional Linux.

Here are some points to consider:
1. Although Android provides an application separation mechanism, the developer has an option to override this behaviour. You, as a developer, can decide whether your application needs to use this feature.
2. SysV IPC is not supported in the Android C run-time library because of security considerations; so native-code locking primitives that depend on SysV IPC, if any, need to be changed to use an alternative mechanism.
3. Android never stops a (Linux) process, unless the system is extremely low on resources. So, you should optimise your code to minimise memory consumption.
4. Android has tweaked the Linux kernel's timer mechanism (along with other features not directly related here) in order to cater for the sleep/suspend/wake-up functionality of the device. If your native code uses any such mechanism, then make sure you check with the latest Android documentation.
5. Since any access to protected resources or services is guarded by the application permission framework, all access in native code needs to be analysed, and the required permissions should be identified. Whatever permissions the native code may need should be published for developers, so that they can include these permissions in their application's Manifest file.
6. Native code should not rely on any code or mechanism that needs root access, as this won't be available on standard Android. Native code should rely only on the permissions governed by Android's application permission framework, which can be specified in the application Manifest file.
7. Any hard-coded paths, like /tmp (in itself a bad practice), need to be removed, because such a path may not be available on the Android device, as discussed earlier. Yet, such code is still written, so you should scan your code to be sure!
8. Android is designed for the smartphone and tablet market. These devices need a very reliable user-response mechanism. If an application does not respond within a specified time period, it's possible that the user is unable to switch to another application, which is annoying! In Android, this has been taken care of very well. If an application does not respond within a specified period of time (the default is 10 seconds; I'm not sure if this is configurable or not), then Android assumes that there is something wrong with the application, and prompts the user with a message like, 'Application so-and-so not responding, do you want to stop it forcibly?'. So, it's very important that native code does not wait for too long, such as in locks.

As mentioned at the start, this is not a complete description of the Android security model.
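The advice about hard-coded paths can be acted on with a simple scan of the source tree; the file and directory names below are illustrative:

```shell
# Create a sample source file containing a hard-coded path,
# then scan the tree for occurrences of /tmp.
mkdir -p src
printf 'FILE *f = fopen("/tmp/cache", "r");\n' > src/demo.c
grep -rn '/tmp' src/
```

Any lines reported by grep are candidates for replacement with an application-specific location obtained at run time.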
However, I believe this should give readers enough insight into how the Android security framework works, especially for native code developers.

By: Raman Deep

The author is a principal software engineer with SafeNet's Software Rights Management division. He develops UNIX releases for products designed to help software vendors control how their applications are deployed and used. This article represents his views alone. He can be reached at

46 | January 2013 | Open Source For You


What are OSGi Web Applications?

This article covers OSGi concepts, the popular OSGi reference implementation called Equinox, and modular Web application development using the Equinox framework. Knowledge of Java-based Web application development would be sufficient to get the most out of this article.

Modular application development is a programming approach involving the assembly of distinct and loosely coupled modules to build composite applications. This approach has many benefits over monolithic application development. An important benefit is reduced complexity, as it is easier to build and debug a small, independent unit. With loosely coupled modules, application development becomes agile. One module can be substituted by another in less time, and this leads to more creative solutions. In addition, the compile and build time of applications is reduced, as each module can be built independently. This has a bigger impact in large-scale applications. By lazy-loading the modules, the application can be made to start in a flash. Due to these compelling benefits, modular application development is enticing to any developer. But there are a few challenges in implementing a module-based solution. Lack of elaborate language support is an important one. Many programming languages do not support modularity out-of-the-box. As a result, tooling has not matured extensively. This has led to low adoption rates and community support. OSGi fills these gaps in the Java arena, and hence it has become the de facto standard for modular development in Java. Though there is some serious effort going on in the Java Community Process (for the Java Development Kit itself) to add support for modularity, it is still in the nascent stages.

The history and the current state of OSGi

OSGi is a dynamic module system for Java. It is the outcome of an alliance formed between technology innovators who intended to create an open specification for building modular applications in Java. It started as a home gateway framework in March 1999. This alliance works on the OSGi specification, and publishes it periodically. The specification contains multiple chapters, each addressing a different aspect of OSGi. There are various specification sets, such as core, compendium, enterprise, mobile and residential. The latest core specification is version 5, and the compendium version is 4.3. Each part of the specification is owned by a voluntary expert group, and there is a board of directors that oversees the expert groups. Apart from writing specifications, the OSGi Alliance also creates reference implementations and test suites for the specification. It also certifies new implementations for compliance with the specification.
Overview of OSGi implementations
Since OSGi is an open standard, there are many implementations available. Equinox has been the reference implementation for OSGi for quite some time. The current specification implemented by Equinox is the Core R4 version. It also enjoys support from the Eclipse Foundation, as the Eclipse plug-in system is built on top of Equinox. There are several popular open source implementations like Felix, Knopflerfish and Concierge. The last one is a compact and optimised implementation suited for mobile platforms, but it conforms to the R3 specification only. Felix is a promising implementation, as it is compact and lightweight when compared to Equinox.

OSGi bundle

The basic unit of deployment in OSGi is called a bundle. An OSGi application is composed of multiple bundles. A bundle is a self-contained unit implementing a specific functionality; it works in conjunction with other bundles, sharing functionality with, and consuming functionality from, them. Bundles serve two important purposes: modularity and reusability. A bundle is a JAR file with a manifest containing metadata about the bundle. Bundles declare their dependencies in the manifest file, and the container takes care of wiring them at run time. Dependencies can even be versioned, and multiple versions can co-exist. In OSGi, every class is bundle-private; through its manifest, a bundle can specify which packages it shares, using the export directive, and other modules can use them via an import directive. The container takes care of delegating class loading requests between bundles. Each bundle has a lifecycle, which starts with the installed state, and ends once it passes the uninstalled state.

Another important aspect of OSGi is class loading. Each module has a class loader, and unlike the hierarchical class loaders in traditional Java applications, OSGi class loaders work as a network. Since OSGi is a dynamic module system, modules can be dynamically installed or uninstalled, and the container takes care of notifying the dependent bundles. The OSGi container passes a bundle context to each bundle, which is useful for querying and manipulating the state of the container and the running bundles. OSGi also has a concept of declarative services, similar to Web services. Any bundle can expose any number of services, and other bundles may consume them. OSGi services are registered in a service tracker, and bundles can look up the tracker and consume services accordingly. The difference between OSGi services and Web services is the absence of the network in the former. Since OSGi services and their consumers run in the same container, there is no network layer involved.
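The export/import and versioning headers described above live in the bundle's MANIFEST.MF. A minimal sketch for a hypothetical bundle (all names are illustrative) might look like this:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.greeter
Bundle-Version: 1.0.0
Export-Package: com.example.greeter.api;version="1.0.0"
Import-Package: org.osgi.framework
```

Only the classes in the exported package are visible to other bundles; any bundle importing com.example.greeter.api is wired to this one by the container at run time.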

Modularity support in Java

Initially, the Java language was not designed for modularity, but attempts were made later to include it. Some of the notable Java Specification Requests (JSRs) for bringing in modularity are JSR 277, 291 and 294. JSR 277 defines a static module system in Java, as opposed to OSGi's dynamic module system. JSR 291 defined a dynamic module system, but it was just a reference to the OSGi R4 specification, and did not pass through the normal JCP process, where an expert group debates and finalises the specifications. JSR 294 is an effort to add language and JVM support for module systems. Project Jigsaw is the reference implementation for JSR 294 from OpenJDK. An early-access Java SE 8 release with Project Jigsaw is available from the OpenJDK website, and full-blown Jigsaw may feature in future releases.

The OSGi framework

The OSGi framework has been developed as per the specifications published by the OSGi Alliance. It runs on the JVM and provides a secure environment in which bundles are deployed, run and managed, along with their inter-bundle dependencies. It is made up of the execution environment, bundles, the service registry and bundle life-cycle management components. The OSGi framework controls the life-cycle of a bundle.

Rise of Equinox

When Eclipse's own plug-in system was facing immense challenges in keeping up with what was required of it, a total revamp was called for. Project Equinox was started by the Eclipse development team to look at possible alternatives for Eclipse's runtime. Avalon, JMX and OSGi were studied carefully, and OSGi won the race, mainly because of its dynamic nature. The Equinox development team collaborated with the OSGi core specification expert group by providing inputs for the specification. The team also started implementing the specification, and that is how Equinox was born. Some of the key specification improvements that came through the Equinox team were bundle fragments, lazy loading of bundles, and require-bundle directives. These were not part of the R3 specification, but made it into the R4 specification. Equinox is currently the most-used implementation of OSGi, and the current version of Equinox corresponds to the Core R4 OSGi specification. Equinox does not implement the complete OSGi specification; it stops with just the core specification. Others, like the compendium, mobile, enterprise and vehicular specifications, are not implemented.
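The bundle life-cycle operations described above can be exercised by hand from the framework's interactive console. Equinox, for example, can be started with a console as follows (the JAR version number and bundle path are illustrative):

```
java -jar org.eclipse.osgi_3.7.2.jar -console

osgi> install file:/path/to/bundle.jar
osgi> ss                  <- short status of all bundles
osgi> start 2             <- start the bundle with ID 2
osgi> stop 2
osgi> uninstall 2
```

This makes the dynamic nature of the framework tangible: bundles can be installed, started, stopped and removed without restarting the JVM.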

How Eclipse is built around Equinox

Eclipse can be considered a group of bundles running on the Equinox OSGi runtime. Prior to version 3.0, Eclipse was using its own component runtime. The unit of deployment was a plug-in (just a JAR file, like any OSGi bundle). Each plug-in provided extension points, using which other bundles could extend the behaviour of the original module. Plug-ins used the plugin.xml file for declaring extension points and other metadata. After Equinox adoption, a compatibility layer was built to let pre-3.0 implementations continue to work without rewriting. Some Eclipse-specific metadata remained in plugin.xml, and the rest, like the bundle name, version, export-package, etc, moved to the bundle manifest file. Plug-in development environment (PDE) support was added in Eclipse to let developers extend Eclipse with their own plug-ins. In the plugins folder of Eclipse, there is a JAR named org.eclipse.osgi_<version_number>, and this is the Equinox runtime that runs Eclipse and every other plug-in built by developers. There are many optional and mandatory bundles implementing key features of OSGi, like security, logging, updates, preferences, etc. The Eclipse workbench is built using the Standard Widget Toolkit (SWT) and JFace, which provide a native look and feel. With an extensible plug-in mechanism and a rich cross-platform look and feel, Eclipse, which started as an integrated development environment (IDE), soon became a platform for rich client applications. In fact, it is called a rich client platform (RCP).


Tooling for Equinox

Theoretically, one would not need more than the JDK and a text editor to develop an Equinox bundle, but that would not be a productive approach. By selecting the right tools, any standard IDE like Eclipse or NetBeans can be equipped for Equinox development. Some of these tools are Bnd, Pax Runner, Pax Exam and the Eclipse PDE itself.

Bnd

Bnd, as its creator claims, is the Swiss army knife of OSGi development. It is used for creating and manipulating OSGi bundles. It takes a .bnd file as input, and creates OSGi bundles. It makes bundle development easier by providing many conveniences. For example, a bundle can export or import multiple packages, and this has to be declared in the manifest file. If hand-coding the manifest, the developer has to list all the packages, but the .bnd specification file accepts wildcards in package names, and creates the bundle manifest accordingly. Bnd can also do dependency analysis and update the manifest. Bnd can build bundles from sources available on the file system, or from other JAR files, as specified in the .bnd file. The bundle JAR can be verified for specification consistency by running Bnd's verify command. Bnd is available as an IDE plug-in, and also as Ant and Maven tasks. More information and downloads are available at

Pax Runner

Pax Runner makes switching between multiple open source implementations of OSGi easier. The developer can focus on building bundles instead of learning how to start and deploy bundles in multiple OSGi implementations. With a command-line switch, Pax Runner can be made to start different OSGi implementations. It also allows loading bundles from multiple sources like a URL, the file system, a Zip file, a Maven repository, etc. Bundles can be grouped as profiles, and can be deployed or un-deployed collectively. Pax Runner is available as a standalone tool, and also as an Eclipse plug-in. More details at display/paxrunner/Pax+Runner

Tycho

Tycho is a Maven-centric OSGi bundle development tool currently in the incubation phase. Tycho supports building bundles, plug-ins, fragments, RCP applications and update sites. Further details can be found at documentation.php.

Eclipse PDE

The Eclipse Plug-in Development Environment, built on top of the JDT, can also be used for building OSGi bundles, but is more suitable for the development of Eclipse plug-ins than OSGi bundles. For example, dependency management between bundles is handled differently in plug-ins and bundles. An Eclipse plug-in prefers the require-bundle directive, whereas an OSGi bundle uses the import-package directive. Eclipse is one of the most preferred Java development environments, and hence Eclipse with the Bnd plug-in would make a near-ideal environment for OSGi bundle development.

OSGi for Web applications

The traditional Web application development process was not dynamic, as the addition of new features involved redeployment, resulting in unavoidable downtime. OSGi for Web applications was sought as a solution to this problem. OSGi, being a dynamic module system, can reduce or avoid downtime by dynamically deploying or un-deploying modules. As an added benefit, application complexity is reduced with loosely coupled modules. Deployment of an OSGi-based Web application is possible in two ways:
Embedding an HTTP server in Equinox
Embedding Equinox in an existing application container

Embedding an HTTP server in Equinox

Equinox provides bundles that implement a basic HTTP service; in simple terms, a minimal application server or servlet container. Currently, there are two different implementations providing different levels of servlet support. The single-bundle implementation org.eclipse.equinox.http is suited for resource-constrained environments. Although the API is compatible with Servlet 2.4, this implementation offers limited support above Servlet 2.1.

Figure 1: Deployment of Web bundles in OSGi runtime

Figure 2: Deployment of Web bundles in application server

On the other hand, org.eclipse.equinox.http.jetty is a multi-bundle, Jetty-based implementation, which provides full support for the Servlet 2.4 specification. It is made up of the following bundles:
org.eclipse.equinox.http.jetty
org.eclipse.equinox.http.servlet
org.mortbay.jetty
org.apache.commons.logging
javax.servlet
org.eclipse.equinox.http.registry

Development and deployment of Web applications in Equinox is different from traditional Web applications. Though this is an application server capable of running servlets and serving HTML files, it cannot understand what a WAR file is. In an OSGi environment, what makes sense is a bundle, not a WAR, so Web applications should be packaged as bundles. Such a bundle should extend the HTTP service-specific extension points (org.eclipse.equinox.http.registry.resources and org.eclipse.equinox.http.registry.servlets) to be able to act as a Web application. The first extension point is used to specify the static files to be served by this Web application bundle, while the second is used to specify the servlets the container should run for this Web application bundle.
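Hooking into those two extension points is done in the bundle's plugin.xml. A sketch for a hypothetical bundle serving static files from a /web folder plus one servlet (the class name and aliases are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <!-- Static resources served at the root alias -->
   <extension point="org.eclipse.equinox.http.registry.resources">
      <resource alias="/" base-name="/web"/>
   </extension>
   <!-- A servlet mapped to /hello (class name is hypothetical) -->
   <extension point="org.eclipse.equinox.http.registry.servlets">
      <servlet alias="/hello" class="com.example.web.HelloServlet"/>
   </extension>
</plugin>
```

With this in place, the HTTP service bundle registers the resources and servlet automatically when the Web application bundle is started.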

Embedding Equinox in an existing application container

The unit of deployment in this methodology is a Web Application aRchive (WAR) file. The bridge servlet inside the WAR acts as a front controller, redirecting incoming HTTP requests to the correct OSGi bundle. It provides controls to manage the embedded framework. It reads the path information in the incoming request and decides where to redirect. If the incoming request is for controlling the framework, then it starts or stops the framework based on the request. If the request is for some other HTML file or servlet in the OSGi Web app, it redirects the request to the appropriate resource. Just as in the case of embedding an HTTP server in Equinox, Web OSGi bundles extending the HTTP-specific extension points are packaged inside WARs and deployed to the application server. If the Web application needs to serve JSPs, then the following Jasper OSGi bundles need to be deployed in the container:
org.apache.jasper_5.5.17.v200806031609.jar
org.eclipse.equinox.jsp.jasper_1.0.100.v200804270830.jar
org.eclipse.equinox.jsp.jasper.registry_1.0.0.v20080427-0830.jar

As a best practice, instead of deploying multiple OSGi bundles, all the bundles are placed inside bridge.war. A sample WAR package structure is shown in Figure 3.

Figure 3: Packaging Web application bundles in the bridge.war file

Testing OSGi applications

Unit testing of OSGi bundles can be done by writing JUnit test cases. Mock object frameworks like Mockito can be used to create mock objects in JUnit test cases. Dependency injection frameworks like Blueprint can be used to inject objects. Even container objects like BundleContext can be injected to simulate the container environment. Though this form of testing can uncover some defects, the absence of a real container in all these tests will let many defects through. A well-accepted solution to this problem is a tool from the Pax Runner family called Pax Exam. Pax Exam wraps the test cases in an OSGi service bundle, and deploys the bundle in the OSGi runtime using Pax Runner. Pax Exam invokes the tests and records the results. Pax Exam is driven fully by annotations, which are used to specify the tests to run, the bundles to deploy, the dependencies to inject, etc. Tycho, described earlier, can also be used for testing OSGi bundles.

By: AnandaVelMurugan Chandra Mohan
The author is a Java Web application developer. He likes to explore new technologies and build business cases around them.
How To


Build Your Own Home-Grown NAS with an Old Computer

For those wondering if they could get rid of all the external hard drives and cables lying around, or find a way to use that ancient CPU or laptop gathering dust in a cupboard, a Linux server could be the answer. Servers are incredibly useful even at home. This article demonstrates how to set up a home server for media streaming and back-ups.

What are servers, and why do I need one? A server is often just a computer that stores your data and serves it to other computers on the network. In a household scenario, it can be hidden away in a corner of your room, a target for all your device backups, and a source for all your media to your devices. Specifically, such a set-up is called Network-Attached Storage (NAS). There are a lot of out-of-the-box solutions to create a NAS, but most of them are very expensive, or don't even come close to the kind of customisability that Ubuntu Linux can provide. Ubuntu is one of the best and easiest to install and use flavours of Linux, and that is why we are using it for our NAS box. So what do you need to get started? There are different versions of Ubuntu available, and most should work fine for our purposes, but in this article, we will follow the easiest method of installation. This is what you will need:

A PC/laptop with a minimum of 512 MB of RAM and a 1 GHz processor, and a minimum of 5 GB free space for your server. It is recommended that you set up your server close to your router, so you can use a wired (Ethernet) connection instead of Wi-Fi; wired is much faster.
An Ubuntu live CD, downloaded from the Ubuntu website. We will be using the Desktop version rather than the Server version, since it's easier to configure and set up. You could alternatively burn the image to a USB flash drive, if your soon-to-be server machine does not have a CD drive. Links to help you do this are given at the end of the article.
Hard drives to store your media/back-ups. You don't need to get new hard disks for this; we will be sharing the content on your existing disks. You could always plug in more storage if you plan on buying new hard disks.

Setting up Ubuntu

Before beginning the installation, make sure you back up any existing data on your soon-to-be server, and that you have at least 5 GB of free space. To begin, insert the CD/USB flash drive into your server machine and boot it up. If your machine is not set to boot from a CD/USB, you may need to press F12 to enter a one-time boot menu and select CD/USB, or you could change the boot device priority in the BIOS settings (but make sure you change it back if you're going to be using external disks for storage, which is the likely scenario if you are using an old laptop for your server, like I am). Your system's start-up screen will tell you how to enter the BIOS set-up screen. After booting into Ubuntu, select the Install Ubuntu option and follow the onscreen instructions to set up a dual-boot or single-boot system, based on your preferences. Also, make sure you select Log in automatically when you are creating a user account, so that you have a single-button power-up sequence, which doesn't need any action from you during start-up.

The server set-up

Figure 1: Sharing options

At this point, I assume you have Ubuntu set up successfully and are booted into it. Now for the server set-up: plug in your external drives if you haven't already, click the Ubuntu Dash (the button with the Ubuntu logo) and search for Terminal. Ubuntu has built-in support for NTFS, so your drives should ideally work out-of-the-box. If you are configuring new drives, you can format them as ext4 for better Linux compatibility. Ubuntu supports HFS-formatted disks (the native format on Macintosh computers) as well, as long as journaling is disabled on them; but for the purposes of this article, let us assume your disks are formatted as NTFS or ext4.

Step 1: Mount-points: In the terminal, run the following commands as explained below:

sudo mkdir /path/to/mountpoint
sudo chown userName /path/to/mountpoint

Replace /path/to/mountpoint with a folder in which you would like to mount your hard disk. For example:

sudo mkdir /media/BackupDisk
sudo chown sumit /media/BackupDisk
sudo mkdir /media/MovieDisk
sudo chown sumit /media/MovieDisk

Enter your password when you are prompted for it. Repeat the commands for each hard disk you have connected to the system.

Step 2: Auto-mounting: Next, run the following command in the terminal:

sudo blkid

This command should result in an output that looks something like what follows:

/dev/sda1: TYPE="ntfs" UUID="A0F0582EF0580CC2"
/dev/sda2: UUID="8c2da865-13f4-47a2-9c92-2f31738469e8" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda3: TYPE="swap" UUID="5641913f-9bcc-4d8a-8bcb-ddfc3159e70f"
/dev/sda5: UUID="FAB008D6B0089AF1" TYPE="ntfs"

Basically, what we are trying to do here is to make sure your disks mount automatically every time you boot up your server. The fstab, or file systems table, is a configuration file in Linux systems that lists connected disks and partitions, and their initialisation/mounting information. This information is used by the mount tool to mount your disks/partitions. So we need to add entries for each hard disk to the /etc/fstab file. We could use GUI tools, but those would add the disks using their device file names (/dev/sdX). This leads to problems in some cases, when the system renames these device files, causing errors while mounting disks. That is where the UUID comes to the rescue. A UUID (Universally Unique Identifier) enables you to uniquely identify your disk, and it does not change with every reboot, like the device files do. A UUID is conceptually similar to the MAC address of a network card. In the terminal, run sudo gedit /etc/fstab to open a text editor window (like Notepad on Windows). Add an entry to the end of the file in the following format, for each of your connected disks' partitions:
UUID=Your-UUID /path/to/mountpoint fs-type defaults 0 0


Figure 2: Install Samba

You will need to copy the UUID from the list generated by the blkid command. Here, /path/to/mountpoint is the address to the directories you created in Step 1. For example:
UUID=A0F0582EF0580CC2 /media/BackupDisk ntfs defaults 0 0
UUID=FAB008D6B0089AF1 /media/MovieDisk ntfs defaults 0 0

Now save and close this file. You should be able to see all your disks in the left-hand sidebar of your file manager (Nautilus). However, these disks are not mounted yet. You can click on them individually to mount them at the locations you entered in your fstab file (under /media/ in our case). The next time you boot up your machine, this should happen automatically.
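If you would rather not copy UUIDs by hand, you can pull them out of blkid's output with a small pipeline. The sample line, device and mount point below are illustrative:

```shell
# Extract the UUID field from one line of blkid output and print
# a ready-made fstab entry (device and mount point illustrative).
line='/dev/sda1: TYPE="ntfs" UUID="A0F0582EF0580CC2"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "UUID=$uuid /media/BackupDisk ntfs defaults 0 0"
```

You can paste the printed line straight into /etc/fstab, adjusting the mount point and filesystem type for each disk.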

Figure 3: Add permissions

Sharing your drives on the network

Now that the basic set-up is out of the way, and your disks are usable by the server, you need to get down to some actual serving. You need to share the connected disks on the network.

Sharing with Windows/Mac

In the terminal, run gksudo nautilus; this starts up the Ubuntu file manager with root permissions. The difference between sudo (which we used earlier) and gksudo is that sudo is used to run command-line applications as root, while gksudo is used to launch GUI applications as root. It is pretty similar to the Run as administrator command in Windows. Right-click each drive in the left sidebar, click Properties and select Share. Select the Share this folder check box in the window that comes up (Figure 1). The first time you do this, Ubuntu should ask you to install the Windows file sharing service, Samba (Figure 2). Install whatever it asks you to, and then restart the session when prompted. Start Nautilus from the terminal as root again. Go back to the Share properties panel for a disk, give it a name, and select the Allow others to create and delete files in this folder option. Choose the Add the permissions automatically button on being prompted (Figure 3). Click Create share. Repeat these steps for all your drives. Finally, we need to set up a password for all the shares you have created. You can do this by running the following command in the terminal:
sudo smbpasswd -a userName

Replace userName with your Ubuntu username (which you created during installation), and then type and confirm a password for the share. The password can be anything, and need not be your Ubuntu account password. However, the username needs to be a valid Ubuntu user, otherwise this command will fail. Once you enter your share credentials, you should be able to see all your shared disks. You can also right-click a share and select Map network drive to assign a drive letter to it. This will also give it a place in Windows Explorer, allowing for quick and easy access.
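For reference, a share like the one created through Nautilus corresponds to a Samba share definition. A hand-written equivalent in /etc/samba/smb.conf would look roughly like this (Nautilus itself manages shares via net usershare, so treat this as an illustrative alternative; the share name, path and user are from the earlier examples):

```
# Illustrative smb.conf stanza equivalent to the Nautilus share.
[BackupDisk]
   path = /media/BackupDisk
   read only = no
   guest ok = no
   valid users = sumit
```

After editing smb.conf by hand, Samba needs to be restarted for the change to take effect.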

How To

Setting up back-ups

One of the most common and important uses for a NAS box is networked back-ups. With your newly set up NAS box, this is going to be extremely simple, provided you have Windows 7 Professional or a later version. To set things up, just open the Start menu on Windows, and type backup to get to the Windows Backup and Restore utility. Click Set Up Backup and then the Save on network button. Proceed to enter the path to your NAS box manually, or browse to discover the server. Choose your back-up settings and set a schedule for your back-ups. I would recommend you connect your computer over Ethernet while the back-up is in progress, to speed things up. If you do not have Windows Professional, you can use an open source tool called Create Synchronicity for creating back-up images of your system.

For the Mac only

But what if all the computers in your house run Mac OS, and you want to create a wireless Time Machine-based back-up solution for all of them? Then it's time for a little more terminal magic. First of all, you need to install Avahi, which implements Apple's Bonjour protocol, and Netatalk, an open source implementation of the AppleTalk protocol that you will be using to share your drives with your Macintosh computers. Run the following commands in the terminal to download and install these services on your Linux system:

sudo apt-get install avahi-daemon
sudo apt-get install netatalk

Next, edit the configuration files to enable support for the latest version of Time Machine running on Lion or Mountain Lion. In the terminal, run:

sudo gedit /etc/netatalk/afpd.conf

Add the following line at the end of the file, and then save and close the file:

- -tcp -noddp -uamlist uams_dhx2.so -nosavepassword

Next, edit the AppleVolumes file to share your drives using AFP (Apple Filing Protocol). Run the following command in the terminal:

sudo gedit /etc/netatalk/AppleVolumes.default

Comment out the ~/ Home Directory entry at the end of the file by adding a hash in front of it. Then add the following line below the commented one:

/path/to/mountedTimeMachine allow:userName cnidscheme:dbd options:usedots,upriv,tm

Here, /path/to/mountedTimeMachine is the address of the mount directory you created in Step 1, and userName is your Ubuntu username. Select the disk you wish to be used as the Time Machine back-up. Next, add an entry for each remaining disk below the Time Machine entry, to share those drives over AFP as well. The entry will be in the following format:

/path/to/mountedMovieDisk MediaCapsule allow:userName cnidscheme:dbd options:usedots,upriv

Make sure you have only one disk marked as the Time Machine back-up volume (specified by the tm in the options). After you have added entries for all your disks, save the file and restart Netatalk with sudo service netatalk restart.

On your Mac, you can open up Finder, where your Ubuntu server should be visible in the left panel. Click on it, and authenticate with your username and password. All your shared disks should be visible under the server, including your Time Machine disk. The Time Machine disk should also now show up among the available disks under Time Machine. Select the disk and start backing up; it's as simple as that! You can select the same disk as a back-up destination for multiple Macs. Again, I would strongly recommend that you do the first full back-up over Ethernet. Time Machine does incremental back-ups subsequently, which should not be as large as the first one; those could be done over Wi-Fi, if desired.

Your media and back-up server is up and running now, and you can disconnect your display from it. Now that it is set up, there is a whole world of other things you can do with it. For starters, you could connect and share your printer over the network, so all your computers can access it, and there's one less item to clutter your workspace. You could also set up a Plex server to share and serve data to devices like your phone and tablet. Just start playing around with and exploring the wealth of free and open source software available for Ubuntu, and you will be amazed at how handy your home server can turn out to be!

Links
[1] Ubuntu Live CD image:
[2] How to create an Ubuntu USB disk image: https://help.

By: Sumit Pandey

The author is an interaction designer and a maker of things, who loves playing with code and hacking away at technology in his free time.


Guest Column Exploring Software

Anil Seth

The Linux Desktop Through the Decades

The author reminisces about how things were, and how much better they are now... while offering some ideas about what is yet to come.
As another year ends and a new one begins, it does not seem so long ago that we had set up a demo of networked workstations for a financial company. The consultant shot us down in one minute. He said that if the analysts felt that their spreadsheets could be accessed by their colleagues, they would not create them!

Some years later, it took a year for me to get the first copy of Linux CDs ordered from the USA. A part of the delay was because it involved such a low cost that the purchase department forgot about it! Finally, we got Slackware with Linux kernel 1.2x or 1.3x. Subsequently, it became easier to get various distributions when GT started distributing them from Bangalore.

The next major change was when magazines started offering distros and patches on CDs. Our first effort on receiving the magazines was to make sure that the CDs were readable, and apply the patches. However, the computers were slow and had a limited amount of memory. To squeeze the most out of our desktops, we would customise the kernel and remove as much as we could. Sometimes, we needed to add some drivers by using the source code. Now, I can't remember how many years it has been since I compiled a kernel.

Network speeds improved, but magazines started distributing DVDs. We were updating desktops and servers using the Internet, but to upgrade the distribution, we still relied on the magazines, or services that would send us the DVDs. Tools like yum and apt were not that common. We relied on search engines to find pre-packaged applications for our preferred distribution. Installing them was also not always easy, as handling dependencies had to be done manually. Some enthusiastic individuals would make life easier for us by packaging and making available their preferred group of applications on their sites. If our preferred source was FreshRPMs, but we needed one package from Livna, it wasn't always a sure shot. If a dependency needed by both was packaged by both, it could result in one or more applications not working correctly. It seemed simple enough and very beneficial to us, but in retrospect, it is easy to understand why convincing friends and colleagues to switch to Linux was not very easy.

Now, with the widespread use of yum and apt-get, and the consolidation of various repositories, version or packaging conflicts are very rare. As networking speeds improved, we could upgrade our home computers within days of the official releases. We could explore beta releases as well. On the down side, the range of software in the various repositories has increased, so we install more packages, and the updates to various packages appear to have become more frequent. Given that our broadband wasn't broad enough, the introduction of delta packages by OpenSUSE and Fedora was a great boon. In fact, I regularly started using Arch Linux as well, when a user set up a delta repository for it. However, he did not need it after a while, and nothing replaced it. Now, I rarely use Arch Linux, though I may look at it again, as there now appears to be a delta repository for both i386 and x86_64. I had explored OpenSUSE Tumbleweed, but had given up on it as it did not have delta package support, besides a few other issues. At college, with a much higher bandwidth, the time required to download a full package is somewhat less than the time needed to re-construct the package using delta-rpm! It is no surprise that the need for delta repositories is not that great in network-rich countries. In a couple of years, I expect that the same will be true for our home PCs as well.

What may happen next?

If an application is needed but is not installed, Ubuntu often recommends a package that may be installed. This approach may become more widespread, and maybe even transparent. There is also the possibility that apps, as on mobile phones, may become the default way we use desktops as well. In this case, there is the possibility of one or more apps for every common website we may be accessing. The number of apps that will need to be managed may soon become ridiculously large, especially as some will be used rarely or just once.

Browser applications may give us an indication of where we are headed. Any time we use a browser, it tells us if an upgrade is available. This may be a useful step, in general. Why update packages that are not being used? Package managers can become smart enough to install or update packages and their dependencies, just in time. Or, quite possibly, we may all be using something like a Chromebook or Firefox OS, and the current systems may slowly vanish or become irrelevant. Whatever may happen, the next few years are bound to be exciting and full of surprises.

By: Anil Seth

The author is currently a visiting faculty member at IIT-Ropar, prior to which he was a professor at Padre Conceicao College of Engineering (PCCE) in Goa. He has managed IT and imaging solutions for Phil Corporation (Goa), and worked for Tata Burroughs/TIL.

OPEN SOURCE FOR YOU | January 2013 | 55


Set Up Your Own Network Storage Using FreeNAS

This brief overview of FreeNAS, an advanced FreeBSD-based operating system for network-attached storage servers, is followed by a few simple steps to help you install it in a virtual machine.

A NAS is usually a dedicated hardware device that connects to a LAN, where file-level computer data can be stored and retrieved across that network. It uses file-sharing network protocols such as NFS and CIFS (Common Internet File System) or SMB (Server Message Block) to manage file operations. It is often without peripherals (no keyboard and display) and supports hard drives, virtual drives or tape drives. It can be connected, controlled and configured over the network, using popular browsers or consoles. NAS devices work perfectly well with stripped-down operating systems designed specifically for commodity PC hardware.

Network Attached Storage vs Storage Area Network

NAS is different from SAN (Storage Area Network). See Table 1 on the next page that lists some of the differences.

Use cases

To provide centralised storage for client computers.
To enable easy, simple and cost-effective load balancing and fault tolerance.
NAS is most effective when a large amount of data needs to be managed.



Figure 1: A browser-based interface for NAS

Table 1: NAS vs SAN

Client view: NAS shares can be mapped as network drives; a SAN volume can be used as a disk in disk and volume management utilities.
Storage model: NAS provides storage and a file system; SAN provides block-based storage.
Protocols: NAS uses NFS, CIFS and SMB; SAN uses SCSI, Fibre Channel, iSCSI, ATA over Ethernet and HyperSCSI.
Typical users: NAS suits small and medium businesses; SAN suits large organisations.
Storage capacity: NAS scales up to a few TB; SAN scales to many TB.
Topology: multiple NAS devices; a single SAN with multiple high-performance disk arrays.
Administration: specialised knowledge and training is not required to configure and maintain NAS, but is required for SANs.

Figure 2: A console-based interface for NAS

Figure 3: USB destination media

Figure 4: FreeNAS installation warning

SAN and NAS are not mutually exclusive. A hybrid of SAN and NAS can offer file- and block-level protocols from the same system.

Open source implementations

Open source NAS-oriented distributions of Linux and FreeBSD are available, such as FreeNAS, NAS4Free, CryptoNAS, NASLite, Gluster, OpenFiler and OpenMediaVault, which can be installed on commodity hardware, and connected and configured using a Web browser. Open source NAS implementations can run from virtual machines as well as USB disks; we will look at both these cases in the coming section on FreeNAS installation.

FreeNAS: An overview

FreeNAS is an advanced FreeBSD-based operating system for network-attached storage servers, with an easy-to-use Web UI. Distributed as an ISO image, it is released under a BSD licence, which imposes nominal limitations on the redistribution of covered software. The FreeNAS project was initially founded by Olivier Cochard-Labbe in 2005, written in PHP and based on m0n0wall, an embedded firewall distribution that is itself based on FreeBSD. FreeNAS 8.3.0 is based on FreeBSD 8.3, and so supports the hardware in the FreeBSD 8.3 Hardware Compatibility List. FreeNAS is available for both 32-bit (addresses up to 4 GB of RAM) and 64-bit architectures (for speed and performance).


Figure 5: FreeNAS after reboot

FreeNAS supports protocols such as NFS, SSH, FTP, TFTP, BitTorrent, iTunes and CIFS (via Samba). It provides plug-ins for SlimServer and the XBox Media Stream Protocol, and in the legacy version, Unison support is available. An iSCSI target feature is available to create virtual disks. The ZFS, UFS, ext2 and ext3 file systems are supported, along with read and write support for FAT32 and NTFS. It has dynamic DNS clients for services such as DynDNS, ZoneEdit and No-IP. The hard drives supported are PATA, SATA, SCSI, iSCSI, USB and FireWire; booting is possible from HDDs, CD-ROMs, floppy disks or USB flash drives. All wired and wireless network cards and hardware RAID cards supported by FreeBSD work well with FreeNAS. Other than that, S.M.A.R.T., VLANs, link fail-over interfaces, link aggregation and UPS systems are also supported.
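On the client side, a Linux machine could mount a CIFS share exported by FreeNAS with an /etc/fstab entry along these lines (the host name, share name, mount point and credentials file here are hypothetical):

```
# FreeNAS CIFS share, mounted at boot on a Linux client
//freenas.local/media  /mnt/nas  cifs  credentials=/etc/nas-credentials,uid=1000  0  0
```

The credentials file keeps the share username and password out of the world-readable fstab.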

Figure 6: FreeNAS system status (reporting)

Installation in a VM

I assume you have created (using VMware Workstation, Player or other virtualisation software) a virtual machine with 512 MB of RAM, a 20 GB virtual HDD and one processor (I created this on a physical machine with an Intel Pentium Dual CPU T2370 @ 1.73 GHz, 2 GB of RAM, a 160 GB HDD and a 32-bit OS). Attach the FreeNAS ISO downloaded from SourceForge and start the VM. From the FreeNAS 8.3.0-Release Console Setup, select Install/Upgrade. Next, select the drive where you want to install FreeNAS. If a USB drive is attached, it will be available once it is software-disconnected from the physical host (see Figure 3). After this, a warning will be displayed (Figure 4) before proceeding with the installation. A pre-installation summary is displayed while installation is in progress, which should be followed by a message informing you that FreeNAS installation succeeded, and asking you to reboot. Reboot the virtual machine. Figure 5 shows the boot messages after the reboot; note the IP address, and use it in the browser's address bar to access the Web interface. A CRITICAL alert is displayed when you first access the FreeNAS UI; you have to change the password for the admin user (at first, no password is required to log in). Click Account -> Change Password. Groups and user management can also be done from Account. The Storage tab is an important part of FreeNAS configuration. Figure 6 shows the system status (the Reporting tab).

New features

FreeNAS 8.3.0 provides snapshot support, de-duplication, a removable log device, and support for Areca 13xx SAS controllers. It also has graphs with improved navigation to different points in time; the capability to redirect from http:// to https:// when accessing the administrative GUI with SSL enabled; improved replication tasks and encrypted FTP connections; the availability of a memory device while a GUI upgrade is performed; Active Directory configurations with verbose logging; the ability to modify the serial port speed; non-root NFS mounts; and the ability to specify an address for pools (NFS, ZFS) with the auto-expand property configured.

By: Mitesh Soni

The author is a Technical Lead at iGATE. He works in Cloud Services (Research & Innovation) and loves to write about new technologies.


A Peek Into Tripwire, The Open Source Security Tool

With the growing demand for cyber security in the open source world, it has become important to know about all the methods and tools available. Many available tools can be used manually or scripted to automate security-related tasks. With this article, we start a miniseries about such tools, covering small, useful utilities for open source systems that help systems and network administrators strengthen their IT infrastructure.

As we all know, one never achieves 100 per cent cyber security; rather, it is a continuous process of improvement. In past issues, I covered cyber attacks in detail, which laid the ground for understanding the fundamentals of security. We learnt that there is no single utility, commercial or open source, which can fix all vulnerabilities and secure IT infrastructure from attacks. We also understood that implementing an intrusion detection system is imperative, besides the rest that's already in place. For each vulnerability or attack vector, there are multiple utilities available to help network administrators implement, monitor and audit cyber security. Many famous tools, such as Nmap, Chkrootkit, Snort, etc, have already been covered. This series introduces you to tools that are not commonly known. The first in the series is Tripwire.


In the continuously changing IT infrastructure scenario, it is important for a systems administrator to know if and when a file structure is changed. Such a change can stem from a regular code release, or can be a malicious file update caused by viruses, spyware or rootkits. These call for detailed file-update monitoring and logging. Tripwire is an open source tool available for all Linux distros, for just this purpose. It detects and reports any unauthorised changes to files and directories. Now let's understand how it works, and how to install and use it effectively.

At a high level, Tripwire works in a sample-and-compare way. When Tripwire is installed and run for the first time, it scans the entire file system to create a database containing information about the file structure, as well as the date and time stamp and other vital parameters for each file. System binaries as well as user application files are scanned, and their information stored in the database. At this point, Tripwire enters monitoring mode, and watches every file access. Each update is compared with the reference database, and intelligent actions or alerts are generated from it. Tripwire monitors and detects file additions, deletions and updates, and also watches for file attribute changes. If the changes are legitimate, the administrator can update the Tripwire database to accept them.

The contents of this baseline database are themselves supposed to be secure. Tripwire asks for a site key, a sort of password that must be entered by the administrator while installing the tool for the first time. Though this is an optional step, I definitely recommend that you set one. This site key encrypts the database, and decrypts it on the fly while performing a comparison with file-system scan results. During the installation phase, the site key resides on the machine itself in plain-text form, but after the installation is complete, it is removed, prompting the installer or administrator to remember it, just like a password.
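Conceptually, this init-and-check cycle resembles the following toy sketch built on plain sha256sum (the paths are hypothetical; Tripwire itself records many more attributes and encrypts its database):

```shell
# "Init" phase: record a baseline hash for a sample config file.
mkdir -p /tmp/twdemo
echo "v1" > /tmp/twdemo/app.conf
sha256sum /tmp/twdemo/app.conf > /tmp/twdemo/baseline.db

# Simulate an unauthorised modification.
echo "v2" > /tmp/twdemo/app.conf

# "Check" phase: compare the file system against the baseline.
sha256sum -c /tmp/twdemo/baseline.db || echo "integrity check flagged a change"
```

Here sha256sum -c reports FAILED for the modified file, which is essentially what Tripwire does at scale, with attribute tracking and policy control on top.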


Figure 1: The installation prompt

Figure 2: Tripwire in interactive mode

Now, let's install Tripwire. Being a SourceForge project, it can be downloaded from sourceforge.net/projects/tripwire. The easiest way to install it on the Ubuntu or BackTrack distros is to run apt-get install tripwire. Figure 1 shows the first screen of the installation, mentioning what Tripwire does, and the security required around the site key.

Running Tripwire

Once the installation is complete, it's important to understand the files created by the tool, and the command set. While the man pages explain everything in detail, the table below lists four commands that every administrator must know, plus three Tripwire manual pages, which explain each component in detail, with various command-line options. While configuring Tripwire, it is critical to understand its policies. After the default installation, administrators will want to tune the tool based on their network's scenario. Each object in the policy file is associated with a property mask, which describes what changes to the file or directory Tripwire should monitor, and which ones can safely be ignored. This fine-tuning helps the systems administrator closely control how the tool checks the integrity of a file system. Policy configuration is usually a one-time task, and needs to be updated only if the infrastructure or the configuration changes drastically. A policy sets rules in the format object_name -> property_mask. Here, object_name is a fully qualified path to a file or directory, and property_mask specifies what properties to examine or skip. For instance, if you want the tool to scan the entire /var/log folder and its files, except the mail.log file, the policy would be set as follows:
/var/log -> $(ReadOnly); !/var/log/mail.log;
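Other predefined property-mask variables can be mixed into the same policy file; for instance, a sketch of two more rules (the paths are hypothetical) might read:

```
/usr/local/bin   -> $(ReadOnly) ;  # binaries should never change
/var/log/syslog  -> $(Growing) ;   # log files may grow, but never shrink
```

$(ReadOnly) and $(Growing) are stock Tripwire policy variables; $(Growing) suppresses alerts for files that legitimately get appended to.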

Tripwire command usage:

twadmin       Creates and maintains configuration and key files.
twprint       Prints Tripwire database and report files in plain text.
tripwire      Creates and maintains the baseline database; checks file-system integrity against that database.
siggen        Utility that displays hash values for files; required for scripted checks.
man twfiles   Explains the various files of Tripwire and their default locations.
man twconfig  Explains the configuration files that enhance usage.
man twpolicy  Explains the policies that Tripwire should follow while scanning.

As we saw, Tripwire is a completely command-line-operated, host-based intrusion detection tool. Administrators can run it from a cron job, and grep and parse its output to generate alerts. In a mid-sized IT infrastructure, with proper user permissions, Tripwire can scan all servers and save logs to a centralised database for future attack analysis, or for compliance reasons. Large organisations can make this tool part of their code-release management cycle. This can help avoid situations where code inadvertently overwrites a system or critical application file, thus improving post-installation stability. The tool can be used to ensure the integrity of plain-text databases too, which are usually found on older Linux systems.

By: Prashant Phatak
The author has over 18 years of experience in the field of IT hardware, networking, Web technologies and IT security. Prashant is MCSE and MCDBA certified and is also an F5 load balancer expert. In the IT security world, he is an ethical hacker and Net-forensic specialist. Prashant runs his own firm, Valency Networks, in India, providing consultancy in IT security design, security audits, infrastructure technology and business process management.

Please refer to the man pages of twpolicy to understand the settings. Figure 2 shows a screen where Tripwire is run in interactive mode; it also shows the locations of the policy and configuration files, which can be tweaked as per your needs.
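The cron-driven check mentioned above could be wired up with a crontab entry like this (the schedule and mail address are hypothetical):

```
# m  h  dom mon dow  command
30 2  *   *   *   /usr/sbin/tripwire --check 2>&1 | mail -s "Nightly Tripwire report" admin@example.com
```

This runs tripwire --check every night at 02:30 and mails the full report to the administrator for review.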



Axigen: An Advanced Mail Server Thats Simple to Set Up

With Axigen, setting up a mail server becomes a pleasant task. And its eye-catching looks and power-packed features will stun you.

Here, I am running two RHEL6 machines on VMware with the hostnames and client., and IP addresses and, respectively. Let's start the process by downloading the appropriate Axigen package for your Linux machine from Axigen's home site. To install it on RHEL6, I just downloaded the file. Now double-click this file, and the interactive set-up will be started. The installation process is not daunting; you just need to follow the instructions on your screen. After it's installed, the next step is to configure it. Do make sure that services like httpd and postfix are not running on your system, otherwise Axigen's settings will conflict with them. Now, in your terminal, run /opt/axigen/bin/axigen-cfgwizard to launch the Axigen configuration wizard. In the first screen (Figure 1), you'll be asked to set your admin account password. Do so, and proceed to the next screen. In the second screen (Figure 2), you need to enter the domain name and postmaster account password. Here, I have set up my domain as After this screen,

you'll get additional screens for further configuration. You can choose OK to stick with the default configuration, or you can change any setting based on your requirements. Once you've made all the necessary settings, start the Axigen service by running service axigen start. Now you can access the Axigen admin panel by pointing your browser to the server's address (to access your mail account, visit the webmail address). Follow the same steps to install and configure Axigen on your second machine, but there, set the second machine's domain name instead during the Axigen configuration. Now, let's attend to the DNS server, which is required to enable communication between our machines. You could do the same by using the /etc/hosts file, but I prefer having a fully functional DNS server, so I am going to configure my first machine as a DNS server. It won't take much time, and the steps are as follows. Install the bind package using the yum -y install bind* command. Next, edit /etc/named.conf and, under the Options field, set listen-on port 53 { <your server IP>; }; and allow-query { any; };. Then, at the end of the file, add the following lines, and


Figure 1: Setting up the Axigen admin password

Figure 2: Axigen domain configuration

save the file:

zone IN {
        type master;
        file "forward";
        allow-update { none; };
};

Next, run:

cd /var/named; cp named.localhost forward; chgrp named forward

after which edit the forward file, add the following lines and save it:

$TTL 1D
@       IN SOA  (
                0       ; serial
                1D      ; refresh
                1H      ; retry
                1W      ; expire
                3H )    ; minimum
@       IN NS   server
server  IN A
client  IN A

Now start the service by running the service named start command. Run the setup command on both machines, and set the first machine as their DNS server. You'll have to bring the network interface up and down to apply the changed settings. You may further need to adjust your firewall settings on both machines, or you can turn the firewall off using the service iptables stop command. After this, you should be able to ping between the two machines. The DNS server is now configured and running properly; though it's just a minimal configuration, it'll still do the job for us.

Now, open the Axigen admin panel in your browser. Here, enter admin as the username and give the password you set during the Axigen configuration process. Select Services -> DNR -> Add Nameserver and give the address of your DNS server. Save your settings, then select Domains & Accounts -> Manage Accounts -> Add Account and create an account under the domain you configured. Set the DNS server for Axigen on the second machine with the same steps, and create an account on that machine under its domain. That's all about the configuration. At this point, my overall configuration looks like this:

                      Machine 1      Machine 2
Hostname                             client.
IP
Axigen domain
Mail account name     vinayak        viny
DNS server address

It's time to log in to your mail accounts, and get a first look at Axigen. Point your browser to the webmail address, and log in with the username and password for the mail account you just created. I logged in to my mail account on the first machine and sent a mail to the viny account on the other machine; when I logged in to that account, the mail had arrived. So what I have right now is a fully functional mail server, up and running within a few minutes. Axigen is a commercial mail server, and here we've used a 30-day trial version of it. Many of us will find it pointless paying for a mail server when a number of free mail server packages like SquirrelMail are easily available, but it's always good to evaluate all options before going for one of them. So I'll leave the job of evaluating and exploring Axigen to you, as no one knows your requirements better than you do.
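Incidentally, if you had skipped BIND and taken the /etc/hosts shortcut mentioned earlier, the entries on each machine would look something like this (the hostnames and addresses here are illustrative, not this article's values):

```
# /etc/hosts on both machines
192.168.1.10   server.example.com   server
192.168.1.20   client.example.com   client
```

This is enough for the two hosts to resolve each other, but unlike a real DNS server it must be maintained by hand on every machine.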
The author is a Red Hat Certified Engineer and open source enthusiast, currently working for TCS in Kolkata.


For U & Me

A Mathematical Journey Into The Shell

This article is the first part of a mathematical journey in the open source world and takes readers through the basic mathematics in a shell.

Mathematics is part of every moment of our lives, from measuring time, to shopping, to advanced engineering calculations and computations. We learn maths throughout our childhood by counting and, with the age of computers, we have been taught how to use them for mathematical purposes. So let's explore the mathematical capabilities of various open source software. Today, we start with the most preliminary one: the shell.

Shell command expr

This supports the basic operations: addition, subtraction, multiplication (\*), quotient (/) and remainder (%), with C-like precedence and associativity rules. Some usage examples follow:

$ expr 34 - 67
-33
$ expr 23 \* 27
621
$ expr 43 / 6
7
$ expr 43 % 6
1
$ expr 2 + 2 \* 3 - 5 + 21 / 4 \* 6
33

Note the spaces between expr, the numbers and the operators: expr is space-sensitive. The power of this simple command is its precision-less-ness. Try out the following to really understand what I mean:

$ expr 1024 \* 1024 \* 1024 \* 1024 \* 1024 \* 1024 \* 1024 \* 1024 \* 1024 \* 1024

You still don't get an overflow, and the result is 1267650600228229401496703205376, i.e., 2^100. Also, you may use variables. For example:

$ x=5; y=6; z=`expr $x + $y`
$ echo $z
11

However, expr has limitations: first, there is no exponentiation operator; moreover, you can't change precedence by using brackets. And that's where you start looking at the other options.

Shell's arithmetic expansion

With this, you get all the mathematical C operators, plus exponentiation using **. You have bracketing, and variable usage even without the cryptic $. It is achieved by putting the complete expression to be evaluated between $((...)), which is the arithmetic expander. Here's a set of examples:

$ echo $(((2+3)*4-2**4/5%6))

17
$ x=y=8; echo $((3 << 4 | x | 2 << y))
568

There are some additional interesting ways to assign variables as well, using let or declare -i:

$ let x=y=8 'z=3 << 4 | x | 2 << y' w=z/3
$ echo $w
189

$ declare -i x=y=8 'z=3 << 4 | x | 2 << y' w=z/3
$ echo $w
189

Note that the $((...)) has been replaced by single quoting in assigning the value to z. In fact, in most cases (see the assignment of w) it can actually be assigned directly, as in C. The single quote for z, though, is required to prevent the shell from specially interpreting constructs like <<, >>, |, *, etc, and to keep the assignment from being space-sensitive. Otherwise, check out this example:

$ declare -i x=y=8 z=++x+y
$ echo $z
17

In fact, the variables can be declared as integers once, and then be operated on any number of times, as follows:

$ declare -i x y z
$ x=y=8
$ z=++x+y
$ echo $z
17

And, finally, here are two more illustrative examples:

$ echo $((2 ** 100))
0
$ echo $((2.6 + 3))
bash: 2.6 + 3: syntax error: invalid arithmetic operator (error token is ".6 + 3")

Oops! How come 2^100 is zero, and why are we not able to do real-number operations? Shell arithmetic is limited by C's long integer math, typically up to 64-bit computations on today's computers, and by not having support for non-integer math. That's when we would need to look for something more powerful: the bench calculator, which we can explore later!

By: Anil Kumar Pugalia

The author is a hobbyist in open source hardware and software, with a passion for math. His exploration with math in every aspect of life dates back to the 1990s. A gold medalist from the Indian Institute of Science, math and knowledge-sharing are two of his many passions. Apart from that, he experiments with Linux and embedded systems to share his learning through his weekend workshops. Learn more about him and his experiments at He can be reached at

| JANUARY 2013 | 65

For U & Me

Obama's Victory Was Backed by Open Source Technology!

Obama's tech team employed open source technology for services and consulting, not only to save funds but to create a more efficient infrastructure.
When US president Barack Obama and his tech-savvy team started using Drupal and Linux to run websites back in 2009, concerns were raised all over the world. Those who favoured proprietary systems were of the opinion that resorting to open source technology could turn out to be one of the biggest mistakes of the Obama administration. Since then, the White House website has been running a typical LAMP (Linux, Apache, MySQL and PHP) stack to support Drupal. The credit for bringing in the new Linux-based infrastructure goes to GDIT (General Dynamics Information Technology), an IT contractor for the US government.

This was not the only instance of Obama banking on open source technology. One would find it hard to believe, but open source technology played a crucial role in his win. The recently held presidential elections in the US were not just a battle of ideologies but also a duel between the IT strategies and tools of the two parties. The Obama IT team's use of open source technology proved to be much more effective than Romney's expensive IT infrastructure. Obama's tech team employed open source technology for services and consulting, not only to save funds but to create a more efficient infrastructure. According to some online reports, Obama's internal IT backup cost him $9.3 million, while his competitor Romney spent around $23.6 million on mostly proprietary technology services. And the entire world has witnessed how Obama's architecture outperformed Romney's costly IT campaign.

Obama's tech team mostly depended on Linux; to be precise, Ubuntu was deployed as the chief operating system on the servers. "We used the right technology in the right place," said Scott VanDenPlas, lead DevOps for Obama for America. The team developed around 200 applications in Python, Ruby, PHP, Java and Node.js.

The Obama administration, in its first tenure, issued guidelines for using open source technology in critical sectors such as defence as well. In a DoD (Department of Defense) memo, "Clarifying Guidance Regarding Open Source Software", the CIO declared, "To effectively achieve its missions, the Department of Defense must develop and update its software-based capabilities faster than ever, to anticipate new threats and respond to continuously changing requirements. The use of Open Source Software (OSS) can provide advantages in this regard."

It's not just Obama and his tech team that are recognising the power of open source technology. Open source is spreading its wings far and wide. Canonical's Ubuntu Linux distribution, which is very popular in the desktop environment, is slowly gaining ground in Web server deployment as well. If the data available online is to be believed, seven per cent of Web servers globally run Ubuntu, compared to 5.5 per cent last year. UNIX is used by 64.2 per cent of all websites. Seven per cent might seem low, but when compared to the 3.4 per cent that Red Hat holds, the 1.2 per cent of Red Hat's community-maintained Fedora, and only 0.8 per cent for SUSE, the figure seems quite promising, and shows that open source technology is being actively adopted across the globe.

By: OSFY Bureau



Sandya Mannarswamy


This month's column continues the discussion on software bloat, and also covers software abstraction and the abstraction penalty.

Welcome to a great new year! Last month, we featured a bunch of programming questions. Since I received correct solutions to only a few of them, I would like to keep them open for this month as well. We will discuss the solutions in next month's column. This month, let's continue our discussion on software bloat by looking at how software abstraction, a cornerstone of modern programming paradigms, inadvertently contributes to software inefficiency.

Software abstraction

Abstraction is the holy grail of programming in high-level languages. Abstraction allows programmers to focus their attention on what needs to be done, instead of how it needs to be done. Programmers can employ abstractions without concern about how they are implemented. Consider the simple example of a library sort routine. The programmers need not be concerned with how the sort routine is implemented; all they need to know is what arguments need to be supplied to it, and how it returns its results. That allows them to use the sort routine with minimal effort. Similarly, consider basic data types such as the integer or float, supported by a high-level programming language. Programmers can use these data types, and the operations supported on them, without having to be concerned with how those operations are implemented, or how the integer data type is internally represented inside the machine. Abstraction is useful because it allows programmers to think about and spend their time on solving specific problems, while making use of the abstractions provided by the high-level programming language. It improves programmer productivity and facilitates ease of programming.

Many high-level programming languages support abstract data types such as sets, lists or maps, in addition to built-in types such as the integer, character or float. These abstract data types are either directly supported by the language, as in functional languages like LISP, or supported through the standard library, as in the case of C++/Java. For instance, the C++ STL provides multiple abstract data types, such as set, list, map, queue or stack. Similarly, Java provides the Collections framework, which includes set, list and map.

An Abstract Data Type (or ADT, as it is popularly known) provides an interface for a certain set of operations on particular kinds of data, shielding users from having to know how the data is represented and how the operations are implemented. This enables an assembly-line kind of approach, wherein these ADTs can be assembled and used by the programmer to build complex software. For instance, programmers who use the C++ STL container queue need not know how the queue is implemented. They only need to know what operations are supported by queue, so that they can use it in their code.

In general terms, what users can do with ADTs can be classified into two kinds of operations: queries and updates. Queries are operations that compute results based on the data present in the ADT; they typically do not modify the ADT. Updates, on the other hand, are operations that modify the ADT. The following table gives some of the operations supported on the list ADT in C++ STL:

reference front()
const_reference front() const
reference back()
const_reference back() const
void push_front(const T&)
void push_back(const T&)
void pop_front()
void pop_back()
iterator insert(iterator pos, const T& x)
void insert(iterator pos, size_type n, const T& x)
template <class InputIterator> void insert(iterator pos, InputIterator f, InputIterator l)
iterator erase(iterator pos)
iterator erase(iterator first, iterator last)
void clear()
void resize(n, t = T())
void swap(list&)
void splice(iterator pos, list& L)
void splice(iterator pos, list& L, iterator i)
void splice(iterator pos, list& L, iterator f, iterator l)
size_type size() const
size_type max_size() const
bool empty() const
Among these, operations such as size, empty, max_size, front, back, etc, are query operations, while insert, splice, clear, merge, etc, are update operations. Consider the size operation. If it were implemented by traversing the list and counting the number of elements present when the operation was invoked, then size has complexity linear in the size of the list, i.e., if the list has n elements in it, size will take O(n) time. On the other hand, if the number of elements is maintained internally by the list ADT, and the various update operations that change the number of elements keep this count up to date, then the size operation will take O(1) time. So, in order to know whether size is a costly operation or not, programmers need to know the internal implementation details of the size operation. If they do not know this, they can end up using size in a hot loop, where they will incur the penalty of O(n) for each loop iteration. In this case, there is a cost trade-off between abstraction and knowing the internal implementation of the ADT. This is known as the abstraction penalty.

The abstraction penalty creeps in without the programmer's knowledge. For instance, programmers may decide to re-factor their code so that a piece of functionality moves into a separate function, called from multiple places, instead of having the code duplicated in many places. However, they may find that the compiler does not end up in-lining the calls, leading to an adverse performance impact. Grouping variables into structures or objects is another common cause of the abstraction penalty. For example, consider that you have two local variables, int x and int y, which represent the x and y co-ordinates of a point. Now you decide to create a new struct Point, which contains x and y as members. It is possible that, with certain compilers and at certain optimisation levels, the members x and y will not be placed in registers. This can result in an adverse performance impact. Similarly, placing x and y in a struct may prevent them from being passed through registers when a function call is made, again resulting in an abstraction penalty.

While, in theory, compilers and other tools are supposed to eliminate the abstraction penalty so that programmers can write abstract code without any performance penalties, in reality this is not true. Programmers end up paying a penalty for software abstraction. If you are interested in knowing how much abstraction penalty is associated with your standard STL implementation and compiler, you can run the abstraction penalty benchmark written by Alexander Stepanov (of C++ STL fame), which is available at: http://www.stepanovpapers.com/AbstractionPenaltyBenchmark.cpp. It is a very simple benchmark, which adds 2000 doubles in an array 25,000 times, but performs it in 11 different ways. While the abstraction penalty differs across STL implementations and compilers, the point to note is that there is a real penalty associated with abstraction, and programmers need to be aware of it.

By now, you may be wondering why we are spending time on the abstraction penalty when the discussion ought to be about software bloat. Well, bloat and the abstraction penalty arise from the same root cause: making software abstract. Abstraction is useful for improving programmer productivity, but can cause bloat and container inefficiencies, which can result in poor performance. In the next column, we will discuss what you as a programmer can do to reduce the abstraction penalty your software may incur.

My must-read book for this month

This month's must-read book suggestion comes from our reader Kavya. She suggests Web 2.0 Architectures: What Entrepreneurs and Information Architects Need to Know by James Governor, Dion Hinchcliffe and Duane Nickull. According to Kavya, "This is a great book if you want to understand how Web 2.0 evolved, what the various architectural patterns associated with different Web 2.0 solutions are, and how you can apply these patterns to the further evolution of the Web." Thank you, Kavya, for the recommendation. If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book's name and a short write-up on why you think it is useful, so that I can mention it in the column. This would help many readers who want to improve their coding skills. If you have any favourite programming puzzles that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming, and here's wishing you the very best!

By: Sandya Mannarswamy

The author is an expert in systems software and is currently working as a researcher in IBM India Research Labs. Her interests include compilers, multi-core technologies and software development tools. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group Computer Science Interview Training India at http://www.

Localisation: The Status of Education and Training


This article surveys the education and training opportunities in language services available in India and across the world.

Common Sense Advisory is an independent market research company serving the worldwide translation, localisation, interpreting, globalisation and internationalisation marketplace. Its estimate for outsourced language services in 2011 was US$ 29.885 billion. Europe (49.38 per cent) was the largest region, followed by North America (34.85 per cent) and Asia (12.88 per cent). There were around 26,104 language service providers (LSPs) with two or more employees. More than 95 per cent of LSPs have small-scale operations employing just a few people. The localisation market is projected to reach US$ 33.523 billion by the end of 2012, which means an annual growth rate of 12.17 per cent. Unfortunately, there are very few details about the Indian market in the public domain, though it is estimated at about US$ 0.5 billion in 2010, based on the IT-enabled services market share.

People graduating with a diploma or degree in translation studies or linguistics courses join the LSPs. Some people who know more than one language work as freelancers. However, as localisation is a specialised area, there is a need for training on the theory and the tools. In the last decade, pre-conference tutorials as part of the annual Localisation World Conference, custom training from experienced professionals, and tool vendors offered the main training opportunities. A few universities and institutions have started offering offline and online courses and certifications in recent years. More details on such offerings are given below.


In the USA, a Localisation Certification Program was launched in 2006 by California State University in partnership with professional associations. A new programme, Localisation Project Management Certification for global website projects, is also being offered.


In 2010, the Institute of Localisation Professionals, headquartered in Dublin, Ireland, launched a Certified Localisation Professional (CLP) programme comprising online and offline modules, and has trained a few hundred people. The programme is currently under revision. The Localisation Research Centre, part of the University of Limerick (Ireland), has been offering a one-year M.Sc in Multilingual Computing and Localisation since 2011, in the distance education mode.


Several universities offer diploma and PG diploma courses in translation studies and linguistics. Students passing these courses found employment with government organisations and the media, and served in translating administrative texts or literature/news. With the onset of liberalisation during the 1990s, the need for language professionals has grown rapidly, with India becoming the hotbed of IT outsourcing. During the last decade, the availability of computer operating systems and applications in Indian languages, and the rapid growth in the use of mobile phones and the Internet, is further driving the market for language services. Most of you would have noticed the availability of Indian language user interfaces in ATMs, increased language options in Interactive Voice Response systems and the customer support centres of satellite television companies, besides the availability of English movies and television channels in Indian languages.

Multilingual magazine's December 2011 issue focused on opportunities in India. One reason mentioned for the small market for localisation is that the population that can afford to buy goods and services has a good knowledge of English, and would prefer to read material in English. However, the active involvement of the government through its Indian language initiative (Technology Development in Indian Languages); the focus of Google, Microsoft and other companies in their quest to reach a majority of the population; as well as the increased focus of the media and entertainment industries to serve Indians in their native languages, is ensuring that growth prospects in this domain are very attractive. Recent news reports mention the availability of enterprise applications in Hindi and other languages targeting the SME market in India.

The Indian government has started several initiatives to promote translation. The Institute of Translation Studies was set up in 2002 to promote offline and online education.
The National Translation Mission Project was launched in 2008, with a target of translating knowledge texts into various Indian languages, and developing resources for improving translation in the country. It has published bilingual basic dictionaries of the most frequent words and phrases in several Indian languages. Punjab University started a Centre for Language Innovation in May 2012, with the objective of education and research in languages, to promote skills development in the related areas.

All the above initiatives were focused on translation. It is heartening to note that the Department of Electronics and Information Technology Working Group has proposed to set up a National Localisation Research & Resource Centre (NLRRC) during the 12th Five-Year Plan (2012-17). The objective is to spur localisation activities in India, with a focus on e-content in Indian languages, localisation of IT and non-IT products, localisation tools and platforms, and standards development, as well as to promote entrepreneurship, incubation and Ph.D programmes. Another important development is the launch of a professional association, the Indian Translators Association, in 2006, which is working with the government and others to organise the industry. The first international conference on the Role of Translation in Nation Building, Nationalism and Supra Nationalism was held in New Delhi in 2010. The conference has become an annual event, and the association is also taking steps to introduce more professionalism into the industry.

In this brief article, we have reviewed the status of education and training in the broader area of language services. As can be seen, there is a lot of growth potential in the language services market, and enterprising organisations and individuals can tap into this opportunity.

References
[1] The Language Services Market 2012, Common Sense Advisory (extract at Portals/0/downloads/120531_QT_Top_100_LSPs.pdf)
[2] Localisation World Conference, http://www.localizationworld.com/
[3] Localisation Certification & Localisation Project Management Certification programmes
[4] The Institute of Localisation Professionals, http://www.
[5] Multilingual magazine, December 2011 special issue on India
[6] Report of the Working Group on Information Technology Sector, Twelfth Five Year Plan (2012-17), wgrep_dit.pdf
[7] Indian Translators Association website

By: Arjuna Rao Chavala

The author is chief consultant of Arc Alternatives, which works to catalyse the transformation of IT/engineering enterprises with a focus on IT, programme/engineering management and open source. He serves as the WG Chair for the IEEE-SA project P1908.1, Virtual keyboard standard for Indic languages. He co-founded and served as the first president of Wikimedia India. He can be reached through his website or by email to

Our goal is to advance Linux as a technology and collaborate with the community for mutual benefits
SUSE is banking big on the Indian market. It is getting aggressive about increasing its market share in the enterprise space. The new team of leaders, courtesy its acquisition by the Attachmate Group, has some different plans. The team members want to grow the business but still want to keep SUSE's roots in the open source community intact. Diksha P Gupta from Open Source For You spoke to Nils Brauckmann, president and general manager of SUSE, and Venkatesh Swaminathan, country manager, The Attachmate Group (India). Excerpts:


Nils, what brings you to India?

Nils: First of all, India is a very interesting market for us. We have been investing in this market for many years with our team, business partners and business partner networks. From time to time, we try to stay in touch with the market to personally get to know customers better, to talk to prospects and to promote SUSE as a company, as well as its solutions and services.

SUSE has now been acquired by the Attachmate Group. What do you find interesting in India from Attachmate's perspective?
Nils: I worked for the Attachmate Group earlier, and now I am heading SUSE. In my previous role, I was vice president for Eastern Europe and Africa, so my engagements with Asia were very limited. But now my focus is the Asian marketplace, where India is obviously a major player. What is special about this market is that we have a couple of very fast-growing economies whose enterprise level and customer landscape are changing very quickly. These markets are, generally speaking, interesting for every vendor. Two of the BRIC countries are a part of the Asian market, and India is one of the most dominant of those. So what I have observed is that there is more dynamism, development, engagement and receptiveness to new technologies. These are very lively and vibrant markets, both with regard to growth and the adoption of technology.

Nils Brauckmann, president and general manager of SUSE




I wonder whether it also has to do with the fact that the societies we are talking about are very young. The share of young people in India is much bigger than in some of the European countries or in the US. With regard to enterprise needs, you would be surprised that we are talking about large enterprises. Many of these enterprises operate globally in terms of selling their products and having global teams. So if you compare a global enterprise based out of India to one in any other part of the world, it has similar needs, challenges and attitudes towards open source. Many of the enterprises tell me that they are frustrated with their traditional vendors and feel locked into the vendor solutions. Sometimes they don't get vendors who listen to them and concentrate on their needs. So open source technologies support them better in such cases and can add value to these companies. If I look at the revenue I am generating with the SUSE business, it's $225 million annually. Almost 39 per cent of it is coming from Eastern Europe and Africa, another 41 per cent is coming from the US, and 18 per cent of it comes from Asia Pacific. The biggest markets in this region are China and India. Linux is sold everywhere, and there is tremendous adoption of open source technology by corporations and enterprises everywhere around the world. India is the second largest market for me in the APAC region. We have over 35 per cent market share in China, which is the biggest market for us in the region. But that's not our largest market overall; we sell more products in Central Europe. India is significantly important for us. We all know that a portion of the IT technology heart beats in India. There is so much development and engineering talent in the country. Almost every large vendor has invested in an engineering arm in India. What started as offshore engineering for simple engineering tasks, to generate cost savings, has developed over the years into much more strategic engineering work and strategic engineering participation by the India-based development teams and companies. India has a huge pool of IT skills that contributes to technological conversations in a very different way.


"Many of the enterprises tell me that they are frustrated with their traditional vendors and feel locked into the vendor solutions. Sometimes they dont get vendors who listen to them and concentrate on their needs. So open source technologies support them better in such cases and can add value to these companies."

How much has SUSE changed after the Attachmate takeover?

Nils: One of the most important things that has changed is that we are now more independent and more focused on open source and Linux. SUSE was one of the product brands of Novell, and in many cases SUSE had to play a role within the bigger Novell product portfolio and the larger Novell marketing strategy. When Novell got acquired, the new management said that we would continue to sell different technologies but organise them in a different way. So they organised the company into four business units. The Attachmate Group has four business units, and I am heading the SUSE business unit, which is 100 per cent dedicated to open source, Linux and open source cloud solutions. So, now we decide on product development and go-to-market strategies keeping Linux and open source in mind, which is different from the past. Our goal is to advance Linux as a technology and collaborate with the community for mutual benefits, by listening to each other, understanding each other and working in a collaborative way.

You are excited about the Indian developer community. What are you trying to do to bridge the gap between SUSE the company and SUSE's community? And how would you describe the relationship between the two?

Nils: First of all, I would like to confirm that we have been an open source company for over 20 years. The open source movement began when Linux took off in 1991. Linux celebrated its 20th birthday a year ago and, this year, SUSE celebrated its own. So, what I mean is that for 20 years we have not just had a relationship with the community; we have been a part of the community. The founders and owners of our company were community guys. These people came out of the Linux community itself, and they realised over a period of time that if open source technology is used in enterprises, and companies can put their business applications on Linux as a platform, it will do great. But the community alone is not enough for this, because if there is a business problem, it has to be fixed then and there, while the community works on voluntary initiatives. Businesses obviously cannot wait for voluntary initiatives. So this is what the SUSE business is all about. But we are a part of the community like everyone else. Many of my engineers are very active in the community, so they contribute to the community programs. When I say community, I do not mean just the SUSE community but also other communities. As a company, we add something on top, which is hardening the code base, and providing service and support to make it ready for the enterprise to use and rely on. Our open source community, called openSUSE, develops pretty much the same operating system. A lot of the innovation that happens in SUSE Linux Enterprise Server happens in openSUSE. We have teams in the company that collaborate with the openSUSE development team; we provide them funding so that they have resources to get their work done; we provide content and engineers; and sometimes we rent space for them to hold their conferences. So we help the community to work together and to survive. That is totally collaboration based. So I do not have the power to tell the openSUSE community what they must work on. We are also working with the OpenStack community. We play an important role there as well. We are their platinum sponsors, so we finance the community to make sure that it works. One of our employees is the first chairman of the OpenStack board, not because we appointed him but because he was elected by the community. So there are many examples of us engaging, collaborating and contributing. But no one can force the community to do anything.


Half of the supercomputers in the world are running SUSE Linux. And there are many other Linux distributions around, including some really good ones. Why do you think the makers of the supercomputers choose SUSE over the others?

Nils: I think it is proof of our ability to develop solutions for certain target markets, in this case the enterprise customer. Maybe the enterprise customer has slightly different needs from consumers or the developer community. They might not look for the newest and most attractive features around Linux; what they really need is reliability, stability and performance. That is what we work on. So we have a reputation for expertise and quality in our Linux distribution. We set up our kernel to deliver those results for high-performance tasks. Apart from that, we have had a good relationship with hardware vendors for years. They do performance testing, and they realise that we are really fast and that they can rely on us.

Venkatesh Swaminathan, country manager, The Attachmate Group (India)

You just mentioned your OEM partners. If you look at Canonical, it is joining hands with hardware players to popularise Ubuntu and is making an effort to increase its retail presence. What is SUSEs strategy in the desktop space? Why are you not joining hands with OEM partners to make SUSE popular in the desktop space as well?
Nils: I am glad Canonical is making such an effort. Let us not forget that Ubuntu also lives off the same kernel, and by virtue of this, we are all members of the same open source Linux community and we all contribute to it. These are the things we have in common, but there are a host of things that separate us. From very early on, we have clarified that our sweet spot is the enterprise customer, where we offer Linux enterprise servers and Linux enterprise desktops. We want to be the Linux distribution that is best for enterprise customers. We believe that our system is the most stable and best performing, and that we are probably one of the most interoperable solutions in the Linux environment. That also means that we are not as innovative about Linux when it comes down to the community and to offering new features for the community. Ubuntu has certain strengths in the desktop space and they are working more with the consumer market. Needless to say, this is the reason why the majority of their successes are around the desktop. SUSE has tie-ups with enterprise vendors like HP, Huawei and many others, who pre-load SUSE with their enterprise products.

"I think open source communities are appreciative of good technology work done in all the domains. We definitely keep a watch on the products, but I wouldn't comment on whether Windows 8 will inspire us in the development of upcoming versions."

How much of a role does SUSE have to play in bringing out openSUSE?

Nils: There is a set of engineers that works for and is paid for by SUSE but also works for openSUSE projects in its
| JANUARY 2013 | 73


spare time. We have some people paid by us whose role is to reach out to the openSUSE community and make sure that our core engineering team and their engineering community are well-aligned and synchronised. We know that there is voluntary and unorganised participation going on, but sometimes we need to know what they are working on and how we can synchronise our efforts. So, in a nutshell, there are people paid to build a link between openSUSE and SUSE. On top of that, we give them some funding and marketing support as well.

For U & Me

You have joined the OpenStack cloud project like Canonical and Red Hat. Was that a strategic move?

Nils: Yes, very much. We heard from our customers that they were considering building a private cloud. Seeing that demand, we wanted to find out whether they would be happy to consider powering their cloud with open source. We spoke to some of our enterprise customers and got some fairly positive responses. They wanted open source not only as an operating system but as a choice to power their private cloud initiatives. So we got confirmation, first, that there is a future for private clouds and, second, that enterprises are ready to consider open source as an option. After we got a go-ahead from all corners, we thought about which open source community to join. And we looked for the community with the most momentum, the biggest industry support and the biggest community, because if the community is large, many people contribute and the development process is faster. We finally found the last missing piece: that was OpenStack. We then committed to being a platinum sponsor of the OpenStack Foundation for the next five years.

government and most of the state governments, they have mandated that any technology purchased for government initiatives should predominantly be on an open source platform. So organisations like ELCOT have been significant buyers of our desktops over a number of years, and they will continue to be significant partners for us in the desktop space. We are also beginning to see a lot of initiatives from the Odisha, West Bengal and North Eastern state governments. A lot of state governments are also looking to provide laptops to their students, so we are working on getting those laptops to run our SUSE desktop offerings. So you will see a lot of momentum and adoption in government initiatives in the desktop space.

The UP government is also planning to distribute laptops and tablets to students. Are you in talks with them as well?

Venkatesh: Yes we are. As we speak, we are engaging with the partners because we want to figure out what their expectations are. Every government is unique, the pricing patterns are unique, the usage patterns and education systems are also very unique. So we are still figuring out the many aspects of this venture.

Since you are talking of the desktop space, lets talk about Windows 8 as well. Have you had an experience with it and what is your opinion about it?
Venkatesh: Yes, I did get an opportunity to try the operating system. It looks nice and very different from Windows 7 or, for that matter, any earlier version of Windows. And the fact that it will offer a similar experience across the desktop and mobile space will be something good for consumers.

You said that you are banking big on the enterprise space. Linux is already big in the enterprise space. Is SUSE trying to leverage Linux's popularity, or is the company trying to play safe, since you are not talking of the desktop space or even the mobile space, which is a hot property these days?
Nils: We are not thinking of the mobile space at all. The mobile space has pretty much been taken over by the other vendors and I don't think there is much left for SUSE to come up with in this space. So it is the enterprise space. We offer enterprise server solutions and enterprise desktops as well. So if you talk about reaching out to consumers straight away, we have enterprise desktops which we sell to our enterprise customers. We are also one of the largest contributors to the LibreOffice open source project. LibreOffice is a desktop product as well, and we sell LibreOffice to enterprises to run on SUSE Linux enterprise desktops and also on Windows desktops. So we are targeting the desktop space and investing in the cloud space as well. Venkatesh: If you look at the initiatives from the Indian


How do you see that in terms of competition?

Venkatesh: I would not deny that the company has done great work in terms of making it easy for users. I think this would definitely change the way devices are experienced.

Will the next version of SUSE desktop be inspired by Windows 8?

Venkatesh: I think open source communities are appreciative of good technology work done in all the domains. We definitely keep a watch on the products, but I wouldn't comment on whether Windows 8 will inspire us in the development of upcoming versions.

What are your plans for involving developers in India and do you plan to replicate some of the global programmes in India as well?
Nils: This is one of the constant conversations we have in the company. We do not mind replicating some of our successful models but we want to be close enough to the local market and local developers, and customise our programmes to suit their needs.



The Top Open Source Skills that Will Get You Hired this Year!
Despite the continuing economic downturn, the IT job landscape is expected to be rather buoyant for open source professionals in 2013.


Open source technology, undoubtedly, has navigated its way through the choppy waters of the global recession and emerged as a key driver of IT growth. There is a surge in the number of companies continuing with open source technology adoption, and this trend will translate into more jobs for FOSS professionals in the days to come. According to the 2013 Salary Guide released by the US consulting firm Robert Half International, this year, some of the rewarding jobs will go to those with a fundamental understanding of Linux and open source. With the job market looking upbeat for FOSS experts, we spoke to industry leaders about the top open source skills that will be in demand this year. Industry stalwarts believe that the next couple of years will see Big Data and cloud computing skills featuring high on the wish list of recruiters. A recent poll conducted by the OSFY team on its Facebook page also indicates that cloud computing is being touted as the hottest skill in 2013. Lux Rao, country lead, Cloud Consulting at HP, India, shares, "There will be a tremendous

amount of interest around Big Data and cloud computing skills this year as both of them are changing the way enterprises function. With data analytics gaining momentum and being acknowledged as an important key to business growth, there will be an emerging requirement for data scientists. Also, 2013 will see a mass adoption of the cloud by companies in India. Most organisations use an open source cloud distribution, such as OpenStack or CloudStack, so hiring individuals with expertise in the open source domain will definitely be a priority." Echoing similar views, Srihari Srinivasan, head of technology at Thoughtworks, India, states, "Some significant skills based on OSS that will continue to gain traction include the ability to create solutions based on Hadoop and the components of its ecosystem, and the ability to implement private clouds based on Infrastructure as a Service solutions such as OpenStack and CloudStack. Today, it's really hard to imagine an enterprise not consuming OSS in one way or the other, since people are no longer paranoid about using OSS on their critical systems."

With virtualisation becoming a mainstream technology in most large businesses, 2013 will also see employers preferring applicants who have a fundamental understanding of this domain. And with a vibrant open source virtualisation ecosystem out there, open source professionals will definitely have an extra edge, feels Divyanshu Verma, Linux engineering manager at Dell R&D, India. "Many companies are relying on open source virtualisation to cut costs and enhance flexibility, thus paving the way for more jobs in this terrain. Aspirants are expected to have a good comprehension of virtualisation software like OpenVZ, KVM (kernel-based virtual machines), Xen, VirtualBox and more," says Divyanshu. According to him, Linux kernel device driver development is yet another skill that may be in demand this year. Experts believe that the job market is looking up in yet another booming segment: Android apps development. According to Dice Holdings, a job posting website, job postings for Android experts rose by 33 per cent from last year. Explains S N Rai, co-founder and director, Lava Mobile, "In the last three quarters, the growth of Android apps has been much higher than what has been put on record. So you can understand how hot this trend is in terms of job prospects as well. We do hire developers for our company and also integrate with many small developers to build apps. And this system is definitely creating a job requirement." Industry leaders also named programming tools like PHP, Python and Ruby as priority skills this year. "There is also a huge demand for open source security professionals on the recruitment landscape," says Vineet Tyagi, director, GRASS Linux Training and Development Centre. He further adds, "With the advancement of technology, security has become a major concern. A growing number of companies are implementing open source security tools, so the demand for open source security professionals is naturally high."
When it comes to hiring, open source professionals have an advantage as they can work in a flexible environment. So, what skills should an aspirant possess to enjoy a successful innings in companies that work with open source software? Vikram Vaswani, founder and CEO of Melonfire, a consulting services firm with special expertise in open source tools and technologies, reveals, "The ability to learn independently, flexibility and creativity in thinking, the ability to self-manage, a sound technical foundation, and a proven history of open source collaboration or development are some of the skills needed." Experts believe that the curriculum of most training institutes and colleges in India does not give students the right kind of exposure to open source software, and this should change for good. "It's essential to bridge this gap! But for this to happen, colleges should have good teachers and mentors who themselves are passionate about FOSS. There are quite a few initiatives driven by the Free Software Foundation in India to help students who are passionate about contributing to FOSS. Students can look to these events and initiatives to start contributing," quips Srinivasan.


While Srinivasan feels that the curriculum for open source technologies should be on par with the requirements of the IT industry, heads of training institutes say that they design their courses with the same point in mind. Dr D Viswanatha Raju, CEO of Vector India, which offers a specialised embedded systems course, explains, "We have a tremendous placement record of 126 MNCs hiring our students from February 2011 to February 2012, and this is because we design the course to be on par with industry standards. Recently, our students who did a specialised course in Android apps development were placed in companies like LG and Sony." Ramesh Kumar, director, Linux Learning Centre, shares, "All our courses are mapped towards catering to the needs of the industry. We keep ourselves abreast of everything that's happening in the open source world and use all this learning back in the classroom. We are constantly updating our course content and developing new programmes to reflect the latest industry developments and techniques." So, if you want to move up the career ladder in the lucrative open source domain, get armed with the right skill sets!

By Priyanka Sarkar
The author is a member of the OSFY Bureau. She loves to weave in and out of the little nuances of life and scribble her thoughts and experiences in her personal blog.




Live Within Your Budget, With a Little Help From FOSS!

This article reviews a few simple yet effective open source financial management tools that claim to help you manage your various daily transactions effectively, while offering tips on how to plan better.
With the increase in the number of shopping malls, and the use of plastic money, one can easily lose track of how much is spent. Financial management has become a necessity in this fast-paced world. And with the growing use of gadgets and computers, no one wants to use ledger books or even simple notebooks any more. That's where financial management (FM) software comes in. There are many professional FM packages designed to handle the finances of companies and other organisations, but in this article I'd like to introduce three open source personal finance managers (PFMs) that can track your personal savings and expenses: Grisbi, Skrooge and Eqonomize.

Eqonomize is a KDE-based PFM. It is very simple software, with which you can create accounts, keep track of your expenses and income, make transactions from the various accounts you have created, and schedule transactions so that they occur automatically. There is also an option to keep track of investments in securities. Besides recording your transactions, you can set budgets for each category, and


Figure 1: Home 1

Figure 2: Report



Grisbi is also available in the Ubuntu Software Centre, or you can run sudo apt-get install grisbi in a terminal.

Figure 3: Home 2

keep your expenses within control. Eqonomize provides a complete solution with a double-entry system. It supports the following types of accounts: current, savings, credit card, cash, securities and liabilities. The cash account is just like your wallet: it shows how much cash you have in hand. You can create categories of incomes and expenditures, as needed. Once an expenditure has occurred, you can still record refunds that have been made, by right-clicking the expense and selecting Refund from the drop-down menu. There is a feature that lets you get a summary of all transactions in pictorial or tabular form (Figure 2). Eqonomize also shows the remaining budget in each category, so that you can spend wisely. When the need arises, you can also export your transaction details as QIF, CSV and HTML files. One small feature that could have been included in Eqonomize is auto-save. Quite often, I enter all my transactions, and forget to save them. Maybe the team will include that feature in the coming versions. Eqonomize can be installed from the Ubuntu Software Centre or by running sudo apt-get install eqonomize.
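The per-category budget tracking these PFMs offer boils down to simple arithmetic. Here is a toy sketch; the categories and figures are made up, and this is not Eqonomize's actual data format:

```python
# Toy sketch of per-category budgeting, in the spirit of Eqonomize.
budgets = {"Food": 5000, "Leisure": 1500, "Bill": 2000}       # monthly limits (Rs)
expenses = [("Food", 1200), ("Leisure", 400), ("Food", 800)]  # recorded so far

# Total what has been spent in each category
spent = {category: 0 for category in budgets}
for category, amount in expenses:
    spent[category] += amount

# What is left to spend in each category this month
remaining = {category: budgets[category] - spent[category] for category in budgets}
print(remaining)  # -> {'Food': 3000, 'Leisure': 1100, 'Bill': 2000}
```

A real PFM adds persistence, double-entry bookkeeping and reporting on top of exactly this kind of ledger.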

Skrooge is a PFM for KDE 4, intended for those who want to keep an eye on their money. It can keep track of your income and expenses across several accounts, in several currencies. It has all the features you would expect from such a tool, such as categories, scheduled operations, graphical reporting and stocks management. It also has some less common features, like fast operation editing, search-as-you-type, refund trackers and customisable attributes. The best part of the Skrooge home page is that it provides a summary of all your accounts and displays a report along with some advice (Figure 6). Skrooge can create current, investment, credit card, assets, loan, wallet and other accounts. Like Grisbi, Skrooge also has an option to enter your bank account number and contact details, and you can even have your bank's logo associated with your accounts. Recording transactions is quite similar to Eqonomize. You just enter the amount,


Grisbi is another PFM, released under the GPL licence, that runs on Linux and Windows. It can handle multiple accounts, but does not use the double-entry system. It allows users to enter details such as the bank's name and phone number, so it can even be used to store such information. It has a vast set of categories defined by default. The initial set-up asks for details like the name of the account holder, the type of account, the bank's name, the contact details of the bank, and so on. Grisbi can handle multiple currencies. It creates budgetary lines, provides statistics for the various accounts, records your expenses and income, and handles scheduled transactions; you can import QIF files and export QIF and CSV files. A feature called the Credit Simulator can estimate the loan interest that will have to be paid after a certain period of time. Grisbi is a bit complicated compared to Eqonomize, but if you take the time to learn how to use it, it can make your life easy.
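A credit simulator of this kind is essentially computing the monthly instalment on a loan. Here is a sketch using the standard EMI (equated monthly instalment) annuity formula; note that this formula is an assumption on my part, not Grisbi's documented method:

```python
# Fixed monthly instalment for a reducing-balance loan (standard annuity formula).
def emi(principal, annual_rate_pct, months):
    r = annual_rate_pct / 12 / 100        # monthly interest rate as a fraction
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

payment = emi(100000, 12.0, 24)           # Rs 1,00,000 at 12% p.a. over 2 years
total_interest = payment * 24 - 100000    # interest paid over the full term
print(f"EMI: Rs {payment:.2f}; total interest: Rs {total_interest:.2f}")
```

The total interest figure is what tools like the Credit Simulator surface, so you can see the cost of a loan before taking it.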

Figure 4: Transactions

Figure 5: Reports




Get it from the Software Centre or run sudo apt-get install skrooge. In this article, I have introduced the most popular open source PFMs. We have just scratched the surface of what's available; there's a lot more to be learnt by using these tools and by trying out the many others available in the market. In my opinion, someone keen on using a PFM should start with Eqonomize, as it is the most user-friendly and doesn't require you to know anything about accountancy. I hope the article has given readers an insight into what open source has to offer regarding PFM systems. Hopefully, no reader will ever be short of money hereafter!

Figure 6: Home 3


the account from which it is paid, and the category to which it belongs. Skrooge has a very large set of categories by default, covering almost anything you can think of. Budgets can be set for different categories and tracked. Skrooge allows you to password-protect your document, so that your bank details do not fall into the wrong hands. It also shows reports for various periods, both in tabular and pictorial form, so that users have a better understanding of their money inflows and outflows.

By: Vineeth Kartha

The author is an electrical engineer and has a great passion for open source technologies, computer programming and electronics. When not coding, he loves to do glass painting. He can be reached at [blog]



Guest Column Joy of Programming

The Software Quality Iceberg

The term software quality continues to be an oxymoron! In this article, we explore the meaning of the term to understand why we need to focus on internal quality to develop high quality software.
Within just a few decades, software has come to pervade our lives. It is surprising that people don't realise how intricately connected with software their lives are. There is considerable software in the car you drive, the microwave oven with which you cook, the TV you watch, the mobile you use, the planes in which you fly, the projectors you use for making presentations: the list is endless. Even if you know which devices need software, you may be surprised by the extent to which they rely on it. Do you know that high-end cars these days have over a million lines of code? Or were you aware that the smart TVs we watch nowadays and the planes in which we fly are aided by software with over a million lines of code? With the pervasive use of software, monitoring its quality has become more important than ever before. Most of you, even if you've nothing to do with computers, would have faced problems because of low-quality software. For example, if you visit banks often, you would certainly have experienced, first-hand, the problems with banking software: it takes time to boot up and log in, often crashes, and will not provide or allow certain features. For these reasons, the banking staff are forced to make you wait longer, or ask you to come back later. Recently, when I had to visit a branch of one of India's largest private-sector banks on a weekend, I found that the staff were asking all the customers to go home; the central server had crashed because of the huge volume of transactions, and no work could be taken up till the problem was resolved!

Quality itself is a complex concept, and since software is intangible, software quality is even more complex. Let me illustrate why this is so, and how quality could be measured differently depending on the viewpoint. Based on whether you are a consumer of the software, the software developer, or the organisation that develops the software, the meaning of high-quality software would differ. As a user, you would consider software high-quality if the application responds very fast, is very reliable, and is easy to use, among other desirable characteristics. If you're developing the software, you'll rate it as high quality if you're able to easily make changes and test it. Organisations that produce the software will find it to be of high quality if they are able to reuse it, port it and make it available on other platforms, or make it inter-operate with existing systems. As you can see, the same software can be perceived to be of high or low quality, depending on who you are.

Let me give you a specific example. I came across an enterprise software product that was widely deployed and used by customers worldwide. Since it had evolved over time and matured, it was stable, in the sense that it was a product customers could trust: it was efficient, reliable and easy to use. The organisation that developed it was very happy with the product since it was a cash cow; hundreds of thousands of copies of the product continued to sell worldwide. Further, the product was easily portable to different platforms. However, if you checked with the engineers who maintained the software, they were frustrated. The software was built on age-old technology nobody used any more and, with two decades of changes, it had accumulated so much code that no one understood it any more. Further, the software was not designed with testing in mind, so making changes was a nightmare: any change in the code would often break the software. This symptom is known as code rot, and developers dread touching such software. Obviously, the software was reaching its retirement age, and was perceived as low quality by the engineers. Yet, ironically, this same software was perceived as high quality by the organisation that sold it and by those who used it. However, there is more to it. Gradually, over a period of a few years, the software became so rigid that no more changes could be made to it. Finally, the product had to be scrapped, and rewritten from scratch! This led to the company losing billions. In other words, though on the surface the different views on the software's quality seem independent of each other, they are intricately interconnected!

There are many other perspectives on software quality. For example, Capers Jones talks about the functional, aesthetic and structural aspects of software quality. As users, we care about the functionality that the software provides (i.e., functional software quality) and how easy it is to use (i.e., aesthetic software quality). However, as developers, we care about additional aspects of the software, such as its structural



aspects (i.e., structural software quality). In general, you can view software quality from two perspectives: internal quality and external quality. The aspects of software that concern its users, such as reliability, correctness and efficiency, are known as external quality, while the quality aspects that relate to the development organisation or team, such as maintainability, readability, understandability and complexity, are known as internal quality (see Figure 1). Most developers understand that there is a difference between the quality aspects that concern them and the quality aspects that impact the users. Once they see it in a visual form, as in Figure 1, they are able to appreciate the differences much better. Still, they are normally not able to perceive how the two are related. However, experienced architects understand that internal quality has a considerable impact on external quality! This link is hard to see, and the aspects of internal quality are easy to ignore, since they are often not very visible. The picture shows an iceberg to illustrate this point: only a small part of the iceberg, roughly one-eighth to one-tenth, is visible above the water, while the rest is submerged beneath. But the two parts are interlinked! And the software project ship can sink if we underestimate the impact of poor internal quality by superficially focusing only on external quality! Most development teams and project managers, in the name of being pragmatic, focus only on the external quality aspects. They focus on testing, which can be used to improve external quality. They run dynamic analysis tools and profiling tools to find and fix problems, or to improve the software on externally perceivable quality aspects. But what they don't realise is that most quality problems are internal to the software, and unless internal quality is good, external quality cannot be, even if it appears to be; poor internal quality can derail the project.
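To make the distinction concrete, consider a contrived sketch: the two functions below have identical external quality (the same observable behaviour) but very different internal quality:

```python
# Poor internal quality: cryptic names, a magic number, tax logic repeated inline.
def f(i):
    t = 0
    for x in i:
        t = t + x[1] * x[2] + x[1] * x[2] * 0.18
    return t

TAX_RATE = 0.18  # assumed sales tax, factored out as a named constant

def total_price(items):
    """Good internal quality: intention-revealing names, no duplication."""
    subtotal = sum(qty * unit_price for _name, qty, unit_price in items)
    return subtotal * (1 + TAX_RATE)

cart = [("pen", 2, 10.0), ("book", 1, 250.0)]
# Externally indistinguishable -- a user (or black-box test) cannot tell them apart...
assert abs(f(cart) - total_price(cart)) < 1e-9
# ...but only one of them will survive two decades of maintenance.
```

A black-box test sees no difference between the two, which is exactly why internal quality stays below the waterline until a change request arrives.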
This is analogous to the captain of the Titanic, who underestimated the impact of the iceberg because it looked small; the large part of the iceberg that was submerged in the water sank the ship. Internal (structural) quality affects external (behavioural) quality: this is the most important axiom in software quality. In the name of being pragmatic, or out of plain ignorance, many managers disregard internal quality. Recently, a manager who is a big fan of agile methods made a startling statement: "Refactoring is rework, and hence is a total waste of time and resources. We should get it right the first time!" Superficially, the statement makes sense: why refactor and rework when you can get it right the first time? But if you reflect on it, you'll see how ridiculous this statement is, especially in the context of agile methods. The proponents of agile methods have a very good understanding of the dynamics of software development, and what goes into making high-quality software. One of the basic

Figure 1: The software quality iceberg (internal vs external quality). Above the waterline, external quality (the user's view): reliability, usability, correctness, efficiency. Below it, internal quality (the developer's view): maintainability, testability, complexity, readability, flexibility, reusability.

tenets of agile methods is that software evolves by being adapted to ever-changing requirements. With every incoming change, the existing structure needs to be improved and adapted, which is achieved using refactoring. In fact, one of the principles of agile methods is that continuous attention to technical excellence and good design enhances agility. With this, instead of an up-front investment in design, good design is achieved through continuous refactoring. If you take out refactoring, and keep making changes to add more functionality, that will result in rotten code. To return to our earlier discussion on how internal quality influences external quality, the rotten code will ensure that the project is a failure. "We should get it right the first time" is the approach with an up-front investment in design, the approach that the traditional waterfall model encourages. The agile approach does not recommend up-front investment in design, and instead relies on good design emerging from continuous refactoring. Hence, refactoring is an integral part of agile methods, and should not be dropped as "needless rework"! To be successful, software projects (open source or enterprise) need to focus on internal quality. That is why design quality is of paramount importance. Structure gives rise to behaviour: external quality is emergent in nature, and it emerges from the internal structure of the software. Hence, the way to good external quality is to focus on and improve the internal quality of the software.

References:
[1] Titanic Dilemma: The Seen Versus the Unseen: http://blog.

By: S G Ganesh
The author works for Siemens (Corporate Research & Technologies), Bangalore. You can reach him at sgganesh at gmail dot com.

How To


The chances that you own a DLNA-capable device are high, but the chances that you are using the technology fully are pretty low. This article will help you make the most out of DLNA at home.

Have you noticed the logo near the title on products you bought recently? The latest home electronics devices come with a media-sharing technology called DLNA, which is aimed mainly at non-technical end-users and lets them share pictures, videos and music across devices, without hassle. There are already half a billion DLNA-certified devices in the world, but awareness and use by consumers is still very low. A few years ago, to watch videos from your PC on the TV's larger screen, you had to burn a CD/DVD and play it on the DVD player. Then devices like USB pen drives made CDs and DVDs obsolete, also overcoming the disc size limits, if you had a player with a USB port. Later, when media devices came out with LAN or Wi-Fi features, the configuration was tedious: it included configuring the IP address, starting a server on your PC, and so on. DLNA technology makes all this redundant. If a DLNA-compliant server is running on your home network, DLNA clients (players) connected to the same network will discover it. It is that simple! DLNA, or the Digital Living Network Alliance, is a standardisation initiative begun in 2003 by the electronics industry leader, Sony. It now has around 220 member companies, which include HP, Cisco, Broadcom, Google and

Samsung. DLNA organises devices in different classes based on the capability of the device and the interaction it will have with another device class. For example, you have a PC with music as your media source, classified as a Digital Media Server (DMS), and you want to listen to those songs on your smartphone, which is now the target device, called Digital Media Player (DMP). One device can be in more than one class, because of the interaction criteria. For example, the same smartphone that was earlier a DMP could share videos or pictures with your new LCD/LED networked TVthus acting as a DMS, while the TV acts as a DMP. There are three other classes: Digital Media Renderers, Controllers and Printers. In addition, there are equivalent classes exclusively for mobile devices, which support different file formats commonly used in small-screen devices. To avoid confusion in terminology, I will restrict my focus to DMS and DMP.

DLNA architecture

DLNA is based on UPnP (Universal Plug and Play), an Internet Protocol stack service. When you search the Internet for DLNA, you are more likely to find vendor articles or videos that paint DLNA as a technology in which


Figure 1 maps the DLNA protocol stack onto the OSI reference model: copyright protection, media formats, content discovery and control (UPnP AV 1.0), device discovery (UPnP Device Architecture: Auto-IP/DHCP, SSDP) and content transfer occupy the application, presentation and session layers; beneath them lie the transport and network layers; and at the data link and physical layers sits the network connection, wired (802.3, 802.3u) or wireless (802.11a/b/g/n).

network, which discovers and communicates between your Wi-Fi or Ethernetconnected home devices. This is the important layer that really gives you the plug-and-play experience, and also selects the media types that can be supported on the network.

Standard technology

OSI reference model

Figure 4: UPnP Router Control

Figure 1: Protocol stack of DLNA (Image courtesy

Figure 2: Media Tomb server settings

Consumer electronics equipment such as TVs or DVD players come with DLNA capability embedded, and will discover each other without manual configurationbut if you want to set up your PC as a DMS, then you need to do some configuration first. One DMS software that will make the videos, pictures and music on your PC available to all DLNA compatible players is Media Tomb it is simple, easy to configure, and has been released under the GPL. After installing it, the server runs as part of system services. To enable a browser-based UI to add or remove media, follow the steps indicated below. First, as the superuser, edit /etc/mediatomb/config.xml and if any of the options below are No, change them to Yes:
<ui enabled="yes" show-tooltips="yes"> <accounts enabled="yes" session-timeout="30"> <account user="mediatomb" password="mediatomb"/> </accounts> </ui> <transcoding enabled="yes">

Open source softwareservers, clients and renderers

Figure 3: GUPnP Universal Control Point

everything's automatic. But rarely will you find, in one place, what equipment is required, and how to configure DLNA? Is it like Bluetooth, with which you pair devices before connecting, or like Wi-Fi, where you discover or manually enter an SSID to connect? Understanding the DLNA architecture helps you find the answers faster than the hours spent searching the Internet. Since DLNA relies on UPnP, it is like the physical plug-and-play experienceremember Microsoft's Plugand-Play feature from Windows 95 onwards? UPnP tries to offer the same experience in the digital media world. Figure 1 compares the OSI reference model to DLNA terms. The physical and data-link layer are the IEEE Ethernet and Wi-Fi standards, so your network can be wired or Wi-Fi (smartphones, tablets, etc) based. UPnP is a layer over the IP

After that, do a sudo /etc/init.d/mediatomb restart to activate the new configuration. Then, in your browser, go to http://localhost:49152/ (assuming you haven't changed the default port). Log in with the name and password: mediatomb/mediatomb and you can then add the directories or files you would like to share on the home network to the database (see Figure 2). Other DMSs include miniDLNA and Rygel. The former is used as the media server, in embedded form, by products like Seagate NAS storage, for home use. The XBMC software media player and entertainment hub also comes as a live CD and can make your PC a dedicated DMS.

DLNA/UPnP-related software

The GUPnP project has many applications to help you discover, monitor and track UPnP devices. GUPnP Universal Control Point discovers all UPnP-capable devices on the network. In

How To

For U & Me

Figure 5: Skifta app displaying media servers

Figure 6: Files shared using MediaTomb

Figure 7: Android app

Figure 3, you can see MediaTomb (running on a Linux box), a NAS server (axentraserver:root), Rygel (another media server for Linux), an Android mobile (SAMSUNG GT-N7000), a Belkin router (with UPnP support) and experimental GUPnP Network Light. This application also gives you details about the interaction between the devices, the state changes and services offered. The GUPnP Network Light sample application simulates a UPnP device on the network, letting you toggle between the two possible states, ON and OFF. UPnP Router Control is an application that connects to the home media gateway (router) that all devices are connected through; it displays upload/download speeds, real-time network activity and the total volume of media data exchange (Figure 4).
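Tools like GUPnP Universal Control Point find devices through SSDP (Simple Service Discovery Protocol), the discovery part of the UPnP Device Architecture: a control point multicasts an M-SEARCH request to 239.255.255.250:1900, and each matching device replies with its details. As a rough sketch, here is what such a request looks like; the socat line at the end is optional and assumes socat is installed:

```shell
# Compose the SSDP M-SEARCH datagram that a DLNA control point multicasts
# to 239.255.255.250:1900 when looking for media servers (DMSs).
# ST is the search target; MX caps how long devices may wait before replying.
MSEARCH=$(printf 'M-SEARCH * HTTP/1.1\r\nHOST: 239.255.255.250:1900\r\nMAN: "ssdp:discover"\r\nMX: 3\r\nST: urn:schemas-upnp-org:device:MediaServer:1\r\n\r\n')
echo "$MSEARCH"

# One way to actually send it and watch the replies (if socat is installed):
# echo "$MSEARCH" | socat -t 3 - UDP4-DATAGRAM:239.255.255.250:1900
```

Every DLNA server on the LAN that matches the search target answers with a unicast HTTP-style response pointing at its device description, which is how players build the server list you see on screen.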

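The MediaTomb config edit described earlier can also be scripted. Here is a minimal sketch that flips the enabled="no" flags to "yes" with sed; it works on a small stand-in fragment mirroring the article's snippet so you can try it anywhere, but on a real system you would point CFG at /etc/mediatomb/config.xml, run it as root, and keep the .bak backup:

```shell
# Sketch: enable MediaTomb's web UI, accounts and transcoding by flipping
# enabled="no" to enabled="yes". CFG points at a scratch stand-in here;
# on a real install, use CFG=/etc/mediatomb/config.xml (as root).
CFG=mediatomb-config.xml
cat > "$CFG" <<'EOF'
<ui enabled="no" show-tooltips="yes">
  <accounts enabled="no" session-timeout="30">
    <account user="mediatomb" password="mediatomb"/>
  </accounts>
</ui>
<transcoding enabled="no">
EOF

sed -i.bak 's/enabled="no"/enabled="yes"/g' "$CFG"   # .bak keeps a backup copy
grep -n 'enabled="yes"' "$CFG"                       # show the flipped lines
```

After running this against the real config, restart the server as described above for the change to take effect.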
What if the TV you currently have is not DLNA-certified, or if it does not have an Ethernet port or Wi-Fi? Can you not experience DLNA features? There is a solution, through additional devices like the iOmega TV Link, or media players that are DLNA-capable and can be connected to the TV using an AV cable or HDMI. There are also products that combine storage and a media player with A/V output and HDMI; one of these is the iOmega ScreenPlay DX HD Media Player. Another option is to use gaming consoles like the Xbox and Sony PlayStation 3, which are DLNA-certified. To share a huge media collection with your PC, you have to keep your PC on all the time; the alternative is to buy a product like the Seagate GoFlex Home Network Storage system, instead of USB external disks.

Check DLNA certification online

If you did not pay attention to the DLNA/UPnP logos on the product package, you can also check online for the full specifications of the product, or you can search the DLNA home page ( for certification.

Android apps

Your Android phone can play media stored on a DMS (like a PC or NAS storage device). Skifta is one DLNA player app; it automatically detects DMSs on the network and uses the Android default player, or user-installed players, for rendering. Other popular Android DMS/DMP apps are the aVia media player, the MediaHouse UPnP/DLNA browser and BubbleUPnP, which let you share media from your mobile with other devices.

New buys

If you are buying a new TV, Blu-ray player or mobile, make sure it has DLNA certification, or uses a platform that has DLNA software available. When you buy home networking devices like Wi-Fi routers or NAS storage devices, check the specifications for DLNA or UPnP. Figure 7 shows an Android app from that will help you make a choice if you are planning to buy new devices. When you are buying a Wi-Fi router, you might find lower-speed routers (e.g., 54 Mbps) cheaper; after all, there's a long wait for Net connection speeds to reach even that level affordably. But it might be better to have a higher-speed router (150 Mbps or more) for your home networking needs, as it would make video streaming to your HD TV smooth.

By: Janardan Revuru

The author is fond of open source and gadgets. For any queries on making DLNA work at home, you can reach him at



Instagram is a fast, beautiful and fun way to share your photos with family, friends and peers. Snap a picture, choose a filter to transform its look and feel, then post to Instagram, and easily share it on social networking sites too; and it's absolutely FREE! You can transform your ordinary snaps into works of art your friends admire, and you can view what your friends post every day. The main features of this app include:
 Custom-designed filters and borders.
 Instant sharing on Facebook, Twitter, Tumblr and Foursquare.
 The ability to 'Like' and comment on the pictures posted by your friends.
 Full support for front as well as rear cameras.
The current version available is 3.2.0, and it requires Android 2.2 or above. The average rating of this app is 4.6.


Figure 2: Instagram
Figure 3: Instagram UI
Figure 4: Instagram-2

Android Lost

As frequently happens, you leave your phone somewhere, perhaps even in silent mode if you were in a meeting, and later you can't remember where you left it. Besides the expense and hassle of getting a new phone, you might also have personal or business data on your phone that you do not want to lose. There's an app for that! Android Lost, developed by Theis Borg, helps you locate your misplaced Android smartphone. You can control your smartphone by just texting a custom attention message, and the app turns the ringer volume up, making your phone's ringtone audible (useful when your phone is in silent mode). It's also possible to get the exact location of your phone (through GPS). Simply text your attention word from any other cell phone, or from the Internet, to your lost phone. If it's been stolen, the pass-code feature prevents the thief from uninstalling the app without prior notice to the owner. By remotely controlling the phone through this app, you can retrieve the phone's IMEI number, set up call forwarding, read your SMSs and send them to your e-mail. You even have full Internet access (the app can create network sockets). What more could you ask from a free app? The average rating is 4.6; very nice. The current version, 4.0.4, requires Android 2.2 or above.

Figure 5: Android Lost
Figure 6: Android Lost-1

Continued on page 89...



Open Gurus

Introducing OpenVBX
How To

OpenVBX, an open source business phone platform launched by Twilio under the Mozilla Public License, provides an install-free solution that can be hosted on your Web server or on a cloud provider like Amazon EC2, and then used to build call and SMS flows.
If you're seeking to build an application that can work across voice, SMS and IM, there are plenty of options available. What earlier used to be a complex and costly integration can now be created and launched using simple Web wizards and programming, thanks to cloud platforms like OpenVBX. These managed service platforms allow you to build very powerful voice and text applications right in your browser. OpenVBX is a virtual MySQL- and PHP-based PBX, which can be downloaded from Twilio's website or from GitHub. Call or SMS flows themselves are driven by widgets that you drag and drop. Some of the possibilities are:
 Getting a local or toll-free phone number for users to call.
 Routing calls to a menu, individuals, teams and voice-mail.
 Letting the first individual from a team take the phone call.
 Building conference lines and menus.
 Transcribing voice mails.
Let's look at how to install OpenVBX, build a sample application, and then test it. This requires a basic set-up, followed by workflow building.

Install OpenVBX

Get the zip file and deploy it on your Web server. Access the root page and walk through the installer. You will need a MySQL database. Register for a free Twilio account.

Sample application

Let's look at a simple call workflow:
1. Log in to OpenVBX with your Twilio account.
2. Set up a device by giving it a device name and number. You can choose to receive an SMS notification on the device.
3. Set up voice-mail via text-to-speech, an MP3, or a recording made using a phone.
4. Create a new call flow using the wizard. When a call begins, choose to read a greeting from text.
5. After the prompt, provide a menu with three options. For Option 1, play a song; for Option 2, play voice mail for a user; for Option 3, send an SMS back to the caller, and hang up.
6. Configure other options (e.g., play the menu only once), and save the flow.
You don't need to be a programmer to create such a workflow, and you will be amazed at the simplicity with which you can let your imagination loose.

Test the sample application

Get a sandbox number and PIN from Twilio, and configure it with this flow in the OpenVBX portal. (For privacy reasons, my sandbox PIN is not shared here. Get yours after registering for a free testing account.) Call this number with the PIN, and observe the above workflow playing for you. Wonder at how easy this was, with zero programming so far!
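Behind the scenes, OpenVBX answers Twilio's webhook requests with TwiML, Twilio's XML dialect for call control; each widget in the flow editor is rendered as TwiML verbs. A hand-written sketch of a comparable greeting-plus-menu might look like this (the action URL is a hypothetical example, not something OpenVBX generates verbatim):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical TwiML for a greeting plus a one-digit menu.
     Twilio POSTs the chosen digit to the (example) action URL, which
     would respond with more TwiML: <Play> a song for 1, voicemail
     playback for 2, or <Sms> followed by <Hangup> for 3. -->
<Response>
  <Say>Thanks for calling.</Say>
  <Gather action="/menu-choice" method="POST" numDigits="1">
    <Say>Press 1 for music, 2 for voicemail, or 3 to get a text message.</Say>
  </Gather>
</Response>
```

The drag-and-drop flow spares you from writing this by hand, but knowing what is generated helps when debugging a flow that misbehaves.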

Figure 1: Call Flow 1

88 | January 2013 | Open Source For You
Figure 2: Admin Flow
Figure 4: Voicemail

What more?

Advanced applications can be built by using APIs, adding user groups and configuring enterprise workflows. The ecosystem also includes several plug-ins submitted by the open source community, which extend the capabilities of OpenVBX; these can be added easily. To start using real phone numbers, you can buy a number from Twilio. Although OpenVBX was born at Twilio, other platforms like Tropo also provide an implementation, and other providers might also work to have OpenVBX run on more than one cloud platform.
Figure 3: Call Flow 2

By: Puneet M Sangal

The author has been in the software industry for the last 14 years. He has worked in several areas on the Web and the Internet. He runs a media e-learning site at He currently works at Yahoo as a senior manager of products.

In case of any issues in making the workflow, I can be contacted at Also see the figures for reference.

Continued from page 87...

Hide pictures and videos in Vaulty
Too many uninvited eyes viewing the pictures and videos stored on your phone? Need a way to hide them? Use Vaulty to hide your personal stuff from the public gaze, camouflaged as a fake stocks app; it works just like a real stocks application, and your media can't be viewed unless one enters the correct password. You can rename and filter your videos and pictures, and a password recovery option is also available. The latest version varies from device to device. A rating of 4.7 is impressive. It runs on Android 1.6 or above. I tried to run it on the Galaxy Note but, unfortunately, it didn't work.

Figure 7: Vaulty

Figure 8: Vaulty-1

Figure 9: Vaulty-2

Hopefully, the above-mentioned apps will be of use to you. Feel free to send me your feedback and suggestions.

By: Ranjan Satapathy

The author is a freelance tech writer and an open source enthusiast. Follow him on G+ (



Set up Your Own E-Commerce Platform with Spree

Quite often, businesses want niche software solutions but don't have the hours, expertise or capital to invest in them. However, such solutions often already exist in the open source world. Spree is an example of such a pre-baked e-commerce platform, built with Ruby on Rails. It lets you set up your own Web store within minutes, and truly justifies the 'Don't reinvent the wheel' mantra.

An e-commerce store should be highly customised in its look, feel and workflow to match the requirements of the seller. It should showcase all the products elegantly, provide a catalogue of available items, allow users to search through the myriad products, and offer administrators the controls to monitor all aspects of user behaviour. Spree has tons of features that satisfy these core requirements, along with much more functionality. Here is a quick peek at what it has to offer.

Powerful administrative controls

Managing your products and orders is a breeze with Spree. You can have multiple variations of items, based on colour and size, and custom properties can be added to products. It is also easy to control all aspects of your order life cycle. Inventory-tracking features help you track each and every product, with detailed analytics provided to help understand buyers' actions.

Easy customisability

The power of Spree lies in the extent to which it can be customised. It offers such flexibility with application-specific customisation, extensions and themes. Let's look at all three aspects. The official Spree docs explain that application-specific customisations are used by developers and designers to tweak Spree's behaviour or appearance to match a particular business's operating procedures or branding, or to provide a unique feature. These are the most basic sets of customisations a store needs. Extensions add logic and functionality to Spree. Themes, on the other hand, just change the look and feel of the store. Both extensions and themes are usually distributed as Ruby packages called gems. Moreover, Spree gives us the flexibility to choose a checkout flow for the items: you can have a single-step or multiple-step checkout, with standard or AJAX navigation. Also, adding discounts, taxes and other rules to the process is totally effortless.

Integration with payment gateways and shipping carriers

Payment gateways allow secure payment transactions. In addition to the default gateways supported out of the box, with the Spree Gateway gem you can integrate with several popular payment services. For shipping items, the customer gets to see all the available providers, with their costs for that particular zone, prior to choosing one. The shipping costs finally get added to the product cost.

Easy SSL and SEO

The official Spree guide makes it easy to set up a search-engine-friendly Web store that supports SSL.


A convenient API

Spree provides a convenient API to access store data. With such a mechanism in place, it is feasible to build mobile applications for the store, or middleware applications that act as a bridge between Spree and a warehouse or inventory system.

Great support!

Spree has a vibrant community backing the ecosystem. For organisations that need hand-holding with the entire set-up, cost-effective paid support plans are easily available.

The structure of gems

Spree itself is a collection of gems that provide all the functionality. In the Spree Git repository, the following gems are present:

spree_api       Provides REST API access
spree_auth      Provides authentication and authorisation
spree_cmd       Utility for installing Spree and creating extensions
spree_core      The core e-commerce functionality
spree_dash      Simple overview dashboard
spree_promo     Coupons and promotions
spree_sample    Sample data and images

Rails engines, since their introduction in Rails 3, let developers build mini applications that provide additional functionality to host applications. All the constituent gems are nothing but Rails 3 engines.


Trying out Spree

The Spree creators have a live demo running at http://demo. to showcase a sample Spree store without admin access. If you want a personal sandbox with full access, you need to sign up at one_click/signup, after which a personal store is created and the login credentials are mailed to you. For folks who love to get their hands dirty, installing Spree locally on your machine requires very little effort, but some technical chops. Without further ado, let's get started with installing Spree. For the sake of brevity, assume a system running Ruby on RVM (Ruby Version Manager). First, install the latest Rails gem with gem install rails -v 3.2.3; next, install the Bundler gem with gem install bundler. A very popular image-manipulation library, ImageMagick, is used by Spree to manipulate images; if you want to add product images to your store, you need to install this suite. ImageMagick installation is OS-dependent, and a quick Web search will point you to some excellent resources. Finally, install the Spree gem with gem install spree and you're done with the first part.

Now, you need to create a Rails application that makes use of Spree, with, for example, rails new mythstore; after that, change to the application directory (cd mythstore) and install Spree for this application (spree install). Spree will ask you some questions regarding the installation of sample data. Choose based on your requirements, or hit Enter to go with the default options.

With Rails 3.1, an Asset Pipeline was introduced to the Rails framework in order to concatenate and minimise client-side assets such as JavaScript and CSS files. It also allows developers to write assets in languages like Sass or CoffeeScript. This feature can cause some performance issues in development mode. To fix this, issue the following precompile task:

bundle exec rake assets:precompile:nondigest

You are now ready to take Spree for a spin! Start a Web server on your local machine by executing rails s, then fire up your browser and point it to http://localhost:3000 to see the Web store.
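For convenience, here are the commands from the walkthrough collected into a single script. This sketch only writes the script out so you can review it before running it on an RVM-managed Ruby system; mythstore is the example application name used above:

```shell
# Collect the article's Spree installation steps into a reviewable script.
cat > setup-spree.sh <<'EOF'
#!/bin/sh -e
gem install rails -v 3.2.3        # Rails, pinned to the version used in the article
gem install bundler
gem install spree
rails new mythstore               # create the host Rails application
cd mythstore
spree install                     # add Spree to it; answer the sample-data prompts
bundle exec rake assets:precompile:nondigest
rails s                           # then browse to http://localhost:3000
EOF
chmod +x setup-spree.sh
```

Run ./setup-spree.sh once you have RVM and (optionally) ImageMagick in place.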

Beyond the horizon

Spree comes bundled with a lot of documentation. To deploy a production Spree store instance on the Web, you can dig deeper and read the entire documentation at http://guides.spreecommerce. com/. To check out some real-world stores that are already running Spree, visit Seasoned Rails developers can read the contribution guide at http:// to add their efforts to this amazing project. By: Nitish Upreti
The author is a Rubyist at heart. A technology start-up enthusiast and classic rock lover, he tweets as @nitish and blogs at



For U & Me

A Tribute to Raj Mathur: A Much Respected FOSS Contributor


People active on Linux User Groups in India do not need an introduction to Raj Mathur. Yes, Raj Mathur, the guy known for his brutal honesty and principles, is no more. We lost him on 12.12.12.

Raj Mathur was a founder member of the Indian Linux Users Group, Delhi, and a very active member of the free and open source community. Well respected and extremely knowledgeable, he was often sought after for advice, which he readily offered. He was one of the earliest users and advocates of Linux and free software in India. Apart from contributions to the FOSS corpus with numerous packages released under the GNU GPL, he was also a regular member of the Free Software Award Committee, Director Emeritus of the Open Source Initiative and a visiting professor at the Indian Statistical Institute, New Delhi.

Mathur was an inspiration to many. He always extended a helping hand to those who wanted to experience the world of FOSS, and even guided those who wanted to make money out of it. He ran Kandalaya, a consulting firm in the GNU/Linux, network application integration and network security domains. Mathur was a wonderful orator as well, when it came to talking about the importance of open source technology. We had the opportunity of inviting this visionary techie to the recently organised Open Source India 2012, held in Bengaluru.

Kishore Bhargava, Mathur's close buddy for the past 30 years, broke down when we contacted him. He recalled, "We spoke over the phone almost every day. Unfortunately, we did not speak yesterday. Raj and I have shared the greatest moments together. He loved to have fun and encouraged everyone around him to do the same. He loved his food, his movies, his music, and being with friends and family." Bhargava informed us that Raj suffered a heart attack yesterday and was declared brought dead by the doctors when taken to the hospital.

Supreet Sethi, one of the members of ILUG-D, wrote in his tribute to Raj on the LUG, "Yesterday's news, of Raj moving on, is shocking. I met him through ILUG-D and came to know him better through various activities of ILUG-D that he helped organise. He was brutally honest and that made him special. He was special because with those piercing words, he could get through to you better. Yesterday night, I visited his drawing room, which played host to many ILUG-D events. Niyam playfully called it the opium den. The opium den was filled with people like Andrew, Gora and Friji. The person with the trademark green camouflage was lying lifeless on the low diwan. For a second, I imagined a cup of coffee and creme, which was a kind of ritual commencement of linux-delhi meetings, with Kishore bringing the coffee and Raj arranging for cups and creme. The cup of coffee is not there any more, that table is missing and, more importantly, Raj has checked out of the opium den."

Here's what Raj Mathur told us when we spoke to him about tips for open source businesses: "You should be aware of what technologies are available in different fields, even if you are not very conversant with them. Those whose job is basically to architect and implement solutions for their clients should be aware of the latest and emerging technologies. For instance, when you are architecting a solution in the proprietary software domain for a mail server, you would use one base technology and all the other technologies come along with it. But when you are dealing with open source technology, you have to choose each component individually and make all the components work together. You should be aware of what components are available, as well as their strengths, weaknesses and special features. You should also know which technologies will fit together for a solution that's appropriate for your client, and you have to know how to join all of them together. These are the things that you need to be aware of when you are dealing with open source. It is very easy to get stuck with one technology in open source also, which is fine, as one can become an expert in that technology. But eventually, what happens is that clients end up losing out on new technologies, or rather, better technologies, which may be more suitable for their requirements, because the service provider is not aware of them. And the awareness comes from interactions with the community, whether it is through RSS feeds, forums or mailing lists.

"The most important thing while consulting in open source is that you should be more open to new technologies than you would be in the proprietary software world, because technology doesn't change so fast in the case of the latter. In the open source world, technologies get enhanced and mutate really fast."

By: OSFY Bureau

Open source has more headroom in India than anywhere else

When companies like NIIT Technologies Ltd adopt open source technologies, they take the technology to the next level with their innovation and development, creating a trend of sorts in this domain. NIIT Technologies works on open source projects, contributes to the community and hires open source professionals for its projects. If given a choice, the company chooses open source technologies over proprietary solutions because it gets adequate talent and is ready to train engineers in relevant technologies, if required. Diksha P Gupta from OSFY spoke to Arvind Mehrotra, president, APAC, India & Middle East, NIIT Technologies, about the scope for open source professionals at NIIT Tech and what the company looks for. Excerpts:
Arvind Mehrotra, president, APAC, India & Middle East, NIIT Technologies
Recruitment Trends


Do you feel India is rich enough in open source talent and do you get the kind of trained manpower you want for NIIT Tech?
There is cutting-edge work constantly happening in open source. So, today, if I were to look at it, yes, there is sufficient talent available. Whether there is an enthusiastic workforce for open source in particular is an open question. I think it's a 50:50 kind of ratio. People take up IT as a career option, whereas open source requires one to play the role of a continuous evangelist on the technology front. People opting for IT as a career are selecting the industry, and are focusing on certain business processes and certain types of applications to further their roles and careers. So, if you talk about the availability of a workforce, yes, it is available, but it has to be groomed correctly. A lot of those qualified have the aptitude and the attitude to be evangelists for open source. But open source is a culture where people will make mistakes. It requires openness to download applications, work on them and contribute back; whereas, in a formal organisation, we are worried about the legal implications of copying a piece of code and embedding it into work that we are contracted to deliver to our customer. So we have formal barriers there.

We cannot allow that because we will carry those risks. We do not want someone else later claiming ownership of the code our engineers say that they have created. There are legal and contractual implications that prevent businesses from inducting open source applications easily.

You said people can become open source evangelists if you train them right. Do you train people when they come on board at NIIT Tech?
Yes, we do. NIIT Tech has made an open source commitment. Before the cloud was launched, we used to make Build My Application (BMA). It was a framework for rapid application development, and we did market it. When we realised that people were finding it difficult to retain the code and maintain it, because it was being generated by a RAD-equivalent tool, we allowed it to go to the open source community so that others could continue to use it and build their own applications with it. So that has been our contribution, and that momentum and spirit continue to fuel the organisation to make more such contributions. As far as training the professionals is concerned, we have a schedule calendar and a huge library of training material available for our employees. The training programmes are available both in electronic media and as premises-based training. Our technology innovation centre constantly generates ideas for projects and programmes that the community works on. The centre runs an annual session, during which people who have worked on open source or proprietary technologies come and showcase their capabilities. For us, open source is also a business issue, as there is immense demand in this sector. There is a centre of competence that drives it. But it trains only those who will work within the organisation. So, if the demand for a particular skill set is not very large, we do not train a huge workforce just for the sake of giving them exposure to open source. Ultimately, employees are becoming more discerning. They want to know why they are getting trained and whether the training will benefit their career further or fulfil any personal objectives. Only then will they agree to get trained on a subject.

So there are employees who are eager to get trained on open source technologies these days?
Yes, very much. And that depends a lot on the broad area that a professional has chosen to build a career in, whether it's the infrastructure, applications or the database side. Most of the database guys these days want to understand technologies like MySQL, DB2, etc. Employees also choose what they believe their specialisation should be.

Do you think there is enough training being provided across the country on open source technologies?
In my view, there is a lot of noise about this. There are a lot of forums where such discussions take place. But sometimes, one believes that open source has become not the mainstay but the cheap option, whereas it should be the other way round. Open source should not be the cheap option but the technology option. So today, if people are getting driven to open source for the commercial objective of avoiding licensing costs, they tend to acquire applications that may not suit them, making the wrong choices on open source. Businesses today, unfortunately, have poor communication with IT professionals, and the latter often do not make the right assessment of what is required. If you select an open source technology, you have to know: (a) who is backing that open source software; and (b) if there are support issues, who will solve that problem. So, in short, one has to look not only at the availability, but also at the maintainability and scalability of the applications developed on open source, rather than just the cost factor. If such decisions are taken, I am sure open source has more headroom in India than anywhere else.

What are the challenges involved with open source technologies vs proprietary technologies?
There are two challenges involved if you talk of open source technologies vs proprietary technologies. The proprietary technology providers have established huge brands, and open source has a brand issue; so there is a recall issue with open source technology. If an open source project does not have the backing of an OEM but has a forum, a technology guy will be comfortable in that kind of an environment, while a business guy may not be as comfortable, because he does not get any assurance. So the open source forums in India will have to give a certain set of assurances, or identify a list of applications or application stacks that can be rated. Assessments of the applications are required for the business people to understand what choices they are making.

What are the skill sets that you look for in engineers when you hire them for open source projects?
The centre of competence at NIIT Tech has defined the toolsets that we work on. But what we look for while selecting an engineer depends on the needs of the project. If we are using open source testing frameworks for a project, we would want to take on professionals who have some experience in those tools. But if we are hiring testers in general, their engineering calibre is the biggest criterion for selecting them. Freshers are hired based on their basic engineering skills and their understanding of technology.

If businesses are ready to adopt open source technology, does that mean that the demand for open source professionals has been increasing in India of late?
I do not have hard statistics to say that, but if I were to look at the government projects that we are working on, there is very strong demand. The NIC, the Government of India and DeitY are formally identifying applications that should go to open source. But they are also requesting options from open source and proprietary applications simultaneously in many of their tenders. So, the final choice remains with the vendors, and they have to decide what to select. If the penalties are linked to the availability of the application, warranties and support, and given the fact that open source technologies do not allow longevity, OEMs and systems integrators prefer to go for proprietary technologies, because they can back these with something. But yes, one cannot deny that the government is creating demand. We, as a company, do submit Requests For Proposals (RFPs) to the government. I think India has a fairly good demand for open source professionals, and it will continue to grow because companies want IT offerings at a lower cost.

Does NIIT Tech contribute to open source projects as well?

NIIT Tech is definitely involved in making contributions to several open source projects. Our centre of competence and our technology innovation centre interact with the open source community in a formal way. Apart from that, our developers are involved with various community initiatives, which makes us even closer to the community.

For U & Me

Alfresco: The Open Source Answer to SharePoint

Much has been said about how SharePoint makes working in a team easy and collaboration among teammates effortless. But for the open source community, SharePoint is proprietary software. So, what's the alternative? This article looks at one of the most-talked-about options, Alfresco, to evaluate its potential to stand up to SharePoint.

Let's look at the main features of Alfresco, which make it a suitable alternative to SharePoint.

Web content sharing

One of Alfresco's main features is that it allows Web content sharing. It lets you create a website where the project team can collaborate, share documents and information, schedule meetings, and do much more. Creating the site is as simple as it gets, using the Alfresco GUI. Once you log in to the admin panel, just go to Sites > Create site. This opens a window (Figure 1). Fill in the details and get started. Once the site is created, it takes you to the site dashboard, where you can add content, share documents, invite people (the Invite button is at the top right) and personalise your site. The Document Library button on the menu lets you add and share documents. There is provision for fine-grained access control here, letting you manage who accesses what and at what level (read, write or both). The Customise Dashboard button next to Invite lets you customise the site; you can add dashlets (small widgets for different activities), and also arrange their position and display format (one column, two columns and so on). Figure 2 shows the dashboard. As you explore, you will find there is a lot more to content sharing in Alfresco.

Document management and collaboration

Document management is another important feature of Alfresco. You will find the Repository button in the top menu bar (in the admin/user dashboard). The repository is where you can share and manage all the documents of your installation. If you want to add a document specifically to a site, you can go to the site dashboard (not the admin dashboard) and then to the Document Library link. Here, you can add (even drag and drop) new folders or new documents. Once the document is added, you can fine-tune the permissions: select the document or folder you want to modify, and click Selected Items (see Figure 3).

Figure 1: Add a site here

Figure 2: The site dashboard


| JANUARY 2013 | 95


Figure 3: Privileges management
Figure 5: User management

Figure 6: Adding users

Figure 4: Automatic document versioning

Alfresco also has version control for documents; it automatically shows edited documents with versions (Figure 4). Hence, you don't have to bother about parallel changes to the same file. Alfresco even allows you to take an exclusive lock on a file, so that only one person can edit it at a time. Once you're done and check in the document, it becomes available to others for editing. Alfresco document management also allows tags to be associated with documents, which makes it easy to search for and identify them.

Figure 7: Workflow

User management

Alfresco offers a robust user management system with efficient handling of users and privileges. It also supports social networking, allowing users to set their status, and to like and comment on events. This adds great value, bringing the team closer. To add a user, visit the More link in the admin dashboard, then the Users link, to get to the Users page (Figure 5). Once users are created, you can invite them to any Alfresco site you have created; and while inviting them, you can set the user's roles. This allows a great deal of flexibility, since you can have a user with different roles on different sites. Similarly, a group can be added instead of a user, and a whole group can be invited at once to a site, making the whole process a lot faster.

Alfresco supports workflows too. You can create a workflow for any task you wish. Just go to the admin/user dashboard and select Start Workflow in the My Tasks window (Figure 7). A workflow can be of five types, and these pretty much cover real-world scenarios. Once you select a type, you get a form to fill in, and you are done. Workflows also allow you to add documents to them. Thus you can send a file for approval, review, and almost anything else you can think of. You can even start a workflow by selecting a document and then clicking the Start Workflow link.

As we have seen, Alfresco probably has all the features to make it an ideal enterprise content management system, and we should have no qualms in claiming it as a suitable open source alternative to SharePoint. I am sure that once you start exploring it, you will realise that whatever I have discussed in this article is just the tip of the iceberg; the best awaits you. I hope you find this article useful, and that it serves you in some way. Any questions are most welcome.

By: Nitish Kumar Tiwari

The author is based in Bengaluru, India, where he works as a software developer for a FOSS-based firm. He also works as a consultant for open source tools implementation for various companies. In his free time, he likes to dabble with open source software and share experiences with others. His other interests include movies and travelling. Contact him at

A Look at Mobile OS Security: Android vs iOS


When it comes to mobile operating systems, which one is the most secure? This article attempts to assess the security levels of Android and iOS, and arrive at which one fares better.

The war between mobile operating systems (MOS) started with the advent of smartphones and other mobile devices, and shows no signs of ending anytime soon. We have had MOSs going head-to-head in multiple fields: usability, extensions and apps, support, community, even eye candy and appearance! However, one persistent area of debate is mobile security. Before I begin, allow me to clarify one rather minor, yet oft-repeated issue: the "Wait for the next update, when (the name of the problem appears here) will be fixed" rhetoric has not been considered in this article. When you think of it, both Android and iOS were created with security in mind. While Android has the ever-powerful Linux base, Apple's iOS, too, is not a minnow when it comes to out-of-the-box security standards. Yes, Android's open source standards ensure that the MOS remains regularly updated and patched, but even Apple has not created a billion-dollar empire by shipping vulnerable OSs. To quote Symantec regarding the security of both Android and iOS, The ostensible goals of the creators of iOS and Android [were] to make the platforms inherently secure, rather than to force users to rely upon third-party security software. That statement comes from a company that earns its livelihood by providing

users with third-party security solutions for their devices. Naturally, this cannot be untrue! Still, both Android and iOS have faced their due share of criticism when it comes to security. For instance, Android is said to offer little or no protection against data integrity attacks, while iOS scores poorly when it comes to implementing permission-based access control. It is often claimed that iOS has a far better encryption mechanism than Android. Yet, such encryption does not mean that iOS is better than Android at preventing resource abuse. In fact, Android performs much better here, simply because it has the ability to run apps in isolation, thereby limiting the damage a malicious app can do. However, considering the fact that both Android and iOS have been designed for the end user, they have to make a trade-off: for the casual user, security measures of varying degrees are often abstracted away in order to create a more usable OS.

Some number-crunching

Both Android and iOS are extremely popular (oops, isn't that obvious?). Both operating systems currently have a huge number of applications for their users. Here is a rather simple table to help you assess the state of each operating system:


Category | Android | Apple iOS
Approx number of apps | 600,000+ | 500,000+
Hack attempts on Top 75 paid apps (%) | 96% | 92%
Hack attempts on Top 15 free apps (%) | 80% | 40%
Marketplace/App Store | Unregulated by Google | Strictly regulated by Apple
Malicious apps | Google Bouncer screens for such apps | Apple has a regulation mechanism in place, no specific software for malicious apps
Protection from Web-based attacks | Yes | Yes
Data encryption | Average | Excellent
User authentication for apps | Yes | Yes
Remote erase | Yes, but via third-party apps only | No
Permissions | Granted while downloading | Granted while using for the first time
Approx market share | 59% | 20%

The above numbers will give you a fair idea of the security settings on each operating system. Having done that, let us head to the practical aspects.
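To make the permissions row of the table concrete: on Android (as it worked at the time of writing, before runtime permissions arrived), everything an app may do is declared up front by the developer in its AndroidManifest.xml, and it is this list that the user reviews at install time. The fragment below is a hypothetical example, not taken from any real app:

```xml
<!-- Hypothetical AndroidManifest.xml excerpt; the package name is made up. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.stockticker">
    <!-- These are shown to the user before installation; the app cannot
         quietly request more later. -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
</manifest>
```

A user who wonders why a stock-ticker app needs READ_CONTACTS can simply refuse to install it.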

Some practical examples

As the above table already tells you, Android lets you review an app's permissions before you actually download it. Apple iOS, on the other hand, lets you do the same when you run the app for the first time. So, what if you accidentally download an app that is, well, evil? In iOS, you will have that app lying dormant until you actually run it, and as a result, that app will have all the time in the world to harm you. Android apps, on the other hand, are required to inform you about the permissions that they seek even before they are downloaded, and if you are not happy with them, you can simply choose not to install such apps in the first place. Don't trust me? Consider this: both Android and iOS users once encountered certain apps that asked for a lot of user data. Fifty-four per cent of Android users actually refused to even install those apps, whereas as many as 79 per cent of iOS users ended up installing them (only to uninstall them later, when they found out about the security settings upon first launch of the apps). What about the screening mechanism? Every iOS user tells you that the Apple App Store is more secure than Google Play, right? Here is another example: The Telegraph UK

once reported an incident related to an app named InstaStock created by Charlie Miller. The app served the purpose of providing users with stock prices, market trends and other business-related information. However, Miller later showed that the app could also access the user's photos and contacts, and even transmit them to a remote device, without informing the user! This app was initially approved by Apple and downloaded by a good million users; later, when Miller exhibited the flaw in his app, Apple pulled the app and terminated Miller's developer account. So, a developer shows the heart to challenge the security settings and break the myths related to iOS, and Apple banishes him! I wouldn't argue for or against what Miller did, but the moral of the story is: the Apple App Store is secure only on paper. So, is Android more secure? In one word: yes. How? Let me explain. It is true that Android faces more hack attempts and malicious attacks than iOS. Yet, that is because it is the more popular OS. Consider this: which has more viruses, Windows, Linux or BSD? The answer is obvious. However, unlike the others, Android is open to third-party management tools for security, and thus, with proper security settings, you can easily make your Android device bullet-proof. Before I conclude, I shall cite another case study. Apple iOS devices face fewer malicious attacks in terms of numbers, but whatever small share of attacks they do get are far more serious in nature than the ones against Android. As AdaptiveMobile once reported, iOS is the least secure platform when it comes to basic things such as sending an SMS. However, even Android is not a divine creation, and you need to exercise caution: download apps that are from trusted developers (the G+ ratings and community feedback are an excellent way to judge the credibility of a developer).
Furthermore, make sure you read the permission details carefully, and use mobile banking only via the official apps. Last but not the least, stay away from those cash prize links in emails and apps.

References
[1] The Telegraph UK: apple/8878408/Rogue-iPhone-app-banned-from-iTunes-store.html
[2] iphone-sms-spoofing-network-or-phone-issue

By: Sufyan bin Uzayr

The author is a freelance writer and artist based in India. He is associated with multiple magazines and websites, and takes a keen interest in open source software, Web CMS, digital art and mobile development. Sufyan has authored a book named Sufism: A Brief History, and is currently serving as the founder-editor of an e-journal named Brave New World ( You can visit his website at or catch him on Facebook at

Cloud Corner

Brokers and cartridges


The broker acts as the single point of contact for all things related to the application. The command-line tools and the Web console both connect to this broker with a REST-based API. Cartridges, on the other hand, are containers that run the architectural and framework components of an application and provide enhanced features for it. Cartridges can run on one or more gears, and different cartridges are available for components like PHP, JBoss, Ruby and MySQL. The two types of cartridges available are framework and embedded cartridges. Embedded cartridges are those that perform support functions like monitoring, database management, performance metrics, etc. If you are well acquainted with the platform, you can write your own cartridges to add custom features to your application.

Supported languages and frameworks

The supported languages and environments for OpenShift include Node.js, Ruby, Python, PHP, Perl and, surprisingly, Java. There are very few cloud application platforms that support such a large variety of environments, particularly Java, which is usually restricted to enterprise applications. OpenShift is actually the first such platform ever to support Java EE in its entirety. Among the frameworks it supports are Ruby on Rails, Django, CakePHP, CodeIgniter, etc. The database options include MongoDB, PostgreSQL and MySQL, currently available with phpMyAdmin and RockMongo as consoles; the databases can also be accessed via SSH.

Districts are sets of nodes that act like resource pools, allowing gears to be moved between the nodes within a district for load balancing, so that no single node gets overloaded. Districts are optional, but generally recommended for all applications that need to work in production environments. Developers should also be aware of other application components like the namespace, application name, aliases, the Git repository, etc. OpenShift also provides for manual and automatic scaling of Web applications, both of which allow you to add multiple instances to tackle increases in demand. While automatic scaling is easy and convenient, it takes an extra cartridge called HAProxy (High Availability Proxy), which monitors all incoming traffic and assigns an extra gear whenever there is a spurt in demand for the Web application.

Installing the client

The client tools can be installed on all OSs, including Windows, Mac OS X and Linux. Documentation is available for distributions like Ubuntu, Debian, Fedora, openSUSE and RHEL 6. The installation procedure is, however, similar across all platforms. Before proceeding to install, the required dependencies are Ruby, RubyGems and Git. On Ubuntu, for example, these can be satisfied with a sudo apt-get install ruby-full rubygems git-core command. Finally, the client can be installed with sudo gem install rhc. The software is installed by default in the /var/lib/gems/1.8/bin directory, which can be made universally accessible by adding it to the PATH variable and creating a symbolic link to it in the /usr/bin directory:
export PATH=/var/lib/gems/1.8/bin:$PATH
sudo ln -s /var/lib/gems/1.8/bin/rhc* /usr/bin/

To start configuring authentication, you can either generate SSH keys manually or preferably use the set-up wizard, which you can invoke with rhc setup; it will guide you through creating the SSH keys and will later ask you to upload the generated public key to the OpenShift server from the console.
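If you prefer generating the keys manually instead of running the wizard, a standard ssh-keygen invocation does the job. This is only a sketch of what the wizard automates; the file name openshift_demo_key is arbitrary, and the wizard uses its own defaults:

```shell
# Generate a passphrase-less RSA key pair, as the rhc setup wizard would.
KEYDIR="$HOME/.ssh"
mkdir -p "$KEYDIR"
ssh-keygen -t rsa -b 2048 -f "$KEYDIR/openshift_demo_key" -N "" -q
# The .pub file is what you would upload to the OpenShift server.
cat "$KEYDIR/openshift_demo_key.pub"
```

The private key stays on your machine; only the public half is uploaded from the console.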

Deploying applications from the control panel

Figure 1: Choosing the type of application for creation


Working on the Web console is pretty easy, and is just a matter of registering and working with graphical wizards that guide you through every step. It starts with registering your account and defining the namespace of the application. This is immediately followed by the Create Application wizard, which is a three-step process of choosing the application type, entering the configurations and confirming the creation of the application (Figures 1 and 2). OpenShift even provides for preconfigured applications like WordPress and Drupal, which require minimal effort on the part of the user. After creating the application, the options related to

scaling, building and cartridges can be accessed from the application console (Figure 3). The application created with this console can be cloned to the local working directory later, with the following command:
git clone ssh://
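From there, the develop-and-push loop is plain Git. Since the real remote requires an OpenShift account, the sketch below substitutes a local bare repository for the server-side repo; on a real app, the final git push is what triggers a rebuild and redeploy:

```shell
# Create a local bare repo as a stand-in for the OpenShift-hosted repository.
workdir=$(mktemp -d)
git init -q --bare "$workdir/remote.git"

# Clone it, just as you would clone the real app repo.
git clone -q "$workdir/remote.git" "$workdir/app" 2>/dev/null
cd "$workdir/app"
git config user.email "dev@example.com"   # hypothetical identity for this demo
git config user.name "Demo Developer"

# Make a change, commit, and push; OpenShift would redeploy at this point.
echo "<?php echo 'Hello, OpenShift'; ?>" > index.php
git add index.php
git commit -q -m "Add landing page"
git push -q origin HEAD
```

The only OpenShift-specific part of the real workflow is the remote URL; everything else is the ordinary Git cycle shown here.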


The free plan of OpenShift, known as FreeShift, allows a maximum of three small gears for such an application. It doesn't even require billing details to be stored, so you can easily decide when to upgrade, at your convenience.

Working with the command-line

To give you an idea of how the command-line works, here are a few pointers to help you get started. In this case, lets begin by creating a new domain (or alternatively, namespace):
rhc domain create -n [DomainName] -l [rhlogin] -p [password]

Figure 2: Choosing the configuration options for the new app

Then new applications can be created as follows:

rhc app create -a [AppName] -t [AppType]

Both manual and automatic scaling options are available. Automatic scaling can be enabled with the -s option while creating the application (adding it to the above command). To scale the application manually, an SSH connection to the application needs to be established, and changes can be made with the add-gear and remove-gear commands:
ssh [AppUUID]@[AppName]-[DomainName] add-gear -a [AppName] -u [AppUUID] -n [DomainName] remove-gear -a [AppName] -u [AppUUID] -n [DomainName]

Figure 3: The application management console

New cartridges can be also be added, if needed, as follows:

rhc app cartridge add -a AppName -c CartType

These commands help in easily configuring the application. Developers can make changes to the code, or deploy the application, with the Git commands. These are some of the commands generally used to set up and get started. There are a lot more options and arguments available, for which you can refer to the documentation. OpenShift can also be configured to work directly from Eclipse, with the help of an extension known as JBoss Tools, which can be installed directly from the Eclipse Marketplace. Then you will be able to create a new OpenShift application, and work with all the configuration details in a similar wizard-like graphical interface.

So OpenShift makes it very easy for a beginner to get started, with tools like preconfigured applications and the easy-to-use Web interface. Of course, it needs a lot more work when you dig down deep and start working with the code. The fact that it supports so many languages and frameworks at the same time, all with equal focus, does change things for the better.

[1] OpenShift User Guide: docs/en-US/OpenShift/2.0/html-single/User_Guide/index.html
[2] OpenShift Getting Started Guide: knowledge/docs/en-US/OpenShift/2.0/html/Getting_Started_Guide/index.html

By: Ankit Mathur

The author is a geek with a crush on Java, and also loves flirting with almost everything related to databases and Web technologies. Feel free to poke fun at his articles and direct your feedback at




Boost your MySQL Performance with Cloud Computing

Thousands of new online and mobile applications are launched every day, yet the success of these applications depends on uninterrupted access and availability. As mobile computing adoption grows, so does the amount of data, causing most successful companies to eventually hit a (database) wall. The result: poor performance and frustrated customers. The traditional solution in such scenarios would be either purchasing a proprietary database or investing heavily in new hardware. Most companies can't, and shouldn't have to, install new databases every time they grow. This is where the eNlight cloud offers a fitting solution. The eNlight cloud from ESDS eliminates complexity from organisational applications and database infrastructures by enabling a fast and efficient response to dynamic business requirements. The eNlight cloud provides real-time elasticity that increases database availability without requiring any changes to the existing infrastructure.

Making LAMP a Cloud

MySQL doesn't scale on its own. MySQL needs four fundamental resources to do its work: CPU cycles, memory, I/O and the network. Installing MySQL on a cloud-hosted server isn't the only way of using MySQL in the cloud. It is more important to satisfy the resource needs of the database server.

Advantages of scaling up or down

It's simple: this type of scaling is usually fairly simple to deploy, and is supported by most of the databases used today (so that when more processors and cores are identified, the database software is able to take advantage of the additional resources). It requires no code changes to the application, which still runs against a single (larger) machine, and it scales both throughput and capacity.
Another way to optimise the database is sharding on the cloud. This can be done by selecting a key in the data, splitting the data by hashing that key, and having some distribution logic. It can also be done by identifying the application's needs and setting up different tables or different data sets in different databases.

Benefits with the cloud

This approach scales both throughput and capacity, and it scales both reads and writes. Scaling is done online, with no downtime or partitioning event. New resources are added or removed automatically, on the fly, in a way that's transparent to the application. It scales beyond the limitations of any single machine or node, and no application change is needed.

Use Your Memory

RAID 10 mirroring and striping is possible, with as many disks as you can fit in your server or RAID cabinet. A database does a lot of disk I/O even if you have enough memory to hold the entire database. Why? Sorting requires rearranging rows, as do GROUP BY, joins, and so forth. Plus, the transaction log is disk I/O as well!

Just-in-Time Resources!

eNlight is a one-of-its-kind solution and the first in the industry to provide automatically scalable services. eNlight, with its iNtelligent technology, senses the need for additional resources, and also when to withdraw them, thus ensuring top-notch performance at optimal prices. One does not have to worry about resource allocation and server upgrade/downgrade cycles, as eNlight takes care of all these factors and adjusts the resources in real time, automatically, without a reboot or even a pause in the server. eNlight is the solution to resource-related problems faced on servers. The resources are monitored continuously, so whenever a load is detected on the virtual server, the cloud allocates more resources to that server. The main advantage of this feature is that the quantum of resources stays proportional to the user's requirements. Users do not have to invest in procuring the right amount of hardware, thus saving almost 60 per cent of their hardware costs.

Fault tolerant and redundant

eNlight, the latest and most prominent entrant in the cloud hosting market, addresses all your concerns around optimising operating costs, managing perfect resource utilisation, issues involved in the upgrade or downgrade of hardware, coverage in case of disaster, ease of provisioning and management, ease of back-ups, data security, etc. eNlight also ensures that you are doing your bit in maintaining a balance between taking care of your IT infrastructure as well as the earth. All the ever-changing demands of IT infrastructure are taken care of by eNlight, allowing you to focus on your business and grow your revenues, with eNlight lending a helping hand in saving you some expenses, resulting in an indirect addition to your revenues! There is no such thing as better hardware; rather, the hardware that serves the application needs to be better. Database operations such as back-ups and large-scale queries are now much more efficient, thanks to the eNlight cloud, which has the ability to handle increasing IOPS.

For more information, please visit : or call us at 1800 209 3006
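The hash-based sharding approach mentioned in this article (splitting data by hashing a key) can be sketched in a few lines of shell. The function name and shard count below are illustrative only, not part of any eNlight or MySQL tooling:

```shell
# Minimal sketch of hash-based shard selection. pick_shard maps a record
# key to one of N database shards; cksum's CRC is deterministic, so the
# same key always lands on the same shard.
pick_shard() {
    key=$1
    shards=$2
    crc=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
    echo $(( crc % shards ))
}

# Route two hypothetical customer records across four shards.
pick_shard "customer-42" 4
pick_shard "customer-97" 4
```

Real distribution logic would sit in the application's data access layer, but the core idea is exactly this: a stable hash of the key decides which database receives the row.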



A List Of Network Storage Solutions Providers

Dell | Bengaluru, India
Dell helps customers tackle challenges like handling data volume growth and managing IT budgets by standardising IT infrastructures, simplifying operations and automating processes. Dell Compellent, Dell EqualLogic and Dell PowerVault are the storage solutions offered by the company, along with other Dell storage tools and utilities that can help organisations of any size derive the full value of their IT investments while protecting vital enterprise data, ensuring high business continuity, and maintaining high flexibility and scalability. Dell PowerEdge servers running Linux, compared to proprietary systems, can deliver lower TCO, increased flexibility and reduced complexity.


Digital Track Solutions | Chennai, India

The company offers a myriad of network storage solutions like NAS, SAN and Unified Storage. The company is also into network storage consultation, proposing suitable storage and back-up solutions.

HP | Bengaluru, India

HP offers a full range of NAS solutions, including the X1000 G2 Network Storage Systems for small and medium IT environments, the X3000 G2 Network Storage Systems that add IP-based services to arrays and SANs, the X5000 G2 Network Storage Systems for mid-sized companies with Windows-centric environments, and the X9000 Network Storage System scale-out solutions for enterprises. The NAS appliances are simple, cost-effective and offer a convenient way to back up Windows, UNIX, Linux or Mac clients and/or servers.

IBM | Bengaluru, India

IBM network attached storage (NAS) products provide a wide range of network attachment capabilities to a broad range of host and client systems. IBM Scale Out Network Attached Storage (SONAS) is designed to embrace and deliver cloud storage in the petabyte age. SONAS can meet today's storage challenges with quick and cost-effective IT-enabled business enhancements designed to grow at an unprecedented scale.

iValue InfoSolutions | Bengaluru, India

A premium technology enabler, iValue InfoSolutions drives the adoption of cutting-edge technology solutions and offers services throughout the entire lifecycle of IT security, storage and managed enterprise wireless solutions. Storage offerings include SAN IP/FC, NAS, VTL, storage servers, virtualisation gateways and cloud storage nodes. With a mission to empower organisations to effectively manage their digital assets, which are critical business differentiators, and to protect and grow their business, iValue offers solutions and services that are aligned and optimised for its customers' business needs through its partnerships.

Mindlance | Bengaluru, India
Mindlance is a global business and information technology consulting firm. It works on enterprise and collaborative network storage solutions. Its offerings extend to NAS and SAN with smart data management capabilities. Mindlance also offers managed storage services to its clients.
Role of open source in its offerings: The company has recently scaled up its open source capabilities in order to provide more bundled offerings to clients. Mindlance believes in creating customised solutions for customers, and in helping them achieve ROI and business enablement.
Leading clients: One of the leading automobile manufacturers, a leading power management firm, a leading local business search firm, and more.
USP: Proactive and consultative, it has a reputation for innovative solutions, clubbed with state-of-the-art services. The company delivers value through a combination of the right people, processes, technologies and program management solutions. Its proven approach includes applying domain expertise in specific industry segments, utilising a highly skilled, technically-trained workforce, leveraging a proven global delivery model that offers onsite, offsite, offshore and hybrid development options, and implementing quality processes and methodologies while staying cost-effective.
Special mention: Mindlance has won a prestigious award from STPI for the highest exports in the south, and has been named by Silicon India as one of the Top 10 Promising IMS Companies in India. Mindlance has also bagged two EDGE Awards instituted by InformationWeek for its Unified Communications and Cloud Implementation projects.
Website:


NetApp | Bengaluru, India

The NetApp portfolio of NAS solutions enables flexible provisioning and reliable back-up, archiving, business continuity, and virtualisation. Whether your business uses NFS or CIFS protocols, NetApp NAS devices help cut costs by up to 40 per cent, with consolidation, higher utilisation, and reduced downtime. They also help reduce administrative overhead by up to 60 per cent, while delivering scalable, reliable file storage.

Sify | Chennai, India

The company offers a unified SAN/NAS fabric for IOPS-intensive applications. Sify provides its myStorage services as laptop and desktop backup services on the cloud.
Role of open source in its offerings: The orchestration engine uses multiple open source technologies, one of them being Drupal.
Leading clients: Over 150 enterprise customers from industries like retail and banking use its storage services. Its customers run multi-TB storage for their business-critical applications, thanks to Sify's NAS services.
USP: Supports all kinds of use cases: block storage, file storage and object storage. The company also provides enterprise-class scalable platforms.
Website:

104 | JANUARY 2013 | OPEN SOURCE FOR YOU

For U & Me

"We wanted to leverage the best-in-class solutions available, and so we chose open source"
Here's a business built on open source technologies: a conscious decision that its founders took based on the security, versatility, affordability and strong community support that come with open source solutions.
Case Study

While there are still organisations that are wary of adopting open source technology, firms like LifeSize take pride in using open source technology to build their products. LifeSize provides high-definition video collaboration services in the country. Designed to make video conferencing universal, the complete range of products from LifeSize is based on open standards-based systems that aim to offer enterprise-class, IT-friendly technologies. The company was founded in 2003 and acquired by Logitech in 2009. It takes a platform-plus-application approach to video conferencing.

Commenting on the innovations it has achieved using open source technology, Raghu Belur, senior director, LifeSize, says, "Our focus is a video-conferencing infrastructure solution. In that solution, we have a platform which sits on an open source set of technologies. The platform provides the common facilities required for multiple applications. For example, every application requires a Web server, an administrative console, user authentication, logging, reporting and a clear picture of the ROI generated from the product. All these aspects are built into the product that we call the UVC platform. This sits on top of a version of Ubuntu Linux, which is the basis of our product. On top of the UVC platform, we have about half a dozen different applications that are targeted at specific problems that our customers may want to solve. One of the problems is multi-way calling that involves more than two people in a video-conference call, for which we have an application. Then there is an application for video recording and streaming, which means a person in the video-conference call can record it and stream it, so that it can be used by others later, or for training purposes."

LifeSize uses open source technologies to build its products; the company uses Linux as the basis of its video-conferencing solutions. Belur and his team did a lot of research and evaluated different technologies before zeroing in on open source. He explains, "Our journey began with a search for the appropriate technology. We were looking for a set of technologies that provided us the ability to develop rapidly, so that we could focus on the things that are important to our business and not on the basic stuff like the standard operating system, security, Web development, etc. We wanted to leverage the best-in-class solutions available. The obvious choice was open source technology, which offered us a mature, secure and proven starting point with a fairly good community of people who had been trying it out. So, in all probability, if we have a problem or a question, somebody else in the community may have faced it earlier and could suggest a solution. It is a solid base to start from. It allows us to focus on the things that we do uniquely, and gives us access to a community of people that we can rely on to help us if we need it."
Security was never a concern

Many people worry about adopting open source technology or solutions because they believe security is a major concern. But for Belur, the security of open source technology came as its biggest advantage. He says, "Security is a huge concern for everyone. It is probably implicit in the choice of open source. Open source technologies are scrutinised by the best and the brightest in the world; therefore, a lot of security issues get addressed. We almost take that for granted, confident that our choice of open source allows us to say that we have an extremely solid and secure foundation. Our solutions are deployed by people on hardware in a network that is facing the Internet. So, they are subject to attack from a lot of different people."

Belur adds, "For example, a mail server may be deployed inside my lab, so it won't be attacked by a lot of people. In the end, it is within the boundaries of my firewall. Whereas many of the products that we develop are deployed in a way that they are accessible on the Internet, so security is extremely important in such cases. I think open source technologies again come with an extremely high assurance of security. It doesn't mean there are no security problems, but they are found and fixed very rapidly. Users of open source get that advantage for sure."

Belur opines that it is courtesy of the 'openness' of Linux-based technologies that security is not a prime concern. He points out, "The very model of open source is helpful in resolving security issues. With the source code being accessible to all, people can thoroughly validate whether the solution is secure, because you can push your system to its limits, see what vulnerabilities are there and fix them, as opposed to buying Web server technology from Microsoft or any other proprietary software company, where you don't have access to the source code. In such cases, it is hard to identify the vulnerabilities, and one has to wait for the vendor to acknowledge the issue, fix it and deliver a patch, depending on how high-profile the issue is. The model of open source, on the other hand, lends itself to high security."

The team of engineers not only develops the company's solutions on open source technology, but also uses Ubuntu Linux as its primary workstation. The reason is simple: ease of development. Belur explains, "We install Ubuntu Linux on our PCs simply because we believe that since our products are deployed on that platform, it would be easy for us to develop on the same platform."

Apart from that, open source technology is used to test the solutions as well. Some of the video-conferencing solutions that require the participation of hundreds of people can be tested with just a few mouse-clicks, thanks to open source technologies. Belur explains, "We also do a lot of internal testing. For instance, if we claim that over 1,000 employees can watch a video-conference in which a company's chief makes an important announcement, we need to validate those claims. Yet, it's impossible to have 1,000 people involved while testing such claims. So we use a lot of open source technologies that allow us to create the required number of virtual users. This makes the server feel that there are 1,000 users logged in, but that's not the case in reality. We use a lot of open source technologies in our QA processes as well. There are many common problems that software companies encounter. Open source technology is one platform on which people create solutions for these problems, and they can be accessed by one and all. Sometimes, we use them to create our products, and at other times we use them as productivity tools to test our products," he adds.

Some tips and tricks

Resorting to open source technologies to get the best solution and cut costs seems a great idea, but one has to be really cautious and choosy when settling on an open source technology. Belur shares some tips from his experience: "I think one has to be very careful while choosing the technology. There are many open source projects. Some of them have a good community behind them and a good development roadmap ahead of them. It means that these projects will continue to get enhanced and evolve over a period of time. So if you pick such software as the building blocks of your solution, you can get huge productivity gains. But sometimes, you may end up choosing the wrong open source component: a dead-end project with not many people working on it. In projects that do not have much of a future, you tend to get stuck in the later stages. Such projects do not have an ongoing development roadmap because they do not have a community around them. Such projects should be avoided. You just need to pick the winners.

"There can be times when you get stuck in a strange jam, with a problem that no one has ever encountered. Sometimes you have to spend hours solving the problem yourself because you are using the open source component in a way that no one else ever has. It means that you are open to the risk of having to go to the source of the project and try to solve a particularly strange problem. One should be mentally prepared for such issues.

"Last of all, you might have customers that say, 'Can you produce a security vulnerability report from a third-party agency?' Fortunately, our customers are not like that, but if you are ever in such a situation, there are security agencies that do vulnerability and security analysis of products. They issue certificates for the tested products as well. Most companies that make solutions based on open source technology do not have such certifications available. Having certifications helps firms sell their solutions, particularly to enterprises. If there are issues in getting the certification, then you will have to figure out who will fix them."

By: Diksha P Gupta
The author is assistant editor at EFY.






Auto mounting a partition in Linux

The file that contains the data regarding the devices to be mounted at start-up is /etc/fstab. To automatically mount a partition, follow the steps given below. First, create the directory in which your partition will be mounted (one directory per partition). I created the directory in /media. This directory is known as the mount point for the partition. To create the mount point, open up the terminal and type the following command:

sudo mkdir location_of_dir/name_of_dir

...or you can use Nautilus, the file manager, to create a folder. If the directory is created in a location in which you need root privileges, use sudo. After creating the mount point, modify /etc/fstab as per your requirements. It is always advisable to create a backup of the /etc/fstab file before making any changes, because any error in that file can prevent your OS from booting. Now, make changes in fstab to auto mount the HDD partition:

sudo gedit /etc/fstab

Open /etc/fstab with a text editor of your choice, with root privileges. In this file, add the details in the same order as done for the existing partitions. The order should be as follows: the device name, the default mount point, the file system type, mount options, dump, and the fsck option. The device name is the name of the HDD partition (such as /dev/sda5); the mount point is the full path of the directory where the partition is to be mounted. The file system type is the type of file system, like ext4, fat, ntfs, etc. Mount options are normally given as 'defaults', while the dump and fsck options are given as 0. I had a partition /dev/sda5 and I created the directory /media/mydisk. My partition was of type ext4, so to my /etc/fstab, I added the following line:

/dev/sda5 /media/mydisk ext4 defaults 0 0

Save the file and in the command prompt, type the following:

sudo mount -a

Now, the partition will be automatically mounted on every reboot.

Vineeth Kartha,

Creating a virtual file system

Here is a simple tip that allows you to create a virtual file system and mount it with a loopback device.
STEP 1: First, create a file of 10 MB using the following command:

$ dd if=/dev/zero of=/tmp/disk-image count=20480

By default, dd uses a block size of 512 bytes, so the size will be 20480*512 bytes.
STEP 2: Now create the file system, as ext2 or ext3. Here, in the following example, let's use ext3 as the file system:

$ mkfs -t ext3 -q /tmp/disk-image

You can even use ReiserFS as the file system type, but you'll need to create a bigger disk image, something like what's shown below:

$ dd if=/dev/zero of=/tmp/disk-image count=50480
$ mkfs -t reiserfs -q /tmp/disk-image

STEP 3: As the final step, create a mount point and mount the file system:

$ mkdir /virtual-fs
$ mount -o loop=/dev/loop0 /tmp/disk-image /virtual-fs

Note: If you want to mount multiple devices, you will have to increase the loop count, as mentioned below:

loop=/dev/loop1, loop=/dev/loop2, ... loop=/dev/loopn

After you complete the above steps, you can use it as a virtual file system. You can even add this to /etc/fstab to mount the virtual file system whenever your computer is rebooted. Open /etc/fstab in a text editor and add the following:

/tmp/disk-image /virtual-fs ext3 rw,loop=/dev/loop0 0 0

Aarsh S Talati,

Identify your current shell name

You can identify your current shell name by using the following commands:

[narendra@CentOS]$ echo $SHELL
/bin/bash

The SHELL environment variable stores the name of the current shell. You can also use the command given below to get the shell name:

[narendra@CentOS]$ echo $0
bash

$0 will print the name of the program; here, the program name is the current shell.

Narendra Kangralkar,

Rev up!

As *nix sysadmins, we need to do a whole bunch of text-based data processing, either in files or data streams. Here is a shell command called rev that I came across and wanted to share with you geeks, because I really liked it and found it useful. The rev utility reverses the order of characters in every line; in short, it creates a mirror image. The most common use of rev is to reverse the line, extract a particular string, and then pipe through rev a second time to restore the original. So, if I want to get the year mentioned at the end of a string, this is what I will do:

$ cat fileinfo.txt
Last Changed Date: 2011-08-11 18:10:08 -0500 Thu, 11 Aug 2011
$ cat fileinfo.txt | rev
1102 guA 11 ,uhT 0050- 80:01:81 11-80-1102 :etaD degnahC tsaL
$ cat fileinfo.txt | rev | awk '{print $1}'
1102
$ cat fileinfo.txt | rev | awk '{print $1}' | rev
2011

Voila! Got the year! This is just one workaround, out of the many ways of doing this same task. After all, it doesn't hurt to learn something new.

Ram Iyer,

Scan open ports

The command given below will scan all the open TCP ports on the loopback interface:

nmap -sS -O

To scan open UDP ports in the system, use the command given below:

nmap -sU -O

In general, you can use the following:

nmap -sS -O

Share Your Linux Recipes!

The joy of using Linux is in finding ways to get around problems: take them head on, defeat them! We invite you to share your tips and tricks with us for publication in OSFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at The sender of each published tip will get a T-shirt.
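The six-field fstab format described in the auto-mount tip can be sanity-checked from the shell before you reboot. Here is a minimal sketch using the example /dev/sda5 entry from the tip (the device and mount point are illustrative, not real):

```shell
# The six fstab fields, in order: device, mount point, file system
# type, mount options, dump flag and fsck pass number.
line='/dev/sda5 /media/mydisk ext4 defaults 0 0'

# awk splits on whitespace, so each fstab field maps to one column.
echo "$line" | awk '{printf "device=%s mount=%s type=%s options=%s dump=%s pass=%s\n", $1, $2, $3, $4, $5, $6}'
# → device=/dev/sda5 mount=/media/mydisk type=ext4 options=defaults dump=0 pass=0
```

The stricter check is `sudo mount -a` itself, as the tip suggests: it reports any entry it cannot parse or mount, without waiting for a reboot to find out.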
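The dd command in the virtual file system tip relies on dd's default block size of 512 bytes, so 20480 blocks come to exactly 10 MB. A quick sketch to confirm the arithmetic, without touching mkfs or mount (the /tmp path follows the tip; `stat -c` assumes GNU coreutils):

```shell
# 20480 blocks x 512 bytes per block = 10485760 bytes (10 MB)
dd if=/dev/zero of=/tmp/disk-image count=20480 2>/dev/null

# Print the resulting file size in bytes (GNU stat syntax)
stat -c %s /tmp/disk-image
# → 10485760
```

An equivalent way to state the size explicitly is `dd if=/dev/zero of=/tmp/disk-image bs=1M count=10`, which avoids depending on the default block size at all.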
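The rev trick above collapses into a single pipeline once you know the field you want is the last one on the line. A sketch using the same sample line as the tip (echo stands in for fileinfo.txt, so no file is needed):

```shell
# Reverse the line, grab the (now first) field, then reverse it back.
echo "Last Changed Date: 2011-08-11 18:10:08 -0500 Thu, 11 Aug 2011" \
  | rev | awk '{print $1}' | rev
# → 2011
```

With awk alone you could also print the last field directly with `awk '{print $NF}'`; the rev version shines when the extraction is easier to express reading from the end of the line.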

Multi-Master Replication
With xDB Replication Server

Designed for applications in need of write availability and read scalability.

Feature Highlights:
Deploy multiple masters across geographies
Continuously synchronize master data
Uniqueness, Update & Delete conflict detection
Multiple conflict resolution strategies
Replicate one or more tables
Automatic schema replication
Publication table DDL replication
Graphical Replication Console and CLI
Flexible replication scheduler
Replication History Viewer
Snapshot and continuous modes

Benefit Highlights:
Updates can be faster to a local master in a distributed configuration
If one master fails, other masters continue service
Synchronize masters between geographies in near real-time
Conflict resolution options add flexibility to data management
Easy GUI administration
A more flexible and available database infrastructure to serve users
Hurry! Offer expires September 30, 2012

Software and Documentation Download

Want to try multi-master replication for yourself? To download the easy installer, visit:
Not quite ready to try, but want to read more? Check out the documentation at the same link above.

Contact us today about:
Software Subscriptions
Technical Support 24x7x365
Migration Assessments
Training for Administrators and Developers
Professional Services
Call: +1 781-357-3390 or 1-877-377-4352 (US Only), Email:

Avail free cloud credit worth ` 25,000*, visit for more details

EnterpriseDB Software India Private Limited
Unit # 3, Ground Floor, Godrej Castlemaine, Sassoon Road, Pune 411001
T +91 20 3058 9500 | F +91 20 3058 9502
Test, develop and deploy your application on VMware vCloud powered cloud