

EnciphEr 2008

Green Hues: A Fortnightly Newsletter Exploring Creativity
Green Club of Thoughts, Kathmandu University
Send your articles to publish in Green Hues. You are always welcome to join the club.
Info.gct@gmail.com, ku.edu.np/greenclub

Editorial
EnciphEr is not just bitter black coffee and edited technical articles. It is the result of hard work, sweat, and the pressure of working together as a team to collect relevant information and create a unique design. The sole aim, however, has always been to convey newer ideas and knowledge about the changing technology.
The students of Electrical and Electronics Engineering, Kathmandu University, have always tried to learn technology and implement it for the benefit of all. The students' society SEEE has created this platform where students can practice technology as an extracurricular activity. Students and faculty alike have responded well in the making of Encipher 2008. They breathe life into this issue by providing us with more than just articles. Their reviews, research, and messages have made this a plentiful resource. We are greatly thankful to all the students, faculty and non-teaching staff who have helped create another landmark in the form of Encipher. We also express our gratitude to the business houses for their support. This magazine is also uploaded to the website www.ku.edu.np/ee/seee. By logging on, you can give us feedback. You can also blog on articles in this magazine. Your comments will be mailed to the respective writers. Your valued suggestions are always welcome.
- Encipher 2008 Team

EnciphEr 2008
ADVISOR: Bhupendra B. Chhetri, Ph.D.
EDITOR-IN-CHIEF: Gaurab Karki
EDITOR: Sumit Karki
COPY EDITORS: Aastha Shiwakoti, Binita Shrestha, Jeevan Shrestha, Rosha Pokharel
GRAPHICS AND DESIGN: Vivek Bhandari
MARKETING EXECUTIVES: Ankit Jajodia, Shashi Shah, Sunil Baniya, Sushil Khadka
PRESS MANAGEMENT: Jayram Karkee, Shyam Bohara
SEEE MODERATOR: Bishal Silwal
VOL 08


TABLE OF CONTENTS

MESSAGES
5  Head of the Department
6  SEEE President

RESEARCH
7  Micro Grid Systems
10 Materials Science
14 Simulation of Trunked N/W
20 AC Transmission Systems
34 Upper Tamakoshi - Introduction
35 Reducing Broadcast Delay
37 Memristors
41 MIMO Wireless Communication
56 Is 8051 the Only Solution?

REVIEW
37 LED Based Lighting System
38 Power Line Communication

INTERVIEW
44 Mr. Krishna Gurung: "Our students are second to none when they graduate."

ENERGY
11 Renewable Energy
12 Alternatives - Fossil Fuels

ESSAY
51 Major or Micro Hydro?
52 Digital Divide

PROJECTS
17 PLC - Voice over Phone Line
18 Landline Messaging System

TECHNOLOGY
19 HVDC Transmission
23 Nanoscience
26 Biometrics
33 Reactor Powered Craft
46 Technology Tidbits

INFO
9  Introduction to MATLAB
43 History of Electricity
49 Electrical Safety
53 ADSL Technology
30 LED Lighting: Current Status


Message

From the Head of the Department

A popular quote from the most renowned scientist of all time, Albert Einstein, is "Imagination is more important than knowledge." Thinking in line with the quote, we may find the answer to the most important question of our life: why do we aspire to education or learning? Well, the purpose can be simplified to "to be able to imagine" - in other words, to be able to imagine things and situations that we previously were unable to. Imagination can encompass every aspect. Imagination brings creativity. Imagination brings innovation. Imagination brings good and bad, sadness and happiness; virtually all the things that we experience or have to live with. Encipher 2008 may be seen as a result of the imagination of our students, and it shall reflect the improvement in their ability to imagine as they progressed in their education at Kathmandu University. Best wishes.

Bhupendra Bimal Chhetri, Ph.D.
Head, Department of Electrical and Electronic Engineering
School of Engineering, Kathmandu University


Message

From the President

LifE, as we started it, is a never-ending journey; its starting point and end point are beyond our control, but the path we chose for this journey was the same and was somewhat under our control. As a result, we are all together in this rendezvous popularly known as Kathmandu University, Department of Electrical and Electronic Engineering. Our time here is an investment in converting us from general students into a group of technical manpower that has an immense responsibility to change society. It is not only the theories and practicals we learn that count; the group activities, social involvements and real life experiences we go through during these years are the backbone for the years to come. Involvement in such activities not only broadens our horizons but also takes us to the zenith of confidence and fosters career development. Thus involvement in such activities should be an integral part of our education and student life. The publication of a magazine and other activities requires a great deal of teamwork and cooperation. The publication of Encipher 2008 would not have been completed without the teamwork, great devotion and hard work of the editorial board. My congratulations to the devoted and hardworking team of Encipher 2008 and best wishes for their future endeavors. I would personally like to thank all the people who provided support and cooperation to the Editorial Board of Encipher 2008 and SEEE. My special thanks also go to the business organizations and individuals supporting Encipher 2008, and we hope for your valuable feedback and support in the years to come.

Bijaya Ghimire
President, Society of Electrical and Electronics Engineers (SEEE)
Kathmandu University


MICRO GRID SYSTEMS

Mr. Brijesh Adhikary

A MICRO GRID is defined as the interconnection of small, modular generation sources to low voltage distribution systems, forming a new type of power system. Typical modular sources are diesel generator sets, small hydropower, photovoltaics, wind turbines, fuel cells etc. Micro grids can be connected to the main power network or operated autonomously when isolated from the main grid; that is, a micro grid can operate in either an autonomous or a non-autonomous mode. In non-autonomous mode, micro grids are interconnected to the utility system or main network. Micro grids are mostly located near the users' sites. Depending on the primary energy source used, micro grid systems are considered non-controllable, partially controllable or controllable. Usually the micro sources used for micro grids are rated at less than 100 kW and have electronic interfaces. For proper operation of micro grids to customers' expectations, three critical components play major roles: [2,4] (i) the local micro source controller, (ii) the system optimizer, and (iii) distribution protection.

The micro source controller is an important component of the micro grid system. It provides basic control of real and reactive power, and voltage regulation, through voltage droop and frequency droop for power sharing. The micro sources are either DC sources, such as fuel cells, photovoltaics and battery storage, or AC sources, such as micro turbines and diesel/gas turbines. All the sources are connected to the point of common coupling (PCC), after DC voltage sources are converted to acceptable AC sources using a voltage source inverter. Figure 1 shows a micro grid system with various micro sources (wind turbine with induction generator, battery bank with rotary converter, diesel generator, PV array with inverter), the PCC, controllable (dump load) and uncontrollable (village load) loads, and power factor correction (reactive compensator).

(Fig 1: Small Hybrid Micro Grid System)

Another important component in a micro grid system is system optimization, which is provided by the energy manager. The energy manager uses information on local load needs, power quality requirements, demand side management requests etc. to determine the amount of power that the micro grid should draw from the main grid. Based on this information it provides the individual power and voltage set points for each micro source controller, and it provides the logic and control for islanding and reconnecting the micro grid during events. Protection is necessary in the event of a fault in the micro grid system. If the fault occurs in the main grid, the micro grid should be isolated from the main grid as rapidly as possible. If the fault occurs within the micro grid system, the protection coordination should isolate the smallest possible section of the feeder to eliminate the fault.

LOAD CONTROLLER AND REACTIVE VAR COMPENSATOR: Electricity generated from micro hydro or wind turbine units comes from induction generators. In the case of micro hydro this can be seen as a constant power source, whereas the power generated by a wind turbine cannot be determined due to unpredictable weather. Induction generators have many advantages over synchronous generators, such as ruggedness, lower maintenance requirements, absence of DC excitation and inherent short circuit protection, when working as stand-alone low cost energy conversion schemes. On the other hand, an induction generator has some drawbacks, as it requires reactive power to improve the voltage regulation. In addition, an induction generator used in a micro hydel scheme converts all available mechanical energy into electrical energy so as to eliminate the controller for the turbine. A controller on the turbine side is avoided to make the system simple and cost effective. Thus, whenever the consumer load is reduced, the surplus energy generated by the induction generator must be dumped somewhere else so as to maintain constant voltage and frequency in the system. These drawbacks can be overcome by using an electronic load controller and VAR compensator. [1]
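The frequency-droop power sharing mentioned above can be illustrated numerically. Below is a minimal MATLAB sketch assuming a simple linear droop law; the droop gains, set points and load are hypothetical values chosen for illustration, not data from the article:

    % Frequency-droop power sharing between two micro sources (sketch).
    % Each source follows P_i = P0_i + (f0 - f)/R_i; in an islanded
    % micro grid all sources see one frequency f, fixed by power balance.
    f0    = 50;            % nominal frequency (Hz)
    R     = [0.05; 0.10];  % droop constants, Hz per kW (hypothetical)
    P0    = [20; 30];      % dispatch set points (kW), hypothetical
    Pload = 70;            % total island load (kW), hypothetical

    f = f0 - (Pload - sum(P0)) / sum(1./R);  % common operating frequency
    P = P0 + (f0 - f)./R;                    % share taken by each source
    fprintf('f = %.3f Hz, P1 = %.1f kW, P2 = %.1f kW\n', f, P(1), P(2));

The source with the smaller droop constant picks up the larger share of any load increase, which is the mechanism the micro source controller exploits.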
In rural areas of Nepal, the average cost of grid extension per km is between $8,000 and $10,000, rising to around $22,000 in difficult terrain. - World Bank/UNDP Study


When several induction generators are connected in a micro grid system, the reactive power compensation required by these generators becomes more significant and complex. A fixed capacitor compensation technique may not be effective. A synchronous condenser or Static Synchronous Compensator (STATCOM) may be used in this situation. Consumers' loads fluctuate and vary continuously. Load variation may be due to events such as the start-up of a large heating load, the start-up of an induction motor, loss of load etc. This type of load can be termed uncontrolled load. A change in load directly affects the system voltage and frequency. A load controller is required to compensate for the load variation, and a VAR compensator is required to maintain the system voltage within the specified limit. Whenever the induction generators have surplus electrical energy, it must be dumped somewhere else to keep the input essentially constant. This can be achieved by using a load controller, which diverts the surplus energy to a dump load (controlled load) placed in parallel with the uncontrolled load.

MICRO GRID SYSTEMS IN THE CONTEXT OF NEPAL: Nepal, being a country with lots of hills and mountains, has tremendous potential for hydropower. In Nepal there are nearly 6,000 small and large rivers with an approximate total length of 45,000 km. [3] Nepal has already realized that hydropower is a sector which can boost the economy of the country. Theoretically 83 GW of hydroelectricity can be generated in Nepal, of which 44 GW is economically feasible. In some places, a single generating plant can produce up to 10 GW of electricity. Development of large hydropower plants is not the only solution in Nepal. Due to topographical constraints, large interconnected grids in every part of the country are not economically feasible. In those areas electricity can be generated from various sources such as micro hydro, wind turbines, fuel cells, photovoltaics, biogas and diesel/gas generators. There are a number of possibilities and technical solutions available regarding power conversion, control and integration of the energy sources, as well as distribution and end use of energy. Transmitting electricity from the main grid to remote areas is difficult because of the difficult topographical structure. Therefore the micro grid concept plays an important role in remote areas. The load controller and VAR compensator design will be well suited to the Nepalese micro hydro system.

REFERENCES:
[1] S.S. Murthy, R. Jose, B. Singh, "Experience in the development of microhydel grid independent power generation scheme using induction generators for Indian conditions," TENCON '98: IEEE Region 10 International Conference on Global Connectivity in Energy, Computer, Communication and Control, vol. 2, 17-19 Dec. 1998, pp. 461-465.
[2] R.H. Lasseter, "MicroGrids," 2002 IEEE Power Engineering Society Winter Meeting Conference Proceedings, New York, NY, vol. 1, pp. 305-308.
[3] H.B. Jha, "Sustainable Development of Small Hydropower in Nepal," Centre for Economic and Technical Studies and Friedrich Ebert Stiftung, 1995.
[4] J.A. Peças Lopes, "Management of MicroGrids," ENK5-CT-2002-00610.

Mr. Adhikary is Assistant Professor at the Department of Electrical and Electronic Engineering, Kathmandu University.

AROUND THE WORLD

Coconut Oil Powers Island

People on the island of Bougainville in Papua New Guinea have found their own solution to high energy prices - the humble coconut. They are developing mini-refineries that produce a coconut oil that can replace diesel. From police officers to priests, the locals are powering up their vehicles and generators with coco-fuel. Inquiries about the coconut power have come in from overseas, including Iran and Europe. For years, the people of Bougainville have been dependent on expensive fuel imported onto the island. Shortages have often caused many businesses in this part of Papua New Guinea to grind to a halt. High energy costs have not helped either. Increasingly, locals are turning to a cheaper and far more sustainable alternative to diesel. Coconut oil is being produced at a growing number of backyard refineries. Matthias Horn, a German migrant and an engineer, operates one such refinery. "The coconut tree is a beautiful tree. Doesn't it sound good if you really run your car on something which falls off a tree? And that's the good thing about it: you run your car and it smells nice and it's environmentally friendly, and that's the main thing." Mr. Horn said his work had attracted interest from Iran. The island endured years of civil unrest during the 1990s. Dwindling supplies of diesel forced islanders to look for alternatives, and the coconut was chosen. - BBC News

It was in London in 1882 that the Edison Company first produced electricity centrally, delivered via a distribution network or grid.


AN INTRODUCTION TO MATLAB
Abarodh Koirala (Batch 2005, C)

IT WAS during my second year classes that I got the opportunity to acquaint myself with MATLAB. In engineering, and especially in Electrical and Electronics Engineering, MATLAB is the most useful workspace.

MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. It is a high performance language for technical computing. It integrates computation, visualization and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Using MATLAB, we can solve technical computing problems faster than with traditional programming languages such as C, C++ and FORTRAN. Its typical uses include:
- Math and computation
- Algorithm development
- Data acquisition
- Modeling, simulation and prototyping
- Data analysis, exploration, visualization and engineering graphics

MATLAB has evolved over a period of years with inputs from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering and science. In industry, MATLAB is the tool of choice for high-productivity research analysis. Today, MATLAB engines incorporate the LINPACK and BLAS libraries, embedding the state of the art in software for matrix computation.

MATLAB is an interactive system whose basic data element is an array. It allows you to solve many technical computing problems, especially those with matrix and vector formulations. MATLAB features a family of add-on application-specific solutions called toolboxes. Toolboxes allow you to learn and apply specialized technology. A toolbox is a comprehensive collection of MATLAB functions that extends MATLAB to solve particular classes of problems; areas in which toolboxes are available include fuzzy logic, wavelets, simulation, signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology.

The MATLAB system consists of the development environment, the mathematical function library, the MATLAB language, graphics and the application program interface. A set of tools helps you to use MATLAB functions and files, many of which have graphical user interfaces; this includes the MATLAB desktop, command window, command history, workspace, and browsers for viewing help and the search path. The function library covers everything from elementary functions like sum, sine, cosine and complex arithmetic to sophisticated functions like the matrix inverse, matrix eigenvalues, Bessel functions and the fast Fourier transform. MATLAB has extensive facilities for displaying vectors and matrices as graphs, and it includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation and presentation graphics. The MATLAB language provides a high-level matrix/array language with control flow statements, functions, data structures, input/output and object-oriented programming features.

In engineering, MATLAB has many advantages. A short working period, a simple programming language and easy interfacing of software to hardware are some of its benefits. For image processing, and for identifying the output and behavior of a system, MATLAB is very useful. As a student of engineering, one should have a proper knowledge of MATLAB.
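As a small taste of the matrix-oriented style described above, here is a short illustrative session; the particular matrix, system and plot are made up for demonstration:

    % Define a matrix and solve a linear system - the core MATLAB idiom
    A = [4 -2; 1 1];             % 2x2 coefficient matrix
    b = [2; 3];                  % right-hand side column vector
    x = A\b;                     % solves A*x = b (here x = [4/3; 5/3])
    lambda = eig(A);             % eigenvalues via a built-in function

    % Two-dimensional data visualization in two lines
    t = 0:0.01:2*pi;             % vector of sample points
    plot(t, sin(t), t, cos(t)); legend('sin', 'cos');
    title('A first MATLAB plot');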

About one-third to one-half of the roughly 1 million Matlab users are involved in electronic systems design.


MATERIALS SCIENCE
Dr. Dipak Raj Adhikari

MATERIALS SCIENCE is an understanding of the microstructure of solids. Microstructure is used broadly in reference to solids viewed at the subatomic (electronic) and atomic levels, and to the nature of defects at these levels.

Materials science relies heavily on physics, chemistry and the engineering fields. Physical properties of materials are usually the deciding factor in choosing which material should be used for a particular application. This involves looking at many factors such as material composition and structure (chemistry), fracture and stress analysis (mechanical engineering), conductivity (electrical engineering), and optical and thermal properties (physics). It also involves processing and production methods. Research in this area involves many peripheral areas including crystallography, microscopy, mineralogy, photonics, and powder diffraction.

Materials science encompasses the study of the structure and properties of any material and uses this body of knowledge to create new types of materials. Materials engineering is concerned with the design, fabrication, and testing of engineering materials. Such materials must simultaneously fulfill dimensional, quality control, and economic requirements. Several manufacturing steps may be involved:
1. Primary fabrication - solidification or vapor deposition of homogeneous or composite materials
2. Secondary fabrication - shaping and micro-structural control by mechanical working, machining, sintering, joining and heat treatment
3. Testing - measuring the degree of reliability of a processed part, destructively or non-destructively

Materials science and engineering affects quality of life, industrial competitiveness, and the global environment. The Materials Research Society maintains three very extensive directories: professional societies, academic departments, and government organizations.

Materials are synonymous with civilization. It is of interest to compare the developments in transport, communication, energy, health and environment with their impact on the strategic, civilian and rural stakeholders of the nation. Just as the economy is getting globalized, research is also undergoing globalization. Materials science plays an increasing role in providing health care. The need for prosthetic devices has increased over the years. As a spin-off from defense research, carbon-fiber composites have been used to provide lightweight replacements for metallic calipers for polio patients. A sturdy prosthetic foot was developed from a rubberized material having the special quality of water resistance. Materials technology has played a key role in building materials, biomass gasification and domestic cook stoves. These problems may appear less glamorous than those in the high technology area but are nevertheless equally challenging.

The transformation of materials science into a multidisciplinary, multi-institutional and multinational endeavor has led to interesting developments. In many ways, basic research underpins developments in technology. In condensed matter physics, solid-state chemistry and physical metallurgy, notable advances have been made in new materials synthesis and in understanding phenomena such as self-organization, as in novel high temperature superconductors, aluminum matrix composites, new titanium ternary intermetallics, decagonal quasi-crystals, discotic liquid crystals, crystal engineering and the density functional theory of freezing. In carbon nano-tubes, the synthesis of Y-junction tubes and a new effect of electric field generation due to fluid flow are seen as major breakthroughs. At first, scientists were led into thinking of nano-materials as just another class of materials with the singular defining feature of small size. It is now apparent that nanomaterials are a completely new development and need to be considered in their own right.

Dr. Adhikari is Assistant Professor at the Department of Natural Sciences (Physics).

A chip of silicon a quarter-inch square has the capacity of the original 1949 ENIAC computer, which occupied a city block.


RENEWABLE ENERGY
Pravesh Kafle (Batch 2005, P&C)

RENEWABLE ENERGY is derived from resources that are regenerative. It effectively uses natural resources such as sunlight, wind, rain, tides and geothermal heat, which are naturally replenished. Each of these sources has unique characteristics which influence how and where it can be used. For this reason renewable energy sources are fundamentally different from fossil fuels, and they do not produce as many greenhouse gases and other pollutants as fossil fuel combustion does. The traditional use of wind, water and solar energy is widespread in developed and developing countries, but the mass production of electricity using renewable energy sources has become commonplace only recently, reflecting the major threat of climate change due to pollution, the exhaustion of fossil fuels, and the environmental, social and political risks of fossil fuels and nuclear power. In 2006, about 18 percent of global final energy consumption came from renewables, with 13% coming from traditional biomass, like wood-burning. Hydropower was the next largest renewable source, providing 3%, followed by hot water/heating, which contributed 1.3%. Modern technologies, such as geothermal, wind, solar, and ocean energy, together provided some 0.8% of final energy consumption. The technical potential for their use is very large, exceeding all other readily available sources.

SOURCES

WIND ENERGY: Wind energy is one of the most environmentally friendly sources of renewable energy. In this type of renewable energy, air flows can be used to run wind turbines, some of which are capable of producing 5 MW of power. The power output of a turbine is a function of the cube of the wind speed, so as the wind speed increases, power output increases dramatically (a numerical illustration follows this article). Areas where winds are stronger and more constant, such as offshore and high altitude sites, are preferred locations for wind farms. Wind power is the fastest growing of the renewable energy technologies. Over the past decade, global installed maximum capacity increased from 2,500 MW in 1992 to just over 40,000 MW at the end of 2003, at an annual growth rate of 30%. Globally, the long term technical potential of wind energy is believed to be five times the current production. Wind strength near the earth's surface varies and thus cannot guarantee continuous power unless combined with another energy source or a storage system. Some estimates suggest that only 33% of total installed capacity can be relied on for continuous power.

WATER ENERGY: It is the most important renewable energy in this growing world. Earth is replete with water, but not all of it can be used. However, it is the most easily available source of energy in the world. Energy in water is in the form of motion or temperature difference, which can be harnessed and used. Since water is about 1,000 times denser than air, even a slow flowing stream or a moderate sea swell can yield a considerable amount of energy. The different forms of water energy are:
Hydroelectric Energy is a term usually reserved for hydroelectric dams.
Micro Hydro Systems are hydroelectric power installations that typically produce up to 100 kW of power. They are often used in water-rich areas as a remote area power supply (RAPS).
Tidal Power captures energy from the tides in the vertical direction. Tides come in, raise the water level in a basin, and roll out. Around low tide, the water in the basin is discharged through a turbine.
Ocean Thermal Energy Conversion (OTEC) uses the temperature difference between the warmer surface of the ocean and its colder lower recesses. To this end, it employs a cyclic heat engine.

SOLAR ENERGY: Solar energy is the form of energy that is collected from sunlight. Solar energy can be used to generate electricity using photovoltaic solar cells or concentrated solar power. It can also be applied to generate electricity by heating trapped air, which rotates a turbine in a solar updraft tower. It is also applied to heating buildings directly through passive solar design and to heating foodstuffs through solar ovens. Heating and cooling of air can also be done by using solar chimneys.

GEOTHERMAL ENERGY: Geothermal energy is energy obtained by tapping the earth's heat, usually from kilometers deep in the earth's crust. Even though it is expensive to build, the operating cost is low, resulting in low energy cost for suitable sites. This energy derives from radioactive decay in the core of the earth, which heats the earth from the inside out.

BIOFUEL: Plants grow through photosynthesis and produce biomass. Biomass can be directly used as fuel to produce biofuel. Agriculturally produced biomass fuels, such as biodiesel, ethanol and biogases, can be burned to release their stored chemical energy. Types of biomass:
Liquid Biofuel: Liquid biofuel is usually either bio alcohol or bio oil. Biodiesel can be used in modern diesel vehicles with little or no modification to the engine, and it can be made from waste vegetable and animal oils and fats. The use of biodiesel reduces emission of carbon monoxide and other hydrocarbons by 20 to 40%.
Solid Biomass: Direct use of biomass is usually in the form of a combustible solid, usually wood, the biogenic portion of municipal solid waste, or combustible field crops. Most biomatter, including dried manure, can actually be burnt to heat water and to drive turbines.
Biogas: Biogas can be produced from waste streams such as paper, sewage, animal wastes etc. These various waste streams have to be slurried together and allowed to ferment naturally to produce methane gas.

With the unchecked use of fossil fuels, it was imminent that there would be fuel shortages. Renewable energy can definitely be an alternative to fossil fuels, but only if we recognize its value and importance.

More than 10,000 homes in the United States are powered entirely by solar energy.
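The cubic dependence of wind power on wind speed noted in the Wind Energy section can be checked with a few lines of MATLAB. The rotor size and power coefficient below are hypothetical, chosen only to illustrate the standard formula P = 0.5*rho*A*Cp*v^3:

    % Wind turbine output versus wind speed (illustrative values only)
    rho = 1.225;                   % air density (kg/m^3)
    r   = 40;                      % rotor radius (m), hypothetical
    A   = pi*r^2;                  % swept area (m^2)
    Cp  = 0.40;                    % power coefficient, hypothetical (Betz limit ~0.59)
    v   = 4:2:12;                  % wind speeds (m/s)
    P   = 0.5*rho*A*Cp.*v.^3/1e6;  % output power in MW
    disp([v' P'])                  % doubling the wind speed gives ~8x the power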

ALTERNATIVES TO FOSSIL FUELS
Sunil Baniya (Batch 2005, P&C)

MODERN CIVILIZATION is based on an economy that depends heavily on a sufficient supply of energy. There has been a huge dependence on fossil fuels to meet the energy needs and demands of the industrial, household and transport sectors throughout the world. A global discussion has begun on how long they will last and what alternatives are feasible. According to the United States Energy Information Administration, world demand for oil in 2005 averaged 83.7 million barrels per day, and in 2006 it was 85.3 million barrels per day. The forecast for oil consumption in 2007 is 87.2 million barrels per day. Between 2003 and 2005, there was 2.22 percent growth in the demand for oil. If this growth rate persists, efforts to meet the increasing demand will come to a standstill, because worldwide oil reserves are set to be drained by 2028. So we are very close to depleting global oil reserves on the supply side, while suffering from economic pressures on the demand side.

As our country relies completely on fossil fuels for transportation, it imports all of its petroleum products from India. The price of petroleum products is increasing every year, but since the use of other alternative sources of energy is not practiced at present, the government is forced to increase the price of petroleum products in order to fulfill the demands for energy supply. Moreover, the cross-border price differences have given rise to the smuggling of cheap oil from Nepal to India. These factors, amongst others, have resulted in a drain on the foreign reserves of the country, which come from hard-earned public revenues. So it's high time we searched for alternative sources of energy which can replace fossil fuels. As Nepal is rich in water resources, the use of water as an alternative source can prove to be the best way of meeting the energy needs. Moreover, hydro resources in Nepal have the capacity to generate 42,000 MW of electricity. The transport sector is the largest consumer of petroleum products and is a major source of income for many people in Nepal. Therefore, it's obvious that hydroelectricity is the best alternative energy source for meeting the energy demands of the transport sector. A massive increase in the electrification of the transport system can be achieved through the utilization of Nepal's hydroelectricity potential.

The total fossil fuel used in the year 1997 is the result of 422 years of all plant matter that grew in the ancient earth.

HISTORY: Electric vehicles (EVs) run on electricity. They are environmentally friendly vehicles. EVs were first introduced in Nepal in the 1960s, when the first electric ropeway was brought into operation to transport goods from Hetauda in the south to Kathmandu. In 1975, the government of the People's Republic of China installed the first electric vehicle in Nepal, which was used for public transportation.


After that, the EV movement grew rapidly. The fuel crisis of 1989, which arose from India's imposed trade embargo, inspired, or rather united, a group of engineers to find an alternative arrangement for transportation. As a result, an organization called the Electric Vehicle Development Group was formed in 1992, which converted an old car into an EV. The project converted seven diesel-operated three wheelers into EVs (Safa Tempos) and successfully operated them as public vehicles for six months. In 1996, a group of Nepali professionals and entrepreneurs brought in seven EVs and started the first EV company in Kathmandu. A cable car company was also launched in 1998, which was very successful as it aptly suits the rough terrain of our country. Following its success, the government has planned to develop this system in different parts of the country. The vehicles running on electricity in Nepal are:

ROPEWAY: A ropeway is a rope or cable based transport system used especially in mountainous areas. The ropeway system consists of steel cables connected to poles. It is used to carry people or goods where other transportation systems are not available. This transportation system can be powered either manually or by a motor. A ropeway from Hetauda to Kathmandu was first started in Nepal to carry goods, but it is no longer in use. The ropeway can be a good transportation system in places where access to road transportation is limited.

CABLE CAR: It was the first ropeway system in Nepal which could carry both passengers and goods. It was first introduced in Nepal in November 1998 and is situated at a hilltop 1,315 m above sea level. It carries goods and passengers from Kurintar to Manakamana (about 2.8 km) within just ten minutes. It was mainly introduced in Nepal for the development of the tourism industry in the Manakamana region. About 20 passenger-carrying cable cars seating 6 passengers each, and 3 cargo cable cars, are operating now.

ELECTRIC TROLLEY BUS: It was the first public electric transportation. In its early days, it operated from Tripureshwor to Suryabinayak, a distance of 13 km. It was started with a motto to provide an excellent and dependable service to the people. Though the electric trolley bus system has been running for the past thirty years, lack of proper maintenance and management has limited it to running only three trolley buses on a 5 km route from Tripureshwor to Koteshwor.

10 percent of total US generating capacity is fueled by hydropower, about the same as natural gas.

SAFA TEMPO: Commercial manufacturing of battery operated electric three wheelers (commonly known as Safa Tempos) began in 1996 with the establishment of the Nepal Electric Vehicle Company. Production of Safa Tempos increased after the government decided to ban the ultra-polluting diesel operated three wheelers from Kathmandu valley. Most of these polluting three wheelers were converted into Safa Tempos. Now the number of Safa Tempos running in the streets of Kathmandu has exceeded 600, and the number of these vehicles is increasing. Whilst most of these Safa Tempos operate in the public transportation sector in Kathmandu, many operate as office vehicles, maintenance vehicles, tourism vehicles and waste collection vehicles. Safa Tempos are also being used widely in Nepal by ministries, diplomatic organizations, donor organizations, municipalities, media organizations, and public and private organizations.

MINI ELECTRIC VEHICLE (EV): Hulas, the only automobile manufacturer in Nepal, came out with its first five to ten passenger electric van on 23rd March 2006. This model is the electric version of the company's regular Mini Van. This electric vehicle is owned by, and is being used as an office vehicle by, Winrock International, Nepal. As the demand for electric vehicles has increased, private companies have also shown interest in the promotion of these vehicles. Other local electric vehicle manufacturers are also working on four wheelers and alternating-current based electric vehicles.

SIGNIFICANCE AND DRAWBACKS OF ELECTRIC VEHICLES: Electric vehicles are not only a solution to conventional fossil fuels but also a boon to the environment. The electric vehicles running in the streets of Kathmandu valley have a considerable role in reducing urban air pollution. Besides reducing air pollution, they have high economic value, emerging as an alternative to imported fossil fuel powered vehicles. However, the development of hydropower is important for the development and use of electric vehicles; with efficient use of hydropower, electric vehicles can flourish. Most hydropower plants in our country are of the run-of-river type and have a low grid load factor (that is, efficiency of use), with a huge chunk of electricity wasted during off-peak hours (night time). Battery powered electric vehicles can be charged during these hours to utilize that electricity. Although the operating cost of electric vehicles is higher than that of other vehicles, they have lots of public support.


The general public, fed up with smoke-belching vehicles, appreciate these cleaner vehicles. The use of hydropower not only brings environmental benefits but also a lot of economic value for a country like Nepal. In Nepal, all fossil fuels are imported from other countries. Promotion of these vehicles, which run on local resources, will eventually reduce the demand for imported fossil fuel. If the available water resources are used to produce electricity, then the price of electricity will surely decrease, and as a result the cost of transportation will also be lower. However, the operating costs of electric vehicles, especially the Safa Tempo, Trolley Bus and Mini EVs, are high compared to other vehicles due to the high electricity tariff and the extra cost of batteries. The operating costs of LPG and petrol driven three wheelers are Rs. 6.17 and Rs. 7.06 per kilometer respectively, whereas that of the EVs (Safa Tempos) is Rs. 11.62 per kilometer. In addition, the heavy load of the batteries used in Safa Tempos results in lower speed. These problems persist because of the lack of research and development in improving their efficiency.

CONCLUSION: Government policies are generally favorable towards the promotion of EVs. The National Transport Policy, 2058 promotes the use of environment friendly electric vehicles. In order to promote clean technology in the automobile industry, the government should provide tax incentives and other facilities to minimize operating costs and encourage domestic manufacturers to produce EVs which can compete with other vehicle products available in the market. EVs should also be promoted in all cities of Nepal. Although the EV industry in Kathmandu is struggling to survive, we should not forget that it is contributing to reducing urban air pollution in highly polluted cities like Kathmandu. Promotion of EVs will be an effective means of reducing the emission of harmful gases from vehicles, along with reducing the demand for fossil fuels to power vehicles. Therefore, there is an urgent need to find out the bottlenecks in the promotion of these environmentally friendly vehicles and make them competitive vehicles of the 21st century.

SIMULATION OF TRUNKED NETWORK USING MATLAB AND VERIFICATION OF ERLANG B TABLE
Mr. Saurav Ratna Tuladhar

IMAGINE! You are on your way back home from college and, unfortunately, you're stuck in a traffic jam. You want to call home to say that you will be late, but to your utter dismay your call cannot connect and you are presented with the very familiar pre-recorded voice: "Sorry! No channels are available at the moment. Please try later." Ever wondered why your call could not connect and why there isn't a channel available for you? The reason for this is rooted in the very nature of a communication network. A telecommunication network has to provide service to a large number of users/clients. Each user requires a channel to communicate with another. One crude approach would be to provide each user in the network with a dedicated channel. This would ensure that a call request by the user is never blocked. However, such an approach would be inefficient, cumbersome and too expensive for the operator. Hence, in telecommunications we rely on trunking to serve a large number of users with a limited number of channels/servers.

In a trunked network system, there are only a limited number of channels intended to serve a very large group of users. The channels are placed in a common pool for every user to share. No user has a fixed channel; rather, channels are assigned to users from the pool on an on-demand basis. When a user completes a call, the channel is placed back into the pool.

1 litre of regular gasoline is the time-rendered result of about 23.5 metric tonnes of ancient phytoplankton material deposited on the ocean floor.

The use of a trunked network is based on trunking theory, which exploits the statistical behavior of human users. This theory provides the basis for network operators to predict the number of channels required to serve the demand of their customers with satisfactory service quality. In a trunked network, since the number of available channels is less than the number of users, there is a possibility of all the channels being busy when a user attempts a call, whereby his call will be blocked. The blocked call may be either cleared or kept in a queue. Accordingly, we have two systems: Blocked Calls Cleared (BCC) and Blocked Calls Delayed. In a BCC trunked network, there is always a finite probability of a call attempt being blocked.


In such a system, once a call is blocked, the user can re-attempt the call. The probability of a call being blocked in a BCC network is taken as the measure of the network's Grade of Service (GoS). PSTN and cellular mobile networks are Blocked Calls Cleared type systems. It is standard practice to design a cellular mobile network with 2% GoS. Before I go on to discuss the simulation, let's go through some common terms relevant to trunking theory:

Blocked Call: A call which cannot be completed at the time of request, due to congestion. Also referred to as a lost call.
Holding Time (TH): Average duration of a typical call.
Request Rate (λ): The average number of call requests per unit time, in units of seconds^-1.
Busy Hour: The busiest one-hour period of the day. This is the period when incoming calls are most likely to be blocked or turned away, so this is the time for which statistics are calculated.
Traffic Intensity: In a telecommunication network, calls made by the users create traffic. This traffic is measured in units of Erlang. It is a dimensionless quantity and may be used to measure the time utilization of single or multiple channels. It is denoted by A and is proportional to TH and λ: A = λ · TH.

The probability of a call being blocked in a BCC trunked network with N trunked channels and A Erlang of offered traffic is given by the Erlang B formula, which is as follows:

    Pr[blocking] = (A^N / N!) / ( Σ (k = 0 to N) A^k / k! )

The traffic capacity (A) of a BCC trunked network system is tabulated for various values of GoS and numbers of channels (N) in a standard Erlang B table. The aim of this simulation is to model a BCC trunked network, simulate the system to determine the blocking probability under different conditions, and verify the result against the Erlang B table.

SIMULATION: The simulation was implemented in Matlab. The following approach was developed based on the nature of a BCC trunked network. It is assumed that 1000 calls are attempted through the network. Then we generate the arrival instant (arrvInst) and termination time (termTime) for each call. Inter-arrival time and holding time are exponentially distributed random values. Appropriate values for mean inter-arrival time (mIAT) and mean holding time (mHT) are chosen, and using these mean values, exponentially distributed values arrvInst and termTime are generated. With all the timing information, we compute the total number of blocked and serviced calls. An algorithm was developed (flowchart, Figure 1) to model the call handling nature of a blocked calls cleared network. The algorithm was implemented in Matlab (trafficsim.m).

FIGURE 1: Flowchart for implementation of BCC trunked network
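The actual trafficsim.m can be obtained from the File Exchange link given at the end of this article. The following is only a minimal sketch of the same blocked-calls-cleared logic, with variable names mirroring the description above (exprnd needs the Statistics Toolbox):

    % Minimal BCC trunked-network simulation sketch
    nCalls = 1000;       % calls attempted through the network
    N      = 10;         % number of trunked channels
    mIAT   = 3;          % mean inter-arrival time (s)
    mHT    = 18.648;     % mean holding time (s)

    arrvInst = cumsum(exprnd(mIAT, nCalls, 1));    % exponential arrivals
    termTime = arrvInst + exprnd(mHT, nCalls, 1);  % exponential holding

    chanFree = zeros(N, 1);   % instant at which each channel becomes free
    blocked  = 0;
    for k = 1:nCalls
        [soonest, idx] = min(chanFree);   % channel that frees up first
        if soonest <= arrvInst(k)
            chanFree(idx) = termTime(k);  % a free channel serves the call
        else
            blocked = blocked + 1;        % all N channels busy: call lost
        end
    end
    fprintf('Blocking: %.2f%% (offered traffic A = %.2f Erlang)\n', ...
            100*blocked/nCalls, mHT/mIAT);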

The first ever underground trunk telephone cable went from London to Birmingham, through what were then termed the Midland and Northern Counties.


RESULTS: The simulations were run for two cases: (a) 10 trunked channels and (b) 20 trunked channels. For each case, different sets of mean inter-arrival time and mean holding time were chosen, and for each set the simulation was run 100 times. The final blocking percentage was obtained by averaging the results of each run (Table 1: Simulated Blocking %). The simulated blocking percentages were then compared with the values given in the standard Erlang B table. From the data presented in Table 1, we can see that the simulation results closely match the standard values. Hence, for the sample values considered in the simulation, the implementation is acceptable. Everyone is free to obtain the Matlab implementation from the MathWorks File Exchange portal (www.mathworks.com/matlabcentral/fileexchange, search keyword: Lost Calls Cleared). You are welcome to study the implementation and make improvements. Any suggestions can be directed to me.

REFERENCES:
1. T.S. Rappaport, Wireless Communications: Principles and Practice, 2nd Edition, 2005, pp. 77-80.
2. Handbook of Teletraffic Engineering, ITU-D Study Group 2, 2005.

Mr. Tuladhar is a Teaching Assistant at the Department of Electrical and Electronic Engineering.

Table 1: Simulated Blocking % Compared With Blocking % in the Erlang B Table

 N  | Mean Inter-arrival Time | Arrival Rate (λ) | Mean Holding Time (TH) | Erlang Traffic (A) from table | Simulated Blocking % | Blocking % from table
 10 | 3                       | 0.33             | 18.648                 |  6.216                        |  5.09                |  5
 10 | 5                       | 0.20             | 37.555                 |  7.511                        |  9.88                | 10
 10 | 7                       | 0.14             | 60.312                 |  8.616                        | 14.67                | 15
 10 | 5                       | 0.20             | 59.75                  | 11.95                         | 29.86                | 30
 10 | 2                       | 0.50             | 29.36                  | 14.68                         | 39.15                | 40
 20 | 3                       | 0.33             | 39.54                  | 13.18                         |  1.96                |  2
 20 | 5                       | 0.20             | 76.25                  | 15.25                         |  4.96                |  5
 20 | 3                       | 0.33             | 52.83                  | 17.61                         |  9.49                | 10
 20 | 5                       | 0.20             | 98.25                  | 19.65                         | 14.59                | 15
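Readers without a printed Erlang B table can reproduce the "Blocking % from table" column numerically. Here is a small sketch using the standard recursive form of the Erlang B formula (this recursion is textbook material, not part of the article's implementation):

    % erlangb.m - Erlang B blocking probability via the recursion
    % B(A,0) = 1,  B(A,n) = A*B(A,n-1) / (n + A*B(A,n-1))
    function B = erlangb(A, N)
        B = 1;
        for n = 1:N
            B = A*B / (n + A*B);
        end
    end

For example, erlangb(6.216, 10) returns roughly 0.05, i.e. 5% blocking, matching the first row of Table 1.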

A mobile phone is designed to operate at a maximum power level of 0.6 watts. A household microwave oven uses between 600 and 1,100 watts.


PLC - VOICE OVER POWER LINE
Bishal Silwal (Batch 2005, P&C)

POWER LINE Communication (PLC) is a type of communication technique that uses the power line as a communication channel through which frequency signals are transmitted. This technology is cheap to implement, as no extra wire is needed: the pre-existing power cables can be used as channels for carrying information. In PLC, communication is done by superimposing an analog signal over the standard 50 or 60 Hz alternating current. PLC is gaining popularity all over the world due to its wide coverage. It also helps to bridge the digital divide in a very cheap way. PLC has both broadband and narrowband applications.

One of the narrowband applications of PLC is voice communication over the power line. This is a system that can transmit audio (or voice) through power lines and receive the transmitted signal at the other end. However, communicating via power lines is not an easy task because of the high frequency noise, time-varying attenuation and variable impedance that exist in a power line. The system mainly consists of three major components: a transmitter, a receiver and a channel. The transmitter and the receiver are low voltage, high frequency systems, whereas the channel (power line) is a high voltage, low frequency system. So a power line coupler is used to couple the transmitted signal to the power line. The power coupler provides an interface between the power line and the communication system on both the transmitter and receiver sides. The coupling circuit is designed using a transformer and capacitors. The block diagram of voice communication over power line is shown below:

(Block diagram: Input Signal (Audio) -> Voltage Controlled Oscillator (Modulator) -> Power Coupler -> Power Line -> Power Coupler -> Band Pass Filter -> Amplifier -> Phase Locked Loop (PLL) (Demodulator) -> Low Pass Filter -> Audio Amplifier -> Speaker)

The transmitter part consists of a frequency modulator through which the input audio (or voice) is given. The process of impressing a low frequency signal onto a higher frequency carrier signal is called modulation. A voltage-controlled oscillator can be used as a frequency modulator. The audio signal is given as an input to the VCO, which modulates it and gives the modulated signal; this is then coupled to the power line and transmitted. A power coupler is used on both the transmitter and receiver sides to block or pass signals. The coupler consists of coupling capacitors and a transformer called an isolation transformer. The coupling capacitors block the low frequency power signals and pass the high frequency carrier signals. The capacitors have to be of high voltage and high frequency ratings. The isolation transformer provides galvanic isolation and impedance matching, and it passes the high frequency carrier signals.

The received signal is first sent through an active band pass filter. The active band pass filter used is a narrow band pass filter whose quality factor Q > 20. In a narrow band pass filter, the output voltage peaks at the central frequency. The band pass filter cuts off the high and low frequency noise and passes the signal of frequency near the carrier frequency. The signal from the band pass filter is then passed through an amplifier. The signal which was modulated and transmitted via the power lines is now demodulated at the receiver side to recover the original audio signal. Demodulation of the signal is done using a Phase Locked Loop (PLL). The free-running frequency of the PLL is set to the center frequency of the VCO by the timing capacitor and resistor. The demodulated signal is then passed through an audio amplifier that amplifies low-power audio signals (signals of frequencies between 20 Hz and 20 kHz, the human range of hearing) to a level suitable for driving loudspeakers. Finally, the output from the audio amplifier is given to the speaker and the audio can be heard. This system can be implemented in our daily life: we can play a song in one room of the home and listen to it through the speakers in any other room without the need for extra wires.

Not only this, PLC may be used in industrial automation, security and safety, remote control and monitoring etc. Various researches are under way on both the narrowband and broadband applications of PLC. PLC is a proven, competitive, alternative technology which is supported by utilities and manufacturers for control and communication applications, because of the advantage of utilizing the pre-existing power line cable as a communication channel.

The Telephone Company Ltd issued the first known telephone directory in January 1880, containing details of over 250 subscribers.
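The modulate/demodulate chain described above (a VCO at the transmitter, a PLL at the receiver) can be tried out at baseband in MATLAB before any hardware is built. This sketch relies on the Communications Toolbox functions fmmod and fmdemod; the carrier and deviation values are hypothetical and far below what a real power line coupler would use:

    % Baseband illustration of the FM voice link (hypothetical values)
    Fs   = 48e3;                  % sampling rate (Hz)
    t    = (0:1/Fs:0.05)';        % 50 ms of signal
    m    = sin(2*pi*440*t);       % "voice": a 440 Hz test tone
    Fc   = 10e3;                  % carrier frequency (Hz), hypothetical
    fdev = 2e3;                   % frequency deviation (Hz), hypothetical

    y = fmmod(m, Fc, Fs, fdev);   % the VCO's job: frequency modulation
    r = fmdemod(y, Fc, Fs, fdev); % the PLL's job: FM demodulation
    % In the real system, y would pass through the capacitor/transformer
    % coupler onto the power line before reaching the receiver.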


LANDLINE MESSAGING SYSTEM
Deepranjan Dongol (Batch 2005, P&C)

AUTOMATION IS the use of control systems, such as computers, to control industrial machinery and processes, replacing human operators. It plays an increasingly important role in the global economy and in daily experience. Microcontrollers play an essential role in the field of automation. Making use of this idea, we can choose to do a project on messaging via landline telephone using mainly a DTMF (Dual Tone Multiple Frequency) chip, a microcontroller and an LCD display. Such a phone structure for text messaging does not require a specialized messaging center and a separate data channel, leading to a simple message display system. The electronic components required are an 89C52 microcontroller, a 16X2 LCD display and an 8870 DTMF decoder chip. For this purpose, students can use the intercom to build the system. The system can be divided into the following sections:

LCD section: The decoded output of the microcontroller is displayed. A 16X2 LCD can be used for convenience.

DTMF section: In this section the DTMF signal is decoded into a 4-bit binary code via the 8870 DTMF IC. The datasheet of the 8870 DTMF chip is easily available with the test circuit diagram and the truth table.

Microcontroller section: This is the most important part of the system. The 4-bit output of the DTMF IC is interfaced with the microcontroller, which is programmed to constantly remain in a loop checking for input from the DTMF chip. At the arrival of the input, the microcontroller is interrupted. The microcontroller then decodes each input signal to the corresponding alphanumeric character according to the algorithm table (see Table 1). The algorithm table shows the processing of the microcontroller. For instance, if key 2 is entered twice within an interval defined by the programmer, it is decoded as 'b'; similarly, on pressing key 3 once, it is decoded as 'd'. This algorithm is analogous to SMS typing in a mobile communication system. The alphanumeric character is then displayed on the LCD. In this way, a message is conveyed from the sender to the receiver. Students interested in learning microcontrollers can use this project to learn the basics. Besides programming, this project will be helpful in learning to interface external devices to the microcontroller. (A small sketch of the decoding rule follows Figure 1.)

Table 1: Codes for the keys according to the number of times pressed within the defined time interval

Key | Press 1   | Press 2 | Press 3 | Press 4 | Press 5
 0  | 0         | space   |         |         |
 1  | .         | ,       | ;       | 1       |
 2  | a         | b       | c       | 2       |
 3  | d         | e       | f       | 3       |
 4  | g         | h       | i       | 4       |
 5  | j         | k       | l       | 5       |
 6  | m         | n       | o       | 6       |
 7  | p         | q       | r       | s       | 7
 8  | t         | u       | v       | 8       |
 9  | w         | x       | y       | z       | 9
 *  | Backspace |         |         |         |

Figure 1: Simplified block diagram of the system: Sender Landline Phone -> Telephone Line -> Receiver Landline Phone -> DTMF Decoder -> Microcontroller -> 16X2 LCD Display
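The decoding rule of Table 1 is easy to prototype before writing 89C52 firmware. Here is a small MATLAB sketch of the algorithm; the key map mirrors Table 1, while the wrap-around for extra presses is an assumption of this sketch, not something the article specifies:

    % Multi-tap decoding per Table 1 (illustration only; the real
    % implementation runs on the 89C52 in C or assembly).
    keymap = {'0 ', '.,;1', 'abc2', 'def3', 'ghi4', 'jkl5', ...
              'mno6', 'pqrs7', 'tuv8', 'wxyz9'};    % keys '0'..'9'

    key = '2'; presses = 2;         % key 2 pressed twice within the interval
    chars = keymap{key - '0' + 1};  % character set for this key
    idx = mod(presses - 1, length(chars)) + 1;  % assumed wrap-around
    decoded = chars(idx);           % -> 'b', as in the example above
    fprintf('Key %c pressed %d times -> %c\n', key, presses, decoded);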

The first telephone networks were tiny, connecting a handful of users in closely located buildings, with a small switchboard in between.


HVdC TRANSmISSION
with direct current. The availability of transformers and the development and improvement of induction motors at the beginning of the 20th century led to greater appeal and use of AC transmission. Through research and development in Sweden at Allmana Svenska Electriska Aktiebolaget (ASEA), an improved multi-electrode grid controlled mercury arc valve for high power and voltages were developed in 1929. Experimental plants were set up in the 1930s in Sweden and the USA to investigate the use of mercury arc valves in conversion processes for transmission and frequency changing. DC transmission now became practical when long distances were to be covered or where cables were required. The increase in need for electricity after the Second World War stimulated research, particularly in Sweden and in Russia. In 1950, a 116 km experimental transmission line was commissioned from Moscow to Kasira at 200 kV. The first commercial HVDC line built in 1954 was a 98 km submarine cable with ground return between the island of Gotland and the Swedish mainland. Thyristors were applied to DC transmission in the late 1960s and solid state valves became a reality. In 1969, a contract for the Eel River DC link in Canada was awarded as the first application of solid state valves for HVDC transmission. Today, the highest functional DC voltage for DC transmission is +/- 600 kV for the 785 km transmission line of the Itaipu scheme in Brazil. DC transmission is now an integral part of the delivery of electricity in many countries throughout the world. WhY UsE Dc TransMissiOn? The question is often asked, Why use DC transmission? One response is that losses are lower, but this is not correct. The level of losses is designed into a transmission system and is regulated by the size of conductor selected. DC and AC conductors, either as overhead transmission lines or submarine cables can have lower losses but at higher expense since the larger cross-sectional area will generally result in lower losses but cost more. When converters are used for DC transmission in preference to AC transmission, it is generally by economic choice driven by one of the following reasons: 1. An overhead DC transmission line with its towers can be designed to be less costly per unit of length than an equivalent AC line designed to transmit the same level of electric power. However the DC converter stations at each end are more costly than the terminating stations of an AC

MandIP luItel..Batch 2004..P&c

LECTRIC POwER transmission was originally developed

line and so there is a breakeven distance above which the total cost of DC transmission is less than its AC transmission alternative. 2. If transmission is by submarine or underground cable, the breakeven distance is much less than overhead transmission. It is not practical to consider AC cable systems exceeding 50 km but DC cable transmission systems are in service whose length is in the hundreds of kilometers and even distances of 600 km or greater have been considered feasible. 3. Some AC electric power systems are not synchronized to neighboring networks even though the physical distance between them is quite small. This occurs in Japan where half the country is a 60 Hz network and the other is a 50 Hz system. It is physically impossible to connect the two together by direct AC methods in order to exchange electric power between them. However, if a DC converter station is located in each system with an interconnecting DC link between them, it is possible to transfer the required power flow even though the AC systems so connected remain asynchronous. ADVANTAGES If the cost of converter station is excluded, the DC overhead lines and cables are less expensive than AC lines and cables. A DC link is asynchronous. The corona loss and radio interference are less. The line length is not restricted by stability. The interconnection of two separate AC systems via a DC link does not increase the short-circuit capacity, and thus the circuit breaker ratings of either system. The DC line loss is smaller than for the comparable AC line. The DC line has two conductors rather than three in the case of AC line and thus requires two-thirds as many insulators. The required towers and right-of-way are narrower in the DC line than the AC line.

DISADVANTAGES
- The converters generate harmonic voltages and currents on both the DC and AC sides, so filters are needed.
- The converters consume reactive power.
- The DC converter stations are expensive.
- DC circuit breakers are difficult to design.
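To make the breakeven idea from reason 1 concrete, here is a minimal Python sketch. All cost figures are invented for illustration; real terminal and per-kilometre costs vary widely from project to project.

# Hypothetical cost figures (millions of USD) -- illustrative assumptions only.
# DC converter stations cost more, but the DC line is cheaper per kilometre.
AC_TERMINALS, AC_PER_KM = 50.0, 0.60
DC_TERMINALS, DC_PER_KM = 150.0, 0.40

def total_cost(terminal_cost, cost_per_km, distance_km):
    """Total cost of a link: fixed terminal cost plus line cost."""
    return terminal_cost + cost_per_km * distance_km

# Breakeven where DC_TERMINALS + DC_PER_KM*d = AC_TERMINALS + AC_PER_KM*d
breakeven = (DC_TERMINALS - AC_TERMINALS) / (AC_PER_KM - DC_PER_KM)
print(f"Breakeven distance: {breakeven:.0f} km")   # 500 km with these numbers

for d in (200, 500, 800):
    ac = total_cost(AC_TERMINALS, AC_PER_KM, d)
    dc = total_cost(DC_TERMINALS, DC_PER_KM, d)
    print(f"{d} km: AC {ac:.0f}M USD, DC {dc:.0f}M USD")

Beyond the assumed 500 km, the cheaper DC line has paid back the extra converter cost.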

A charged piece of acrylic behaves like a charged high voltage capacitor which can retain almost 1000 Joules of electrostatic energy.


Some notable HVDC transmission systems used worldwide are:
- In Itaipu, Brazil, HVDC was chosen to supply 50 Hz power into a 60 Hz system, and to economically transmit a large amount of hydro power (6300 MW) over a large distance (785 km).
- In the Leyte-Luzon Project in the Philippines, HVDC was chosen to enable the supply of bulk geothermal power across an island interconnection, and to improve the stability of the Manila AC network.
- In the Rihand-Delhi Project in India, HVDC was chosen to transmit bulk (thermal) power (1500 MW) to Delhi, to ensure minimum losses, the least right-of-way, and better stability and control.
- In Garabi, an independent transmission project (ITP) transferring power from Argentina to Brazil, an HVDC back-to-back system was chosen to ensure the supply of 50 Hz bulk (1000 MW) power to a 60 Hz system under a 20-year power supply contract.
- In Gotland, Sweden, HVDC was chosen to connect a newly developed wind power site to the main city of Visby, in consideration of the environmental sensitivity of the project area (an archaeological and tourist area) and to improve power quality.
- In Queensland, Australia, HVDC was chosen in an ITP to interconnect two independent grids (of New South Wales and Queensland) to enable electricity trading between the two systems (including change of direction of power flow), ensure very low environmental impact and reduce construction time.
There are over 80 existing HVDC transmission systems worldwide, using mercury arc rectifiers, thyristors or IGBTs as converters to realize high-voltage DC transmission.

USE OF HVDC VOLTAGES HIGHER THAN 600 KV
The growth and extension of power grids, and consequently the introduction of higher voltage levels, have been driven by large growth in power demand over decades. In the next 20 years, power consumption in developing and emerging countries is expected to increase by 220%. Priority in future developments will be given to low costs at still-adequate technical quality and reliability. HVDC technology is the solution, especially for bulk power transmission from remote generating stations over long distances to load centers. In the future it will be necessary to transmit bulk power of more than 4000 MW over more than 2000 km; it is therefore important to develop HVDC transmission technologies for voltage levels above 600 kV. Various research efforts are under way worldwide on the possibility of transmitting at 800 kV. At present it is a challenge to all engineers to make it a reality.

FLEXIBLE AC TRANSMISSION SYSTEMS


Bijaya Ghimire..Batch 2004..P&C

THE CURRENT scenario in power systems is the use of AC transmission networks for the transfer of power from generation centers to load centers. Although HVDC transmission has emerged in the power transmission system, it has not shown its optimum potential to replace AC transmission and distribution systems. The AC transmission system depends on system parameters such as line impedances, currents and voltages, which limits the power transfer capability of the network. In an AC transmission network, a mismatch of these parameters causes some parts of the network to be overloaded while other parts are underutilized; as a result, power imbalance and voltage fluctuations are observed in the power system. The ultimate result of power imbalance and voltage fluctuation is loss of stability and reliability of the power system as a whole. Although such problems are of no general concern in our country, in countries with a deregulated power delivery mechanism they are a major concern of power generators, transmitters, distributors and customers, since reliability and quality are central to deregulated power delivery. This headache of power system operators was once mitigated with tap-changing transformers, series compensators and shunt compensators; with the advent of power electronics, and more specifically the emergence of FACTS technology, it has been mitigated far more effectively.

The longest HVDC link in the world is the Inga-Shaba 1700 km 600 MW link connecting the Inga Dam to the Shaba copper mine in Congo.




INTRODUCTION
The domination of power electronics in every sector of electrical engineering has opened new and broader ways of controlling the various parameters of a power system, including the flow of reactive power, and of making the power system flexible. Flexible AC Transmission Systems (FACTS) is a technology that responds to these needs. It significantly alters the way transmission systems are developed and controlled, together with improvements in asset utilization, system flexibility and system performance. FACTS technology basically uses semiconductor devices such as thyristors, Insulated Gate Bipolar Transistors (IGBTs) and Gate Turn-off Thyristors (GTOs). The parameters of the power system are controlled by varying the switching sequence of the semiconductor devices, and hence flexibility is obtained. Technological advancement has produced various FACTS devices, which are used in modern-day power systems according to the requirements of the system to be installed or upgraded. Some of the main FACTS devices are the Static Var Compensator (SVC), the Thyristor-Controlled and Thyristor-Switched Reactor (TCR and TSR), the Static Synchronous Compensator (STATCOM), the Thyristor-Switched Series Capacitor (TSSC), the Thyristor-Controlled Series Capacitor (TCSC), the GTO Thyristor-Controlled Series Capacitor (GCSC), the Static Synchronous Series Compensator (SSSC) and the Unified Power Flow Controller (UPFC).

STATIC SHUNT COMPENSATORS
Shunt compensation is achieved by varying a shunt impedance, voltage source or current source. Basically, current is injected into the system at the point of connection; this current, being in phase quadrature with the line voltage, enables the compensator to supply reactive power. This type of compensation influences the natural electrical characteristics of the transmission line, increasing the steady-state transmittable power and controlling the voltage profile along the line. Static Var Compensators (SVCs), among the most important FACTS devices, have been used for a number of years to improve transmission line economics by resolving dynamic voltage problems. Their accuracy, availability and fast response enable SVCs to provide high-performance steady-state and transient voltage control compared with classical shunt compensation. SVCs are also used to dampen power swings, improve transient stability, and reduce system losses through optimized reactive power control.

STATIC SYNCHRONOUS COMPENSATOR (STATCOM)

Fig: Static Synchronous Compensator

Among the FACTS devices available till now, one of the most important is the Static Synchronous Compensator (STATCOM), described in the following text. Basically, a STATCOM is a static synchronous generator operated as a shunt-connected static var compensator whose capacitive or inductive output current can be controlled independently of the AC system voltage.

The Ekibastuz-Kokshetau power line in Kazakhstan holds the record for operating at the highest transmission voltage in the world (1,150 kV).




The STATCOM is a controlled reactive power source. It provides voltage support by generating or absorbing reactive power at the point of common coupling, without the use of large external capacitors or reactors, and it basically exploits IGBTs or GTOs for the purpose of var compensation. The charged capacitor Cdc provides a DC voltage to the converter, which produces a set of controllable three-phase output voltages at the frequency of the AC power system.

Fig: Two machine system with STATCOM

By varying the amplitude of the output voltage U, the reactive power exchange between the converter and the AC system can be controlled. If the amplitude of the output voltage U is increased above that of the AC system voltage UT, a leading current is produced, i.e. the STATCOM is seen as a capacitor by the AC system, and reactive power is generated. Decreasing the amplitude of the output voltage below that of the AC system produces a lagging current, and the STATCOM is seen as an inductor; in this case reactive power is absorbed. If the amplitudes are equal, no power exchange takes place.

A practical converter is not lossless: the energy stored in the DC capacitor would be consumed by the internal losses of the converter. By making the output voltages of the converter lag the AC system voltages by a small angle, the converter absorbs a small amount of active power from the AC system to balance those losses. The mechanism of phase angle adjustment can also be used to control the reactive power generation or absorption by increasing or decreasing the capacitor voltage Udc, and thereby the output voltage U. Instead of a capacitor, a battery can be used as the DC energy source; in this case the converter can control both reactive and active power exchange with the AC system. The capability of controlling active as well as reactive power exchange is a significant feature which can be used effectively in applications requiring power oscillation damping, to level peak power demand, and to provide uninterrupted power for critical loads. The derivation of the formula for the transmitted active power involves considerable calculation.
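The behavior just described -- capacitive when the converter voltage is above the system voltage, inductive when below -- can be sketched with a simple two-source model. The sketch assumes the two voltages are in phase across a coupling reactance X; the per-unit values are illustrative assumptions, not figures from this article.

def statcom_q(u_conv, u_sys, x_link):
    """Reactive power injected into the AC system: Q = Usys * (U - Usys) / X.
    Positive Q -> the STATCOM supplies reactive power (looks capacitive);
    negative Q -> it absorbs reactive power (looks inductive)."""
    return u_sys * (u_conv - u_sys) / x_link

for u in (1.05, 1.00, 0.95):   # converter voltage above, at, below the system voltage
    q = statcom_q(u, 1.00, 0.10)
    mode = "capacitive" if q > 0 else "inductive" if q < 0 else "no exchange"
    print(f"U = {u:.2f} pu -> Q = {q:+.2f} pu ({mode})")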

COMPARISON OF SHUNT COMPENSATORS
The SVC and the STATCOM are very similar in their functional compensation capability, but their basic operating principles are fundamentally different. A STATCOM functions as a shunt-connected synchronous voltage source, whereas an SVC operates as a shunt-connected, controlled reactive admittance. This difference accounts for the STATCOM's superior functional characteristics, better performance and greater application flexibility compared with an SVC. In the linear operating range, the V-I characteristics and functional compensation capabilities of the STATCOM and the SVC are similar. In the non-linear operating range, however, the STATCOM is able to control its output current over the rated maximum capacitive or inductive range independently of the AC system voltage, whereas the maximum attainable compensating current of the SVC decreases linearly with AC voltage. Thus, the STATCOM is more effective than the SVC in providing voltage support under large system disturbances, during which the voltage excursions would be well outside the linear operating range of the compensator. The ability of the STATCOM to maintain full capacitive output current at low system voltage also makes it more effective than the SVC in improving transient stability. The attainable response time and the bandwidth of the closed voltage regulation loop of the STATCOM are also significantly better than those of the SVC. In situations where active power compensation is necessary, the STATCOM can interface suitable energy storage (a large capacitor or a battery), from which it can draw active power at its DC terminal and deliver it as AC power to the system. The SVC, on the other hand, does not have this capability.

REFERENCES
1. Spectrum, Vol. 1, Issue 2, March 2001
2. Muhammad H. Rashid, Power Electronics, 3rd Ed., PHI, 2004
3. S. Tara Kalyani and G. Tulasiram Das, "Simulation of Real and Reactive Power Flow Control with UPFC Connected to a Transmission Line", Jawaharlal Nehru Technological University, Hyderabad, India - 500085

The Ameralik Span is the longest span of an electrical overhead powerline in the world with a span width of 5,376 metres.


SMALL WONDERS: THE WORLD OF NANOSCIENCE

Quree Bajracharya..Batch 2005..C

IMAGINE A world where microscopic medical implants patrol our arteries, diagnosing ailments and fighting disease; where military battle-suits deflect explosions; where computer chips are no bigger than specks of dust; and where clouds of miniature space probes transmit data from the atmospheres of Mars or Titan.

INTRODUCTION
There's an unprecedented multidisciplinary convergence of scientists dedicated to the study of a world so small, we can't see it -- even with a light microscope. That world is the field of nanotechnology, the realm of atoms and nanostructures. Nanotechnology is so new that no one is really sure what will come of it.

A nanometer (nm) is one-billionth of a meter, smaller than the wavelength of visible light and a hundred-thousandth the width of a human hair. Nanotechnology includes the many techniques used to create structures at a size scale below 100 nm. It is similar to the way our bodies are assembled, in a specific manner, from millions of living cells. At the atomic scale, elements are at their most basic level; on the nanoscale, we can potentially put these atoms together to make almost anything.

HOW NEW IS NANOTECHNOLOGY?
In 1959, the physicist Richard Feynman gave a lecture to the American Physical Society called "There's Plenty of Room at the Bottom." The focus of his speech was the field of miniaturization and how he believed man would create increasingly smaller, more powerful devices. In 1986, K. Eric Drexler wrote "Engines of Creation" and introduced the term nanotechnology.

THE WORLD OF NANOTECHNOLOGY
Nanotechnology is rapidly becoming an interdisciplinary field. Biologists, chemists, physicists and engineers are all involved in the study of substances at the nanoscale. One of the exciting and challenging aspects of the nanoscale is the role that quantum mechanics plays in it. The rules of quantum mechanics are very different from classical physics, which means that the behavior of substances at the nanoscale can sometimes contradict common sense by behaving erratically. You can't walk up to a wall and immediately teleport to the other side of it, but at the nanoscale an electron can -- it's called electron tunneling. Substances that are insulators, meaning they can't carry an electric charge, in bulk form might become semiconductors when reduced to the nanoscale. Much of nanoscience requires that you forget what you know and start learning all over again.

So what does this all mean? Right now, it means that scientists are experimenting with substances at the nanoscale to learn about their properties and how we might be able to take advantage of them in various applications. Engineers are trying to use nano-size wires to create smaller, more powerful microprocessors. Doctors are searching for ways to use nanoparticles in medical applications.

PRODUCTS WITH NANOTECHNOLOGY
Many products on the market already benefit from nanotechnology, including sunscreen, self-cleaning glass, clothing, scratch-resistant coatings, antimicrobial bandages, and swimming pool cleaners and disinfectants. Before long, we'll see dozens of other products that take advantage of nanotechnology, ranging from Intel microprocessors to bio-nanobatteries -- capacitors only a few nanometers thick. While this is exciting, it's only the tip of the iceberg as far as how nanotechnology may impact us in the future.

THE FUTURE OF NANOTECHNOLOGY
In the world of "Star Trek," machines called replicators can produce practically any physical object, from weapons to a steaming cup of Earl Grey tea. Long considered to be exclusively the product of science fiction, today some people believe replicators are a very real possibility. They call it molecular manufacturing, and if it ever does become a reality, it could drastically change the world.

Nanotubes are 100 times more resistant than the steel, conduct electricity much better than copper, and are also excellent heat conductors.




Atoms and molecules stick together because they have complementary shapes that lock together, or charges that attract. Just as with magnets, a positively charged atom will stick to a negatively charged atom. As millions of these atoms are pieced together by nanomachines, a specific product will begin to take shape. The goal of molecular manufacturing is to manipulate atoms individually and place them in a pattern to produce a desired structure. The first step would be to develop nanoscopic machines, called assemblers, that scientists can program to manipulate atoms and molecules at will. Rice University Professor Richard Smalley points out that it would take a single nanoscopic machine millions of years to assemble a meaningful amount of material, so for molecular manufacturing to be practical, you would need trillions of assemblers working together simultaneously. Eric Drexler believes that assemblers could first replicate themselves, building other assemblers; each generation would build another, resulting in exponential growth until there are enough assemblers to produce objects (see the toy calculation at the end of this section). Trillions of assemblers and replicators could fill an area smaller than a cubic millimeter, and could still be too small for us to see with the naked eye. Assemblers and replicators could work together to automatically construct products, and could eventually replace all traditional labor methods. This could vastly decrease manufacturing costs, thereby making consumer goods plentiful, cheaper and stronger. Eventually, we could be able to replicate anything, including diamonds, water and food. Famine could be eradicated by machines that fabricate food to feed the hungry.

Nanotechnology may have its biggest impact on the medical industry. Patients will drink fluids containing nanorobots programmed to attack and reconstruct the molecular structure of cancer cells and viruses. There is even speculation that nanorobots could slow or reverse the aging process, and life expectancy could increase significantly. Nanorobots could also be programmed to perform delicate surgeries -- such nanosurgeons could work at a level a thousand times more precise than the sharpest scalpel. By working on such a small scale, a nanorobot could operate without leaving the scars that conventional surgery does. Additionally, nanorobots could change your physical appearance: they could be programmed to perform cosmetic surgery, rearranging your atoms to change your ears, nose, eye color or any other physical feature you wish to alter.

Nanotechnology also has the potential to have a positive effect on the environment. For instance, scientists could program airborne nanorobots to rebuild the thinning ozone layer. Nanorobots could remove contaminants from water sources and clean up oil spills. Manufacturing materials using the bottom-up method of nanotechnology also creates less pollution than conventional manufacturing processes. Our dependence on non-renewable resources would diminish with nanotechnology: cutting down trees, mining coal or drilling for oil may no longer be necessary, since nanomachines could produce those resources.
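As a toy illustration of the exponential-replication argument above, assuming each assembler makes one copy of itself per generation:

import math

target = 1_000_000_000_000                  # one trillion assemblers
generations = math.ceil(math.log2(target))  # doublings needed: 2**n >= target
print(generations)                          # 40 -- a trillion copies in only 40 doublings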

Many nanotechnology experts feel that these applications are well outside the realm of possibility, at least for the foreseeable future. They caution that the more exotic applications are only theoretical. Some worry that nanotechnology will end up like virtual reality -- in other words, the hype surrounding nanotechnology will continue to build until the limitations of the field become public knowledge, and then interest (and funding) will quickly dissipate.

NANOTECHNOLOGY CHALLENGES, RISKS AND ETHICS
The most immediate challenge in nanotechnology is the need to learn more about materials and their properties at the nanoscale.

There are now over 600 products available that use nanotechnology.




Universities and corporations across the world are rigorously studying how atoms fit together to form larger structures. They are still learning about how quantum mechanics impacts substances at the nanoscale. Because elements at the nanoscale behave differently than they do in their bulk form, there's a concern that some nanoparticles could be toxic. Some doctors worry that nanoparticles are so small that they could easily cross the blood-brain barrier, a membrane that protects the brain from harmful chemicals in the bloodstream. If we plan on using nanoparticles to coat everything from our clothing to our highways, we need to be sure that they won't poison us.

There are some hefty social concerns about nanotechnology too. Nanotechnology may also allow us to create more powerful weapons, both lethal and non-lethal. In terms of capturing the public imagination, unleashing hordes of self-replicating devices that escape from the lab and attack anything in their path is always going to be popular. Unfortunately, nature has already beaten us to it, by several hundred million years: naturally occurring nanomachines that can not only replicate, and mutate as they do so in order to avoid our best attempts at eradication, but can also escape their hosts and travel with alarming ease through the atmosphere. No wonder viruses are the most successful living organisms on the planet, with most of their 'machinery' being well into the nano realm. However, there are finite limits to the spread of such 'nanobots', usually determined by their ability, or lack thereof, to convert a sufficiently wide range of material needed for future expansion. Indeed, the immune systems of many species, while unable to completely neutralize viruses without side effects such as runny noses, are effective in dealing with this type of threat as a result of the wide range of different defenses available to a large, complex organism when confronted with a single-purpose, nano-sized one. For any threat from the nano world to become a danger, it would have to include far more intelligence and flexibility than we could possibly design into it. Our understanding of genomics and proteomics is primitive compared with that of nature, and is likely to remain that way for the foreseeable future. Anyone determined to worry about nanoscale threats to humanity should consider mutations in viruses such as HIV that would allow transmission via mosquitoes, or deadlier versions of the influenza virus; these deserve far more concern than anything nanotechnology may produce.

CONCLUSIONS
Nanotechnology, like any other branch of science, is primarily concerned with understanding how nature works. As discussed, our efforts to produce devices and manipulate matter are still at a very primitive stage compared to nature's. Nature has the ability to design highly energy-efficient systems that operate precisely and without waste, fix only that which needs fixing, do only that which needs doing, and no more. We do not, although one day our understanding of nanoscale phenomena may allow us to replicate at least part of what nature accomplishes with ease. While many branches of what now falls under the umbrella term nanotechnology are not new, it is the combination of existing technologies with our new-found ability to observe and manipulate at the atomic scale that makes nanotechnology so compelling from scientific, business and political viewpoints.

For business, nanotechnology is no different from any other technology: it will be judged on its ability to make money. This may be in the lowering of production costs by, for example, the use of more efficient or more selective catalysts in the chemicals industry; in developing new products such as novel drug delivery mechanisms or stain-resistant clothing; or in the creation of entirely new markets, as the understanding of polymers did for the multi-billion euro plastics industry. Politically, it can be argued that fear is the primary motivation. The US has opened up a commanding lead in terms of economic growth, despite recent setbacks, as a result of the growth and adoption of information technology. Of equal significance is its lead in military technology, as demonstrated by the use of unmanned drones for both surveillance and assault in recent conflicts. Nanotechnology promises far more significant economic, military and cultural changes than those created by the Internet, and with technology advancing so fast, and development and adoption cycles becoming shorter, playing catch-up will not be an option for governments that are not already taking action.

Maybe the greatest short-term benefit of nanotechnology is in bringing together the disparate sciences, physical and biological, which due to the nature of education often have had no contact since high school. Rather than nanosubmarines or killer nanobots, the greatest legacy of nanotechnology may well prove to be the unification of scientific disciplines and the resultant ability of scientists, when faced with a problem, to call on the resources of the whole of science, not just of one discipline.

Governments around the world invested $4.1bn in nanotechnology R&D last year, compared with $432m in 1997 and $1.5bn in 2001.


BIOMETRICS
Jeevan Shrestha..Batch 2005..C

BIOMETRICS IS an emerging field concerned with methods for uniquely recognizing individuals based upon one or more intrinsic physical or behavioral traits. It is also the development of statistical and mathematical methods applicable to data analysis problems in the biological sciences. The term biometrics is derived from the Greek words bio (life) and metric (to measure). According to current terms of use, biometrics deals with the use of computer technology and signal processing for measuring and analyzing a person's physiological or behavioral characteristics for identification and verification purposes.
Biometric characteristics can be divided into two main classes:
1. Physiological characteristics are related to the shape of the body. The oldest traits, which have been used for more than 100 years, are fingerprints. Other examples are face recognition, hand geometry and iris recognition.
2. Behavioral characteristics are related to the behavior of a person. The first characteristic to be used, and still widely used today, is the signature. More modern approaches are the study of keystroke dynamics and of voice.
Other biometric strategies are being developed, such as those based on gait (way of walking), retina, hand veins, ear canal, facial thermogram, DNA, odor and scent, and palm prints.

Biometrics is used in two major ways. Identification is determining who a person is; it involves taking the measured characteristic and trying to find a match in a database containing records of people and that characteristic. It is often used in determining the identity of a suspect from crime scene information. Verification is determining whether a person is who they say they are; it involves taking the measured characteristic and comparing it to the previously recorded data for that person.

THE BIOMETRIC SYSTEM



Biometric devices consist of a sensor, an interface, a DSP processor, driver software, and a database that stores the biometric data for comparison with previous records. The match points are processed using an algorithm into a value that can be compared with biometric data in the database. In a fingerprint authentication system, an example of a biometric system, placement of the finger on the sensor normally produces a surface image of the finger, and a print is taken to create the template. In more advanced systems, however, the live cells of the finger don't come in contact with the sensor. Instead, as soon as the finger is placed on the sensor, a modulated RF wave is generated and conducted into the live cells of the finger surface by the sensor. This RF-modulated wave impinges on the live, conducting surface of the finger and is reflected back.
Figure: System block diagram of a biometric system -- an input sensor feeds captured images through an interface to a DSP processor; a matcher compares them, using a system threshold, against the user template stored in a database, and a decision module outputs the authentication result.
The reflected RF-modulated wave is then processed by the DSP processor to identify the subject. The sensor's RF field operates under the control of an algorithm that periodically looks for the presence of a fingertip; if the sensor doesn't detect a finger it reverts to sleep mode, and when a finger starts to move across the sensor, the processor scans the image.

ANALYSIS
Analysis is carried out by converting the biometric of an individual into digital form and comparing it with the individual's template. The measurement scale adopted for analysis is called the Hamming distance (HD). The Hamming distance gives the expert a measure of how closely the measured and digitized biometric matches the digitized template. The scale runs from 0 to 1: two identical bit strings give an HD of 0, while two totally dissimilar bit strings give an HD of 1. In real-world biometric systems, however, the biometric measure is usually quoted in terms of FAR and FRR. The FAR (false accept rate) measures the percentage of invalid users who are incorrectly accepted as genuine users, while the FRR (false reject rate) measures the percentage of valid users rejected as impostors. FAR and FRR are traded against each other, and a common single measure is the rate at which the accept and reject error rates are equal. In the current context, the analysis is done using image processing techniques, which are part of digital signal processing.
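As a small illustration of the Hamming-distance measure described above (the bit strings and the 0.25 decision threshold are invented for the example, not taken from a real system):

def hamming_distance(bits_a, bits_b):
    """Fraction of positions at which two equal-length bit strings differ:
    0.0 = identical, 1.0 = completely different."""
    if len(bits_a) != len(bits_b):
        raise ValueError("templates must be the same length")
    mismatches = sum(a != b for a, b in zip(bits_a, bits_b))
    return mismatches / len(bits_a)

template = "1011001110001011"     # stored, digitized template
sample   = "1011011110001001"     # freshly measured biometric
hd = hamming_distance(template, sample)
print(f"HD = {hd:.3f}")                        # 0.125 here
print("match" if hd < 0.25 else "no match")    # assumed system threshold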

CIA, FBI, and NASA have all been the implementers of retinal scanning.

COMPARISON
Biometric systems are compared against each other for performance based on seven criteria:
- Universality: each person should have the characteristic.
- Uniqueness: how well the biometric separates one individual from another.
- Permanence: how well a biometric resists aging.
- Collectability: ease of acquisition for measurement.
- Performance: accuracy, speed, and robustness of the technology used.
- Acceptability: degree of approval of the technology.
- Circumvention: ease of use of a substitute.
A comparison of the above seven criteria for various biometric inputs is shown in the table below.

Table: Comparison of various biometric technologies (H = High, M = Medium, L = Low)

Biometrics           Univ.  Uniq.  Perm.  Coll.  Perf.  Accept.  Circ.*
Face                  H      L      M      H      L      H        L
Fingerprint           M      H      H      M      H      M        H
Hand geometry         M      M      M      H      M      M        M
Keystrokes            L      L      L      M      L      M        M
Hand veins            M      M      M      M      M      M        H
Iris                  H      H      H      M      H      L        H
Retinal scan          H      H      M      L      H      L        H
Signature             L      L      L      H      L      H        L
Voice                 M      L      L      M      L      H        L
Facial thermograph    H      H      L      H      M      H        H
Odor                  H      H      H      L      L      M        L
DNA                   H      H      H      L      H      L        L
Gait                  M      L      L      H      L      H        M
Ear canal             M      M      H      M      M      H        M

* Circumvention is the one criterion where low is desirable instead of high.

There is no one perfect biometric that fits all needs. There are, however, some common characteristics needed to make a biometric system usable. First, the biometric must be based upon a distinguishable trait, i.e., two biometric samples of different individuals should not be alike; technologies such as face or iris recognition have come into widespread use due to their distinguishable traits. Another key aspect is how user-friendly a system is: the process should be quick and easy, such as having a picture taken by a video camera, speaking into a microphone, or touching a fingerprint scanner. Low cost is also important, which includes the initial cost of the sensor and the matching software involved, and the life-cycle support cost of providing system administration and an enrollment operator.

NO SYSTEM IS PERFECT, of course. Any biometric system is prone to two basic types of errors: a false positive and a false negative. The false accept rate (FAR), or false match rate (FMR), is the probability that the system incorrectly declares a successful match between the input pattern and a non-matching pattern in the database. In the case of a false negative, on the other hand, the system fails to make the match between a biometric input and its stored template. This is due to the fact that two different samples of the same biometric identifier are never identical: the sensed biometric data may be noisy or distorted (for example, a cut in a finger leaves the fingerprint with a scar, and noisy data can also result from improperly maintained sensors), or the biometric data acquired during authentication may be very different from the data used to generate the template during enrollment (for example, the user might blink during iris capture at authentication). Some errors might be avoided by using improved sensors.
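The FAR/FRR trade-off just described can be illustrated with a toy threshold sweep. The match scores below are made-up Hamming distances, not data from any real system:

genuine_scores  = [0.08, 0.12, 0.15, 0.22, 0.31]   # same person vs. own template
impostor_scores = [0.28, 0.35, 0.41, 0.47, 0.52]   # different person vs. template

def far_frr(threshold):
    far = sum(s <= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s >  threshold for s in genuine_scores)  / len(genuine_scores)
    return far, frr

for t in (0.20, 0.30, 0.40):
    far, frr = far_frr(t)
    print(f"threshold {t:.2f}: FAR = {far:.0%}, FRR = {frr:.0%}")
# Raising the threshold lowers FRR but raises FAR; the crossing point
# (20% at threshold 0.30 here) is the equal error rate noted earlier.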

Fingerprints on paper, cardboard and unfinished wood can last for up to forty years unless exposed to water.


For instance, optical sensors capture fingerprint details better than capacitive fingerprint sensors. A system based on multiple traits can achieve a very low error rate, but in both cases there is a trade-off between cost and accuracy, and a system based on multiple traits could be burdensome for the legitimate user as well.

CURRENT ISSUES AND CONCERNS
As technology advances, and with the passage of time, more private companies and public utilities may use biometrics for safe, accurate identification. These advances are likely to raise concerns such as:
1. Identification preservation and privacy infringement. The biggest concern is the fact that once a biometric source has been compromised, it is compromised for life, because users can never change their biometric source. Another concern is how a person's biometric, once collected, can be protected. Some argue that if a person's biometric data is stolen it might allow someone else to access personal information or financial accounts, in which case the damage could be irreversible.
2. Biometric theft, impersonation and victimization. There are concerns about whether our personal information taken through biometric methods can be misused, tampered with, or sold, e.g. by criminals stealing, rearranging or copying the biometric data. Also, the data obtained using biometrics can be used in unauthorized ways without the individual's consent.
3. Physical harm to the individual from biometric instruments. Some believe this technology can cause physical harm to an individual using the methods, or that the instruments used are unsanitary. For example, there are concerns that retina scanners might not always be clean.

MAJOR AREAS OF APPLICATION
In the early days, biometric systems, particularly automatic fingerprint identification systems (AFIS), were widely used in forensics for criminal identification. Due to recent advancements in biometric sensors and matching algorithms (through image processing), biometric authentication has been deployed in a large number of civilian and government applications. Biometrics is being used for physical access control, computer log-in, welfare disbursement, international border crossing (immigration and naturalization), health care, forensic voice and tape analysis, personal identification, and more. It can be used to verify a customer during transactions conducted via telephone and the Internet (electronic commerce and electronic banking).

In automobiles, biometrics is being adopted to replace keys for keyless entry and keyless ignition. Due to increased security threats, the ICAO (International Civil Aviation Organization) has approved the use of e-passports (passports with an embedded chip containing the holder's facial image and other traits).

STANDARDIZATION EFFORTS
As biometric use is a global phenomenon, standardization of data is being attempted on a universal scale. A Common Biometric Exchange File Format (CBEFF) for biometric data interchange and interoperability is under development by the International Biometric Industry Association (IBIA). The IBIA is also attempting to integrate measurement schemes to enhance the reliability and use of biometric data. Much of biometrics standardization was initiated in the USA, but in 2002 a new ISO/IEC JTC1 sub-committee, SC37, was established, and it first met in Orlando in December 2003. ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. In the field of information technology, ISO and IEC have established Joint Technical Committee 1 (ISO/IEC JTC 1) on Information Technology. The goal of this committee is to ensure a high-priority, focused, and comprehensive approach worldwide for the rapid development and approval of formal international biometric standards. These standards are necessary to support interoperability and data interchange among applications and systems, and to facilitate the rapid deployment of significantly better open-systems, standards-based security solutions for purposes such as homeland defense and the prevention of ID theft.

At last, we can hope that multidisciplinary research teams in both industry and academia can design the right blend of technologies to create practical biometric applications and can integrate them into large systems without introducing additional vulnerabilities.

REFERENCES
April 2008, Electronics for You, pp 32-36
http://www.globalsecurity.org
http://www.spectrum.ieee.org

The first known example of biometrics was a form of finger printing used in China in the 14th century to distinguish the young children.



CURRENT STATUS & FUTURE OF LED BASED LIGHTING SYSTEM


Dr. Bhupendra Bimal Chhetri..Mr. Diwakar Bista

BACKGROUND
OVER THE past hundred years, incandescent and gas discharge technology occupied almost all general lighting applications. The Light Emitting Diode (LED) has introduced a new notion in the field of lighting and is considered to be one of the most viable lighting solutions. Right now, LEDs in lighting are still a bit of a gimmick: they are popular and used quite widely, but often they're used just because they are a new technology. Even so, there are already many examples where LEDs have been used in the retail and leisure markets, and of course we are now starting to see substantial LED installations in general-purpose lighting applications as well. The use of LEDs is growing rapidly in the developed world in numerous applications. The developing world, by contrast, is not usually considered a prime market for LEDs, but there are an estimated 1.6 billion people who lack access to the electricity grid, and LED-based lighting systems are considered an excellent option for them compared with lamps that burn kerosene or other fuels.

MARKET STATUS AND FUTURE POSSIBILITIES
According to LED Magazine, the LED market grew by 9.5% in 2007 to reach $4.6 billion, which is notably higher than the 6% growth seen in 2005 and 2006. Similarly, growth of 12% is expected in 2008, and a compound annual growth rate of 20% over the next five years would take the market to $11.5 billion in 2012. Currently the use of LEDs is very much focused on signs and displays, cell phones and automobile lighting; general illumination covered only 7% of the total market in 2007. The illumination sector grew rapidly in 2007, however, with revenue more than 60% higher than in 2006, and the illumination market is expected to reach $1.37 billion by 2012.
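The growth figures quoted above are mutually consistent, as this quick compound-growth check shows:

base_2007 = 4.6                      # LED market in 2007, billions of USD
cagr = 0.20                          # quoted compound annual growth rate
projection_2012 = base_2007 * (1 + cagr) ** 5
print(f"${projection_2012:.1f}bn")   # ~$11.4bn, matching the $11.5bn projection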

CONTEMPORARY APPLICATIONS

MOBILE APPLICATIONS
Mobile applications occupy the largest market segment, at 44%. LEDs are generally used as backlights or indicators in mobile devices such as laptop computers, cell phones, MP3 players, gaming devices and GPS devices.

SIGNS AND DISPLAYS
LEDs have created a transformation in how electronic signs, message reader boards and signal displays are made. LED displays have numerous applications: traffic signs, message reader boards, electronic billboards, spectacular sign displays, sports stadium scoreboards, signaling indicators, large-scale outdoor public art and so on. These applications occupy 17% of the LED market.

AUTOMOTIVE LIGHTING
In automobiles, LEDs today are mainly used for secondary functions such as dashboard lighting, turn signals and side markers. With the development of high-brightness LEDs, however, they are now also being used as headlamps. The current share of automotive applications in the LED market is 15%.

Gallium LED life is rated at 40,000 hours, which equates to twenty years of useful service when operated during regular business hours.




ILLUMINATION
LEDs still have some drawbacks (high price, low color rendering index, lack of standards, etc.) limiting their use for illumination, but illumination nevertheless occupies 12% of the LED market. Street lighting, general-purpose household lighting and decorative lighting are some of the fields where LED-based luminaires are used. Recently Boeing introduced LED-based lighting in its Boeing 787 commercial aircraft.

OTHER APPLICATIONS
There are various other applications, such as machine vision lighting, lighting for medical and dental observation, phototherapy lighting, and marine and airport signaling. The field of application is no doubt going to grow, because LEDs are now being used in areas where two to three years ago they wouldn't have been considered.

CONCLUSION
For widespread use of LEDs in general illumination and other applications, significant improvement in performance is demanded. As the LED research and manufacturing community continues to improve the efficiency of LEDs, manufacturers should go beyond talk of lm/W (lumen output per watt) and instead quantitatively describe system performance. They also need to be able to evaluate other characteristics, such as color temperature, color rendering index and glare, by the same standards used for other products. With all these requirements fulfilled, LEDs will undoubtedly replace conventional luminaires and may stand alone in lighting applications.

The authors, from the Department of Electrical and Electronic Engineering, are members of the Asia Link Project ENLIGHTEN.

SEEE ACTIVITIES

The Society of Electrical and Electronics Engineers (SEEE) has been actively involved in improving the personal and professional qualities of the students. It is a non-profit, non-political organization established with the motive of volunteering. It coordinates and organizes various welfare activities for the students of electrical & electronic engineering, conducts social welfare activities, and improves interaction with students of other departments of the university and of other institutions in Nepal. Every event of SEEE is purely dedicated to the holistic development of its members. SEEE started its activities by first appointing its executive board, which included Bijaya Ghimire as President, Bishal Silwal as Vice-President, Pravesh Kafle as Secretary, Suresh Joshi as Treasurer, and other executive members. The first event of SEEE was the orientation program organized for the first year students on 7 Sept 2007. The welcome program for the first year students was organized in the same month under the supervision of the department, and the Dean of the School of Engineering was the chief guest of the program.

At the same program, the fortnightly newsletter of SEEE, Tech-Brief, was launched. The most awaited annual event of SEEE, the SEEE Running Shield Football Tournament, was held in November; the deciding match of the league was between third year White and fourth year, which the third year won 2-1. SEEE also participated in Environment Week-08, organized by other clubs of KU. After that, the SEEE quiz contest was organized: fifteen teams participated and the top four advanced to the final, which was held on 13 June 2008 and won by Bikram Tiwari and Bibek Shrestha. After the quiz, the SEEE Girls' Football competition was held on 20 June 2008; the first and second year students combined against the third and fourth year students, and the senior team won the match 2-0. The SEEE annual newsletter Encipher will be launched on the annual day of SEEE. SEEE thanks all the students and faculty for their support and active participation in its activities.

Global lighting energy use is significant, totaling approximately $230 billion per year.


REACTOR POWERED CRAFT


Jayaram Karkee..Batch 2005..P&C

The possibility of life on different planets of our solar system is a matter of great challenge and curiosity for the upcoming generation. Most developed nations are interested in making huge investments in such complicated, time-consuming projects, which is indeed praiseworthy, and robotic technology is fundamental to them. NASA's Project Prometheus is one such ambitious mission: it would orbit three planet-sized moons of Jupiter -- Callisto, Ganymede and Europa -- amid speculation that these moons harbor vast oceans beneath their icy surfaces. The Jupiter Icy Moons Orbiter mission would orbit each of these moons for extensive investigations of their makeup, history and potential for sustaining life, using pioneering techniques of electric propulsion powered by a nuclear fission reactor. The mission has three top-level science goals:
1. To determine the thicknesses of the ice layers, mapping where organic compounds and other chemicals of biological interest lie on the surface and whether the moons do indeed have subsurface oceans.
2. To investigate the origin and evolution of these moons by determining their interior structures, surface features and surface compositions, in order to interpret their evolutionary histories (geology, geochemistry and geophysics).
3. To determine the radiation environments around these moons and the rates at which the moons are weathered by material hitting their surfaces. As Callisto, Ganymede and Europa all orbit within the powerful magnetic field of Jupiter, they display varying effects from the natural radiation, charged particles and dust within this environment.
The instruments it would carry are a radar instrument for mapping the thickness of surface ice, a laser instrument for mapping surface elevations, and other instruments such as a camera, an infrared imager, a magnetometer, and instruments to study the charged particles, atoms and dust that the spacecraft will encounter near each moon. It would power its ion thrusters with a nuclear fission reactor and a system for converting the reactor's heat to electricity. The generous electrical power supply available from the onboard nuclear system will run the high-powered instruments and boost the data-transmission rate back to Earth.


After entering orbit around Jupiter, the spacecraft will orbit Callisto, then Ganymede, and finally Europa. Europa, discovered by Galileo Galilei in 1610, is one of the most apt places in our solar system where primordial extraterrestrial life could exist. An autonomous underwater robot called DEPTHX is being developed to find life on Jupiter's icy moon Europa. The Deep Phreatic Thermal Explorer (DEPTHX) robot weighs 3,300 pounds and has more than 100 sensors, 36 onboard computers, and 16 thrusters and actuators. Its navigation, control, and decision-making capabilities allow the DEPTHX robot to explore unknown 3D territory, make a map of what it sees, and use that map to return home; it can also identify zones where microbial life might exist, and can autonomously collect microbial life in an aqueous environment. This robot was successfully tested in one of the world's deepest sinkholes, Sistema Zacatón in northeastern Mexico, where it obtained numerous samples of water and the gooey biofilm that coated the cenote walls. Water chemistry, including temperature, pH, salinity, conductivity and sulphide concentration, is sampled to identify gradients or values indicating biological activity. More detailed examination of these regions is conducted by computer analysis of video images from two video cameras. The system also includes a pump to circulate water through a flow cell and capture video images from a microscope. Sequences of microscope images are analyzed to detect the motion and frequency of microorganisms, and color images of the wall surface are analyzed to characterize the color and texture of wall surface regions. Analysis software compares these attributes to a model of what is normally observed. When image analysis indicates signs of interesting biological activity, the DEPTHX computers direct the robot to collect samples. This is done with a hydraulically operated robotic arm that can reach six feet beyond the edge of the vehicle. The robotic arm carries a video camera, a tube for collecting water samples and a unique coring tool for collecting solid samples of algae mat or other growth on the wall. Water samples are collected in five different containers under control of the computer, and water and solid samples are subsequently returned to the surface for analysis. To allow sufficient development and ground-testing time, the mission is not proposed for launch before the year 2011. This first reactor-powered spacecraft with revolutionary capability would certainly create new hope and open a new chapter in the space age.

Each year, between 20 and 50 million tons of electronic waste is generated globally. Most of it winds up in the developing world


UPPER TAMAKOSHI: INTRODUCTION & PRESENT STATUS

Namrata T. Shrestha..Batch 2004..P&C
UPPER TAMAKOSHI Hydroelectric Project (UTKHEP) is a run-of-river type project with daily peaking pondage. The project will utilize a gross head of 820 m with a design discharge of 44 m3/s to generate a designed output of 309 MW. The project is located on the Tamakoshi River in Lamabagar VDC, Dolakha-1, Nepal. The Tamakoshi River is one of the seven major tributaries of the Sapta Koshi River. The intake site is located at Lamabagar, which lies about 90 km east of Kathmandu and about 6 km south of the Nepal-China border. The powerhouse is located at Gongar Gaon in Lamabagar VDC, near the confluence of the river and the Gongar Khola. Of the run-of-river projects investigated in Nepal, UTKHEP is the most attractive one due to its low specific investment costs and its limited environmental impacts. The Environmental Impact Assessment (EIA) report for generation has been approved by the Government of Nepal (GoN), while the EIA report for the transmission line is in the final stage of approval. The main construction of the project will commence in 2009 (FY2065/66) and will be completed by 2013 (FY2069/70), with a construction period of 4.5 years.
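As a rough cross-check of these figures, the standard hydropower relation P = eta * rho * g * Q * H reproduces the designed output if an overall plant efficiency of about 0.87 is assumed (the efficiency is not stated in the article):

rho, g = 1000.0, 9.81        # water density (kg/m3), gravitational acceleration (m/s2)
Q, H = 44.0, 820.0           # design discharge (m3/s), gross head (m)
eta = 0.87                   # assumed overall efficiency (turbine, generator, losses)

P = eta * rho * g * Q * H    # power in watts
print(f"P = {P / 1e6:.0f} MW")   # ~308 MW, close to the designed 309 MW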
PROJECT FEATURES
A concrete dam with an overflow weir will create an approximately 2 km long daily pondage at a Full Supply Level (FSL) of 1985 m elevation (EL). The intake is located on the right bank and is integrated into the dam structure. Two parallel settling basins, each about 250 m long and with a cross-sectional area of 192 m2, are provided immediately after the intake structure. A low-pressure headrace tunnel of length 7,170 m and cross-sectional area 29 m2 starts from the basins and ends at the beginning of the pressure shaft. The pressure shaft is about 2,100 m long in total. An underground powerhouse with four Pelton turbines is provided at 1165 m EL, and the 2,500 m long tailrace tunnel is designed as a free-surface-flow tunnel. A 47.0 km long, 220 kV double-circuit transmission line is required from Gongar to the Khimti substation to connect UTKHEP power to the Integrated Nepal Power System (INPS) via Khimti. The construction period of the project is 4.5 years and its construction cost is about US$ 392 million (excluding interest during construction).

SPECIAL FEATURES OF THE PROJECT

The special natural features observed in this project are:
- a 300 m high natural dam;
- good geology at the tunnel and powerhouse sites;
- very good minimum flow in the river during the dry season, and low flood discharge during the wet season;
- very low sediment in the Tamakoshi River at the intake site;
- minimal environmental effects.

NECESSITY OF THE PROJECT
Based on the forecast, it has been estimated that 467 MW of power will be in deficit in 2013. Even with the completion of the 70 MW Middle Marshyangdi HEP in 2008 and other planned projects like Chameliya, Kulekhani III and other Independent Power Producer (IPP) projects to be completed by 2012, there will still be a deficit of more than 300 MW in 2013. Therefore, in order to meet the peak demand in 2013, it is necessary to complete the Upper Tamakoshi HEP in 2013.

CORPORATE STRUCTURE
[Figure: Corporate structure of Upper Tamakoshi Hydropower Limited (UTKHPL). The Board of Directors comprises representatives of the Nepal Electricity Authority, representatives from debt financing institutions, and, after the completion of the project, representatives from the other shareholder groups (employees, EPF contributors, local people and the general public); a Chief Executive Officer heads the project.]

Hydropower potential in the Koshi Basin is identified at about 10,860 MW with Saptakoshi having capacity of 3000 MW.


REDUCING BROADCAST DELAY


Compiled by: Binita Shrestha..Batch 2005..C

INTRODUCTION
MOST COMPUTER programs use temporary storage areas in memory called buffers. Typically, a program buffers data before sending it on to a device, such as a hard drive or printer. Digital media components likewise buffer small chunks of media so the data can be processed before rendering it, or before sending it to a hard disk drive or computer over a network.

In a broadcast scenario, buffers play a significant role in assuring the quality of digital media as computers and network devices compress, encode, distribute, decompress, and render large amounts of data, very quickly, in real time. In general, the larger the buffer, the better the end-user experience. However, there is one main disadvantage of large buffers in a live or broadcast-streaming scenario: buffers cause delays, or latencies. A broadcast delay is the difference in time between the point when live audio and video is encoded and when it is played back, and it is created primarily by the buffers that store digital media data. For example, if the buffers store a total of ten seconds of data, Windows Media Player will show an event occurring ten seconds late. Often a delay of less than twenty seconds is not a problem. However, when timing is important, Windows Media components provide a number of ways that you can minimize broadcast delay without causing a significant loss of image and sound quality.

BUFFERING STREAMING MEDIA DATA
In general, when describing the size of a buffer, we usually think in terms of storage space. For example, a word processing program might use 50 kilobytes (KB) to temporarily store a document. When describing a Windows Media buffer, on the other hand, we usually think in terms of time, because audio and video are temporal media; for example, we say a typical buffer holds five seconds of data. When streaming and playing content, the buffers are constantly changing: new data is continuously being added to the buffer in real time, as older, processed data is sent on and deleted from the buffer. Data enters the top of the buffer at zero seconds and leaves the bottom of the buffer five seconds later. During the five seconds that the data is in the buffer, a program or algorithm can change, copy, delete, rearrange, or store the data. For example, the encoder buffers data so that the codec can analyze and compress it. Windows Media stores digital media in buffers for three basic reasons: processing, Fast Start, and error correction.

PROCESSING
By default, Windows Media Encoder adds a five-second buffer to a session in order to provide the Windows Media Video codec with the optimum amount of data to compress a stream. While playing on-demand content, the buffer is not noticed; when encoding a live stream, however, the buffer increases the broadcast delay. The codec needs a buffer because it analyzes video content over a period of time. For example, it analyzes movement or change from one video frame to the next in order to reduce the bit rate of the content as much as possible and produce high-quality images. To compress data at one point, it might use the analysis of data that occurs two seconds later. Therefore, the more data it can analyze, the better it can compress the stream. Reducing the buffer shortens the delay, but can result in lower quality images.

FAST START
Fast Start reduces start-up delay: the period of time the user must wait for the Player buffer to fill with data when first connecting to a stream. If the buffer is set to hold five seconds of data, the wait is at least that length of time, possibly longer depending on network conditions and the bit rate of the stream. Fast Start causes data to be sent faster than the actual bit rate of a stream when a user first connects, in order to quickly fill the buffer. If the user has high-speed network access, buffering time is reduced and the Player begins playing the stream sooner; after the buffer is filled, the bit rate returns to normal. In order to reduce start-up delay, then, Fast Start creates a buffer, which in doing so increases the broadcast delay.
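The way these time-based buffers chain into an end-to-end broadcast delay can be sketched in a few lines of Python. The five-second depths are the illustrative defaults discussed above; real deployments tune each stage separately:

from collections import deque

class TimeBuffer:
    """FIFO that releases a chunk only after holding it for depth_s seconds."""
    def __init__(self, depth_s):
        self.depth_s = depth_s
        self.queue = deque()               # (arrival_time, chunk) pairs

    def push(self, t, chunk):
        self.queue.append((t, chunk))

    def pop_ready(self, t):
        """Return chunks that have spent depth_s seconds in the buffer."""
        out = []
        while self.queue and t - self.queue[0][0] >= self.depth_s:
            out.append(self.queue.popleft()[1])
        return out

# Chain: encoder (5 s) -> server (5 s) -> player (5 s)
pipeline = [TimeBuffer(5), TimeBuffer(5), TimeBuffer(5)]
total_delay = sum(b.depth_s for b in pipeline)
print(f"End-to-end broadcast delay ~ {total_delay} s")   # 15 s before network delays

pipeline[0].push(0.0, "chunk-0")
print(pipeline[0].pop_ready(5.0))    # ['chunk-0'] -- released after 5 s in the buffer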

The MP3 standard for audio compression first gained a foothold in college dorm rooms in the late 1990s.




ERROR CORRECTION

In order for audio and video to play back properly, the digital media must be received in a continuous, unbroken stream. The Internet is an especially challenging network environment. Because of the tremendous size of the Internet, the large amount of data that must be routed and carried, the unpredictable load and bandwidth, and the great distances the data must travel, network conditions are less than favorable for the delivery of streaming content. Though conditions are improving with higher speeds and faster devices, it would not be possible to stream audio and video on the Internet or any congested or slow network without a system of error correction. To mitigate errors caused by data being lost, delayed, and received incomplete, Windows Media Services and Windows Media Player maintain buffers. Streaming media data is sent in numbered packets that include the addresses of the sender and receiver. The Player uses a buffer to arrange the packets that arrive according to their numbers. If a packet is broken or does not arrive in time, the Player requests a new packet from the Windows Media server. The server can then resend the packet from its buffer. The following figure illustrates that even when incoming data stops, playback can continue because it uses the data left in the buffer. When incoming data continues, it is added to the buffer and presentation continues smoothly even though packets might have been received in the wrong order and at different times.

If packets do not arrive in time, the Player edits out the missing data from the buffer. There is a small jump in the presentation where the missing data is, but without the buffer there would be a hole in the presentation.

MINIMIZING DELAY

Buffers are an essential part of creating, delivering, and rendering streaming media content, enabling Windows Media components to provide a high-quality end-user experience. However, there are trade-offs. The larger the buffer size, the longer the broadcast delay. The default buffer time values in Windows Media components balance quality and delay for most situations. However, there are three buffer settings you can modify to minimize broadcast delay. The following figure shows the buffers in the encoder, server, and player computers, and how each adds to the delay.

The encoder buffers content to enable the codec to optimize compression; the server buffers content to enable Fast Start; and the Player buffer provides smooth playback for the user. Assuming each buffer stores five seconds of data, and taking into account delays that might be added by network devices, the total broadcast delay can be between 15 and 20 seconds. You cannot eliminate the buffers completely; Windows Media components could not work without any buffers. However, by minimizing the buffer sizes using the procedures in the next three sections, you can reduce broadcast delay to approximately six to nine seconds, depending on network conditions. As you reconfigure settings, test playback and watch for problems. For example, when you reduce the encoder buffer, make sure you are comfortable with the quality of the images. The codec has less motion data to work with, so you might notice more video artifacts, and movement might not appear as smooth. If your content does not include fast motion scenes, you may not even notice any degradation of quality. With a small Player buffer size, playback is more likely to be interrupted by buffering, which is caused by the Player running out of data. Buffering might not be a problem for users with stable, high-speed Internet access. However, users with slow connections may actually need to increase their buffer size. Remember, you are balancing quality with broadcast delay, and by reducing delay you may have to make compromises. If you do not have to minimize delay, use the default buffers. To reduce overall broadcast delay, you can reduce the size of the encoder and Player buffers, and disable the Fast Start buffer on your Windows Media server.
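To make the buffer-size trade-off concrete, here is a minimal sketch (an illustration added here, not from the Windows Media documentation) that simulates a playout buffer: one-second media chunks arrive with random network jitter, and the player waits until its buffer holds a target number of seconds before starting playback. The jitter bound and buffer targets are assumed values.

```python
import random

def simulate_playout(buffer_target_s, jitter_s=1.5, duration_s=60, seed=1):
    """Simulate a player buffer: larger targets delay start-up but
    reduce the chance of underruns (rebuffering pauses)."""
    rng = random.Random(seed)
    # Each one-second chunk of media arrives up to 'jitter_s' seconds late.
    arrivals = [t + rng.uniform(0, jitter_s) for t in range(duration_s)]
    start = arrivals[buffer_target_s - 1]   # wait until the buffer fills
    underruns = sum(1 for t, a in enumerate(arrivals)
                    if a > start + t)       # chunk was needed before it arrived
    return start, underruns

for target in (1, 2, 5):
    start, underruns = simulate_playout(target)
    print(f"{target}s buffer: start-up delay {start:.2f}s, underruns {underruns}")
```

Running this shows the trade-off described above: a one-second target starts playback almost immediately but rebuffers often, while a five-second target starts later and plays through smoothly.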

Codecs can be broken down into two simple subsets: Lossless (ALAC, FLAC, AIFF, WMA Lossless) and Lossy (MP3, AAC, WMA).


MEMRISTORS - THE MISSING LINK

WHAT IS a memristor? Memristors are the fourth basic building block of circuits, joining the resistor, the capacitor, and the inductor, and they exhibit their unique properties primarily at the nanoscale. The name is a concatenation of "memory resistor": theoretically, memristors are a type of passive circuit element that maintains a relationship between the time integrals of the current through and the voltage across a two-terminal element. Thus, a memristor's resistance varies according to the device's memristance function. Memristors were first proposed in 1971 by Professor Leon Chua, a scientist at the University of California, Berkeley. However, Hewlett-Packard's (HP's) Stan Williams helped develop memristors physically, and the team at HP has already started work on them.

Set it and forget it: a memristor remembers whether it is on or off. Shown here: 17 memristors in a row.

Memristors are so called because they have the ability to remember the amount of charge that has flowed through them after the power has been switched off. Isn't it incredible? Just imagine: this unique property would let a computer effectively avoid a start-up process. As memory would always store its most recent state, computers could be instantly ready for use. The breakthrough would also reduce power consumption by saving the need to reload data, and could also lead to human-like learning processes. Today, most PCs use dynamic random access memory (DRAM), which loses data when the power is turned off. But a computer built with memristors could allow PCs that start up instantly, laptops that retain sessions after the battery dies, or mobile phones that can last for weeks without needing a charge. If you turn on your computer, it will come up instantly where it was when you turned it off. This new circuit element solves many problems with circuitry today, since it improves in performance as you scale it down to smaller and smaller sizes. Memristors will enable very small nanoscale devices to be made without generating all the excess heat that scaling down transistors is causing today. Let's hope memristors can help the memory revolution to continue! Now virtually every electronics textbook will have to be revised to include the memristor and the new paradigm it represents for electronic circuit theory.

Anukraman Neupane, Batch 2005
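For readers who want to see the "memory" in action, below is a minimal numerical sketch of the HP-style linear ion-drift memristor model; all parameter values (the limiting resistances, thickness, mobility, and the 1 V write pulse) are illustrative assumptions rather than measured device data.

```python
# Illustrative HP-style linear ion-drift parameters (assumed values)
R_ON, R_OFF = 100.0, 16_000.0   # limiting resistances (ohms)
D = 10e-9                       # device thickness (m)
MU = 1e-14                      # ion mobility (m^2 s^-1 V^-1)

w = 0.1 * D                     # state variable: doped-region width
dt = 1e-4                       # time step (s)

def memristance(w):
    """Memristance interpolates between R_ON and R_OFF with the state w."""
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

m_start = memristance(w)

# Apply a 1 V write pulse for 1 s, then remove the power for 1 s
for step in range(20_000):
    t = step * dt
    v = 1.0 if t < 1.0 else 0.0        # drive voltage
    i = v / memristance(w)             # instantaneous current
    w += MU * (R_ON / D) * i * dt      # state follows charge: q = integral of i dt
    w = min(max(w, 0.0), D)            # state stays inside the device

# With the power off the state stops moving, so the resistance is retained
print(f"memristance: {m_start:.0f} ohms before, {memristance(w):.0f} ohms after the pulse")
```

Once the drive is removed, the current is zero, the state variable no longer changes, and the device holds its last resistance: exactly the retention property described above.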

DESIGN & EVALUATION OF LED BASED LIGHTING SOLUTIONS


LIGHTING IS the most common end use of electricity generation. Conventional lighting equipment, such as incandescent lamps, fluorescent lamps, halogen lamps and sodium vapor lamps, is inefficient in one factor or another. White Light Emitting Diodes (WLEDs) are among the most efficient devices for converting electricity into light, rivaling the efficiency of low-pressure sodium lamps. Like incandescent lights, they have the advantage of running directly from low-voltage batteries without the need for fancy, expensive electronics to convert voltage levels. They are totally solid-state devices and last much longer than any other luminaires. But WLED technology is in its early stages, so there are still no well-defined standards (lumen requirement, efficiency, life, Color Rendering Index, Color Temperature, etc.) for WLEDs and WLED-based luminaires.

Mr. Diwakar Bista

The ongoing lighting research at the Department of Electrical and Electronics Engineering treats WLEDs as a new technology which can replace conventional lighting equipment. The research is mainly focused on the characterization of WLEDs and on developing a technical standard for WLEDs which can be used in Nepal for different lighting projects. The research also covers the development of test instruments for measuring WLED parameters (I-V characteristic, linear and angular illumination pattern, spectral power distribution). The research was initiated as a part of the Asia Link project ENLIGHTEN, which is a collaboration of Helsinki University of Technology (HUT), Finland, and Vilnius University (VU), Lithuania, with Kathmandu University (KU) from Nepal.
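As one small illustration of the I-V characterization mentioned above, the sketch below evaluates the ideal Shockley diode equation for a hypothetical white LED; the saturation current and ideality factor are curve-fit style assumptions, and series resistance is ignored, so the numbers are illustrative only (real devices are limited by series resistance at high drive).

```python
import math

# Illustrative parameters for a hypothetical white LED (assumed, not measured)
I_S = 3e-29     # saturation current (A), chosen so the curve turns on near 3 V
N_IDEAL = 2.0   # ideality factor
V_T = 0.02585   # thermal voltage at about 300 K (V)

def led_current(v):
    """Ideal Shockley diode equation: I = I_S * (exp(V / (n * V_T)) - 1)."""
    return I_S * (math.exp(v / (N_IDEAL * V_T)) - 1.0)

# Sweep part of an I-V curve, as a characterization rig might do
for v in (2.6, 2.8, 3.0, 3.2, 3.4):
    print(f"V = {v:.1f} V  ->  I = {led_current(v) * 1e3:10.4g} mA")
```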

LED lights generate very little heat, therefore they are cool to the touch and can be left on for hours without incident or consequence if touched.


POWER LINE COMMUNICATION

Ms. Sirsha Bhattarai

TRANSMISSION OF information (data, voice, control signals, etc.) through the existing power cables is referred to as Power Line Communication (PLC). It is a new and competitive technology, being supported by utilities and manufacturers around the globe. Broadband application of PLC refers to high data rate applications, such as internet over power lines (e.g. data rates of 3 Mbps in uploading and up to 200 Mbps in downloading), while its narrowband applications refer to low data rate applications that are useful in the control and telemetry of electrical equipment like switches, meters and domestic appliances, operating over a frequency range of 10 kHz - 500 kHz. While the technology of PLC avoids the necessity of installing extra wiring or cables in order to transfer information, its hostile and noisy environment presents some challenging signal-conditioning tasks for a system engineer. The impedance and attenuation characteristics of power lines are time-varying as well as frequency-varying. Also, it is an inherently noisy environment; every time a device turns on or off, it introduces some noise in the form of short impulses. Energy-saving devices often introduce noisy harmonics into the line. The system must be designed to deal with all such kinds of naturally occurring disruptions. In Kathmandu University, we have been dealing with narrowband applications of PLC. In the past few years, remote control and monitoring, and point-to-point data and voice communication, have been successfully tested in the research laboratory. The current research work is mainly focused on data transmission through the power line from a computer to a terminal device. In addition, it involves a new concept of indoor visible-light communication, which refers to the use of visible light as a carrier for information signals. After data is correctly received from the power line, it can be fed to WLED lamps. These lamps, used for lighting purposes, can also be used to transfer information over a wireless link to other terminal devices utilizing optical sensors, and then suitably displayed on a display device, like an LCD. The waveforms and a simplified block diagram of the system are presented below.

Fig 1: PLC Modem

Fig 2: Optical Communication System

White LEDs offer advantageous properties like high brightness, low power consumption and minimal heat generation compared to incandescent bulbs. White LEDs are now being regarded as the next generation of lighting. The concept of using LEDs for lighting as well as communications yields many interesting applications. Also, indoor optical communication exhibits several advantages over Wi-Fi and IR. White-LED radiation is not subject to spectrum licensing regulations because it does not cause any electromagnetic interference, as opposed to RF communication systems. Shadowing effects are also minimized by distributing LEDs across the room. In addition, LEDs are cheaper sources compared to the lasers used in IR. Power line communication integrated with visible-light communication can provide very high data rate communications access for indoor networking. The combination of these two techniques has perhaps unlocked a new dimension in the last-mile system. Ms. Bhattarai is a Research Fellow at Kathmandu University - Happy House Foundation.

Fig 3: FSK Modulated Data

Fig 4: Transmitted and Received Data (Power Line)
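Since Fig 3 shows FSK-modulated data, here is a minimal sketch of how a binary FSK waveform of the kind used on narrowband PLC links can be generated; the tone frequencies, bit rate and sample rate are illustrative assumptions, not the laboratory's actual parameters.

```python
import math

# Illustrative binary FSK parameters (assumed, not the actual lab values)
F0, F1 = 60_000.0, 70_000.0   # tone frequencies for bit 0 / bit 1 (Hz)
BIT_RATE = 1_200              # bits per second
FS = 480_000                  # sample rate (Hz)

def fsk_modulate(bits):
    """Return the FSK waveform (a list of samples) for the given bit string."""
    samples_per_bit = FS // BIT_RATE
    out = []
    for n, bit in enumerate(bits):
        f = F1 if bit == "1" else F0          # pick the tone for this bit
        for k in range(samples_per_bit):
            t = (n * samples_per_bit + k) / FS
            out.append(math.sin(2 * math.pi * f * t))
    return out

wave = fsk_modulate("1011")
print(f"{len(wave)} samples generated for 4 bits")
```

A receiver recovers the data by deciding, bit period by bit period, which of the two tones dominates, which makes FSK relatively robust against the impulsive noise of the power line described above.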

1997 was the year of the first test of bidirectional data signal transmission over the electrical supply network.


Sudoku
[Sudoku puzzle grid - not reproducible in this text edition]
Brain Teaser

Three men decided to split the cost of a hotel room. The hotel manager gave them a price of $30. The men split the bill evenly, each paying $10, and retired to their room. However, the manager realized that it was a Wednesday night, which meant the hotel had a special: rooms were only $25. He had overcharged them $5! He promptly called the bellboy, gave him five one-dollar bills and told him to return the money to the men. When the bellboy explained the situation to the men, they were so pleased at the honesty of the establishment that they promptly tipped the bellboy $2 of the $5 he had returned and each kept $1 for himself.

Problem: Each of the three men ended up paying $9 (their original $10, minus $1 back), totalling $27, plus $2 for the bellboy makes $29. Where did the extra dollar go?

Solution: The faulty reasoning lies in the addition at the end. 3 x $9 equals $27, but the $2 tip is included in that $27, so it makes no sense to add the $2 to $27 to make $29. They paid $25 for the hotel room, $2 for the tip ($27), and then got $1 back each to make the original $30.

Dark Conspiracy Involving Electrical Power Companies Surfaces

For years the electrical utility companies have led the public to believe they were in business to supply electricity to the consumer, a service for which they charge a substantial rate. The recent accidental acquisition of secret records from a well known power company has led to a massive research campaign which positively explodes several myths and exposes the massive hoax which has been perpetrated upon the public by the power companies. The most common hoax promoted the false concept that light bulbs emitted light; in actuality, these 'light' bulbs actually absorb DARK, which is then transported back to the power generation stations via wire networks. A more descriptive name has now been coined; the new scientific name for the device is DARKSUCKER. Here, we introduce a brief synopsis of the darksucker theory, which proves the existence of dark and establishes the fact that dark has great mass, and further, that the dark particle (the anti-photon) is the fastest known particle in the universe. Apparently, even the celebrated Dr. Albert Einstein did not suspect the truth... that just as COLD is the absence of HEAT, LIGHT is actually the ABSENCE of DARK... scientists have now proven that light does not really exist!

The basis of the darksucker theory is that electric light bulbs suck dark. Take for example the darksuckers in the room where you are right now. There is much less dark right next to the darksuckers than there is elsewhere, demonstrating their limited range. The larger the darksucker, the greater its capacity to suck dark. Darksuckers in a parking lot or on a football field have a much greater capacity than the ones used in the home, for example. As with all manmade devices, darksuckers have a finite lifetime caused by the fact that they are not 100% efficient at transmitting collected dark back to the power company via the wires from your home, causing dark to build up slowly within the device. Once they are full of accumulated dark, they can no longer suck. This condition can be observed by looking for the black spot on a full darksucker when it has reached maximum capacity of untransmitted dark... you have surely noticed that dark completely surrounds a full darksucker because it no longer has the capacity to suck any dark at all.

HUMOR
VIVA Subject: Electrical Engineering
Examiner: Why is a thicker conductor necessary to carry a current in AC as compared to DC?
Candidate: An AC current goes up and down (drawing a sinusoid) and requires more space inside the wire, so the wire has to be thicker.
Examiner: Why does a capacitor block DC but allow AC to pass through?
Student: See, a capacitor is like this ---| |---, OK. DC comes straight, like this -------, and the capacitor stops it. But AC goes UP DOWN UP DOWN and jumps right over the capacitor!
Examiner: What is a step-up transformer?
Student: A transformer that is put on top of electric poles.
Examiner (smiling): And then what is a step-down transformer?
Student (hesitantly): Uh - a transformer that is put in the basement or in a pit?
Examiner (pouncing): Then what do you call a transformer that is installed on the ground? (Student knows he is caught -- can't answer)
Examiner (impatiently): Well?
Student (triumphantly): A stepless transformer, sir!

What in the world is electricity?


Electricity is actually made up of extremely tiny particles, called electrons, that you cannot see with the naked eye unless you have been drinking. Electrons travel at the speed of light, which in most American homes is 110 volts per hour. This is very fast. In the time it has taken you to read this sentence so far, an electron could have traveled all the way from San Francisco to Hackensack, New Jersey, although God alone knows why it would want to. The five main kinds of electricity are alternating current, direct current, lightning, static, and European. Most American homes have alternating current, which means that the electricity goes in one direction for a while, and then goes in the other direction. This prevents harmful electron buildup in the wires. Although we modern persons tend to take our electric lights, radios, mixers, etc., for granted, hundreds of years ago people did not have any of these things, which is just as well because there was no place to plug them in. Then along came the first Electrical Pioneer, Benjamin Franklin, who flew a kite in a lightning storm and received a serious electrical shock. This proved that lightning was powered by the same force as carpets, but it also damaged Franklin's brain so severely that he started speaking only in incomprehensible maxims, such as "A penny saved is a penny earned." Eventually he had to be given a job running the post office. After Franklin came a herd of Electrical Pioneers whose names have become part of our electrical terminology: Myron Volt, Mary Louise Amp, James Watt, Bob Transformer, etc. These pioneers conducted many important electrical experiments. For example, in 1780 Luigi Galvani discovered (this is the truth) that when he attached two different kinds of metal to the leg of a frog, an electrical current developed and the frog's leg kicked, even though it

was no longer attached to the frog, which was dead anyway. Galvani's discovery led to enormous advances in the field of amphibian medicine. Today, skilled veterinary surgeons can take a frog that has been seriously injured or killed, implant pieces of metal in its muscles, and watch it hop back into the pond just like a normal frog, except for the fact that it sinks like a stone. But the greatest Electrical Pioneer of them all was Thomas Edison, who was a brilliant inventor despite the fact that he had little formal education and lived in New Jersey. Edison's first major invention, in 1877, was the phonograph, which could soon be found in thousands of American homes, where it basically sat until 1923, when the record was invented. But Edison's greatest achievement came in 1879, when he invented the electric company. Edison's design was a brilliant adaptation of the simple electrical circuit: the electric company sends electricity through a wire to a customer, then immediately gets the electricity back through another wire, then (this is the brilliant part) sends it right back to the customer again. This means that an electric company can sell a customer the same batch of electricity thousands of times a day and never get caught, since very few customers take the time to examine their electricity closely. In fact the last year any new electricity was generated in the United States was 1937; the electric companies have been merely re-selling it ever since, which is why they have so much free time to apply for rate increases. Today, thanks to men like Edison and Franklin, and frogs like Galvani's, we receive almost unlimited benefits from electricity. For example, in the past decade scientists have developed the laser, an electronic appliance so powerful that it can vaporize a bulldozer 2000 yards away, yet so precise that doctors can use it to perform delicate operations on the human eyeball, provided they remember to change the power setting from Bulldozer to Eyeball.


MIMO WIRELESS COMMUNICATION

Mr. Madhup Khatiwada

This article will present an overview of Multiple-Input Multiple-Output (MIMO) systems as an emerging technique used in wireless communication. It will put forward some background research leading to the discovery of the enormous potential of MIMO. It will then acquaint the reader with the principles of MIMO along with the MIMO equation and theoretic capacity, which will depict MIMO as an extraordinarily bandwidth-efficient approach to wireless communication.

INTRODUCTION

IN RECENT years, one of the growing sectors of the communications industry has been wireless communication, providing high-speed and better quality information exchange between mobile devices. Some of the major applications of wireless communications include internet-based mobile communications, videoconferencing, distance learning, wireless home appliances, etc. [1]. However, supporting these applications using wireless techniques poses a significant technical challenge. From a technical point of view, the vital challenge faced in wireless communications lies in the physical properties of wireless channels. In addition to all these challenges, there is an increasing demand for higher data rates, better quality service, and higher network capacity. Multiple Input Multiple Output (MIMO) technology promises a cost-effective way to provide these capabilities. A MIMO system comprises a wireless communication link with multiple antenna elements in both transmitter and receiver. MIMO takes advantage of the multipath scenario by sending a single transmission from the transmitter antenna array to bounce along multiple paths to a receiver. Transmitting data on multiple signal paths increases the amount of information transferred in a system [2]. Thus the system is able to handle more information, and at a faster rate, than a single-input single-output (SISO) scenario with a single path.

MIMO was conceived in the early 1970s by researchers at Bell Laboratories while they were trying to address the bandwidth limitations that signal interference caused in large, high-capacity cables. However, it was not thought to be practical in that era because of the high expense incurred in generating the processing power necessary to handle MIMO signals. Later, with the advancement of technology, cost reductions in signal-processing schemes, and the need to meet increasing demands, researchers were compelled to reconsider MIMO for wireless systems [2]. Multiple-input multiple-output (MIMO) architecture has emerged as a popular solution to the incessant quest for increased capacity in modern wireless communication systems. It promises enormous capacity with a marginal increase in cost and complexity. This technology is now poised to penetrate large-scale commercial wireless products and networks such as broadband wireless access systems, wireless local area networks (WLAN), third-generation (3G) networks and beyond [3]. A MIMO system comprises a wireless communication link with multiple antenna elements in both transmitter and receiver, as illustrated in Figure 1. A vector of signals is transmitted from the transmitter antenna array simultaneously and then travels through the wireless channel. These signals are then received at the receiver antenna array, which combines the signals in such a way that the quality of the communication in terms of BER and data rate is significantly improved for each MIMO user. MIMO systems are based on a space-time signal processing architecture where time is complemented with space, or distance, which is obtained from the spatial distribution of antennas in the transmitter as well as the receiver. Space-time processing in MIMO systems basically increases data rate (spatial multiplexing) and link reliability (space-time coding).

Figure 1: A MIMO Communication System Model

There are more than 2.3 billion mobile subscribers worldwide.




An interesting feature of MIMO is that it exploits the multipath scenario, a pitfall of wireless communication, to its advantage. MIMO benefits from it by transmitting data on multiple signal paths, which increases the amount of information transferred in a system, which in turn increases the number of users it can serve. Some of the key advantages of MIMO communication can be identified as follows. Extraordinarily high spectral efficiency (from 30-40 bit/s/Hz) and large fade level reduction (10-30 dB) are the prime advantages [4]. Among the other benefits that the system provides are reduction in co-channel interference (5-15 dB), a flexible architecture through Digital Signal Processing (DSP), and diversity order, which is two times higher in MIMO than in Single-Input Multiple-Output (SIMO).

MIMO SYSTEM EQUATION

In a single-user MIMO model with N transmit and M receive antennas, the MIMO system equation is given in matrix form by [5]

r = Hs + n ..................................(1)

where H is the channel matrix of size M x N, r is the M x 1 received signal vector, s is the N x 1 transmitted signal vector, and n is an M x 1 additive white Gaussian noise vector with zero mean and variance σ². The channel matrix H is the factor by which the signal is amplified and is also known as the channel coefficient. The element h_mn in the channel matrix H represents the complex gain between transmit antenna n and receive antenna m.

MIMO CAPACITY

MIMO systems promise substantial improvements over conventional systems in both quality-of-service (QoS) and transfer rate. In addition to these benefits, it has been shown that MIMO offers absolute gains in terms of capacity bounds. Some fundamental results which compare single-input-single-output (SISO), single-input-multiple-output (SIMO), multiple-input-single-output (MISO) and MIMO capacities show that capacity grows linearly with m = min(M, N) in MIMO rather than logarithmically as in the case of SIMO and MISO. In the case of MIMO with antenna diversity at both receiver and transmitter, the capacity expression is given by [6, 7]

C = log2[det(I_M + (ρ/N) H H†)] ..................(2)

where (·)† means transpose-conjugate, ρ is the SNR at each receive antenna, H is the M x N channel matrix, and I_M is the M x M identity matrix. It should be noted here that N equal-power uncorrelated sources are assumed at the transmitter.
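To make equation (2) and the linear capacity growth concrete, here is a minimal sketch (an illustration, not part of the original article) that averages equation (2) over random i.i.d. Rayleigh channel draws; the antenna counts and the 10 dB SNR are assumed example values.

```python
import numpy as np

def mimo_capacity(M, N, snr_db, trials=2000, seed=0):
    """Average C = log2 det(I_M + (rho/N) H H^dagger) over random
    i.i.d. Rayleigh channels H (M x N, unit-variance complex Gaussian)."""
    rng = np.random.default_rng(seed)
    rho = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((M, N))
             + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
        A = np.eye(M) + (rho / N) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(A).real)   # det is real for this Hermitian A
    return total / trials

# Capacity grows roughly linearly with min(M, N) at a fixed SNR
for n in (1, 2, 4):
    print(f"{n}x{n} at 10 dB SNR: {mimo_capacity(n, n, 10.0):.2f} bit/s/Hz")
```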

CONCLUSION

This article has reviewed the major features of MIMO links for use in future wireless networks, and the great capacity gains which can be realized from MIMO. It is clear that the successful integration of MIMO algorithms into commercial standards such as 3G, WLAN, and beyond will help solve the problem of increasing demand for higher data rates, better quality service and higher network capacity.

REFERENCES
[1] A. Goldsmith, Wireless Communications, 1st ed. Cambridge: Cambridge University Press, 2005.
[2] G. Lawton, "Is MIMO the future of wireless communications?" IEEE Computer, vol. 37, no. 7, pp. 20-22, July 2004.
[3] X. Wang and H. Poor, Wireless Communication Systems: Advanced Techniques for Signal Reception, 1st ed. New York: Prentice Hall PTR, 2003.
[4] G. D. Golden, G. J. Foschini, R. A. Valenzuela, and P. W. Wolniansky, "Detection algorithm and initial laboratory results using the V-BLAST space-time communication architecture," Electron. Lett., vol. 35, no. 1, pp. 14-15, Jan. 1999.
[5] D. Gesbert, M. Shafi, D. S. Shiu, P. Smith, and A. Naguib, "From theory to practice: An overview of MIMO space-time coded wireless systems," IEEE Journal on Selected Areas in Communications, vol. 21, no. 3, pp. 281-302, Apr. 2003.
[6] J. H. Winters, "Optimum combining in digital mobile radio with co-channel interference," IEEE Journal on Selected Areas in Communications, vol. 2, no. 4, pp. 528-539, July 1984.
[7] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas," Wireless Pers. Commun., vol. 6, pp. 311-335, Mar. 1998.

Mr. Khatiwada is a Lecturer at the Department of Electrical and Electronic Engineering.

In addition to talking for nearly two trillion minutes in the first half of 2007, wireless users were sending more than 28 billion messages a month!


HISTORY OF ELECTRICITY
IF YOU asked most people who discovered electricity, they would answer Benjamin Franklin. On the surface this is partially correct, and I certainly wouldn't want to take anything away from Mr. Franklin, for he was truly brilliant. That said, the fact is, evidence has been uncovered that shows there were batteries over 2000 years ago. A clay pot sits in the Baghdad museum that was discovered in 1936. This pot contained a copper plate and tin alloy and had an iron rod sealed with asphalt. The iron showed signs of acidic corrosion. By filling this pot with an acidic solution such as vinegar, an electric current could be produced. What it was used for is not known, although some speculation would include some form of medicinal value, though no one knows for sure. In any event, it was forgotten by humankind for well over 1000 years. Until the 1600s, there was no real experimentation with electricity. Up to this point static electricity was played with and it could be produced, but nobody really knew what it was or understood it. By rubbing amber, even as early as ancient Greece, it could be made to attract fibers or dust. In the late 1600s a man named Otto von Guericke of Germany is credited with doing some of the first experiments with what we now know is static electricity. Guericke created a machine that could produce static electricity, which would enable scientists to experiment with this new-found electricity. He also noticed the attribute of electromagnetism.

1729 - Stephen Gray: Discovered the conductive properties of electricity. With experiments using static electricity, he found that certain materials such as silk did not conduct electricity. For the first time electricity was seen as a fluid element that could travel or be hampered from travelling. Some of his work is related to insulation and insulators that would protect future scientists from being injured.

1752 - Benjamin Franklin: Presented the idea that electricity had positive and negative elements and that the flow was from positive to negative. He also, through his most famous experiment with a kite, proved that lightning was a form of electricity.

1767 - Joseph Priestley: Discovered that electricity followed Newton's inverse-square law of gravity.

1786 - Luigi Galvani: The Italian physician demonstrated the electrical basis of nerve impulses when he made frog muscles twitch by jolting them with a spark from an electrostatic machine.

1800 - Alessandro Volta: Developed the first electric battery. Of course we know it wasn't really the first, but this is the way history views it. He invented the Voltaic pile by placing dissimilar metals (copper and zinc, or silver and zinc) together, separated by brine-soaked cloth. This cell created electric current. The theory was called contact tension.

1831 - Michael Faraday: Discovered magnetic induction. The work Faraday did in his experiments is probably among the most important and led to many advances in the understanding of electricity. His work led to the creation of the generator, enabling us to make electricity. If there is a father of electricity, I think Michael Faraday is it.

1879 - Thomas Edison: Invented the electric light bulb. This again is one of those cases where he wasn't the first to discover that electricity could create light, but using what others found, he invented the best way to accomplish it. Edison found that by using a carbon filament in a glass globe devoid of oxygen, he could make a continuous light, an amazing invention that would change the world.

1881 - Louis Latimer: Gets a patent for the first light bulb with a carbon filament.

1885 - George Westinghouse: Develops and finds uses for AC.

1888 - Heinrich Hertz: Discovers electric waves and how to measure them.

1889 - Nikola Tesla: Develops the first real AC motor and invents the Tesla Coil.

1899 - Valdemar Poulsen: Invents the first magnetic recordings, using magnetized steel tape as the recording medium, the foundation for both mass data storage on disk and tape and the music recording industry.

1902 - Guglielmo Marconi: Transmits radio signals from Cornwall to Newfoundland, the first radio signal across the Atlantic Ocean.

Electric Firsts
1889 - Electric Streetcar in Seattle
1902 - Electric Flashlight
1903 - Electric Iron
1906 - Radios with Tuners
1907 - Domestic Vacuum Cleaner
1909 - Electric Toaster
1913 - Electric Refrigerator
1919 - Electric Pop-Up Toaster
1923 - Television

During the whole of his life, Edison received only three months of formal schooling, and was dismissed from school as being retarded.


INTERVIEW WITH MR. GURUNG


Assistant Professor Mr. Krishna Gurung has been with us since August 2000. Born in Ghelanchowk in Manang, Mr. Gurung shares with us the moments and experiences from his childhood through to his life as a university educator. Excerpts:

Our students are second to none when they graduate.

Starting off, tell us what your childhood was like. And how did you manage to keep up with your studies and become an engineer?
Actually, I was brought to Kathmandu at an early age for my studies. But I was born in Ghelanchowk village in Manang district. I came to Kathmandu in 2040. I went to Ananda Kuti Vidhya Peeth School and passed my SLC in 2048. On passing my SLC, I joined Amrit Science Campus. I then joined IOE Pulchowk for a week or so. In the meantime I got an Indian Government seat to study in India through the Indian Embassy, and I completed my Bachelor's degree in engineering from RIT, Jamshedpur in 1999.

Why did you choose the academic field and not others?
I had a desire to become an academician from my early childhood. I thought of joining the British army, but my propensity towards academia was unmatched. The fact that my parents and grandparents were quite educated must have had some role in shaping my childhood dream, which I am still pursuing.

How did you end up at Kathmandu University?
I worked at Swet Bhairav Power Supply Pvt. Ltd. for 6 months after the completion of my bachelor's degree. I used to voluntarily teach school children during my free time. I really enjoyed disseminating my knowledge to others. It was at that point that I decided to become an academician. Then I heard about the vacancy at Kathmandu University and I applied. And I've been with you since August 2000.

Talking about the university, what changes do you find in students and faculties from then and now?
I am not in a position to make comments about the whole university. Talking about our department, the faculties are as friendly and helpful as they were before. The only shortcoming that I can see is that they are always young. The School of Engineering in particular has not been able to retain some of its very good faculties. Otherwise the department would have a very strong team of faculties by now. Perhaps this is an inherent trait of a young and growing university. If you look at the brighter side, then those faculties who have gone for further studies will, hopefully, return to KU after the completion of their studies and will make our department very strong. Regarding our students, they are well disciplined as always. To compare the students then and now, the students in the past didn't have as many resources available to them as the students now. The students now have easy access to electronic media, I mean the internet, which can be quite handy and resourceful. Frankly speaking, the students then were really absorbed in the field because they had to work hard to get the opportunity to study engineering, but now, although some are really interested, I don't find such zeal in every one of them. Most of the students now take their opportunity to study engineering for granted.

How do you compare the students of KU with the students of other reputed institutes?
To be frank with you, KU has not been able to become the first choice for excellent students looking for studying


engineering yet. There are many reasons for that. But I sincerely consider those who chose KU very fortunate. Our students find themselves in a very unique environment conducive to creative thinking besides the usual, or rather conventional, teaching and learning. Through their continued efforts, self-motivation and project works, our students somehow mould themselves into someone very competitive in the market. So our students are second to none when they graduate. They are even better in some aspects of practical applications. Our students have maintained the good reputation of this institution not only in Nepal but also abroad.

Do you feel that the project and lab works will help us in the practical fields?
Project and lab works do help you, in an indirect way. They help to clarify and consolidate your theoretical knowledge and make you better prepared for any kind of future challenge. It's hard to find the direct application though, because it is impossible to simulate all the situations you may face in the practical fields. There are many diverse areas you may go to work in after your graduation. Moreover, the technology keeps on changing. But the fundamental things remain the same. It's this little fundamental thing that will help you succeed in any field of your choice. If your foundation is strong, I can't imagine what could ever go wrong.

KU is well known for its strong project works. What do you think is more important: the core theory or project works?
As an engineering student, strength in both theoretical and practical knowledge is essential. Although the students here have strong practical know-how, I feel that there is plenty of room to strengthen their theoretical knowledge, because theory is as important as practice. The theories will help you a lot in research works. To strengthen yourself in both domains, your knowledge shouldn't be limited to what the teachers teach. You should have the desire to learn more and more on your own as well.

On to your Master's degree: you received an MS by Research degree from this university. How does your Master's by Research degree differ from a regular ME degree? What was your research? What are your future plans?
Master's by Research focuses more on research work and has fewer regular subjects to study. It requires one to go into the depth of a particular area of study, but in the regular ME, preference is given to studying many subjects with limited depth. My thesis title was "Three Phase Self Excited Induction Generator with a Single Phase Load". This generator is widely used in micro-hydro systems in Nepal. As an academician, I want to do a PhD.

GPA has always been a problem for our students. How does the GPA affect us in future?
Yes, I am aware that the students in our department are getting comparatively lower grades than others. Once you get a job, the GPA will not be of much importance. All that matters is how much you can deliver. In many countries, companies offer jobs based on the student's university rather than the grades. Even in Nepal, local companies have started valuing our graduates. But the grades are important for further studies abroad, especially if you are competing for financial aid. That is where our graduates may suffer due to their lower grades compared to students from other institutes or other departments.

You've seen a lot of our graduates over the years. What do you say about the availability of jobs for our graduates?
Certainly it isn't easy to get a proper job of utmost satisfaction. Getting a job is certainly competitive. The initial phase of getting a job is very difficult. I too have suffered from it. But I haven't heard of any of our graduates remaining unemployed for more than a few months. Either they go abroad for further studies or they take up a job.

Most of the engineering colleges have internships in their final year. Wouldn't it be better if it were so in our university as well?
Of course it would be better, but this can't be done by the University alone. The industries where you can work as a trainee are limited. The situation is worse in the case of electrical and electronic engineering. And it's hard to guarantee a place for all the students too. An industry might take the students for 2 or 3 years, but what after that? It might not take students forever. But with the efforts of students like you, you can make it possible. In the college where I studied, the University could guarantee the internship of only the top 10 students. The rest of the students had to find a place on their own. We can start the internship program, but we need help from all the sectors, including industries, the University and the students.

Is it necessary to pursue higher studies to work, or will a Bachelor's degree be sufficient? What would be better for further studies - Nepal or abroad?
To work in industry, it's not necessary to have further study. You can if you like. But if you want to get into the academic field, higher study is important. Go anywhere to get higher education (let it be Nepal or abroad), but come back to serve your country. Love your country!

Will it be better if we serve our country after we become graduates, or can we do anything while we're studying? What have you done for our country as an engineer? What can we do?
Your question seems to be interestingly long. Let me answer it one by one. It's not that undergraduate students can't do anything at the moment. A lot of students get involved in some way. But the contribution will definitely be more concrete after you graduate. In our time, there were limited engineering colleges in Nepal where we could get quality education. Many aspiring students had to go abroad to study engineering. I myself studied in India. After completing my bachelor's degree I came back to Nepal and have since devoted myself to imparting engineering knowledge to students. In that way I have served my country by helping it produce quality manpower for its future development. I'm satisfied with what I'm doing. My contribution in educating my students is by far the most important contribution I have made. There are many things you can do to serve your country. It's not necessarily limited to the engineering field.

Nepal has been touted as having one of the highest per capita hydropower potentials, at a staggering 43 GW. What are your views on the inability to harness such enormous energy?
We should have harnessed more electricity than we are producing today. But this is not entirely an engineering issue; it is more of a political issue. Important and major hydropower projects haven't started due to the laxity of the government. Now the situation might change, and hopefully the projects will go as planned.

Government policy favors micro hydro projects a lot. What's your take on micro hydro projects and major hydro projects?
Thanks to the combined effort of the government and the private sector, Nepal has become one of the world's fastest growing nations in developing and implementing micro hydro systems. I think the micro and major hydro projects should go simultaneously for the development of hydropower. The micro-hydro projects are helping a lot in rural electrification. Small villages in remote areas cannot wait for years or decades for the national grid to reach them. On the other hand, micro hydro alone cannot fulfill the bulk energy demand of urban and industrial areas. So the micro and major hydro projects should go side by side.

Finally, we've come to the end of this wonderful interview. Is there any suggestion you'd like to give to your students?
Well, this has been a wonderful experience for me. My dear students, whatever you do, always remember that it's good to be an important person, but it's even more important to be a good person.

TECHNOLOGY

TINY FUEL CELL MIGHT REPLACE BATTERIES IN LAPTOP COMPUTERS, PORTABLE ELECTRONICS


If you're frustrated by frequently losing battery power in your laptop computer, digital camera or portable music player, then take heart: a better source of juice is in the works. Chemists at Arizona State University in Tempe have created a tiny hydrogen-gas generator that they say can be developed into a compact fuel cell package that can power these and other electronic devices, from three to five times longer than conventional batteries of the same size and weight. The generator uses a special solution containing borohydride, an alkaline compound that has an unusually high capacity for storing hydrogen, a key element that is used by fuel cells to generate electricity. In laboratory studies, a prototype fuel cell made from this generator was used to provide sustained power to light bulbs, radios and DVD players, the researchers say.

The fuel cell system can be packaged in containers of the same size and weight as conventional batteries and is recharged by refilling a fuel cartridge, they say. Research on these battery-replacement fuel cells, which they claim are safer for the environment than regular batteries, was described today at the 232nd national meeting of the American Chemical Society. "We're trying to maximize the usable hydrogen storage capacity of borohydride in order to make this fuel cell power source last longer," says study leader Don Gervasio, Ph.D., a chemist at the University's Biodesign Institute, Center for Applied NanoBioScience. "That could lead to the longest lasting power source ever produced for portable electronics." One of the challenges in fuel cell development is finding hydrogen-rich compounds for the fuel source. Many different hydrogen sources have been explored for use

One semiconductor plant can require enough electricity to power a city of 60,000 and several million gallons of water a day.


in fuel cells, including metal hydride sponges and liquids such as gasoline, methanol, ethanol and even vegetable oil. Recently, borohydride has shown promise as a safe, energy-dense hydrogen storage solution. Unlike the other fuel sources, borohydride works at room temperature and does not require high temperatures in order to liberate hydrogen, Gervasio says. Gervasio and his associates are developing novel chemical additives to increase the useful hydrogen storage capacity of the borohydride solution by as much as two to three times that of the simple aqueous sodium borohydride solutions that are currently being explored for fuel cell development. These additives prevent the solution from solidifying, which could potentially clog or damage the hydrogen generator and cause it to fail. In developing the prototype fuel cell system, the researchers housed the solution in a tiny generator containing a metal catalyst composed of ruthenium. In the presence of the catalyst, the borohydride in the water-based solution reacts with water to form hydrogen gas. The gas leaves the hydrogen generator by moving across a special membrane separating the generator from the fuel cell component. The hydrogen gas then combines with oxygen inside the fuel cell to generate water and electricity, which can then be used to power the portable electronic device. Commercialization of a practical version of this fuel cell could take as many as three to five years, Gervasio says.
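As a back-of-the-envelope check on why borohydride is so attractive as a hydrogen carrier (a calculation added here for illustration, based on the well-known hydrolysis reaction NaBH4 + 2H2O -> NaBO2 + 4H2):

```python
# Molar masses (g/mol)
M_NABH4 = 37.83
M_H2O = 18.02
M_H2 = 2.016

# NaBH4 + 2 H2O -> NaBO2 + 4 H2: four moles of H2 per mole of NaBH4
h2_mass = 4 * M_H2
reactant_mass = M_NABH4 + 2 * M_H2O

wt_pct = 100.0 * h2_mass / reactant_mass
print(f"hydrogen yield: {wt_pct:.1f} wt% of the reactants")  # about 10.9 wt%
```

Roughly one tenth of the reactant mass is released as hydrogen, which is why additives that pack more usable borohydride into the solution translate directly into longer runtimes.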

STRETCHABLE SILICON COULD BE NEXT WAVE IN ELECTRONICS


Researchers at the University of Illinois at Urbana-Champaign have developed a fully stretchable form of single-crystal silicon with micron-sized, wave-like geometries that can be used to build high-performance electronic devices on rubber substrates. "Stretchable silicon offers different capabilities than can be achieved with standard silicon chips," said John Rogers, a professor of materials science and engineering and coauthor of a paper to appear in the journal Science, as part of the Science Express Web site, on Dec 15. Functional, stretchable and bendable electronics could be used in applications such as sensors and drive electronics for integration into artificial muscles or biological tissues, structural monitors wrapped around aircraft wings, and conformable skins for integrated robotic sensors, said Rogers, who is also a Founder Professor of Engineering, a researcher at the Beckman Institute for Advanced Science and Technology and a member of the Frederick Seitz Materials Research Laboratory. To create their stretchable silicon, the researchers begin by fabricating devices in the geometry of ultrathin ribbons on a silicon wafer using procedures similar to those used in conventional electronics. Then they use specialized etching techniques to undercut the devices. The resulting ribbons of silicon are about 100 nanometers thick -- 1,000 times smaller than the diameter of a human hair. In the next step, a flat rubber substrate is stretched and placed on top of the ribbons. Peeling the rubber away lifts the ribbons off the wafer and leaves them adhered to the rubber surface. Releasing the stress in the rubber causes the silicon ribbons and the rubber to buckle into a series of well-defined waves that resemble an accordion. "The resulting system of wavy integrated device elements on rubber represents a new form of stretchable, highperformance electronics," said Young Huang, the Shao Lee Soo Professor of Mechanical and Industrial Engineering. "The amplitude and frequency of the waves change, in a physical mechanism similar to an accordion bellows, as the system is stretched or compressed." As a proof of concept, the researchers fabricated wavy diodes and transistors and compared their performance with traditional devices. Not only did the wavy devices perform as well as the rigid devices, they could be repeatedly stretched and compressed without damage, and without significantly altering their electrical properties. "These stretchable silicon diodes and transistors represent only two of the many classes of wavy electronic devices that can be formed," Rogers said. "In addition to individual devices, complete circuit sheets can also be structured into wavy geometries to enable stretchability." Besides the unique mechanical characteristics of wavy devices, the coupling of strain to electronic and optical properties might provide opportunities to design device structures that exploit mechanically tunable, periodic variations in strain to achieve unusual responses. In addition to Rogers and Huang, co-authors of the paper were postdoctoral researcher Dahl-Young Khang and research scientist Hanqing Jiang. The Defense Advanced Research Projects Agency and the U.S. Department of Energy funded the work.

About 10,000,000,000,000,000,000 transistors are shipped each year, or about 100 times the number of ants estimated to be on Earth.


TECHNOLOGY TIDBITS

ELECTRICITY FROM THE EXHAUST PIPE
Researchers are working on a thermoelectric generator that converts the heat from car exhaust fumes into electricity. The module feeds the energy into the cars electronic systems. This cuts fuel consumption and helps reduce the CO2 emissions from motor vehicles. In an age of dwindling natural resources, energy-saving is the order of the day. However, many technical processes use less than one-third of the energy they employ. This is particularly true of automobiles, where two-thirds of the fuel is emitted unused in the form of heat. About 30 percent is lost through the engine block, and a further 30 to 35 percent as exhaust fumes. Scientists all over the world are developing ways of harnessing the unused waste heat from cars, machines and power stations, in order to lower their fuel consumption. There is clearly a great need for thermoelectric generators, or TEGs for short. These devices convert heat into electrical energy by making use of a temperature gradient.
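As a rough illustration of the principle (the numbers below are assumptions, not figures from the research described above): a thermoelectric module develops an open-circuit voltage V = S x ΔT across a temperature difference, and delivers at most P = V² / 4R into a matched load.

```python
# Illustrative thermoelectric generator estimate (assumed values)
S = 0.05       # effective Seebeck coefficient of the module (V/K)
R = 2.0        # module internal resistance (ohms)
DT = 250.0     # temperature difference, exhaust side vs. coolant side (K)

v_oc = S * DT                  # open-circuit voltage: V = S * dT
p_max = v_oc ** 2 / (4 * R)    # maximum power into a matched load
print(f"open-circuit voltage: {v_oc:.1f} V, matched-load power: {p_max:.0f} W")
```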

APPEAL: NATIONAL WIRELESS NETWORK

IT WAS June 4, 2008, on the occasion of World Environment Day at the CV Raman Auditorium, Kathmandu University, that an enthusiastic young political leader, Gagan Thapa, said: "We do have two choices, being a new generation of a new Nepal: one, leave the country calling it the worst place to live and have a professional life; two, stay in the worst country and make it better." Do we really want to leave this country, or do we want to make it a better place to live? Think for yourself! We don't have electricity; we don't have job opportunities; we don't have good political stability; but we do have hope. We can hope that one day we will have a broadband communication highway across the whole of Nepal, from east to west and north to south.

People in urban areas enjoy the facilities of new technology, but people in remote villages still lack them. Use of communication technology, be it the telephone or the internet, can help people in rural areas find out about market prices and sell their products at a better price. It can also overcome traditional barriers to better education by making books available online and opening the door to e-learning. In a developing country like ours, it is very important to have a communication highway so as to provide:
- Telemedicine services from city hospitals to rural clinics
- Remittance services in villages to help people working abroad send money
- Credit card acceptance services for tourists on the trekking routes
- Local e-commerce services through e-bulletin boards to help villagers sell products
- Voice over Internet Protocol (VoIP) services for cheaper communication

And also to meet some of the Millennium Development Goals:
- Telephone lines and cellular subscribers per 100 population
- Personal computers in use per 100 population
- Internet users per 100 population

With the east-west optical fiber link, many urban areas will enjoy broadband communication facilities, but due to the country's geographical structure, Wi-Fi technology best suits the Himalayan region for a wireless broadband highway, so that remote villages of Nepal can enjoy communication facilities. With this vision, Ramon Magsaysay Award winner Mahabir Pun of Myagdi initiated the Nepal Wireless Networking Project and has connected several villages in Myagdi with Wi-Fi links. To extend this project across the whole of Nepal, he has come up with the idea of one dollar a month to raise a fund for establishing links in the remote villages of Nepal. With 30,000 supporters in this fund-raising campaign, 24 relay stations can be built to serve 80 villages and 70,000 people in the Myagdi, Parbat and Kaski districts, along with many other villages in different districts of western Nepal. To help in his noble cause, the Society of Electrical and Electronics Engineers appeals to all students and faculties to join together and save one dollar a month for the Nepal Wireless Project. Your saving can make a difference. For further information: Society of Electrical and Electronics Engineers, Kathmandu University, or www.nepalwireless.net

The worldwide internet population is estimated at 1.08 billion. In 2000 there were 400 million users, and in 1995 20 million users.


ELECTRICAL SAFETY
Accident reports continue to confirm that people responsible for the installation or maintenance of electrical equipment often do not turn the power source off before working on that equipment. Working electrical equipment hot (energized) is a major safety hazard to electrical personnel. The purpose of this article is to alert electrical contractors, electricians, facility technical personnel, and other interested parties to some of the hazards of working on 'hot' equipment and to emphasize the importance of turning the power off before working on electrical circuits.

WHY SHOULD THE POWER BE TURNED OFF?

SHORT CIRCUIT ARCING FAULTS

A short circuit occurs when conductors of opposite polarity are accidentally bridged by a conductive object or bridged to grounded metal. Metal screwdrivers, wrenches, fish tapes, test instruments, etc. have all been found to have made inadvertent contact while persons were working on live equipment. An arcing fault may be established that is limited only by the total impedance of the circuit. The arcing will continue until the circuit breaker, fuse, or equipment ground fault protection device on the line side of the fault opens the circuit. Even if the short circuit protective device opens the circuit without any intentional delay, portions of the conductors and other metallic materials in the path of the arc may explode violently, showering the area with hot molten metal that can cause severe burns or death. The flash associated with the arc can also cause permanent eye damage. Finally, a short circuit may expel shrapnel toward the workman, penetrating clothing or the body.

NORMAL OR ABNORMAL SWITCHING OPERATIONS

Many of the components of an electrical system (switches, circuit breakers, contactors, etc.) are required to be mounted in an enclosure intended to prevent accidental contact with the live electrical parts. The enclosures are also intended to contain byproducts from normal or abnormal switching operations. When a switch, circuit breaker, or contactor opens a circuit that is carrying rated current or perhaps an overcurrent, an arc is established across the contacts of the device. Hot gasses and tiny metal particles may be expelled, under pressure, from the device. This is a perfectly normal consequence, and the closed enclosure contains the hot gasses and particles, protecting personnel from possible severe injury. If the cover of the enclosure is opened or removed while the equipment is still energized and a switching operation occurs, severe burns to the body can result from the hot gases and ejected metal particles, and permanent eye damage can occur as a result of the associated flash. Enclosures for electrical equipment are designed to safely contain normal or abnormal conditions. They cannot do their job if they are opened when equipment is energized.

SHOCK OR ELECTROCUTION

The human body will conduct electrical current! A circuit path can be through both arms, through an arm or leg to ground, or through any body surface to ground. There is a certain current level at which an individual cannot voluntarily release from the circuit. This is the "no let go current", from which burns and death by electrocution can result. Studies have shown that the perception of electrical shock begins when the current through the affected parts of the body is about 0.002 amperes. When the current increases to about 0.015 to 0.020 amperes, it becomes impossible to let go of the circuit. At higher values of current, e.g. above about 0.100 amperes, ventricular fibrillation and/or heart stoppage will cause certain death. The value of current will depend on the body's electrical resistance and the voltage applied. From Ohm's law (I = V/R) it can be seen that an increase in current through the body occurs when either the applied voltage increases or the body's resistance decreases. Electrical circuits of 120V can be just as lethal as 240V, 480V, 600V, or higher voltage circuits because the current through the body is dependent on the body's resistance. Electrical shock can also cause involuntary muscular reactions which may result in other injuries.
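To put those thresholds in perspective, here is a small sketch applying I = V/R to a few illustrative body-resistance scenarios; the resistance values are rough assumptions, as real values vary widely with skin condition and contact area.

```python
# Thresholds quoted above (amperes)
PERCEPTION, NO_LET_GO, FIBRILLATION = 0.002, 0.015, 0.100

# Illustrative body-circuit resistances (ohms); real values vary widely
scenarios = {"dry skin": 100_000, "damp skin": 5_000, "wet/broken skin": 1_000}

for name, resistance in scenarios.items():
    for volts in (120, 480):
        current = volts / resistance          # Ohm's law: I = V / R
        if current >= FIBRILLATION:
            risk = "possible fibrillation"
        elif current >= NO_LET_GO:
            risk = "cannot let go"
        elif current >= PERCEPTION:
            risk = "perceptible shock"
        else:
            risk = "below perception"
        print(f"{volts:3d} V across {name:15s}: {current * 1000:7.1f} mA -> {risk}")
```

The same 120 V outlet that barely registers through dry skin can drive well over the fibrillation threshold through wet or broken skin, which is exactly why voltage alone is a poor measure of safety.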
above about 0.100 amperes, ventricular fibrillation and/or heart stoppage will cause certain death. The value of current will depend on the body's electrical resistance and the voltage applied. From Ohms law ( I = V/R ) it can be seen that an increase in current through the body occurs when either the applied voltage increases or the body's resistance decreases. Electrical circuits of 120V can be just as lethal as 240V, 480V, 600V , or higher voltage circuits because the current through the body is dependent on the body's resistance. Electrical shock can also cause involuntary muscular reactions which may result in other injuries. WhY isn'T ThE pOWEr TUrnED Off? LacK Of prOpEr TraininG Many people are just not aware of the inherent dangers as noted above. Victims and witnesses of electrical accidents are often amazed at the violent and explosive nature of electrical energy, the fire balls, bright flashes, acrid smoke and hot molten metal. Often safety training of electricians is done on an informal basis and may be done by instructors who have already developed bad habits. Sometimes unqualified and unlicensed people work on electrical circuits, and safety training is given lip service, or there is no training at all. It
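To make the Ohm's law figures in the shock section above concrete, here is a minimal sketch in Python; the skin-resistance values are assumed, representative numbers chosen for illustration, not measured data:

def body_current_amps(volts, body_resistance_ohms):
    # Ohm's law: I = V / R
    return volts / body_resistance_ohms

# Assumed, representative skin resistances (illustrative only):
print(body_current_amps(120, 100_000))  # dry skin, ~100 kilohm -> 0.0012 A, near the perception level
print(body_current_amps(120, 1_000))    # wet skin, ~1 kilohm   -> 0.12 A, above the fibrillation level

The same 120 V circuit moves from barely perceptible to potentially fatal purely through a change in body resistance, which is exactly why a lower voltage is no guarantee of safety.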

Electrocution is one of the top five causes of workplace deaths.


It is essential that safety training be emphasized to preclude any such complacency. There are courses in electrical safety provided by colleges, by the IBEW and other labour groups, and by various associations. Industry management can promote increased safety by requiring more of their employees to attend such formal safety courses.

THE ELECTRICAL SERVICE "CAN'T" BE INTERRUPTED

Countless electrical accidents have been the result of this philosophy. Invariably, the accidents cause major shutdowns, outages, and equipment replacement. Thus, what could not be shut down is shut down! With detailed planning, almost any piece of electrical equipment can be taken out of service. While this planning may take additional time and involve additional costs, the risk of not doing it is an accident that can result in massive equipment damage, personal injury, or death. The time and cost of an accident will far exceed the time and cost of a properly planned outage.

THE JOB MUST BE DONE QUICKLY

When the pressures of time dominate any work activity, mistakes and accidents invariably happen. Caution and good judgment give way to haste. Again, a resulting accident will inevitably take more time to resolve.

"WE'VE NEVER HAD A PROBLEM BEFORE"

There is a common misconception that if a known safety practice is violated several times without resulting in an accident, then a future accident won't happen either. Many electricians who receive safety training learn on 120V/240V circuits. Much of their work deals with 120V to ground. While it is possible to be shocked, burned, and/or electrocuted on 120V/240V systems, these individuals may lose their fear by continually working equipment hot until it becomes second nature. A few shocks, sparks, and burned wires may not deter them. It may be faster to make connections without having to turn off the power. Transferring this 120V experience to 480V and above can be a fatal error.

THE EQUIPMENT NEEDS TO BE ENERGIZED FOR TESTS

It is recognized that there are some situations where electrical measurements need to be taken while the equipment is energized. In these situations, there are certain legal requirements that must be met before any work is performed, including ensuring the work is done by a "qualified person". A qualified person is "one familiar with the construction and operation of the equipment and the hazards involved". The possession of an electrical license may not be sufficient to qualify a person to work on all equipment. Education and training may be necessary for the specific equipment involved.

OTHER HAZARDS

There are a number of other hazards related to working equipment hot which are not obvious. In particular, determining that a circuit is OFF can be difficult in some instances. Even with the best of intentions to avoid working hot, it is necessary and important to check for circuit voltage with an appropriate voltmeter before working on equipment presumed to have been de-energized. This situation arises when the equipment involves items such as tie breakers, double throw disconnect switches, automatic transfer switches, and emergency generators. In such cases, turning the equipment to OFF may result in power being supplied by another circuit route or from another source. Working on these circuits requires extra knowledge and caution. The use of lock-outs and lock-off tags and equipment is essential when working remotely from a disconnect device. The electrician must ensure that the power is OFF and stays OFF.

Another less obvious hazard can exist when restarting equipment after a fault. Resetting or replacing an overcurrent protective device without correcting the cause can result in circuit breaker tripping, fuse opening, and possible equipment damage and personnel injury from arc byproducts. This problem can occur at initial start-up, restart after rework, and restart after incidents such as short circuits or water damage. It is important that the validity of the circuit phase isolation be verified by both dielectric strength testing (hi-pot) and insulation resistance testing (megger). Also, prior to restart, all loads should be shed, i.e., the load switches turned off, so that the restart does not close into a large number of motor loads. This sort of activity takes knowledge, education, and training, and should only be attempted by qualified persons.

IN SUMMARY

Electrical accidents can't happen when the power is shut off. While that statement seems to make obvious sense, this article has attempted to make another statement clearer: electrical accidents can and do happen when working on equipment that is energized. All electrical personnel should remember that even a 'simple' accident can result in major equipment damage, severe personal injury, or even death.

Currents of approximately 0.2 A are potentially fatal, because they can make the heart fibrillate, or beat in an uncontrolled manner.


SEEE ESSAY COMPETITION


MAJOR OR MICRO HYDRO POWER?


Hydropower has always been the talk of the town in Nepal. Nepal is rich in hydro-resources, with one of the highest per capita hydropower potentials in the world. Of the economically feasible potential of 43 GW, only 262 MW, or about 0.6%, has been developed so far. The construction of hydroelectric projects contributes to the economic, regional, and social development of the region. Such projects often result in increased investment and economic growth. The establishment of such projects also promotes access to roads, schools, health centers, and job opportunities, which in turn raises the living standard of the people in the long run. It also brings the benefits of flood control, flow manipulation for downstream irrigation, and water supply. So the question arises: what should the government prioritize, major or micro hydro projects? Both kinds of projects have their own pros and cons, but developing mini and major hydropower simultaneously will be conducive to the development of the country. All-round development is what our country requires at this moment. But for this, the government has to encourage semi-government and non-government bodies, private companies, and cooperatives to invest in the field.

The main advantage of micro-hydro power is that it can be installed in any part of the country, so it is feasible and exploitable in most rural areas. It will help establish basic infrastructure like roads, hospitals, and schools in rural areas. Almost half of the country has no access to electricity; lighting up those parts will boost the living and economic standards of the people, and industrial growth can then prosper in those areas. It creates local employment and income, as well as a sense of entrepreneurship and community ownership. In addition, it is a reliable source of energy which can be used for irrigation schemes and transportation. Most rural parts of Nepal are located far from serviceable roads and long distances from the NEA's national grid (which would mean high transmission and construction costs), so mini hydro would be a better option in those areas. Also, the components for micro-hydro can be manufactured locally, so the cost can be lowered significantly. Micro-hydro plants are comparatively simple in construction, and the scheme can be locally managed, operated, and maintained. So it provides better access to reliable, appropriate, and affordable energy, which has an important role in improving the living conditions of the poor people of our country.

But for the development of the country, we need industries so that we can compete with other countries. And to extend our existing industries, we have to have more power supplies. People's demands, too, are increasing day by day; electric gadgets aren't going to simplify things either. So micro-hydro plants alone will not be able to sustain the ever-growing power demand. Major hydro plants are the only solution for us, as hydropower is the only reliable source of energy in Nepal. The national demand for electricity is estimated at approximately 600-700 MW, leaving a deficit of almost 200-300 MW. This deficit of power has led the country to introduce load-shedding. Load-shedding has not only disrupted normal life but also affected business and industry. In an area where industries are already experiencing difficulties competing in the market, load-shedding has added insult to injury. As a consequence, we are facing the problem of a high cost of goods. So efforts to harness more hydropower would be a boon to the ailing industries and to the development of the country.

If we construct major hydropower plants, we can sell the electricity to our neighboring countries. China and India are getting highly industrialized, and their demands for energy will continue to grow. Our government must look at this as an opportunity to ensure benefits for the country. After all, at a time when globalization is gaining momentum, why shouldn't Nepal sell its hydroelectric power when it can? If we can manage sufficient power, electric trains, electric vehicles, and electricity as a household cooking fuel won't be unimaginable. It can be the ultimate solution to the persisting fuel crisis. Besides, it will also create employment opportunities in different sectors.

The availability of abundant water resources and favorable geo-political features provides ample opportunities for the development of both kinds of hydropower. Nepal could prove itself one of the richest countries in the region if water and human resources are developed simultaneously. Continuous efforts towards sustainable development of the nation's hydro potential with modern technologies are necessary. Co-operation between local groups and international organisations in planning, investment, and operation is needed to bring success to the industry. Hydropower should be developed as a main source of energy for Nepal, whether mini or major. The hydroelectric power generated in Nepal should also be developed as an item of export; it should not be treated differently from the petroleum and gas exports of India. If we can't afford to build major hydro projects, we should let others build them; even the money from the royalties can contribute to the development of the country.

Anuj Kattel, Batch 2005 C

There are 1956 micro-hydro plants capable of generating 13,064 kW of power already installed in Nepal.


DIGITAL DIVIDE AND NEPAL


Abhimanyu Mani Acharya Dixit, Bachelor in Media Studies, Batch 2006
When Bill Clinton talked about the digital divide in the USA in 1995, Mercantile Communications introduced itself as a pioneer Internet Service Provider in Nepal. Computers were household commodities owned by the well-to-do of Kathmandu then. Roughly 450 people used the internet. Today, the number has increased to about 3 lakhs, 70% of them inside the Ring Road.

According to Bharat Mehra, the author of The Internet for Empowerment of Minority and Marginalized Users, the digital divide is "the troubling gap between those who use computers and the internet and those who do not." Simply put, it is the gap between the haves and the have-nots. This divide splits the world into two sections: people having access to digital or modern information technology such as mobile phones, telephones, television, etc., linking them to the internet, and the rest who don't.

In the global community, information is a commodity. People all around the world are interested in what George Bush has in store for Iraq because it affects other countries. The fate of Iraq might become the state of some other poorer third world country of Asia, like Nepal. But for most people of Nepal, who are too busy fighting for their daily morsel of grain, the USA is not two clicks away. To see a light bulb glow, many of them have to walk on foot for days. Their only source of information is the radio. Journalist Kunda Dixit calls today's radios "a public address system for government propaganda", not forgetting that these radios run on batteries whose cost is almost equal to one hearty meal of dhedo (millet pudding).

Many residing in urban centers who suffer 6-8 hours of power cuts a day would not want to buy a Rs. 14 thousand computer. Cyber cafes that seemed to prosper in the past are operating at a loss because of the power cuts and high-priced diesel for electricity generators. Urban people have their information needs satisfied by television, but with hours of blackout even that is not satisfying. Those who use the internet with inverters to tackle load-shedding are constantly irritated by messages like "your version of Windows did not pass the genuine test". Global conglomerates argue that cheaper South Asian versions of software are available in the market. This cheaper Norton Antivirus software costs Rs. 700, about twenty times the price of its counterfeit counterpart, which costs about Rs. 35.

There exists a clear divide in access to the internet between rich and poor countries. But for a country like Nepal, the divide exists between rich and poor areas.




While some colleges and universities of Kathmandu can afford a VSAT installed in their vicinity, and many have internet services from neighboring ISPs, others outside the valley still work with donated Pentium II computers, where all that the students do is play chess to enrich their wisdom. All the ISPs are centered in Kathmandu. These ISPs have branches outside the valley, but only in limited areas where the first thing to consider is economic benefit.

Besides economics, another reason why Nepalese are at the lower end of the digital divide is that many people are not computer literate, let alone information technology literate. Many people still think that the internet only means email. Thousands of gigabytes of information remain unused by such users. Language is one cause of this: computers are in English, a language they know only a little about.

Work has been done, however, both nationally and internationally, to bridge the divide. One example is open source. To tackle the hegemony of most business houses, a little penguin (the logo of Linux) has been doing a lot. Free operating software like the Nepali version of Linux, or Nepalinux, has been used to make people computer literate. Madan Purashkaar Pushtakalaya's computer literacy campaign initiated in Dailekh had school-going students seeing computers for the first time, and in their mother tongue at that.

Some hope arises from the mobile phone industry. Telecommunications is one of the sustaining industries in Nepal, besides packaged noodles. There is hope that competition among mobile phone companies will help newer technology enter more cheaply and in more user-friendly forms. This would in turn help the middle class and the lower middle class use the technology to its maximum extent.

Very little of the development in Information Technology in the country has reached the grassroots level. The development in IT is limited, or centralized, to the capital to a large extent and, to a very small extent, to other urban centers. Villages in Nepal are literally still in the dark ages. When western youths talk, they talk about convergence and about miniaturization. They talk about whose hard drive is the biggest, whose microprocessor is the fastest, and whose iPhone is the sleekest. While in our part of the world, there are people who still say, "I saw a computer the other day."

The US, with a population close to the population of the Middle East, has 199 million Internet users while the Middle East has only 16 million.


ADSL TECHNOLOGY
ADSL, or Asymmetric Digital Subscriber Line, technology uses existing twisted-pair telephone lines to create access paths for high-speed data. This exciting technology is overcoming the limits of the public telephone network by enabling the delivery of high-speed Internet access to the vast majority of subscribers' homes at a very affordable cost.

Delivery of ADSL services requires a single copper pair configured as a standard voice circuit with an ADSL modem at each end of the line, creating three information channels: a high-speed downstream channel, a medium-speed upstream channel, and a plain old telephone service (POTS) channel for voice. Data rates depend on several factors, including the length of the copper wire, the wire gauge, the presence of bridged taps, and cross-coupled interference. Line performance improves as the line length is reduced, the wire gauge is increased, bridged taps are eliminated, and cross-coupled interference is reduced.

The modem located at the subscriber's premises is called an ADSL transceiver unit-remote (ATU-R), and the modem at the central office is called an ADSL transceiver unit-central office (ATU-C). The ATU-Cs take the form of circuit cards mounted in the digital subscriber line access multiplexer (DSLAM). A residential or business subscriber connects their PC and modem to an RJ-11 telephone outlet on the wall. The existing house wiring usually carries the ADSL signal to the network interface device (NID) located on the customer's premises.

Shobakant Dhungana, Batch 2005 C

There are both technical and marketing reasons why ADSL is, in many places, the most common type offered to home users. On the technical side, there is likely to be more crosstalk, a term which refers to any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel; crosstalk is usually caused by undesired capacitive, inductive, or conductive coupling from one circuit, part of a circuit, or channel to another. The upload signal is weakest at the noisiest part of the local loop, while the download signal is strongest at the noisiest part of the local loop. It therefore makes technical sense to have the DSLAM (Digital Subscriber Line Access Multiplexer) transmit at a higher bit rate than the modem on the customer end does. Since the typical home user does in fact prefer a higher download speed, the telephone companies chose to make a virtue out of necessity, hence ADSL. On the marketing side, limiting upload speeds limits the attractiveness of this service to business customers, often causing them to purchase higher-cost Digital Signal services instead. In this fashion, it segments the digital communications market between business and home users.

TECHNOLOGY

ADSL depends on advanced digital signal processing and creative algorithms to squeeze as much information through twisted-pair telephone lines as possible. In addition, many advances have been required in transformers, analog filters, and A/D converters. Long telephone lines may attenuate signals at one megahertz (the outer edge of the band used by ADSL) by as much as 90 dB, forcing the analog sections of ADSL modems to work very hard to realize large dynamic ranges, separate channels, and maintain low noise figures. On the outside, ADSL looks simple: transparent synchronous data pipes at various data rates over ordinary telephone lines. On the inside, where all the transistors work, there is a miracle of modern technology.

To create multiple channels, ADSL modems divide the available bandwidth of a telephone line in one of two ways: Frequency Division Multiplexing (FDM) or Echo Cancellation. FDM assigns one band for upstream data and another band for downstream data. The downstream path is then divided by time division multiplexing into one or more high-speed channels and one or more low-speed channels. The upstream path is also multiplexed into corresponding low-speed channels. Echo Cancellation assigns the upstream band to overlap the downstream, and separates the two by means of local echo cancellation, a technique well known in V.32 and V.34 modems.

The distinguishing characteristic of ADSL over other forms of DSL is that the volume of data flow is greater in one direction than the other, i.e. it is asymmetric. Providers usually market ADSL as a service for consumers to connect to the Internet in a relatively passive mode: able to use the higher speed direction for the "download" from the Internet but not needing to run servers that would require high speed in the other direction.

SDSL, or Symmetric Digital Subscriber Line, provides symmetrical data transfer rates and is ideal for businesses.


With either technique, ADSL splits off a 4 kHz region for POTS at the DC end of the band.

An ADSL modem organizes the aggregate data stream created by multiplexing downstream channels, duplex channels, and maintenance channels together into blocks, and attaches an error correction code to each block. The receiver then corrects errors that occur during transmission, up to the limits implied by the code and the block length. The unit may, at the user's option, also create superblocks by interleaving data within subblocks; this allows the receiver to correct any combination of errors within a specific span of bits, and allows for effective transmission of both data and video signals alike.

HOW DOES ADSL WORK?

Fig: Frequency plan for ADSL

Area 1 is the frequency range used by normal voice telephony (PSTN); area 2 (upstream) and area 3 (downstream) are used for ADSL. Currently, most ADSL communication is full duplex. Full duplex ADSL communication is usually achieved on a wire pair by frequency division duplex (FDD), echo-canceling duplex (ECD), or time division duplex (TDD). FDD uses two separate frequency bands, referred to as the upstream and downstream bands. The upstream band is used for communication from the end user to the telephone central office. The downstream band is used for communicating from the central office to the end user. With standard ADSL, the band from 25.875 kHz to 138 kHz is used for upstream communication, while 138 kHz to 1104 kHz is used for downstream communication. Each of these is further divided into smaller frequency channels of 4.3125 kHz.

During initial training, the ADSL modem tests which of the available channels have an acceptable signal-to-noise ratio. The distance from the telephone exchange, noise on the copper wire, or interference from AM radio stations may introduce errors on some frequencies. By keeping the channels small, a high error rate on one frequency need not render the line unusable: the channel will simply not be used, merely resulting in reduced throughput on an otherwise functional ADSL connection. Vendors may support usage of higher frequencies as a proprietary extension to the standard. However, this requires matching vendor-supplied equipment on both ends of the line, and will likely result in crosstalk issues that affect other lines in the same bundle. There is a direct relationship between the number of channels available and the throughput capacity of the ADSL connection. The exact data capacity per channel depends on the modulation method used.

Modulation

Two main modulation schemes are currently being used to implement ADSL: carrierless amplitude/phase (CAP), a single-carrier modulation scheme based on quadrature amplitude modulation (QAM); and discrete multi-tone (DMT), a multichannel modulation scheme. The choice between them naturally depends on how well they perform in the presence of impairments on the existing copper twisted-pair access cabling, because these can limit the transmission capacity. In addition, high-bit-rate services carried by ADSL must not interfere with other services, particularly the plain old telephone service (POTS) being transported simultaneously over the same lines. A highly adaptive transmission system is needed to cope with all these sources of signal degradation. Having carefully studied the performance and flexibility of both modulation techniques, Alcatel Telecom decided to implement DMT in its ADSL system. DMT has the added advantage of having been standardized by the American National Standards Institute (ANSI). It is also being adopted by the European Telecommunications Standardization Institute (ETSI).

Multicarrier Modulation

In essence, multicarrier modulation superimposes a number of carrier-modulated waveforms to represent the input bit stream. The transmitted signal is the sum of these subchannels (or tones), which have the same bandwidth and equally spaced center frequencies. The number of tones must be large enough to ensure good performance. In practice, a value of 256 provides near optimum performance, while ensuring manageable implementation complexity.
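As a back-of-the-envelope check on this band plan, the following Python sketch counts the 4.3125 kHz subchannels in each band and estimates throughput; the bits-per-tone and symbol-rate figures are assumptions chosen for illustration, not values quoted from the standard:

TONE_SPACING_HZ = 4312.5  # each ADSL subchannel is 4.3125 kHz wide

BANDS_HZ = {
    "upstream": (25_875.0, 138_000.0),      # end user -> central office
    "downstream": (138_000.0, 1_104_000.0), # central office -> end user
}

ASSUMED_BITS_PER_TONE = 8   # illustrative only; real modems vary this per tone
ASSUMED_SYMBOL_RATE = 4000  # symbols per second, assumed for the estimate

for name, (lo, hi) in BANDS_HZ.items():
    tones = int((hi - lo) / TONE_SPACING_HZ)
    rate_bps = tones * ASSUMED_BITS_PER_TONE * ASSUMED_SYMBOL_RATE
    print(f"{name}: {tones} tones, roughly {rate_bps / 1e6:.2f} Mbit/s")

This prints 26 upstream and 224 downstream tones, and the downstream estimate of roughly 7 Mbit/s is in line with the 8 Mbit/s figure commonly quoted for conventional ADSL. A real modem assigns a different number of bits to each tone according to the measured signal-to-noise ratio, so actual rates vary with line conditions.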

For conventional ADSL, downstream rates range from 256 kbit/s to 8 Mbit/s within 1.5 km (5000 ft) of a DSLAM-equipped central office.


Early problems with maintaining an equal spacing between tones have been resolved by the introduction of digital signal processors, which can accurately synthesize the sum of modulated waveforms, and by the FFT (fast Fourier transform), which can efficiently compute this sum.

Discrete Multi-tone Modulation

DMT is a form of multicarrier modulation in which each tone is QAM modulated on a separate carrier. The lowest carriers are not modulated, thereby avoiding interference with POTS. DMT modulation is optimal for band-limited communication channels (such as twisted-pair telephone cables), which exhibit large differences in gain and phase with frequency. When the modem is initialized, the number of bits assigned to a tone can be set to compensate for differences in these transmission characteristics. Subsequently, if conditions on the line alter slowly, this bit assignment can be changed "on the fly." Over long distances, a DMT-based ADSL transmission system approaches the fundamental capacity limit of 13.6 Mbps. However, over the distances typically found in the access network (a few kilometers), the maximum capacity drops to about 6 Mbps.

In principle, tones are independent of one another. However, in practice some intersymbol interference (ISI) occurs because the tail of one symbol corrupts the start of the following symbol. Fortunately, this can be virtually eliminated by adding a small number of samples (the cyclic prefix) to each DMT symbol. Any ISI is then limited to the prefix, which is removed after demodulation by the FFT.

ADSL IN NEPAL

ADSL technology is very new in Nepal. Nepal Telecom launched its broadband services using ADSL2+ (up to 24 Mbps) technology. The service is initially available in the Kathmandu valley only, and Nepal Telecom plans to expand the ADSL network throughout the country within the next three years. Initially, only high-speed Internet service is available; according to the NTA, services such as VPN, multicasting, video conferencing, video-on-demand, and broadcast applications will be added in the future. Nepal Telecom has claimed that it is offering ADSL service to net surfers at the cheapest price. This step will surely increase the number of Internet users in Nepal.

References: http://www.wikipedia.org; http://www.cs.tut.fi; http://howstuffswork.com

About 98.5% of Northern Ireland has broadband (ADSL) availability.


IS 8051 THE ONLY SOLUTION?

Mr. Subodh Ghimire


Most of the EEE first year students at KU start working on electronic projects by building simple circuits using discrete components. Second year students make use of MSI/LSI devices in addition. By the end of the second year, they realize how difficult and messy it is to get their work done by connecting a large number of components. They want to reduce the number of components and try something new and interesting for the third year project. Projects such as making robots or telecommunication devices using microcontrollers are the ones that allure third year students the most. Some of these projects justify the use of microcontrollers whereas, in others, microcontrollers could easily be replaced by other cheaper and/or better components.

8051-based microcontrollers are very popular amongst the students. If you ask them why they have chosen the 8051 microcontroller for their projects, the most likely answer will be either because they want to learn how to use it or because they don't have any better options. The 8051 microcontroller is one of the oldest microcontrollers and is easily available. These are the major reasons behind its popularity. However, better options do exist. For example, dsPIC microcontrollers from Microchip can serve better than the 8051s in DSP systems.

Some advantages of microcontroller-based systems over hard-wired electronic circuits are programmability, easier board layout, lower cost, and higher reliability. Despite these advantages, a microcontroller may not be suitable for designs that have stringent speed requirements. A microcontroller executes instructions sequentially; thus it is not suitable for systems that require high-speed parallel operations.

A better alternative to the microcontroller that has often been overlooked by undergraduate students is the Programmable Logic Device (PLD). The main advantage of a PLD over a microcontroller is speed: a digital system implemented in hardware is faster than one implemented in software. Some other advantages include higher pin count, higher parallelism, and greater flexibility.


Unlike a full-custom IC, a PLD has an undefined function at the time of manufacture and can be configured later. PLDs can be reprogrammed several times, and their functions can be changed even after they have been soldered onto PCBs. PLDs are available in different sizes, ranging from PAL (Programmable Array Logic) devices of less than a hundred gate equivalents to multimillion-gate FPGAs (Field Programmable Gate Arrays). With PLDs, designers have many options to match their needs and budget. Low-cost or free software tools are widely available on the Internet that allow designers to quickly develop, simulate, and test their designs. With the development of Hardware Description Languages such as Verilog and VHDL, designing a complex digital circuit has become as easy as writing a simple C program.

Although choosing the correct devices for any system is very difficult, and is made even harder when too many alternatives are available, it is advisable to consider as many workable alternatives as possible instead of sticking to the 8051 microcontroller as the only viable solution.

Mr. Ghimire is an Assistant Professor at the Department of Electrical and Electronic Engineering.

HUMOR
Clever Engineer

A mathematician and an engineer are sitting next to each other on a long flight. The mathematician leans over to the engineer and asks if he would like to play a fun game. The engineer just wants to take a nap, so he politely declines and rolls over to the window to catch a few winks. The mathematician persists and explains that the game is real easy and lots of fun. He explains, "I ask you a question, and if you don't know the answer, you pay me $5. Then you ask me a question, and if I don't know the answer, I'll pay you $5." Again, the engineer politely declines and tries to get to sleep. The mathematician, now somewhat agitated, says, "Okay, if you don't know the answer, you pay me $5, and if I don't know the answer, I'll pay you $50!" This catches the engineer's attention, and he sees no end to this torment unless he plays, so he agrees to the game.

The mathematician asks the first question: "What's the distance from the earth to the moon?" The engineer doesn't say a word, but reaches into his wallet, pulls out a five-dollar bill, and hands it to the mathematician. Now it's the engineer's turn. He asks the mathematician, "What goes up a hill with three legs and comes down on four?" The mathematician looks up at him with a puzzled look. He takes out his laptop computer and searches all of his references. He taps into the air phone with his modem and searches the net and the Library of Congress. Frustrated, he sends e-mail to his co-workers, all to no avail. After about an hour, he wakes the engineer and hands him $50. The engineer politely takes the $50 and turns away to try to get back to sleep. The mathematician then hits the engineer, saying, "What goes up a hill with three legs, and comes down on four?" The engineer calmly pulls out his wallet, hands the mathematician five bucks, and goes back to sleep.

The Intel 8051 has 128 bytes of on-chip RAM and 4 KB of on-chip ROM.

Photo Corner

Annual Day 2007: Dr. Chhetri performing at SEEE Annual Day 2007

Launching of Tech-Brief: Dean of SoE, Dr. Bhola Thapa, launching the fortnightly newsletter Tech-Brief

The Victorious 2005 Batch: The 2005 Batch defeated the 2004 Batch 2-1 in the SEEE Running Shield Football Tournament

Look Who's Cheering Now! Electrical Girls in Action.
