2008
EnciphEr 2008
Green Hues
you are always welcome to join the club
A Fortnightly Newsletter
Exploring Creativity
Green Club of Thoughts
Kathmandu University
Info.gct@gmail.com, ku.edu.np/greenclub
Editorial
EnciphEr is not just bitter black coffee and edited technical articles. It is the result of hard work, sweat, and the pressure of working together as a team to collect relevant information and create a unique design. The sole aim, however, has always been to convey newer ideas and knowledge about the changing technology.
The students of Electrical and Electronics Engineering, Kathmandu University, have always tried to learn technology and implement it for the benefit of all. The students' society, SEEE, has created this platform where students can practice technology as an extracurricular activity. Students and faculty alike have responded well in the making of Encipher 2008. They breathe life into this issue by providing us with more than just articles. Their reviews, research, and messages have made this a plentiful resource. We are greatly thankful to all the students, faculty and non-teaching staff who have helped create another landmark in the form of Encipher. We also express our gratitude to the business houses for their support. This magazine is also uploaded to the website www.ku.edu.np/ee/seee. By logging on, you can give us feedback. You can also blog on articles in this magazine. Your comments will be mailed to the respective writers. Your valued suggestions are always welcome.
Encipher 2008 Team
EnciphEr 2008
ADVISOR: Bhupendra B. Chhetri, Ph.D.
EDITOR-IN-CHIEF: Gaurab Karki
EDITOR: Sumit Karki
COPY EDITORS: Aastha Shiwakoti, Binita Shrestha, Jeevan Shrestha, Rosha Pokharel
GRAPHICS AND DESIGN: Vivek Bhandari
MARKETING EXECUTIVES: Ankit Jajodia, Shashi Shah, Sunil Baniya, Sushil Khadka
PRESS MANAGEMENT: Jayram Karkee, Shyam Bohara
SEEE MODERATOR: Bishal Silwal
VOL 08
. ENCIPHER 2008
TABLE OF CONTENTS

Messages

Review
37 LED-Based Lighting System
38 Power Line Communication

Research
7 Micro Grid Systems
10 Materials Science
14 Simulation of Trunked N/W
20 AC Transmission Systems
34 Upper Tamakoshi - Introduction
35 Reducing Broadcast Delay
37 Memristors
41 MIMO Wireless Communication
56 Is 8051 the Only Solution?

Interview
44

Energy

Essay
51 Major or Micro Hydro?
52 Digital Divide

Projects

Technology
9 Introduction to MATLAB
43 History of Electricity
49 Electrical Safety
53 ADSL Technology
30 LED Lighting: Current Stat

Info
Message
A popular quote from the most renowned scientist of all time, Albert Einstein, is "Imagination is more important than knowledge." Thinking in line with the quote, we may find the answer to the most important question of our life: why do we aspire for education or learning? Well, the purpose may be simplified to being able to imagine; in other words, to be able to imagine things and situations that we previously could not. Imagination can encompass every aspect. Imagination brings creativity. Imagination brings innovation. Imagination brings good and bad, sadness and happiness; virtually all the things that we experience or have to live with. Encipher 2008 may be seen as a result of the imagination of our students and shall reflect the improvement in their ability to imagine as they progressed in their education at Kathmandu University. Best wishes
Bhupendra Bimal Chhetri, Ph. D Head Department of Electrical and Electronic Engineering School of Engineering Kathmandu University
Message
Bijaya Ghimire President Society of Electrical and Electronics Engineers (SEEE) Kathmandu University
a system optimization, which is provided by the energy manager. The energy manager uses information on local load needs, power quality requirements, demand side management requests, etc. to determine the amount of power that the micro grid should draw from the main grid. Based on this information, it provides the individual power and voltage set points for each micro source controller, and provides the logic and control for islanding and reconnecting the micro grid during events.

The micro source controller is an important component of the micro grid system. It provides basic control of real and reactive power, and voltage regulation through voltage droop and frequency droop for power sharing. The micro sources are either DC sources, such as fuel cells, photovoltaics and battery storage, or AC sources, such as micro turbines and diesel/gas turbines. All the sources are connected to the point of common coupling (PCC) after the DC voltage sources are converted to acceptable AC sources using voltage source inverters. Figure 1 shows a micro grid system with various micro sources (wind turbine with induction generator, battery bank with rotary converter, diesel generator, PV array with inverter), the PCC, controllable (dump load) and uncontrollable (village load) loads, and power factor correction (reactive compensator).

Fig 1: Small Hybrid Micro Grid System

Protection is necessary in the event of a fault in the micro grid system. If the fault occurs in the main grid, then the micro grid should be isolated from the main grid as rapidly as possible. If the fault occurs within the micro grid system, the protection coordination should isolate the smallest possible section of the feeder to eliminate the fault.

LOAD CONTROLLER AND REACTIVE VAR COMPENSATOR: Electricity generated from micro hydro or wind turbine units comes from induction generators. In the case of micro hydro, this can be seen as a constant power source, whereas the power generated by a wind turbine cannot be predicted owing to unpredictable weather. Induction generators have many advantages over synchronous generators, such as ruggedness, fewer maintenance requirements, absence of DC excitation and inherent short circuit protection, when working as stand-alone, low-cost energy conversion schemes. On the other hand, an induction generator has some drawbacks, as it requires reactive power to improve the voltage regulation. In addition, an induction generator used in a micro hydel scheme converts all available mechanical energy into electrical energy, which eliminates the controller for the turbine. The controller on the turbine side is avoided to make the system simple and cost effective. Thus, whenever the consumer load is reduced, the surplus energy generated by the induction generator must be dumped somewhere else so as to maintain constant voltage and frequency in the system. These drawbacks can be overcome by using an electronic load controller and VAR compensator. [1]
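The real-power/frequency and reactive-power/voltage droop mentioned for the micro source controller can be sketched numerically. The Python fragment below is only an illustration of the droop idea; the nominal values, droop slopes and per-unit ratings are assumptions for the example, not figures from the article:

```python
def droop_setpoints(p_out, q_out, f_nom=50.0, v_nom=1.0,
                    kp=0.01, kq=0.05, p_rated=1.0, q_rated=1.0):
    """Frequency and voltage references from P-f and Q-V droop.

    p_out, q_out: real/reactive output (per unit of rating)
    kp, kq: assumed droop slopes (fractional sag at rated output)
    Returns (frequency reference in Hz, voltage reference in p.u.).
    """
    f_ref = f_nom - kp * f_nom * (p_out / p_rated)
    v_ref = v_nom - kq * v_nom * (q_out / q_rated)
    return f_ref, v_ref

# At full rated real-power output the frequency reference sags by 1% (0.5 Hz),
# which lets parallel micro sources share load without communication links.
f, v = droop_setpoints(p_out=1.0, q_out=0.0)
assert abs(f - 49.5) < 1e-9 and abs(v - 1.0) < 1e-9
```

Sources with steeper droop slopes automatically take a smaller share of a load change, which is the power-sharing behaviour the article describes.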
In rural areas of Nepal, the average cost of grid extension per km is between $8,000 and $10,000, rising to around $22,000 in difficult terrain. - World Bank/UNDP Study
It was in London in 1882 that the Edison Company first produced electricity centrally so that it could be delivered via a distribution network, or grid.
IT WAS during my second year classes that I got the opportunity to acquaint myself with MATLAB. In engineering, especially in Electrical and Electronic Engineering, MATLAB is among the most useful tools.
AN INTRODUCTION TO MATLAB
MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. It is a high-performance language for technical computing. It integrates computation, visualization and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Using MATLAB, we can solve technical computing problems faster than with traditional programming languages, such as C, C++, and FORTRAN. Its typical uses include: math and computation; algorithm development; data acquisition; modeling, simulation and prototyping; and data analysis, exploration, visualization and engineering graphics.

MATLAB integrates computation, visualization and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

MATLAB has evolved over a period of years with inputs from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering and science. In industry, MATLAB is the tool of choice for high-productivity research and analysis. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation. MATLAB is an interactive system whose basic data element is an array. It allows you to solve many technical computing problems, especially those with matrix and vector formulations. MATLAB features a family of add-on, application-specific solutions called toolboxes. Toolboxes allow you to learn and apply specialized technology. A toolbox is a comprehensive collection of MATLAB functions that extends MATLAB to solve particular classes of problems; areas in which toolboxes are available include fuzzy logic, wavelets, simulation, signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology.

The MATLAB system consists of the development environment, the mathematical function library, the MATLAB language, graphics and the application program interface. A set of tools helps you to use MATLAB functions and files, many of which are graphical user interfaces. It includes the MATLAB desktop, command window, command history, workspace, and browsers for viewing help and the search path. The function library provides everything from elementary functions like sum, sine, cosine and complex arithmetic to sophisticated functions like the matrix inverse, matrix eigenvalues, Bessel functions and the fast Fourier transform. MATLAB has extensive facilities for displaying vectors and matrices as graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation and presentation graphics. A high-level matrix/array language with control flow statements, functions, data structures, input/output and object-oriented programming features is provided by the MATLAB language.

In engineering, MATLAB has many advantages. A short working period, a simple programming language and easy interfacing of software to hardware are some of its benefits. MATLAB is very useful for image processing and for identifying the output and behavior of a system. As a student of engineering, one should have a proper knowledge of MATLAB.
About one-third to one-half of the roughly 1 million Matlab users are involved in electronic systems design.
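For readers who want to try the matrix operations described in the article (inverse, eigenvalues, fast Fourier transform) without a MATLAB licence, here is an analogous sketch in Python with NumPy. NumPy is an assumption of this example, not a tool mentioned in the article:

```python
import numpy as np

# A small symmetric matrix to exercise the operations
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

inv_A = np.linalg.inv(A)            # matrix inverse
eigenvalues = np.linalg.eigvals(A)  # matrix eigenvalues
spectrum = np.fft.fft([1.0, 0.0, -1.0, 0.0])  # fast Fourier transform

# Multiplying A by its inverse recovers the identity matrix
assert np.allclose(A @ inv_A, np.eye(2))
```

As in MATLAB, the array is the basic data element, so whole-matrix expressions replace explicit loops.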
MATERIALS SCIENCE
microstructure of solids. Microstructure is used broadly in reference to solids viewed at the subatomic (electronic) and atomic levels, and the nature of defects at these levels.
Materials science encompasses the study of the structure and properties of any material and uses this body of knowledge to create new types of materials while materials engineering is concerned with the design and testing of engineering materials.
Materials science heavily relies on physics, chemistry and engineering fields. Physical properties of materials are usually the deciding factor in choosing which materials should be used for a particular application. This involves looking at many factors such as material composition and structure (chemistry), fracture and stress analysis (mechanical engineering), conductivity (electrical engineering), and optical and thermal properties (physics). It also involves processing and production methods. Research in this area involves many peripheral areas including crystallography, microscopy, mineralogy, photonics, and powder diffraction. Materials science encompasses the study of the structure and properties of any material and uses this body of knowledge to create new types of materials. Materials engineering is concerned with the design, fabrication, and testing of engineering materials. Such materials must simultaneously fulfill the dimensional properties, quality control, and economic requirements. Several manufacturing steps may be involved: 1. Primary fabrication - solidification or vapor deposition of homogeneous or composite materials 2. Secondary fabrication - shaping and micro-structural control by mechanical working, machining, sintering, joining and heat treatment 3. Testing - measuring the degree of reliability of a processed part, destructively or non-destructively Materials science and engineering affects quality of life, industrial competitiveness, and the global environment.
The Materials Research Society maintains three very extensive directories - professional societies, academic departments, and government organizations. Materials are synonymous with civilization. It is of interest to compare the developments in transport, communication, energy, health and environment with their impact on the strategic, civilian and rural stakeholders of the nation. Just as the economy is getting globalized, research is also undergoing globalization. Materials science plays an increasing role in providing health care. The need for prosthetic devices has increased over the years. As a spin-off from defense research, carbon-fiber composites have been used to provide lightweight replacements for metallic calipers for polio patients. A sturdy prosthetic foot was developed from a rubberized material with the special quality of water resistance. Materials technology has played a key role in building materials, biomass gasification and domestic cookstoves. These problems may appear less glamorous than those in the high technology area but are nevertheless equally challenging. The transformation of materials science into a multidisciplinary, multi-institutional and multinational endeavor has led to an interesting development. In many ways, basic research underpins developments in technology. In condensed matter physics, solid-state chemistry and physical metallurgy, notable advances have been made in new materials synthesis and in understanding phenomena such as self-organization, as in novel high temperature superconductors, aluminum matrix composites, new ternary titanium intermetallics, decagonal quasi-crystals, discotic liquid crystals, crystal engineering and the density functional theory of freezing. In carbon nano-tubes, the synthesis of Y-junction tubes and a new effect of electric field generation due to fluid flow are seen as major breakthroughs.
At first, scientists were led into thinking of nanomaterials as just another class of materials with the singular defining feature of a small size. It is now apparent that nanomaterials are a completely new development and need to be considered in their own right.

Dr. Adhikari is Assistant Professor at the Department of Natural Sciences (Physics).
A chip of silicon a quarter-inch square has the capacity of the original 1949 ENIAC computer, which occupied a city block.
RENEWABLE ENERGY SOURCES

Renewable energy is energy generated from natural resources that are regenerative. It effectively uses natural resources such as sunlight, wind, rain, tides and geothermal heat, which are naturally replenished. Each of these sources has unique characteristics which influence how and where it can be used. For this reason, renewable energy sources are fundamentally different from fossil fuels, and they do not produce the greenhouse gases and other pollutants associated with fossil fuel combustion. The traditional use of wind, water and solar energy is widespread in developed and developing countries, but the mass production of electricity using renewable energy sources has become more commonplace recently, reflecting the major threats of climate change due to pollution, the exhaustion of fossil fuels, and the environmental, social and political risks of fossil fuels and nuclear power. In 2006, about 18 percent of global final energy consumption came from renewables, with 13% coming from traditional biomass, such as wood-burning. Hydropower was the next largest renewable source, providing 3%, followed by hot water/heating, which contributed 1.3%. Modern technologies, such as geothermal, wind, solar and ocean energy, together provided some 0.8% of final energy consumption. The technical potential for their use is very large, exceeding all other readily available sources.

WIND ENERGY
Wind energy is one of the most environmentally friendly sources of renewable energy. In this type of renewable energy, air flows can be used to run wind turbines, some of which are capable of producing 5 MW of power. The power output of a turbine is a function of the cube of the wind speed, so as the wind speed increases, the power output increases dramatically. Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Wind power is the fastest growing of the renewable energy technologies. Over the past decade, global installed maximum capacity increased from 2500 MW in 1992 to just over 40,000 MW at the end of 2003, an annual growth rate of 30%. Globally, the long-term technical potential of wind energy is believed to be five times the current production. Wind strength near the earth's surface varies and thus cannot guarantee continuous power unless combined with another energy source or a storage system. Some estimates suggest that only 33% of total installed capacity can be relied on for continuous power.

WATER ENERGY
It is the most important renewable energy in this growing world. The earth is replete with water, but not all of it can be used. However, it is the most easily available source of energy in the world. Energy in water is in the form of motion or temperature difference, which can be harnessed and used. Since water is about 1000 times denser than air, even a slow-flowing stream or a moderate sea swell can yield a considerable amount of energy. Different forms of water energy are:
Hydroelectric Energy is a term usually reserved for hydroelectric dams.
Micro Hydro Systems are hydroelectric power installations that typically produce up to 100 kW of power. They are often used in water-rich areas as a remote area power supply (RAPS).
Tidal Power captures energy from the tides in the vertical direction. Tides come in, raise the water level in a basin, and roll out. Around low tide, the water in the basin is discharged through a turbine.
Ocean Thermal Energy Conversion (OTEC) uses the temperature difference between the warmer surface of the ocean and its colder lower recesses. To this end, it employs a cyclic heat engine.

SOLAR ENERGY
Solar energy is the form of energy that is collected from sunlight. Solar energy can be used to generate electricity using photovoltaic solar cells or concentrated solar power. It can also be applied to generate electricity by heating trapped air, which rotates a turbine in a solar updraft tower. It is also applied to heating buildings directly through passive solar design and to heating foodstuffs through solar ovens. Heating and cooling of air can also be done by using solar chimneys.

GEOTHERMAL ENERGY
Geothermal energy is energy obtained by trapping the earth's heat, usually from kilometers deep in the earth's crust. Even though it is expensive to build, the operating cost
More than 10,000 homes in the United States are powered entirely by solar energy.
is low, resulting in low energy costs for suitable sites. This energy derives from radioactive decay in the core of the earth, which heats the earth from the inside out.

BIOFUEL
Plants grow through photosynthesis and produce biomass. Biomass can be used directly as fuel or processed to produce biofuel. Agriculturally produced biomass fuels, such as biodiesel, ethanol and biogas, can be burned to release their stored chemical energy. Types of biomass:
Liquid Biofuel: Liquid biofuel is usually either bio-alcohol or bio-oil. Biodiesel can be used in modern diesel vehicles with little or no modification to the engine and can be made from waste vegetable and animal oils and fats. The use of biodiesel reduces emissions of carbon monoxide and other hydrocarbons by 20 to 40%.
Solid Biomass: Direct use of biomass is usually in the form of combustible solids, usually wood, the biogenic portion of municipal solid waste or combustible field crops. Most bio-matter, including dried manure, can actually be burnt to heat water and to drive turbines.
Biogas: Biogas can be produced from waste streams such as paper, sewage, animal wastes, etc. These various waste streams have to be slurried together and allowed to ferment naturally to produce methane gas.

With the unchecked use of fossil fuels, it was imminent that there would be fuel shortages. Renewable energy can definitely be an alternative to fossil fuels, but only if we recognize its value and importance.
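The cube-law relationship noted in the wind energy section (power grows with the cube of wind speed) can be illustrated with a short Python sketch. The rotor size, air density and power coefficient below are illustrative assumptions, not figures from the article:

```python
import math

def wind_power(v, rotor_diameter=80.0, rho=1.225, cp=0.40):
    """Electrical power (W) extracted from wind at speed v (m/s).

    rotor_diameter: assumed rotor diameter in metres
    rho: air density in kg/m^3 (sea-level value)
    cp: assumed power coefficient (fraction of wind energy captured)
    """
    swept_area = math.pi * (rotor_diameter / 2) ** 2
    return 0.5 * rho * swept_area * v ** 3 * cp

# Doubling the wind speed multiplies the power output by 2**3 = 8,
# which is why windier sites are so strongly preferred.
assert abs(wind_power(12.0) / wind_power(6.0) - 8.0) < 1e-9
```

This cubic sensitivity is also why the 33% firm-capacity estimate quoted above is so low: small lulls in wind speed cause large drops in output.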
Modern society depends heavily on a sufficient supply of energy. There has been a huge dependence on fossil fuels to meet the energy needs and demands of the industrial, household and transport sectors throughout the world. A global discussion has begun on how long they will last and what alternatives are feasible. According to the United States Energy Information Administration, the world demand for oil averaged 83.7 million barrels per day in 2005 and 85.3 million barrels per day in 2006. The forecast for oil consumption in 2007 is 87.2 million barrels per day. Between 2003 and 2005, there was 2.22 percent growth in the demand for oil. If this growth rate persists, efforts to meet the increasing demand will come to a standstill, because worldwide oil reserves are set to be drained by 2028. So we are very close to depleting global oil reserves on the supply side while suffering from economic pressures on the demand side. As our country relies completely on fossil fuels for transportation, it imports all of its petroleum products from India. The price of petroleum products is increasing every year, but since the use of alternative sources of energy is not practiced at present, the government is forced to increase the price of petroleum products in order to fulfill the demand for energy supply. Moreover, the cross-border
The total fossil fuel used in the year 1997 was the result of 422 years' worth of all the plant matter that grew on the ancient earth.
10 percent of total US generating capacity is fueled by hydropower, about the same as natural gas.
SIMULATION OF A TRUNKED NETWORK USING MATLAB
Gaurav Ratna Tuladhar
Imagine you are on your way home and, unfortunately, you're stuck in a traffic jam. You want to call home to inform them that you will be late, but to your utter dismay your call cannot connect and you are presented with the very familiar pre-recorded voice: "Sorry! No channels are available at the moment. Please try later." Ever wondered why your call could not connect and why there isn't a channel available for you? The reason for this is rooted in the very nature of a communication network. A telecommunication network has to provide service to a large number of users/clients. Each user requires a channel to communicate with another. One crude approach would be to provide each user in the network with a dedicated channel. This would ensure that a call request by the user is never blocked. On the other hand, such an approach would be inefficient, cumbersome and too expensive for the operator. Hence, in telecommunications we rely on trunking to serve a large number of users with a limited number of channels/servers.
In a trunked network system, there are only a limited number of channels intended to serve a very large group of users. The channels are placed in a common pool for every user to share. No user has a fixed channel; rather, the channels are assigned to users from the pool on an on-demand basis. When a user completes a call, the channel is placed back into the pool. The use of a trunked network is based on trunking theory, which exploits the statistical behavior of human users. This theory provides the basis for network operators to predict the number of channels required to serve the demand of their customers with satisfactory service quality. In a trunked network, since the number of available channels is less than the number of users, there is a possibility of all the channels being busy when a user attempts a call, in which case the call will be blocked. The blocked call may be either cleared or kept in a queue. Accordingly, we have two systems: Blocked Calls Cleared (BCC) and Blocked Calls Delayed. In a BCC trunked network, there is always a finite probability of a call attempt being blocked. In such a system, once a call is
1 litre of regular gasoline is the time-rendered result of about 23.5 metric tonnes of ancient phytoplankton material deposited on the ocean floor.
The traffic capacity (A) of a BCC trunked network system is tabulated for various values of GoS and numbers of channels (N) in a standard Erlang B table. The aim of this simulation is to model a BCC trunked network, simulate the system to determine the blocking probability under different conditions, and verify the results against the Erlang B table.

SIMULATION
The simulation was implemented in MATLAB. The following approach was developed based on the nature of a BCC trunked network: It is assumed that 1000 calls are attempted through the network. Then we generate the arrival instance (arrvInst) and terminating time (termTime) for each call. Inter-arrival times and holding times are exponentially distributed random values. Appropriate values for the mean inter-arrival time (mIAT) and mean holding time (mHT) are chosen and using these
The first ever underground trunk telephone cable went from London to Birmingham which were then termed the Midland and Northern Counties.
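The simulation approach described above was implemented in MATLAB; the Python sketch below re-creates the same Blocked-Calls-Cleared logic for readers who want to experiment. It is an illustrative re-creation, not the article's code, with parameters taken from the first row of Table 1:

```python
import random

def simulate_bcc(n_channels, mean_iat, mean_ht, n_calls=100_000, seed=1):
    """Blocked-Calls-Cleared simulation; returns blocking probability.

    Inter-arrival and holding times are exponentially distributed,
    as in the article. A call arriving when all channels are busy
    is cleared (lost), not queued.
    """
    random.seed(seed)
    t = 0.0
    busy_until = []  # termination times of calls in progress
    blocked = 0
    for _ in range(n_calls):
        t += random.expovariate(1.0 / mean_iat)        # next arrival instant
        busy_until = [x for x in busy_until if x > t]  # release finished channels
        if len(busy_until) >= n_channels:
            blocked += 1                               # all channels busy: cleared
        else:
            busy_until.append(t + random.expovariate(1.0 / mean_ht))
    return blocked / n_calls

# N = 10 channels, mean IAT = 3 s, mean HT = 18.648 s (A = 6.216 E)
# should give a blocking probability near 5%, as in Table 1.
```

Increasing the number of attempted calls reduces the statistical scatter of the estimate, which is why the sketch defaults to far more than the article's 1000 calls.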
Table 1: Simulated Blocking % Compared with Blocking % in Erlang B Table

 N | Mean Inter-arrival Time | Arrival Rate (λ) | Mean Holding Time (th) | Erlang Traffic (A) | Simulated Blocking % | Blocking % from Table
10 | 3 | 0.33 | 18.648 | 6.216 | 5.09  | 5
10 | 5 | 0.20 | 37.555 | 7.511 | 9.88  | 10
10 | 7 | 0.14 | 60.312 | 8.616 | 14.67 | 15
10 | 5 | 0.20 | 59.75  | 11.95 | 29.86 | 30
10 | 2 | 0.50 | 29.36  | 14.68 | 39.15 | 40
20 | 3 | 0.33 | 39.54  | 13.18 | 1.96  | 2
20 | 5 | 0.20 | 76.25  | 15.25 | 4.96  | 5
20 | 3 | 0.33 | 52.83  | 17.61 | 9.49  | 10
20 | 5 | 0.20 | 98.25  | 19.65 | 14.59 | 15
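The "Blocking % from table" column can be reproduced directly from the Erlang B formula. Rather than evaluating the factorials, a numerically convenient recursive form can be used; here is a Python sketch (not from the article):

```python
def erlang_b(a, n):
    """Erlang B blocking probability for offered traffic a (erlangs)
    over n channels, via the standard recursion
    B(a, 0) = 1;  B(a, k) = a*B(a, k-1) / (k + a*B(a, k-1))."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

# Spot-check against Table 1: A = 6.216 E over N = 10 channels
print(round(100 * erlang_b(6.216, 10), 1))  # prints 5.0
```

The recursion avoids the overflow that direct evaluation of A^N/N! would cause for large channel counts.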
A mobile phone is designed to operate at a maximum power level of 0.6 watts. A household microwave oven uses between 600 and 1,100 watts.
Not only this, PLC may be used in industrial automation, security and safety, remote control and monitoring, etc. Various researches are underway on both the narrowband and broadband applications of PLC. PLC is a proven, competitive alternative technology which is supported by utilities and manufacturers for control and communication applications, because of the advantage of utilizing the pre-existing power line cable as a communication channel.

[Block diagram: transmitter and receiver chains - Modulator, Power Coupler, Band Pass Filter, Amplifier, Demodulator (Phase Locked Loop), Low Pass Filter, Audio Amplifier, Speaker]

The transmitter part consists of a frequency modulator through which the input audio (or voice) is given. The process of impressing a low-frequency signal onto a higher-frequency carrier signal is called modulation. A voltage-controlled oscillator can be used as a frequency modulator. The audio signal is given as an input to the VCO, which modulates it and gives the modulated signal; this is then coupled to the power line and transmitted. A power coupler is used on both the transmitter and receiver sides to block or pass signals. The coupler consists of coupling capacitors and a transformer called an isolation transformer. The coupling capacitor blocks the low-frequency power signals and passes the high-frequency carrier signals. The capacitors have to be of high voltage rating and suited to high frequencies. The isolation transformer provides galvanic isolation and impedance matching, and it passes the high-frequency carrier signals.
The Telephone Company Ltd issued the first known telephone directory on January 1880 containing details of over 250 subscribers.
The received signal is first sent through an active band pass filter. The active band pass filter used is a narrow band pass filter whose quality factor Q > 20. In a narrow band pass filter, the output voltage peaks at the central frequency. The band pass filter cuts off the high- and low-frequency noise and passes the signal with frequency near the carrier frequency. The signal from the band pass filter is then passed through an amplifier. The frequency-modulated signal transmitted via the power lines is now demodulated at the receiver side to recover the original audio signal. Demodulation of the signal is done using a Phase Locked Loop (PLL). The free-running frequency of the PLL is set to the center frequency of the VCO by the timing capacitor and resistor. The demodulated signal is then passed through an audio amplifier that amplifies low-power audio signals (signals of frequencies between 20 Hz and 20 kHz, the human range of hearing) to a level suitable for driving loudspeakers. Finally, the output from the audio amplifier is given to the speaker and the audio can be heard. This system can be implemented in our daily life. We can play a song in one room of the home and listen to it from the speakers in any other room without the need for extra wires.
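The narrow band-pass requirement (Q > 20) ties the filter's bandwidth directly to its centre frequency, since B = f0/Q. A quick Python illustration with an assumed 100 kHz carrier (the article does not specify the carrier frequency):

```python
def bandpass_bandwidth(f0_hz, q):
    """-3 dB bandwidth (Hz) of a band-pass filter with centre
    frequency f0_hz and quality factor q (B = f0 / Q)."""
    return f0_hz / q

# An assumed 100 kHz carrier with Q = 25 passes only a 4 kHz band,
# rejecting both the 50 Hz mains energy far below the band and
# high-frequency noise above it.
assert bandpass_bandwidth(100_000, 25) == 4000
```

The higher the Q, the narrower the passband around the carrier and the better the noise rejection before the PLL demodulator.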
is easily available with the test circuit diagram and the truth table.
Microcontroller section This is the most important part of the system. The 4-bit output of the DTMF IC is interfaced with the microcontroller, which is programmed to constantly remain in a loop checking for input from the DTMF chip. At the arrival of an input, the microcontroller is interrupted. The microcontroller then decodes each input signal to the corresponding alphanumeric character according to the algorithm table (see Table 1). The algorithm table shows the processing of the microcontroller. For instance, if key 2 is entered twice within an interval defined by the programmer, it is decoded as 'b'; similarly, on pressing key 3 once, it is decoded as 'd'. This algorithm is analogous to SMS typing in a mobile communication system. The alphanumeric character is then displayed on the LCD. In this way a message is conveyed from the sender to the receiver. Students interested in learning about microcontrollers can use this project to learn the basics. Besides programming, this project will be helpful in learning to interface external devices to the microcontroller.
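The multi-tap decoding rule described above can be prototyped in a few lines of Python before committing it to microcontroller code. The key map below follows Table 1; timeout handling between presses is omitted for brevity:

```python
# Multi-tap keypad decoding, as on an SMS keypad (mapping per Table 1)
KEYMAP = {
    "0": "0 ",    "1": ".,;1",  "2": "abc2",  "3": "def3",
    "4": "ghi4",  "5": "jkl5",  "6": "mno6",  "7": "pqrs7",
    "8": "tuv8",  "9": "wxyz9",
}

def decode(key, presses):
    """Character produced when `key` is pressed `presses` times
    within the defined time interval; extra presses wrap around."""
    chars = KEYMAP[key]
    return chars[(presses - 1) % len(chars)]

assert decode("2", 2) == "b"   # key 2 twice  -> 'b', as in the article
assert decode("3", 1) == "d"   # key 3 once   -> 'd'
```

On the actual hardware, the same lookup would live in the microcontroller's interrupt routine, with a timer deciding when one multi-tap sequence ends and the next begins.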
Table 1: Codes for the keys according to the number of times pressed within the defined time interval

Key | Press 1 | Press 2 | Press 3 | Press 4 | Press 5
 0  | 0       | space   |         |         |
 1  | .       | ,       | ;       | 1       |
 2  | a       | b       | c       | 2       |
 3  | d       | e       | f       | 3       |
 4  | g       | h       | i       | 4       |
 5  | j       | k       | l       | 5       |
 6  | m       | n       | o       | 6       |
 7  | p       | q       | r       | s       | 7
 8  | t       | u       | v       | 8       |
 9  | w       | x       | y       | z       | 9
 *  | Backspace
[Block diagram: Sender → Landline Phone → DTMF Decoder → Microcontroller]
The first telephone networks were tiny, connecting a handful of users in closely located buildings, with a small switchboard in between.
HVDC TRANSMISSION
with direct current. The availability of transformers and the development and improvement of induction motors at the beginning of the 20th century led to greater appeal and use of AC transmission. Through research and development in Sweden at Allmänna Svenska Elektriska Aktiebolaget (ASEA), an improved multi-electrode grid-controlled mercury arc valve for high powers and voltages was developed in 1929. Experimental plants were set up in the 1930s in Sweden and the USA to investigate the use of mercury arc valves in conversion processes for transmission and frequency changing. DC transmission became practical when long distances were to be covered or where cables were required. The increased need for electricity after the Second World War stimulated research, particularly in Sweden and in Russia. In 1950, a 116 km experimental transmission line was commissioned from Moscow to Kashira at 200 kV. The first commercial HVDC line, built in 1954, was a 98 km submarine cable with ground return between the island of Gotland and the Swedish mainland. Thyristors were applied to DC transmission in the late 1960s, and solid-state valves became a reality. In 1969, a contract for the Eel River DC link in Canada was awarded as the first application of solid-state valves for HVDC transmission. Today, the highest functional DC voltage for DC transmission is +/- 600 kV on the 785 km transmission line of the Itaipu scheme in Brazil. DC transmission is now an integral part of the delivery of electricity in many countries throughout the world.

WHY USE DC TRANSMISSION?
The question is often asked, "Why use DC transmission?" One response is that losses are lower, but this is not correct. The level of losses is designed into a transmission system and is regulated by the size of conductor selected.
DC and AC conductors, either as overhead transmission lines or submarine cables, can have lower losses but at higher expense, since a larger cross-sectional area generally gives lower losses but costs more. When converters are used for DC transmission in preference to AC transmission, it is generally an economic choice driven by one of the following reasons:
1. An overhead DC transmission line with its towers can be designed to be less costly per unit of length than an equivalent AC line designed to transmit the same level of electric power. However, the DC converter stations at each end are more costly than the terminating stations of an AC line, so there is a breakeven distance above which the total cost of DC transmission is less than that of its AC alternative.
2. If transmission is by submarine or underground cable, the breakeven distance is much shorter than for overhead transmission. It is not practical to consider AC cable systems exceeding 50 km, but DC cable transmission systems hundreds of kilometers long are in service, and even distances of 600 km or greater have been considered feasible.
3. Some AC electric power systems are not synchronized with neighboring networks even though the physical distance between them is quite small. This occurs in Japan, where half the country runs a 60 Hz network and the other half a 50 Hz system. It is physically impossible to connect the two together by direct AC methods to exchange electric power between them. However, if a DC converter station is located in each system with an interconnecting DC link between them, the required power can be transferred even though the connected AC systems remain asynchronous.

ADVANTAGES
- If the cost of converter stations is excluded, DC overhead lines and cables are less expensive than AC lines and cables.
- A DC link is asynchronous.
- Corona loss and radio interference are lower.
- Line length is not restricted by stability.
- Interconnecting two separate AC systems via a DC link does not increase the short-circuit capacity, and thus the circuit breaker ratings, of either system.
- DC line losses are smaller than for a comparable AC line.
- A DC line has two conductors rather than the three of an AC line and thus requires two-thirds as many insulators; the required towers and right-of-way are also narrower.
DISADVANTAGES
- The converters generate harmonic voltages and currents on both the DC and AC sides, and therefore filters are needed.
- The converters consume reactive power.
- The DC converter stations are expensive.
- DC circuit breakers are difficult to design.
A charged piece of acrylic behaves like a charged high voltage capacitor which can retain almost 1000 Joules of electrostatic energy.
Some notable HVDC transmission systems in use worldwide:
- In Itaipu, Brazil, HVDC was chosen to supply 50 Hz power into a 60 Hz system, and to economically transmit a large amount of hydro power (6,300 MW) over a large distance (785 km).
- In the Leyte-Luzon Project in the Philippines, HVDC was chosen to enable the supply of bulk geothermal power across an island interconnection and to improve the stability of the Manila AC network.
- In the Rihand-Delhi Project in India, HVDC was chosen to transmit bulk (thermal) power (1,500 MW) to Delhi with minimum losses, the least right-of-way, and better stability and control.
- In Garabi, an independent transmission project (ITP) transferring power from Argentina to Brazil, an HVDC back-to-back system was chosen to ensure the supply of 50 Hz bulk power (1,000 MW) to a 60 Hz system under a 20-year power supply contract.
- In Gotland, Sweden, HVDC was chosen to connect a newly developed wind power site to the main city of Visby, in consideration of the environmental sensitivity of the project area (an archaeological and tourist area) and to improve power quality.
- In Queensland, Australia, HVDC was chosen in an ITP to interconnect two independent grids (New South Wales and Queensland) to enable electricity trading between the two systems (including change of direction of power flow), ensure very low environmental impact, and reduce construction time.
There are over 80 existing HVDC transmission systems worldwide that use mercury arc rectifiers, thyristors or IGBTs as converters to realize high-voltage DC transmission.

USE OF HVDC VOLTAGE HIGHER THAN 600 KV
The growth and extension of power grids, and consequently the introduction of higher voltage levels, have been driven by large growth in power demand over decades. In the next 20 years, power consumption in developing and emerging countries is expected to increase by 220%. Priority in future developments will be given to low costs at still adequate technical quality and reliability.
HVDC technology is the solution, especially for bulk power transmission from remote generating stations over long distances to load centers. In the future it will be necessary to transmit bulk power of more than 4,000 MW over more than 2,000 km, so it is important to develop HVDC transmission technologies for voltage levels above 600 kV. Research is going on worldwide into the possibility of transmitting at 800 kV; at present it is a challenge for engineers to make it a reality.
stability of the system and reliability of the power system as a whole. Although such problems are of little general concern in our country, in countries with a deregulated power delivery mechanism they are a major concern of power generators, transmitters, distributors and customers, because reliability and quality are paramount there. This headache of power system operators was earlier mitigated with tap-changing transformers, series compensators and shunt compensators; with the advent of power electronics, and more specifically the emergence of FACTS technology, the problem has been addressed far more effectively.
The longest HVDC link in the world is the Inga-Shaba 1700 km 600 MW link connecting the Inga Dam to the Shaba copper mine in Congo.
Among the FACTS devices available to date, the most important can be considered the static synchronous compensator (STATCOM), which is described in the following text. Basically, a STATCOM is a static synchronous generator operated as a shunt-connected static VAR compensator whose capacitive or inductive output
The Ekibastuz-Kokshetau power line in Kazakhstan holds the record for operating at the highest transmission voltage in the world (1,150 kV).
Fig: Two machine system with STATCOM
The reactive power exchange between the converter and the AC system can be controlled. If the amplitude of the output voltage U is increased above that of the AC system voltage UT, a leading current is produced, i.e. the STATCOM is seen as a capacitor by the AC system and reactive power is generated. Decreasing the amplitude of the output voltage below that of the AC system produces a lagging current, and the STATCOM is seen as an inductor; in this case reactive power is absorbed. If the amplitudes are equal, no power exchange takes place. A practical converter, however, is not lossless: with only a DC capacitor, the energy stored in the capacitor would be consumed by the internal losses of the converter. By making the output voltages of the converter lag the AC system voltages by a small angle, the converter absorbs a small amount of active power from the AC system to balance those losses. The same phase-angle adjustment can also be used to control reactive power generation or absorption by increasing or decreasing the capacitor voltage Udc, and thereby the output voltage U. Instead of a capacitor, a battery can also be used as the DC energy source. In that case the converter can control both reactive and active power exchange with the AC system. The capability of controlling active as well as reactive power exchange is a significant feature which can be used effectively in applications requiring power oscillation damping, to level peak power demand, and to
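The voltage-magnitude mechanism described above can be sketched as follows. For a lossless coupling reactance X between the converter output voltage U and the in-phase system voltage UT, the reactive power injected is roughly Q = UT(U - UT)/X. The formula, sign convention and per-unit values here are textbook-style assumptions for illustration, not taken from the article.

```python
# Minimal sketch of STATCOM operating modes (per-unit, lossless model).
def statcom_q(u_conv, u_sys, x):
    """Reactive power injected into the AC system by the converter.
    Positive Q = capacitive (generating vars), negative = inductive."""
    return u_sys * (u_conv - u_sys) / x

def mode(u_conv, u_sys, x=0.1):
    q = statcom_q(u_conv, u_sys, x)
    if q > 0:
        return "capacitive"   # U raised above UT -> leading current
    if q < 0:
        return "inductive"    # U lowered below UT -> lagging current
    return "floating"         # equal amplitudes -> no exchange

print(mode(1.05, 1.00))  # capacitive
print(mode(0.95, 1.00))  # inductive
print(mode(1.00, 1.00))  # floating
```

Varying only the converter voltage magnitude thus moves the device smoothly between var generation and var absorption.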
The Ameralik Span is the longest span of an electrical overhead powerline in the world with a span width of 5,376 metres.
INTRODUCTION
There's an unprecedented multidisciplinary convergence of scientists dedicated to the study of a world so small we can't see it -- even with a light microscope. That world is the field of nanotechnology, the realm of atoms and nanostructures. Nanotechnology is so new that much of nanoscience requires you to forget what you know and start learning all over again.

A nanometer (nm) is one-billionth of a meter, smaller than the wavelength of visible light and a hundred-thousandth the width of a human hair. Nanotechnology includes the many techniques used to create structures at a size scale below 100 nm. This is similar to our bodies, which are assembled in a specific manner from millions of living cells. At the atomic scale, elements are at their most basic level. On the nanoscale, we can potentially put these atoms together to make almost anything.

HOW NEW IS NANOTECHNOLOGY?
In 1959, the physicist Richard Feynman gave a lecture to the American Physical Society called "There's Plenty of Room at the Bottom." The focus of his speech was the field of miniaturization and how he believed man would create increasingly smaller, more powerful devices. In 1986, K. Eric Drexler wrote "Engines of Creation" and introduced the term nanotechnology.

THE WORLD OF NANOTECHNOLOGY
Nanotechnology is rapidly becoming an interdisciplinary field. Biologists, chemists, physicists and engineers are all involved in the study of substances at the nanoscale. One of the exciting and challenging aspects of the nanoscale is the role that quantum mechanics plays in it. The rules of quantum mechanics are very different from those of classical physics, which means that the behavior of substances at the nanoscale can sometimes contradict common sense. You can't walk up to a wall and immediately teleport to the other side of it, but at the nanoscale an electron can -- it's called electron tunneling. Substances that are insulators, meaning they can't carry an electric charge, in bulk form might become semiconductors when reduced to the nanoscale.

So what does this all mean? Right now, it means that scientists are experimenting with substances at the nanoscale to learn about their properties and how we might be able to take advantage of them in various applications. Engineers are trying to use nano-size wires to create smaller, more powerful microprocessors. Doctors are searching for ways to use nanoparticles in medical applications.

PRODUCTS WITH NANOTECHNOLOGY
Many products on the market already benefit from nanotechnology, including sunscreen, self-cleaning glass, clothing, scratch-resistant coatings, antimicrobial bandages, and swimming pool cleaners and disinfectants. Before long, we'll see dozens of other products that take advantage of nanotechnology, ranging from Intel microprocessors to bio-nano batteries, capacitors only a few nanometers thick. While this is exciting, it's only the tip of the iceberg as far as how nanotechnology may impact us in the future.

THE FUTURE OF NANOTECHNOLOGY
In the world of "Star Trek," machines called replicators can produce practically any physical object, from weapons to a steaming cup of Earl Grey tea. Visions of the nano future include machines that patrol our arteries, diagnosing ailments and fighting disease; military battle-suits that deflect explosions; computer chips no bigger than specks of dust; and clouds of miniature space probes transmitting data from the atmospheres of Mars or Titan.
Nanotubes are 100 times stronger than steel, conduct electricity much better than copper, and are also excellent heat conductors.
Many nanotechnology experts feel that these applications are well outside the realm of possibility, at least for the foreseeable future. They caution that the more exotic applications are only theoretical. Some worry that nanotechnology will end up like virtual reality: the hype will continue to build until the limitations of the field become public knowledge, and then interest (and funding) will quickly dissipate.

NANOTECHNOLOGY CHALLENGES, RISKS AND ETHICS
The most immediate challenge in nanotechnology is the need to learn more about materials and their properties
There are now over 600 products available that use nanotechnology.
Governments around the world invested $4.1bn in nanotechnology R&D last year, compared with $432m in 1997 and $1.5bn in 2001.
BIOMETRICS
Biometrics is an emerging technology: the study of methods for uniquely recognizing individuals based upon one or more intrinsic physical or behavioral traits. It is the development of statistical and mathematical methods applicable to data analysis problems in the biological sciences. The term biometrics is derived from the Greek words bio (life) and metric (to measure). According to current usage, biometrics deals with the use of computer and signal processing technologies for measuring and analyzing a person's physiological or behavioral characteristics for identification and verification purposes.
Biometric characteristics can be divided into two main classes:
1. Physiological characteristics are related to the shape of the body. The oldest traits, which have been used for more than 100 years, are fingerprints. Other examples are face recognition, hand geometry and iris recognition.
2. Behavioral characteristics are related to the behavior of a person. The first characteristic to be used, still widely used today, is the signature. More modern approaches are the study of keystroke dynamics and of voice.
Other biometric strategies are being developed, such as those based on gait (way of walking), retina, hand veins, ear canal, facial thermogram, DNA, odor and scent, and palm prints. Biometrics is used in two major ways:
- Identification is determining who a person is. It involves taking the measured characteristic and trying to find a match in a database containing records of people and that characteristic, and it is often used in determining the identity of a suspect from crime scene information.
- Verification is determining whether a person is who they say they are. It involves taking the measured characteristic and comparing it to the previously recorded data for that person.

THE BIOMETRIC SYSTEM
Biometric devices consist of a sensor, an interface, a DSP
Fig: Biometric system block diagram (user template, database, authentication)
wave is then processed by the DSP processor to identify the subject. The sensor's RF field operates under the control of an algorithm that periodically looks for the presence of a fingertip; if the sensor doesn't detect a finger, it reverts to sleep mode. When a finger starts to move across the sensor, the processor scans the image.

ANALYSIS
Analysis is carried out by converting the biometric of an individual into digital form and comparing it with that individual's template. The measurement scale adopted for analysis is the Hamming distance (HD), which gives the expert a measure of how closely the measured and digitized biometric matches the digitized template. The scale runs from 0 to 1: two identical bit strings give an HD of 0, while two totally dissimilar bit strings give an HD of 1. In real-world biometric systems, however, the biometric measure is expressed as FAR and FRR. The FAR measures the percentage of invalid users who are incorrectly accepted as genuine users, while the FRR measures the percentage of valid users rejected as impostors. FAR and FRR are traded against
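The normalized Hamming distance described above can be sketched in a few lines; the bit strings below are made-up examples, not real biometric templates.

```python
# Normalized Hamming distance between two equal-length bit strings:
# 0.0 = identical, 1.0 = completely dissimilar.
def hamming_distance(a: str, b: str) -> float:
    if len(a) != len(b):
        raise ValueError("bit strings must have equal length")
    mismatches = sum(x != y for x, y in zip(a, b))
    return mismatches / len(a)

print(hamming_distance("10110010", "10110010"))  # 0.0 -> same subject
print(hamming_distance("10110010", "01001101"))  # 1.0 -> totally dissimilar
print(hamming_distance("10110010", "10111010"))  # 0.125 -> close match
```

A real system would threshold this distance to decide whether the captured biometric and the stored template belong to the same person.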
The CIA, FBI, and NASA have all implemented retinal scanning.
each other, and a common measure is obtained: the rate at which the accept and reject error rates are equal. In the current context, the analysis is done using image processing techniques, which are part of digital signal processing.

COMPARISON
Biometric systems are compared against each other for performance based on seven criteria:
- Universality: each person should have the characteristic.
- Uniqueness: how well the biometric separates one individual from another.
- Permanence: how well the biometric resists aging.
- Collectibility: ease of acquisition for measurement.
- Performance: accuracy, speed, and robustness of the technology used.
- Acceptability: degree of approval of the technology.
- Circumvention: ease of use of a substitute.
A comparison of these seven criteria for various biometric inputs is shown in the table below. Another key aspect is how user-friendly a system is. The process should be quick and easy, such as having a picture taken by a video camera, speaking into a microphone, or touching a fingerprint scanner. Low cost is also important; it includes the initial cost of the sensor and the matching software, and the life-cycle cost of system administration and an enrollment operator. No system is perfect, of course, including the one we propose. Any biometric system is prone to two basic types of errors: a false positive and a false negative. The false accept rate (FAR), or false match rate (FMR), is the probability that the system incorrectly declares a successful match between the input pattern and a non-matching pattern in the database. In a false negative, on the other hand, the system fails to match the biometric input to its stored template. This happens because two different samples of the same biometric identifier are never identical.
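The FAR/FRR trade-off against a decision threshold can be sketched with made-up match scores; the equal error rate is the threshold at which the two error rates cross. All scores below are invented for illustration.

```python
# Sketch of the FAR/FRR trade-off against a match-score threshold.
# Scores are invented for illustration (higher = better match).
genuine  = [0.9, 0.8, 0.7, 0.55, 0.45, 0.35]  # same-person comparisons
impostor = [0.6, 0.5, 0.4, 0.30, 0.20, 0.10]  # different-person comparisons

def far(threshold):
    """Fraction of impostors wrongly accepted at this threshold."""
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):
    """Fraction of genuine users wrongly rejected at this threshold."""
    return sum(s < threshold for s in genuine) / len(genuine)

# Sweep thresholds; a strict threshold raises FRR, a lax one raises FAR.
for t in [0.2, 0.5, 0.8]:
    print(f"t={t:.1f}  FAR={far(t):.2f}  FRR={frr(t):.2f}")
```

With these toy scores the two rates are equal at a threshold of 0.5, which would be reported as the system's equal error rate.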
* Circumventability is listed with reversed colors because low is desirable here instead of high.
There is no one perfect biometric that fits all needs. There are, however, some common characteristics needed to make a biometric system usable. First, the biometric must be based upon a distinguishable trait, i.e., two biometric samples of different individuals should not be alike. Technologies such as face or iris recognition have come into widespread use because of their distinguishable traits. False negatives can arise because the sensed biometric data is noisy or distorted; for example, a cut leaves the fingerprint with a scar, and noisy data can also result from improperly maintained sensors. Alternatively, the biometric data acquired during authentication may differ greatly from the data used to generate the template during enrollment; for example, the user might blink during iris capture. Some errors might be avoided by using
Fingerprints on paper, cardboard and unfinished wood can last for up to forty years unless exposed to water.
improved sensors. For instance, optical sensors capture fingerprint details better than capacitive fingerprint sensors. Systems based on multiple traits can achieve very low error rates, but in both cases there is a trade-off between cost and accuracy, and a system based on multiple traits can be burdensome for the legitimate user as well.

CURRENT ISSUES AND CONCERNS
As technology advances, more private companies and public utilities may use biometrics for safe, accurate identification. These advances are likely to raise concerns such as:
1. Identity preservation and privacy infringement. The biggest concern is that once a biometric source has been compromised, it is compromised for life, because users can never change their biometric source. Another concern is how a person's biometric, once collected, can be protected. Some argue that if a person's biometric data is stolen it might allow someone else to access personal information or financial accounts, in which case the damage could be irreversible.
2. Biometric theft, impersonation and victimization. There are concerns about whether personal information taken through biometric methods can be misused, tampered with, or sold, e.g. by criminals stealing, rearranging or copying the biometric data. The data obtained using biometrics can also be used in unauthorized ways without the individual's consent.
3. Physical harm to the individual from biometric instruments. Some believe this technology can cause physical harm to an individual using these methods, or that the instruments used are unsanitary. For example, there are concerns that retina scanners might not always be clean.

MAJOR AREAS OF APPLICATION
In the early days, biometric systems, particularly automatic fingerprint identification systems (AFIS), were widely used in forensics for criminal identification.
Due to recent advancements in biometric sensors and matching algorithms (through image processing), biometric authentication has been deployed in a large number of civilian and government applications. Biometrics is being used for physical access control, computer log-in, welfare disbursement, international border crossing (immigration and naturalization), health care, forensic voice and tape analysis, and personal identification. It can also be used to verify a customer during transactions conducted via
telephone and Internet (electronic commerce and electronic banking). In automobiles, biometrics is being adopted to replace keys for keyless entry and keyless ignition. Due to increased security threats, the ICAO (International Civil Aviation Organization) has approved the use of e-passports (passports with an embedded chip containing the holder's facial image and other traits).

STANDARDIZATION EFFORTS
As biometric use is a global phenomenon, standardization of data is being attempted on a universal scale. A Common Biometric Exchange File Format (CBEFF) for biometric data interchange and interoperability is under development by the International Biometric Industry Association (IBIA), which is also attempting to integrate measurement schemes to enhance the reliability and use of biometric data. Much of biometrics standardization was initiated in the USA, but in 2002 a new ISO/IEC JTC 1 sub-committee, SC 37, was established; it first met in Orlando in December 2003. ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization; in the field of information technology they have established Joint Technical Committee 1, ISO/IEC JTC 1, on Information Technology. The goal is to ensure a high-priority, focused, and comprehensive approach worldwide for the rapid development and approval of formal international biometric standards. These standards are necessary to support interoperability and data interchange among applications and systems, and to facilitate the rapid deployment of significantly better open, standards-based security solutions for purposes such as homeland defense and the prevention of ID theft.
At last, we can hope that multidisciplinary research teams in both industry and academia can design the right blend of technologies to create practical biometric applications and integrate them into large systems without introducing additional vulnerabilities.

REFERENCES:
Electronics For You, April 2008, pp. 32-36
http://www.globalsecurity.org
http://www.spectrum.ieee.org
The first known example of biometrics was a form of fingerprinting used in China in the 14th century to distinguish young children.
BACKGROUND
CONTEMPORARY APPLICATIONS

MOBILE APPLICATIONS
Mobile applications occupy the largest market segment, at 44%. LEDs are generally used as backlights or indicators in mobile devices such as laptop computers, cell phones, MP3 players, gaming devices and GPS devices.
SIGNS AND DISPLAYS
LEDs have transformed how electronic signs, message reader boards and signal displays are made. LED displays have numerous applications: traffic signs, message reader boards, electronic billboards, spectacular sign displays, sports stadium scoreboards, signaling indicators and large-scale outdoor public art. These applications occupy 17% of the LED market.
AUTOMOTIVE LIGHTING
In automobiles, LEDs today are mainly used for secondary functions such as dashboard lighting, turn signals and side markers. But with the development of high-brightness LEDs, they are now also being used as headlamps. The current share of automotive applications is 15% of the LED market.
The illumination sector grew rapidly in 2007, with revenue more than 60% higher than in 2006. The illumination market is expected to reach $1.37 billion by 2012.
Gallium LED life is rated at 40,000 hours, which equates to twenty years of useful service when operated during regular business hours.
SEEE ACTIVITIES
The Society of Electrical and Electronics Engineers (SEEE) has been actively involved in improving the personal and professional qualities of the students. It is a non-profit, non-political organization established with the motive of volunteering. It coordinates and organizes various welfare activities for the students of electrical & electronics engineering, conducts social welfare activities, and improves interaction with students of other departments of the university and other institutions in Nepal. Every event of SEEE is dedicated to the holistic development of its members.
SEEE started its activities by appointing its executive board, which included Bijaya Ghimire as President, Bishal Silwal as Vice-President, Pravesh Kafle as Secretary, Suresh Joshi as Treasurer, and other executive members. The first event of SEEE was the orientation program organized for the first year students on 7 Sept 2007. The welcome program for the first year students was organized in the same month under the supervision of the department, with the Dean of the School of Engineering as the chief guest. At the same program the fortnightly newsletter of SEEE, Tech-Brief, was launched. The most awaited annual event of SEEE, the SEEE Running Shield Football Tournament, was held in November; the deciding match of the league, between third year White and the fourth year, was won by the third year 2-1. SEEE also participated in Environment Week-08, organized by other clubs of KU. After that, the SEEE quiz contest was organized: fifteen teams participated and the top four advanced to the final, which was held on 13 June 2008 and won by Bikram Tiwari and Bibek Shrestha. After the quiz, the SEEE Girls Football competition was held on 20 June 2008; the first and second year students combined against the third and fourth year students, and the senior team won the match 2-0. The SEEE annual magazine Encipher will be launched on the annual day of SEEE. SEEE thanks all the students and faculty for their support and active participation in its activities.
Global lighting energy use is significant, totaling approximately $230 billion per year.
After entering orbit around Jupiter, the spacecraft will orbit Callisto, then Ganymede, and finally Europa. Europa, discovered by Galileo Galilei in 1610, is one of the most likely places in our solar system where primordial extraterrestrial life could exist. An autonomous underwater robot called DEPTHX is being developed with a view to finding life on Jupiter's icy moon. The Deep Phreatic Thermal Explorer (DEPTHX) robot weighs 3,300 pounds and has more than 100 sensors, 36 onboard computers, and 16 thrusters and actuators. Its navigation, control, and decision-making capabilities allow the robot to explore unknown 3D territory, make a map of what it sees, and use that map to return home; it can also identify zones likely to harbor microbial life and autonomously collect microbial samples in an aqueous environment. The robot was successfully tested in one of the world's deepest sinkholes, Sistema Zacatón in northeastern Mexico, where it obtained numerous samples of water and of the gooey biofilm that coated the cenote walls. Water chemistry, including temperature, pH, salinity, conductivity and sulphide concentration, is sampled to identify gradients or values indicating biological activity. More detailed examination of these regions is conducted by computer analysis of video images from two video cameras. The system also includes a pump that circulates water through a flow cell and captures video images from a microscope. Sequences of microscope images are analyzed to detect the motion and frequency of microorganisms, and color images of the wall surface are analyzed to characterize its color and texture. Analysis software compares these attributes to a model of what is normally observed. When image analysis indicates signs of interesting biological activity, the DEPTHX computers direct the robot to collect samples.
This is done with a hydraulically operated robotic arm that can reach six feet beyond the edge of the vehicle. The arm carries a video camera, a tube for collecting water samples, and a unique coring tool for collecting solid samples of algal mat or other growth on the wall. Water samples are collected in five different containers under control of the computer, and the water and solid samples are then returned to the surface for analysis. To allow sufficient development and ground-testing time, the mission is not proposed for launch before the year 2011. This first reactor-powered spacecraft, with its revolutionary capability, would certainly raise new hopes and take the space age to a new level.
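The sampling decision described above, comparing measured water chemistry to a model of what is normally observed and triggering collection on deviation, can be sketched roughly as follows. The baseline values and tolerances are invented for illustration and are not the DEPTHX mission's actual parameters.

```python
# Rough sketch of anomaly-triggered sampling: compare each water-chemistry
# reading to a baseline model and flag large deviations. All numbers are
# hypothetical, not the DEPTHX mission's real parameters.
BASELINE = {            # channel: (expected mean, tolerated deviation)
    "temperature_c": (22.0, 1.5),
    "ph":            (7.0, 0.4),
    "sulphide_ppm":  (0.2, 0.3),
}

def interesting(reading: dict) -> bool:
    """True if any channel deviates from the baseline by more than its
    tolerance, which would direct the robot to collect a sample."""
    return any(abs(reading[k] - mean) > tol
               for k, (mean, tol) in BASELINE.items())

print(interesting({"temperature_c": 22.3, "ph": 7.1, "sulphide_ppm": 0.25}))  # False
print(interesting({"temperature_c": 26.0, "ph": 6.1, "sulphide_ppm": 1.4}))   # True
```

The real system fuses chemistry gradients with microscope and wall-texture image analysis before committing the robotic arm, but the threshold-against-a-model idea is the same.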
Each year, between 20 and 50 million tons of electronic waste is generated globally. Most of it winds up in the developing world.
UPPER TAMAKOSHI
The Upper Tamakoshi Hydroelectric Project (UTKHEP) is a run-of-river project with daily peaking pondage. It will utilize a gross head of 820 m with a design discharge of 44 m3/s to generate a designed output of 309 MW. The project is located on the Tamakoshi River in Lamabagar VDC, Dolakha, Nepal. The Tamakoshi is one of the seven major tributaries of the Sapta Koshi River. The intake site at Lamabagar lies about 90 km east of Kathmandu and about 6 km south of the Nepal-China border. The powerhouse is located at Gongar Gaon in Lamabagar VDC near the confluence of the Tamakoshi River and the Gongar Khola. Of the run-of-river projects investigated in Nepal, UTKHEP is the most attractive due to its low specific investment costs and its limited environmental impacts. The Environmental Impact Assessment (EIA) report for generation has been approved by the Government of Nepal (GoN), while the EIA report for the transmission line is in the final stage of approval. The main construction of the project will commence in 2009 (FY 2065/66) and will be completed by 2013 (FY 2069/70), a construction period of 4.5 years.
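The headline numbers above are consistent with the standard hydropower formula P = ρ g Q H η. The overall efficiency used below is inferred from the quoted figures, not stated in the article.

```python
# Hydropower output: P = rho * g * Q * H * eta (result in MW).
def hydro_power_mw(head_m, discharge_m3s, efficiency):
    rho, g = 1000.0, 9.81          # water density (kg/m3), gravity (m/s2)
    return rho * g * discharge_m3s * head_m * efficiency / 1e6

# 820 m gross head and 44 m3/s design discharge:
print(round(hydro_power_mw(820, 44, 1.0)))    # ~354 MW theoretical maximum
# An overall efficiency around 87% reproduces the 309 MW design output:
print(round(hydro_power_mw(820, 44, 0.873)))  # ~309 MW
```

An overall efficiency in the high eighties is a plausible combined figure for waterways, turbines and generators, which is why the 309 MW rating sits below the 354 MW theoretical value.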
PROJECT FEATURES
A concrete dam with an overflow weir will create an approximately 2 km long daily pondage at a Full Supply Level (FSL) of EL 1985 m. The intake is located on the right bank and is integrated into the dam structure. Two parallel settling basins, each about 250 m long with a cross-sectional area of 192 m2, are provided immediately after the intake structure. A low-pressure headrace tunnel of length 7,170 m and cross-sectional area 29 m2 runs from the basins to the beginning of the pressure shaft, which is about 2,100 m long in total. An underground powerhouse with four Pelton turbines is provided at EL 1165 m. The 2,500 m long tailrace tunnel is designed as a free-surface-flow tunnel. A 47 km long 220 kV double-circuit transmission line from Gongar to the Khimti substation is required to connect UTKHEP power to the Integrated Nepal Power System (INPS) via Khimti. The construction period of the project is 4.5 years and its construction cost is about US$ 392 million (excluding interest during construction).

SPECIAL FEATURES OF THE PROJECT
The special natural features observed in this project are:
- A 300 m high natural dam
- Good geology at the tunnel and powerhouse sites
- Very good minimum flow in the river during the dry season and low flood discharge during the wet season
- Very low sediment in the Tamakoshi River at the intake site
- Minimal environmental effect

NECESSITY OF THE PROJECT

Based on the forecast, it has been estimated that there will be a power deficit of 467 MW in 2013. Even with the completion of the 70 MW Middle Marshyangdi HEP in 2008 and other planned projects such as Chameliya, Kulekhani III and other Independent Power Producer (IPP) projects to be completed by 2012, there will still be a deficit of more than 300 MW in 2013. Therefore, in order to meet the peak demand in 2013, it is necessary to complete the Upper Tamakoshi HEP in 2013.
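The quoted head, discharge and output figures can be cross-checked with the standard hydropower formula P = η·ρ·g·Q·H. The sketch below (Python; the efficiency value is an assumption for a modern Pelton plant, not a figure from the project documents) reproduces an output close to the stated 309 MW:

```python
# Back-of-envelope check of UTKHEP's quoted figures using P = eta * rho * g * Q * H.
# The overall efficiency eta is an assumed value, not from the project documents.
rho = 1000.0   # water density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
Q = 44.0       # design discharge, m^3/s
H = 820.0      # gross head, m
eta = 0.87     # assumed overall efficiency (turbine, generator, and losses)

P_watts = eta * rho * g * Q * H
print(f"Estimated output: {P_watts / 1e6:.0f} MW")  # close to the quoted 309 MW
```

The result lands within a few megawatts of the published 309 MW, suggesting the project figures are internally consistent.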
CORPORATE STRUCTURE

[Organization chart: Upper Tamakoshi Hydropower Limited (UTKHPL) — Board of Directors — Project. Shareholders include employees, EPF contributors, local people, the general public, and representatives from other shareholder groups (after the completion of the project).]
Hydropower potential in the Koshi Basin is identified at about 10,860 MW with Saptakoshi having capacity of 3000 MW.
ENCIPHER 2008
. VOL 08
35
INTRODUCTION

In a broadcast scenario, buffers play a significant role in assuring the quality of digital media as computers and network devices compress, encode, distribute, decompress, and render large amounts of data, very quickly, in real time. In general, the larger the buffer, the better the end-user experience. However, large buffers have one main disadvantage in a live or broadcast-streaming scenario: they cause delays, or latencies. A broadcast delay is the difference in time between the point when live audio and video is encoded and when it is played back, and it is created primarily by the buffers that store digital media data. For example, if the buffers store a total of ten seconds of data, Windows Media Player will show an event occurring ten seconds late. Often a delay of less than twenty seconds is not a problem. However, when timing is important, Windows Media components provide a number of ways to minimize broadcast delay without causing a significant loss of image and sound quality.

BUFFERING STREAMING MEDIA DATA

In general, when describing the size of a buffer, we think in terms of storage space. For example, a word processing program might use 50 kilobytes (KB) to temporarily store a document. When describing a Windows Media buffer, on the other hand, we think in terms of time, because audio and video are temporal media. For example, we say a typical buffer holds five seconds of data. When streaming and playing content, the buffers are constantly changing; new data is continuously being added to the buffer in real time as older, processed data is sent on and deleted from the buffer. The following figure illustrates how data, in a sense, moves through a five-second buffer.

Data enters the top of the buffer at zero seconds and leaves the bottom of the buffer five seconds later. During the five seconds that the data is in the buffer, a program or algorithm can change, copy, delete, rearrange, or store it. For example, the encoder buffers data so that the codec can analyze and compress it. Windows Media stores digital media in buffers for three basic reasons:

- Processing
- Fast Start
- Error correction

PROCESSING

By default, Windows Media Encoder adds a five-second buffer to a session in order to provide the Windows Media Video codec with the optimum amount of data to compress a stream. While playing on-demand content, the buffer goes unnoticed. However, when encoding a live stream, the buffer increases the broadcast delay. The codec needs a buffer because it analyzes video content over a period of time. For example, it analyzes movement or change from one video frame to the next in order to reduce the bit rate of the content as much as possible and produce high-quality images. To compress data at one point, it might use the analysis of data that occurs two seconds later. Therefore, the more data it can analyze, the better it can compress the stream. Reducing the buffer shortens the delay, but can result in lower-quality images.

FAST START

Fast Start reduces start-up delay: the period of time the user must wait for the Player buffer to fill with data when first connecting to a stream. If the buffer is set to hold five seconds of data, the wait is at least that length of time, possibly longer depending on network conditions and the bit rate of the stream. Fast Start causes data to be sent faster than the actual bit rate of a stream when a user first connects in order to quickly fill the buffer. If the user has high-speed network access, buffering time is reduced and the Player begins playing the stream sooner. After the buffer is filled, the bit rate returns to normal.

In order to reduce start-up delay, Fast Start creates a buffer, which in doing so increases the broadcast delay.
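The time-based buffer described here can be sketched as a simple first-in, first-out queue of timestamped chunks. This is a toy model in Python for illustration only, not Windows Media's actual implementation:

```python
# Toy model of a five-second streaming buffer: chunks enter with a timestamp
# and leave once they have been held for the full buffer duration.
from collections import deque

class TimeBuffer:
    def __init__(self, hold_seconds=5.0):
        self.hold = hold_seconds
        self.queue = deque()  # entries are (arrival_time, chunk)

    def push(self, now, chunk):
        """A chunk enters the top of the buffer at time `now`."""
        self.queue.append((now, chunk))

    def pop_ready(self, now):
        """Return all chunks that have been buffered for hold_seconds or more."""
        ready = []
        while self.queue and now - self.queue[0][0] >= self.hold:
            ready.append(self.queue.popleft()[1])
        return ready

buf = TimeBuffer(5.0)
buf.push(0.0, "frame-0")
buf.push(1.0, "frame-1")
print(buf.pop_ready(4.0))   # nothing has aged 5 s yet: []
print(buf.pop_ready(6.0))   # both frames have now aged 5 s or more
```

While a chunk sits in the queue, a processing stage (the codec, for instance) would have the full window of data available to analyze, which is exactly why a larger buffer improves compression at the cost of delay.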
The MP3 standard for audio compression first gained a foothold in college dorm rooms in the late 1990s.
ERROR CORRECTION

If packets do not arrive in time, the Player edits the missing data out of the buffer. There is a small jump in the presentation where the missing data was, but without the buffer there would be a hole in the presentation.

MINIMIZING DELAY

Buffers are an essential part of creating, delivering, and rendering streaming media content, enabling Windows Media components to provide a high-quality end-user experience. However, there are trade-offs: the larger the buffer size, the longer the broadcast delay. By default, the encoder buffers content to enable the codec to optimize compression; the server buffers content to enable Fast Start; and the Player buffer provides smooth playback for the user. Assuming each buffer stores five seconds of data, and taking into account delays that might be added by network devices, the total broadcast delay can be between 15 and 20 seconds. You cannot eliminate the buffers completely; Windows Media components could not work without any buffers. However, by minimizing the buffer sizes using the procedures in the next three sections, you can reduce broadcast delay to approximately six to nine seconds, depending on network conditions. As you reconfigure settings, test playback and watch for problems. For example, when you reduce the encoder buffer, make sure you are comfortable with the quality of the images. The codec has less motion data to work with, so you might notice more video artifacts, and movement might not appear as smooth. If your content does not include fast-motion scenes, you may not even notice any degradation of quality. With a small Player buffer size, playback is more likely to be interrupted by buffering, which is caused by the Player running out of data. Buffering might not be a problem for users with stable, high-speed Internet access. However, users with slow connections may actually need to increase their buffer size. Remember that you are balancing quality against broadcast delay, and by reducing delay you may have to make compromises. If you do not have to minimize delay, use the buffers. To reduce overall broadcast delay, you can reduce the size of the encoder and Player buffers, and disable the Fast Start buffer on your Windows Media server.
Codecs can be broken down into two simple subsets: lossless (ALAC, FLAC, AIFF, WMA Lossless) and lossy (MP3, AAC, WMA).
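The buffering trade-offs described in this article can be put into rough numbers. All of the values below are illustrative assumptions for the sake of arithmetic, not Windows Media defaults:

```python
# Back-of-envelope calculations for streaming-buffer trade-offs.
# All numbers are illustrative assumptions, not Windows Media defaults.

bitrate_kbps = 500          # stream bit rate
buffer_seconds = 5          # Player buffer duration

buffer_kbits = bitrate_kbps * buffer_seconds     # data the buffer must hold
buffer_kbytes = buffer_kbits / 8
print(f"Buffer holds {buffer_kbits} kbits ({buffer_kbytes:.1f} KB)")

# Without Fast Start the buffer fills at the stream bit rate: a 5 s wait.
# With Fast Start delivering at, say, 4x the bit rate, the wait shrinks.
fast_start_factor = 4
startup_delay = buffer_seconds / fast_start_factor
print(f"Start-up delay with {fast_start_factor}x Fast Start: {startup_delay:.2f} s")

# Total broadcast delay: encoder + server + Player buffers, plus network delay.
delays = {"encoder": 5, "server": 5, "player": 5, "network": 2}
print(f"Total broadcast delay: ~{sum(delays.values())} s")
```

With these numbers, a five-second buffer on a 500 kbps stream holds 312.5 KB, Fast Start at four times the bit rate cuts the start-up wait from 5 s to 1.25 s, and the stacked buffers produce a total delay of about 17 s, inside the 15-20 second range the article cites.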
WHAT IS a memristor? The memristor is the fourth basic building block of circuits, joining the resistor, the capacitor, and the inductor, and it exhibits its unique properties primarily at the nanoscale. The name is a concatenation of "memory resistor": memristors are a type of passive circuit element that maintains a relationship between the time integrals of current and voltage across a two-terminal element. Thus, a memristor's resistance varies according to the device's memristance function. The memristor was first proposed in 1971 by Professor Leon Chua, a scientist at the University of California, Berkeley. However, it was Hewlett-Packard's (HP's) Stan Williams who helped develop memristors, and the team at HP has already started work on them.
Set It and Forget It: a memristor remembers whether it is on or off. Shown here: 17 memristors in a row.
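The memory behaviour this article describes can be illustrated with a small simulation of HP's linear ion-drift memristor model. This is a sketch in Python with assumed, illustrative parameter values, not a model of HP's actual devices:

```python
# Minimal simulation of a linear ion-drift memristor model: the memristance
# depends on the state variable w (doped-region width), which integrates the
# current -- so the device "remembers" how much charge has passed through it.
import math

R_on, R_off = 100.0, 16000.0   # on/off resistances, ohms (illustrative)
D = 10e-9                      # device thickness, m
mu = 1e-14                     # dopant mobility, m^2/(s*V)
w = 0.1 * D                    # initial doped-region width

dt = 1e-5
for step in range(100000):      # simulate 1 s of a 1 Hz sinusoidal drive
    t = step * dt
    v = 1.0 * math.sin(2 * math.pi * 1.0 * t)
    M = R_on * (w / D) + R_off * (1 - w / D)    # current memristance
    i = v / M
    w += mu * R_on / D * i * dt                 # linear drift of the state
    w = min(max(w, 0.0), D)                     # clamp to physical bounds

M_final = R_on * (w / D) + R_off * (1 - w / D)
print(f"Final memristance: {M_final:.0f} ohms")
```

The key point is in the last two lines of the loop: the state w, and hence the resistance, changes only while current flows, so when the drive voltage is removed the device simply holds its last resistance value.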
Memristors are so called because they have the ability to remember the amount of charge that has flowed through them after the power has been switched off. Isn't it incredible? Just imagine: this unique property would let a computer effectively avoid a start-up process. As memory would always store its most recent state, computers could be instantly ready for use. The breakthrough would also reduce power consumption by removing the need to reload data, and could even lead to human-like learning processes. Today, most PCs use dynamic random access memory (DRAM), which loses data when the power is turned off. But a computer built with memristors could allow PCs that start up instantly, laptops that retain sessions after the battery dies, or mobile phones that last for weeks without needing a charge. If you turn on your computer, it will come up instantly, exactly where it was when you turned it off. This new circuit element also solves many problems with circuitry today, since it improves in performance as you scale it down to smaller and smaller sizes. Memristors will enable very small nanoscale devices to be made without generating all the excess heat that scaling down transistors is causing today. Let's hope memristors can help the memory revolution continue! Now virtually every electronics textbook will have to be revised to include the memristor and the new paradigm it represents for electronic circuit theory.
The ongoing lighting research at the Department of Electrical and Electronics Engineering is based on WLEDs as a new technology that can replace conventional lighting equipment. The research is mainly focused on the characterization of WLEDs and on developing a technical standard for WLEDs that can be used in Nepal for different lighting projects. The research also covers the development of a test instrument for measuring WLED parameters (I-V characteristic, linear and angular illumination pattern, spectral power distribution). The research was initiated as part of the Asia Link project ENLIGHTEN, a collaboration of Helsinki University of Technology (HUT), Finland, and Vilnius University (VU), Lithuania, with Kathmandu University (KU), Nepal.
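The I-V characteristic that the test instrument measures can be sketched with the Shockley diode equation. The Python snippet below is purely illustrative: the saturation current and ideality factor are assumed values chosen to give a plausible white-LED curve, not measurements of any particular WLED:

```python
# Illustrative Shockley diode model of an LED's I-V characteristic, the kind
# of curve a test instrument would trace. Parameter values are assumptions.
import math

def diode_current(v, i_s=5e-15, n=4.0, v_t=0.02585):
    """Shockley equation: I = I_s * (exp(V / (n * V_t)) - 1), in amperes."""
    return i_s * (math.exp(v / (n * v_t)) - 1)

for v in [2.4, 2.6, 2.8, 3.0]:
    print(f"V = {v:.1f} V -> I = {diode_current(v) * 1e3:.3f} mA")
```

The exponential shape is the point: current rises very steeply past the forward-voltage knee, which is why real LED drivers regulate current rather than voltage.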
LED lights generate very little heat, so they are cool to the touch and can be left on for hours without incident or consequence if touched.
Irsha Bhattarai
The transfer of information (data, voice, control signals, etc.) through the existing power cables is referred to as Power Line Communication (PLC). It is a new and competitive technology, supported by utilities and manufacturers around the globe. Broadband application of PLC refers to high-data-rate applications such as internet over power lines (e.g. data rates of 3 Mbps uploading and up to 200 Mbps downloading), while its narrowband applications refer to low-data-rate applications, useful in the control and telemetry of electrical equipment like switches, meters and domestic appliances, operating over a frequency range of 10 kHz-500 kHz. While PLC avoids the need to install extra wiring or cables in order to transfer information, its hostile and noisy environment presents some challenging signal-conditioning tasks for a system engineer. The impedance and attenuation characteristics of power lines are both time-varying and frequency-varying. The power line is also an inherently noisy environment: every time a device turns on or off, it introduces noise in the form of short impulses, and energy-saving devices often introduce noisy harmonics into the line. The system must be designed to deal with all such naturally occurring disruptions. At Kathmandu University, we have been dealing with narrowband applications of PLC. In the past few years, remote control and monitoring, and point-to-point data and voice communication, have been successfully tested in the research laboratory. The current research work is mainly focused on data transmission through the power line from a computer to a terminal device. In addition, it involves a new concept of indoor visible-light communication, which refers to the use of visible light as a carrier for information signals. After data is correctly received from the power line, it can be fed to WLED lamps.
These lamps, used for lighting purpose can also be used to transfer information in a wireless link to other terminal devices utilizing optical sensors, and then suitably displayed in a display device, like the LCD. The waveforms and simplified block diagram of the system are presented below.
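As a complement to the block diagram, the wireless leg of such a link can be sketched as simple on-off keying (OOK), where each data bit switches the LED on or off for one symbol period and the receiver thresholds the sensed light level. This Python sketch is purely illustrative; the actual KU research system is not described at this level of detail in the article:

```python
# Toy on-off keying (OOK) visible-light link: LED drive samples on the
# transmit side, threshold detection of the photodetector signal on receive.
def ook_modulate(bits, samples_per_bit=4, high=1.0, low=0.0):
    """LED drive samples: 'high' brightness for a 1 bit, 'low' for a 0 bit."""
    return [high if b else low for b in bits for _ in range(samples_per_bit)]

def ook_demodulate(samples, samples_per_bit=4, threshold=0.5):
    """Average each symbol period of sensed light and compare to a threshold."""
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        sym = samples[i:i + samples_per_bit]
        bits.append(1 if sum(sym) / len(sym) > threshold else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
light = ook_modulate(data)
# add a little deterministic "ambient light" disturbance at the photodetector
noisy = [s + 0.1 * ((i * 7919) % 5 - 2) / 2 for i, s in enumerate(light)]
recovered = ook_demodulate(noisy)
print(recovered == data)  # True for this small disturbance level
```

Averaging over the symbol period before thresholding is a crude matched filter: it is what lets the link tolerate the kind of small ambient-light and impulse disturbances the article mentions.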
White LEDs offer advantageous properties such as high brightness, low power consumption and minimal heat generation compared to incandescent bulbs, and are now being regarded as the next generation of lighting. The concept of using LEDs for lighting as well as communications yields many interesting applications. Indoor optical communication also exhibits several advantages over Wi-Fi and IR. White-LED radiation is not subject to spectrum licensing regulations because it does not cause any electromagnetic interference, as opposed to RF communication systems. Shadowing effects are also minimized by distributing LEDs across the room. In addition, LEDs are cheaper sources compared to the lasers used in IR. Power line communication integrated with visible-light communication can provide very high data rate communications access for indoor networking. The combination of these two techniques has perhaps unlocked a new dimension in the last-mile system. Ms. Bhattarai is a Research Fellow at Kathmandu University - Happy House Foundation.
1997 saw the first test of bidirectional data signal transmission over the electrical supply network.
Sudoku
[Sudoku puzzle grid]
Brain Teaser: Three men decided to split the cost of a hotel room. The hotel manager gave them a price of $30. The men split the bill evenly, each paying $10, and retired to their room. However, the manager realized that it was a Wednesday night, which meant the hotel had a special: rooms were only $25. He had overcharged them $5! He promptly called the bellboy, gave him five one-dollar bills and told him to return them to the men. When the bellboy explained the situation to the men, they were so pleased at the honesty of the establishment that they promptly tipped the bellboy $2 of the $5 he had returned, and each kept $1 for himself. Problem: Each of the three men ended up paying $9 (their original $10, minus $1 back), totalling $27, plus $2 for the bellboy makes $29. Where did the extra dollar go?
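Before reading the solution, the accounting can be checked in a few lines by tracking the actual money flows rather than the puzzle's misleading sum:

```python
# Toy check of the hotel-room accounting: follow where each dollar actually went.
room_price = 25         # what the hotel actually kept
tip = 2                 # what the bellboy kept
refund_per_man = 1      # what each man got back

paid_per_man = 10 - refund_per_man          # each man is out $9
total_paid = 3 * paid_per_man               # $27 in total
print(total_paid == room_price + tip)       # True: $27 = $25 room + $2 tip
print(total_paid + 3 * refund_per_man)      # 30 -- the original sum is intact
```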
Solution: The faulty reasoning lies in the addition at the end. 3 x $9 equals $27, but the $2 tip is included in that $27, so it makes no sense to add the $2 to the $27 to make $29. The men paid $25 for the hotel room and $2 for the tip ($27), and then got $1 back each, accounting for the original $30.

HUMOR

Dark Conspiracy Involving Electrical Power Companies Surfaces

For years the electrical utility companies have led the public to believe they were in business to supply electricity to the consumer, a service for which they charge a substantial rate. The recent accidental acquisition of secret records from a well-known power company has led to a massive research campaign which positively explodes several myths and exposes the massive hoax which has been perpetrated upon the public by the power companies. The most common hoax promoted the false concept that light bulbs emit light; in actuality, these 'light' bulbs absorb DARK, which is then transported back to the power generation stations via wire networks. A more descriptive name has now been coined; the new scientific name for the device is DARKSUCKER. Here we introduce a brief synopsis of the darksucker theory, which proves the existence of dark and establishes the fact that dark has great mass, and further, that the dark particle (the anti-photon) is the fastest known particle in the universe. Apparently, even the celebrated Dr. Albert Einstein did not suspect the truth: that just as COLD is the absence of HEAT, LIGHT is actually the ABSENCE of DARK. Scientists have now proven that light does not really exist!

The basis of the darksucker theory is that electric light bulbs suck dark. Take, for example, the darksuckers in the room where you are right now. There is much less dark right next to the darksuckers than there is elsewhere, demonstrating their limited range. The larger the darksucker, the greater its capacity to suck dark. Darksuckers in a parking lot or on a football field have a much greater capacity than the ones used in the home, for example. As with all man-made devices, darksuckers have a finite lifetime, caused by the fact that they are not 100% efficient at transmitting collected dark back to the power company via the wires from your home, so dark builds up slowly within the device. Once they are full of accumulated dark, they can no longer suck. This condition can be observed by looking for the black spot on a full darksucker when it has reached maximum capacity of untransmitted dark; you have surely noticed that dark completely surrounds a full darksucker because it no longer has the capacity to suck any dark at all.
HUMOR
VIVA Subject: Electrical Engineering
Examiner: Why is a thicker conductor necessary to carry a current in AC as compared to DC? Candidate: An AC current goes up and down (drawing a sinusoid) and requires more space inside the wire, so the wire has to be thicker. Examiner: Why does a capacitor block DC but allow AC to pass through? Student: See, a capacitor is like this ---| |--- , OK. DC comes straight, like this -------, and the capacitor stops it. But AC goes UP DOWN UP DOWN and jumps right over the capacitor! Examiner: What is a step-up transformer? Student: A transformer that is put on top of electric poles. Examiner (smiling): And then what is a step-down transformer? Student (hesitantly): Uh - a transformer that is put in the basement or in a pit? Examiner (pouncing): Then what do you call a transformer that is installed on the ground? (Student knows he is caught -- can't answer) Examiner (impatiently): Well? Student (triumphantly): A stepless transformer, sir!
was no longer attached to the frog, which was dead anyway. Galvani's discovery led to enormous advances in the field of amphibian medicine. Today, skilled veterinary surgeons can take a frog that has been seriously injured or killed, implant pieces of metal in its muscles, and watch it hop back into the pond just like a normal frog, except for the fact that it sinks like a stone. But the greatest Electrical Pioneer of them all was Thomas Edison, who was a brilliant inventor despite the fact that he had little formal education and lived in New Jersey. Edison's first major invention, in 1877, was the phonograph, which could soon be found in thousands of American homes, where it basically sat until 1923, when the record was invented. But Edison's greatest achievement came in 1879, when he invented the electric company. Edison's design was a brilliant adaptation of the simple electrical circuit: the electric company sends electricity through a wire to a customer, then immediately gets the electricity back through another wire, then (this is the brilliant part) sends it right back to the customer again. This means that an electric company can sell a customer the same batch of electricity thousands of times a day and never get caught, since very few customers take the time to examine their electricity closely. In fact, the last year any new electricity was generated in the United States was 1937; the electric companies have been merely re-selling it ever since, which is why they have so much free time to apply for rate increases. Today, thanks to men like Edison and Franklin, and frogs like Galvani's, we receive almost unlimited benefits from electricity. For example, in the past decade scientists have developed the laser, an electronic appliance so powerful that it can vaporize a bulldozer 2000 yards away, yet so precise that doctors can use it to perform delicate operations on the human eyeball, provided they remember to change the power setting from Bulldozer to Eyeball.
INTRODUCTION

IN RECENT years, one of the growing sectors of the communications industry has been wireless communication, providing high-speed, better-quality information exchange between mobile devices. Some of the major applications of wireless communications include internet-based mobile communications, videoconferencing, distance learning, wireless home appliances, etc. [1]. However, supporting these applications using wireless techniques poses a significant technical challenge. From a technical point of view, the vital challenge faced in
This article will present an overview of Multiple-Input Multiple-Output (MIMO) system as an emerging technique used in wireless communication. It will put forward some background research leading to the discovery of the enormous potential of MIMO. It will then acquaint the reader with the principles of MIMO along with the MIMO equation and theoretic capacity, which will depict MIMO as an extraordinary bandwidth-efficient approach to wireless communication.
wireless communications lies in the physical properties of wireless channels. In addition to all these challenges, there is an increasing demand for higher data rates, better quality service, and higher network capacity. Multiple Input Multiple Output (MIMO) technology promises a cost-effective way to provide these capabilities. A MIMO system comprises a wireless communication link with multiple antenna elements in both transmitter and receiver. MIMO takes advantage of the multipath scenario by sending a single transmission from the transmitter antenna array to bounce along multiple paths to a receiver. Transmitting data on multiple signal paths increases the amount of information transferred in a system. [2]. Thus the system is able to handle more information and at a faster rate than a single-input single-output (SISO) scenario with a single path.
MIMO was conceived in the early 1970s by researchers at Bell Laboratories while they were trying to address the bandwidth limitations that signal interference caused in large, high-capacity cables. However, it was not thought to be practical in that era because of the high expense of generating the processing power necessary to handle MIMO signals. Later, with the advancement of technology, cost reductions in signal-processing schemes and the need to meet increasing demands, researchers were compelled to reconsider MIMO for wireless systems [2]. Multiple-input multiple-output (MIMO) architecture has emerged as a popular solution to the incessant quest for increased capacity in modern wireless communication systems. It promises enormous capacity with a marginal increase in cost and complexity. This technology is now poised to penetrate large-scale commercial wireless products and networks such as broadband wireless access systems, wireless local area networks (WLAN), third-generation (3G) networks and beyond [3]. A MIMO system comprises a wireless communication link with multiple antenna elements at both transmitter and receiver, as illustrated in Figure 1. A vector of signals is transmitted simultaneously from the transmitter antenna array and then travels through the wireless channel. These signals are received at the receiver antenna array, which combines them in such a way that the quality of the communication, in terms of BER and data rate, is significantly improved for each MIMO user. MIMO systems are based on a space-time signal processing architecture where time is complemented
with space or distance, which is obtained from the spatial distribution of antennas at the transmitter as well as the receiver. Space-time processing in MIMO systems basically increases data rate (spatial multiplexing) and link reliability (space-time coding).

Figure 1: A MIMO Communication System Model
or in matrix form

r = Hs + n ...................................................(1)

where H is the channel matrix of size M x N, r is the M x 1 received signal vector, s is the N x 1 transmitted signal vector, and n is an M x 1 additive white Gaussian noise vector with zero mean and variance σ². The channel matrix H is the factor by which the signal is amplified and is also known as the channel coefficient. The element hmn of H represents the complex gain between transmitter antenna n and receiver antenna m.

MIMO CAPACITY

MIMO systems promise substantial improvements over conventional systems in both quality-of-service (QoS) and transfer rate. In addition to these benefits, it has been shown that MIMO offers absolute gains in terms of capacity bounds. Fundamental results comparing single-input single-output (SISO), single-input multiple-output (SIMO), multiple-input single-output (MISO) and MIMO capacities show that capacity grows linearly with m = min(M, N) in MIMO rather than logarithmically as in the other configurations:

C = log2[det(IM + (ρ/N) HH†)] ................(2)

where (·)† denotes the transpose-conjugate, ρ is the SNR at each receive antenna, H is the M x N channel matrix, and IM is the M x M identity matrix. It should be noted here that N equal-power, uncorrelated sources are assumed at the transmitter.

CONCLUSION

This article has reviewed the major features of MIMO links for use in future wireless networks and the great capacity gains that can be realized from them. The success of MIMO algorithm integration into commercial standards such as 3G, WLAN, and beyond will help meet the increasing demand for higher data rates, better quality of service and higher network capacity.

REFERENCES

[1] A. Goldsmith, Wireless Communications, 1st ed. Cambridge: Cambridge University Press, 2005.
[2] G. Lawton, "Is MIMO the future of wireless communications?" IEEE Computer, vol. 37, no. 7, pp. 20-22, July 2004.
[3] X. Wang and H. Poor, Wireless Communication Systems: Advanced Techniques for Signal Reception, 1st ed. New York: Prentice Hall PTR, 2003.
[4] G. D. Golden, G. J. Foschini, R. A. Valenzuela, and P. W. Wolniansky, "Detection algorithm and initial laboratory results using the V-BLAST space-time communication architecture," Electron. Lett., vol. 35, no. 1, pp. 14-15, Jan. 1999.
[5] D. Gesbert, M. Shafi, D. S. Shiu, P. Smith, and A. Naguib, "From theory to practice: An overview of MIMO space-time coded wireless systems," IEEE Journal on Selected Areas in Communications, vol. 21, no. 3, pp. 281-302, Apr. 2003.
[6] J. H. Winters, "Optimum combining in digital mobile radio with co-channel interference," IEEE Journal on Selected Areas in Communications, vol. 2, no. 4, pp. 528-539, July 1984.
[7] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas," Wireless Pers. Commun., vol. 6, pp. 311-335, Mar. 1998.

Mr. Khatiwoda is a Lecturer at the Department of Electrical and Electronic Engineering.
In addition to talking for nearly two trillion minutes in the first half of 2007, wireless users were sending more than 28 billion messages a month!
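The MIMO capacity formula discussed in this article, C = log2 det(IM + (ρ/N) HH†), can be evaluated numerically. The Python sketch below (antenna counts, SNR and the Rayleigh-fading channel model are illustrative assumptions) compares a 4x4 MIMO channel against the SISO capacity log2(1 + ρ):

```python
# Numerical evaluation of the MIMO capacity formula for a random
# Rayleigh-fading channel, compared against SISO. Illustrative values only.
import numpy as np

def mimo_capacity(H, snr):
    """Capacity in bit/s/Hz for channel matrix H (M x N) at linear SNR."""
    M, N = H.shape
    A = np.eye(M) + (snr / N) * (H @ H.conj().T)
    # det(I + (snr/N) H H^dagger) is real and >= 1; take log2 of it
    return float(np.log2(np.linalg.det(A).real))

rng = np.random.default_rng(0)
M = N = 4
# Rayleigh fading: i.i.d. complex Gaussian entries with unit average power
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

snr = 10.0  # 10 dB expressed as a linear ratio
c_mimo = mimo_capacity(H, snr)
c_siso = float(np.log2(1 + snr))
print(f"SISO: {c_siso:.2f} bit/s/Hz, 4x4 MIMO: {c_mimo:.2f} bit/s/Hz")
```

For typical channel realizations the 4x4 capacity comes out several times the SISO figure, illustrating the (roughly) linear growth with min(M, N) that the article describes.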
HISTORY OF ELECTRICITY
IF YOU asked most people who discovered electricity, they would answer Benjamin Franklin. On the surface this is partially correct, and I certainly wouldn't want to take anything away from Mr. Franklin, for he was truly brilliant. That said, the fact is that evidence has been uncovered showing there were batteries over 2000 years ago. A clay pot, discovered in 1936, sits in the Baghdad museum. This pot contained a copper plate and tin alloy, and had an iron rod sealed with asphalt. The iron showed signs of acidic corrosion. By filling this pot with an acidic solution such as vinegar, an electric current could be produced. What it was used for is not known, although some speculation would include some form of medicinal value, though no one knows for sure. In any event, it was forgotten by humankind for well over 1000 years.

Until the 1600s, there was no real experimentation with electricity. Up to this point static electricity was played with, and it could be produced, but nobody really knew what it was or understood it. By rubbing amber, even as early as ancient Greece, it could be made to attract fibers or dust. In the late 1600s a man named Otto von Guericke of Germany is credited with doing some of the first experiments with what we now know is static electricity. Guericke created a machine that could produce static electricity, which enabled scientists to experiment with this new-found electricity. He also noticed the attribute of electromagnetism.
Michael Faraday discovered magnetic induction. The work Faraday did in his experiments is probably among the most important, and it led to many advances in the understanding of electricity. His work led to the creation of the generator, enabling us to make electricity. If there is a father of electricity, I think Michael Faraday is it.
Electric Firsts
1889 - Electric Streetcar in Seattle
1902 - Electric Flashlight
1903 - Electric Iron
1906 - Radios with Tuners
1907 - Domestic Vacuum Cleaner
1909 - Electric Toaster
1913 - Electric Refrigerator
1919 - Electric Pop-Up Toaster
1923 - Television
During the whole of his life, Edison received only three months of formal schooling, and was dismissed from school as being retarded.
Talking about the university, what changes do you find in students and faculty between then and now? I am not in a position to make comments about the whole university. Talking about our department, the faculty members are as friendly and helpful as they were before. The only shortcoming that I can see is that they are always young. The School of Engineering in particular has not been able to retain some of the very good faculty members. Otherwise the department would have a very strong team by now. Perhaps this is an inherent trait of a young and growing university. If you look at the brighter side, then those faculty members who have gone for further studies will, hopefully, return to KU after the completion of their studies and will make our department very strong. Regarding our students, they are well disciplined, as always. To compare the students then and now, the students in the past didn't have as many resources available to them as the students do now. The students now have easy access to electronic media, I mean the internet, which can be quite handy and resourceful. Frankly speaking, the students then were really absorbed in the field, because they had to work hard to get the opportunity to study engineering; now, although some are really interested, I don't find such zeal in every one of them. Most of the students now take their opportunity to study engineering for granted. How do you compare the students of KU with the students of other reputed institutes? To be frank with you, KU has not been able to become the first choice for excellent students looking for studying
Starting off, tell us what your childhood was like. And how did you manage to keep up with your studies and become an engineer? Actually, I was brought to Kathmandu at an early age for my studies, but I was born in Ghelanchowk village in Manang district. I came to Kathmandu in 2040. I went to Ananda Kuti Vidhya Peeth School and passed my SLC in 2048. On passing my SLC, I joined Amrit Science Campus. I then joined IOE Pulchowk for a week or so. In the meantime I got an Indian Government seat to study in India through the Indian Embassy, and I completed my Bachelor's degree in engineering from RIT, Jamshedpur in 1999. Why did you choose the academic field and not others? I had a desire to become an academician from my early childhood. I thought of joining the British army, but my propensity towards academia was unmatched. The fact that my parents and grandparents were quite educated must have had some role in shaping my childhood dream, which I am still pursuing. How did you end up at Kathmandu University? I worked at Swet Bhairav Power Supply Pvt. Ltd. for 6 months after completing my bachelor's degree. I used to voluntarily teach school children during my free time. I really enjoyed disseminating my knowledge to others. It was at that point that I decided to become an academician. Then I heard about the vacancy at Kathmandu University and I applied. And I've been with you since August 2000.
ENCIPHER 2008
. VOL 08
45
INTERVIEW
engineering yet. There are many reasons for that. But I sincerely consider those who chose KU very fortunate. Our students find themselves in a very unique environment conducive to creative thinking, beyond the usual, or rather conventional, teaching and learning. Through their continued efforts, self-motivation and project works, our students mould themselves into someone very competitive in the market. So our students are second to none when they graduate. They are even better in some aspects of practical application. Our students have maintained the good reputation of this institution not only in Nepal but also abroad.

Do you feel that the project and lab works will help us in the practical fields?
Project and lab works do help you, in an indirect way. They help to clarify and consolidate your theoretical knowledge and make you better prepared for any kind of future challenge. It's hard to find the direct application, though, because it is impossible to simulate all the situations you may face in the practical fields. There are many diverse areas you may go to work in after your graduation. Moreover, the technology keeps on changing. But the fundamental things remain the same, and it's these fundamentals that will help you succeed in any field of your choice. If your foundation is strong, I can't imagine what could ever go wrong.

KU is well known for its strong project works. What do you think is more important: the core theory or project works?
As an engineering student, strength in both theoretical and practical knowledge is essential. Although the students here have strong practical know-how, I feel that there is plenty of room to strengthen their theoretical knowledge, because theory is as important as practice. The theory will help you a lot in research work. To strengthen yourself in both domains, your knowledge shouldn't be limited to what the teachers teach. You should have the desire to learn more and more on your own as well.
On to your Master's degree: you received an MS by Research degree from this university. How does your Master's by Research degree differ from a regular ME degree? What was your research? What are your future plans?
The Master's by Research focuses more on research work and has fewer regular subjects to study. It requires one to go into the depth of a particular area of study, whereas in the regular ME, preference is given to studying many subjects with limited depth. My thesis title was "Three Phase Self Excited Induction Generator with a Single Phase Load". This generator is widely used in micro-hydro systems in Nepal. As an academician, I want to do a PhD.

GPA has always been a problem for our students. How does the GPA affect us in future?
Yes, I am aware that the students in our department are getting comparatively lower grades than others. Once you get a job, the GPA will not be of much importance. All that matters is how much you can deliver. In many countries, companies offer jobs based on the student's university rather than the grades. Even in Nepal, local companies have started valuing our graduates. But the grades are important for further studies abroad, especially if you are competing for financial aid. That is where our graduates may suffer due to their lower grades compared to students from other institutes or other departments.

You've seen a lot of our graduates over the years. What do you say about the availability of jobs for our graduates?
Certainly it isn't easy to get a proper job of utmost satisfaction. Getting a job is certainly competitive. The initial phase of getting a job is very difficult; I too have suffered from it. But I haven't heard of any of our graduates remaining unemployed for more than a few months. Either they go abroad for further studies or they settle into a job.

Most of the engineering colleges have internships in their final year. Wouldn't it be better if it were so in our university as well?
Of course it would be better, but this can't be done by the University alone. The industries where you can work as a trainee are limited. The situation is worse in the case of electrical and electronics engineering. And it's hard to guarantee a place for all the students too. An industry might take students for two or three years, but what after that? It might not take students forever. But with the efforts of students like you, you can make it possible. In the college where I studied, the university could guarantee internships for only the top 10 students; the rest had to find a place on their own. We can start an internship program, but we need help from all the sectors: industries, the University and the students.

Is it necessary to pursue higher studies to work, or will a Bachelor's degree be sufficient? What would be better for further studies, Nepal or abroad?
To work in industry, it's not necessary to have further study. You can if you like. But if you want to get into the academic field, higher study is important. Go anywhere to get higher education (let it be Nepal or abroad), but come back to serve your country. Love your country!
Will it be better if we serve our country after we become graduates, or can we do something while we're studying? What have you done for our country as an engineer? What can we do?
Your question seems to be interestingly long. Let me answer it one by one. It's not that undergraduate students can't do anything at the moment. A lot of students get involved in some way. But the contribution will definitely be more concrete after you graduate. In our time, there were limited engineering colleges in Nepal where we could get a quality education. Many aspiring students had to go abroad to study engineering. I myself studied in India. After completing my bachelor's degree I came back to Nepal and have since devoted myself to imparting engineering knowledge to students. In that way I have served my country by helping it produce quality manpower for its future development. I'm satisfied with what I'm doing. My contribution in educating my students is by far the most important contribution I have made. There are many things you can do to serve your country; it is not necessarily limited to the engineering field.

Nepal has been touted as having one of the highest per capita hydropower potentials, at a staggering 43 GW. What are your views on the inability to harness such enormous energy?
We should have harnessed more electricity than we are producing today. But this is not entirely an engineering issue; it is more of a political issue. Important and major hydropower projects haven't started due to the laxity of the government. Now the situation might change, and hopefully the projects will go as planned.

The government's policy strongly favors micro hydro projects. What's your take on micro hydro projects versus major hydro projects?
Thanks to the combined effort of the government and the private sector, Nepal has become one of the world's fastest growing nations in developing and implementing micro hydro systems.
I think the micro and major hydro projects should go simultaneously for the development of hydropower. The micro-hydro projects are helping a lot in rural electrification. Small villages in remote areas cannot wait for years or decades for the national grid to reach them. On the other hand, micro hydro alone cannot fulfill the bulk energy demand of urban and industrial areas. So the micro and major hydro projects should go side by side.

Finally, we've come to the end of this wonderful interview. Is there any suggestion you'd like to give to your students?
Well, this has been a wonderful experience for me. My dear students, whatever you do, always remember that it's good to be an important person, but it's even more important to be a good person.
TECHNOLOGY
If youre frustrated by frequently losing battery power in your laptop computer, digital camera or portable music player, then take heart: A better source of juice is in the works. Chemists at Arizona State University in Tempe have created a tiny hydrogen-gas generator that they say can be developed into a compact fuel cell package that can power these and other electronic devices -- from three to five times longer than conventional batteries of the same size and weight. The generator uses a special solution containing borohydride, an alkaline compound that has an unusually high capacity for storing hydrogen, a key element that is used by fuel cells to generate electricity. In laboratory studies, a prototype fuel cell made from this generator was used to provide sustained power to light bulbs, radios and DVD players, the researchers say.
One semiconductor plant can require enough electricity to power a city of 60,000 and several million gallons of water a day.
A variety of hydrogen sources have been considered for use in fuel cells, including metal hydride sponges and liquids such as gasoline, methanol, ethanol and even vegetable oil. Recently, borohydride has shown promise as a safe, energy-dense hydrogen storage solution. Unlike the other fuel sources, borohydride works at room temperature and does not require high temperatures in order to liberate hydrogen, Gervasio says. Gervasio and his associates are developing novel chemical additives to increase the useful hydrogen storage capacity of the borohydride solution by as much as two to three times that of the simple aqueous sodium borohydride solutions currently being explored for fuel cell development. These additives prevent the solution from solidifying, which could otherwise clog or damage the hydrogen generator and cause it to fail. In developing the prototype fuel cell system, the researchers housed the solution in a tiny generator containing a ruthenium metal catalyst. In the presence of the catalyst, the borohydride in the water-based solution reacts with water to form hydrogen gas. The gas leaves the hydrogen generator by moving across a special membrane separating the generator from the fuel cell component. The hydrogen gas then combines with oxygen inside the fuel cell to generate water and electricity, which can then be used to power the portable electronic device. Commercialization of a practical version of this fuel cell could take as many as three to five years, Gervasio says.
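The underlying chemistry is the well-known hydrolysis of sodium borohydride: NaBH4 + 2 H2O -> NaBO2 + 4 H2. A quick back-of-the-envelope sketch shows why the compound is attractive as a hydrogen store; the figures below are textbook stoichiometry, not data from the ASU work:

```python
# Hydrolysis of sodium borohydride:  NaBH4 + 2 H2O -> NaBO2 + 4 H2
# Illustrative stoichiometry only; the actual additive chemistry
# developed at Arizona State is not described in detail here.

M_Na, M_B, M_H, M_O = 22.99, 10.81, 1.008, 16.00   # molar masses, g/mol

m_nabh4 = M_Na + M_B + 4 * M_H      # ~37.83 g/mol
m_h2o = 2 * M_H + M_O               # ~18.02 g/mol
m_h2 = 2 * M_H                      # ~2.016 g/mol

# grams of H2 released per gram of NaBH4 consumed
h2_per_g_nabh4 = 4 * m_h2 / m_nabh4

# effective storage fraction when the reaction water is counted too
h2_weight_fraction = 4 * m_h2 / (m_nabh4 + 2 * m_h2o)

print(f"H2 yield: {h2_per_g_nabh4:.3f} g per g NaBH4")
print(f"H2 weight fraction incl. water: {h2_weight_fraction:.1%}")
```

Note that half of the hydrogen atoms released actually come from the water, which is what pushes the effective storage density of the solution so high.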
About 10,000,000,000,000,000,000 transistors are shipped each year, or about 100 times the number of ants estimated to be on Earth.
TECHNOLOGY TIDBITS
ELECTRICITY FROM THE EXHAUST PIPE
Researchers are working on a thermoelectric generator that converts the heat from car exhaust fumes into electricity. The module feeds the energy into the car's electronic systems. This cuts fuel consumption and helps reduce the CO2 emissions from motor vehicles. In an age of dwindling natural resources, energy saving is the order of the day. However, many technical processes make use of less than one-third of the energy they consume. This is particularly true of automobiles, where two-thirds of the fuel is emitted unused in the form of heat: about 30 percent is lost through the engine block, and a further 30 to 35 percent as exhaust fumes. Scientists all over the world are developing ways of harnessing this unused waste heat from cars, machines and power stations in order to lower their fuel consumption. There is clearly a great need for thermoelectric generators, or TEGs for short. These devices convert heat into electrical energy by making use of a temperature gradient.
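As a rough illustration of the physics, a thermoelectric module behaves like a voltage source (the Seebeck voltage S·ΔT) in series with its own internal resistance, so it delivers at most P = (S·ΔT)²/4R into a matched load. The numbers in this sketch are assumptions chosen purely for illustration, not figures from the research described above:

```python
# Back-of-the-envelope output of a thermoelectric generator (TEG).
# For a matched load, a module with effective Seebeck coefficient S
# and internal resistance R delivers P = (S * dT)^2 / (4 * R).
# All values below are illustrative assumptions.

S = 0.05     # V/K, effective Seebeck coefficient of a whole module
R = 2.0      # ohm, module internal resistance
dT = 250.0   # K, exhaust-gas side vs. coolant side

V_open = S * dT                 # open-circuit voltage
P_max = V_open ** 2 / (4 * R)   # power into a matched load

print(f"Open-circuit voltage: {V_open:.1f} V")
print(f"Matched-load power:   {P_max:.1f} W")
```

Even these modest illustrative numbers suggest why recovering a few tens of watts from exhaust heat is enough to offload a car's alternator and save fuel.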
On June 4, 2008, on the occasion of World Environment Day at the CV Raman Auditorium, Kathmandu University, the enthusiastic young political leader Gagan Thapa said, "We do have two choices, being the new generation of a new Nepal: one, leave the country, calling it the worst place to live and build a professional life; two, stay in this 'worst' country and make it better." Do we really want to leave this country, or do we want to make it a better place to live? Think for yourself! We don't have electricity; we don't have job opportunities; we don't have political stability; but we do have hope. We can hope that one day we will have a broadband communication highway across the whole of Nepal, from east to west and north to south.
People in urban areas enjoy the facilities of new technology, but people in remote villages still lack them. The use of communication technology, be it the telephone or the internet, can help people in rural areas find out about market prices and sell their products at a better price. It can also overcome traditional barriers to better education by making books available online and opening the door to e-learning. In a developing country like ours, it is very important to have a communication highway so as to provide:
- Telemedicine services from city hospitals to rural clinics
- Remittance services in villages to help people working abroad send money home
- Credit card acceptance services for tourists on the trekking routes
- Local e-commerce services, through e-bulletin boards, to help villagers sell their products
- Voice over Internet Protocol (VoIP) services for cheaper communication

It would also help meet some of the Millennium Development Goals:
- Telephone lines and cellular subscribers per 100 population
- Personal computers in use per 100 population
- Internet users per 100 population

With the east-west optical fiber link, many urban areas will enjoy broadband communication facilities, but given the country's geographical structure, Wi-Fi technology best suits the Himalayan region for a wireless broadband highway, so that the remote villages of Nepal can enjoy communication facilities. With this vision, Ramon Magsaysay Award winner Mahabir Pun of Myagdi initiated the Nepal Wireless Networking Project and has connected several villages in Myagdi with Wi-Fi links. To extend this project across the whole of Nepal, he has come up with the idea of "one dollar a month" to raise a fund for establishing links in the remote villages of Nepal. With 30,000 supporters in this fundraising campaign, 24 relay stations can be built to serve 80 villages and 70,000 people in Myagdi, Parbat and Kaski districts, along with many other villages in different districts of western Nepal. To help in his noble cause, the Society of Electrical and Electronics Engineers appeals to all students and faculty to join together and save one dollar a month for the Nepal Wireless Project. Your saving can make a difference. For further information: Society of Electrical and Electronics Engineers, Kathmandu University, or www.nepalwireless.net.
The worldwide internet population is estimated at 1.08 billion. In 2000 there were 400 million users, and in 1995 20 million users.
ELECTRICAL SAFETY
Accident reports continue to confirm that people responsible for the installation or maintenance of electrical equipment often do not turn the power source off before working on that equipment. Working electrical equipment hot (energized) is a major safety hazard to electrical personnel. The purpose of this article is to alert electrical contractors, electricians, facility technical personnel, and other interested parties to some of the hazards of working on 'hot' equipment and to emphasize the importance of turning the power off before working on electrical circuits.

WHY SHOULD THE POWER BE TURNED OFF?

SHORT CIRCUIT ARCING FAULTS
A short circuit occurs when conductors of opposite polarity are accidentally bridged by a conductive object or bridged to grounded metal. Metal screwdrivers, wrenches, fish tapes, test instruments, etc. have all been found to have made inadvertent contact while persons were working on live equipment. An arcing fault may be established that is limited only by the total impedance of the circuit. The arcing will continue until the circuit breaker, fuse, or equipment ground fault protection device on the line side of the fault opens the circuit. Even if the short circuit protective device opens the circuit without any intentional delay, portions of the conductors and other metallic materials in the path of the arc may explode violently, showering the area with hot molten metal that can cause severe burns or death. The flash associated with the arc can also cause permanent eye damage. Finally, a short circuit may expel shrapnel toward the workman, penetrating clothing or the body.

NORMAL OR ABNORMAL SWITCHING OPERATIONS
Many of the components of an electrical system (switches, circuit breakers, contactors, etc.) are required to be mounted in an enclosure intended to prevent accidental contact with the live electrical parts. The enclosures are also intended to contain byproducts from normal or abnormal switching operations.
When a switch, circuit breaker, or contactor opens a circuit that is carrying rated current or perhaps an overcurrent, an arc is established across the contacts of the device. Hot gases and tiny metal particles may be expelled, under pressure, from the device. This is a perfectly normal consequence, and the closed enclosure contains the hot gases and particles, protecting personnel from possible severe injury. If the cover of the enclosure is opened or removed while the equipment is still energized and a switching operation occurs, severe burns to the body can result from the hot gases and ejected metal particles, and permanent eye damage can occur as a result of the associated flash. Enclosures for electrical equipment are designed to safely contain normal or abnormal conditions. They cannot do their job if they are opened when equipment is energized.

SHOCK OR ELECTROCUTION
The human body will conduct electrical current! A circuit path can be through both arms, through an arm or leg to ground, or through any body surface to ground. There is a certain current level at which an individual cannot voluntarily release from the circuit. This is the "no let go" current, from which burns and death by electrocution can result. Studies have shown that the perception of electrical shock begins when the current through the affected parts of the body is about 0.002 amperes. When the current increases to about 0.015 to 0.020 amperes, it becomes impossible to let go of the circuit. At higher values of current, e.g. above about 0.100 amperes, ventricular fibrillation and/or heart stoppage will cause certain death. The value of current depends on the body's electrical resistance and the voltage applied. From Ohm's law (I = V/R) it can be seen that the current through the body increases when either the applied voltage increases or the body's resistance decreases.
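Ohm's law makes the point concrete. The current thresholds below are the ones quoted in this article; the two skin-resistance values are rough illustrative assumptions, since real body resistance varies widely with moisture, contact area and pressure:

```python
# Body-current estimate from Ohm's law, I = V / R.
# Thresholds are those quoted in the article; resistance values are
# illustrative assumptions (real skin resistance varies enormously).

PERCEPTION_A = 0.002      # shock first perceived
NO_LET_GO_A = 0.015       # cannot voluntarily release the circuit
FIBRILLATION_A = 0.100    # ventricular fibrillation likely

def body_current(volts, body_resistance_ohms):
    """Current through the body for a given contact voltage and resistance."""
    return volts / body_resistance_ohms

# Dry skin (assumed ~100 kOhm) vs. wet skin (assumed ~1 kOhm) at 120 V
dry = body_current(120, 100_000)   # 1.2 mA: below the perception threshold
wet = body_current(120, 1_000)     # 120 mA: above the fibrillation threshold

print(f"dry skin: {dry * 1000:.1f} mA, wet skin: {wet * 1000:.1f} mA")
```

The same 120 V contact spans the range from barely perceptible to potentially fatal depending only on skin condition, which is exactly why lower-voltage circuits must be treated with the same respect as higher-voltage ones.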
Electrical circuits of 120 V can be just as lethal as 240 V, 480 V, 600 V, or higher voltage circuits, because the current through the body is dependent on the body's resistance. Electrical shock can also cause involuntary muscular reactions which may result in other injuries.

WHY ISN'T THE POWER TURNED OFF?

LACK OF PROPER TRAINING
Many people are just not aware of the inherent dangers as noted above. Victims and witnesses of electrical accidents are often amazed at the violent and explosive nature of electrical energy: the fireballs, bright flashes, acrid smoke and hot molten metal. Often the safety training of electricians is done on an informal basis and may be given by instructors who have already developed bad habits. Sometimes unqualified and unlicensed people work on electrical circuits, and safety training is given lip service, or there is no training at all. It
is essential that safety training be emphasized to preclude any such complacency. There are courses in electrical safety provided by colleges, by the IBEW and other labour groups, and by various associations. Industry management can promote increased safety by requiring more of their employees to attend such formal safety courses.

THE ELECTRICAL SERVICE "CAN'T" BE INTERRUPTED
Countless electrical accidents have been the result of this philosophy. Invariably, the accidents cause major shutdowns, outages, and equipment replacement. Thus, what could not be shut down is shut down! With detailed planning, almost any piece of electrical equipment can be taken out of service. While this planning may take additional time and involve additional costs, the risk of not doing it is an accident that can result in massive equipment damage, personal injury, or death. The time and cost of an accident will far exceed the time and cost of a properly planned outage.

THE JOB MUST BE DONE QUICKLY
When the pressures of time dominate any work activity, mistakes and accidents invariably happen. Caution and good judgment give way to haste. Again, a resulting accident will inevitably take more time to resolve.

"WE'VE NEVER HAD A PROBLEM BEFORE"
There is a common misconception that if a known safety practice is violated several times without resulting in an accident, then a future accident won't happen either. Many electricians who receive safety training learn on 120 V/240 V circuits, and much of their work deals with 120 V to ground. While it is possible to be shocked, burned and/or electrocuted on 120 V/240 V systems, these individuals may lose their fear by continually working equipment hot until it becomes second nature. A few shocks, sparks, and burned wires may not deter them. It may be faster to make connections without having to turn off the power. Transferring this 120 V experience to 480 V and above can be a fatal error.
THE EQUIPMENT NEEDS TO BE ENERGIZED FOR TESTS
It is recognized that there are some situations where electrical measurements need to be taken while the equipment is energized. In these situations, there are certain legal requirements that must be met before any work is performed, including ensuring the work is done by a "qualified person". A qualified person is "one familiar with the construction
and operation of the equipment and the hazards involved". The possession of an electrical license may not be sufficient to qualify a person to work on all equipment. Education and training may be necessary for the specific equipment involved.

OTHER HAZARDS
There are a number of other hazards related to working equipment hot which are not obvious. In particular, determining that a circuit is OFF can be difficult in some instances. Even with the best of intentions to avoid working hot, it is necessary and important to check for circuit voltage with an appropriate voltmeter before working on equipment presumed to have been de-energized. This situation arises when the equipment involves items such as tie breakers, double-throw disconnect switches, automatic transfer switches and emergency generators. In such cases, turning the equipment to OFF may result in power being supplied by another circuit route or from another source. Working on these circuits requires extra knowledge and caution. The use of lock-outs and lock-off tags and equipment is essential when working remotely from a disconnect device. The electrician must assure that the power is OFF and stays OFF. Another less obvious hazard can exist when restarting equipment after a fault. Resetting or replacing an overcurrent protective device without correcting the cause can result in circuit breaker tripping, fuse opening, and possible damage to equipment and injury to personnel from arc byproducts. This problem can occur at initial start-up, at restart after rework, and at restart after incidents such as short circuits or water damage. It is important that the validity of the circuit phase isolation be verified by both dielectric strength testing (hi-pot) and insulation resistance testing (megger). Also, prior to restart, all loads should be shed, i.e., the load switches turned off, so that the restart does not close into a large number of motor loads.
This sort of activity takes knowledge, education, and training, and should only be attempted by qualified persons.

IN SUMMARY
Electrical accidents can't happen when the power is shut off. While that statement seems to make obvious sense, this article has attempted to make another statement clearer: electrical accidents can and do happen when working on equipment that is energized. All electrical personnel should remember that even a 'simple' accident can result in major equipment damage, severe personal injury, or even death.
Currents of approximately 0.2 A are potentially fatal, because they can make the heart fibrillate, or beat in an uncontrolled manner.
MAJOR OR MICRO?
Hydropower has always been the talk of the town in Nepal. Nepal is rich in hydro-resources, with one of the highest per capita hydropower potentials in the world. Of the economically feasible potential of 43 GW, only 262 MW, or about 0.6%, has been developed so far. The construction of hydroelectric projects contributes to the economic, regional, and social development of a region. Such projects often result in increased investment and economic growth. They also promote access to roads, schools, health centers and job opportunities, which in turn raises people's living standards in the long run, and bring the benefits of flood control, flow manipulation for downstream irrigation, and water supply. So the question arises: what should the government prioritize, major or micro hydro projects? Both kinds of projects have their own pros and cons, but developing micro and major hydropower simultaneously will be conducive to the development of the country. All-round development is what our country requires at this moment. But for this, the government has to encourage semi-governmental and non-governmental bodies, private companies, and cooperatives to invest in the field. The main advantage of micro-hydro power is that it can be installed in almost any part of the country, so it is feasible and exploitable in most rural areas. It helps establish basic infrastructure like roads, hospitals and schools in rural areas. Almost half of the country has no access to electricity; electrifying those parts will boost people's living and economic standards, and industrial growth can then prosper in those areas. It creates local employment and income, as well as fostering entrepreneurship and a sense of community ownership. In addition, it is a reliable source of energy which can also be used for irrigation schemes and transportation.
Most of the rural parts of Nepal are located far from serviceable roads and long distances from the NEA's national grid (which would mean high transmission and construction costs). So mini hydro would be a better option in those areas. Also, the components for micro-hydro can be locally manufactured, so the cost can be significantly lowered. Micro-hydro plants are comparatively simple in construction, and a scheme can be locally managed, operated and maintained. So it provides better access to reliable, appropriate and affordable energy, which has an important role in improving the living conditions of the poor in our country. But for the development of the country, we need industries
There are 1956 micro-hydro plants capable of generating 13,064 kW of power already installed in Nepal.
exists between rich and poor areas. While some colleges and universities of Kathmandu can afford a VSAT installed in their vicinity, and many have internet services from neighboring ISPs, others outside the valley still work with donated Pentium II computers, where all that the students do is play chess to enrich their wisdom. All the ISPs are centered in Kathmandu. These ISPs have branches outside the valley, but only in limited areas where the first thing considered is economic benefit. Besides the economy, another reason why Nepalese are at the lower end of the digital divide is that many people are not computer literate, let alone information technology literate. Many people still think that the internet means only email; thousands of gigabytes of information go unused by such users. Language is one cause of this: computers work in English, a language they know little about. Work has been done, however, both nationally and internationally, to bridge the divide. One example is open source. To tackle the hegemony of most business houses, a little penguin (the logo of Linux, the flagship of open source) has been doing a lot. Free operating systems like NepaLinux, the Nepali version of Linux, have been used to make people computer literate. Madan Puraskar Pustakalaya's computer literacy campaign in Dailekh had school-going students seeing computers for the first time, and in their own mother tongue at that. Some hope arises from the mobile phone industry. Telecommunications is one of the sustaining industries in Nepal, besides packaged noodles. There is hope that competition among mobile phone companies will help newer technology arrive more cheaply and in a more user-friendly form. This would in turn help the middle class and the lower middle class use the technology to its maximum extent. Still, across the majority of the country, development in information technology remains at the grass-roots level.
IT development is largely centralized in the capital, and only to a very small extent in other urban centers. Villages in Nepal are literally still in the dark ages. When western youths talk, they talk about convergence and miniaturization. They talk about whose hard drive is the biggest, whose microprocessor is the fastest, and whose iPhone is the sleekest. Meanwhile, in our part of the world, there are people who still say, "I saw a computer the other day."
The US, with a population close to the population of the Middle East, has 199 million Internet users while the Middle East has only 16 million.
ADSL TECHNOLOGY
ADSL, or Asymmetric Digital Subscriber Line, is a technology that uses existing twisted-pair telephone lines to create access paths for high-speed data. This exciting technology is overcoming the limits of the public telephone network by enabling the delivery of high-speed Internet access to the vast majority of subscribers' homes at a very affordable cost. Delivery of ADSL services requires a single copper pair configured as a standard voice circuit with an ADSL modem at each end of the line, creating three information channels: a high-speed downstream channel, a medium-speed upstream channel, and a plain old telephone service (POTS) channel for voice. Data rates depend on several factors, including the length of the copper wire, the wire gauge, the presence of bridged taps, and cross-coupled interference. Line performance increases as the line length is reduced, the wire gauge is increased, bridged taps are eliminated, and cross-coupled interference is reduced. The modem located at the subscriber's premises is called an ADSL transceiver unit-remote (ATU-R), and the modem at the central office is called an ADSL transceiver unit-central office (ATU-C). The ATU-Cs take the form of circuit cards mounted in the digital subscriber line access multiplexer (DSLAM). A residential or business subscriber connects their PC and modem to an RJ-11 telephone outlet on the wall. The existing house wiring usually carries the ADSL signal to the network interface device (NID) located on the customer's premises.
There are both technical and marketing reasons why ADSL is, in many places, the most common type offered to home users. On the technical side, there is likely to be more crosstalk from other circuits at the DSLAM end, where the wires from many local loops run close together (crosstalk refers to any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel; it is usually caused by undesired capacitive, inductive, or conductive coupling from one circuit, part of a circuit, or channel to another). The upload signal is weakest at the noisiest part of the local loop, while the download signal is strongest at the noisiest part of the local loop. It therefore makes technical sense to have the DSLAM (Digital Subscriber Line Access Multiplexer) transmit at a higher bit rate than the modem on the customer end. Since the typical home user does in fact prefer a higher download speed, the telephone companies chose to make a virtue out of necessity; hence ADSL. On the marketing side, limiting upload speeds limits the attractiveness of the service to business customers, often leading them to purchase higher-cost digital signal services instead. In this fashion, it segments the digital communications market between business and home users.

TECHNOLOGY
ADSL depends on advanced digital signal processing and creative algorithms to squeeze as much information as possible through twisted-pair telephone lines. In addition, many advances have been required in transformers, analog filters, and A/D converters. Long telephone lines may attenuate signals at one megahertz (the outer edge of the band used by ADSL) by as much as 90 dB, forcing the analog sections of ADSL modems to work very hard to realize large dynamic ranges, separate channels, and maintain low noise figures. On the outside, ADSL looks simple: transparent synchronous data pipes at various data rates over ordinary telephone lines. On the inside, where all the transistors work, there is a miracle of modern technology.
The distinguishing characteristic of ADSL over other forms of DSL is that the volume of data flow is greater in one direction than the other, i.e. it is asymmetric. Providers usually market ADSL as a service for consumers to connect to the Internet in a relatively passive mode: able to use the higher-speed direction for downloads from the Internet, but not needing to run servers that would require high speed in the other direction. By contrast, SDSL (Symmetric Digital Subscriber Line) provides a symmetrical data transfer rate and is ideal for businesses.

To create multiple channels, ADSL modems divide the available bandwidth of a telephone line in one of two ways: Frequency Division Multiplexing (FDM) or Echo Cancellation. FDM assigns one band for upstream data and another band for downstream data. The downstream path is then divided by time division multiplexing into one or more high-speed channels and one or more low-speed channels. The upstream path is likewise multiplexed into corresponding low-speed channels. Echo Cancellation assigns the upstream band to overlap the downstream, and separates the two by means of local echo cancellation, a technique well known from V.32 and V.34 modems. With either technique, ADSL splits off a 4 kHz region for POTS at the DC end of the band.

An ADSL modem organizes the aggregate data stream created by multiplexing downstream channels, duplex channels, and maintenance channels together into blocks, and attaches an error correction code to each block. The receiver then corrects errors that occur during transmission, up to the limits implied by the code and the block length. The unit may, at the user's option, also create superblocks by interleaving data within subblocks; this allows the receiver to correct any combination of errors within a specific span of bits, and makes for effective transmission of both data and video signals alike.

HOW DOES ADSL WORK?

In the ADSL frequency plan, the lowest band is used by normal voice telephony (PSTN), with the upstream band above it and the downstream band above that. Currently, most ADSL communication is full duplex. Full-duplex ADSL communication is usually achieved on a wire pair by frequency division duplex (FDD), echo-cancelled duplex (ECD), or time division duplexing (TDD). FDD uses two separate frequency bands, referred to as the upstream and downstream bands. The upstream band is used for communication from the end user to the telephone central office; the downstream band is used for communication from the central office to the end user. With standard ADSL, the band from 25.875 kHz to 138 kHz is used for upstream communication, while 138 kHz to 1104 kHz is used for downstream communication. Each of these bands is further divided into smaller frequency channels of 4.3125 kHz. During initial training, the ADSL modem tests which of the available channels have an acceptable signal-to-noise ratio. The distance from the telephone exchange, noise on the copper wire, or interference from AM radio stations may introduce errors on some frequencies. By keeping the channels small, a high error rate on one frequency need not render the line unusable: that channel is simply not used, merely resulting in reduced throughput on an otherwise functional ADSL connection. Vendors may support the use of higher frequencies as a proprietary extension to the standard; however, this requires matching vendor-supplied equipment on both ends of the line, and will likely cause crosstalk issues that affect other lines in the same bundle. There is a direct relationship between the number of channels available and the throughput capacity of the ADSL connection. The exact data capacity per channel depends on the modulation method used. For conventional ADSL, downstream rates range from 256 kbit/s up to 8 Mbit/s within 1.5 km (5000 ft) of the DSLAM-equipped central office.

MODULATION

Two main modulation schemes are currently used to implement ADSL: carrierless amplitude/phase (CAP), a single-carrier modulation scheme based on quadrature amplitude modulation (QAM); and discrete multi-tone (DMT), a multichannel modulation scheme. The choice between them naturally depends on how well they perform in the presence of impairments on the existing copper twisted-pair access cabling, because these can limit the transmission capacity. In addition, high-bit-rate services carried by ADSL must not interfere with other services, particularly the plain old telephone service (POTS) being transported simultaneously over the same lines. A highly adaptive transmission system is needed to cope with all these sources of signal degradation. Having carefully studied the performance and flexibility of both modulation techniques, Alcatel Telecom decided to implement DMT in its ADSL system. DMT has the added advantage of having been standardized by the American National Standards Institute (ANSI); it is also being adopted by the European Telecommunications Standards Institute (ETSI).

MULTICARRIER MODULATION

In essence, multicarrier modulation superimposes a number of carrier-modulated waveforms to represent the input bit stream. The transmitted signal is the sum of these subchannels (or tones), which have the same bandwidth and equally spaced center frequencies. The number of tones must be large enough to ensure good performance. In practice, a value of 256 provides near-optimum performance while ensuring manageable implementation complexity. Early problems with maintaining an equal spacing between tones have been resolved with the introduction of digital signal processors, which can accurately synthesize the
sum of modulated waveforms, and the FFT (Fast Fourier Transform), which can efficiently compute this sum.

DISCRETE MULTI-TONE MODULATION

DMT is a form of multicarrier modulation in which each tone is QAM-modulated on a separate carrier. The lowest carriers are not modulated, thereby avoiding interference with POTS. DMT modulation is optimal for band-limited communication channels (such as twisted-pair telephone cables), which exhibit large differences in gain and phase with frequency. When the modem is initialized, the number of bits assigned to each tone can be set to compensate for differences in these transmission characteristics. Subsequently, if conditions on the line alter slowly, this bit assignment can be changed "on the fly." Over long distances, a DMT-based ADSL transmission system approaches the fundamental capacity limit of 13.6 Mbps. However, over the distances typically found in the access network (a few kilometers), the maximum capacity drops to about 6 Mbps. In principle, the tones are independent of one another. In practice, however, some intersymbol interference (ISI) occurs because the tail of one symbol corrupts the start of the following symbol. Fortunately, this can be virtually eliminated by adding a small number of samples (the cyclic prefix) to each DMT symbol. Any ISI is then limited to the prefix, which is removed after demodulation by the FFT.

ADSL IN NEPAL

ADSL technology is very new in Nepal. Nepal Telecom launched its broadband service using ADSL2+ (up to 24 Mbit/s) technology. The service is initially available in the Kathmandu valley only, with plans to expand the ADSL network throughout the country within the next three years. Initially, only high-speed Internet service has been available; according to the NTA, services such as VPN, multicasting, video conferencing, video-on-demand, and broadcast applications will be added in the future.
Nepal Telecom has claimed that it offers ADSL service to net surfers at the cheapest price. This step will surely increase the number of Internet users in Nepal.

References: http://www.wikipedia.org; http://www.cs.tut.fi; http://howstuffswork.com
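The band plan described above can be checked with a little arithmetic. This sketch (plain Python; the band edges and 4.3125 kHz tone spacing come from the article, while the 4000-symbols-per-second rate and 8-bits-per-tone figure are illustrative assumptions, not the exact bit-loading rule of a real modem) counts the subchannels in each band:

```python
# Counting ADSL subchannels from the band plan quoted in the text.
TONE_SPACING_KHZ = 4.3125

UPSTREAM_KHZ = (25.875, 138.0)     # end user -> central office
DOWNSTREAM_KHZ = (138.0, 1104.0)   # central office -> end user

def tone_count(band):
    """Number of 4.3125 kHz tones that fit in a (lo, hi) band in kHz."""
    lo, hi = band
    return int(round((hi - lo) / TONE_SPACING_KHZ))

up = tone_count(UPSTREAM_KHZ)      # 26 upstream tones
down = tone_count(DOWNSTREAM_KHZ)  # 224 downstream tones
print(up, down)                    # prints: 26 224

# Rough capacity check (assumed figures: ~4000 DMT symbols/s, 8 bits/tone):
# 224 tones * 8 bits * 4000 symbols/s is about 7.2 Mbit/s -- the same order
# of magnitude as the 8 Mbit/s downstream rate quoted in the article.
print(down * 8 * 4000)             # prints: 7168000
```

The asymmetry is visible directly in the tone counts: roughly nine times as many tones are allocated downstream as upstream.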
A better alternative to microcontrollers, often overlooked by undergraduate students, is the Programmable Logic Device (PLD). The main advantage of a PLD over a microcontroller is speed: a digital system implemented in hardware is faster than one implemented in software. Other advantages include higher pin counts, greater parallelism, and greater flexibility.
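The idea behind a PLD's flexibility can be illustrated with a toy software model. Most FPGAs build their logic from small lookup tables (LUTs); the `Lut4` class below is a hypothetical Python sketch of that concept, not any vendor's toolflow: "programming" the device just means filling in a truth table, while the hardware stays generic.

```python
# Toy model of a programmable logic element: a 4-input lookup table (LUT).
class Lut4:
    def __init__(self, truth_table):
        # truth_table[i] is the output bit for input pattern i (0..15).
        assert len(truth_table) == 16
        self.table = truth_table

    def __call__(self, a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.table[index]

# Configure one LUT as a 4-input AND, another as 4-input parity (XOR) --
# the same generic element, two different functions, no re-manufacture.
and4 = Lut4([1 if i == 0b1111 else 0 for i in range(16)])
xor4 = Lut4([bin(i).count("1") & 1 for i in range(16)])

print(and4(1, 1, 1, 1))  # prints: 1
print(xor4(1, 0, 1, 0))  # prints: 0  (two set inputs -> even parity)
```

In a real device thousands of such elements evaluate in parallel every clock cycle, which is where the speed advantage over a sequentially executing microcontroller comes from.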
Unlike a full-custom IC, a PLD has no defined function at the time of manufacture and can be configured later. PLDs can be reprogrammed several times, and their functions can be changed even after they have been soldered onto PCBs. PLDs are available in many sizes, ranging from PAL (Programmable Array Logic) devices of less than a hundred gate equivalents to multimillion-gate FPGAs (Field Programmable Gate Arrays). With PLDs, designers have many options to match their needs and budget. Free and low-cost software tools are widely available on the Internet that allow designers to quickly develop, simulate, and test their designs. With the development of Hardware Description Languages such as Verilog and VHDL, designing a complex digital circuit has become as easy as writing a simple C program. Although choosing the correct device for any system is difficult, and is made even harder when many alternatives are available, it is advisable to consider as many workable alternatives as possible instead of sticking to the 8051 microcontroller as the only viable solution.

Mr. Ghimire is an Assistant Professor at the Department of Electrical and Electronic Engineering.

HUMOR
reaches into his wallet, pulls out a five-dollar bill and hands it to the mathematician. Now, it's the engineer's turn. He asks the mathematician "What goes up a hill with three legs and comes down on four?" The mathematician looks up at him with a puzzled look. He takes out his laptop computer and searches all of his references. He taps into the air phone with his modem and searches the net and the Library of Congress. Frustrated, he sends e-mail to his co-workers all to no avail. After about an hour, he wakes the engineer and hands him $50. The engineer politely takes the $50 and turns away to try to get back to sleep. The mathematician then hits the engineer, saying, "What goes up a hill with three legs, and comes down on four?" The engineer calmly pulls out his wallet, hands the mathematician five bucks, and goes back to sleep.
The Intel 8051 has 128 bytes of on-chip RAM and 4 KB of on-chip ROM.
Photo Corner
Annual Day 2007 Dr. Chhetri performing at SEEE Annual Day 2007
Launching of Tech-Brief Dean of SoE, Dr. Bhola Thapa launching the fortnightly newsletter Tech-Brief
The Victorious 2005 Batch The 2005 Batch defeated the 2004 Batch 2-1 in the SEEE Running Shield Football Tournament