

WHAT IS IEEE

IEEE is the world's largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity. IEEE and its members inspire a global community through IEEE's highly cited publications, conferences, technology standards, and professional and educational activities.

Mission Statement
IEEE's core purpose is to foster technological innovation and excellence for the benefit of humanity.

Vision Statement
IEEE will be essential to the global technical community and to technical professionals everywhere, and be universally recognized for the contributions of technology and of technical professionals in improving global conditions.

The IEEE is engaged in an enterprise-wide strategic planning process. A summary of the long-range strategic plan, termed the IEEE Envisioned Future, details the main elements of the plan.

IEEE Quick Facts
IEEE has:
more than 395,000 members in more than 160 countries, 45 percent of whom are from outside the United States
more than 90,000 student members
331 sections in ten geographic regions worldwide
1,952 chapters that unite local members with similar technical interests
1,855 student branches in 80 countries
483 student branch chapters at colleges and universities
338 affinity groups. IEEE Affinity Groups are non-technical sub-units of one or more Sections or a Council. The Affinity Group parent entities are the IEEE-USA Consultants' Network, Graduates of the Last Decade (GOLD), Women in Engineering (WIE) and Life Members (LM)

IEEE:
has 38 societies and 7 technical councils representing the wide range of technical interests
has more than 2.5 million documents in the IEEE Xplore Digital Library, with more than 7 million downloads each month
has 1,300 standards and projects under development
publishes 148 transactions, journals and magazines
sponsors over 1,100 conferences in 73 countries

*Data current as of 31 Dec 2009

IEEE India Council

What is IEEE India Council?
IEEE India Council is the umbrella organisation which coordinates IEEE activities in India. Its primary aim is to assist and coordinate the activities of local "Sections", in order to benefit mutually and avoid duplication of effort and resources. IEEE India Council was established on 20th May 1976 and is one of the five councils in the Asia Pacific Region (Region 10 of IEEE).

Sections
An "IEEE Section" is an IEEE entity which caters to the needs of a specified geographical area. At present, there are 10 IEEE Sections in India (listed alphabetically):
1. Bangalore Section
2. Bombay Section
3. Calcutta Section
4. Delhi Section
5. Gujarat Section
6. Hyderabad Section
7. Kerala Section
8. Kharagpur Section
9. Madras Section
10. Uttar Pradesh Section

Student Branches
An "IEEE Student Branch" is an IEEE entity which serves the students of a specific educational institution. Presently there are 327 student branches located in all major universities and engineering colleges all over India. Each Student Branch is attached to the nearest IEEE Section.

Chapters
An "IEEE Chapter" groups IEEE members who share a common interest in a specific domain. Usually Chapters are attached to the nearest IEEE Section; some Chapters have chosen to be attached directly to the India Council. The following eight Chapters are attached to the IEEE India Council:
i. EM 14/IA 34: Joint IEEE Chapter of the Engineering Management Society and Industry Applications Society
ii. Co 16: IEEE Computer Society
iii. PE 31: IEEE Power Engineering Society
iv. E 25: Education Society Chapter
v. NPS 05/IE 13: Joint IEEE Chapter of the Nuclear & Plasma Sciences Society and Industrial Electronics Society
vi. AES 01/COM 19/LE 036: Joint IEEE Chapter of the Aerospace & Electronic Systems Society, Communications Society and Lasers & Electro-Optics Society
vii. ED 15/MTT 17: Joint IEEE Chapter of the Electron Devices Society and Microwave Theory & Techniques Society
viii. CPMT 21: IEEE Components, Packaging and Manufacturing Technology Society

AN OVERVIEW OF THE WINDOWS AZURE PLATFORM


ABHINAV MATHUR

Using computers in the cloud can make lots of sense. Rather than buying and maintaining your own machines, why not exploit the acres of Internet-accessible servers on offer today? For some applications, both code and data might live in the cloud, where somebody else manages and maintains the systems they use. Alternatively, applications that run inside an organization (on-premises applications) might store data in the cloud or rely on other cloud infrastructure services. However it's done, exploiting the cloud's capabilities can improve our world. But whether an application runs in the cloud, uses services provided by the cloud, or both, some kind of application platform is required. Viewed broadly, an application platform can be thought of as anything that provides developer-accessible services for creating applications. In the local, on-premises Windows world, for example, this includes technologies such as Windows Server, the .NET Framework, SQL Server, and more. To let applications exploit the cloud, cloud application platforms must also exist. Microsoft's Windows Azure platform is a group of cloud technologies, each providing a specific set of services to application developers. As Figure 1 shows, the Windows Azure platform can be used both by applications running in the cloud and by on-premises applications.

Figure 1: The Windows Azure platform supports applications, data, and infrastructure in the cloud.

The components of the Windows Azure platform are:
Windows Azure: Provides a Windows-based environment for running applications and storing data on servers in Microsoft data centers.
SQL Azure: Provides data services in the cloud based on SQL Server.
Windows Azure platform AppFabric: Provides cloud services for connecting applications running in the cloud or on premises.

Each part of the Windows Azure platform has its own role to play. This overview describes all three, first at a high level, then in a bit more detail. The goal is to provide a big-picture introduction to this new application platform.

WINDOWS AZURE

At a high level, Windows Azure is simple to understand: It's a platform for running Windows applications and storing their data in the cloud. Figure 2 shows its main components.

Figure 2: Windows Azure provides Windows-based compute and storage services for cloud applications.

As the figure suggests, Windows Azure runs on a large number of machines, all located in Microsoft data centers and accessible via the Internet. A common Windows Azure fabric knits this plethora of processing power into a unified whole. Windows Azure compute and storage services are built on top of this fabric.

The Windows Azure compute service is based, of course, on Windows. Developers can build applications using the .NET Framework, unmanaged code, or other approaches. Those applications are written in ordinary Windows languages, such as C#, Visual Basic, C++, and Java, using Visual Studio or another development tool. Developers can create Web applications using technologies such as ASP.NET, Windows Communication Foundation (WCF), and PHP; applications that run as independent background processes; or applications that combine the two.

Both Windows Azure applications and on-premises applications can access the Windows Azure storage service, and both do it in the same way: using a RESTful approach. This service allows storing binary large objects (blobs), provides queues for communication between components of Windows Azure applications, and even offers a form of tables with a simple query language. For applications that need traditional relational storage, the Windows Azure platform provides SQL Azure Database, described later. An application using the Windows Azure platform is free to use any combination of these storage options.

Running applications and storing their data in the cloud can have clear benefits. Rather than buying, installing, and operating its own systems, for example, an organization can rely on a cloud provider to do this for it. Also, customers pay just for the computing and storage they use, rather than maintaining a large set of servers only for peak loads. And if they're written correctly, applications can scale easily, taking advantage of the enormous data centers that cloud providers offer.

Yet achieving these benefits requires effective management. In Windows Azure, each application has a configuration file, as shown in Figure 2. By changing this configuration manually or programmatically, an application's owner can control various aspects of its behavior, such as setting the number of application instances that Windows Azure should run. The Windows Azure fabric then monitors the application to maintain this desired state.

To let its customers create, configure, and monitor applications, Windows Azure provides a browser-accessible portal. A customer provides a Windows Live ID, then chooses whether to create a hosting account for running applications, a storage account for storing data, or both. An application is free to charge its customers in any way it likes: subscriptions, per-use fees, or anything else.

Windows Azure is a quite general platform that can be used in various scenarios. Here are a few examples:

A start-up creating a new Web site (the next Facebook, say) could build its application on Windows Azure. Because this platform supports both Web-facing services and background processes, the application can provide an interactive user interface as well as executing work for users asynchronously. Rather than spending time and money worrying about infrastructure, the start-up can instead focus solely on creating code that provides value to its customers and investors. The company can also start small, incurring low costs while its application has only a few users. If the application catches on and usage increases, Windows Azure can scale the application as needed.

An independent software vendor (ISV) creating a software-as-a-service (SaaS) version of an existing on-premises Windows application might choose to build it on Windows Azure. Because Windows Azure mostly provides a standard Windows environment, moving the application's business logic to this cloud platform won't typically pose many problems. And once again, building on an existing platform lets the ISV focus on its business logic (the thing that makes it money) rather than spending time on infrastructure.

An enterprise creating an application for its customers might choose to build it on Windows Azure. Because Windows Azure supports .NET, developers with the right skills aren't difficult to find, nor are they prohibitively expensive. Running the application in Microsoft's data centers frees the enterprise from the responsibility and expense of managing its own servers, turning capital expenses into operating expenses. And especially if the application has spikes in usage (maybe it's an on-line flower store that must handle the Mother's Day rush), letting Microsoft maintain the large server base required for this can make economic sense.
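The RESTful storage access described above can be made a little more concrete. The sketch below shows only the general shape of the Shared Key scheme the storage service uses (an HMAC-SHA256 signature over details of the request, carried in an Authorization header); the real string-to-sign includes more fields, and the account name, key, and container here are hypothetical:

```python
import base64
import hashlib
import hmac

# Hypothetical storage account name and key -- illustration only.
ACCOUNT = "myaccount"
KEY = base64.b64encode(b"secret-storage-key").decode()

def sign_request(verb, resource_path, date_header):
    """Build a simplified Shared Key style Authorization header.

    The actual Windows Azure scheme signs a longer canonicalized string
    (content headers, canonicalized resource, and so on); this keeps only
    the general shape: HMAC-SHA256 over the request, base64-encoded.
    """
    string_to_sign = f"{verb}\n{date_header}\n/{ACCOUNT}{resource_path}"
    mac = hmac.new(base64.b64decode(KEY),
                   string_to_sign.encode("utf-8"),
                   hashlib.sha256)
    signature = base64.b64encode(mac.digest()).decode()
    return f"SharedKey {ACCOUNT}:{signature}"

# A blob lives at a plain HTTP URL, so any platform that can issue an
# HTTP GET with this header can read it -- the point of the REST design.
url = f"https://{ACCOUNT}.blob.core.windows.net/photos/cat.jpg"
auth = sign_request("GET", "/photos/cat.jpg", "Mon, 01 Mar 2010 12:00:00 GMT")
```

Because the interface is just HTTP plus a signed header, on-premises code and cloud code access storage identically, as the text notes.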

Running applications in the cloud is one of the most important aspects of cloud computing. With Windows Azure, Microsoft provides a platform for doing this, along with a way to store application data. As interest in cloud computing continues to grow, expect to see more Windows applications created for this new world.
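The pattern running through the scenarios above, a Web-facing front end handing slow work to a background process through a queue, can be sketched as follows. This is a language-agnostic illustration; the in-memory queue stands in for the Windows Azure queue service (which a real worker would poll over REST), and all names are illustrative:

```python
import queue

# In-memory stand-in for the Windows Azure queue service.
work_queue = queue.Queue()

def web_role(order_id):
    """Front end: accept a request quickly and enqueue the slow work."""
    work_queue.put({"task": "process_order", "order_id": order_id})
    return f"order {order_id} accepted"

def worker_role():
    """Background process: drain the queue and do the work asynchronously."""
    results = []
    while not work_queue.empty():
        msg = work_queue.get()
        results.append(f"processed order {msg['order_id']}")
        work_queue.task_done()
    return results

web_role(1)
web_role(2)
print(worker_role())  # -> ['processed order 1', 'processed order 2']
```

Scaling then means running more instances of either role, which is exactly the knob the configuration file exposes.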

SQL AZURE

One of the most attractive ways of using Internet-accessible servers is to handle data. The goal of SQL Azure is to address this area, offering cloud-based services for storing and working with information. While Microsoft says that SQL Azure will eventually include a range of data-oriented capabilities, including data synchronization, reporting, data analytics, and others, the first SQL Azure component to appear is SQL Azure Database. Figure 3 illustrates this.

Figure 3: SQL Azure provides data-oriented services in the cloud.

SQL Azure Database provides a cloud-based database management system (DBMS). This technology lets on-premises and cloud applications store relational and other types of data on Microsoft servers in Microsoft data centers. As with other cloud technologies, an organization pays only for what it uses, increasing and decreasing usage (and cost) as the organization's needs change. Using a cloud database also allows converting what would be capital expenses, such as investments in disks and DBMS software, into operating expenses.

SQL Azure Database is built on Microsoft SQL Server. To a great extent, this technology offers a SQL Server environment in the cloud, complete with indexes, views, stored procedures, triggers, and more. This data can be accessed using ADO.NET and other Windows data access interfaces. In fact, applications that today access SQL Server locally will largely work unchanged with data in SQL Azure Database. Customers can also use on-premises software such as SQL Server Reporting Services to work with their cloud-based data.

While applications can use SQL Azure Database much as they do a local DBMS, the management requirements are significantly reduced. Rather than worrying about mechanics, such as monitoring disk usage and servicing log files, a SQL Azure Database customer can focus on what's important: the data. Microsoft handles the operational details. And like other components of the Windows Azure platform, using SQL Azure Database is straightforward: Just go to a Web portal and provide the necessary information.

Applications might rely on SQL Azure in a variety of ways. Here are some examples:

A Windows Azure application can store its data in SQL Azure Database. While Windows Azure provides its own storage, relational tables aren't among the options it offers. Since many existing applications use relational storage and many developers know how to work with it, a significant number of Windows Azure applications are likely to rely on SQL Azure Database to work with data in this familiar way. To improve performance, customers can specify that a particular Windows Azure application must run in the same data center in which SQL Azure Database stores that application's information.

An application in a small business or a department of a larger organization might rely on SQL Azure Database. Rather than storing its data in a SQL Server or Access database running on a computer under somebody's desk, the application can instead take advantage of the reliability and availability of cloud storage.

Suppose a manufacturer wishes to make product information available both to its dealer network and directly to customers. Putting this data in SQL Azure Database would let it be accessed by applications running at the dealers and by a customer-facing Web application run by the manufacturer itself.

Whether it's for supporting a Windows Azure application, making data more accessible, or other reasons, data services in the cloud can be attractive. As new technologies become available under the SQL Azure umbrella, organizations will have the option to use the cloud for more and more data-oriented tasks.
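The claim that local SQL Server code works largely unchanged shows up most clearly in the connection string: the main visible differences are the cloud server address, the user-at-server login form, and the required encryption. A sketch, with all server, database, and credential names hypothetical:

```python
# Hypothetical names -- illustration only.
SERVER = "myserver"          # yields myserver.database.windows.net
DATABASE = "inventorydb"
USER = "appuser"
PASSWORD = "example-password"

def sql_azure_connection_string():
    """Assemble an ADO.NET-style connection string for SQL Azure.

    SQL Azure expects the login in user@server form and requires an
    encrypted connection; otherwise this looks like any SQL Server
    connection string, which is the point of the service.
    """
    return (
        f"Server=tcp:{SERVER}.database.windows.net;"
        f"Database={DATABASE};"
        f"User ID={USER}@{SERVER};"
        f"Password={PASSWORD};"
        "Encrypt=True;"
    )

conn_str = sql_azure_connection_string()
```

An existing data-access layer would pass this string to its usual SQL Server driver and then run the same queries as before.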

WINDOWS AZURE PLATFORM APPFABRIC

Running applications and storing data in the cloud are both important aspects of cloud computing. They're far from the whole story, however. Another option is to provide cloud-based infrastructure services. Filling this gap is the goal of Windows Azure platform AppFabric. The functions provided by AppFabric today address common infrastructure challenges in connecting distributed applications.

Figure 4: Windows Azure platform AppFabric provides cloud-based infrastructure that can be used by both cloud and on-premises applications.

The components of Windows Azure platform AppFabric are:
Service Bus: Exposing an application's services on the Internet is harder than it might seem. The goal of Service Bus is to make this simpler by letting an application expose endpoints that can be accessed by other applications, whether on-premises or in the cloud. Each exposed endpoint is assigned a URI, which clients can use to locate and access the service. Service Bus also handles the challenges of dealing with network address translation and getting through firewalls without opening new ports for exposed applications.
Access Control: This service allows a RESTful client application to authenticate itself and to provide a server application with identity information. The server can then use this information to decide what this application is allowed to do.
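The Access Control flow just described can be sketched generically: a client obtains a signed token carrying identity claims, presents it with a REST request, and the server verifies the signature before deciding what the caller may do. The token format and claim names below are simplified stand-ins, not the actual wire format the service issues:

```python
import base64
import hashlib
import hmac
import json

# Key shared between the token issuer and the server -- hypothetical.
ISSUER_KEY = b"issuer-signing-key"

def issue_token(claims):
    """Issuer side: sign a set of identity claims into a bearer token."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def check_token(token, required_action):
    """Server side: verify the signature, then authorize from the claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # forged or corrupted token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return required_action in claims.get("allowed", [])

token = issue_token({"client": "partner-app", "allowed": ["read-orders"]})
```

The key point matches the text: the server never stores per-partner credentials itself; it only trusts claims vouched for by the token issuer.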

As with Windows Azure and SQL Azure, a browser-accessible portal is provided to let customers sign up for AppFabric using a Windows Live ID. Once this has been done, these services can be used in a variety of ways, including the following:

Suppose an enterprise wished to let software at its trading partners access one of its applications. It could expose this application's functions through SOAP or RESTful Web services, then register their endpoints with Service Bus. Its trading partners could then use Service Bus to find these endpoints and access the services.

An application running on Windows Azure might need to access data stored in an on-premises database. Doing this requires making that database available via the Internet, a problem that can be solved by creating a service that accesses that data, then exposing this service via Service Bus.

Imagine an enterprise that exposes application services to its trading partners. If those services are exposed using REST, the application could rely on Access Control to authenticate and provide identity information for each client application. Rather than maintaining information internally about each trading partner application, this information could instead be stored in the Access Control service.

As just described, Windows Azure platform AppFabric provides cloud-based infrastructure services. Microsoft is also creating an analogous technology known as Windows Server AppFabric. As its name suggests, the services it provides run on Windows Server (they support on-premises applications) rather than in the cloud. The on-premises services are also different from those in Windows Azure platform AppFabric, focused today on hosting WCF services and on distributed caching. Don't be confused; throughout this paper, the name AppFabric is used to refer to the cloud-based services. Also, don't confuse the Windows Azure platform AppFabric with the fabric component of Windows Azure itself. Even though both contain the term fabric, they're wholly separate technologies addressing quite distinct problems.

LOOKING AHEAD

Microsoft has announced a number of updates that it plans to add to the Windows Azure platform in the near future. They include the following:

Windows Azure will get broader support for running existing applications. Also, a technology code-named Sydney will let Windows Azure instances connect to an on-premises environment using IPsec.

SQL Azure will provide data synchronization services based on the Microsoft Sync Framework that will allow synchronizing data between SQL Azure Database and on-premises databases.

Functionality from Windows Server AppFabric, the on-premises version, will begin to appear in Windows Azure platform AppFabric.

Microsoft Codename Dallas, built on Windows Azure and SQL Azure, will provide a cloud-based marketplace for information. Through RESTful services, developers will be able to subscribe to privately owned data and access public domain data, such as US census information and United Nations statistical data.

All of these changes target the same goal: making the Windows Azure platform useful in a broader range of scenarios.

Future of Autonomous Robotics in Space Exploration


Krati Kiyawat

The first and most basic question that arises is: what is autonomous robotics all about? Autonomous robotics is a branch of science which deals with robots that can perform desired tasks in unstructured environments without continuous human guidance. They can act on their own, independent of any controller. The basic idea is to program the robot to respond in a certain way to outside stimuli. The very simple bump-and-go robot is a good illustration of how this works.

Robots of this type possess various characteristics:
Self-maintenance
Sensing the environment
Performing various tasks
Position sensing and navigation

These characteristics make autonomous robots a very popular alternative, in comparison to more primitive hand-controlled or computer-controlled robots, in the field of space exploration. NASA is implementing autonomous robots for its further space missions. For example: about the size of a softball, the Personal Satellite Assistant (PSA) will be equipped with a variety of sensors to monitor environmental conditions in a spacecraft, such as the amount of oxygen, carbon dioxide and other gases in the air, the amount of bacterial growth, air temperature and air pressure. The robot will also have a camera for video conferencing, navigation sensors, wireless network connections, and even its own propulsion components, enabling it to operate autonomously throughout the spacecraft. (Source: www.nasa.gov)
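The bump-and-go behavior mentioned above is easy to make concrete: the robot drives forward until its bump sensor fires, then backs up, turns away, and resumes. A minimal sketch follows, with the sensor readings simulated as a list of booleans:

```python
def bump_and_go(bump_readings):
    """Return the action taken for each simulated bump-sensor reading.

    The control rule is purely reactive: no map, no planner, no human
    in the loop -- the environment's stimuli alone drive behavior.
    """
    actions = []
    for bumped in bump_readings:
        if bumped:
            # Obstacle hit: back off, turn away, then carry on.
            actions.extend(["reverse", "turn"])
        else:
            actions.append("forward")
    return actions

# Clear path, one collision, clear path again:
print(bump_and_go([False, True, False]))
# -> ['forward', 'reverse', 'turn', 'forward']
```

Even this trivial rule shows the defining trait of autonomy: the mapping from stimulus to response lives in the robot, not in a remote operator.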

Implementing the concept of autonomous robotics requires the skills of machine design, circuit design and hardware coding, which in short is a combination of Mechanical Engineering, Electrical Engineering and Computer Science Engineering. Researchers concerned with creating true artificial life are interested not only in intelligent control, but also in the capacity of the robot to find its own resources through foraging (looking for "food", which includes both energy and spare parts). In the end we can conclude that autonomous robotics has great scope in the field of space exploration, as a lot can still be done in this area. It has a great future ahead.

Impact of Engineering on Indian Society: A Grandma's Version


-Anoop Khatri

"I have to write an essay on the impact of engineering on society. Please help me with this essay, Grandma," pleaded 14-year-old Ram with his 100-year-old grandmother. While Ram's mother used to scold him for making such requests, Grandma used to wait for them. She enjoyed such requests and was ready to help anyone. She was quite up to date with TV discovery channels and the internet, now available in the village where she lived with her son and daughter-in-law, who looked after her well. She had been a popular high school teacher, had a great interest in science and technology, and kept up that interest through all the channels available. For interested villagers she was the best technology entertainer in the evenings. Ram was very proud of her, for his school life was more fun, and more profitable, because of her. He was the favorite student of his mathematics teacher for solving all the math homework problems! His physics teacher was proud of him for knowing so much in physics!! Every day during evening prayer his first demand to God was to give good health to Grandma.

Air Conditioned Earth

"So listen to me," started his Grandma. "Our Mohan's story will give you a fair idea of the impact of engineering on society. I will not be surprised if people like him soon build an Earth climate conditioning system, with a network of satellites around the globe for monitoring and control purposes. So our Earth will be air conditioned!"

"Who is Mohan, Grandma? Did he benefit a lot from engineering and technology? Is Earth climate conditioning possible, like air conditioning of a room?" Ram asked, with very little hope of completing the essay. He wondered why Grandma was telling him a story instead of dictating the essay right away. "Will I complete the essay today, Grandma?"

"You will be able to write a nice essay after this story. Don't worry," said Grandma.

Why Do Bright People End Up in the US?

She continued. "Our Mohan studied in the same school where you are studying. He was the brightest boy in the school during his time, and now he is one of the best brains in technology in the world. He is working in the US. Don't know why you don't like engineering. I have told you about his contributions to the internet, digital TV and so on. As usual your memory is very weak, or just enough to pass the exams!"

"Oh, sorry Grandma. I remember now. Are you giving him as an example of how engineering motivates people to study well and reach the US and contribute to the growth of US society?" Ram was getting mischievous.

"No. Unlike other countries, which depend mostly on their own human resources to shape the nation, the US has managed to invest heavily in the past to create the best infrastructure for creative people to come together and innovate, helping the US to lead the world. India, on the other hand, has never thought seriously about such investments, for various reasons. In our country we have not been able to do anything conscious and serious to drive technology growth and benefit from it. We buy technology whenever we need it, paying a huge price. Don't you see those main-page articles on submarine and airplane deals? Have you not seen the Volvo buses by which we can go to Bangalore?"

"What is your problem, Grandma? The Government has enough money, and they want to buy to serve and protect our people. By running Volvo buses and airplanes we make good money. Why waste time on doing research and developing technology?" Ram did not want to give up.

What is Impact?

"We will talk about it another day. Now let us focus on the topic of your interest. One day Mohan got up in a hurry to attend to a mobile call at 3 am. The call was from his sister in India. He could see her sad and frustrated face on the video phone. She wanted Mohan to talk to her son and advise him to study hard rather than spend time on the internet or in front of a TV. According to her, her son had become an internet addict as well as a TV addict. He was not interested in school at all. Mohan could not sleep. It was too late. Mohan's sister was married to a rich businessman in Bangalore, and they lived in a posh house with the best facilities and every possible latest gadget. Money was not a problem at all. Both his sister and her husband were busy earning money to support such a lavish lifestyle and to keep up with their friends in society by being the first to buy the latest cars and other gadgets in the market. Their house was like a 5-star hotel, managed by many servants round the clock."

"I want to live in a house like that, Grandma. Why did you not build one, or tell your son to build one?"

Small is Beautiful

"I did everything possible to convince my son about engineering, and used to force him to go and find a good job in Bangalore or abroad. But he always wanted to manage our fields and look after me well when I get old. He does not think chasing money will bring happiness. His wife also thinks engineering has done more damage to society than it has helped people live a better life. So they both listen to my science stories as though they are watching an interesting movie, and leave it at that. Mohan was quite different. Mohan was one of the two children of his parents, who lived in a small house in the village with his grandparents. In the house there was a small hall where, in one corner, a radio set was kept; another corner was a prayer corner. A study table and a chair occupied a third corner. The remaining corner was empty, for people to sit and talk. There was a small room for the grandparents, one room for his parents and him and his sister, and a kitchen and a common toilet plus bathroom. His father was a teacher in the school, and he had a small piece of land for growing vegetables. Before going to school Mohan used to water the plants and help his father carry the vegetable bags to the market. Mohan's father used to entertain the family in the evenings with his flute, and Mohan also learnt to play the flute from him. According to Mohan, those were the happiest days of his life: come back from school, eat nice food prepared by mother, argue with father on all aspects of life, and get good support from grandparents when parents got angry. Though there was limited income, there was no shortage of food or happiness. They were content with what they could earn, and focused on living happily without even thinking of cheating or exploiting others for their advantage. Even workers loved working in their house, for they were treated with respect and concern. It was a great house to live in, for he always felt very happy to go home."

Is Science or Engineering Addictive?

"He used to visit our place, and I used to tell him many interesting things about science and technology, to which I was a sort of addict. But he became a bigger addict, and used to ask very difficult questions. In fact I used to study more about science and technology so that I could have good conversations with him. He was a great fan of science and its ability to provide rational explanations for the many doubts he had in his mind. He used to tell me that he wanted to do very well in engineering and help the school and the people in the village when he grew up. I feel very proud of him and his achievements. Mohan as well as his sister studied well: Mohan went on to study engineering, and his sister chose medicine. There was no pressure on their parents, as the children secured scholarships, so money for study was never an issue. Due to unknown laws of gravitation, Mohan ended up in America, and became a very successful engineer who contributed to the development of the internet and many other projects which were supposed to help mankind do more by doing less. His parents, as well as the whole village, were very proud of their son and his achievements, which they never understood well. They saw him only on TV, giving a lecture about technologies for the future. Mohan married Shanthi of our village. Shanthi was brainwashed by me to marry Mohan; I used to tell her about Mohan and all the great things he used to do as a student, and all the greater things he was doing in the US. Mohan's marriage was a great celebration for the entire village. Everyone came to bless Mohan and Shanthi. After marriage they left for the US, where Shanthi also managed to find a good but very hectic job. His sister had a son. Mohan could not have any children; people attributed it to various things, including work pressure and stress. So all their attention was on his sister's son. He was given the best of everything in technology, as money was not a problem. He could call Mohan anytime on his Blackberry mobile phone."

Impact on Family

"I wish I also could have a Blackberry mobile phone, Grandma! Will you tell my parents? I also need an iPod. Many of my friends have one." Ram was begging.

"You will get all of them, Ram. Do you know how the Blackberry phone attracted its customers? They said it will help you remain close to your dear ones and still attend to office work from home! In reality you end up working for your company 24 hours a day, if you are serious about work, promotion and perks. In fact this gadget, and many such gadgets, have helped in moving people away from each other more and more."

"What do you mean? I thought it helps to bring people together again. They can stay back with their family and check their emails from home, right? Don't you think it helps bring families together?" Ram did not agree.

"No. As you know, the industrial revolution pulled people from villages and joint family homes to industrial establishments, and started the culture of small nuclear families. Now Blackberry and other mobile phones are trying to create a culture of atomic families with very weak bonds. If you are worried about your work 24 hours a day, when will you get time to talk to your wife or your parents? We will always be tuned to email alerts or SMS alerts, even when we are sleeping!! Since your manager sends emails at odd hours, you do the same, to show him that you also work. It is easy to understand how people will behave! So, in the name of connecting people, Nokia and other companies are disconnecting people more and more. Do you agree?"

"No, Grandma. What is wrong in that? I thought mobile phones are a good way to stay connected, as Nokia says." Ram did not want to give up.

"True, Ram, it is good for business, not for people or society. Do you like the idea of having dinner while your mother is on the phone talking to a customer? Think about it."

Impact on Parents

After leaving home, neither Mohan nor his sister could take care of their parents, due to work pressures and other reasons. Work was always more important to them and their organizations. Some villagers wrote letters requesting them to help at least financially. They did not send any money, as they thought it would be misused by the villagers since their parents were quite old. With help from the villagers, their parents lived a happy life. In the village there are always people who are willing to help each other, and such mutual help binds the village and makes happy living possible. In order to live a happy life you need time to help others as well as yourself. We need time to spend with other people. We need time to think about life.

So what is going wrong? Are there better ways to educate people like Mohan? Why does business become the only important task in our life till we die, or are forced to die? Having tasted the advantage of exploiting natural resources, living and nonliving, for agriculture, transportation, medicine and entertainment, humans are exploiting humans more and more to meet their business needs. Exploitation is the key mantra of our society. If one does not know how to exploit human society and the rest of nature, they do not have a right to live. We have reached a state where we even exploit ourselves to make more money!

Any Ways Out

Let us think about this madness and its impact, and see if there are ways out of this mess, or out of the World Wide Web that we are creating for doing business at the speed of light while fooling ourselves that we are doing this to increase our happiness. It looks like there is no way out. It is very strange that humans are in a race to destroy their own happiness at breakneck speed. Do I sound too harsh, Ram? It is very easy to criticize all the good work people are doing and feel great.

Purpose of Life

I can understand, Grandma. You have told me this before also. All this is because you don't know why we are born and what we are supposed to do.
We are forced to follow some lifestyle based on where we take birth and on the education we can get from teachers like you, who don't know much! If you knew God's email address, we could ask and know what to do and what we should not do. What do you think, Grandma? Many of my teachers say that we should have an aim in life, and the bigger the aim, the better. How did you handle such questions, Grandma? You have told me that saints like Veda Vyasa, Shankara and Buddha knew more about life and its purpose. Ram sounded very sympathetic.

I don't know. I don't have good answers to your questions. I don't know of any math or physics to understand them. I can only understand Pythagoras' theorem or Fermat's Last Theorem and simple things like that. I also know that they are useful tools to keep us busy, challenged and confused. I hope we will discover better tools to find the purpose of life some day, much before it is too late. Fortunately, we know quite a lot about how to live a good life, at least in theory: by controlling our senses as best as possible, so as to live a life with minimum exploitation of nature and other living beings. Animals and plants may be fortunate, for they have inbuilt control systems in place by which this is achieved. It is interesting that we are given the power to control our senses and are also allowed to think. I am sure there is some purpose or intention behind this design. We still don't know; maybe we are still evolving.

Indian Approach

Most of the Indian philosophers tried to understand the purpose of life and came up with tools and techniques to live a good and happy life by taking good control of our body and mind, while exploiting natural resources minimally for living a better life. This was perhaps made possible by treating earth, water, wind and fire as gods and finding out techniques for living a better life by worshipping them. Different forms of worship were developed to change the mood of the gods. As usual, there are plenty of methods of worship, depending on the philosopher and their beliefs. I don't know how this worship-and-control culture dominated over other techniques, such as investigate, learn and control, which became widely popular in the western world.

Western Approach

In the western world, philosophers investigated the nature of earth, water, wind and fire, as well as living and nonliving things, and discovered many laws which were useful in explaining natural events. These curiosities led them to engineering and its enormous potential, which eventually led to the industrial revolution and the confidence to control the world and exploit nature more and more.

Real Impact

Anyone who discovers these laws, or has machines built using such laws, will feel more powerful. Such people would like to have a better lifestyle, control the lives of others and amass wealth using their new knowledge. Moreover, in order to stay ahead of others, they would like to earn and invest more and more. This leads to ever-increasing economic activity, and human beings become just natural resources. Everything we do will be dictated by the profits to be made more than anything else. This kind of economic-activity-driven research culture, dominated by industry and management theories, will lead to exploitation of all living and nonliving resources. Time has become a scarcity today for everyone. Universities and research labs today don't have the freedom to think freely and develop robust solutions for helping mankind live a really better and happier life with minimum exploitation of nature. Due to pressure, researchers and developers today try their half-baked ideas on society as early as possible.
So we are always getting pushed new gadgets, medicines, unmanageable buildings, faster vehicles, and roads which we can never cross, and so on. This way we are creating a stressed, confused and very fragile society with very poor and weak relationships. Everyone is worried more and more about themselves, and society is just a concept with no real use. In this industry-centric world order, humans are the next vanishing species after trees, animals and other natural resources. Everything is done to make this industry more and more efficient, with better cars, better aeroplanes, better trains, buses, roads and mobile phones. Do you see how a human-centric society has become an industry-centric society?

What next? Are we really searching for happiness, or the purpose of life? Who knows? Maybe this is the only way. Investigate, learn and control is a world culture today. This may be enough to explore the material world. Whether it will change to worship and control when we start working on understanding the mind and its connection to the external world is a good question to ask. We know today that a rational explanation is very difficult to give for every question we ask. Therefore, for a complete picture, we may have to depend on rational, irrational, as well as imaginary (complex) explanations. Would God emerge again as a result of such investigations?

"Thanks, Grandma. You said too much just now for me to remember or digest. I am only fourteen years old, Grandma! I will now go and write a nice essay. I am sure my teacher will love it. Love you, Grandma. Good night. One last question! What can be done to correct the system?"

"Tomorrow we will talk about it. God bless you."

Future User Interfaces


Several new user interface technologies and interaction principles seem to define a new generation of user interfaces that will move off the flat screen and into the physical world to some extent. Many of these next generation interfaces will not have the user control the computer through commands, but will have the computer adapt the dialogue to the user's needs based on its inferences from observing the user. Most current user interfaces are fairly similar and belong to one of two common types: Either the traditional alphanumeric full screen terminals with a keyboard and function keys, or the more modern WIMP workstations with windows, icons, menus, and a pointing device. In fact, most new user interfaces released after 1983 have been remarkably similar. In contrast, the next generation of user interfaces may move beyond the standard WIMP paradigm to involve elements like virtual realities, head mounted displays, sound and speech, pen and gesture recognition, animation and multimedia, limited artificial intelligence, and highly portable computers with cellular or other wireless communication capabilities. It is hard to envision the use of this hodgepodge of technologies in a single, unified user interface design, and indeed, it may be one of the defining characteristics of the next generation user interfaces that they abandon the principle of conforming to a canonical interface style and instead become more radically tailored to the requirements of individual tasks. The fundamental technological trends leading to the emergence of several experimental and some commercial systems approaching next generation capabilities certainly include the well known phenomena that CPU speed, memory storage capacity, and communications bandwidth all increase exponentially with time, often doubling in as little as two years. 
In a few years, personal computers will be so powerful that they will be able to support very fancy user interfaces, and these interfaces will also be necessary if we are to extend the use of computers to larger numbers of people than the mostly penetrated market of office workers. Traditional user interfaces were function-oriented: the user accessed whatever the system could do by specifying functions first and then their arguments. For example, to delete a file in a line-oriented system, the user would first issue the delete command in some way, such as typing "delete". The user would then further specify the name of the item to be deleted. The typical syntax for function-oriented interfaces was thus a verb-noun syntax. In contrast, modern graphical user interfaces are object-oriented: the user first accesses the object of interest and then modifies it by operating upon it. There are several reasons for going with an object-oriented approach for graphical user interfaces. One is the desire to continuously depict the objects of interest to the user, to allow direct manipulation. Icons are good at depicting objects but often poor at depicting actions, leading objects to dominate the visual interface.

Furthermore, the object-oriented approach implies the use of a noun-verb syntax, where a file is deleted by first selecting the file and then issuing the delete command (for example, by dragging it into the recycle bin). With this syntax, the computer has knowledge of the operand at the time when the user tries to select the operator, and it can therefore help the user select a function that is appropriate for that object, by only showing valid commands in menus and the like. This eliminates an entire category of syntax errors due to mismatches between operator and operand. A further change in functionality access is likely to occur on a macro level in the move from application-oriented to document-oriented systems. Traditional operating systems have been based on the notion of applications that were used by the user one at a time. Even window systems and other attempts at application integration typically forced the user to use one application at a time, even though other applications were running in the background. Also, any given document or data file was operated on by only one application at a time. Some systems allow the construction of pipelines connecting multiple applications, but even these systems still basically have the applications act sequentially on the data. The application model is constraining to users who have integrated tasks that require multiple applications to solve. Approaches to alleviating this mismatch have in the past included integrated software and composite editors that could deal with multiple data types in a single document. No single program is likely to satisfy all computer users, however, no matter how tightly integrated it is, so other approaches have also been invented to break the application barrier. Cut-and-paste mechanisms have been available for several years to allow the inclusion of data from one application in a document belonging to another application.
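The noun-verb model described above can be sketched in a few lines. The object types, command names, and `valid_commands` helper below are invented for illustration; the point is only that selecting the operand first lets the interface compute which operators are legal, so operator/operand mismatches cannot be entered at all.

```python
# Hypothetical mapping from object type to the commands valid for it.
COMMANDS = {
    "file":   {"open", "rename", "delete"},
    "folder": {"open", "rename", "delete", "new-file"},
    "trash":  {"open", "empty"},
}

def valid_commands(selected_type):
    # Noun first: the selection determines which verbs the menus show,
    # eliminating a whole class of syntax errors by construction.
    return sorted(COMMANDS.get(selected_type, set()))
```

With a "trash" object selected, for instance, only "empty" and "open" are offered; "delete" is simply never presented for an object that cannot be deleted.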
Recent systems even allow live links back to the original application, such that changes in the original data can be reflected in the copy in the new document (such as Microsoft's OLE technology). However, these mechanisms are still constrained by the basic application model, which requires each document to belong to a specific application at any given time. An alternative model is emerging in object-oriented operating systems, where the basic object of interest is the user's document. Any given document can contain sub-objects of many different types, and the system will take care of activating the appropriate code to display, print, edit, or email these data types as required. The main difference is that the user no longer needs to think in terms of running applications, since the data knows how to integrate the available functionality in the system. In some sense, such an object-oriented system is the ultimate composite editor, but the difference compared to traditional, tightly integrated multimedia editors is that the system is open and allows plug-and-play addition of new or upgraded functionality as the user desires, without changing the rest of the system. Even the document-oriented systems may not have broken sufficiently with the past to achieve a sufficient match with the users' task requirements. It is possible that the very notion of files and a file system is outdated and should be replaced with a generalised notion of an information space with interlinked information objects, in a hypertext manner. As personal computers get multi-gigabyte hard disks, and additional terabytes become available over the Internet, users will need to access hundreds of thousands or even millions of information objects. To cope with this mass of information, users will need to think of them in more flexible ways than simply as files, and information retrieval facilities need to be made available on several different levels of granularity to allow users to find and manipulate associations between their data. In addition to hypertext and information retrieval, research approaching this next-generation data paradigm includes the concept of piles of loosely structured information objects, the information workspace with multiple levels of information storage connected by animated computer graphics to induce a feeling of continuity, personal information management systems where information is organised according to the time it was accessed by the individual user, and the integration of fisheye hierarchical views of an information space with feedback from user queries. Also, several commercial products are already available to add full-text search capabilities to existing file systems, but these utility programs are typically not integrated with the general file user interface.

Marmota mobile AR identifies landscape features


Augmented Reality, or AR, is currently one of the hot areas for mobile app development: for some reason, people seem quite smitten with the idea of being able to point their mobile device's camera at a street and having information about the buildings and businesses appear on their screen, superimposed over the images in real time. Now, a prototype mobile AR device called Marmota is being tested that concentrates more on topography than urban exploration. The Marmota mobile AR can tell you things like what the names of those mountain peaks over there are, what their elevation is, and how far away they are. Marmota was designed by Michele Zanin, Claudio Andreatta and Paul Chippendale, researchers at the Technologies of Vision Unit (TeV) in the Information Technology Centre of Fondazione Bruno Kessler (FBK) in Trento, Italy. "The system integrates technologies and findings from different disciplines, spanning cartography to computer graphics, and sophisticated machine vision algorithms," said Zanin. Each pixel of the image is associated with information such as altitude, latitude, longitude and distance from the observer. When activated by a user, the device locates itself with its built-in GPS, then sends that information via the Internet to the central Marmota server at FBK. Once those coordinates have been processed by the server, a data package of about 50 to 120 KB is sent back to the device and displayed as a high-resolution, 360-degree augmented onscreen overlay. The device itself reportedly uses only a small amount of memory, letting the server do all the heavy lifting. Besides giving specs on mountains, Marmota can also provide things like the names and locations of counties, roads, hiking trails, rivers and lakes, and will draw these items onto the screen to highlight them. It limits itself to what's visible from the user's point of view, so as not to create confusion with an overabundance of information.
TeV has been working on the project since 2007. The Android-based Marmota currently works anywhere in the world between 60 degrees latitude north and 60 degrees south. "User testing will follow in the immediate future and will involve volunteers from outside FBK, and will hopefully identify critical points in the system, thus helping us to transform the current 'prototype' into an application that can be enjoyed by the general public," said Zanin.
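As a rough illustration of the kind of geometry such a server must perform, the sketch below computes the great-circle distance between the device's GPS fix and a peak. The coordinates and the spherical-Earth approximation are illustrative assumptions; Marmota's actual pipeline (pixel-accurate overlays built from elevation models) is far more involved.

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points,
    via the haversine formula on a sphere of radius ~6371 km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

# Hypothetical GPS fix near Trento and a nearby summit
d = distance_km(46.07, 11.12, 46.10, 11.25)
```

The same distance, together with the peak's known altitude, is what lets the overlay print "how far away" each visible summit is.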

IP SPOOFING
Definition:
The term IP (Internet Protocol) address spoofing refers to the creation of IP packets with a forged (spoofed) source IP address, with the purpose of concealing the identity of the sender or impersonating another computing system.

A Brief History:
The concept of IP spoofing was initially discussed in academic circles in the 1980s. While known about for some time, it was primarily theoretical until Robert Morris, whose son wrote the first Internet Worm, discovered a security weakness in the TCP protocol known as sequence prediction. Stephen Bellovin discussed the problem in depth in "Security Problems in the TCP/IP Protocol Suite", a paper that addressed design problems with the TCP/IP protocol suite. Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed the IP spoofing and TCP sequence prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators.

Internet Protocol
Internet Protocol (IP) is a network protocol operating at layer 3 (the network layer) of the OSI model. It is a connectionless protocol, meaning no information regarding transaction state is maintained as packets are routed across a network. Additionally, there is no method in place to ensure that a packet is properly delivered to the destination.

How It Works:
To completely understand how these attacks can take place, one must examine the structure of the TCP/IP protocol suite. A basic understanding of these headers and network exchanges is crucial to the process.

Examining the IP header, we can see that the first 12 bytes (the top three rows of the header) contain various information about the packet. The next 8 bytes (the next two rows), however, contain the source and destination IP addresses. Using one of several tools, an attacker can easily modify these addresses, specifically the source address field. It is important to note that each datagram is sent independently of all others, due to the stateless nature of IP. Keep this fact in mind as we examine TCP in the next section.
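To make the header layout above concrete, here is a minimal sketch that packs a 20-byte IPv4 header with Python's standard `struct` module and fills the source-address field (bytes 12-15) with an arbitrary, forged address. The addresses and field values are made up, the checksum is left at zero, and nothing is sent on a wire; the sketch only shows how trivially the source field can be set to anything at all.

```python
import socket
import struct

def build_ipv4_header(src_ip, dst_ip, payload_len=0):
    version_ihl = (4 << 4) | 5        # IPv4, header length 5 x 32-bit words (20 bytes)
    tos = 0
    total_length = 20 + payload_len
    identification = 0x1234           # illustrative value
    flags_fragment = 0
    ttl = 64
    protocol = socket.IPPROTO_TCP
    checksum = 0                      # a real stack computes this over the header
    # First 12 bytes: version/IHL, TOS, length, ID, flags, TTL, protocol, checksum.
    # Last 8 bytes: source and destination addresses -- the spoofable fields.
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, tos, total_length,
                       identification, flags_fragment,
                       ttl, protocol, checksum,
                       socket.inet_aton(src_ip),   # bytes 12-15: source (forged here)
                       socket.inet_aton(dst_ip))   # bytes 16-19: destination

header = build_ipv4_header("10.0.0.1", "192.0.2.7")  # arbitrary forged source
```

Actually transmitting such a packet requires a raw socket and elevated privileges, which is exactly why ordinary applications cannot spoof but attackers with root access can.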


Types of IP Spoofing:
Blind spoofing: In this type of attack, a cracker outside the perimeter of the local network transmits multiple packets to his intended target in order to receive a series of sequence numbers, which are generally used to assemble packets in the order in which they were intended (packet 1 is to be read first, then packet 2, 3 and so on). The cracker is blind to how transmissions take place on this network, so he needs to coax the machine into responding to his own requests so that he can analyze the sequence numbers. By taking advantage of knowing the sequence number, the cracker can falsify his identity by injecting data into the stream of packets without having to have authenticated himself when the connection was first established.

Non-blind spoofing: In this type of attack, the cracker resides on the same subnet as his intended target, so he can, by sniffing the wire for existing transmissions, gain knowledge of an entire sequence/acknowledge cycle between his target and other hosts. Once the sequence is known, the attacker can hijack sessions that have already been built by disguising himself as another machine, bypassing any sort of authentication that was previously conducted on that connection.

Man-in-the-Middle Attack: Both types of spoofing are forms of a common security violation known as a man-in-the-middle (MITM) attack. In these attacks, a malicious party intercepts a legitimate communication between two friendly parties. The malicious host then controls the flow of communication and can eliminate or alter the information sent by one of the original participants, without the knowledge of either the original sender or the recipient. In this way, an attacker can fool a victim into disclosing confidential information by spoofing the identity of the original sender, who is presumably trusted by the recipient.

Denial-of-Service Attack: To keep a large-scale attack on a machine or group of machines from being detected, spoofing is often employed by the malefactors responsible to disguise the source of the attacks and make it difficult to shut them off. Spoofing takes on a whole new level of severity when multiple hosts are sending constant streams of packets to the DoS target; in that case, all of the transmissions are generally spoofed, making it very difficult to track down the sources of the storm.

Countermeasures:
1. Filtering at the router: Implementing ingress and egress filtering on your border routers is a great place to start your spoofing defence. You will need to implement an ACL (access control list).
2. Encryption and authentication: Implementing encryption and authentication will also reduce spoofing threats. Both of these features are included in IPv6, which is expected to mitigate many current spoofing threats.
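The ingress/egress rule in countermeasure 1 can be sketched as a simple predicate. The internal prefix and the two-direction model below are illustrative assumptions, not real router ACL syntax: inbound packets must not claim an internal source address, and outbound packets must carry one.

```python
import ipaddress

# Assumed internal prefix for this hypothetical border router.
INTERNAL = ipaddress.ip_network("192.168.1.0/24")

def allow(packet_src, direction):
    """Return True if a packet with the given source address may pass."""
    src = ipaddress.ip_address(packet_src)
    if direction == "inbound":
        # Ingress filter: packets arriving from outside must not
        # claim an address belonging to the inside network.
        return src not in INTERNAL
    else:
        # Egress filter: packets leaving the network must carry a
        # genuine inside source address, so local hosts cannot spoof.
        return src in INTERNAL
```

An inbound packet claiming source 192.168.1.5 is dropped as spoofed, while the same address is the only kind permitted to leave.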

Conclusion

Secure Internet Voting Protocol using Secure Multiparty Computational Methods


VIJENDRA MISHRA¹, PURNIMA TRIVEDI², DURGESH KUMAR MISHRA³
¹Student, Sanghvi Institute of Management and Science, Indore
²Student, Shri Vaishnav Institute of Technology and Science, Indore
³Acropolis Institute of Technology and Research, Indore

Abstract: In this paper, we provide a new Internet voting mechanism which is foolproof and robust against all the threats encountered in previous solutions. Voting is a crucial matter, and internet voting is even a step ahead of it, so strong privacy and security mechanisms for maintaining the fairness, effectiveness, integrity and anonymity of the entire election proceedings are a must. Secure Multiparty Computation (SMC) can be considered one of the effective means of providing secure and anonymous computations in a multi-tiered web environment. Therefore, SMC can provide solutions to the various e-voting problems and help in an easier and more convenient implementation than its previous counterparts.

1. Introduction
Election is an extremely critical matter that decides the future representatives of the nation. These representatives work and make a nation's destiny. The fairness of elections can be evaluated from the combination of the methods and procedures employed to conduct the elections and the vote tally (counting) procedure. Reliable elections are an obligation for obtaining honest representatives. Internet voting is a relatively new, emerging voting style in India which, although convenient and far more advanced, poses great threats to security, the cost involved in conducting elections, the effectiveness of the overall procedure, and voters' trust. Through this paper, we provide a solution to the internet voting problems in the form of a new SMC protocol which serves the two essential voting features of anonymity and integrity. Secure Multiparty Computation (SMC) stands for computations in which the multiple parties involved supply their inputs, after which the system secretly computes the required global computation and announces the final results to all the parties involved. Each party in this case knows nothing more than its own input and the final computation results. The consideration of the adversary, and the extent to which it can harm our system, is done in advance, and the protocol (solution) is then designed accordingly to overcome the discovered and several unexplored threats, without affecting the integrity and efficiency of the overall system. Internet voting suffers from the obvious drawback of a lack of transparency and verifiability. It is also subject to frauds, internet crimes, and accidental flaws [20]. Internet voting can be done as poll-site voting, using traditional election booths with computer terminals at each location and then at the tallying site as well, or as remote Internet voting. A third kind of voting, intermediate between the above two, is kiosk voting, which means casting votes outside the official polling sites at publicly accessible locations. It is notable that the concept of kiosk voting is similar to ATMs (Automatic Teller Machines). Remote-site voting is the most difficult to implement, as it needs maximum security provisions and is highly susceptible to internet attacks. A unified and complete solution to all the voting problems and threats can be given by using SMC. Here, the new anonymous SMC protocol for zero hacking can be properly employed, and it fits best to overcome the problems in all three e-voting types [7]. Let us

consider that there exist several voters, and let all the election booths be Trusted Third Parties (TTPs). All these TTPs collectively support the overall SMC operations. All the votes are cast at a particular booth and, from that point on, the votes are transferred randomly among the multiple TTPs. In this case, if an honest majority exists, then the verification and vote-tallying procedure can be easily carried out; but even if no honest majority exists, the zero-hacking protocol guarantees accuracy, security and anonymity [20].

1.1 Literature Survey
Internet voting has as yet been an unexplored and untouched subject in India and has not been seriously observed, although such voting systems have been used in foreign countries for about a decade. Various news articles suggest that these systems have undergone various modifications since their conceptualization. Young sork et al. initially explained and analyzed the current and future state of e-voting for multiparty and anonymous channels [8]. Thomas presented the various risks and hindrances in solving e-voting [9]. Craig et al. introduced a robust voting architecture in distributed systems [10]. David Evans et al. also threw light on the serious matter of election security, studying its various perceived and real aspects [11]. Rolf Oppliger proposed the various platform security problems in remote Internet voting in the state of Geneva [11]. David et al. explained the project named SERVE through their literature on secure electronic registration and voting experiments, and studied its security issues [12]. A cryptographic e-voting mechanism has also been explored by SCYTL [13]. Dahlia et al. gave non-cryptographic e-voting techniques. Earl et al. gave the concept of fixed federal e-voting to increase transparency and win the voters' trust in e-voting [14]. Aggelos et al. gave a case study on optical-scan e-voting [15]. A homomorphic

approach for e-voting was given by Kazue et al. [16]. Patrick et al. explained e-cash and attestation with the help of short linkable signatures for e-voting [17]. Margret et al. gave the transparency issues of e-voting for democratic and commercial interests [18]. A large amount of work has been done on SMC to provide secure joint computations among mutually distrusting entities. This computation can be anything, like selective information sharing, arithmetic/relational operations, sorting, searching, hashing or other such operations. Mishra et al. suggested a new anonymous protocol, known as the zero-hacking protocol, for Secure Multiparty Computation using multiple TTPs [20]. This protocol can work even in adverse situations with no honest majority; it yields accurate results even in the case of a single honest party [20].

1.2 Proposed Methodology
The proposed methodology is based on the concept of secure multiparty computation, with the view of achieving anonymity and security during internet voting [20]. The zero-hacking protocol for SMC [7], which has been very successfully accepted as a most secure protocol for web and distributed computing, can be effectively moulded to provide solutions to various e-voting risks and threats. It has the capability of providing two-level security and protection, which almost nullifies the potential problems of data loss, hacking and network eavesdropping [20].

1.3 Architecture of the Zero-Hacking Internet Voting Protocol
Figure 1 shows the architecture of the zero-hacking protocol for Internet voting. At the lowest level are the polling booths, at which the voting is actually carried out. In the case of poll-site and kiosk voting, voters have to go to the polling booth to vote; in the case of remote voting, it can be any terminal with an internet connection [20]. Votes shall be cast at the polling booth. From there on,

we can both fragment and distribute individual votes across the network to the anonymizer, or a bunch of some pre-decided number of votes, say n. The anonymizer does nothing except identity hiding. Now, no tally system can identify the location from which the votes were initially sent. Thus, nobody can know which party got the maximum votes from a particular location; only overall results are declared publicly, without disclosing the intermediate vote counts of each locality. These votes are then recollected at the tally systems, where they are used for final counting. Any one tally system is arbitrarily allowed to count the votes and is considered trustworthy. This selection is made randomly and instantly. Thus, the possibility of such a tally system being untrustworthy is reduced to just 1/n [20].

2. Informal Description
The various layers and interfaces of the zero-hacking protocol according to Fig. 1, and their functions, can be described as below:
Poll Center: the booth or terminal at which votes are cast; it comprises the following layers and interfaces.

Encryption Interface: encrypts the vote and hands the data over securely to the next level.
Fragmentation Layer: fragments the encrypted vote into fixed-size packets, along with the encryption key.
Redirecting Interface: selects the anonymizers to which the data packets are redirected.
Anonymous Layer: the packets from the fragmentation layer are sent randomly to any of the anonymizers; each anonymizer forwards the data without holding it (for secure transmission).
Security Interface: ensures the packets are forwarded accurately, correctly and safely to the computation layer, guarding against network leaks.
Computation Layer:

Fig. 1: Architecture of the Zero-Hacking Internet Voting Protocol [20]

The anonymous layer can be implemented by means of the secure sockets layer or transport layer security, which ensures security even in the case of high-level programming and in adverse conditions. Thus, although packets arriving from a polling booth could otherwise help in predicting the source polling booth, and then the respective locality, because of this intermediate anonymous layer the source identification cannot be made, and so completely secure, open, transparent elections can be conducted and accurate results found.

One of the TTPs is randomly selected as the master TTP, and the others hand over the whole data to the master TTP through the security interface. Every TTP hands over its complete data to that TTP, which performs the final computation and announces the results.

3. Formal Description
Assumptions: All votes from the poll center are encrypted and then fragmented into equi-sized packets, say of size k. Let party Pp fragment its vote into Np packets and forward them to the successive layer. Packet transmission is kept completely secure, so that there is no packet loss.

Variables List:
Size of each packet: k
Number of Poll Centers: P
Number of packets from the ith center: Ni
Total Vote Data: TD
Total Number of Anonymizers: TAN
Minimum Number of Packets of any Anonymizer: Amin
Minimum Number of Packets of any TTP: Tmin
Total number of TTPs: TTTP
Acknowledgement (vote bit status): ACK
Randomly Selected Anonymizer: ANX
Remaining Packets after Distribution: A_R_P
Master TTP: MTTP
Forward (TTPi, MTTP): forwards data from TTPi to the master TTP
Compute (): performs the final computation

Step 1: Voter_Authentication ()
If (ACK == 0)   // vote bit status 0 indicates that the person has not yet cast his vote
{
    /* Use any encryption algorithm to encrypt the vote data and attach the decryption key to it, so that the key is fragmented along with the data and cannot be separately identified. */
    Packet_division ();
}
Else
{
    Vote not permitted;   // the vote has already been cast and is therefore not allowed again
    Exit ();
}

Step 1 (contd.): Packet_division ()
for (i = 1 to P; i++)
{
    Divide the vote data into packets;
    If (size of (last packet) < k) then
        Add fillers to make its size k;
}
TD = Σ Ni;
Amin = TD / TAN;
Tmin = TD / TTTP;

End   // End of Packet_division ()

Step 2: Anonymizer_Selection ()
for (i = 1 to TAN; i++)
{
    If (count (AX) < Amin)
    {
        Randomly select an anonymizer ANX, depending upon the generator function used;
        Send packet to ANX and count (AX)++;
    }
}
// End of Anonymizer_Selection ()

Step 3: TTP_Selection_Forwarding ()   // packets are not stored in the anonymizer but are just forwarded
for (j = 1 to TAN; j++)
{
    for (i = 1 to P_I_A)
    {
        Randomly select TTPX using any generator function;
        for (k = 1 to A_R_P; k++)
        {
            Choose one TTP randomly;
            Send the packet from ANX to it;
        }
    }
}
// End of TTP_Selection_Forwarding ()

Step 4: Master_TTP_Selection_Computation ()
for (i = 1 to TTTP; i++)
{
    Select a master TTP from the TTTP TTPs and call it MTTP;
    Forward (TTPi, MTTP);
    /* All TTPs forward their data to the randomly selected master TTP, which decrypts the data and does the final computation. */
}
Compute ();   // final computation
// End of Master_TTP_Selection_Computation ()
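Step 1 (Packet_division) can be made concrete. The sketch below is our own illustration, not the authors' implementation; it divides a vote's (already encrypted) byte data into fixed-size packets of k bytes, padding the last packet with fillers so that every packet looks alike:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative version of Packet_division(): split encrypted vote data
// into equal-size packets of k bytes, padding the last one with a
// filler byte so all packets are indistinguishable by size.
public class PacketDivision {

    public static List<byte[]> fragment(byte[] voteData, int k) {
        List<byte[]> packets = new ArrayList<>();
        for (int off = 0; off < voteData.length; off += k) {
            byte[] packet = new byte[k];   // fixed size k; Java zero-fills, acting as filler
            int len = Math.min(k, voteData.length - off);
            System.arraycopy(voteData, off, packet, 0, len);
            packets.add(packet);
        }
        return packets;
    }

    public static void main(String[] args) {
        List<byte[]> p = fragment(new byte[]{1, 2, 3, 4, 5}, 2);
        System.out.println(p.size() + " packets, last = " + Arrays.toString(p.get(2)));
        // 3 packets, last = [5, 0]
    }
}
```

The padded last packet is what keeps an eavesdropper from inferring anything from packet sizes.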

The database of all authenticated voters is essential during this voting process, as it helps to overcome the problem of multi-voting. As shown in the figure, the voters table contains one field, voter status, which records whether the vote has been cast or is yet to be submitted. This Boolean field is initially kept false, which indicates that no vote has been made; once the voter casts the vote and the vote is confirmed, this field is set to true, indicating that the voter has cast the vote. So, if he arrives again, he can be caught for multi-voting during authentication.

4. Performance and Analysis of the Zero Hacking Internet Voting Protocol
After dividing the data of each party, the probability of hacking one packet from the total data of the rth party is
P(r) = Xr / TD    (1)
The probability of hacking all the data of the rth polling center is
P(r)Data = (Xr / TD) · ((Xr − 1) / (TD − 1)) · … · (1 / (TD − Xr + 1))    (2)
It can be clearly seen from Eqs. (1) and (2) that if the value of Xr increases, the probability of hacking the data decreases. Fig. 2 shows that when the number of packets is increased, the probability of hacking the data decreases harmonically.
Case 1: If the number of packets is increased from n to (n + 1), the effect on hacking can be observed as
P(r)n = (Xr / TD) · ((Xr − 1) / (TD − 1)) · … · (1 / (TD − Xr + 1))    (3)
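Equations (1)-(3) can be checked numerically. The helper below is our own sketch; it evaluates the product of Eq. (2), taking the last factor as 1/(TD − Xr + 1) so that the product has exactly Xr factors, and confirms both that the probability falls as Xr grows and that the ratio of Eq. (5) holds:

```java
// Numeric check of Eq. (2): probability of hacking all Xr packets of
// party r out of TD packets total, drawing packets without replacement:
//   P = (Xr/TD) * ((Xr-1)/(TD-1)) * ... * (1/(TD-Xr+1))
public class HackProbability {
    public static double pHack(int xr, int td) {
        double p = 1.0;
        for (int i = 0; i < xr; i++) {
            p *= (double) (xr - i) / (td - i);   // one factor per packet grabbed
        }
        return p;
    }

    public static void main(String[] args) {
        // Splitting into more packets (larger Xr, TD fixed) lowers the probability:
        System.out.println(pHack(2, 20)); // (2/20)*(1/19) = 1/190
        System.out.println(pHack(4, 20)); // much smaller
    }
}
```

The ratio pHack(Xr+1, TD) / pHack(Xr, TD) works out to (Xr+1)/(TD−Xr), matching Eq. (5).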

Fig. 2 Probability of hacking the data
P(r)n+1 = ((Xr + 1) / TD) · (Xr / (TD − 1)) · … · (1 / (TD − Xr))    (4)
From Eqs. (3) and (4), we can compute the ratio as
P(r)n+1 / P(r)n = (Xr + 1) / (TD − Xr)    (5)
It is clear from Fig. 3 that security increases linearly with the number of packets. Also, by increasing the number of packets in multiples, we can increase the security ratio.

Fig. 3 Probability ratio of hacking the data with increase in number of packets

Case 2: TTP selection and number of TTPs. According to our assumptions, we have taken multiple TTPs, out of which any one is randomly selected and appointed as master. If there were only one TTP, it could turn malicious and result in system failure. To overcome this problem, we have used multiple TTPs; now, the system will fail only in the circumstance that all the TTPs become corrupt, and there is no other chance of system failure or hacking. The probability that r out of n TTPs will become corrupt is
P(r) = nCr · P^r · Q^(n−r)
and the probability that all n TTPs will become corrupt is
P(n) = nCn · P^n = P^n
It can be seen from Fig. 4 that if we take fewer than 3 TTPs, there is an appreciable chance of system failure, but for 3 or more TTPs the chance of failure becomes negligible.

Fig. 4 Probability of TTPs to become malicious

5. Conclusion
From the features described above, we can conclude that the zero hacking e-voting protocol can easily be used to carry out e-voting and is an efficient solution to the e-voting threats discussed. It can be easily deployed to provide all the essential features of a secure e-voting facility. It provides security, privacy, anonymity, correctness, accuracy and ease of use. SMC can prove to be an unbeatable solution to overcome the e-voting threats and problems.

References
[1] David Evans and Nathanael Paul, "Election Security: Perception and Reality", IEEE Security and Privacy, www.computer.org/security, Jan-Feb 2004.
[2] Wikipedia, the free encyclopedia, http://en.wikipedia.org/wiki/E-voting
[3] Yong-Sork Her and Kouichi Sakurai, "The Analysis of Current State and Future on E-voting System".
[4] Rolf Oppliger, "Addressing the Secure Platform Problem for Remote Internet Voting in Geneva", e-SECURITY Technologies, Beethovenstrasse 10, Switzerland, May 3, 2002.
[5] Rolf Oppliger, Security Technologies for the World Wide Web, Artech House, Norwood, MA, 2000, http://www.esecurity.ch/books/wwsec.html
[6] Rolf Oppliger, Internet and Intranet Security, Second Edition, Artech House, Norwood, MA, ISBN 1-58053-166-0, 2002, http://www.esecurity.ch/books/iis2e.htm
[7] Durgesh Kumar Mishra and Manohar Chandwani, "A Zero Hacking Protocol for Secure Multiparty Computation using Multiple TTP", in the Proceedings of TENCON '08, pp. 1-6.

[8] Yong-Sork Her and Kouichi Sakurai, "The Analysis of the Current State and the Future of E-voting", ysher2001@hotmail.com, sakurai@csce.kyushu-u.ac.jp.
[9] Thomas W. Lauer, "Risks in E-voting", School of Business Administration, Oakland University, Rochester, USA, lauer@oakland.edu, ISSN 1479-439X.
(JTES) Delving: Journal of Technology and Engineering Sciences, Vol. 1, No. 1, January-June 2009
[10] Craig Burton, Shanika Karunasekera, Aaron Harwood, "A Distributed Network Architecture for Robust E-voting Systems", Department of Computer Science and Software Engineering, University of Melbourne, Australia, March 24, 2005.
[11] David Evans and Nathanael Paul, "Election Security: Perception and Reality", University of Virginia, 2004, IEEE, 1540-7993/04.
[12] Rolf Oppliger, "Addressing the Secure Platform Problem for Remote Internet Voting in Geneva", May 3, 2002, a report prepared on behalf of the Chancellory of the State of Geneva.
[13] SCYTL Online World Security, SA, "Applied Cryptography Enabling Trustworthy Electronic Voting", http://www.scytl.com
[14] Earl Barr, Matt Bishop, Mark Gondree, "Fixing Federal E-voting Standards", Communications of the ACM, Vol. 50, No. 3, March 2007, ACM 0001-0782/07/0300.
[15] Aggelos Kiayias, Laurent Michel, Alexander Russell, Narasimha Shashidhar, Andrew See, Alexander Shvartsman, Seda Davtyan, {aggelos, dm, aas, sedaacr, karpoor, andysee}@engr.uconn.edu, Department of Computer Science, University of Connecticut, Storrs, CT.
[16] Kazue Sako and Joe Kilian, "Secure Voting using Partially Compatible Homomorphisms", NEC Corporation, 4-14 Miyazaki, Miyamae, Kawasaki 216, Japan,

NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA.
[17] Patrick P. Tsang and Victor K. Wei, "Short Linkable Ring Signatures for E-voting, E-cash and Attestation", Department of Information Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, {ptsang3, kwwei}@ie.cuhk.edu.hk.
[18] Margaret McGaley and Joe McCarthy, "Transparency and E-voting: Democratic vs. Commercial Interests", mmcgaley@cs.may.ie, joe.mccarty@arkaon.com.
[19] Durgesh Kumar Mishra and Manohar Chandwani, "Zero-hacking Protocol for Secure Multiparty Computation using Multiple TTP", in the Proceedings of IEEE TENCON '08.
[20] Purnima Trivedi and Durgesh Kumar Mishra, "A Secure Multiparty Computation Zero Hacking Protocol for E-voting System", in the Proceedings of the International Conference on Security and Identity Management (SIM-09), May 11-12, 2009, pp. 75-82.


Moodle is a software package for producing Internet-based courses and web sites. It is a global development project designed to support a social constructionist framework of education. Moodle is a course management system designed to help educators who want to create quality online courses.

Online learning is being used to increase flexibility and communication within existing courses and to enable courses that could never have existed before. Moodles have been implemented for users at all age levels. The software is used all over the world by universities, schools, companies and independent teachers. Moodle is provided freely as Open Source software (under the GNU General Public License). Basically this means Moodle is copyrighted, but you have additional freedoms. You are allowed to copy, use and modify Moodle provided that you agree to: provide the source to others; not modify or remove the original license and copyrights; and apply this same license to any derivative work. Moodle can be installed on any computer that can run PHP and can support an SQL-type database. It can be run on Windows and Mac operating systems and many flavors of Linux. The word Moodle was originally an acronym for Modular Object-Oriented Dynamic Learning

Environment, which is mostly useful to programmers and education theorists. Anyone who uses Moodle is a Moodler. In India, Moodle is being used by reputed universities and colleges such as the IITs and IIMs. WHO ARE MOODLE PARTNERS: Moodle Partners are a group of competent professionals who are serious about providing quality services to users of Moodle software, ranging from basic support to development and implementation. All Moodle Partners contribute directly to the ongoing development of Moodle software via funding or expertise. Moodle Partners are the ONLY people who can officially certify your skills in Moodle-based e-learning, using the rigorous and highly-regarded assessment programme developed by the international Moodle community.

Molecular Switches
The world of molecular computing, with its ultrafast speeds, low power needs and inexpensive materials, is one step closer to reality. Using chemical processes rather than silicon-based photolithography, researchers at Rice University and Yale University in the US have created a molecular computer switch with the ability to be turned on and off repeatedly. A molecular switch is a molecule that can be reversibly shifted between two or more stable states. The oldest forms of synthetic molecular switches are pH indicators, which display distinct colors as a function of pH. Currently, synthetic molecular switches are of interest in the field of nanotechnology for application in molecular computers. Molecular switches are also important in biology, because many biological functions are based on them, for instance allosteric regulation and vision. They are also one of the simplest examples of molecular machines. Such a switch, or logic gate, is a necessary computing component, used to represent ones and zeros, the binary language of digital computing. As far as building the basic components of molecular computing is concerned, 50 percent of the job is done; the other 50 percent is memory. Rice and Yale researchers plan to announce a molecular memory device soon. The molecular switches would be at least several thousand times less expensive than traditional solid-state devices. They also promise continued miniaturisation and increased computing power, leapfrogging the limits of silicon. The switch works by applying a voltage to a 30-nanometer-wide self-assembled array of the molecules, allowing current to flow in only one direction within the device. The current only flows at a particular voltage, and if that voltage is increased or decreased it turns off again, making the switch reversible. In previous demonstrations of a molecular logic gate there was no reversibility.
In addition, the difference in the amount of current that flows in the on and off states, known as the peak-to-valley ratio, is 1000 to 1. The typical silicon device response is, at best, 50 to 1. The dramatic response from off to on when the voltage is applied indicates the increased reliability of the signal. The active electronic compound, 2'-amino-4-ethynylphenyl-4'-ethynylphenyl-5'-nitro-1-benzenethiol, was designed and synthesised at Rice. The molecules are one million times smaller in area than typical silicon-based transistors. Not only is the device much smaller than any switch that could be built in the solid state, it also has complementary properties; in this case, if you want a large on/off ratio, it blows silicon away. The measurements of the amount of current passing through a single molecule occurred at a temperature of approximately 60 Kelvin, or about -350 degrees Fahrenheit.
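The switching behavior described above - conduction only inside a narrow voltage window, turning off again when the voltage moves outside it, and a very large peak-to-valley current ratio - can be caricatured in a toy model. This is purely our illustration; the voltage window and current values are made-up numbers, not measured device data:

```java
// Toy model of a reversible molecular switch: the device conducts only
// inside a narrow voltage window, giving a large peak-to-valley current
// ratio. All numbers below are illustrative, not device measurements.
public class MolecularSwitch {
    static final double V_ON_LOW = 1.8, V_ON_HIGH = 2.2;  // hypothetical window (volts)
    static final double I_PEAK = 1000.0, I_VALLEY = 1.0;  // ~1000:1 peak-to-valley ratio

    public static double current(double volts) {
        boolean on = volts >= V_ON_LOW && volts <= V_ON_HIGH;
        return on ? I_PEAK : I_VALLEY;
    }

    public static void main(String[] args) {
        // Raising the voltage past the window turns the switch off again:
        System.out.println(current(2.0) / current(3.0)); // 1000.0
    }
}
```

The window-shaped response is what makes the switch reversible: no state is latched, so moving the voltage in either direction restores the off state.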

In addition to logic gates, potential applications include a variety of other computing components, such as high frequency oscillators, mixers and multipliers.

It really looks like it will be possible to have hybrid molecular and silicon based computers within five to 10 years.

Robots get emotional


In November 2008, we reported on the FEELIX GROWING (Feel, Interact, eXpress: a Global approach to development with Interdisciplinary Grounding) project's aim of developing robots that are capable of identifying different emotions based on facial expressions. Now, that same project has announced the completion of its first prototype robots, which are not only capable of developing their own emotions as they interact with their human caregivers, but can also express those emotions. The robots, created by an interdisciplinary team led by Dr. Lola Cañamero at the University of Hertfordshire, in collaboration with a consortium of universities and robotics companies across Europe, differ from others in the way that they form attachments, interact and express emotion through bodily expression. They have been developed so that they learn to interact with and respond to humans in a similar way to how children learn to do so, and they use the same types of expressive and behavioral cues that babies use to learn to interact socially and emotionally with others.
Robots modeled on chimp and human infants

Dr. Cañamero with a sad robot

The robots have been created by modeling the early attachment process that human and chimpanzee infants undergo with their caregivers when they develop a preference for a primary caregiver. They are programmed to learn to adapt to the actions and mood of their human caregivers, and to become particularly attached to an individual who interacts with the robot in a way that is particularly suited to its personality profile and learning needs. The more they interact and are given the appropriate feedback and level of engagement from the human caregiver, the stronger the bond that develops and the greater the amount learned.
Robots express themselves

The robots are capable of expressing anger, fear, sadness, happiness, excitement and pride, and will demonstrate very visible distress if the caregiver fails to provide comfort when they are confronted by a stressful situation that they cannot cope with, or fails to interact with them when they need it.

"This behavior is modeled on what a young child does," said Dr. Cañamero. This is also very similar to the way chimpanzees and other non-human primates develop affective bonds with their caregivers. The robots' creators say that this is the first time that early attachment models of human and nonhuman

Photonic Networks
In the future, photonics will be able to do many of the same kinds of things as electronics, such as amplifying, switching and processing signals. But the key difference is that photonic devices work with optical signals, not electrical signals. This has several advantages; the most important is that photonics can be used to manipulate signals with very high bandwidth (high information content), far beyond the bandwidth limitations of electronics. The amount of information carried on computer and telecommunications networks is expected to continue growing very rapidly into the future, so photonics will play an important role. However, most photonic devices (based on non-linear optical effects) are still quite primitive in comparison to electronics. It is still early days. Many devices are still confined to research laboratories. The stage of development in photonics today is probably roughly equivalent to the vacuum tube used in the electronic systems of the 1940s. Why use optical signals? Optics is important because optical fiber cables can be used to transport large amounts of information over very long distances, much more effectively and cheaply than electrical cables or radio. The enormous bandwidth-carrying capacity of optical fiber (potentially as great as 10 thousand billion bits per second - equal to about one million simultaneous TV channels) was recognised from the very earliest days of development of optical fiber, more than 30 years ago. However, this huge potential capacity has yet to be realised. Today's networks still consist of electronic switches and routers interconnected by point-to-point optical transmission channels. So in practice, the amount of information that can be carried on one of these channels is not limited by the fiber, but by the information processing speed of the electronic equipment used at each end of the fiber link.
One approach to increasing the capacity of fiber systems is optical time-division multiplexing (OTDM). OTDM is a method of carrying information on a single channel in the form of ultrashort optical pulses at very high rates - 100 Gbit/s and higher, beyond the envisaged speed limits of electronics. The underlying principle in OTDM is that many lower-speed data channels, each transmitted in the form of extremely short optical pulses, are time-interleaved to form a single high-speed data stream. OTDM is currently viewed as a longer-term solution because it relies on different, and much less mature, technologies. Many of the key components needed for OTDM are still confined to the research laboratory. However, OTDM has some very important advantages for future photonic networks. Moreover, the two approaches - wavelength-division multiplexing (WDM) and OTDM - are not incompatible, and in the future they will be used in combination to best utilise the fiber bandwidth. So what is a Photonic Network? The basic approach used to create today's networks - electronic switches interconnected by optical point-to-point links - has the drawback that all of the information carried on a fiber must be processed and switched using the electronic equipment at the ends of each individual link. But since very often the bulk of the information is passing through in transit to some other network destination, this means that the electronic equipment has to be much bigger and more complex than is really necessary. In a new approach currently being developed, data will be transmitted across the future photonic network entirely in the form of optical signals, without the need for big electronic switches. Photonic devices will be used to process and route the information in its optical form. This avoids the need continually to convert optical signals into electronic ones and back again. Even more important, systems based on photonic processing and routing will have the ability to handle much greater volumes of information and at lower cost.
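The OTDM time-interleaving principle described above can be illustrated with ordinary arrays. This is a schematic sketch of ours - real OTDM interleaves optical pulses in time, not array elements - and it assumes all channels carry the same number of pulses:

```java
// Schematic OTDM multiplexer: n lower-speed channels, each a sequence of
// "pulses", are time-interleaved into one high-speed stream by taking
// one pulse from each channel per time slot.
public class OtdmMux {
    public static int[] interleave(int[][] channels) {
        int n = channels.length;
        int slots = channels[0].length;      // assumes equal-length channels
        int[] stream = new int[n * slots];
        for (int t = 0; t < slots; t++)      // for each time slot...
            for (int c = 0; c < n; c++)      // ...take one pulse per channel
                stream[t * n + c] = channels[c][t];
        return stream;
    }

    public static void main(String[] args) {
        int[][] ch = {{1, 1, 0}, {0, 1, 1}}; // two slow channels
        System.out.println(java.util.Arrays.toString(interleave(ch)));
        // [1, 0, 1, 1, 0, 1]
    }
}
```

The combined stream runs n times faster than any individual channel, which is exactly why the electronics at each end, not the fiber, becomes the bottleneck.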

One method of creating a photonic network is to route the optical signals according to their wavelength (e.g. one wavelength for London, another for Paris, and so on). This analogue optical approach to routing signals across a network is very simple, since it requires only passive components such as wavelength filters and gratings, but it has some serious practical limitations. A more advanced approach is to carry the information across the network in the form of short high speed data packets, in effect a form of OTDM . The information is routed according to the destination address encoded into the packet. This digital optical approach is more akin to the way the routing is done today using electronics, but ultrafast photonic devices will be used instead. Not only will these signals be routed towards their destination at the speed of light, but they will be regenerated and processed in the optical domain too. It will be possible to transmit these signals over almost infinite distances through great numbers of network nodes without degradation. The digital optical approach thus overcomes the physical limitations of analogue routing, and the information speed of these signals is no longer limited by intermediate electronics. This looks like being a much more efficient and economic way of handling the massive amounts of information that will be carried over communications networks in the future. Moreover there have been recent important advances in the development of ultrafast optical devices that could open up these digital optical techniques to a much wider range of future applications, such as in local area networks and massive capacity routers. The rapid proliferation of information technology in commerce, finance, education, health, government, security, and home information and entertainment, together with the ever increasing power of computers and

data storage devices, is producing a massively increasing demand for network interconnection and traffic. This trend is expected to continue into the future. For example, it is predicted that the processing speed of high-end workstations and mass-market personal computers will increase by more than 1000 times in the next 10-15 years, and that over the same period the traffic demand on core telecommunications networks will grow by at least 100-fold. Photonic networking techniques have the power to satisfy this explosive increase in demand at relatively low cost. It is becoming increasingly likely that in the longer term, ultrafast photonic techniques, together with wavelength multiplexing, will be used in networks at all levels, from the transcontinental backbone to the desktop.

SOFT SKILLS
by Anuu Goyal

Soft skills is a sociological term relating to a person's "EQ" (Emotional Intelligence Quotient): the cluster of personality traits, social graces, communication, language, personal habits, friendliness, and optimism that characterize relationships with other people.[1] Soft skills complement hard skills (part of a person's IQ), which are the occupational requirements of a job and many other activities. A person's soft skill EQ is an important part of their individual contribution to the success of an organization. Particularly those organizations dealing with customers face-to-face are generally more successful if they train their staff to use these skills. Screening or training for personal habits or traits such as dependability and conscientiousness can yield significant return on investment for an organization.[2] For this reason, soft skills are increasingly sought out by employers in addition to standard qualifications. It has been suggested that in a number of professions soft skills may be more important over the long term than occupational skills. The legal profession is one example where the ability to deal with people effectively and politely, more than mere occupational skill, can determine the professional success of a lawyer.[3] People's ability to handle the soft-skills side of business - influencing, communication, team management, delegating, appraising, presenting, motivating - is now recognised as key to making businesses more profitable and better places to work. Increasingly, companies aren't just assessing their current staff and future recruits on their business skills. They are now assessing them on a whole host of soft-skill competencies around how well they relate and communicate to others. We now find it a bit shocking and somewhat disturbing when someone displays the old autocratic style of bullying management tactics (though we

know it is still unfortunately far more prevalent than is desirable). Many companies simply will no longer put up with it (bravo!). Measuring these soft skills is no easy thing. But in the most progressive companies, managers are looking for people's ability to communicate clearly and openly, and to listen and respond empathetically. They also want them to have equally well-honed written skills so that their correspondence (including emails) doesn't undo all the good work their face-to-face communication creates. Good soft skills also include the ability of people to balance the commercial needs of their company with the individual needs of their staff. Being flexible and able to adapt to the changing needs of an organisation also qualify as soft skills, as do being able to collaborate with others and influence situations through lateral and more

creative thinking. The ability to deal with differences, multiculturalism and diversity is needed more than ever. Very few companies are untouched by the ever-widening influence of other cultures, and good soft skills facilitate better communication and people's ability to manage differences effectively. Everyone already has some form of soft skills (probably a lot more than they realise). They just need to look at areas in their personal life where they get on with others, feel confident in the way they interact, can problem-solve, are good at encouraging, and can schmooze with the best of them. All these skills are soft and all of them are transferable to the workplace. Not only that, the best news of all is that soft skills can be developed and honed on an ongoing basis through good training, insightful reading, observation and of course, practise, practise, practise.

INTRODUCTION TO JAVA
Java is an object-oriented programming language developed by James Gosling and colleagues at Sun Microsystems in the early 1990s. Unlike conventional languages, which are generally designed either to be compiled to native (machine) code or to be interpreted from source code at runtime, Java is intended to be compiled to a bytecode, which is then run (generally using JIT compilation) by a Java Virtual Machine. The language itself borrows much syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java is only distantly related to JavaScript, though they have similar names and share a C-like syntax. History Java was started as a project called "Oak" by James Gosling in June 1991. Gosling's goals were to implement a virtual machine and a language that had a familiar C-like notation but with greater uniformity and simplicity than C/C++. The first public implementation was Java 1.0 in 1995. It made the promise of "Write Once, Run Anywhere", with free runtimes on popular platforms. It was fairly secure and its security was configurable, allowing network and file access to be limited. The major web browsers soon incorporated it into their standard configurations in a secure "applet" configuration, and Java quickly became popular. New versions for large and small platforms (J2EE and J2ME) were soon designed with the advent of "Java 2". Sun has not announced any plans for a "Java 3". Philosophy There were five primary goals in the creation of the Java language:

1. It should use the object-oriented programming methodology. 2. It should allow the same program to be executed on multiple operating systems. 3. It should contain built-in support for using computer networks. 4. It should be designed to execute code from remote sources securely. 5. It should be easy to use by selecting what was considered the good parts of other

object-oriented languages. To achieve the goals of networking support and remote code execution, Java programmers sometimes find it necessary to use extensions such as CORBA, Internet Communications Engine, or OSGi. Object orientation The first characteristic, object orientation ("OO"), refers to a method of programming and language design. Although there are many interpretations of OO, one primary distinguishing idea is to design software so that the various types of data it manipulates are combined together with their relevant operations. Thus, data and code are combined into entities called objects. An object can be thought of as a self-contained bundle of behavior (code) and state (data). The principle is to separate the things that change from the things that stay the same; often, a change to some data structure requires a corresponding change to the code that operates on that data, or vice versa. This separation into coherent objects provides a more stable foundation for a software system's design. The intent is to make large software projects easier to manage, thus improving quality and reducing the number of failed projects. Another primary goal of OO programming is to develop more generic objects so that software can become more reusable between projects. A generic "customer" object, for example, should have roughly the same basic set of behaviors between different software projects, especially when these projects overlap on some fundamental level as they often do in large organizations. In this sense, software objects can hopefully be seen more as pluggable components, helping the software industry build projects largely from existing and well-tested pieces, thus

leading to a massive reduction in development times. Software reusability has met with mixed practical results, with two main difficulties: the design of truly generic objects is poorly understood, and a methodology for broad communication of reuse opportunities is lacking. Some open source communities want to help ease the reuse problem, by providing authors with ways to disseminate information about generally reusable objects and object libraries. Platform independence The second characteristic, platform independence, means that programs written in the Java language must run similarly on diverse hardware. One should be able to write a program once and run it anywhere. This is achieved by most Java compilers by compiling the Java language code "halfway" to bytecode (specifically Java bytecode), simplified machine instructions specific to the Java platform. The code is then run on a virtual machine (VM), a program written in native code on the host hardware that interprets and executes generic Java bytecode. Further, standardized libraries are provided to allow access to features of the host machines (such as graphics, threading and networking) in unified ways. Note that, although there's an explicit compiling stage, at some point the Java bytecode is interpreted or converted to native machine instructions by the JIT compiler. There are also implementations of Java compilers that compile to native object code, such as GCJ, removing the intermediate bytecode stage, but the output of these compilers can only be run on a single architecture. Sun's license for Java insists that all implementations be "compatible". This

resulted in a legal dispute with Microsoft after Sun claimed that the Microsoft implementation did not support the RMI and JNI interfaces and had added platform-specific features of their own. In response, Microsoft no longer ships Java with Windows, and in recent versions of Windows, Internet Explorer cannot support Java applets without a third-party plug-in. However, Sun and others have made available Java run-time systems at no cost for those and other versions of Windows. The first implementations of the language used an interpreted virtual machine to achieve portability. These implementations produced programs that ran more slowly than programs compiled to native executables, for instance those written in C or C++, so the language suffered a reputation for poor performance. More recent JVM implementations produce programs that run significantly faster than before, using multiple techniques. The first technique is to simply compile directly into native code like a more traditional compiler, skipping bytecodes entirely. This achieves good performance, but at the expense of portability. Another technique, known as just-in-time compilation (JIT), translates the Java bytecodes into native code at the time that the program is run, which results in a program that executes faster than interpreted code but also incurs compilation overhead during execution. More sophisticated VMs use dynamic recompilation, in which the VM can analyze the behavior of the running program and selectively recompile and optimize critical parts of the program. Dynamic recompilation can achieve optimizations superior to static compilation because the dynamic compiler can base optimizations on knowledge about the runtime environment and the set of loaded

classes. JIT compilation and dynamic recompilation allow Java programs to take advantage of the speed of native code without losing portability. Portability is a technically difficult goal to achieve, and Java's success at that goal has been mixed. Although it is indeed possible to write programs for the Java platform that behave consistently across many host platforms, the large number of available platforms with small errors or inconsistencies led some to parody Sun's "Write once, run anywhere" slogan as "Write once, debug everywhere". Platform-independent Java is, however, very successful with server-side applications, such as Web services, servlets, and Enterprise JavaBeans, as well as with embedded systems based on OSGi, using Embedded Java environments.
Automatic garbage collection
One idea behind Java's automatic memory management model is that programmers should be spared the burden of performing manual memory management. In some languages the programmer allocates memory to create any object stored on the heap and is responsible for later manually deallocating that memory to delete any such objects. If a programmer forgets to deallocate memory, or writes code that fails to do so in a timely fashion, a memory leak can occur: the program will consume a potentially arbitrarily large amount of memory. In addition, if a region of memory is deallocated twice, the program can become unstable and may crash. Finally, in non-garbage-collected environments, there is a certain degree of overhead and complexity in user code to track and finalize allocations.

In Java, this potential problem is avoided by automatic garbage collection. The programmer determines when objects are created, and the Java runtime is responsible for managing the object's lifecycle. The program or other objects can reference an object by holding a reference to it (which, from a low-level point of view, is its address on the heap). When no references to an object remain, the Java garbage collector automatically deletes the unreachable object, freeing memory and preventing a memory leak. Memory leaks may still occur if a programmer's code holds a reference to an object that is no longer needed; in other words, leaks can still occur, but at higher conceptual levels. The use of garbage collection in a language can also affect programming paradigms. If, for example, the developer assumes that the cost of memory allocation/recollection is low, they may choose to construct objects more freely instead of pre-initializing, holding and reusing them. At the small cost of potential performance penalties (inner-loop construction of large/complex objects), this facilitates thread isolation (no need to synchronize, as different threads work on different object instances) and data hiding. The use of transient immutable value objects minimizes side-effect programming. Comparing Java and C++, it is possible in C++ to implement similar functionality (for example, a memory management model for specific classes can be designed in C++ to improve speed and lower memory fragmentation considerably), at the possible cost of extra development time and some application complexity. In Java, garbage collection is built in and virtually invisible to the developer. That is, developers may have no notion of when garbage collection will take place, as it may

not necessarily correlate with any actions being explicitly performed by the code they write. Depending on the intended application, this can be beneficial or disadvantageous: the programmer is freed from performing low-level tasks, but at the same time loses the option of writing lower-level code.
Syntax
The syntax of Java is largely derived from C++. However, unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built from the ground up to be virtually fully object-oriented: everything in Java is an object, with the exception of the primitive datatypes (ordinal and real numbers, boolean values, and characters), and everything in Java is written inside a class.
Java Runtime Environment
The Java Runtime Environment, or JRE, is the software required to run any application deployed on the Java Platform. End-users commonly use a JRE in software packages and Web browser plugins. Sun also distributes a superset of the JRE called the Java 2 SDK (more commonly known as the JDK), which includes development tools such as the Java compiler, Javadoc, and the debugger.
Java Hello World Program
Our first application will be extremely simple - the obligatory "Hello World". The following is the Hello World application as written in Java. Type it into a text file or copy it out of your web browser, and save it as a file named HelloWorld.java. This program demonstrates the text output function of the Java programming language by displaying the message "Hello world!".

Java compilers expect the filename to match the class name. A java program is defined by a public class that takes the form:
class program-name {
    optional variable declarations and methods
    public static void main(String[] args) {
        statements
    }
    optional variable declarations and methods
}
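As a sketch of this form in use (the class, field, and method names here are illustrative, not part of the tutorial), a program with the optional variable declarations and methods filled in might look like:

```java
// A concrete instance of the form above; names are illustrative.
class AreaDemo {
    // optional variable declaration
    static int width = 4;

    // optional method
    static int area(int height) {
        return width * height;
    }

    public static void main(String[] args) {
        // statements
        System.out.println("Area: " + area(5));
    }
}
```

Saved as AreaDemo.java (matching the class name) and compiled with javac AreaDemo.java, running java AreaDemo would print Area: 20.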

If javac is not in your shell's path, you must explicitly specify the path to it (such as c:\j2se\bin\javac HelloWorld.java). If the compilation is successful, javac will quietly end and return you to a command prompt. If you look in the directory, there will now be a HelloWorld.class file. This file is the compiled version of your program. Once your program is in this form, it's ready to run. Check to see that a class file has been created. If not, or if you receive an error message, check for typographical errors in your source code. You're now ready to run your first Java application. To run the program, you just run it with the java command: java HelloWorld

Source Code
In your favorite editor, create a file called HelloWorld.java with the following contents:

/**
 * Displays "Hello World!" to the standard output.
 */
class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!"); // Displays the enclosed String on the screen console
    }
}

Note: It is important to note that you use the full name with extension when compiling (javac HelloWorld.java) but only the class name when running (java HelloWorld).

Sample Run
Hello World!
The source file above should be saved as HelloWorld.java, using any standard text editor capable of saving as ASCII (e.g. Notepad, Vi). As an alternative, you can download the source for this tutorial.
HelloWorld.java

You've just written your first Java program! Congratulations!! (Next PROGRAM: Java Comments)
Java Comments
The Java programming language supports three kinds of comments:
/* text */

To compile Java code, we need to use the 'javac' tool. From a command line, the command to compile this program is: javac HelloWorld.java For this to work, javac must be in your shell's path or you must explicitly specify the path to it.

The compiler ignores everything from /* to */.
/** documentation */
This indicates a documentation comment (doc comment, for short). The compiler ignores this kind of comment, just like it ignores comments that use /* and */. The JDK javadoc tool uses doc comments when preparing automatically generated documentation.
// text
The compiler ignores everything from // to the end of the line.
Example
Java denotes comments in three ways:
1. Double slashes in front of a single-line comment:
int i = 5; // Set the integer to 5
2. Matching slash-asterisk (/*) and asterisk-slash (*/) to bracket multi-line comments:
/* Set the integer to 5 */
int i = 5;
3. Matching slash-double-asterisk (/**) and asterisk-slash (*/) for Javadoc automatic hypertext documentation, as in
/** This applet tests graphics. */
public class TestApplet extends Applet {...
or
/**
 * Asterisks inside the comment are ignored by javadoc so they
 * can be used to make nice line markers.

**/
The SDK tool javadoc uses the latter /** ... */ comment style when it produces hypertext pages to describe a class. (Next PROGRAM: Java Data and Variables)
Java Data and Variables
There are 8 primitive data types. Six of the eight are numeric types; char holds a single character and boolean holds a true/false value. The names of the eight primitive data types are:
byte, short, int, long, float, double, boolean, char

There are both integer and floating-point primitive types. Integer types have no fractional part; floating-point types have a fractional part. On paper, integers have no decimal point, and floating-point types do. But in main memory, there are no decimal points: even floating-point values are represented with bit patterns. There is a fundamental difference between the method used to represent integers and the method used to represent floating-point numbers.
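One visible consequence of this difference is how division behaves: dividing two integers discards the fractional part, while floating-point division keeps it. A small illustrative sketch (the class and method names are ours, not from the tutorial):

```java
// Demonstrates the difference between integer and floating-point arithmetic.
class DivisionDemo {
    static int intDivide(int a, int b) {
        return a / b;           // integer division: the fractional part is discarded
    }

    static double doubleDivide(double a, double b) {
        return a / b;           // floating-point division: the fractional part is kept
    }

    public static void main(String[] args) {
        System.out.println(intDivide(7, 2));     // prints 3
        System.out.println(doubleDivide(7, 2));  // prints 3.5
    }
}
```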


Integer Primitive Data Types
Type    Size     Range
byte    8 bits   -128 to +127
short   16 bits  -32,768 to +32,767
int     32 bits  (about) -2 billion to +2 billion
long    64 bits  (about) -10E18 to +10E18

Floating Point Primitive Data Types
Type    Size     Range
float   32 bits  -3.4E+38 to +3.4E+38
double  64 bits  -1.7E+308 to +1.7E+308

Examples

int yr = 2006;
double rats = 8912;
For each primitive type, there is a corresponding wrapper class. A wrapper class can be used to convert a primitive data value into an object, and some types of objects into primitive data. The table shows the primitive types and their wrapper classes:
primitive type   wrapper type
byte             Byte
short            Short
int              Integer
long             Long
float            Float
double           Double
char             Character
boolean          Boolean
Variables only exist within the structure in which they are defined. For example, if a variable is created within a method, it cannot be accessed outside the method. In addition, a different method can create a variable of the same name which will not conflict with the other variable. A Java variable can be thought of as a little box made up of one or more bytes that can hold a value of a particular data type:
Syntax: variabletype variablename = data;
Source Code ( demonstrating declaration of a variable )
class example {
    public static void main(String[] args) {
        long x = 123; // a declaration of a variable named x with a datatype of long
        System.out.println("The variable x has: " + x);
    }
}
Source Code
public class MaxDemo {
    public static void main(String[] args) {
        // integers
        byte largestByte = Byte.MAX_VALUE;
        short largestShort = Short.MAX_VALUE;
        int largestInteger = Integer.MAX_VALUE;
        long largestLong = Long.MAX_VALUE;
        // real numbers
        float largestFloat = Float.MAX_VALUE;
        double largestDouble = Double.MAX_VALUE;
        // other primitive types
        char aChar = 'S';
        boolean aBoolean = true;
        // Display them all.
        System.out.println("The largest byte value is " + largestByte + ".");
        System.out.println("The largest short value is " + largestShort + ".");
        System.out.println("The largest integer value is " + largestInteger + ".");
        System.out.println("The largest long value is " + largestLong + ".");
        System.out.println("The largest float value is " + largestFloat + ".");
        System.out.println("The largest double value is " + largestDouble + ".");
    }
}
Sample Run

The largest byte value is 127.
The largest short value is 32767.
The largest integer value is 2147483647.
The largest long value is 9223372036854775807.
The largest float value is 3.4028235E38.
The largest double value is 1.7976931348623157E308.
(Next PROGRAM: Java Command Line Arguments)
Java Command Line Arguments

public class ReadArgs {
    public static void main(String[] args) {
        System.out.println("The following command line arguments were passed:");
        for (int i = 0; i < args.length; i++) {
            System.out.println("arg[" + i + "]: " + args[i]);
        }
    }
}

Sample Run
This class is an example that prints the command line arguments passed into the class when executed. It demonstrates how command line arguments are passed in Java: arguments are passed as a String array to the main method of a class, and the first element (element 0) is the first argument passed, not the name of the class. With the following command line, the output shown is produced.
java ReadArgs zero one two three
Output:
The following command line arguments were passed:
arg[0]: zero
arg[1]: one
arg[2]: two
arg[3]: three
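Because command line arguments always arrive as Strings, the wrapper classes described earlier are the usual way to turn them into numbers. A small sketch (the class name SumArgs is illustrative) using the Integer wrapper class:

```java
// Sums its command line arguments, converting each String to an int.
class SumArgs {
    static int sum(String[] args) {
        int total = 0;
        for (int i = 0; i < args.length; i++) {
            total += Integer.parseInt(args[i]); // String -> int via the wrapper class
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("Sum: " + sum(args));
    }
}
```

Running java SumArgs 1 2 3 would print Sum: 6; passing a non-numeric argument would throw a NumberFormatException.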

IEEE Student Branch SVITS Activities for Session 2009-10.

Shri Vaishnav Institute of Technology and Science (SVITS), Indore is one of the leading academic institutions of central India, known for its technical excellence. Its goal is to create technical leaders who can transform India from a developing nation into a developed nation. The IEEE Student Branch has always been committed to innovation and over the years has organized major IEEE events of central India. IEEE accorded the status of a Student Branch to IEEE Student Branch, SVITS in the month of July 2003. Since its inception, the Student Branch has made a continual effort in pursuing its aim of benefiting society through endeavors in technical and social fields. The IEEE Student Branch has always been very active in organizing various technical and social activities. The session 2009-2010 started with a seminar on How to Benefit from this Century of Chaos by Prof. Siddiqui, IIT Delhi. This was followed by a Research and Entrepreneurship Workshop on 12th of September. The IEEE Student Branch organized INCOMM 10, a two-day National Conference, on 9th and 10th April 2010. The national conference on Recent Trends in Instrumentation, Communication and Microelectronics received more than 150 technical papers, out of which 42 were selected for presentation.

To promote Java programming, the Student Branch organized Code Masters, a JAVA Programming Contest. It was a two-stage contest and attracted more than 250 students. The students were evaluated on their programming skills by judges from Impetus Infotech (India) Pvt. Ltd. Handsome cash prizes were given out to the winners. The IEEE SB also organized a quiz contest named Quizorama which attracted a huge crowd; the quiz was excellently hosted and carried out by the students of the IT Dept., SVITS. The IEEE Student Branch has many plans for further such activities, which include programs to make students aware of IEEE and its benefits, workshops targeting all fields of engineering, career guidance programs, and competitions. We also have plans to conduct a series of lectures on various technical topics.
