
QUANTA PROFESSIONAL CIRCLE

TALCHER THERMAL POWER STATION, NTPC LIMITED


TEAM MEMBERS: 1. AVPS KUMAR, 2. SAUBHIK DATTA, 3. KUNAL KUMAR SARAF
ACTIVITY REPORT FOR THE FINANCIAL YEAR: 2014-15
CIRCLE CO-ORDINATOR: MR. S.K. GHOSH, AGM (BE), TTPS
TOTAL MEETINGS HELD IN THIS YEAR= 15

SERIAL NO.  DATE        TOPIC PRESENTED                                  PRESENTED BY  EXECUTIVE SUMMARY
1           10.08.2014  COST COMPETITIVENESS THROUGH LEADERSHIP          AVPS KUMAR    ENCLOSED
2           14.09.2014  DATA MINING                                      S. DATTA      ENCLOSED
3           13.10.2014  KNOWLEDGE SECURITY                               S. DATTA      ENCLOSED
4           09.11.2014  HUMAN ASSET VALUATION                            AVPS KUMAR    ENCLOSED
5           16.11.2014  SUSTAINABILITY & BUSINESS RESPONSIBILITY         K. K. SARAF   ENCLOSED
                        REPORTING
6           30.11.2014  WIRELESS TECHNOLOGY IN POWER PLANT APPLICATIONS  AVPS KUMAR    ENCLOSED
7           14.12.2014  360 DEGREE APPRAISAL                             S. DATTA      ENCLOSED
8           28.12.2014  FLUIDISED BED COMBUSTION SYSTEM                  S. DATTA      ENCLOSED
9           11.01.2015  INNOVATION BY DESIGN                             AVPS KUMAR    ENCLOSED
10          18.01.2015  BIO DIESEL                                       K. K. SARAF   ENCLOSED
11          25.01.2015  CLOUD COMPUTING                                  AVPS KUMAR    ENCLOSED
12          05.02.2015  WIRELESS POWER TRANSMISSION                      S. DATTA      ENCLOSED
13          15.02.2015  SMART GRID                                       S. DATTA      ENCLOSED
14          18.02.2015  VALUES, ETHICS, MORAL - THE EMERGING CHALLENGE   S. DATTA      ENCLOSED
15          22.02.2015  KNOWLEDGE MANAGEMENT                             AVPS KUMAR    ENCLOSED

TOPIC 1: COST COMPETITIVENESS THROUGH LEADERSHIP


EXECUTIVE SUMMARY
In the globalized business environment governed by a market-driven economy, understanding the concept of competitiveness and implementing it are the key factors for a firm's growth on a sustained basis. Since the concept is translated and put into operation by leaders, their role as transformational agents in improving the competitiveness of a firm is equally important.
The competitiveness of a firm has been defined as its ability to design, produce and market products superior to those offered by competitors, considering both price and non-price qualities. One study has argued that good leadership should enable an organization to integrate, share and use its knowledge and experience innovatively in order to improve competitiveness (Senge et al., 1994).
Applying common sense, one can say that a unit A is more competitive than another unit B under the same business environment when its productivity, measured in terms of capital, labor or what goes by the name of total factor productivity, is higher than that of B, along with a larger market share. But improved productivity reflected in a reduced cost of production does not necessarily mean a high return on invested capital, because the rate of return depends on the rate of profit, which does not necessarily rise with a reduction in the cost of production.
As one knows, the concept of marginal costing helps a manager with a number of management decisions. One of them is calculating the optimum level of output: the level of output that generates enough cash flow not only to meet the fixed cost but also to repay the loan along with interest. Once the optimum output is found, a manager can assess whether the unit has the capacity to produce that level of output. In the absence of capacity, a decision can be taken to incur additional capital expenditure to enhance it. A manager can also assess whether the optimum output produced by the firm can be sold in the market, and at what price.
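To make the arithmetic concrete, the sketch below works the optimum-output calculation described above in Python; all figures (price, costs, debt service) are illustrative assumptions, not plant data.

```python
# Minimal sketch of the optimum-output idea from marginal costing.
# All figures are illustrative assumptions, not actual plant data.

selling_price_per_unit = 4.00    # revenue per unit sold
variable_cost_per_unit = 2.50    # marginal (variable) cost per unit
fixed_cost = 60_000.00           # annual fixed cost
annual_debt_service = 15_000.00  # loan repayment plus interest for the year

# Contribution margin: what each unit contributes towards fixed cost
# and debt service once its own variable cost is covered.
contribution_per_unit = selling_price_per_unit - variable_cost_per_unit

# Optimum output: the level at which total contribution just covers
# fixed cost plus debt service.
optimum_output = (fixed_cost + annual_debt_service) / contribution_per_unit
print(f"Optimum output: {optimum_output:,.0f} units")  # 50,000 units
```

If the unit's capacity is below this figure, the capital-expenditure question discussed above arises; if it is above, the remaining question is whether the market will absorb the output at the assumed price.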
In this era, a firm devoid of competitive advantage cannot survive. The present phase of our country's economy can be conceived as a period of boom in a Schumpeterian paradigm of capitalist development. At the level of a firm, the message is that competitive efficiency is the key to survival. Unless the practicing managers leading a firm understand this reality and adopt a competitive strategy for turnaround, that firm will simply be eliminated from the market.

TOPIC 2: DATA MINING


EXECUTIVE SUMMARY
The past two decades have seen a dramatic increase in the amount of information, or data, stored in electronic format. This accumulation of data has taken place at an explosive rate: it has been estimated that the amount of information in the world doubles every 20 months, and the size and number of databases are increasing even faster. The growing use of electronic data-gathering devices, such as point-of-sale and remote-sensing devices, has contributed to this explosion of available data.
It was recognized that information is at the heart of business operations and that decision-makers could use the stored data to gain valuable insight into the business. Database management systems gave access to the stored data, but this was only a small part of what could be gained from it. Traditional online transaction processing systems (OLTPs) are good at putting data into databases quickly, safely and efficiently, but are not good at delivering meaningful analysis in return. Analyzing data can provide further knowledge about a business by going beyond what is explicitly stored. This is where Data Mining, or Knowledge Discovery in Databases (KDD), has obvious benefits for any enterprise.
Data Mining, or Knowledge Discovery in Databases (KDD) as it is also known, is the nontrivial extraction of implicit, previously unknown and potentially useful information from data. It encompasses a number of different technical approaches, such as clustering, data summarization, learning classification rules, finding dependency networks, analyzing changes, and detecting anomalies.
Basically, data mining is concerned with the analysis of data and the use of software techniques to find patterns and regularities in sets of data. It is the computer that is responsible for finding the patterns, by identifying the underlying rules and features in the data. The idea is that it is possible to strike gold in unexpected places, as the data mining software extracts patterns not previously discernible, or so obvious that no one had noticed them before.
Data mining analysis tends to work from the data up, and the best techniques are those developed with an orientation towards large volumes of data, making use of as much of the collected data as possible to arrive at reliable conclusions and decisions. The analysis process starts with a set of data and uses a methodology to develop an optimal representation of the structure of the data, during which time knowledge is acquired. Once knowledge has been acquired, it can be extended to larger sets of data on the assumption that the larger data set has a structure similar to the sample data. Again, this is analogous to a mining operation, where large amounts of low-grade material are sifted through in order to find something of value.
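The sample-then-extend workflow described above can be sketched in a few lines of Python. The choice of k-means clustering from scikit-learn, the data sizes, and the cluster count are all assumptions for illustration, not a method prescribed by the report.

```python
# Sketch: acquire structure from a sample, then extend it to the
# full dataset, assuming the larger set has a similar structure.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
full_data = rng.normal(size=(100_000, 5))       # stand-in for a large dataset
sample = full_data[rng.choice(100_000, 2_000)]  # small representative sample

# Learn the cluster structure ("knowledge") from the sample only.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sample)

# Extend the learned structure to the full dataset.
labels = model.predict(full_data)
print(np.bincount(labels))  # how the full data falls into the 4 clusters
```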

TOPIC 3: KNOWLEDGE SECURITY


EXECUTIVE SUMMARY
In this era of a global economy, ever-changing enterprise risk, cross-organization collaboration and online trade, information security has become more of a business enabler than ever thought possible. As new and evolving research, standards, tools and technologies emerge, enterprises now have the mechanisms to help secure their business transactions as well as the underlying infrastructure and information involved. Yet enterprises still struggle to keep up with regulatory requirements, economic conditions and risk management. The exact role of information security is still not clearly defined in many organizations. While some still view information security as a cost center, effectively managed information security organizations have been shown to be instrumental in helping an enterprise meet its business goals by improving efficiency and aligning security with business objectives.
Enterprises too often view information security in isolation: the perception is that security is someone else's responsibility, and there is no collaborative effort to link the security program to business goals. This compartmentalized approach can easily lead to weaknesses in security management, possibly resulting in serious exposure. From a financial perspective, this lack of comprehension can result in unnecessary expenditure on security and control. From an operational perspective, information security efforts may not achieve the intended business benefit, leaving information at risk.
The Business Model for Information Security began life as a model for systemic security
management, created by Dr. Laree Kiely and Terry Benzel at the USC Marshall School
of Business Institute for Critical Information Infrastructure Protection. In 2008 ISACA
acquired from the university the rights to develop the model to help embed its concepts
in information security practices globally.
The model takes a business-oriented approach to managing information security. Its
holistic and dynamic approach to information security within the context of business
demonstrates to the enterprise that information security can be both predictive and
proactive.
The model can be used regardless of the size of the enterprise or the information
security framework (if any) the enterprise currently has in place. The model is
independent of any particular technology or technological changes over time. Likewise,
it is applicable across industries, geographies, and regulatory and legal systems. It
includes not only traditional information security but also privacy, linkages to risk,
physical security and compliance.

Starting in 2006, the security organization began tracking security incidents and reported suspicious activities. Security reviews the information, looks for patterns and regularly shares the findings with sales. An unanticipated outcome of this information sharing has been a heightened sense of awareness. Today, many incidents are disclosed that went unreported in the past or were treated as stolen-property issues rather than as possible losses of sensitive corporate information.

TOPIC 4: HUMAN ASSET VALUATION


EXECUTIVE SUMMARY
Despite being an important asset for a company, human resources are an asset routinely ignored by accountants. With regard to assets, there is a fundamental conflict in accounting practice between human and non-human assets: only non-human assets find a place on a company's balance sheet. Keep in mind that even intangibles like 'goodwill' are accounted for.
An asset is simply something that can generate cash or value in the future. By this definition, human assets should also be accounted for on the balance sheet. So why aren't human assets given their due recognition? It is because the formal definition of an asset does not recognize people as 'accountable assets' of a company. According to the formal definition, an asset is a resource controlled by the entity as a result of past events and from which future economic benefits are expected to flow to the entity. Simply stated, assets are economic resources representing ownership of value that can be converted into cash.
Human assets do generate cash for a company, but they are not owned by the company; hence, they cannot be incorporated into the balance sheet. However, the definition and accounting practices do not make human assets any less valuable: they still command the same value and importance for the company. Thus, a few companies choose to value their human assets. One example is Infosys, the second-largest software company in India.
There are many financial models for valuing human assets. Infosys used the Lev & Schwartz model to calculate the value of its human resources. In the past, companies such as Bharat Heavy Electricals Ltd (BHEL), Steel Authority of India Ltd (SAIL), Minerals and Metals Trading Corporation of India Ltd (MMTC Ltd), Oil and Natural Gas Corporation Limited (ONGC) and National Thermal Power Corporation Ltd (NTPC) have also used the same model. The model uses factors such as age, annual earnings up to retirement, the retirement age of the employees and the cost of capital to value the human assets of the company. However, it ignores the productivity of employees, the attrition rate and training expenses in its calculation.
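The core of the Lev & Schwartz approach is a present-value calculation over expected earnings up to retirement. The sketch below shows that idea in Python; the ages, salary and rates are invented for illustration and do not reflect NTPC or Infosys figures.

```python
# Hedged sketch of the Lev & Schwartz present-value idea: an employee's
# value is the discounted stream of expected earnings up to retirement,
#   V = sum over t of I(t) / (1 + r)^(t - current_age).
# All numbers below are illustrative assumptions.

def lev_schwartz_value(current_age, retirement_age, annual_earnings,
                       cost_of_capital, annual_growth=0.0):
    """Present value of expected earnings up to retirement."""
    value = 0.0
    earnings = annual_earnings
    for t in range(current_age, retirement_age):
        value += earnings / (1 + cost_of_capital) ** (t - current_age + 1)
        earnings *= 1 + annual_growth  # optional salary-growth assumption
    return value

# A 40-year-old earning 1.2 million/year, retiring at 60, 10% cost of capital:
print(f"{lev_schwartz_value(40, 60, 1_200_000, 0.10):,.0f}")
```

As the text notes, nothing in this formula captures productivity, attrition or training cost, which is the model's main limitation.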
Conceptual thinking about valuing human resources is still developing. Accounting bodies all over the world have yet to accept any model for valuing human resources; hence, a company does not need to value its human assets for the purpose of external financial reporting. However, people are among the most valuable assets of any company. No matter how good the company's business is, it is the people who steer it in a particular direction. Therefore, while making internal decisions related to human resource management, a company should consider human asset valuation. It should look at parameters such as the return on human resource value and the ratio of total income to human asset value. These parameters give a clear picture of the efficiency of the human resources employed by the company.
TOPIC 5: SUSTAINABILITY & BUSINESS RESPONSIBILITY REPORTING
EXECUTIVE SUMMARY
Corporate sustainability reporting has a long history going back to environmental
reporting. The first environmental reports were published in the late 1980s by
companies in the chemical industry which had serious image problems. The other group
of early reporters was a group of committed small and medium-sized businesses with
very advanced environmental management systems.
An important, globally accepted framework for accomplishing this expanded disclosure and reporting is the Global Reporting Initiative (GRI) Framework. GRI is a global, network-based mechanism organized as a foundation and based in the Netherlands. GRI has pioneered the development of the world's most widely used sustainability reporting framework and as such is a reporting mechanism with broad credibility.
The goal of GRI is to assist organizations in their disclosure of environmental, social
and governance (ESG) performance. A wide range of participants have embraced GRI
reporting, including members of the global business community, civil society, the public
sector, and labor, academic and professional institutions.
The GRI's third-generation reporting framework and guidance, the G3, is used by a growing number of public companies, either as a general guide or for specific reporting of their ESG performance against the Framework's boundaries, indicators and disclosure expectations. (The application-level system has various requirements and disclosures for each application level selected by the reporter.)
G3.1 is a two-part guideline providing the GRI's Reporting Framework to aid organizations in disclosing their sustainability performance.

• Part 1 of the G3.1 Guideline consists of principles to define report content and quality, and describes how to set the report boundary.

• Part 2 outlines the standard disclosures in terms of strategy and profile, management approach, and performance indicators. In the G3.1 guidelines, GRI has updated its guidance on topics such as Human Rights, Local Community Impacts, and Gender.

Companies that report on their sustainability strategies, initiatives, programs and performance are more likely to be selected for key sustainability reputational lists, ranked higher by sustainability raters and rankers, and selected for inclusion in leading sustainability indices. In addition, our study indicates that companies that are managing their sustainability issues tend to perform better over the long term in the markets, although we agree that evaluating a larger number of companies over a longer period of time would be more definitive in this regard.
TOPIC 6: WIRELESS TECHNOLOGY IN POWER PLANT APPLICATIONS
EXECUTIVE SUMMARY
Wireless technology offers benefits beyond mere wiring cost savings. With a multifunctional, plant-wide wireless network, utility and power generation facilities can improve safety, reliability and efficiency through optimized use of employees, equipment and processes.
Power plants implementing wireless systems do so for the same reason as the designers of the first telegraph system: cost savings. Utilities look to wireless to add real business value, both in terms of installation costs and of operations optimized through increased data availability.
An ultra-secure and ultra-reliable wireless field infrastructure supports not just wireless instruments, but also IEEE 802.11 WLAN applications and mobile clients such as handheld computers and mobile Human-Machine Interfaces (HMIs). A single wireless network, supporting multiple wireless technologies and classes of service, can handle diverse tasks ranging from communicating sensor information back to a host system to closed-loop control, information, HMI, video, communication and enterprise applications. Wireless technologies developed for building management and security can also be utilized in process plants to support both asset management and personnel tracking.
For example, wireless mobility tools provide a fully functional PC environment that personnel can use directly from a handheld device while performing maintenance rounds, data collection and inspections. These solutions are optimized for specific end-user applications, ranging from read-only access over the intranet for multiple casual users to secure system access for mobile operators. Such wireless collaboration can improve decision-making, production uptime, process monitoring and incident avoidance.
Handheld access to process data allows technicians in the field to view the latest plant information, helping to identify failures and causes that may previously have gone unrecorded, and can open the door to further investigation of a system's reliability. Users can integrate field data with data from multiple other sources, including production, control and work management systems. These tools also provide mechanical and engineering data and support calibration of instrument databases. On-site computing helps management improve the tracking and reporting of inspections, tests and repairs for pumps, actuators, valves, vents, pipes and other plant process equipment.

To get started with wireless and unlock the possibilities of this innovative technology, it is important to view a wireless implementation as a partnership between the plant operator, the company IT department and the wireless supplier. Each party has a share in determining the outcome of this effort.
TOPIC 7: 360 DEGREE APPRAISALS
EXECUTIVE SUMMARY
In human resources and industrial psychology, 360-degree feedback, also known as multi-rater feedback, multi-source feedback or multi-source assessment, is feedback that comes from members of an employee's immediate work circle. Most often, 360-degree feedback will include direct feedback from an employee's subordinates, peers (colleagues) and supervisor(s), as well as a self-evaluation. It can also include, in some cases, feedback from external sources, such as customers and suppliers or other interested stakeholders. It may be contrasted with "upward feedback," where managers are given feedback only by their direct reports, or a "traditional performance appraisal," where employees are most often reviewed only by their managers.
The results from a 360-degree evaluation are often used by the person receiving the
feedback to plan and map specific paths in their development. Results are also used by
some organizations in making administrative decisions related to pay and promotions.
When this is the case, the 360 assessment is for evaluation purposes, and is
sometimes called a "360-degree review." However, there is a great deal of debate as to
whether 360-degree feedback should be used exclusively for development purposes, or
should be used for appraisal purposes as well.

What a 360 Feedback Survey Measures:

1. 360 feedback measures behaviors and competencies
2. 360 assessments provide feedback on how others perceive an employee
3. 360 feedback addresses skills such as listening, planning, and goal-setting
4. A 360 evaluation focuses on subjective areas such as teamwork, character, and leadership effectiveness

What 360 Feedback Surveys Do Not Assess:

1. 360 feedback is not a way to measure employee performance objectives (MBOs)
2. 360 feedback is not a way to determine whether an employee is meeting basic job requirements
3. 360 feedback is not focused on basic technical or job-specific skills
4. 360 feedback should not be used to measure strictly objective things such as attendance, sales quotas, etc.
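A common way to roll up multi-rater scores is to average each competency per rater group, keeping the self-rating separate so the employee can compare self-perception against others' perception. The sketch below illustrates this in Python; the rater groups, competencies and scores are invented for illustration.

```python
# Illustrative aggregation of 360-degree ratings by rater group.
# Groups, competencies and scores below are assumptions, not survey data.
from collections import defaultdict

ratings = [  # (rater_group, competency, score on a 1-5 scale)
    ("self", "listening", 4), ("peer", "listening", 3),
    ("peer", "listening", 2), ("subordinate", "listening", 3),
    ("supervisor", "teamwork", 4), ("peer", "teamwork", 5),
]

totals = defaultdict(lambda: [0, 0])  # (group, competency) -> [sum, count]
for group, competency, score in ratings:
    totals[(group, competency)][0] += score
    totals[(group, competency)][1] += 1

for (group, competency), (total, count) in sorted(totals.items()):
    print(f"{competency:10s} {group:11s} avg = {total / count:.2f}")
```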

TOPIC 8: FLUIDISED BED COMBUSTION SYSTEM


EXECUTIVE SUMMARY
Fluidized bed combustion (FBC) is a combustion technology used to burn solid fuels. In its most basic form, fuel particles are suspended in a hot, bubbling fluidized bed of ash and other particulate material (sand, limestone, etc.) through which jets of air are blown to provide the oxygen required for combustion. The resulting fast and intimate mixing of gas and solids promotes rapid heat transfer and chemical reactions within the bed. FBC plants are capable of burning a variety of low-grade solid fuels, including most types of coal and woody biomass, at high efficiency and without the need for expensive fuel preparation (e.g., pulverizing). In addition, for any given thermal duty, an FBC furnace is smaller than the equivalent conventional furnace, and so may offer significant advantages in terms of cost and flexibility.
FBC reduces the amount of sulfur emitted in the form of SOx emissions. Limestone is used to precipitate out sulfate during combustion, which also allows more efficient heat transfer from the boiler to the apparatus used to capture the heat energy (usually water tubes). The heated precipitate coming into direct contact with the tubes (heating by conduction) increases the efficiency. Since this allows coal plants to burn at cooler temperatures, less NOx is emitted as well. However, burning at low temperatures also causes increased emissions of polycyclic aromatic hydrocarbons. FBC boilers can burn fuels other than coal, and the lower combustion temperatures (around 800 °C / 1,500 °F) have other added benefits as well.
There are two reasons for the rapid spread of FBC in combustors. First, the freedom of choice in respect of fuels in general, and not only the possibility of using fuels that are difficult to burn with other technologies, is an important advantage of fluidized bed combustion. The second reason, which has become increasingly important, is the possibility of achieving low emissions of nitric oxides during combustion and of removing sulfur in a simple manner by using limestone as the bed material.
Fluidized bed combustion evolved from efforts to find a combustion process able to control pollutant emissions without external emission controls (such as scrubbers for flue gas desulfurization). The technology burns fuel at temperatures of 1,400 to 1,700 °F (750 to 900 °C), well below the threshold at which nitrogen oxides form (at approximately 2,500 °F / 1,400 °C, the nitrogen and oxygen atoms in the combustion air combine to form nitrogen oxide pollutants); it also avoids the ash-melting problems associated with high combustion temperatures. The mixing action of the fluidized bed brings the flue gases into contact with a sulfur-absorbing chemical, such as limestone or dolomite. More than 95% of the sulfur pollutants in coal can be captured inside the boiler by the sorbent. The reductions may be less substantial than they seem, however, as they coincide with dramatic increases in emissions of carbon monoxide and polycyclic aromatic hydrocarbons.
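The in-bed sulfur capture above implies a simple sorbent-sizing calculation: the limestone feed scales with the coal's sulfur content and the chosen Ca/S molar ratio. The sketch below works one such example; the coal feed rate, sulfur fraction and Ca/S ratio are assumed values for illustration, not design data for any particular boiler.

```python
# Rough sketch of sorbent sizing for in-bed sulfur capture: limestone
# (CaCO3) feed needed for a given coal feed at an assumed Ca/S molar
# ratio. Real designs would use tested sorbent reactivity data.

coal_feed_tph = 100.0      # coal feed, tonnes per hour (assumed)
sulfur_fraction = 0.005    # 0.5% sulfur by mass (assumed)
ca_s_molar_ratio = 2.5     # FBC designs commonly use roughly 2 to 3

M_S, M_CACO3 = 32.06, 100.09  # molar masses, g/mol

sulfur_tph = coal_feed_tph * sulfur_fraction
mol_s_per_hour = sulfur_tph / M_S             # tonne-moles of S per hour
limestone_tph = mol_s_per_hour * ca_s_molar_ratio * M_CACO3
print(f"Limestone feed: {limestone_tph:.2f} t/h")  # about 3.9 t/h
```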

TOPIC 9: INNOVATION BY DESIGN


EXECUTIVE SUMMARY
If one looks at the history of organizations, one finds examples galore of great organizations that have been obliterated from the history of mankind because they could not innovate with changing circumstances, technology, environment or customer choice. These changes created demand and opportunity for them to innovate, but they failed to capitalize on those opportunities.
When we want organizations to use the creative potential of their people and innovate, they have to be built on certain foundations. These foundations can also be seen as the preconditions and premises on which the process of innovation starts. To bring about an organization in which employees continually innovate, all of these conditions must be met; in the absence of any one of them, we may fail to meet the objective of fostering innovation in the organization. At the same time, there has to be complete synergy among the different foundation blocks for innovation to result.
A. Most importantly, the organization has to have norms, values and beliefs that encourage and facilitate innovation. The organizational culture has to act as an enabler for bringing about innovation. (Innovation culture)
B. The capacity and capability of employees to innovate can never be undermined. It is one of the prerequisites for generating innovation in the organization. (Innovation Capability)
C. The organization has to have deliberately designed formal mechanisms that remove the disablers and reinforce the enablers of innovation. It has to create the right kind of structures and processes for facilitating innovation. (Innovation process and organizational support system)
D. Innovation, in the normal sense, can mean doing something different or doing something differently. It can mean developing a new product, adopting a new process, or solving problems differently. All three aspects (product innovation, process innovation and innovative problem solving) come within the ambit of innovation. (Innovation Opportunity)

TOPIC 10: BIO DIESEL


EXECUTIVE SUMMARY
Biodiesel refers to a vegetable oil- or animal fat-based diesel fuel consisting of long-chain alkyl (methyl, ethyl or propyl) esters. Biodiesel is typically made by chemically reacting lipids (e.g., vegetable oil or animal fat (tallow)) with an alcohol, producing fatty acid esters.
Biodiesel is meant to be used in standard diesel engines and is thus distinct from the vegetable and waste oils used to fuel converted diesel engines. Biodiesel can be used alone or blended with petrodiesel in any proportion. Biodiesel blends can also be used as heating oil.
Biodiesel Compared to Petroleum Diesel

Advantages:
- Domestically produced from non-petroleum, renewable resources
- Can be used in most diesel engines, especially newer ones
- Less air pollutants (other than nitrogen oxides)
- Less greenhouse gas emissions (e.g., B20 reduces CO2 by 15%)
- Biodegradable
- Non-toxic
- Safer to handle

Disadvantages:
- Use of blends above B5 not yet approved by many auto makers
- Lower fuel economy and power (10% lower for B100, 2% for B20)
- Currently more expensive
- B100 generally not suitable for use in low temperatures
- Concerns about B100's impact on engine durability
- Slight increase in nitrogen oxide emissions possible in some circumstances

Biodiesel has promising lubricating properties and cetane ratings compared to low-sulfur diesel fuels. Depending on the engine, components that benefit from this added lubricity might include high-pressure injection pumps, pump injectors (also called unit injectors) and fuel injectors. The calorific value of biodiesel is about 37.27 MJ/kg, which is 9% lower than that of regular Number 2 petrodiesel. Variations in biodiesel energy density depend more on the feedstock used than on the production process; even so, these variations are smaller than those for petrodiesel. It has been claimed that biodiesel gives better lubricity and more complete combustion, thus increasing the engine's energy output and partially compensating for the higher energy density of petrodiesel.
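The 37.27 MJ/kg and 9% figures above let us sanity-check the fuel-economy numbers in the table: treating a BXX blend's energy content as a simple weighted mix (a simplifying assumption; real economy figures also reflect engine effects), a B20 blend comes out about 2% lower, matching the table.

```python
# Quick consistency check on the figures quoted above, using a
# simple mass-weighted blend energy model (a simplifying assumption).
E_BIODIESEL = 37.27                  # MJ/kg, from the text
E_PETRODIESEL = E_BIODIESEL / 0.91   # ~9% higher than biodiesel, per the text

def blend_energy(bio_fraction):
    return bio_fraction * E_BIODIESEL + (1 - bio_fraction) * E_PETRODIESEL

for name, x in [("B20", 0.20), ("B100", 1.00)]:
    drop = 1 - blend_energy(x) / E_PETRODIESEL
    print(f"{name}: ~{drop:.1%} lower energy content")  # ~1.8% and ~9.0%
```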
Biodiesel contains virtually no sulfur, and it is often used as an additive to ultra-low-sulfur diesel (ULSD) fuel.
TOPIC 11: CLOUD COMPUTING
EXECUTIVE SUMMARY
Cloud computing is a recently evolved computing term, or metaphor, based on utility-style consumption of computing resources. Cloud computing involves deploying groups of remote servers and software networks that allow centralized data storage and online access to computer services or resources. Clouds can be classified as public, private or hybrid.
The goal of cloud computing is to apply traditional supercomputing, or high-performance
computing power, normally used by military and research facilities, to perform tens of
trillions of computations per second, in consumer-oriented applications such as financial
portfolios, to deliver personalized information, to provide data storage or to power large,
immersive computer games.
To do this, cloud computing uses networks of large groups of servers, typically running low-cost consumer PC technology, with specialized connections to spread data-processing chores across them. This shared IT infrastructure contains large pools of systems that are linked together. Often, virtualization techniques are used to maximize the power of cloud computing.
The standards for connecting the computer systems, and the software needed to make cloud computing work, are not fully defined at the present time, leaving many companies to define their own cloud computing technologies. Cloud computing systems offered by companies, like IBM's "Blue Cloud" technologies for example, are based on open standards and open-source software, which link together computers that are used to deliver Web 2.0 capabilities like mash-ups or mobile commerce.
Cloud computing has started to obtain mass appeal in corporate data centers as it
enables the data center to operate like the Internet through the process of enabling
computing resources to be accessed and shared as virtual resources in a secure and
scalable manner.
For small and medium-sized businesses (SMBs), the benefits of cloud computing are currently driving adoption. In the SMB sector there is often a lack of time and financial resources to purchase, deploy and maintain an infrastructure (e.g., the software, servers and storage). With cloud computing, small businesses can access these resources and expand or shrink services as business needs change. The common pay-as-you-go subscription model is designed to let SMBs easily add or remove services, and subscribers typically pay only for what they use.
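The pay-as-you-go point can be made concrete with a toy comparison of an elastic, usage-based bill against fixed in-house capacity sized for the peak. All rates and usage figures below are invented for illustration.

```python
# Toy illustration of pay-as-you-go versus fixed in-house capacity.
# All rates and usage patterns are invented assumptions.
HOURLY_RATE = 0.10     # cost per cloud server-hour (assumed)
FIXED_MONTHLY = 600.0  # owning/maintaining one in-house server (assumed)

# Demand varies: 2 servers overnight (16 h), 10 during the business day (8 h).
hours_used = 2 * 16 * 30 + 10 * 8 * 30   # server-hours in a 30-day month
cloud_bill = hours_used * HOURLY_RATE

peak_servers = 10  # in-house capacity must be sized for the peak
print(f"cloud: ${cloud_bill:.2f}, in-house: ${peak_servers * FIXED_MONTHLY:.2f}")
```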

TOPIC 12: WIRELESS POWER TRANSMISSION


EXECUTIVE SUMMARY
Wireless power transfer (WPT) or wireless energy transmission is the transmission of electrical power from a power source to a consuming device without using solid wires or conductors. It is a generic term that refers to a number of different power transmission technologies that use time-varying electromagnetic fields. Wireless transmission is useful for powering electrical devices in cases where interconnecting wires are inconvenient, hazardous, or not possible. In wireless power transfer, a transmitter device connected to a power source, such as the mains power line, transmits power by electromagnetic fields across an intervening space to one or more receiver devices, where it is converted back to electric power and utilized.
Wireless power techniques fall into two categories: non-radiative and radiative. In near-field or non-radiative techniques, power is transferred over short distances by magnetic fields, using inductive coupling between coils of wire, or in a few devices by electric fields, using capacitive coupling between electrodes. Applications of this type include electric toothbrush chargers, RFID tags, smartcards, chargers for implantable medical devices like artificial cardiac pacemakers, and inductive powering or charging of electric vehicles like trains or buses. A current focus is developing wireless systems to charge mobile and handheld computing devices such as cellphones, digital music players and portable computers without tethering them to a wall plug. In radiative or far-field techniques, also called power beaming, power is transmitted by beams of electromagnetic radiation, like microwaves or laser beams. These techniques can transport energy over longer distances but must be aimed at the receiver. Proposed applications of this type include solar power satellites and wirelessly powered drone aircraft. An important issue associated with all wireless power systems is limiting the exposure of people and other living things to potentially injurious electromagnetic fields.
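For the near-field case, a standard result from the literature on magnetically coupled resonators is that achievable link efficiency is governed by the coupling coefficient k and the coil quality factors Q1 and Q2. The sketch below evaluates that relationship; the Q values and coupling coefficients are illustrative assumptions, and the formula is the commonly cited coupled-resonator result rather than anything specific to this report.

```python
# Sketch of a standard near-field result for two magnetically coupled
# resonant coils: with figure of merit U = k * sqrt(Q1 * Q2), the peak
# link efficiency is U^2 / (1 + sqrt(1 + U^2))^2. Values are illustrative.
import math

def peak_efficiency(k, q1, q2):
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1 + math.sqrt(1 + u**2))**2

for k in (0.001, 0.01, 0.1):  # coupling falls off quickly with distance
    print(f"k = {k:5.3f}: eta = {peak_efficiency(k, 300, 300):.1%}")
```

The steep drop in efficiency as k shrinks is why non-radiative transfer is a short-distance technique, consistent with the applications listed above.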

TOPIC 13: SMART GRID


EXECUTIVE SUMMARY
A smart grid is a modernized electrical grid that uses analog or digital information and communications technology to gather and act on information, such as information about the behavior of suppliers and consumers, in an automated fashion, in order to improve the efficiency, reliability, economics and sustainability of the production and distribution of electricity. Electronic power conditioning and control of the production and distribution of electricity are important aspects of the smart grid.
The smart grid represents the full suite of current and proposed responses to the
challenges of electricity supply.

• Reliability: The smart grid will make use of technologies, such as state estimation, that improve fault detection and allow self-healing of the network without the intervention of technicians. This will ensure a more reliable supply of electricity and reduced vulnerability to natural disasters or attack.

• Flexibility in network topology: Next-generation transmission and distribution infrastructure will be better able to handle possible bidirectional energy flows, allowing for distributed generation such as from photovoltaic panels on building roofs, but also the use of fuel cells, charging to/from the batteries of electric cars, wind turbines, pumped hydroelectric power, and other sources.

• Efficiency: Numerous contributions to the overall improvement of energy-infrastructure efficiency are anticipated from the deployment of smart grid technology, in particular through demand-side management: for example, turning off air conditioners during short-term spikes in electricity price (see the sketch after this list), reducing the voltage when possible on distribution lines through Voltage/VAR Optimization (VVO), eliminating truck rolls for meter reading, and reducing truck rolls through improved outage management using data from Advanced Metering Infrastructure systems. The overall effect is less redundancy in transmission and distribution lines and greater utilization of generators, leading to lower power prices.

• Sustainability: The improved flexibility of the smart grid permits greater penetration of highly variable renewable energy sources, such as solar and wind power, even without the addition of energy storage. Current network infrastructure is not built to allow for many distributed feed-in points, and typically even where some feed-in is allowed at the local (distribution) level, the transmission-level infrastructure cannot accommodate it. Rapid fluctuations in distributed generation, such as those due to cloudy or gusty weather, present significant challenges to power engineers, who need to ensure stable power levels by varying the output of the more controllable generators, such as gas turbines and hydroelectric generators. For this reason, smart grid technology is a necessary condition for very large amounts of renewable electricity on the grid.
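The demand-side management example in the "Efficiency" item above amounts to a simple price-responsive control rule. The sketch below shows one minimal form of it; the price threshold, comfort limit and hourly prices are invented for illustration.

```python
# Minimal sketch of price-responsive demand-side management: shed a
# deferrable load (an air conditioner) during short-term price spikes.
# The threshold, comfort limit and prices are invented assumptions.

PRICE_THRESHOLD = 0.25  # $/kWh above which the AC is switched off (assumed)

def ac_should_run(price_per_kwh, indoor_temp_c, comfort_limit_c=28.0):
    """Run the AC unless price has spiked, but never past the comfort limit."""
    if indoor_temp_c > comfort_limit_c:
        return True                    # comfort overrides price
    return price_per_kwh <= PRICE_THRESHOLD

hourly_prices = [0.08, 0.10, 0.32, 0.45, 0.12]  # short spike in hours 2-3
for hour, price in enumerate(hourly_prices):
    print(hour, "ON" if ac_should_run(price, indoor_temp_c=26.0) else "OFF")
```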

TOPIC 14: VALUES, ETHICS, MORAL - THE EMERGING CHALLENGE


EXECUTIVE SUMMARY
In this age of globalization, open market economies and fierce competition, it is a real challenge for business organizations to operate ethically. Various alluring factors constantly afflict the people working in organizations, making it a herculean task for managers to actualize values and uphold a moral culture effectively.
The present study attempts to analyze the various factors that hinder the process of establishing sound business ethics in Indian organizations, and the initiatives taken by various firms in this regard.
In order to gain some insight into the mindset of people working in the corporate world on this issue, a survey was conducted among the executives of Talcher Thermal Power Station, NTPC Limited. From the data gathered, we tried to analyze the factors which managers need to work upon to instill organizational ethics and values in the entire workforce.
In our proposed solutions, we have tried to address each of these factors. If implemented, these measures will help NTPC Limited, as well as any other business entity, to establish itself as a more responsible and ethical corporate citizen, securing the company's reputation among its stakeholders and guiding it towards a flourishing and sustainable future.
Values and principles are the essence of human life and justify human beings as supreme. We need to start seeing values from a fresh perspective and arouse a general awakening towards the worth of values in life. If we are not alert to the imminent crisis in values today, we may not be able to pass on these important aspects to the next generation.
Failure in business ethics and corporate governance is a real threat to the future of every corporation. With effective governance based on the core values of integrity and trust, companies can gain much competitive advantage, attract and retain the best people, and generate positive reactions in the marketplace: a company reputed for ethical behavior in a competitive market engenders not only customer loyalty but also employee loyalty. A great deal depends upon fairness, honesty, integrity and the manner in which companies conduct their affairs. Companies must make a profit to survive and grow; however, the pursuit of profits must stay within ethical bounds.
TOPIC 15: KNOWLEDGE MANAGEMENT
EXECUTIVE SUMMARY
Knowledge management involves any systematic activity related to the capture and sharing of knowledge by an organization.
Knowledge is an asset, and knowledge management is the cluster of activities around that asset: a single platform where knowledge is shared, updated, refreshed and grown. For any organization it becomes imperative to increase the circulation of knowledge and information among employees and to provide an organizational environment that helps develop the right attitude and mutual trust among them.
The term "knowledge management" is now in widespread use, having appeared in the titles of many new books about knowledge management as a business strategy, as well as in articles in many business publications, including The Wall Street Journal. There are, of course, many ways to slice up the multi-faceted world of knowledge management. However, it is often useful to categorize them.

The Benefits of Knowledge Management

• Facilitates better, more informed decisions
• Contributes to the intellectual capital of an organization
• Encourages the free flow of ideas, which leads to insight and innovation
• Eliminates redundant processes, streamlines operations, and enhances employee retention rates
• Improves customer service and efficiency
• Can lead to greater productivity

Knowledge Management does not have a beginning and an end. It is ongoing, organic,
and ever-evolving.
Understanding Knowledge Management

• KM is about people. It is directly linked to what people know, and how what they know can support business and organizational objectives. It draws on human competency, intuition, ideas and motivations. It is not a technology-based concept; although technology can support a KM effort, it shouldn't begin there.

• KM is orderly and goal-directed. It is inextricably tied to the strategic objectives of the organization. It uses only the information that is the most meaningful, practical and purposeful.

• KM is ever-changing. There is no such thing as an immutable law in KM. Knowledge is constantly tested, updated, revised and sometimes even "obsoleted" when it is no longer practicable. It is a fluid, ongoing process.

• KM is value-added. It draws upon pooled expertise, relationships and alliances. Organizations can further the two-way exchange of ideas by bringing in experts from the field to advise or educate managers on recent trends and developments. Forums, councils and boards can be instrumental in creating common ground and organizational cohesiveness.

• KM is visionary. This vision is expressed in strategic business terms rather than technical terms, and in a manner that generates enthusiasm and buy-in and motivates managers to work together toward reaching common goals.

• KM is complementary. It can be integrated with other organizational learning initiatives such as Total Quality Management (TQM). It is important for knowledge managers to show interim successes along with progress made on more protracted efforts, such as multiyear systems development, infrastructure, or enterprise architecture projects.
