
CHAPTER 1

INTRODUCTION
Increased interconnection and loading of the power system, along with deregulation and environmental concerns, have brought new challenges for electric power system operation, control and automation. In a liberalized electricity market, the operation and control of the power system become complex due to modeling complexity and uncertainties. Power system models used for intelligent operation and control are highly dependent on the task at hand. In a competitive, increasingly automated electricity market, computational intelligence techniques are very useful. As electric utilities try to provide smart solutions with economic, technical (security, stability and power quality) and environmental goals, several challenging issues arise in smart grid solutions, including, but not limited to: forecasting of load, price and ancillary services; penetration of new and renewable energy sources; bidding strategies of participants; power system planning and control; operating decisions under missing information; increased distributed generation and demand response in the electricity market; and tuning of controller parameters under varying operating conditions.

Risk management and financial management in the electric sector are concerned with finding an ideal trade-off between maximizing the expected returns and minimizing the risks associated with these investments. Artificial intelligence emerged as a computer science discipline in the mid-1950s. Since then, it has produced a number of powerful tools, many of which are of practical use in engineering for solving difficult problems that normally require human intelligence. Three of these tools are reviewed in this paper: fuzzy logic, neural networks and genetic algorithms. All of them have been in existence for more than 30 years and have found many applications in engineering. This paper lists various potential areas of power systems and describes the roles of artificial intelligence in emerging power systems; a brief review of intelligence techniques is also presented. Back-propagation is an iterative, gradient-search, supervised algorithm which can be viewed as a multilayer non-linear method that re-codes its input space in the hidden layers and thereby solves hard learning problems. The network is trained using the ANN technique until good agreement between predicted gain settings and actual gains is reached.

During the last three decades, the assessment of the potential of sustainable, eco-friendly alternative sources and the refinement of technology have progressed to a stage where economical and reliable power can be produced. Different renewable sources are available at different geographical locations close to loads; the latest trend is therefore to have distributed or dispersed power systems. Examples of such systems are wind-diesel and wind-diesel-micro-hydro systems, with or without multiple generators to meet the load demand. These systems are known as hybrid power systems. For automatic reactive load voltage control, an SVC device has been considered. The multi-layer feed-forward ANN toolbox of MATLAB 6.5 with the error back-propagation training method is employed.

Computer-based energy management systems are now widely used in energy control centers. Power system analysis programs and other application programs are employed in energy management systems to investigate and predict the behavior of power systems under steady-state operation. The energy management system (EMS) is the center of a control system organized in a hierarchical structure utilizing remote terminal units, communication links, and various levels of computer processing systems. The function of the EMS is to ensure the secure and economic operation of the power system as well as to facilitate the minute-by-minute tasks carried out by the operations personnel. While these programs are powerful tools, their ability to assist operation engineers in making efficient decisions is very limited when unplanned or unexpected modes of system operation occur. Abnormal modes of system operation may be caused by network faults, active and reactive power imbalances, or frequency deviations. An unplanned operation may lead to a complete system blackout. Under these emergency situations, power systems are restored to the normal state according to decisions made by experienced operation engineers. For efficient diagnosis of network faults, determination of operational strategies for network restoration, and balancing of active and reactive power, there is clearly a need to develop new computer techniques and methods for building programs in which the precious knowledge of experienced operation engineers can be accounted for in addition to the conventional power system application programs. There is also a need to develop fast and efficient methods for the prediction of abnormal system behavior. Artificial intelligence (AI) has provided techniques for encoding and reasoning with declarative knowledge; these complement conventional computing techniques and methods for solving problems of power system planning, operation and control. This paper first reports the areas in power systems to which artificial intelligence has been applied. It then summarizes the artificial intelligence techniques that have been employed and makes suggestions for the improvement of existing artificial intelligence tools.


CHAPTER 2

ARTIFICIAL INTELLIGENCE TECHNIQUES EMPLOYED


The research in artificial intelligence has developed many techniques and methodologies which are useful for solving complicated power system problems. These include knowledge representation methods, search strategies, automated reasoning techniques, expert system or knowledge-based system methodologies, general problem-solving approaches, blackboard architectures and computer languages for symbolic and list processing. The artificial intelligence techniques and the expert system approach are relatively new tools for power engineers.

Computational intelligence (CI) is a modern tool for solving complex problems which are difficult to solve with conventional techniques. Heuristic optimization techniques are general-purpose methods that are very flexible and can be applied to many types of objective functions and constraints. Recently, these heuristic tools have been combined among themselves, and new methods have emerged that combine elements of nature-based methods or that have their foundation in stochastic and simulation methods. Developing solutions with these tools offers two major advantages: development time is much shorter than with more traditional approaches, and the resulting systems are very robust, being relatively insensitive to noisy and/or missing data and information (uncertainty). Due to environmental, right-of-way and cost problems, there is an increased interest in better utilization of available power system capacities in both bundled and unbundled power systems.

Natural evolution is a hypothetical population-based optimization process. Simulating this process on a computer results in stochastic optimization techniques that can often outperform classical optimization methods on real-world problems. Evolutionary computation (EC) is based on Darwin's principle of survival of the fittest. An evolutionary algorithm begins by initializing a population of candidate solutions to a problem. New solutions are then created by randomly varying those of the initial population. All solutions are evaluated with respect to how well they address the task, and a selection criterion is applied to weed out those solutions which are below standard. The process is iterated using the selected set of solutions until a stopping criterion is met. The advantages of EC are adaptability to change and the ability to generate good-enough solutions, but these must be understood in relation to its computing requirements and convergence properties. EC can be subdivided into genetic algorithms (GA), evolution strategies, evolutionary programming (EP), genetic programming, classifier systems, simulated annealing (SA), etc. The first work in the field of evolutionary computation was reported by Fraser in 1957 (Fraser, 1957), who used a computer to study aspects of genetic systems. Subsequently, a number of evolution-inspired optimization techniques were developed. Computational intelligence methods that promise a global, or nearly global, optimum, such as expert systems (ES), artificial neural networks (ANN), genetic algorithms (GA), evolutionary computation (EC), fuzzy logic, etc., have emerged in recent years as effective tools in power system applications. These methods are also referred to as artificial intelligence (AI) in several works. In a practical power system, it is very important to capture the human knowledge and experience accumulated over time, owing to various uncertainties, load variations, topology changes, etc.


CHAPTER 3

EXPERT SYSTEMS
An expert system is a software paradigm in which knowledge concerning a complex problem is encoded into a computer program. The framework of expert systems is designed to enable easy encoding of knowledge and easy checkout of the expert system's performance. A general architecture for expert systems is shown in Fig. 1. Four major software elements comprise an expert system: the knowledge base, the inference engine, building and checkout utilities, and the user interface. Expert systems also provide the ability to explain the reasoning used (e.g., to trace the rules fired in a rule-based system), which is important during checkout. Depending on the representation scheme, an AI program becomes either rule-based, frame-based, or logic-based. AI programs that achieve expert-level competence in solving problems by bringing to bear knowledge about specific tasks are called knowledge-based or expert systems (ES); the concept was first proposed by Feigenbaum et al. in the early 1970s (Feigenbaum et al., 1971). An ES is a knowledge-based or rule-based system which uses knowledge and an inference procedure to solve problems that are difficult enough to require human expertise. The main advantages of ES are: (a) it is permanent and consistent; (b) it can be easily transferred or reproduced; (c) it can be easily documented. The main disadvantage of ES is that it suffers from a knowledge bottleneck: it is unable to learn or adapt to new situations. Knowledge engineering techniques started with simple rule-based approaches and have been extended to more advanced techniques such as object-oriented design, qualitative reasoning, verification and validation methods, natural language processing, and multi-agent systems. Over the past several years, a great number of ES applications have been developed to plan, analyze, manage, control and operate various aspects of power generation, transmission and distribution systems. Expert systems have also been applied in recent years to load, bid and price forecasting.


Figure 1: Architecture of Expert Systems

3.1 Rule-Based System: The rule-based system has two kinds of memory: short-term (or working) memory and long-term memory. The short-term memory (STM) contains factual knowledge, which is modified as the computation proceeds. The long-term memory (LTM) contains the production rules themselves. The inference engine of the rule-based system tests the premise part of each rule by matching it against the factual knowledge in the STM (matching cycle). If the match succeeds, the action part of the rule is executed, resulting in some changes to the STM (firing cycle). The engine then goes back to the matching cycle. More than one rule may succeed in matching, in which case the inference engine invokes a conflict-resolution mechanism to decide which rule shall be used. The rule-based method has been applied to the areas of fault diagnosis and control of nuclear power plants.

3.2 Frame-Based System: In the rule-based system, factual knowledge is stored in the STM without regard to relationships between different objects. However, relationships do exist between the objects of many problems, and a frame-based knowledge representation allows the user to set up and make use of these relationships. For example, consider the objects of a substation such as breakers, switches, buses, transformers, and transmission lines. Several objects comprise a substation, and a set of substations forms an area. Depending on the status of individual breakers and switches, buses may be split or de-energized. Transformers and lines may be connected, open-ended, or de-energized depending on the status of the terminating bus sections, etc.
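As a rough illustration of how such frame-like relationships might be set up, the following Python sketch links breakers, buses and a line, and derives whether equipment is connected, open-ended or de-energized from the status of the objects it is related to. The equipment names and the tiny network layout are invented purely for illustration; they are not taken from any particular substation model.

```python
# Frame-like sketch: substation objects with slots and relationship links.
# Object names and network layout are hypothetical, for illustration only.

class Breaker:
    def __init__(self, name, closed):
        self.name, self.closed = name, closed

class Bus:
    def __init__(self, name, breakers):
        self.name, self.breakers = name, breakers   # relationship slot

    def energized(self):
        # A bus is treated as energized if any connected breaker is closed
        return any(b.closed for b in self.breakers)

class Line:
    def __init__(self, name, from_bus, to_bus):
        self.name, self.from_bus, self.to_bus = name, from_bus, to_bus

    def status(self):
        ends = (self.from_bus.energized(), self.to_bus.energized())
        if all(ends):
            return "connected"
        if any(ends):
            return "open-ended"
        return "de-energized"

cb1 = Breaker("CB1", closed=True)
cb2 = Breaker("CB2", closed=False)
bus_a = Bus("Bus A", [cb1])
bus_b = Bus("Bus B", [cb2])
line_ab = Line("Line A-B", bus_a, bus_b)

print(bus_a.energized(), bus_b.energized(), line_ab.status())
# -> True False open-ended
```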


3.3 Logic-Based System: The frameworks dealt with so far are appropriate for representing procedural knowledge, i.e., what to do when certain conditions are met. A different way to represent knowledge requires one to specify what instead of how, and a logic-based system provides such means. Prolog is a programming language for representing this what-type of knowledge. Logic-based systems have an advantage when specifying system requirements, but they have a disadvantage in specifying procedure-oriented knowledge. Systems developed on the basis of logic and logic programming have demonstrated that these techniques are suitable for building expert systems and artificial intelligence systems for solving combinatorial problems in power systems.


CHAPTER 4

AI METHODS USED IN POWER SYSTEM


Some of the methods and algorithms with which artificial intelligence can be implemented in power systems are as follows:

a) FUZZY LOGIC
b) NEURAL NETWORKS
c) GENETIC ALGORITHM

4.1 Fuzzy Logic: Fuzzy logic (FL) was developed by Zadeh (Zadeh, 1965) to address the uncertainty and imprecision that widely exist in engineering problems, and it was first introduced to power system problems in 1979. Fuzzy set theory can be considered a generalization of classical set theory. In classical set theory, an element of the universe either belongs or does not belong to a set; the degree of association of an element is thus crisp. In fuzzy set theory, the association of an element can vary continuously. Mathematically, a fuzzy set is a mapping (known as a membership function) from the universe of discourse to the closed interval [0, 1]. The membership function measures the degree of similarity of any element of the universe of discourse to a fuzzy subset. Triangular, trapezoidal, piecewise-linear and Gaussian functions are the most commonly used membership functions, and the membership function is usually designed by taking into consideration the requirements and constraints of the problem. Fuzzy logic implements human experience and preferences via membership functions and fuzzy rules. Because fuzzy variables are used, the system can be made understandable to a non-expert operator. In this way, fuzzy logic can be used as a general methodology for incorporating knowledge, heuristics or theory into controllers and decision makers. The advantages of fuzzy theory are as follows:
i. It more accurately represents the operational constraints of power systems, and
ii. Fuzzified constraints are softer than traditional constraints.


Momoh et al. (2000) have presented an overview and literature survey of fuzzy set theory applications in power systems.
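As a small illustration of the membership-function shapes named above, the following Python sketch defines triangular, trapezoidal and Gaussian membership functions. The parameter values and the "low bus voltage" example are made-up illustrations, not taken from any particular study.

```python
import math

def triangular(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: flat top between b and c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def gaussian(x, mean, sigma):
    """Gaussian membership centered at 'mean' with spread 'sigma'."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Example: degree to which a 0.97 p.u. bus voltage is "low" (illustrative ranges)
print(triangular(0.97, 0.90, 0.95, 1.00))         # 0.6
print(trapezoidal(0.97, 0.90, 0.93, 0.96, 1.00))  # 0.75
print(gaussian(0.97, 0.95, 0.02))                 # ~0.61
```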

A recent survey shows that fuzzy set theory has been applied mainly to voltage and reactive power control, load and price forecasting, fault diagnosis, power system protection and relaying, stability and power system control, etc. Fuzzy logic has rapidly become one of the most successful of today's technologies for developing sophisticated control systems, and the reason is simple: fuzzy logic resembles human decision making, with an ability to generate precise solutions from certain or approximate information. It fills an important gap in engineering design methods left vacant by purely mathematical approaches (e.g., linear control design) and purely logic-based approaches (e.g., expert systems). While other approaches require accurate equations to model real-world behavior, fuzzy design can accommodate the ambiguities of real-world human language and logic. It provides both an intuitive method for describing systems in human terms and an automated conversion of those system specifications into effective models. As the complexity of a system increases, it becomes more difficult, and eventually impossible, to make precise statements about its behavior, until a point of complexity is reached where the fuzzy reasoning natural to humans is the only practical way to approach the problem. Fuzzy logic is used in system control and analysis design because it shortens engineering development time and sometimes, for highly complex systems, is the only way to solve the problem.

The fuzzy logic analysis and control method therefore proceeds as follows:
1) Receive one, or a large number of, measurements or other assessments of the conditions existing in the system to be analyzed or controlled.
2) Process all these inputs according to human-based fuzzy "if-then" rules, which can be expressed in plain-language words, in combination with traditional non-fuzzy processing.
3) Average and weight the resulting outputs from all the individual rules into one single output decision or signal, which decides what to do or tells a controlled system what to do. The output signal eventually arrived at is a precise-appearing, defuzzified, "crisp" value.

Fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth: truth values between "completely true" and "completely false". As its name suggests, it is the logic underlying modes of reasoning which are approximate rather than exact. The importance of fuzzy logic derives from the fact that most modes of human reasoning, and especially common-sense reasoning, are approximate in nature. The essential characteristics of fuzzy logic, as stated by Lotfi Zadeh, are as follows:
1) In fuzzy logic, exact reasoning is viewed as a limiting case of approximate reasoning.
2) In fuzzy logic, everything is a matter of degree.
3) Any logical system can be fuzzified.
4) In fuzzy logic, knowledge is interpreted as a collection of elastic or, equivalently, fuzzy constraints on a collection of variables.
5) Inference is viewed as a process of propagation of elastic constraints.
The third statement, in particular, implies that Boolean logic is a subset of fuzzy logic.

A paradigm is a set of rules and regulations which defines boundaries and tells us what to do to be successful in solving problems within these boundaries. For example, the use of transistors instead of vacuum tubes was a paradigm shift; likewise, the development of fuzzy set theory from conventional bivalent set theory is a paradigm shift. Bivalent set theory can be somewhat limiting if we wish to describe a 'humanistic' problem mathematically. The whole concept can be illustrated with an example about people and "youthness". In this case the set S (the universe of discourse) is the set of people. A fuzzy subset YOUNG is then defined, which answers the question "to what degree is person x young?" To each person in the universe of discourse we assign a degree of membership in the fuzzy subset YOUNG. The easiest way to do this is with a membership function based on the person's age:

Young(x) = 1, if age(x) <= 20
Young(x) = (30 - age(x)) / 10, if 20 < age(x) <= 30
Young(x) = 0, if age(x) > 30

Given this definition, here are some example values:


Person      Age   Degree of youth
Johan        10   1.00
Edwin        21   0.90
Parthiban    25   0.50
Arosha       26   0.40
Chin Wei     28   0.20
Rajkumar     83   0.00

So given this definition, we'd say that the degree of truth of the statement "Parthiban is YOUNG" is 0.50.
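A direct transcription of this membership function, shown here as a small Python sketch using the example names and ages from the table above, reproduces those values:

```python
def young(age):
    """Membership of a person in the fuzzy subset YOUNG, based on age."""
    if age <= 20:
        return 1.0
    if age <= 30:
        return (30 - age) / 10
    return 0.0

people = {"Johan": 10, "Edwin": 21, "Parthiban": 25,
          "Arosha": 26, "Chin Wei": 28, "Rajkumar": 83}

for name, age in people.items():
    print(f"{name:10s} {age:3d} {young(age):.2f}")

# The degree of truth of "Parthiban is YOUNG" is young(25) = 0.50
```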

Fuzzy Rules: Human beings make decisions based on rules. Although we may not be aware of it, the decisions we make are based on computer-like if-then statements: if the weather is fine, then we may decide to go out; if the forecast says the weather will be bad today but fine tomorrow, then we decide not to go today and postpone the trip until tomorrow. Rules associate ideas and relate one event to another. Fuzzy machines, which aim to mimic the behavior of humans, work the same way; however, the decision and the means of choosing that decision are replaced by fuzzy sets, and the rules are replaced by fuzzy rules. Fuzzy rules also operate using a series of if-then statements, for instance "if x is A then y is B", where A and B are fuzzy subsets of the universes X and Y. Fuzzy rules define fuzzy patches, which is the key idea in fuzzy logic. A machine is made smarter using a concept introduced by Bart Kosko called the Fuzzy Approximation Theorem (FAT), which states, roughly, that a finite number of patches can cover a curve. If the patches are large, the rules are sloppy; if the patches are small, the rules are fine.

Fuzzy patches simply mean that all our rules can be seen as patches, and the input and output of the machine can be associated together using these patches. Graphically, as the rule patches shrink, our fuzzy subset triangles get narrower. The appeal of this approach is that even novices can build control systems that beat the best mathematical models of control theory; in that sense it is a math-free design method.

Fuzzy control, which directly uses fuzzy rules, is the most important application of fuzzy theory. Using a procedure originated by Ebrahim Mamdani in the late 1970s, three steps are taken to create a fuzzy-controlled machine:
1) Fuzzification (using membership functions to graphically describe a situation)
2) Rule evaluation (application of the fuzzy rules)
3) Defuzzification (obtaining the crisp or actual result)

Figure 2: Block diagram of Fuzzy controller.
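A minimal sketch of these three steps is given below: a generic Mamdani-style controller in Python with a single fuzzy input (error), three rules and centroid defuzzification. The membership functions, rule set and error/output ranges are illustrative assumptions, not a production design.

```python
# Minimal Mamdani-style controller sketch: fuzzification, rule evaluation,
# centroid defuzzification. Membership functions and rules are illustrative.

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# 1) Fuzzification: describe the error with fuzzy sets
def fuzzify_error(e):
    return {"negative": tri(e, -2.0, -1.0, 0.0),
            "zero":     tri(e, -1.0,  0.0, 1.0),
            "positive": tri(e,  0.0,  1.0, 2.0)}

# Output fuzzy sets over the control signal u
output_sets = {"decrease": (-2.0, -1.0, 0.0),
               "hold":     (-1.0,  0.0, 1.0),
               "increase": ( 0.0,  1.0, 2.0)}

# 2) Rule evaluation: IF error is X THEN output is Y
rules = {"negative": "increase", "zero": "hold", "positive": "decrease"}

def control(e, step=0.01):
    strengths = fuzzify_error(e)
    num = den = 0.0
    # 3) Defuzzification: centroid of the clipped, aggregated output sets
    u = -2.0
    while u <= 2.0:
        mu = 0.0
        for error_set, out_set in rules.items():
            a, b, c = output_sets[out_set]
            mu = max(mu, min(strengths[error_set], tri(u, a, b, c)))
        num += u * mu
        den += mu
        u += step
    return num / den if den else 0.0

print(control(0.5))   # small positive error -> small corrective (negative) action
```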

4.2 Neural Networks: A neural network is a computational model of the brain. Neural network models usually assume that computation is distributed over several simple units called neurons, which are interconnected and operate in parallel (hence neural networks are also called parallel distributed-processing systems or connectionist systems). The most popular neural network is the multilayer perceptron, which is a feed-forward network: all signals flow in a single direction from the input to the output of the network. Feed-forward networks can perform static mapping between an input space and an output space: the output at a given instant is a function only of the input at that instant. Recurrent networks, in which the outputs of some neurons are fed back to the same neurons or to neurons in earlier layers, are said to have a dynamic memory: the output of such networks at a given instant reflects the current input as well as previous inputs and outputs.


Implicit knowledge is built into a neural network by training it. Some neural networks can be trained by being presented with typical input patterns and the corresponding expected output patterns. The error between the actual and expected outputs is used to modify the strengths, or weights, of the connections between the neurons. This method of training is known as supervised training. In a multilayer perceptron, the back-propagation algorithm for supervised training is often adopted to propagate the error from the output neurons and compute the weight modifications for the neurons in the hidden layers. Some neural networks are trained in an unsupervised mode, where only the input patterns are provided during training and the network learns automatically to cluster them into groups with similar features. A neuro-fuzzy system can be used to study both neural and fuzzy logic systems: a neural network can approximate a function, but it is impossible to interpret the result in terms of natural language, whereas the fusion of neural networks and fuzzy logic in neuro-fuzzy models provides learning as well as readability. Control engineers find this useful because the models can be interpreted and supplemented by process operators.
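As a concrete illustration of supervised training with back-propagation, the following self-contained NumPy sketch trains a tiny one-hidden-layer perceptron on the XOR mapping. The architecture, learning rate, random seed and training data are illustrative choices of this sketch, not taken from the report.

```python
import numpy as np

# Training set: XOR input patterns and the corresponding expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input  -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)              # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)              # network outputs

    # Backward pass: propagate the output error to the hidden-layer weights
    dY = (Y - T) * Y * (1 - Y)            # output-layer error signal
    dH = (dY @ W2.T) * H * (1 - H)        # hidden-layer error signal

    # Gradient-descent weight updates
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# typically converges to approximately [0, 1, 1, 0]
```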

Figure 3: Indirect adaptive control: The controller parameters are updated indirectly via process model.

A neural network can model a dynamic plant by means of a nonlinear regression in the discrete-time domain. The result is a network, with adjusted weights, which approximates the plant. It is a problem, though, that the knowledge is stored in an opaque fashion: the learning results in a (large) set of parameter values that are almost impossible to interpret in words. Conversely, a fuzzy rule base consists of readable if-then statements that are almost natural language, but it cannot learn the rules itself. The two are combined in neuro-fuzzy systems in order to achieve readability and learning ability at the same time. The obtained rules may reveal insight into the data that generated the model and, for control purposes, they can be integrated with rules formulated by control experts (operators). Assume the problem is to model a process, such as in the indirect adaptive controller in Fig. 3. A mechanism is supposed to extract a model of the nonlinear process, depending on the current operating region. Given a model, a controller for that operating region is to be designed using, say, a pole-placement design method. One approach is to build a two-layer perceptron network that models the plant, linearize it around the operating points, and adjust the model depending on the current state (Nørgaard, 1996). The problem seems well suited for the so-called Takagi-Sugeno type of neuro-fuzzy model, because it is based on piecewise linearization. Extracting rules from data is a form of modeling activity within pattern recognition, data analysis or data mining, also referred to as the search for structure in data.

An artificial neural network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information (Bishop, 1995). The key element of this paradigm is the novel structure of the information-processing system, composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. The starting point of ANN applications was the training algorithm proposed by Hebb in 1949, which demonstrated how a network of neurons could exhibit learning behaviour. During the training phase, the neurons are presented with a finite set of examples called the training set, and they adjust their weights according to a certain learning rule. ANNs are not programmed in the conventional sense; rather, they learn to solve the problem through interaction with their environment. Very little computation is carried out at the site of an individual node (neuron). There are no explicit memory or processing locations in a neural network; these are implicit in the connections between nodes. Not all inputs feeding a node are of equal importance: everything depends on the weights, which can be negative or positive, and inputs arriving at a node are transformed according to the node's activation function.

The main advantages of ANNs are as follows:


I. ANNs with a suitable representation can model any degree of non-linearity and are therefore very useful for solving nonlinear problems.
II. They do not require any a priori knowledge of the system model.
III. ANNs are capable of handling situations of incomplete information and corrupt data, and they are fault tolerant.
IV. The massive parallelism provided by the large number of neurons and the connections among them provides good search and distributed representations.
V. ANNs are fast and robust; they possess learning ability and adapt to the data.

Though neural network (NN) training is generally computationally expensive, it takes negligible time to evaluate an outcome once the network has been trained. Despite these advantages, some disadvantages of ANNs are: (i) large dimensionality; (ii) selection of the optimum configuration; (iii) choice of training methodology; (iv) the black-box nature of ANNs, which lack explanation capabilities; and (v) the fact that results are always generated even if the input data are unreasonable. Another drawback of neural network systems is that they are not scalable, i.e., once an ANN is trained to do a certain task, it is difficult to extend it to other tasks without retraining the network. Artificial neural networks are among the most promising methods for many power system problems and have been used in several applications. ANNs are mainly categorized by their architecture (number of layers), topology (connectivity pattern, feed-forward or recurrent, etc.) and learning regime. Based on architecture, an ANN may be a single-layer model, which includes the perceptron (suggested by Rosenblatt in 1958) and ADALINE (suggested by Widrow and Hoff in 1960). ANN models can be further categorized as feed-forward or feedback networks based on neuron interactions. Learning may be supervised, unsupervised, or reinforcement learning. Based on neuron structure, ANN models include the multilayer perceptron, Boltzmann machine, Cauchy machine, Kohonen self-organizing maps, bidirectional associative memories, adaptive resonance theory networks (ART-1 and ART-2), and counter-propagation ANNs. Some other special ANN models are parallel self-hierarchical NNs, recurrent NNs, radial basis function NNs, knowledge-based NNs, hybrid NNs, wavelet NNs, cellular NNs, quantum NNs, dynamic NNs, etc.


TRIAL AND ERROR: The input space, that is, the coordinate system formed by the input variables (position, velocity, error, change in error), is partitioned into a number of regions. Each input variable is associated with a family of fuzzy term sets, say negative, zero, and positive. The expert must then define the membership functions. For each valid combination of inputs, the expert is supposed to give typical values for the outputs; the task for the expert is thus to estimate the outputs. The design procedure is: first, select the relevant input and output variables; second, determine the number of membership functions associated with each input and output; and third, design a collection of fuzzy rules. Consider the data set given in Figure 4.

Figure 4: A fuzzy model approximation (solid line, top) of a data set (dashed line, top). The input space is divided into three fuzzy regions (bottom).

A better approach is to approximate the target function with a piecewise-linear function and interpolate, in some way, between the linear regions. In the Takagi-Sugeno model (Takagi & Sugeno, 1985), the idea is that each rule in a rule base defines a region for a model, which can be linear. The left-hand side of each rule defines a fuzzy validity region for the linear model on the right-hand side, and the inference mechanism interpolates smoothly between the local models to provide a global model. The general Takagi-Sugeno rule structure is:

If f(e1 is A1, e2 is A2, ..., ek is Ak) then y = g(e1, e2, ..., ek)

Here f is a logical function that connects the sentences forming the condition, y is the output, and g is a function of the inputs e1, ..., ek. An example is:

If error is positive and change in error is positive then U = Kp*(error + Td*change in error)

where U is the controller output, and the constants Kp and Td are the familiar tuning constants of a proportional-derivative (PD) controller. Another rule could specify a PD controller with different tuning settings for another operating region. The inference mechanism is then able to interpolate between the two controllers in the regions of overlap.

FEATURE DETERMINATION: In general, data analysis concerns objects which are described by features. A feature can be regarded as a pool of values from which the actual values appearing in a given data column are drawn.

Figure 5: Interpolation between two lines (top) in the overlap of input sets (bottom).
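To illustrate how a Takagi-Sugeno rule base interpolates between local controllers, here is a small Python sketch with two hypothetical PD rules whose validity regions overlap. The membership functions, gains and operating-region names are invented for illustration only.

```python
def mu_small(error):
    """Fuzzy validity of the 'small error' operating region (illustrative)."""
    return max(0.0, min(1.0, (2.0 - abs(error)) / 2.0))

def mu_large(error):
    """Fuzzy validity of the 'large error' operating region (illustrative)."""
    return 1.0 - mu_small(error)

# Local PD controllers on the rule right-hand sides: U = Kp*(e + Td*de)
def pd_small(e, de):
    return 1.0 * (e + 0.1 * de)      # gentle tuning near the set point

def pd_large(e, de):
    return 4.0 * (e + 0.5 * de)      # aggressive tuning far from the set point

def takagi_sugeno_control(e, de):
    """Weighted average of the local controllers (smooth interpolation)."""
    w_small, w_large = mu_small(e), mu_large(e)
    u = w_small * pd_small(e, de) + w_large * pd_large(e, de)
    return u / (w_small + w_large)

for e in (0.2, 1.0, 3.0):
    print(e, round(takagi_sugeno_control(e, de=0.0), 2))
# Output blends from the gentle controller (e=0.2) to the aggressive one (e=3.0)
```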

Figure 6: Above is an example of clusters.


4.3 Genetic Algorithm: A problem with back-propagation and least-squares optimization is that they can be trapped in a local minimum of a nonlinear objective function, because they are derivative based. Genetic algorithms (survival of the fittest!) are derivative-free, stochastic optimization methods, and are therefore less likely to get trapped. They can be used to optimize both structure and parameters in neural networks; a special application is the determination of fuzzy membership functions. A genetic algorithm mimics the evolution of populations. First, different possible solutions to a problem are generated. They are tested for their performance, that is, how good a solution they provide. A fraction of the good solutions is selected and the others are eliminated (survival of the fittest). The selected solutions then undergo the processes of reproduction, crossover, and mutation to create a new generation of possible solutions, which is expected to perform better than the previous generation. Finally, production and evaluation of new generations is repeated until convergence. Such an algorithm searches for a solution over a broad spectrum of possible solutions rather than only where the results would normally be expected; the penalty is computational intensity. The elements of a genetic algorithm are explained next.

Step 1: Encoding. The parameter set of the problem is encoded into a bit-string representation. For instance, a point (x, y) = (11, 6) can be represented as the chromosome 1011 0110, a concatenated bit string in which each coordinate value is a gene of four bits. Other encoding schemes can be used, and arrangements can be made for encoding negative and floating-point numbers.

Step 2: Fitness evaluation. After creating a population, the fitness value of each member is calculated.

Step 3: Selection. The algorithm selects which parents should participate in producing offspring for the next generation. Usually the probability of selection of a member is proportional to its fitness value.

Step 4: Crossover. Crossover operators generate new chromosomes that hopefully retain good features from the previous generation. Crossover is usually applied to selected pairs of parents with a probability equal to a given crossover rate. In one-point crossover, a crossover point on the genetic code is selected at random and the two parent chromosomes interchange their bit strings to the right of this point.

Step 5: Mutation. A mutation operator can spontaneously create new chromosomes. The most common way is to flip a bit with a probability equal to a very low, given mutation rate. Mutation prevents the population from converging towards a local minimum, and the mutation rate is kept low in order to preserve good chromosomes.
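The following minimal Python sketch puts these five steps together for a toy maximization problem (maximizing f(x) = x*(31 - x) over 5-bit integers). The encoding, rates and fitness function are illustrative assumptions of this sketch only.

```python
import random

random.seed(1)
N_BITS, POP_SIZE, CROSS_RATE, MUT_RATE, GENERATIONS = 5, 20, 0.8, 0.02, 40

def fitness(chrom):
    """Step 2: decode the bit string and evaluate the objective f(x) = x*(31 - x)."""
    x = int("".join(map(str, chrom)), 2)
    return x * (31 - x)

def select(pop):
    """Step 3: fitness-proportional (roulette-wheel) selection of one parent."""
    return random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=1)[0]

def crossover(p1, p2):
    """Step 4: one-point crossover with probability CROSS_RATE."""
    if random.random() < CROSS_RATE:
        point = random.randint(1, N_BITS - 1)
        return p1[:point] + p2[point:]
    return p1[:]

def mutate(chrom):
    """Step 5: flip each bit with the low probability MUT_RATE."""
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in chrom]

# Step 1: encoding -- initialize a random population of bit strings
population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(int("".join(map(str, best)), 2), fitness(best))   # expected near x = 15 or 16
```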

An example of a simple genetic algorithm for a maximization problem is the following:
1. Initialize the population with randomly generated individuals and evaluate the fitness of each individual.
2. Produce a new generation:
(a) Select two members from the population with probabilities proportional to their fitness values.
(b) Apply crossover with a probability equal to the crossover rate.
(c) Apply mutation with a probability equal to the mutation rate.
(d) Repeat (a) to (c) until enough members are generated to form the next generation.
3. Evaluate the fitness of each individual in the new generation.
Steps 2 and 3 are repeated until a stopping criterion is met. If the mutation rate is high (above 0.1), the performance of the algorithm will be as poor as that of a primitive random search.

The genetic algorithm (GA) is an optimization method based on the mechanics of natural selection and natural genetics. Its fundamental principle is that the fittest members of a population have the highest probability of survival. The most familiar conventional optimization techniques fall under two categories, viz. calculus-based methods and enumerative schemes. Though well developed, these techniques possess significant drawbacks: calculus-based optimization generally relies on continuity assumptions and the existence of derivatives, while enumerative techniques rely on special convergence properties and auxiliary function evaluation. The genetic algorithm, on the other hand, works only with objective function information in its search for an optimal parameter set. The GA can be distinguished from other optimization methods by the following four characteristics:
i. The GA works on a coding of the parameter set rather than the actual parameters.
ii. The GA searches for optimal points using a population of possible solution points, not a single point. This is an important characteristic which makes the GA more powerful and also results in implicit parallelism.
iii. The GA uses only objective function information. No other auxiliary information (e.g. derivatives) is required.
iv. The GA uses probabilistic transition rules, not deterministic rules.

The genetic algorithm is essentially derived from a simple model of population genetics. It has the following five components:
1) A chromosomal representation of the variables characterizing an individual.
2) An initial population of individuals.
3) An evaluation function that plays the role of the environment, rating the individuals in terms of their fitness, that is, their aptitude to survive.
4) Genetic operators that determine the composition of the new population generated from the previous one by a mechanism similar to sexual reproduction.
5) Values for the parameters that the GA uses.

The advantages of GA over traditional techniques are as follows: it needs only rough information about the objective function and places no restrictions, such as differentiability or convexity, on it; the method works with a set of solutions from one generation to the next, rather than a single solution, making it less likely to converge to local minima; and the solutions developed are randomly based on the probability rates of the genetic operators, such as mutation and crossover, so the initial solutions do not dictate the search direction of the GA. The major disadvantage of the GA method is that it requires a tremendously long computational time when there are many variables and constraints. The treatment of equality constraints is also not well established in GA.


CHAPTER 5

POSSIBLE APPLICATIONS OF AI TO POWER SYSTEM OPERATIONS


A. Alarm Processing: The alarm processing problem is a diagnosis problem. When a serious disruption occurs on the power system, operators can be overloaded with alarm messages. Because many of the alarm messages are redundant or present information related to the same event, the operators may have difficulty in understanding precisely what has happened. The use of AI to intercept alarm messages and present a concise diagnosis is now under active development in several organizations.

B. Switching Operations: Statistics show that about 40 percent of the tasks at a power system control center are related to operations on circuit breakers and line switches. Therefore, the automation of these tasks should benefit system operators. One potential application is the automatic generation of switching sequences; some work has also been done on the verification of switching sequences. Another application is the identification and isolation of faulted lines.

C. Restoration Control: A large-scale blackout may happen on a power system, although quite infrequently. The fact that blackouts happen infrequently makes the operator's job that much harder because of the limited exposure to solving the problem of restoring the system. As a result, most control centers have restoration plans and attempt to train operators in restoration using training simulators. However, the number of possible ways to restore a power system is very large and can change depending on the state of critical components at the time the blackout occurs. To this end, a system which supports operators by giving them timely guidance and provides them with a tool for short-term operations planning is quite desirable.


CHAPTER 6

FUTURE AREAS OF RESEARCH


There are several problems in power systems which cannot be solved using conventional approaches, as these methods rely on assumptions that may not hold all the time. In such situations, computational intelligence techniques are often the only choice; however, these techniques are not limited to such applications. The following areas of power systems make use of computational intelligence:
a) Power system operation (including unit commitment, economic dispatch, hydro-thermal coordination, maintenance scheduling, congestion management, load/power flow, state estimation, etc.)
b) Power system planning (including generation expansion planning, transmission expansion planning, reactive power planning, power system reliability, etc.)
c) Power system control (such as voltage control, load frequency control, stability control, power flow control, dynamic security assessment, etc.)
d) Power plant control (including thermal power plant control, fuel cell power plant control, etc.)
e) Network control (location and sizing of FACTS devices, control of FACTS devices, etc.)
f) Electricity markets (including bidding strategies, market analysis and clearing, etc.)
g) Power system automation (such as restoration and management, fault diagnosis and reliability, network security, etc.)
h) Distribution system applications (such as operation and planning of distribution systems, demand side management and demand response, network reconfiguration, operation and control of smart grids, etc.)
i) Distributed generation applications (such as distributed generation planning, operation with distributed generation, wind turbine plant control, solar photovoltaic power plant control, renewable energy sources, etc.)
j) Forecasting applications (such as short-term load forecasting, electricity market forecasting, long-term load forecasting, wind power forecasting, solar power forecasting, etc.)

Further research and development efforts can be directed to the areas below so that more powerful systems for power engineering can be built.
A. Knowledge base: (a) verification and maintenance of knowledge bases; (b) automating the process of knowledge acquisition.
B. Reasoning methods: (a) uncertainty reasoning; (b) reasoning about time-dependent data and events; (c) improvement in the speed of reasoning.
C. Machine learning: (a) learning information from data and patterns; (b) learning from past events; (c) combined artificial intelligence and neural network learning methods.


CHAPTER 7

FUTURE AREAS OF APPLICATION


The need for AI tools is very high in the Japanese power industry. A combination of several factors, somewhat unique to Japan, has caused this to happen. New engineers and plant operators, employed in large numbers for the post-World War II reconstruction of Japan, are now retiring, and due to the consistent efforts of the industry over the last three decades, both the equipment and the power grid have become very reliable. As a result, the new engineers and operators are not getting enough exposure to faults and other related system problems. This has raised concern among utility managers that the new engineers are probably not being adequately trained to face the difficult task of operating a power system in the event of a major fault. There are also several other reasons for applying AI tools to power system problems: the desire to operate the system with smaller margins, increase productivity, clear faults and restore service more quickly, improve customer relations, and provide the utility engineer with more powerful design and planning tools. The areas in power systems to which artificial intelligence has so far been applied are by no means exhaustive. Some other important areas of application are:
(a) Network topology identification
(b) Power system state estimation
(c) Steady-state and transient stability assessment
(d) Distribution automation
(e) Generation plant and equipment maintenance
(f) Generator, transformer and transmission line design
(g) Electricity pricing
(h) System operation simulators


CONCLUSION
Over the past 40 years, artificial intelligence has produced a number of powerful tools. This paper has reviewed three of those tools, namely fuzzy logic, neural networks and genetic algorithms. Applications of these tools in engineering have become more widespread due to the power and affordability of present-day computers. It is anticipated that many new engineering applications will emerge and that, for demanding tasks, greater use will be made of hybrid tools combining the strengths of two or more of the tools reviewed. Several AI-based systems have already been put into operation in real-life power systems. The success of these applications strongly indicates that artificial intelligence approaches can provide powerful means of developing intelligent software systems for assisting planning and operation engineers in solving various power engineering problems. Finally, people having the multidisciplinary skills of power system engineering, computer science, and cognitive science must be trained for such tasks.


