Ch. Lavanya
K.S.R.M. College of Engineering,
III E.E.E, I SEM, KADAPA.
e-mail: lallu18_18@yahoo.com

V. Srividya
K.S.R.M. College of Engineering,
III E.E.E, I SEM, KADAPA.
e-mail: sri_vidya_eee@yahoo.co.in

Y. Swathi
K.S.R.M. College of Engineering,
III E.E.E, I SEM, KADAPA.
e-mail: yswathibtech@yahoo.co.in
ABSTRACT
This paper presents a method for short term load forecasting in electric power systems using an artificial neural network. A multilayered feed forward network with the back propagation learning algorithm is used because of its good generalising property. The input to the neural network consists of past load data, heuristically chosen so that it reflects the trend and load shape as well as some influence of weather. Weather data is not used to train the network. The network is trained to forecast the load one hour ahead. The generalisation capability of the neural network is also studied. Simulation results using the system data are presented.
Introduction:
Short term load forecasting is an essential tool in the operation and planning of a power system. It helps in coordinating generation and area interchange to meet the load demand. It also helps in security assessment, dynamic state estimation, load management and other related functions. In the last few decades, various methods for short term load forecasting have been proposed, ranging from simple regression and extrapolation to fading memory Kalman filters and knowledge based systems.
Among the various methods available in the literature, most can be classified into two categories. In the first category are methods which rely solely on past data and fit the load pattern as a time series. In the second category are methods which emphasise weather variables, i.e., temperature, humidity, light intensity, etc., and find a functional relationship between these variables and the load demand.
Recently, Artificial Neural Networks (ANNs) have been used for short term load forecasting. Both time series models and weather dependent models have been used in ANN based short term load forecasting. In this paper, a short-term load forecasting method using an ANN is proposed. A multilayered feed forward (MLFF) neural network with the back propagation learning algorithm is used because of its simplicity and good generalisation property. The inputs to the neural network are based only on past load data and are heuristically chosen so that they inherently reflect all the major components which influence the system load, such as trend, type of day, load shape and weather.
The main contributions of this paper are: (i) a heuristic choice of a small set of inputs which inherently represents the major components of the load pattern, (ii) the introduction of a stopping criterion during the learning phase to avoid over fitting of the network to the learning examples, and (iii) a detailed analysis of the generalisation properties of the ANN, such as its interpolation/extrapolation ability and the working life of a trained network, i.e., the useful period after which retraining is required.
FIGURE 1
SINGLE PROCESSING UNIT (PE): a neuron with inputs X1, ..., XN, weights Wji, weighted sum Zj = Σi Wji Xi, and activation output f(Zj).
FIGURE 2
SCHEMATIC ILLUSTRATION OF A MULTILAYER FEED FORWARD (MLFF) NETWORK: input layer, hidden layer with weights Wji, output layer with weights Wkj, and output pattern.

During learning, the sum squared error

E = ½ Σk (tk − Ok)²

is minimized, where,
tk = desired output for unit k in the output layer
Ok = actual output for unit k in the output layer
The minimisation process is based on the gradient descent algorithm. The interconnecting weights between the jth layer (upper layer) neurons and the ith layer (lower layer) neurons are modified using the following relationship:

Wji(new) = Wji(old) + η δj Oi + α [ΔWji(old)]

where, if PEj is an output layer PE, then

δj = Oj (tj − Oj)(1 − Oj)

and if PEj is a hidden layer PE, then

δj = Oj (1 − Oj) Σk δk Wkj

where k runs over all PEs in the layer above the jth layer, η is the learning rate and α is the momentum factor. The momentum term helps in faster convergence of the algorithm.
Once the network gets trained, the resulting connection weights are frozen. In the
operation stage the network is used to compute an output from a set of inputs.
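The update rule above can be sketched in code as follows. This is a minimal illustration, not the paper's implementation; the learning rate and momentum defaults are placeholders, and the function names are chosen here for clarity.

```python
import numpy as np

def update_weights(W, delta_W_old, delta_j, O_i, eta=0.5, alpha=0.9):
    """Apply Wji(new) = Wji(old) + eta * delta_j * Oi + alpha * [dWji(old)].
    Returns the updated weight matrix and the weight change (for momentum)."""
    delta_W = eta * np.outer(delta_j, O_i) + alpha * delta_W_old
    return W + delta_W, delta_W

def output_delta(t, O):
    """delta_j for an output-layer PE: Oj (tj - Oj)(1 - Oj)."""
    return O * (t - O) * (1.0 - O)

def hidden_delta(O_j, delta_k, W_kj):
    """delta_j for a hidden-layer PE: Oj (1 - Oj) * sum_k delta_k Wkj,
    where W_kj has shape (num_k_units, num_j_units)."""
    return O_j * (1.0 - O_j) * (W_kj.T @ delta_k)
```

After training converges, the weights returned by the final `update_weights` call would simply be frozen and reused for forward passes.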
PROPOSED METHOD
Characteristics of the Load Data
In order to reflect the load behaviour in the input information, the historical hourly load data for one year of a number of systems were analysed. It was observed that the load data exhibits a daily and weekly periodicity. It was also observed that the daily load patterns for the working days showed marked similarity, whereas the holiday load patterns were quite different from those of the working days. Therefore, hourly loads for working days and holidays were treated separately. The auto-correlation of the hourly load was obtained using
rk = [Σt=1..n−k (yt − ȳ)(yt+k − ȳ)] / [Σt=1..n (yt − ȳ)²]

where,
rk = auto-correlation factor for time lag k
n = total number of available data
ȳ = mean value of the available data
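The auto-correlation factor can be computed directly from the formula; a minimal sketch (function name chosen here for illustration):

```python
import numpy as np

def autocorrelation(y, k):
    """Auto-correlation factor r_k for lag k:
    sum_{t=1}^{n-k} (y_t - ybar)(y_{t+k} - ybar) / sum_{t=1}^{n} (y_t - ybar)^2."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()              # deviations from the mean
    return float(np.dot(d[:-k], d[k:]) / np.dot(d, d))
```

Evaluating this for lags 1 through 168 over a year of hourly loads would reproduce the kind of daily and weekly peaks discussed next.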
Figure 3
Auto-correlation factor (rk) for two weeks' load on the test system
Based on this analysis, five lagged loads, L-1, L-2, L-24, L-168 and L-169, were chosen as inputs. Among these, L-24 and L-168 reflect the daily and weekly periodicity of the hourly load; L-1, L-2, L-168 and L-169 reflect the trend of the hourly load pattern; and L-1 and L-2 also implicitly reflect the weather effect.
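As a concrete illustration, the five lagged loads can be gathered into the network's input vector. This sketch assumes `load` is a sequence of hourly loads indexed by hour number `t` (with t ≥ 169 so all lags exist); the function name is illustrative.

```python
def lag_inputs(load, t):
    """Input vector for forecasting hour t from the lagged loads
    L-1, L-2, L-24, L-168 and L-169."""
    return [load[t - 1], load[t - 2], load[t - 24], load[t - 168], load[t - 169]]
```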
Scaling of the Input and Output Data
The input and output variables for the neural network will have very different ranges if the actual hourly load data is used directly. This may cause convergence problems during the learning process. To avoid this, the input and output load data were scaled such that they lay within the range (0, 1), with the majority of the data having values near 0.5. For this purpose the actual load was scaled using the following relationship:
Ls = (L − Lmin) / (Lmax − Lmin)

where,
L = the actual load
Lmin = the minimum load
Lmax = the maximum load, taken as 1.5 to 2 times the peak load for the whole year
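The scaling and its inverse (needed to turn a network output back into megawatts) can be sketched as follows; the function names are chosen here for illustration.

```python
def scale_load(L, L_min, L_max):
    """Min-max scaling Ls = (L - Lmin) / (Lmax - Lmin), mapping loads
    into (0, 1); the text takes Lmax as 1.5 to 2 times the yearly peak."""
    return (L - L_min) / (L_max - L_min)

def unscale_load(Ls, L_min, L_max):
    """Inverse mapping, recovering an actual load from a scaled value."""
    return L_min + Ls * (L_max - L_min)
```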
ANN Architecture
The artificial neural network architecture used is a feed forward network with three layers, i.e., an input layer, one hidden layer and an output layer. The number of neurons in the input layer is equal to the number of variables in the input data. The output layer consists of one neuron. The choice of the number of hidden layer neurons is somewhat arbitrary; an optimal number is generally obtained through trial and error. On the basis of a large number of simulations, it was observed that a large number of hidden layer neurons leads to a long training time and creates a "grandmother" network: such a network memorizes the learning patterns very well but does not perform well on new sets of inputs. With too few hidden layer neurons, on the other hand, the network has difficulty in learning, as it is unable to create the required complex decision boundaries. Therefore, a good starting point for the trial-and-error choice of the hidden layer size is the geometric mean of the numbers of input and output layer neurons.
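The geometric-mean starting point amounts to a one-line rule (function name illustrative):

```python
import math

def hidden_neurons(n_in, n_out):
    """Starting guess for the hidden-layer size: the geometric mean of
    the input and output layer sizes, rounded to the nearest integer."""
    return max(1, round(math.sqrt(n_in * n_out)))
```

For the five-input, one-output networks used later in the paper this gives round(√5) = 2 hidden neurons, which would then be refined by trial and error.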
Stopping Criterion
Fig. 4 shows the convergence characteristics of the learning algorithm for the IEEE 24 bus system. Testing was done after every iteration during learning. Initially, the mean square error (MSE) for both the training and testing sets decreases gradually. But after some iterations, i.e., around 2000 iterations, the MSE for the testing examples increases even though the MSE for the learning examples still decreases, i.e., the network starts over fitting the training set from this point. Thus, the learning should be stopped at this point.
TABLE 1
INPUTS TO THE DIFFERENT NETWORKS
(columns: System Name, Day Type, η, α, Period for Training Set Samples, No. of Patterns for Training Set, Period for Test Set Samples, No. of Patterns for Test Set)
Supervised Learning
The ANNs used for forecasting the hourly load consist of five input neurons, two hidden layer neurons and one output layer neuron. Twenty-four separate ANNs, one for each hour's forecast, were trained using the input-output data pairs. Separate ANNs were trained for weekdays and weekends. Thus, a total of 48 ANNs were trained for each system. On the basis of a large number of simulations, optimal values of the learning coefficient (η) and momentum factor (α) used for training each ANN were obtained. After the convergence of the training algorithm, each ANN was tested using input-output
pairs from the test set data. The details of the training set and test set data, as well as the learning parameters η and α for a particular hour (10 AM), are presented in Table 1. The summary of the ANN forecasts for one day (24 hours) is presented in Table 2.

TABLE 2
SUMMARY OF ANN FORECASTING RESULTS

System Name | Index           | Peak    | Valley  | Max forecast error (%) | Av. forecast error (%)
IEEE-24 Bus | Hour            | 11th    | 4th     | 1.945                  | 0.748
            | Act. value (MW) | 1982.37 | 1149.76 |                        |
            | Pred. value (MW)| 2011.63 | 1149.63 |                        |
            | Error (%)       | 1.476   | 0.011   |                        |
IEEE-24 Bus | Hour            | 21st    | 7th     | 1.881                  | 0.913
            | Act. value (MW) | 1931.26 | 1197.25 |                        |
            | Pred. value (MW)| 1919.29 | 1186.99 |                        |
            | Error (%)       | 0.62    | 0.857   |                        |
OSEB        | Hour            | 19th    | 15th    | 2.627                  | 1.224
            | Act. value (MW) | 1056.04 | 724.973 |                        |
            | Pred. value (MW)| 1045.78 | 726.998 |                        |
            | Error (%)       | 0.97    | 0.279   |                        |

TABLE 3
DESCRIPTION OF TRAINING AND TESTING DATA SET FOR ANN GENERALISATION TEST

Case     | Training Set Data                      | Testing Set Data                       | Remark
Case I   | Taken randomly from regions B & C      | Taken randomly from regions A & D      | Testing extrapolating ability in both directions
Case II  | Taken randomly from regions C & D      | Taken randomly from regions A & B      | Testing extrapolating ability in the upward direction
Case III | Taken randomly from regions A & B      | Taken randomly from regions C & D      | Testing extrapolating ability in the downward direction
Case IV  | Taken randomly from regions A, B, C & D| Taken randomly from regions A, B, C & D| Testing interpolating ability of the ANN
Testing the Generalisation Property
In order to test the generalisation property, i.e., the extrapolation and interpolation capability of the network, in more detail, the hourly load data was divided into four groups, i.e., A, B, C and D. Four distinct training and testing sets were prepared as detailed in Table 3, and the results are presented in Table 4. From Table 4 it can be seen that the network is able to perform both interpolation and extrapolation quite well, with less than 5% average error. The extrapolation ability is of particular interest, as it shows that the network can predict even for unknown situations. Only in a few stray cases have the errors exceeded 5%.
TABLE 4
RESULTS FROM GENERALISED NETWORKS