Electricity Consumption
James Foot* Valeriu Mihai**
*Faculty of Science and Technology, University of the Algarve (e-mail: a40650@ualg.pt).
** Faculty of Science and Technology, University of the Algarve (e-mail: a41635@ualg.pt).
Abstract: In this paper, we present an approach for short-term prediction of the electricity load
demand (ELD) of the Portuguese power grid. We apply a multilayer perceptron (MLP) artificial
neural network (ANN) to forecast the ELD in a one-step-ahead fashion, i.e., every 15 minutes, based on
historical (altered) data from the Portuguese power grid company Redes Electricas Nacionais (REN). In
designing an ANN-MLP for time series forecasting, the variables on which the model depends, namely
the numbers of input, hidden and output neurons, are important. There is no specific way to determine
these parameters other than an iterative process. To obtain the best prediction performance, ANN
models require an experimental approach to explore the ANN design space and to apply different
training strategies. The NN models are trained with the Levenberg-Marquardt algorithm. Different
experiments were carried out to show which parameters are crucial for good prediction accuracy
using a non-linear autoregressive (NAR) predictive model.
Keywords: Electricity load demand; Power grid; Multilayer perceptron; Artificial Neural Network;
Prediction; Forecast; Modeling; Non-linear autoregressive predictive model.
1. INTRODUCTION
Electricity load demand forecasting is an important
aspect for any modern energy company with respect to its
system management. Load forecasting can be used for
scheduling maintenance, reducing spinning reserve capacity,
and scheduling individual plant production, which improves
the reliability of the grid and reduces costs for the company and
the end consumer. There are several different kinds of
forecast length, depending on the objectives. Since the load is
sampled every 15 minutes, each forecast length corresponds to
the following number of measurements:

Forecast length      N of measurements
1 hour               4
24 hours (1 day)     96
48 hours (2 days)    192
1 week (7 days)      672
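The correspondence between forecast length and number of measurements above follows directly from the 15-minute sampling period; a minimal check:

```python
# Number of 15-minute measurements in each forecast horizon
# (illustrative check of the correspondence above; 4 samples per hour).
SAMPLES_PER_HOUR = 60 // 15

horizons_hours = {"1 hour": 1, "1 day": 24, "2 days": 48, "1 week": 7 * 24}
measurements = {name: h * SAMPLES_PER_HOUR for name, h in horizons_hours.items()}
print(measurements)  # {'1 hour': 4, '1 day': 96, '2 days': 192, '1 week': 672}
```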
5. RESULTS
Set          Percentage (%)   Number of points   Number of days
Training     70               12096              126
Validation   15               2592               27
Testing      15               2592               27
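The set sizes above follow from a contiguous 70/15/15 split of the full record (12096 + 2592 + 2592 = 17280 points, at 96 points per day); a short Python sketch of the arithmetic:

```python
# Reproduce the 70/15/15 contiguous split from the table above.
# 17,280 total points = 180 days x 96 points/day.
POINTS_PER_DAY = 96
total_points = 12096 + 2592 + 2592  # 17280

n_train = int(0.70 * total_points)        # 12096
n_val = int(0.15 * total_points)          # 2592
n_test = total_points - n_train - n_val   # 2592

print(n_train, n_val, n_test)
print(n_train // POINTS_PER_DAY, n_val // POINTS_PER_DAY,
      n_test // POINTS_PER_DAY)           # 126 27 27
```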
4. EXPERIMENT
As explained in the previous sections, there are three parameters
that can be changed in order to improve the performance: the number of
hidden layers, the number of neurons in each layer, and the
number of delays. With these parameters we conducted three
groups of experiments, in each of which we altered
only one of the parameters. Before starting the
experiments, we need to import the data from a .txt file into a
Matlab column vector in order to be able to pre-process it, i.e.,
normalize it between -1 and 1. The next phase was to
decide the values for the different parameters.
Our control experiment, experiment A, has 1 hidden
layer with 4 neurons and n = 3 delays.
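The pre-processing step is a plain min-max mapping into [-1, 1] (in Matlab this is what the NN Toolbox does internally via mapminmax); a minimal Python sketch, with function names of our own choosing and hypothetical load values:

```python
import numpy as np

# Illustrative sketch of the pre-processing step: scale the raw load
# values into [-1, 1] with a min-max mapping, and keep the bounds so
# that predictions can be mapped back to the original units.
def normalize(x):
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0, (x_min, x_max)

def denormalize(y, bounds):
    x_min, x_max = bounds
    return (np.asarray(y) + 1.0) * (x_max - x_min) / 2.0 + x_min

load = [5100.0, 4800.0, 6200.0, 5500.0]   # hypothetical MW values
scaled, bounds = normalize(load)
print(scaled.min(), scaled.max())         # -1.0 1.0
```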
Table 3. Parameters for each experiment

Experiment    Parameter            Variations
B, C          N of hidden layers   2, 3
D, E, F, G    N of neurons/layer   8, 16, 32, 64
H, I, J       N of delays          6, 9, 96
3.1 Training
After that we want to start training our ANN. To do this we
used some functions from the Matlab NN toolbox. First we
create the network using the function narnet (Beale
et al., 2014), which has inputs for the number of delays and the
hidden layer topology (number of hidden layers and number
of neurons in each layer). Then we use the preparets function (Beale
et al., 2014) to prepare the values for training and
simulation. After that we divide the data into the 3 sets
mentioned earlier, 70% for the training set, 15% for
validation and 15% for testing, using the function
divideblock (Beale et al., 2014). Next we used the train
function (Beale et al., 2014) to commence the training of the
ANN. This training function uses the LM algorithm.
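The narnet/preparets/divideblock pipeline is specific to the Matlab NN Toolbox; conceptually, it builds a lagged regressor matrix y(t-1)..y(t-n) for each target y(t) and splits the samples into contiguous 70/15/15 blocks. A rough Python analogue of that data preparation (function names here are our own, and a synthetic series stands in for the load data):

```python
import numpy as np

# Build the NAR regressor matrix: each row holds the n previous
# samples, and the target is the next sample (one-step-ahead).
def prepare_nar(series, n_delays):
    y = np.asarray(series, dtype=float)
    X = np.column_stack([y[i:len(y) - n_delays + i] for i in range(n_delays)])
    t = y[n_delays:]
    return X, t

# Contiguous 70/15/15 split, as divideblock does.
def divide_block(n_samples, ratios=(0.70, 0.15, 0.15)):
    n_train = int(ratios[0] * n_samples)
    n_val = int(ratios[1] * n_samples)
    return (slice(0, n_train),
            slice(n_train, n_train + n_val),
            slice(n_train + n_val, n_samples))

series = np.sin(np.linspace(0, 20, 103))   # stand-in for the load series
X, t = prepare_nar(series, n_delays=3)
tr, va, te = divide_block(len(t))
print(X.shape, t.shape)   # (100, 3) (100,)
```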
3.2 Outputs
The outputs of the training function are a series of plots that
display the performance of the ANN, that is, the MSE per
iteration; the root-mean-square error, which is used to measure
the difference between the values predicted by the model and
the values actually observed; the Time-Series Response, which
shows the error between the target and the output for each
of the 3 value sets; and the weight values for each of the
connections between the neurons.
At the end, all the data is restored to its original values, so
that the results are easier to interpret.
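The two error measures reported in the tables below are related by a square root; a minimal sketch with made-up target and output values:

```python
import numpy as np

# Mean squared error and root-mean-square error between target and
# model output; on de-normalized data the RMSE is in the original
# load units.
def mse(target, output):
    e = np.asarray(target, dtype=float) - np.asarray(output, dtype=float)
    return float(np.mean(e ** 2))

def rmse(target, output):
    return float(np.sqrt(mse(target, output)))

target = [100.0, 102.0, 98.0]   # hypothetical values
output = [101.0, 100.0, 99.0]
print(mse(target, output), rmse(target, output))  # 2.0 1.4142...
```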
Experiment A
Run       MSE          RMSE     Iterations
1         0.00081489   55.078   11
2         0.00077090   53.571   61
3         0.00080887   54.875   5
Average   0.00079822   54.508   26

Experiment B
Run       MSE          RMSE     Iterations
1         0.00077030   53.784   34
2         0.00079379   54.361   65
3         0.00080769   54.835   12
Average   0.00079059   54.327   37

Experiment C
Run       MSE          RMSE     Iterations
1         0.00078439   54.038   88
2         0.00075733   53.098   430
3         0.00078205   53.957   14
Average   0.00077459   53.698   177

Experiment D
Run       MSE          RMSE     Iterations
1         0.00077628   53.758   22
2         0.00077807   53.820   29
3         0.00077769   53.807   62
Average   0.00077735   53.795   38

Experiment E
Run       MSE          RMSE     Iterations
1         0.00078601   54.094   28
2         0.00079043   54.246   18
3         0.00079298   54.333   5
Average   0.00078907   54.224   17

Experiment F
Run       MSE          RMSE     Iterations
1         0.00073054   52.150   431
2         0.00073731   52.391   114
3         0.00073797   52.415   57
Average   0.00073527   52.319   201

Experiment G
Run       MSE          RMSE     Iterations
1         0.00073008   52.134   130
2         0.00073290   52.234   56
3         0.00073913   52.456   51
Average   0.00073404   52.275   79

Experiment H
Run       MSE          RMSE     Iterations
1         0.00072018   51.779   106
2         0.00073590   52.341   135
3         0.00072338   51.894   83
Average   0.00072649   52.005   108

Experiment I
Run       MSE          RMSE     Iterations
1         0.00073024   52.139   49
2         0.00073409   52.277   64
3         0.00075043   52.855   49
Average   0.00073825   52.424   54

Experiment J
Run       MSE          RMSE     Iterations
1         0.00038865   38.037   73
2         0.00039017   38.112   72
3         0.00041169   39.149   40
Average   0.00039687   38.433   62
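Each table averages three independent training runs; for instance, for the control experiment A the averages follow from the three run values:

```python
# Averages of the three runs of the control experiment A
# (values taken from the tables above).
mse_runs = [0.00081489, 0.00077090, 0.00080887]
rmse_runs = [55.078, 53.571, 54.875]
iter_runs = [11, 61, 5]

print(sum(mse_runs) / 3)               # ~0.00079822
print(round(sum(rmse_runs) / 3, 3))    # 54.508
print(round(sum(iter_runs) / 3))       # 26
```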
6. CONCLUSIONS
Regarding the experiments made in the previous section, we can
conclude that they are acceptable for the problem of ELD
forecasting. All the experiments produced valuable insight into
the workings of the MLP ANN and could be important for
future work. Starting with the first three experiments (A, B
and C), changing the number of hidden layers did improve the
performance of the NN, but not in a very significant way; on
the other hand it made the network more complex, as can be seen
from the increased number of iterations needed. Next we tried
changing the number of neurons in the single hidden layer
(experiments D, E, F, G) to see what would happen. Again,
compared with experiment A, the performance improved slightly, but
the complexity grew. In the last experiments (H, I, J) we
changed the number of delays; here we observed that when
we used one day's worth of delays (96) the error dropped
considerably, and although the number of iterations is higher than in
experiment A, it is still acceptable.
After this analysis, we can see that
there is clearly still room for improvement. For this paper we
only did a small number of experiments, which may not have been
enough to draw stronger conclusions.
Some suggestions to improve on our results
would be to use a genetic algorithm to generate the
network topology that yields the smallest error possible,
and to use additional inputs such as weekdays, weekends,
holidays, weather and temperature.
REFERENCES
Beale, M. H., Hagan, M. T., Demuth, H. B., (2014), Neural
Network Toolbox User's Guide, The MathWorks, Inc.,
Natick, MA.
Ferreira, P. M., Ruano, A. E., Pestana, R., (2010), Evolving
RBF Predictive Models to Forecast the Portuguese
Electricity Consumption, IFAC Conference on Control
Methodologies and Technology for Energy Efficiency.
Hagan, M. T., Menhaj, M. B., (1994), Training feedforward
networks with the Marquardt algorithm, IEEE Trans. on
Neural Networks, vol. 5, pages 989-993.
Møller, M. F., (1993), A scaled conjugate gradient
algorithm for fast supervised learning, Neural Networks
6(4), pages 525-533.
Ruano, A. E., Artificial Neural Networks, Centre for
Intelligent Systems, University of Algarve, pages 7-119.
Yassin, I. M., (2008), Face detection using artificial neural
network trained on compact features and optimized
using particle swarm optimization, M. S. thesis, Faculty
of Electrical Engineering, Universiti Teknologi MARA,
Shah Alam.
Yu, H., Wilamowski, B. M., (2011), Levenberg-Marquardt
training, The Industrial Electronics Handbook 5, pages
1-15.