
2013 IEEE Seventh International Symposium on Service-Oriented System Engineering

Cloud Client Prediction Models for Cloud Resource Provisioning in a Multitier Web Application Environment
Akindele A. Bankole and Samuel A. Ajila

Department of Systems and Computer Engineering, Carleton University 1125 Colonel By Drive, Ottawa K1S 5B6, ON Canada {aabankol,ajila}@sce.carleton.ca

Abstract—In order to meet Service Level Agreement (SLA) requirements, efficient scaling of Virtual Machine (VM) resources must be provisioned a few minutes ahead of need because of the VM boot-up time. One way to provision resources proactively is to predict future resource demands. In this research, we developed and evaluated cloud client prediction models for the TPC-W benchmark web application using three machine learning techniques: Support Vector Machine (SVM), Neural Networks (NN) and Linear Regression (LR). We included the SLA metrics for response time and throughput in the prediction model with the aim of providing the client with a more robust scaling decision choice. Our results show that Support Vector Machine provides the best prediction model.

Keywords: Cloud computing, Resource provisioning, Resource prediction, Machine learning

I. INTRODUCTION

The advent of cloud computing has enabled contemporary business owners (those with limited capital, for example) to rent the infrastructure resources or services needed to run their businesses in a pay-as-you-use manner [1]. Amazon [2], Google App Engine [3] and Salesforce [4] are leading cloud providers in the areas of Infrastructure, Platform and Software as-a-Service (IaaS, PaaS and SaaS) respectively. This trend is a departure from the traditional method of owning a data center that warehouses infrastructure. In trying to meet both client Service Level Agreements (SLA) for Quality of Service (QoS) and their own operating cost, cloud providers face the twin challenges of under-provisioning (a starvation of VM resources that leads to service degradation) and over-provisioning. Under-provisioning often leads to SLA penalties for the cloud provider and a poor Quality of Experience (QoE) for the cloud clients' customers. Over-provisioning, on the other hand, can lead to high energy consumption by the providers, culminating in high operating cost and wasted resources. Accurate virtual machine (VM) provisioning [8] is a challenging research area that can address these two extremes. VM boot-up time has been reported to span various durations, from between 5 and 10 minutes [9, 10] to between 5 and 15 minutes [6]. During this period of system and resource unavailability, requests cannot be serviced, which can lead to penalties for the cloud providers. Multiplying this lag time over several server instantiations in a data center can result in heavy cumulative penalties.
Cloud clients can take proactive steps to mitigate reputational loss by controlling their VM provisioning through the cloud provider's Application Programming Interface (API). One of the numerous strategies available to the client is a predictive approach, wherein insight into future resource usage (CPU, memory, network and disk I/O utilization) can inform scaling decisions ahead of time, thus compensating for the start-up lag [7]. In this study we evaluate three machine learning techniques: Neural Network (NN), Linear Regression (LR) and Support Vector Machine (SVM). In addition, we add business level SLA metrics (throughput and response time) as input parameters to the chosen prediction approaches. The motivation for this addition is the supposition that a web server, for example, need not be saturated for a breach of an SLA metric such as response time to occur; therefore, CPU-based scaling decisions alone may not achieve the goal of accurate VM provisioning. The scope of this work is the IaaS model, which offers developers more flexibility in their choice of programming language, as opposed to PaaS providers that restrict users to their platform's programming model (such as Java or Ruby on Rails) [10]. Thus, the research question is whether a Support Vector Machine model that incorporates SLA metrics is a better resource provisioning prediction model than Linear Regression and Neural Network. The contributions of this paper are, firstly, the evaluation of the resource usage prediction capability of Support Vector Machine, Neural Network and Linear Regression using three benchmark workloads from TPC-W [11]; and secondly, the extension of the prediction model to include business level SLA metrics, thus providing wider and better scaling decision options for clients. The rest of this paper is organized as follows: Section II discusses related work, while Section III presents our methodology. Section IV describes the experimental setup. We discuss our results in Section V, and finally present our conclusions and possible future work in Section VI.

II. RELATED WORK

Several authors have worked in the area of resource usage prediction. For example, [7] presented a resource usage prediction algorithm that used a set of historical data to identify similar usage patterns and predict future usage. Though they reported impressive prediction capabilities of 0.9 to 4.8% prediction error, there was no record of the metric used to arrive at this value. In addition, their prediction covered only the next 100 seconds; given the 5-10 minute boot-up time reported by [6, 9, 10], reduced QoS and QoE would be highly probable. The authors in [12] focused on estimating the resource requirements an application would need in a virtual environment, using utilization traces collected in the application's native environment. They included various workload mixes in their work, something a typical enterprise application would exhibit. Using CPU traces from the TPC-W and RUBiS applications, they employed Linear Regression to forecast future CPU utilization. While the authors reported a prediction error of less than 5% at the 90th percentile, they used only CPU as a metric and reported their inability to include response time in their prediction model. Using CPU alone as a scaling signal may be misleading [13], as an increase in CPU utilization may be due to inadequate memory or paging I/O. Islam et al. [6] analyzed the problem of resource provisioning from the application provider's point of view, so that hosted applications can make scaling decisions based on future resource usage. They employed a set of machine learning techniques (NN and LR) with both sliding and non-sliding window options. Though they reported an impressive prediction accuracy (PRED(25), the percentage of observations whose prediction error falls within 25% of the actual value) of about 85.7% using NN, they did not report testing of the trained prediction model. It is not uncommon to obtain impressive training prediction accuracy and poor test prediction accuracy; in fact, over-fitting or under-fitting is highly probable in the absence of a suitable, independent test dataset for model validation [15]. Furthermore, their scaling decisions were based only on CPU resource utilization, i.e. using the CPU's predicted future utilization to scale up VM instances; the accuracy of using a single metric has been faulted by [13]. Our work analyzes the problem of resource provisioning from the cloud client's perspective, giving the hosted application the ability to make scaling decisions by evaluating not only future resource utilization but also the business SLA metrics of response time and throughput, thus providing a tripartite auto-scaling decision matrix. The prediction model used to achieve this is a set of machine learning techniques: Neural Network, Linear Regression and Support Vector Machine. These techniques are evaluated using Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE) and PRED(25).

III. METHODOLOGY

In this section, we describe our methodology for predictive resource provisioning for multi-tier web applications, using machine learning to develop the performance prediction model. NN and LR have been widely explored by several authors in building prediction models [6, 10, 12, 13]. Recently, SVM, a powerful classification technique [14], has been gaining significant popularity in time series and regression prediction [14, 15, 16, 17]. We introduce these learning techniques below.

A. Linear Regression
This is one of the staple methods in statistics and it finds application in numeric prediction, especially where both the output (target class) and the attributes (features) are numeric [18]. The output is expressed as a linear combination of the attributes with preset weights:

$y = w_0 + w_1 a_1 + w_2 a_2 + \cdots + w_k a_k$  (1)

where $y$ is the target class, $a_1, \ldots, a_k$ are attribute values, and $w_0, \ldots, w_k$ are weights calculated from the training data [18]. We can write (1) compactly over the training set as:

$y^{(i)} = \sum_{j=0}^{k} w_j a_j^{(i)}, \quad i = 1, \ldots, n$  (2)

where $n$ and $k$ are the number of instances and the number of features respectively (with $a_0^{(i)} = 1$). The goal of linear regression is to choose the coefficients that minimize the sum of squared differences between the actual and predicted values over all the training instances.

B. Neural Network
This is a network of interconnected neurons that incrementally learn from their environment (data) to capture essential linear and nonlinear trends in complex data, so that reliable predictions can be made for new situations, even ones containing noisy or partial information [19]. For time series prediction, a NN captures temporal patterns in the data in the form of past memory embedded in the model, and then uses this to forecast future behavior. In our study, we have used the Multilayer Perceptron (MLP), as it is the best-known neural network for nonlinear prediction [19]. A typical MLP consists of an input layer, a hidden layer and an output layer of neurons, all linked by connections called weights.

C. Support Vector Machines (SVM)
SVM has the advantage of reducing the problems of over-fitting or local minima. In addition, it is based on structural risk minimization, as opposed to the empirical risk minimization of neural networks [15]. SVM also finds application in regression, where it is termed Support Vector Regression (SVR). The goal of SVR is to find a function that has at most $\varepsilon$ deviation (the precision by which the function is to be approximated [28]) from the actual targets for all training data, with as much flatness as possible [21]. Given training data $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, where each input $x_i \in \mathbb{R}^d$ and each output $y_i \in \mathbb{R}$, the linear regression model can be written as [20]:

$f(x) = \langle w, x \rangle + b$  (3)

where $f(x)$ is the target function and $\langle \cdot, \cdot \rangle$ denotes the dot product in $\mathbb{R}^d$. To achieve the flatness mentioned by [21], we minimize the norm of $w$, i.e. $\|w\|^2 = \langle w, w \rangle$. This can be written as a convex optimization problem:

minimize $\tfrac{1}{2}\|w\|^2$ subject to $|y_i - \langle w, x_i \rangle - b| \leq \varepsilon$  (4)

Equation (4) assumes that there is always a function $f$ that approximates all pairs $(x_i, y_i)$ with $\varepsilon$ precision. However, this may not be obtainable, and thus [21] introduces slack variables $\xi_i, \xi_i^*$ to handle infeasible constraints, with equation (4) leading to:

minimize $\tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*)$

subject to $y_i - \langle w, x_i \rangle - b \leq \varepsilon + \xi_i$, $\langle w, x_i \rangle + b - y_i \leq \varepsilon + \xi_i^*$, and $\xi_i, \xi_i^* \geq 0$  (5)

The constant $C > 0$ determines the trade-off between the flatness of $f$ and the amount up to which deviations larger than $\varepsilon$ are tolerated. Equation (5) can be reformulated and solved to give the optimal Lagrange multipliers $\alpha_i, \alpha_i^*$, with

$w = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) x_i$  (6)

and $b$ obtained from the Karush-Kuhn-Tucker conditions, e.g. $b = y_i - \langle w, x_i \rangle - \varepsilon$ for any $i$ with $\alpha_i \in (0, C)$  (7)

The $x_i$ with nonzero $\alpha_i - \alpha_i^*$ are the support vectors; inserting (6) and (7) into (3) yields

$f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) \langle x_i, x \rangle + b$  (8)

This generic approach is usually extended to nonlinear functions by replacing $x$ with $\phi(x)$, a feature-space mapping that linearizes the relation between $x$ and $y$ [20]. Therefore, (8) can be rewritten as:

$f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) \, k(x_i, x) + b$  (9)

where $k(x_i, x) = \langle \phi(x_i), \phi(x) \rangle$ is the so-called kernel function.
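To make the kernel expansion in equation (9) concrete, the following minimal sketch (ours, not from the original experiments; it assumes Python with NumPy and scikit-learn) fits an RBF-kernel SVR on synthetic data and checks that the library's predictions match the explicit support-vector sum:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 3))                    # toy feature matrix
y = np.sin(X @ np.array([2.0, -1.0, 0.5])) + 0.1 * rng.normal(size=200)

gamma, C, eps = 0.5, 10.0, 0.01
model = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=eps).fit(X, y)

# Eq. (9): f(x) = sum_i (alpha_i - alpha_i*) k(x_i, x) + b, summed over
# the support vectors x_i only; dual_coef_ holds (alpha_i - alpha_i*).
K = rbf_kernel(X, model.support_vectors_, gamma=gamma)
f = K @ model.dual_coef_.ravel() + model.intercept_[0]

assert np.allclose(f, model.predict(X))            # matches the library
```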

IV. THE SETUP

For our experiments, we used a virtual machine (VM) web server and a database server. The VM is given a single processor, 1 GB memory, a 100 MB/s NIC and a 10 GB disk for a Linux based web server. The Windows based database server (running MySQL) is a dual-core AMD Athlon 64-bit Model 5000+ with 4 GB memory, a 100 MB/s NIC and a 230 GB disk. Our experimental setup is divided into: feature selection, historical data collection, feature reduction, data preprocessing (scaling and normalizing), and training and testing on the historical dataset.

A. Feature selection
Usually, prediction models are based on a continuous observation of a number of specific features [16]. From the study of computer system activity by [22], the following initial features were selected for the three target values (CPU utilization, response time and throughput):
i. Write transactions per second (wtps)
ii. Total number of packets received per second (rxpck/s)
iii. Total number of packets transmitted per second (txpck/s)
iv. Amount of free memory available
v. Amount of used memory in kilobytes
vi. System load average for the last minute
vii. System load average for the past 5 minutes
viii. Pages freed per second
ix. Context switches per second
x. Run queue length (number of tasks waiting for run time)
xi. Pages paged in from disk
xii. Pages paged out to disk
xiii. Number of tasks in the task list
xiv. Read transactions per second (rtps)
xv. Percentage of used swap space
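As a rough illustration (a sketch of ours, not the paper's tooling) of how such counters can be sampled on a Linux host, assuming the sysstat package used for data collection in Section IV-B is installed:

```python
import subprocess

# One 60-second averaged sample of the counters listed above. The flags
# follow sar(1): -b I/O (rtps, wtps), -n DEV network (rxpck/s, txpck/s),
# -r memory, -q load averages / run queue / task-list size, -B paging,
# -w context switches per second, -S swap usage.
SAR_CMD = ["sar", "-b", "-n", "DEV", "-r", "-q", "-B", "-w", "-S", "60", "1"]

def sample_once() -> str:
    """Run sar for a single 60 s interval and return the raw text report."""
    result = subprocess.run(SAR_CMD, capture_output=True, text=True, check=True)
    return result.stdout  # parse the per-activity sections downstream

if __name__ == "__main__":
    print(sample_once())
```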

B. Data collection using the TPC-W benchmark
The TPC-W [11] benchmark consists of a set of operations designed to exercise a web server/database system in a manner representative of a typical internet commerce application environment [24]. It has been used by several authors [5, 12, 24] for resource provisioning and capacity planning [6]. Similar to [6], we employed a Java implementation of TPC-W that emulates an online bookshop, and deployed the application in a two-tier architecture as depicted in Fig. 1. As the diagram shows, system resource metrics such as CPU and memory were collected from the web server, while response time and throughput were measured from the client's end. The client is used as a reference point, and TPC-W has a remote browser emulator (RBE) that allows a single node to emulate several clients. This is justified because the client emulator, the web server and the database server are all on the same network. In implementing this model on a cloud infrastructure, a load balancer within the cloud provider's infrastructure would be the reference point for measuring the SLA metrics. Response time in this context is the time lag between when a page request is made and the reception of the last byte of the HTML response page. Similarly, throughput is the total number of web interactions completed during an experimental run.

Fig. 1 Model implementation architecture
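As an illustration of these two SLA measurements, the sketch below (ours; the URL is a hypothetical placeholder, and this is a simple probe rather than the TPC-W remote browser emulator) times a page request to the last byte and counts completed interactions:

```python
import time
import urllib.request

def timed_request(url: str) -> float:
    """Seconds from issuing the page request to receiving the last byte."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        while resp.read(8192):              # drain the body to the last byte
            pass
    return time.monotonic() - start

# Hypothetical endpoint of the TPC-W web tier (placeholder URL).
URL = "http://webserver.example/tpcw/home"
latencies = [timed_request(URL) for _ in range(100)]
print(f"mean response time: {sum(latencies) / len(latencies):.3f} s")
print(f"web interactions completed: {len(latencies)}")   # throughput per run
```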

The TPC-W offers three workload mixes: Shopping, Browsing and Ordering. Details of the various mix characteristics can be found in [24]. In our work we have used a combination of the three mixes in every experimental run to create more realistic scenarios. By adjusting the number of emulated clients (from 100 to 1000) in a mix of linear and random patterns, we created a changing workload that sent requests to the web server continuously throughout the duration of the experiment. Using Ubuntu's sysstat package, we collected the data metrics (defined in Section A above) every 60 seconds. The duration of the entire experiment was approximately 170 minutes. These data were used to build the prediction model, from which forecasts can be made for the future resource requirements and SLA metrics of the web server.

C. Feature reduction
To avoid "junk in, junk out" [16], we employed Weka [25], a machine learning tool, to determine the relevance of each feature in an instance to the target class


(CPU, response time and throughput). Using the attribute selection functionality, we eliminated the least correlated attributes: rtps, run queue length, page in, page out, percentage of used swap space and number of tasks.

D. Data preprocessing
During this phase, the 11 remaining input features are scaled to values between 0 and 1. Normalization, or scaling, is carried out by finding the highest value of each feature within the 167-instance dataset and dividing all the values of that feature by this maximum. The main advantage of normalizing is to avoid attributes with greater numeric ranges dominating those with smaller numeric ranges [26].

E. Training of the dataset
We used the normalized sampled dataset to train the prediction model. First, we trained the model with CPU utilization as the target class using the three machine learning techniques discussed above. Next, using the same dataset, we trained a new model for both response time and throughput. We have used the metrics in Table 1 to evaluate both the training and testing results of our models.

1) Model 1 - CPU utilization
Neural Network: Using the Weka tool, we trained the model with the following parameters: learning rate = 0.7 [6], number of hidden layers = 1, momentum = 0.2 and epoch (training time) = 1000. The parameter selection is based on heuristics, as no mathematical formula or theory has been proposed for selecting the best parameters.
Linear Regression: We also used the Weka tool to train the model. The only parameter set was the ridge parameter, which was left at the default of 1.0E-8.
Support Vector Regression: SVR has four kernels that can be used to train a model: linear, polynomial, radial basis function (RBF) and sigmoid [15]. We tried the four different kernels, and RBF showed the most promising result with the least MAPE value [26]. Before training, we used grid parameter search for regression with cross validation (v-fold cross validation) [27] to estimate C and γ. Cross-validation is a technique used to avoid the over-fitting problem [15, 26]. The search range for C was between 2^-3 and 2^5, and that of γ between 2^-10 and 2^2; these ranges are purely heuristic [15, 26]. The search returns the optimal C and γ by using the mean square error to evaluate the accuracy of the various C and γ combinations. The best C and γ were 12 and 0.085 respectively. Using these parameters, we trained the model with the RBF kernel in the Weka machine learning tool; Table 2 shows the resulting evaluation metrics.

2) Model 2 - Response time and throughput
We approached the business SLA metrics in a similar way to Model 1; however, we used a single set of parameters to train for both response time and throughput. This was done to reduce the number of prediction models in the system; moreover, we obtained impressive results when the same parameters were used. For SVR, C and γ were 1 and 0.005 respectively. The NN values for learning rate, hidden layers and momentum were 0.7, 1 and 0.2 respectively. Finally, the ridge parameter for LR was 1.0E-8.
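A rough Python analogue of this training pipeline is sketched below. The experiments themselves used Weka; the scikit-learn estimators and the synthetic stand-in data are our assumptions, with Ridge approximating Weka's ridge parameter and MLPRegressor its multilayer perceptron:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(1, 100, (167, 11))   # stand-in for the 167 x 11 feature set
y = rng.uniform(0, 1, 167)           # stand-in for the normalized target

X_norm = X / X.max(axis=0)           # scale each feature by its maximum

# v-fold cross-validated grid search over C in 2^-3..2^5 and gamma in
# 2^-10..2^2, scored by (negative) mean squared error, as described above.
param_grid = {"C": 2.0 ** np.arange(-3, 6), "gamma": 2.0 ** np.arange(-10, 3)}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_squared_error", cv=5)
search.fit(X_norm, y)
svr = search.best_estimator_

# Rough analogues of the reported Weka NN and LR settings (the hidden
# layer width is our guess; Weka only fixes the number of layers).
nn = MLPRegressor(hidden_layer_sizes=(11,), solver="sgd",
                  learning_rate_init=0.7, momentum=0.2,
                  max_iter=1000).fit(X_norm, y)
lr = Ridge(alpha=1e-8).fit(X_norm, y)
```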
Table 1: Performance metrics and their calculations

Metric      Calculation
MAPE        $\frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$, where $y_t$ and $\hat{y}_t$ are the actual and predicted values respectively
RMSE        $\sqrt{\frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2}$
PRED(25)    (number of observations with relative error $\leq$ 25%) / (total number of observations)
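These three metrics translate directly into NumPy (a small sketch of ours):

```python
import numpy as np

def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return 100.0 * float(np.mean(np.abs((actual - predicted) / actual)))

def rmse(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Root mean square error."""
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def pred25(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Fraction of observations whose relative error is within 25%."""
    return float(np.mean(np.abs((actual - predicted) / actual) <= 0.25))
```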

F. Testing of the trained model
We used a training-to-testing ratio of 60%:40%, as this gave the optimal prediction output for our model. We adopted a 12-minute prediction interval to test our prediction model, based on the reports from previous works [9, 10] regarding VM boot-up time and motivated by the work of [6]. However, we also report the prediction trend at the 9th, 10th, 11th and 12th minutes, to check for consistency and reliability in the prediction models of SVR, NN and LR.
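One plausible way to realize an h-minute-ahead forecast from 60-second samples (a sketch of ours; the paper does not give this procedure explicitly) is to pair each feature row with the target observed h steps later and split the series chronologically 60/40:

```python
import numpy as np

def make_horizon_dataset(features: np.ndarray, target: np.ndarray, h: int):
    """Pair each 60 s feature row with the target observed h minutes later."""
    return features[:-h], target[h:]

def train_test_split_60_40(X: np.ndarray, y: np.ndarray):
    """Chronological 60%/40% split (no shuffling, to respect time order)."""
    cut = int(0.6 * len(X))
    return X[:cut], y[:cut], X[cut:], y[cut:]

# Example: a 12-minute-ahead dataset from a 167-sample trace.
X = np.random.rand(167, 11)
y = np.random.rand(167)
X12, y12 = make_horizon_dataset(X, y, h=12)
X_tr, y_tr, X_te, y_te = train_test_split_60_40(X12, y12)
```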

V. RESULTS AND DISCUSSION

The objectives of this research are to evaluate the accuracy of the selected machine learning techniques in forecasting future resource usage, and to integrate business level SLAs into the prediction model. The CPU utilization prediction model and the SLA (response time and throughput) prediction model are used to meet these objectives.

A. Results
The training and test results for Model 1 (CPU utilization) are displayed in Tables 2 and 3 respectively. In addition, Table 4 shows the 9-12 minute step prediction MAPE for the test dataset. For Model 2 (response time and throughput), Tables 5 and 7 show the metric results for the test dataset, while Tables 6 and 8 show the 9-12 minute step predictions.
Table 2: CPU utilization training performance metrics

Model    MAPE     RMSE    PRED(25)
SVR      16.15     6.75   0.77
NN       26.18     8.68   0.60
LR       18.07     6.72   0.74

Table 3: CPU utilization test performance metrics

Model    MAPE     RMSE    PRED(25)
SVR      16.84    12.21   0.84
NN       40.86    28.45   0.25
LR       22.01    16.18   0.68


Table 4: CPU utilization step prediction (MAPE)

Model    9-min    10-min   11-min   12-min
SVR      16.41    16.86    16.72    16.84
NN       42.67    30.20    33.02    40.86
LR       19.42    21.16    21.45    22.01

Table 5: Response time test dataset performance metrics

Model    MAPE     RMSE    PRED(25)
SVR      14.17    1.923   0.893
NN       13.35    1.742   0.911
LR       14.30    2.000   0.911

Fig. 3 Actual and Predicted CPU utilization - Linear Regression

Table 6: Response time step prediction (MAPE)

Model    9-min    10-min   11-min   12-min
SVR      14.17    14.32    14.33    14.17
NN       16.02    16.30    16.09    13.35
LR       13.48    13.87    13.56    14.30

B. Discussion
The actual CPU utilization observed in Figures 3, 4 and 5 shows a random utilization level. We simulated a realistic scenario where web users make requests in a random fashion; as one user's request terminates, another user initiates a request. It was during this random initiate-and-terminate cycle that we observed the drops and rises in CPU utilization. We summarize our results below.

Using the metrics defined in Table 1, the training results in Table 2 (Model 1) show that SVR outperforms NN and LR on the MAPE and PRED(25) metrics. As shown in Tables 2 and 3, the MAPE values for the training and test data are very close. This approach was also applied to the SLA models. The test dataset results demonstrate SVR's superior prediction metrics, as seen in Table 3. The NN model shows very poor metric values, below even those of LR. Figures 3, 4 and 5 display the graphs of actual and predicted CPU utilization for LR, NN and SVR respectively. SVR is the most responsive to a random or nonlinear user request pattern, displaying a strong generalization property. This, unsurprisingly, led to its excellent prediction consistency, as shown in Table 4. NN is very erratic: its MAPE moves from 42.67 at the 9th minute to 33.02 at the 11th minute and then leaps to 40.86 at the 12th minute, a difference of about 24% between the 11th and 12th minutes. Comparing this result with that of [6], the distinct difference is that they used a linear user request pattern, while we have approached the same problem from a random or nonlinear perspective. LR comes second to SVR in stability of prediction.

For the response time prediction model, NN turned out to be the best, with SVR close behind (Table 5). However, a closer look at the prediction steps from the 9th to the 12th minute in Table 6 shows that NN is still plagued by inconsistency, especially between the 11th and 12th minutes, where a 21% difference is recorded. In fact, LR shows a much better result than NN, and a slightly superior result to SVR between the 9th and 11th prediction steps. Fig. 6 shows the actual and predicted response time for SVR.

Fig. 4 Actual and Predicted CPU utilization - Neural Network

Fig. 5 Actual and Predicted CPU utilization - Support Vector Regression

Fig. 6 Actual and Predicted Response Time - Support Vector Regression

Table 7: Throughput test dataset performance metrics

Model    MAPE     RMSE    PRED(25)
SVR      10.67    1.370   0.875
NN       12.22    1.551   0.964
LR       22.01    2.865   0.607

Table 8: Throughput step prediction (MAPE)

Model    9-min    10-min   11-min   12-min
SVR      10.47    10.57    10.52    10.67
NN       11.86    12.04    12.10    12.22
LR       111.90   93.17    58.06    22.01

Finally, on the SLA metrics, SVR had the best overall prediction metrics for throughput (Table 7). Though NN had the highest PRED(25) value, preference is given to MAPE and RMSE, as PRED(25) deals with a range of values rather than specific values. Again, the prediction consistency of SVR is steady, as seen in Table 8. While NN showed more stable prediction accuracy, LR's step prediction metric is unacceptably inconsistent, with over 400% difference between the 9th and 12th minute prediction intervals.

From the two models (CPU utilization, and the response time and throughput SLA parameters), SVR is the most preferred model for forecasting. It has displayed a strong generalization property and an absence of over-fitting. Furthermore, its prediction accuracy is the most consistent over time compared to NN and LR.

VI. CONCLUSION AND FUTURE WORK

In this paper, we have built three forecasting models, using Linear Regression, Neural Network and Support Vector Regression, for a two-tier TPC-W web application. Aside from the traditional single-metric prediction using CPU utilization, we added two SLA metrics, response time and throughput, to the prediction model. The Support Vector Regression model displayed superior prediction accuracy over both the Neural Network and Linear Regression models in a 9 to 12 minute window, thus confirming our research question. Furthermore, the addition of business level SLA metrics (response time and throughput) to the prediction model paves the way for a three-fold combined decision matrix for adaptive resource allocation, specifically for scaling up VM infrastructure. We find this very useful, as response time and throughput may have degraded long before an application reaches its set CPU utilization threshold. As future work, we plan to implement and evaluate this model on a public cloud infrastructure and to extend it, so that cases of an unsaturated web server with a saturated database, or imminent saturation of both servers, can be forecast and provisioning made before SLA penalties become enforceable.

REFERENCES
[1] D. Hilley, "Cloud Computing: A Taxonomy of Platform and Infrastructure-level Offerings," Scholarly Materials and Research at Georgia Tech, Georgia Tech Library, April 2009.
[2] Amazon Elastic Compute Cloud (Amazon EC2), 2012. [Online]. Available: http://aws.amazon.com/ec2
[3] Google App Engine, 2012. [Online]. Available: https://developers.google.com/appengine
[4] Salesforce, 2012. [Online]. Available: http://www.salesforce.com
[5] W. Iqbal et al., "Adaptive resource provisioning for read intensive multi-tier applications in the cloud," Future Generation Computer Systems, vol. 27, pp. 871-879, November 2010.
[6] S. Islam et al., "Empirical prediction models for adaptive resource provisioning in the cloud," Future Generation Computer Systems, vol. 28, no. 1, pp. 155-165, January 2012.
[7] E. Caron, F. Desprez and A. Muresan, "Forecasting for Grid and Cloud Computing On-Demand Resources Based on Pattern Matching," in Proc. 2010 IEEE Second International Conference on Cloud Computing Technology and Science (CloudCom), pp. 456-463, November 2010.
[8] A. Quiroz et al., "Towards autonomic workload provisioning for enterprise Grids and clouds," in Proc. 10th IEEE/ACM International Conference on Grid Computing, pp. 50-57, October 2009.
[9] Amazon Elastic Compute Cloud (Amazon EC2) FAQs, 2012. [Online]. Available: http://aws.amazon.com/ec2/faqs
[10] J. Kupferman et al., "Scaling Into the Cloud."
[11] TPC, TPC-W Benchmark, Transaction Processing Performance Council (TPC), San Francisco, CA, USA, 2003.
[12] T. Wood et al., "Profiling and Modeling Resource Usage of Virtualized Applications," in Proc. ACM/IFIP/USENIX Middleware, 2008.
[13] S. Kundu, R. Rangaswami, K. Dutta and M. Zhao, "Application performance modeling in a virtualized environment," in Proc. IEEE 16th International Symposium on High Performance Computer Architecture (HPCA), pp. 1-10, January 2010.
[14] S. Kundu et al., "Modeling virtualized applications using machine learning techniques," in Proc. 8th ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments, pp. 3-14, London, UK, 2012.
[15] A. Khashman and N. I. Nwulu, "Intelligent prediction of crude oil price using Support Vector Machines," in Proc. IEEE 9th International Symposium on Applied Machine Intelligence and Informatics (SAMI), pp. 165-169, January 2011.
[16] G. E. Sakr et al., "Artificial intelligence for forest fire prediction," in Proc. IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 1311-1316, July 2010.
[17] P. Sangita B. and S. R. Deshmukh, "Use of support vector machine for wind speed prediction," in Proc. International Conference on Power and Energy Systems (ICPS), pp. 1-8, December 2011.
[18] I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Academic Press, USA, 2000.
[19] S. Samarasinghe, Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition. Auerbach Publications, USA, 2007.
[20] H. Guosheng et al., "Grid Resources Prediction with Support Vector Regression and Particle Swarm Optimization," in Proc. Third International Joint Conference on Computational Science and Optimization (CSO), vol. 1, pp. 417-422, May 2010.
[21] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, pp. 199-222, 2004.
[22] Comp-activ dataset, 2012. [Online]. Available: http://www.cs.toronto.edu/~delve/data/compactiv/compActivDetail.html
[23] C.-Z. Xu, J. Rao and X. Bu, "URL: A unified reinforcement learning approach for autonomic cloud management," Journal of Parallel and Distributed Computing, vol. 72, no. 2, pp. 95-105, February 2012.
[24] H. W. Cain et al., "An Architectural Evaluation of Java TPC-W," in Proc. Seventh International Symposium on High-Performance Computer Architecture, 2001.
[25] M. Hall et al., "The WEKA Data Mining Software: An Update," SIGKDD Explorations, vol. 11, no. 1, 2009.
[26] C.-W. Hsu, C.-C. Chang and C.-J. Lin, "A practical guide to support vector classification," Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, 2003. [Online]. Available: http://www.csie.ntu.edu.tw/cjlin/libsvm/
[27] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 27:1-27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
[28] N. Sapankevych and R. Sankar, "Time Series Prediction Using Support Vector Machines: A Survey," IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 24-38, May 2009.

