Cloud Computing Labs: Workload Prediction and Resource Scaling for Web Applications in the Cloud

The primary purpose of this assignment is to get familiar with the most popular Infrastructure-as-a-Service platform, Amazon Web Services (AWS), and to study resource provisioning and optimization methods for web applications in the cloud. This assignment involves substantial command-line work and some coding.

Part 1: Accessing EC2 and S3 from the command line

Review the lecture about using AWS. Create your own AWS account and redeem the AWS coupon code assigned to you. You should be able to find the assigned code in the file "aws_code" in your home directory on nimbus17. Note that a credit/debit card is required to create the account.

Now, answer the following questions:

Question 1.1 Have you successfully started an instance from the command line? After the instance becomes stable, copy the output of the command "ec2-describe-instances instance_id" into the report.

Question 1.2 Use the boto APIs to implement a Python function start_instances(num_instances), where the parameter num_instances is the number of instances to create. This function should create the instances from the AMI ami-bba18dd2, wait until all of them reach the "running" state, and then return the list of instances. Paste your code here.
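As a starting point, here is a minimal sketch of the launch-and-wait logic. It assumes boto 2's EC2 API (run_instances returning a Reservation, and Instance.update() refreshing state); unlike the required signature, it takes the connection and a poll interval as extra parameters so it can be exercised without live AWS credentials.

```python
import time

def start_instances(conn, num_instances, ami='ami-bba18dd2', poll_interval=5):
    """Launch num_instances EC2 instances from the given AMI and block
    until every instance reports the 'running' state.

    conn is an EC2 connection, e.g. boto.ec2.connect_to_region('us-east-1').
    """
    reservation = conn.run_instances(ami,
                                     min_count=num_instances,
                                     max_count=num_instances)
    instances = reservation.instances
    # Poll until every instance has left the 'pending' state.
    while any(inst.state != 'running' for inst in instances):
        time.sleep(poll_interval)
        for inst in instances:
            if inst.state != 'running':
                inst.update()  # refresh this instance's state from EC2
    return instances
```

Your submitted version can hard-code the connection inside the function to match the required one-parameter signature.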

Question 1.3 Write a Python script that uses the boto APIs to find all the files in the bucket "wsu2014", print out the contents of the files, and copy the files to your own bucket. Do not delete your bucket until grading is complete. Paste your code here.
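A possible shape for this script, assuming boto 2's S3 API (Bucket.list(), Key.get_contents_as_string(), and the server-side Key.copy()); the connection is passed in as a parameter here only so the logic can be tested offline, and the destination bucket name is a placeholder for your own bucket.

```python
def copy_bucket_contents(conn, src_name='wsu2014', dst_name='your-own-bucket'):
    """List every key in the source bucket, print its contents, and
    copy it into the destination bucket.

    conn is an S3 connection, e.g. boto.connect_s3().
    """
    src = conn.get_bucket(src_name)
    dst = conn.get_bucket(dst_name)
    copied = []
    for key in src.list():
        print(key.name)
        print(key.get_contents_as_string())
        key.copy(dst.name, key.name)  # server-side copy into our bucket
        copied.append(key.name)
    return copied
```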

Question 1.4 How much time did you spend on this task?

Question 1.5 How useful is this task to your understanding of EC2 and S3? (low, medium, high)

Part 2: Monitoring instances and dynamic resource provisioning

This task has two subtasks.

Task 2.1 Implement a tool in Python to monitor the status of the instances you created. Use the start_instances function to create 2 instances and pass the instance information to the monitoring tool. The tool should periodically (e.g., every 5 seconds) print out the CPU usage of each instance.

Note that you can execute commands remotely over SSH using a Python SSH library such as Paramiko. Here is an example of using Paramiko. The system statistics can be obtained with the "mpstat" command, which is part of the "sysstat" package; install it manually on each instance before testing your program. A command executed via SSH returns its output as a string. Run "mpstat" manually first so you understand what the output looks like, then parse the returned string to extract the CPU usage.
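The parsing step above might look like the sketch below. It assumes the standard sysstat layout, where mpstat prints a header row containing "%idle" followed by an "all" row of per-field values; column positions vary between sysstat versions, so the code locates the "%idle" column by name rather than hard-coding an index.

```python
def parse_cpu_usage(mpstat_output):
    """Extract overall CPU usage (percent) from the string returned by
    running "mpstat" once over SSH.  Usage is computed as 100 - %idle
    taken from the 'all' CPU row."""
    lines = mpstat_output.strip().splitlines()
    # Header row names the columns; find where %idle sits.
    header = next(l for l in lines if '%idle' in l)
    idle_col = header.split().index('%idle')
    # The 'all' row aggregates over every CPU.
    all_row = next(l for l in lines if ' all ' in ' %s ' % l)
    idle = float(all_row.split()[idle_col])
    return 100.0 - idle
```

Your monitoring loop would then call this on the stdout string that Paramiko's exec_command returns for each instance.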

After you implement the tool, please answer the following questions:

Question 2.1 Paste your script here.

Question 2.2 How much time did you spend on this task?

Question 2.3 How useful is this task to your understanding of EC2 and boto programming? (low, medium, high)

Task 2.2 Dynamic resource provisioning/releasing is critical to economical and reliable web applications in the cloud. The key to this task is to predict the workload change and adjust the resources accordingly (e.g., either provisioning more nodes or releasing some nodes). We will use a simulator to test your workload prediction and resource provisioning strategies.

You can download the simulator and the "Cluster" (i.e., the simulated multi-node computer cluster) Python code. The simulator greatly simplifies the real multi-server web application scenario. Each line of the input file (sample workload 1 and sample workload 2) represents the total number of user requests received by the web application in one round; these requests are evenly distributed across the nodes in your cluster. Assume each node has 100 resource units and can therefore serve at most 100 requests per round. If a node receives x requests and x is greater than 100, then x-100 requests are discarded (i.e., counted as failures); if x is less than 100, then 100-x resource units sit idle. Idle resource units waste your money, and failures are penalized. Define the cost of your resource provisioning strategy as (idle_resource_units + 10*failures)/total_number_of_requests. The lower the cost, the better the strategy.
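To make the cost definition concrete, here is a small sketch of how it could be computed over a run. The function name and the even-distribution arithmetic are assumptions for illustration; the provided simulator performs the equivalent bookkeeping for you.

```python
def provisioning_cost(workloads, node_counts, capacity=100):
    """Compute (idle_resource_units + 10*failures) / total_requests.

    workloads[i] is the total number of requests in round i, and
    node_counts[i] is how many nodes were provisioned for that round.
    Requests are spread evenly across the nodes, as in the simulator.
    """
    idle = failures = total = 0.0
    for requests, nodes in zip(workloads, node_counts):
        total += requests
        per_node = requests / float(nodes)
        if per_node > capacity:
            # Each node drops whatever exceeds its capacity.
            failures += (per_node - capacity) * nodes
        else:
            # Unused capacity on each node counts as idle units.
            idle += (capacity - per_node) * nodes
    return (idle + 10 * failures) / total
```

For example, one node serving exactly 100 requests in a round has cost 0, while one node facing 200 requests drops 100 of them, giving cost 10*100/200 = 5.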

You do not need to change most of the file. The core task is to implement the "Cluster.adjust" method. You can use any part of the workload history to predict the future workload and define the adjustment strategy (+x nodes, -y nodes, or keep unchanged). For example, you can use the average of the last k historical workload values as the predicted next workload value, or use linear regression (any short tutorial on linear regression will do if you want to learn more) to fit a function relating workload to time. Come up with at least two resource provisioning/releasing strategies, compare their costs on the two sample workloads in simulation, and report the best two strategies.
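The moving-average idea above can be sketched as a standalone helper; the real "Cluster.adjust" signature comes from the provided file, so treat this function name and its parameters as illustrative assumptions only. It predicts the next workload as the mean of the last k observations, then returns the node delta needed to just cover the prediction.

```python
def adjust_by_moving_average(history, current_nodes, k=5, capacity=100):
    """One possible strategy for Cluster.adjust: predict the next
    workload as the mean of the last k observed workloads and size the
    cluster to just cover it.  Returns the node delta (+x, -y, or 0)."""
    if not history:
        return 0  # no history yet: keep the cluster unchanged
    window = history[-k:]
    predicted = sum(window) / float(len(window))
    # ceil(predicted / capacity) nodes are needed, but keep at least one.
    needed = max(1, int(-(-predicted // capacity)))
    return needed - current_nodes
```

A second strategy for comparison could fit a line to the last k points and extrapolate one step ahead, which reacts faster to steady growth but overshoots on spiky workloads.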

You should answer the following questions:

Question 3.1 Briefly describe your best two resource provisioning strategies and paste your implementation of the "Cluster.adjust" method here.

Question 3.2 Report your simulation results for the top two strategies in the table below.

                  Strategy 1    Strategy 2
Workload 1 cost
Workload 2 cost

Question 3.3 Describe your design for a real dynamic resource provisioning tool that works with the AWS virtual machine instances.

Question 3.4 How much time did you spend on this task?

Question 3.5 How useful is this task to your understanding of dynamic resource provisioning for cloud-based web applications? (low, medium, high)

Final Survey Questions

Question 4.1 Your level of interest in this lab exercise (high, average, low);

Question 4.2 How challenging is this lab exercise? (high, average, low);

Question 4.3 How valuable is this lab as a part of the course (high, average, low);

Question 4.4 Are the supporting materials and lectures helpful for you to finish the project? (very helpful, somewhat helpful, not helpful);

Question 4.5 How much time in total did you spend in completing the lab exercise;

Question 4.6 Do you feel confident in applying the skills learned in this lab to solve other problems with AWS EC2 and S3?


Turn in the report answering all the questions to the Pilot project submission before the deadline. Also print out a hardcopy and turn it in in class.

Make sure that you have terminated all instances after finishing your work! This can be easily done with the AWS web console.

This page, first created: Feb 26 2015; last updated: Feb 26 2015