Monitoring live virtual resources by means of benchmarking
Access status:
USyd Access
Type
Thesis
Thesis type
Doctor of Philosophy
Author/s
Chaudry, Rabia
Abstract
Many web information systems and applications now run as cloud-hosted systems. Consumers often rely on the cloud provider for notification of any disruption to the offered services, but these notifications are not always timely. We propose that consumers measure the platform resources themselves (a procedure called benchmarking). This permits the client to observe the available capacity and to watch for changes that might signal disruption. We carried out a long-running experiment benchmarking a number of cloud resources on Amazon Cloud from a consumer's perspective, and we compared the observations to the service disruptions announced by the cloud provider. We found that the performance of the cloud can vary significantly over time, which leads to unpredictable application performance. We showed that benchmarking data can help detect failures before any announcement is made by the provider, as well as significant degradation of performance that is not always connected with Amazon service disruption announcements. Because benchmarking, or measuring the capacity of resources, induces complicated interactions between the measurements and the applications running in the consumers' instances, we study these interactions and propose (and evaluate) approaches to limit them: we show how to statistically improve the benchmarks' estimate of capacity to account for application load, and how to mitigate the impact of the benchmarking load on application latency. Our approach works well in a restricted setting.
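The consumer-side monitoring idea described in the abstract — repeatedly measure a resource yourself and flag when measured capacity drops well below its recent baseline — can be illustrated with a minimal sketch. This is not the thesis's actual benchmark suite or detection method; the micro-benchmark workload, window size, and threshold here are illustrative assumptions.

```python
import statistics
import time

def cpu_benchmark(n=200_000):
    """Tiny CPU micro-benchmark: time a fixed arithmetic workload.

    Returns an operations-per-second score (higher = more available
    capacity). Illustrative only -- real benchmarks would also probe
    disk, memory, and network.
    """
    start = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i * i
    elapsed = time.perf_counter() - start
    return n / elapsed

def detect_degradation(samples, window=10, threshold=0.5):
    """Flag sample indices that fall below (1 - threshold) of the
    median of the preceding `window` samples -- a crude signal of a
    possible service disruption, before any provider announcement.
    """
    alerts = []
    for i in range(window, len(samples)):
        baseline = statistics.median(samples[i - window:i])
        if samples[i] < (1 - threshold) * baseline:
            alerts.append(i)
    return alerts

# Simulated capacity scores: steady, then a sharp drop at index 10.
scores = [100.0] * 10 + [40.0, 100.0]
print(detect_degradation(scores))  # -> [10]
```

A rolling median baseline is used rather than a fixed reference because, as the abstract notes, cloud performance varies significantly over time; the threshold trades off false alarms against detection delay.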
Date
2016-11-11
Licence
The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission.
Faculty/School
Faculty of Engineering and Information Technologies, School of Information Technologies
Awarding institution
The University of Sydney