Developing and validating a practical methodology for cloud services evaluation

Date

2015

Authors

Li, Zheng

Abstract

By allowing consumers to access computing services without owning computing infrastructure, Cloud computing has emerged as one of the most promising computing paradigms in industry. An increasing number of commercial Cloud services from various providers are now available in the Cloud market. Because Cloud providers each have their own idiosyncratic characteristics when developing services, competing Cloud services are offered with differing terminologies, definitions, techniques and pricing models. Furthermore, even a single Cloud provider can supply different kinds of services with similar functionalities for different purposes. For example, Amazon provides several storage options, such as S3, EBS, and the local disk on EC2. As such, rigorous evaluation is a prerequisite for employing Cloud services. In fact, Cloud services evaluation is crucial and beneficial for all participants in the Cloud ecosystem, from service consumers (e.g., cost-benefit analysis) to providers (e.g., directions for improvement). When it comes to evaluation implementations, a suitable methodology plays a strategic role in directing evaluation activities. However, according to our systematic literature review, there is a lack of a sound methodology for guiding the evaluation of Cloud services. Although every existing study must have followed some approach, at least intuitively, few evaluators rigorously specified their evaluation methodologies. The evaluation approaches described in different reports vary widely, and some rest on flawed considerations. For example, some evaluators directly treated benchmarks as their evaluation methods, neglecting other important steps such as the selection of metrics and experimental factors.
To address the problems identified in the domain of Cloud services evaluation, we developed a systematic and practical methodology that we name the Cloud Evaluation Experiment Methodology (CEEM). The development of CEEM was mainly based on (1) investigating the existing empirical studies of Cloud services evaluation, (2) borrowing lessons from evaluations of traditional computing systems, and (3) following the guidelines of Design of Experiments (DOE). When evaluating Cloud services, CEEM drives a two-part evaluation procedure (a linear-process part and a spiral-process part) that covers ten generic steps. Each step has its own input, activities, strategies and output. The strategies integrated into CEEM mostly consist in using five knowledge artefacts we have developed, namely the Taxonomy, Metrics Catalogue, Boosting Metrics, Experimental Factor Framework, and Conceptual Model. By replicating two past studies of Cloud services evaluation, we show that CEEM helps evaluators obtain more objective experimental results and draw more convincing conclusions. By conducting four new case studies, we show that CEEM can be used by different people to systematically and practically evaluate different Cloud services under complex, real-world requirements. In particular, the rigorous activities and strategies at each evaluation step make CEEM systematic, while the domain-specific knowledge artefacts make it practical for Cloud services evaluation. Moreover, the evaluation activities involved in CEEM could conveniently be adapted to other computing domains by collating those domains' evaluation experiences.
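As a rough illustration of the structure described above, each CEEM step can be modelled as a unit with its own input, activities, strategies, and output, chained into a linear part followed by an iterative (spiral) part. The step names below are hypothetical placeholders, not the actual ten steps defined in the thesis; this is only a sketch of the two-part shape of the procedure.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationStep:
    """One generic evaluation step: input, activities, strategies, output."""
    name: str
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    strategies: list = field(default_factory=list)  # e.g. which knowledge artefact to consult
    outputs: list = field(default_factory=list)

@dataclass
class TwoPartProcedure:
    """A linear-process part followed by a spiral (repeatable) part."""
    linear_steps: list
    spiral_steps: list

    def run(self, spiral_iterations=1):
        # The linear part executes once; the spiral part may iterate.
        trace = [s.name for s in self.linear_steps]
        for _ in range(spiral_iterations):
            trace.extend(s.name for s in self.spiral_steps)
        return trace

# Hypothetical step names, for illustration only.
procedure = TwoPartProcedure(
    linear_steps=[EvaluationStep("Identify requirements"),
                  EvaluationStep("Select metrics", strategies=["Metrics Catalogue"])],
    spiral_steps=[EvaluationStep("Run experiment"),
                  EvaluationStep("Analyse results")],
)
print(procedure.run(spiral_iterations=2))
```

Under this toy model, the linear steps run once and the spiral steps repeat, which mirrors the abstract's description of a procedure that is partly sequential and partly iterative.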

Type

Thesis (PhD)

DOI

10.25911/5d5143224dc2e