What the reader will learn:
• That ‘cloud computing’ is a relatively new term, and it is important to clearly define what we mean by it
• That cloud computing is a new delivery model for IT, but one that uses established IT resources
• That the concept of abstraction is critical to the implementation of cloud architectures
• That businesses will adopt cloud computing because it offers financial benefits and business agility, not because the technology is inherently ‘better’
1.1 What Is Cloud Computing?
Everybody seems
to be talking about cloud computing. As technology trends go, cloud computing
is generating a lot of interest, and along with that interest is a share of
hype as well. The aim of this book is to provide you with a sophisticated
understanding of what cloud computing is and where it can offer real business
advantage. We shall be examining cloud computing from historical, theoretical
and practical perspectives, so that you will know what to
use, in which situation, and when it will be most appropriate.
So first of all, just what is cloud computing? This isn’t such a silly question. So many things now attract the cloud computing badge that it is difficult to understand what cloud actually means.
In a nutshell, cloud computing
is a means by which computational power, storage, collaboration infrastructure,
business processes and applications can be delivered as a utility, that is, a
service or collection of services that meet your demands. Since services
offered by cloud are akin to a utility, it also means that you only pay for
what you use. If you need extra processing power quickly, it is available for
use in an instant. When you’ve finished with the extra power and revert back
to your nominal usage, you will only be billed for the short time that
you needed the extra boost. So you don’t need to invest in a lot of hardware to
cater for your peak usage, accepting that for most of the time it will be
underutilised. This aspect of the cloud is referred to as elasticity
and is an extremely important concept within cloud computing.
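To make the pay-for-what-you-use idea concrete, here is a minimal billing sketch in Python. The hourly rate and the usage trace are invented for illustration; real providers have their own tariffs and billing granularities.

```python
# Minimal sketch of utility-style billing: you pay for the server-hours
# you actually consume, not for the peak capacity you might need.
# The rate and the usage trace below are illustrative, not real prices.

RATE_PER_SERVER_HOUR = 0.10  # hypothetical price in dollars

def bill(usage_by_hour):
    """usage_by_hour: number of servers in use during each hour of a day."""
    return sum(servers * RATE_PER_SERVER_HOUR for servers in usage_by_hour)

# Nominal usage of 2 servers, with a brief burst to 20 servers for 2 hours.
usage = [2] * 22 + [20] * 2
print(f"Daily bill: ${bill(usage):.2f}")  # (22*2 + 2*20) * 0.10 = $8.40
```

Note that the two burst hours cost the same as renting the twenty servers for those hours, and nothing more; there is no idle capacity to pay for afterwards.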
That’s the short answer and not necessarily
the key to becoming an expert in cloud computing; for some extra information,
read on. To understand what makes a cloud different from other established
models of computing, we shall need to consider the conceptual basis of
computing as a utility and how technology has evolved to date.
1.2 Utility Computing
Utility computing was discussed by John McCarthy in the 1960s whilst he was working at the Massachusetts Institute of Technology (McCarthy 1983), and the concept was thoroughly expanded by Douglas Parkhill in his 1966 book The Challenge of the Computer Utility (Parkhill 1966).
Parkhill examined the nature of utilities such as water, natural gas and
electricity in the way they are provided to create an understanding of the
characteristics that computing would require if it was truly a utility. When we
consider electricity supply, for example, in the developed world, we tend to
take it for granted that the actual electrical power will be available in our
dwellings. To access it, we plug our devices into wall sockets and draw the
power we need. Every so often we are billed by the electricity supply company,
and we pay for what we have used. In the summer time, the daylight hours are
longer and we place less demand on devices that provide lighting, hot water or
space heating. During the winter months, we use electric lighting and space
heating more, and therefore, we expect our bills to reflect the extra usage
we make of the utility. Additionally, we do not expect the electricity to ‘run
out’; unless there is a power cut, there should be a never-ending supply of
electricity.
So the same goes for computing resources as a
utility. We should expect the resource to be available where we want, by
plugging into or by accessing a network. The resource should cater for our
needs, as our needs vary, and it should appear to be a limitless supply.
Finally, we expect to pay only for what we use. We tend to consider the
provision of utilities as services.
1.3 Service Orientation
The term service orientation refers to the clear demarcation of a function that
operates to satisfy a particular goal. For instance, businesses are composed of
many discrete services that should sustainably deliver value to customers now and
in the future. Utility companies offer their services in the form of energy
supply, billing and perhaps, as energy conservation becomes more widespread,
services that support a customer’s attempt to reduce their energy consumption.
The services that are offered to the consumer are likely to be aggregations of
much finer-grained services that operate internally to the business. It is this concept of abstraction,
combined with object-oriented principles such as encapsulation
and cohesion, that helps define services within an organisation.
Service-oriented architecture (SOA) utilises
the principle of service orientation to organise the overall technology
architecture of an enterprise. This means that technology is selected, specified and integrated to support an architectural model that is specified as a
set of services. Such an approach results in technologically unique
architectures for each enterprise, in order to realise the best possible chance
of supporting the services that the business requires. However, whilst the
overall architecture may appear bespoke, the underlying services are discrete
and often reusable and therefore may be shared even between organisations. For
instance, the processing of payroll information is common to most enterprises
of a certain size and is a common choice for service outsourcing to third-party
suppliers.
From an
organisation’s perspective, SOA has some key advantages:
• The adoption of the principles of service orientation enables commonly utilised functionality to be reused, which significantly simplifies the addition of new functionality, since a large portion of the existing code base is already present. Additionally, the emergence of standard protocols for service description and invocation means that the actual service is abstracted away from the implementation program code, so it doesn’t matter if the constituent parts of a newly composed service are implemented in different ways, as long as their specification conforms to a commonly declared interface contract (see the sketch after this list).
• Changes in business demand that require new services to be specified can be accommodated much more easily, and it is quicker to react to business market forces. This means that an SOA is much more fleet of foot, enabling new business opportunities to be explored quickly at less cost.
• The
abstraction of service also facilitates consideration of the enterprise’s
performance at the process level; quality of service (QoS), lead times and
defect rates become more obvious measures to observe and therefore targets to
specify, since the underlying complexity is shrouded behind a service
declaration.
• Tighter
integration along value chains is enabled, as a particular functionality can be
made available as a service between an enterprise and its satellite suppliers.
A supplier may deal with several business customers, and it might not be
practical to adopt a number of different systems to integrate with. SOA simplifies this by the publication of services that suppliers can ‘hook into’ with
their own systems. This has the added advantage that any changes to a
customer’s system are encapsulated behind the service description, and
therefore, no other modifications will be required from those that consume
that service.
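As an illustration of the interface contract idea, the Python sketch below shows two payroll services, implemented quite differently, that both conform to one declared contract, so a consumer can compose either interchangeably. All names (PayrollService, gross_pay and so on) are hypothetical; real SOA deployments typically declare such contracts with web service standards such as WSDL rather than in code.

```python
# Two payroll services implemented in different ways, both conforming to
# the same declared contract; the consumer neither knows nor cares which
# implementation it is composing. All names here are hypothetical.
from typing import Protocol

class PayrollService(Protocol):
    def gross_pay(self, employee_id: str) -> float: ...

class InHousePayroll:
    def __init__(self, annual_salaries: dict):
        self._salaries = annual_salaries

    def gross_pay(self, employee_id: str) -> float:
        return self._salaries[employee_id] / 12  # monthly pay from salary

class OutsourcedPayroll:
    def gross_pay(self, employee_id: str) -> float:
        return 2500.0  # in reality, a call to a third-party web service

def run_payroll(service: PayrollService, employee_id: str) -> None:
    # Works with any implementation that honours the contract.
    print(f"{employee_id}: {service.gross_pay(employee_id):.2f}")

run_payroll(InHousePayroll({"e42": 36000.0}), "e42")  # e42: 3000.00
run_payroll(OutsourcedPayroll(), "e42")               # e42: 2500.00
```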
Service orientation and its
architectural model SOA are key concepts for the realisation of utility
computing. Now, we shall consider some technological developments that can
support this realisation. Later, in Chap. 5 , we shall encounter SOA again, where you will
be building a Google App as an exemplar use of web services.
1.4 Grid Computing
Grid computing
emerged in the 1990s, as Ian Foster and Carl Kesselman suggested that access to
compute resources should be the same as connecting to a power grid to obtain
electricity (Foster and Kesselman 1999). The
need for this was simple: Supercomputers that could process large data sets
were prohibitively expensive for many areas of research. As an alternative, the
connection and coordination of many separate personal computers (PC) as a grid
would facilitate the scaling up of computational resources under the guise of a
virtual organisation (VO). Each user of the VO, by being connected to the grid,
had access to computational resources far greater than they owned, enabling
larger scientific experiments to be
conducted by spreading the load across multiple machines. Figure 1.1 gives a
brief overview of a grid architecture. A number of separate compute and storage
resources are interconnected and managed by a resource that schedules
computational jobs across the grid. The collective compute resource is then
connected to the Internet via a gateway. Consumers of the grid resource then
access the grid by connecting to the Internet.
As network speeds and storage
space have increased over the years, there has been a greater amount of
redundant computational resource that lies idle. Projects such as the Search
for Extraterrestrial Intelligence (SETI@HOME, http://setiathome.berkeley.edu/) have made use of this by
scavenging processing cycles from PCs that are either doing nothing or have low
demands placed upon them. If we consider how computer processors have developed
in a relatively short time span, and then we look at the actual utilisation of
such computational power, particularly in the office desktop environment,
there are a lot of processor cycles going spare. These machines are not always
used during the night or at lunch breaks, but they are often left switched on
and connected to a network infrastructure. Grid computing can harness this
wastage and put it to some predefined, productive use.
One characteristic of grid
computing is that the software that manages a grid should enable the grid to be
formed quickly and be tolerant of individual machines (or nodes) leaving at
will. If you are scavenging processor cycles from someone
else’s PC, you have to be prepared for them turning their
machine off without prior warning. The rapid setup is required so that a grid
can be assembled to solve a particular problem. This has tended to support
scientific applications, where some heavy analysis is required for a data set
over a short period, and then it is back to normal with the existing resources
when the analysis is done. Collaboration and contribution from participants have generally been on a voluntary basis, as is often the case for shared ventures in a research environment.
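The following Python sketch illustrates, under invented assumptions, the fault tolerance a grid scheduler needs: jobs are dispatched to volunteer nodes, and any job whose node disappears mid-task is simply re-queued rather than lost. The node names, leave probability and simulation loop are all hypothetical.

```python
# Jobs are handed to whichever volunteer node is chosen; if a node
# leaves the grid mid-job without warning, the job is re-queued rather
# than lost. Node names and the leave probability are invented.
import random
from collections import deque

def run_grid(job_ids, nodes, leave_probability=0.3):
    pending = deque(job_ids)
    results = {}
    while pending:
        job = pending.popleft()
        node = random.choice(nodes)
        if random.random() < leave_probability:
            pending.append(job)  # node switched off: re-queue the work
        else:
            results[job] = f"processed by {node}"
    return results

print(run_grid([f"job-{i}" for i in range(5)], ["pc-a", "pc-b", "pc-c"]))
```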
Whilst grid computing has started to realise the emergence of computing resources as a utility, two significant challenges have hindered its uptake outside of research. Firstly, the ad hoc, self-governing nature of grids has meant that it is difficult to isolate the effect of poorly performing nodes on the rest of the grid. This might occur if a node cannot process a job at a suitable rate or a node keeps leaving the grid before a batch job is completed. Secondly, the connection of many machines together brings with it a heterogeneous collection of software, operating systems and configurations that cannot realistically be considered by the grid software developer. Thus, grid applications tend to lack portability, since they are written with a specific infrastructure in mind.
1.5 Hardware Virtualisation
Hardware
virtualisation is a developing technology that is exploiting the continued
increase in processor power, enabling ‘virtual’ instances of hardware to
execute on disparate physical infrastructure. This technology has permitted
organisations such as data centres to improve the utilisation and management of
their own resources by building virtual layers of hardware across the numerous
physical machines that they own. The virtualisation layer allows data centre
management to create and instantiate new instances of virtual hardware
irrespective of the devices running underneath it. Conversely, new hardware can
be added to the pool of resource and commissioned without affecting the
virtualised layer, except in terms of the additional computational power/storage/memory
capability that is being made available. Figure
1.2 illustrates the key parts of a
virtualised architecture. Working from the physical hardware layer upwards, firstly there is a hypervisor. The role of the hypervisor is to provide a means by which virtual machines can access and communicate with the hardware layer, without the need for a host operating system to be installed. On top of the hypervisor, virtual machines (VM) are installed. Each VM appears to function as a discrete computational resource, even though it does not physically exist. A guest operating system (OS) is installed upon each VM, thus enabling traditional computing applications to be built on top of the OS.
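A toy model of this layering, sketched in Python under assumed names, may help: physical hosts contribute capacity to a pool, VMs are allocated against the pool, and new hardware can be commissioned without disturbing running VMs. The ResourcePool class is illustrative only and does not correspond to any real hypervisor API.

```python
# Physical hosts contribute capacity to one pooled virtual resource; VMs
# are placed against the pool, and commissioning new hardware enlarges
# the pool without disturbing running VMs. Illustrative names only.

class ResourcePool:
    def __init__(self):
        self.capacity_cores = 0
        self.vms = {}  # VM name -> cores allocated

    def add_host(self, name, cores):
        # New hardware only grows the pool; existing VMs are unaffected.
        self.capacity_cores += cores

    def start_vm(self, name, cores):
        if sum(self.vms.values()) + cores > self.capacity_cores:
            raise RuntimeError("insufficient pooled capacity")
        self.vms[name] = cores

pool = ResourcePool()
pool.add_host("rack1-server1", cores=16)
pool.start_vm("web-vm", cores=4)
pool.add_host("rack1-server2", cores=16)   # transparent to web-vm
print(pool.capacity_cores, pool.vms)       # 32 {'web-vm': 4}
```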
Virtualisation offers three key advantages for data centre management. Firstly, applications can be confined to a particular virtual machine (VM) appliance, which increases security and isolates any detrimental effect of poor performance on the rest of the data centre. Secondly, the consolidation of disparate platforms onto a unified hardware layer means that physical utilisation can be better managed, leading to increased energy efficiency. Thirdly, virtualisation allows guest operating systems to be stored as snapshots to retain any bespoke configuration settings, which allows images to be restored rapidly in the event of a disaster. This feature also facilitates the capture of provenance data so that particular situations can be realistically recreated for forensic investigation purposes or to recreate a specific experimental environment. Virtualisation is discussed in more detail in Chap. 4.
1.6 Autonomic Computing
As computing technology becomes more complex, there is a corresponding desire to delegate as much management as possible to automated systems. Autonomic computing attempts to specify behaviours that enable the self-management of systems. Self-configuration, self-healing, self-optimising and self-protection (otherwise known as self-CHOP) are the four principles defined by IBM’s autonomic computing initiative (IBM Research 2012). If we consider the cloud computing concept of elasticity, we can see that the ‘resource-on-demand’ feature requires a variety of computational resources to be configured and, once running, optimised for performance.
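A minimal sketch of such a self-managing control loop appears below: it observes demand and then configures (adds capacity) or optimises (releases capacity) without human intervention. The thresholds and the demand trace are invented for illustration.

```python
# Observe utilisation each period, then self-configure (provision more
# capacity) or self-optimise (release surplus) with no human in the loop.
# Thresholds and the demand trace are invented for illustration.

def autoscale(demand_trace, capacity=2, high=0.8, low=0.3):
    for demand in demand_trace:
        utilisation = demand / capacity
        if utilisation > high:
            capacity += 1              # self-configuration: scale up
        elif utilisation < low and capacity > 1:
            capacity -= 1              # self-optimisation: scale down
        print(f"demand={demand:.1f} capacity={capacity}")
    return capacity

autoscale([1.0, 1.8, 2.6, 3.0, 1.2, 0.4])
```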
If we now consider a grid
architecture as a computational resource, then the operations described above
will need to take into account some more aspects particular to the technologies
involved, including disparate and heterogeneous hardware and software standards.
Finally, if we add to the mix hardware virtualisation, there will be a
requirement to instantiate and migrate virtual machines (VM) across disparate
hardware, dynamically as demand dictates. Such is the complexity of myriad
physical and virtualised hardware architectures and software components, that
it is essential that this management is automated if true, seamless elasticity
is to be realised.
We have now explored the key concepts and technologies that have shaped the emergence of cloud computing, so we shall now explore a more formal definition and observe how this informs the present description of cloud computing architectures.
1.7 Cloud Computing: A Definition
It won’t take you long to find a number of ‘definitions’ of cloud computing. The World Wide Web is awash with attempts to capture the essence of distributed, elastic computing that is available as a utility. There appears to be some stabilisation occurring with regard to an accepted definition, and for the purposes of this book, we’ll be persevering with that offered by the National Institute of Standards and Technology (NIST):
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.
The essential
characteristics that NIST’s definition refers to are as follows:
• On-demand self-service. Traditionally, hosted computing has enabled consumers to outsource the provision of IT infrastructure, such as data storage, so that hardware purchases could be minimised. However, whilst these solutions allowed customers to increase the storage available without purchasing any extra hardware, the request for data storage was typically an order that was fulfilled some time later. The time lag between request and actual availability meant that such increases had to be planned for and could not be depended upon as a reactive resource. Cloud computing should incorporate sufficient agility and autonomy that requests for more resource are automatically and dynamically provisioned in real time, without human intervention.
• Broad network access. As a utility,
cloud computing resources must be available over networks such as the Internet,
using established mechanisms and standard protocols. Access devices can include
(though are not limited to) personal computers, portable computers, mobile
phones and tablet devices.
• Resource pooling. This characteristic
brings together aspects of grid computing (where multiple compute resources are
connected together in a coordinated way) and hardware virtualisation. The
virtualised layer enables the resources of a cloud computing provider to be
pooled together into one large virtual resource, enabling large-scale efficiencies to be achieved by the dynamic management of hardware and virtualised
resources. This results in the appearance of homogenous resources to the
consumer, without indicating the physical location or granularity of that
resource.
• Rapid elasticity. Requests for extra
resource are self-managed and automatic in relation to demand. From the
consumer’s perspective, the supply of compute resources is limitless.
• Measured service. In the same way that
energy usage can be monitored, controlled and reported, cloud computing
resource providers dynamically optimise the underlying infrastructure and
provide a transparent metering service at a level of abstraction that is
relevant to the consumer.
One theme that is emerging here is that of
abstraction; the characteristics above are reliant upon a fundamental
architecture of hardware resources that are discrete and varied, upon which
there is an abstraction layer of software that realises the characteristics of
cloud computing. The physical hardware resource layer includes processor,
storage and networking components, and the abstraction layer consists of at
least a self-managed virtualisation infrastructure.
1.8 Cloud Computing Service Models
Of course, in cloud-speak we refer to services, and there are three categories of service model described by NIST, as illustrated in Fig. 1.3. Working from the physical layer upwards, the first service model layer is known as Infrastructure as a Service (IaaS).
IaaS is usually the lowest level service
available to a cloud computing consumer and provides controlled access to a
virtual infrastructure upon which operating systems and application software
can be deployed. This can be seen as a natural extension of an existing
hardware provision, without the hassle and expense of buying and managing the
hardware. As such, there is no control over the physical hardware, but the
consumer retains control over operating system parameters and some aspects of
security. There is a trend emerging for ‘bare metal’ services, where access to
the hardware at its most basic is provided, but this is more akin to
traditional data centre or ‘hosting’ services. For the majority of potential
cloud consumers, there is a desire to move away from as much of the detail as
possible and therefore progress upwards through the cloud service model stack.
Platform as a Service (PaaS) sits atop IaaS. This
layer is ready for applications to be deployed, as the necessary operating
system and platform-related tools such as language compilers are already
installed and managed by the cloud computing provider. Consumers may be able to
extend the existing tool set by installing their own tools, but absolute
control of the infrastructure is still retained by the provider. Thus, the
consumer has control over application development, deployment and configuration, within the confines of the hosted environment. This situation has
most in common with traditional web hosting, where consumers rented remote
servers that had existing
development platforms installed upon them. The key
difference with cloud computing in this case, however, is the rapid
provisioning or elasticity; classic web hosting relied upon manual management
of provisioning and therefore required human intervention if demand increased
or decreased.
Finally (for the NIST definition), there is
Software as a Service (SaaS). This service model abstracts the consumer away
from any infrastructure or platform level detail by concentrating upon the
application level. Applications are available via thin client interfaces such
as internet browsers or program interfaces such as mobile phone apps. Google’s
Gmail is one popular example of a cloud computing application. An organisation
can adopt Gmail and never concern itself with hardware maintenance, uptime,
security patching or even infrastructure management. The consumer can control
parameters within the software to configure specific aspects, but such
interventions are managed through the interface of the application. The end
user gets an email service and does not worry as to how it is provided.
So far, we have described the essential characteristics of cloud computing and then three different service models. As the abstraction concept develops, consumers are finding new ways of using cloud computing to leverage business advantage through the creation of a Business Process as a Service model (BPaaS). Strictly speaking, this sits within SaaS and is not a fourth layer which would fall outside of the NIST definition. We shall revisit this service model later in Chap. 4, so for the time being, we shall consider the models by which cloud computing can be deployed.
1.9 Cloud Computing Deployment Models
A public cloud, as its name implies, is
available to the general public and is managed by an organisation. The organisation
may be a business (such as Google), academic or a governmental department. The
cloud computing provider owns and manages the cloud infrastructure. The
existence of many different consumers within one cloud architecture is referred
to as a multi-tenancy model.
Conversely, a private
cloud has an exclusive purpose for a particular organisation. The cloud
resources may be located on or off premise and could be owned and managed by
the consuming organisation or a third party. This may be an example of an
organisation who has decided to adopt the infrastructure cost-saving potential
of a virtualised architecture on top of existing hardware. The organisation
feels unable to remotely host their data, so they are looking to the cloud to
improve their resource utilisation and automate the management of such
resources. Alternatively an organisation may wish to extend its current IT
capability by using an exclusive, private cloud that is remotely accessible and
provisioned by a third party. Such an organisation may feel uncomfortable with
their data being held alongside a potential competitor’s data in the
multi-tenancy model.
Community
clouds are a model of cloud computing where the resources exist for a
number of parties who have a shared interest or cause. This model is very
similar to the single-purpose grids that collaborating research and academic
organisations have created to conduct large-scale scientific experiments (e-science). The cloud is owned
and managed by one or more of the collaborators in the community, and it may
exist either on or off premise.
Hybrid
clouds are formed when more than one type of cloud infrastructure is
utilised for a particular situation. For instance, an organisation may utilise
a public cloud for some aspect of its business, yet also have a private cloud
on premise for data that is sensitive. As organisations start to exploit cloud
service models, it is increasingly likely that a hybrid model is adopted as the
specific characteristics of each of the
different service models are harnessed. The key enabler here is the open
standards by which data and applications are implemented, since if portability
does not exist, then vendor lock-in to a particular cloud computing provider
becomes likely. Lack of data and application portability has been a major hindrance to the widespread uptake of grid computing; open standards are one aspect of cloud computing that can facilitate much more flexible, abstract architectures.
At this point, you should now have a general
understanding of the key concepts of cloud computing and be able to apply this
knowledge to a number of common use cases in order to hypothesise as to whether
a particular cloud service model might be appropriate. The next part of this
chapter will dig a bit deeper into the deployment models and explore some finer-grained challenges and opportunities
that cloud computing presents.
1.10 A Quick Recap
Before we proceed, let us just quickly summarise what we
understand by cloud computing:
• It’s
a model of computing that abstracts us away from the detail. We can have broad
network access to computing resources without the hassle of owning and
maintaining them.
• Cloud
computing providers pool resources together and offer them as a utility.
Through the use of hardware virtualisation and autonomic computing
technologies, the consumer sees one homogenous, ‘unlimited’ supply of compute
resource.
• Computing
resources can be offered at different levels of abstraction, according to
requirements. Consumers can work at infrastructure level (IaaS) and manage
operating systems on virtualised hardware, at platform level (PaaS) using the
operating systems and development environments provided, or at application
level (SaaS), where specific applications are offered by the provider to be configured by the consumer.
• Cloud
computing provides metered usage of the resource so that consumers pay only for
what they use. When the demand for more computing resource goes up, the bill
increases. When the demand falls, the bill reduces accordingly.
• Cloud
computing can be deployed publicly in a multi-tenancy model (public cloud),
privately for an individual organisation (private cloud), across a community
of consumers with a shared interest (community cloud), or a
mixture of two or more models (hybrid cloud).
1.11 Beyond the Three Service Models
The explanations and discussions so far have allowed
us to gain a broad understanding of cloud computing. However, like most things
in life, it isn’t that simple. When chief executive officers declare that an organisation will embrace ‘the cloud’, the chief information officer (CIO)
may be less enthusiastic. We shall now consider more deeply some of the
business drivers and service models for cloud adoption and explore the issues
that these drivers can present.
1.11.1 The Business Perspective
Large IT vendors have realised for some time that new technology is sold most successfully on its ability to improve profitability. Grid computing and service-oriented architecture (SOA) are two relatively recent examples. Grid computing has demonstrated massive benefits when disparate compute resources are harnessed together to do supercomputing on the cheap. The problem was that the software and protocols that made these large distributed systems perform were inaccessible to those outside of the grid community. Vendors such as IBM and Oracle have both attempted to sell the advantages of grid computing to business, but the lack of realisation of the concept of utility (which informed the selection of the name ‘grid’) has meant that insufficient consumers were interested and the ultimate benefits could not be enjoyed.
SOA has had a similar ‘reduce your business
costs’ drive over the years, with many organisations reporting an overall
increase in expenditure after the costs of migrating to SOA have been accounted
for. So what is different about cloud computing?
One of the attractions of cloud computing is
the rapid provisioning of new compute resources without capital expenditure. If
the marketing director makes claims about a new market niche, then it is much
more cost-effective to experiment with new products and services, since cloud
computing removes traditional barriers such as raising capital funds, lengthy procurement
procedures and human resource investment. Also, if cloud computing is already
part of the organisation’s IT infrastructure, then new requests merely become
additional demands upon the systems, rather than designing and specifying new
systems from scratch. Business agility is therefore one key driver for the
adoption of cloud computing.
The other key business driver is the potential reduction in ongoing capital expenditure costs afforded by cloud computing. As the use of IT becomes more sophisticated, greater demands are placed upon IT fundamentals such as data storage, and if the requirements fluctuate significantly, the pay-per-use model of cloud computing can realise operational savings beyond the costs of the potential extra hardware requirement.
1.12 When Can the Service Models Help?
1.12.1 Infrastructure as a Service
As described earlier, IaaS is about servers,
networking and storage delivered as a service. These resources will actually be
virtualised, though the consumer wouldn’t know any different. The resources may
come with or without an operating system. IaaS is a form of computing rental
where the billing is related to actual usage, rather than ownership of a
discrete number of servers. When the consumer wants more ‘grunt’, the IaaS
management software dynamically provisions more resources as required.
Typically, there will be an agreed limit between the consumer and the provider,
beyond which further authorisation is required to continue scaling upwards (and
thus incur extra cost). IaaS is particularly suited to organisations who want
to retain control over the whole platform and software stack and who need extra
infrastructure quickly and cheaply. For instance, the research and development
department of an organisation may have specific applications that run on
optimised platforms. Sporadically, applications are required to process massive
data sets. Using a cloud, it would cost the same to have 500 processors run for
1 hour, as it does to run 1 processor for 500 hours, so the research unit opts
for speed without having to invest in hardware that would be nominally
underutilised.
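The cost symmetry claimed here is easy to verify, assuming a flat (hypothetical) rate per processor-hour with no volume discounts:

```python
# A flat rate per processor-hour (a made-up figure) makes the two options
# cost the same; the only difference is the elapsed time.
rate = 0.05  # dollars per processor-hour, illustrative only

fast = 500 * 1 * rate    # 500 processors for 1 hour
slow = 1 * 500 * rate    # 1 processor for 500 hours
assert fast == slow      # both come to $25.00
print(f"${fast:.2f} either way")
```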
1.12.2 Platform as a Service
PaaS has parallels with web hosting, in that it is a
complete set of software that enables the complete application development life
cycle within a cloud. This includes the tools for development and testing as
well as the actual execution environment. As with IaaS, the resources are
dynamically scaled, and for the most part, this is handled transparently by the
cloud provider without making any extra demands upon the developer. For
specialist applications that require low-level optimisation, either IaaS or a
private cloud is more suitable. One of the potential drawbacks of PaaS is lack
of portability and therefore vendor lock-in, as you are developing applications
with the tool sets that are supplied by the cloud provider. If, at a later
date, you would like to move provider or you want to use another cloud service
concurrently, there may be a substantial effort required to port your
application across to another vendor’s cloud platform. PaaS is a good option if
your existing application’s development environment is matched by that of a
cloud provider or if you would like to experiment with new products and
services that can be rapidly composed from pre-existing services that are
provided by the platform.
1.12.3 Software as a Service
In some ways, SaaS is the easiest way into cloud
computing. You see some software and you try it out for a limited time. If you
like it, you continue and start paying to use it, otherwise you look for
something else. The software automatically scales to the number of users you
have (but you don’t notice this), and your data is backed up. You will probably
have to invest a bit of time in getting your existing data into the
application, and any tweaks to existing systems that you have may also require
some work to get them to connect to your new cloud application. SaaS is useful
if you are in the situation whereby a legacy application you own has been
replicated by a SaaS provider or if a particular SaaS application offers a
capability that you don’t currently have but can see the business benefit of having it. Customer Relationship
Management (CRM) is one example; many organisations operate without CRM systems
as they can be expensive and it can be difficult to justify the initial
investment. Salesforce.com saw the opportunity to bring enterprise-level CRM to
the masses via SaaS and has subsequently opened up their own platform,
Force.com, as part of a PaaS service model.
Applications like CRM SaaS have enabled organisations to abstract themselves away from the infrastructure headaches, and as a result, they can think more about the actual business workflows that take place. Whilst it would seem that SaaS is all about pre-packaged software, the vendors have realised that consumers should be able to configure these offerings so that the application can be suitably customised to integrate with existing systems. This has led to a new interest in the abstraction of business process management (BPM), whereby organisational units create high-level process descriptions of their operations, within software that interfaces the process descriptions to an underlying, transactional code base. This offers substantial benefits including:
• No
knowledge of the underlying program code is required.
• Process
descriptions are closer to the real operations and are easier to derive and
communicate between business users.
• Process optimisation and waste identification are simplified and easier to implement.
• Process
commonality is more visible, and therefore, process reuse is more prominent,
both internally within an organisation and outside of the normal process
boundaries with suppliers.
• Libraries of process descriptions enable the rapid composition of new processes.
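To illustrate the BPM idea, the Python sketch below treats a process description as a declarative list of named steps, with a thin layer mapping each step onto an underlying code base. The process, step names and functions are all invented; real BPM suites typically use graphical notations such as BPMN rather than code.

```python
# A process description is a declarative list of named steps; a thin
# layer maps each step onto the underlying transactional code base.
# The process, its steps and the functions are all invented.

def check_credit(order):  order["credit_ok"] = True
def reserve_stock(order): order["reserved"] = True
def raise_invoice(order): order["invoiced"] = True

STEP_IMPLEMENTATIONS = {
    "Check credit": check_credit,
    "Reserve stock": reserve_stock,
    "Raise invoice": raise_invoice,
}

# The high-level description a business user sees and can recompose:
order_to_cash = ["Check credit", "Reserve stock", "Raise invoice"]

def run_process(steps, order):
    for step in steps:
        STEP_IMPLEMENTATIONS[step](order)  # no program code knowledge needed
    return order

print(run_process(order_to_cash, {"id": 1}))
```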
From a conceptual stance, Business Process as
a Service (BPaaS) might be viewed as a fourth layer, above SaaS, but from an
architectural perspective, it is clearly a subset of SaaS as Fig. 1.4 illustrates.
BPaaS creates new opportunities for
organisations to exploit the cloud, as the abstraction away from technical and
integration issues gives organisations a new way to conduct their business.
This topic will be explored more fully in Chap. 10, which is all about enterprise cloud computing.
1.13 Issues for Cloud Computing
As with any new approach or technology, there are limits to the benefits that can be realised, and a new way of working may introduce additional risks. Cloud computing is no different in this respect, particularly as the model is still maturing.
From a consumer’s perspective there is a great
deal of focus upon security and trust. Many users are ambivalent about where
‘their’ data is stored, whereas other users (specifically organisations) are
more sceptical about delegating the location of the data along with the
management processes that go with it. For many smaller organisations, the cloud
computing providers will be bringing enterprise-level security to the masses as
part of the offering. Most private individuals and small businesses are unaware
of the risks of lost data and the adverse impact that it can have upon daily
operations. As a consequence, it is likely that they have not put the
appropriate security measures in place. In this case, a move towards the cloud
can bring real benefits.
However, there may be specific legislation that exists to govern the
physical location of data; a multi-tenanted public cloud may place your data in
a country that is outside the scope of the jurisdiction that you need to comply
with. Additionally, the notion of service as a core component of the cloud
leads to new service composition from readily available services. The use of
third-party services potentially introduces security and privacy risks, which
may therefore require an additional auditing overhead if the services are to be
successfully and reliably trusted.
Another concern is that of vendor lock-in. If
an organisation utilises IaaS, it may find that the platforms and
applications that it builds upon this service cannot be transferred to another
cloud computing provider. Similarly, services at the PaaS and SaaS levels can also introduce non-standard ways of storing and accessing data, making data or application portability problematic.
Quality of service (QoS) is an
issue that many organisations already face either as consumers or providers of
services. Whilst cloud computing providers offer
measurement and monitoring functions for billing, it
might be considered incumbent upon consumers to develop their own monitoring
mechanisms to inform any future actions.
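A minimal sketch of such consumer-side monitoring follows: each call to a cloud service is timed, and any breach of a target response time is flagged. The threshold and the stand-in service call are hypothetical.

```python
# Time each call to a remote service and flag breaches of a target
# response time; the threshold and stand-in service are hypothetical.
import time

def monitored_call(service_fn, threshold_s=0.5):
    start = time.perf_counter()
    result = service_fn()
    elapsed = time.perf_counter() - start
    if elapsed > threshold_s:
        print(f"QoS breach: {elapsed:.3f}s (target {threshold_s:.1f}s)")
    return result

# Stand-in for a slow cloud service call:
monitored_call(lambda: time.sleep(0.6))
```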
Much has been claimed about the
potential energy-saving opportunities of organisations moving to the cloud. The
ability to pool resources and dynamically manage how these resources are
provisioned will of course permit computing resource usage to be more
optimised. However, there is an assumption that this occurs at a certain scale,
and perhaps less obviously, it is dependent upon the service model required.
For instance, an IT department may decide to evaluate the potential of hardware
virtualisation as part of a private cloud. The hardware already exists, and the
maintenance costs are known. In theory, the more flexible provisioning that
cloud architectures offer should release some extra compute resources. In terms
of any investment in cooling, for example, then better utilisation of the
existing hardware will come cheaper than the purchase of additional
air-conditioning units.
Unfortunately, it is only through the provision of compute resources on a massive scale that significant amounts of resource can be redeployed for the benefit of others. The private cloud may be able to scavenge extra processor cycles for heavier computational tasks, but storage management may not be that different from that achieved by a storage area network (SAN) architecture. Thus, significant energy savings can only be realised by using the services of a cloud provider to reduce the presence of physical hardware on premise.
It follows, therefore, that it is the massive data centres offering SaaS that can maximise scalability whilst significantly reducing energy usage. For everyone else, energy reduction might not be a primary motivator for adopting a private cloud architecture.
Of course, as organisations move to the cloud, there is a heightened awareness of measures of availability and the financial impact that a temporary withdrawal of a service might incur. Good practice would suggest that there should be ‘no single point of failure’, and at first glance a cloud-based system would offer all the resource
redundancy that an organisation might want. However, whilst the IaaS, PaaS or
SaaS may be built upon a distributed system, the management and governance is
based upon one system. If Google or Microsoft went bust, then any reliance upon
their comprehensive facilities could be catastrophic. This risk gets greater
the higher up the cloud stack that the engagement occurs—if Salesforce.com
collapsed, then a great deal of an organisation’s business logic would
disappear along with the data, all wrapped up in a SaaS application.
Software bugs are a major concern for all software development activity, and many instances of ‘undocumented features’ occur only when an application is under significant load. In the
case of a distributed system, it is not always practical to recreate the open
environment conditions, so there remains the potential risk that something
catastrophic might occur. Hardware virtualisation can be a way of containing
the scope of software bugs, but as many SaaS vendors created their offerings
before the widespread use of virtualisation, this form of architectural
protection cannot be relied upon. This is clearly a case for open architectural
standards for cloud architectures to be established.
As cloud use increases, organisations will
place ever-increasing demands that present significant data transfer
bottlenecks. Additionally, the distributed architecture of a cloud application
may result in a data transfer that would not have occurred had the application
been hosted in one physical space. Even though network speeds are getting
faster, in some cases the volume of data to be transferred is so large that it
is cheaper and quicker to physically transport media between data centres. Of
course this only works for data that is not ‘on demand’ and therefore is
relevant when data needs to be exported from one system and imported into
another.
With regard to the benefits of scalability, the case for optimising processor cycles across a vast number of
units is clear; processors can be utilised to perform a computation and then
returned back to a pool to wait for the next job. However, this does not
translate as easily to persistent storage, where in general the requirement
just continues to increase. Methods for dealing with storage in a dynamic way,
that preserve the performance characteristics expected from an application that
queries repositories, have yet to be developed and remain a potential issue for
cloud computing going forward.
1.14 Summing Up
Cloud computing is a new delivery model for IT that
uses established IT resources. The Internet, hardware virtualisation, remote
hosting, autonomic computing and resource pooling are all examples of
technologies that have existed for some time. But it is how these technologies
have been brought together, packaged and delivered as a pay-per-use utility
that has established cloud computing as one of the largest disruptive
innovations yet in the history of IT. As organisations shift from concentrating
on back-office processes, where transactional records are kept and
maintained, towards front-end processes where organisations conduct business
with customers and suppliers, new business models of value creation are being
developed. There is no doubt that the cloud is fuelling this shift.
You’ve now had a whistle-stop tour of the
exciting world of cloud computing. We have covered a lot, and you will probably
have some questions that haven’t been answered yet. The rest of this book
explores a number of important areas in more depth, so that by the end you will
not only have a broad understanding of cloud computing, but if you have
completed the exercises, you’ll be able to implement the technology as well!
1.15 Review Questions
The answers to these
questions can be found in the text of this chapter.
1. Explain
how energy utility provision has informed the emergence of cloud computing.
2. Briefly discuss the differences between cloud computing service models.
3. Which
combination of cloud computing characteristics is the best case for reducing
energy consumption?
4. Explain
the similarities between grid and cloud computing.
5. Describe
the different levels of abstraction that cloud providers can offer.
1.16 Extended Study Activities
These activities
require you to research beyond the contents of this book and can be approached
individually for the purposes of self-study or used as the basis of group work.
1. You are a member of a team of IT consultants, who specialise in selling IT systems to organisations that have between 100 and 500 staff. Prepare a case for the adoption of cloud computing. Consider the types of IT architecture and systems that might already be in place and whether there are specific business functions made available by cloud computing that an organisation might benefit from.
2. An
IT department has decided to investigate the use of cloud computing for
application development. What are the issues that they should consider, and how
would you advise that they mitigate any risks?
References
Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, San Francisco (1999). ISBN 1-55860-475-8
IBM Research: Autonomic Computing. http://www.research.ibm.com/autonomic/ (2012). Last accessed July 2012
McCarthy, J.: Reminiscences on the History of Time Sharing. Stanford University. http://www-formal.stanford.edu/jmc/history/timesharing/timesharing.html (1983)
Parkhill, D.: The Challenge of the Computer Utility. Addison-Wesley, Reading (1966). ISBN 0-201-05720-4