How to Scale the Cloud in Cloud Computing

2 of 6 in Series: The Essentials of Managing Data in Cloud Computing

From the provider's point of view, the whole point of cloud computing is to achieve economies of scale by managing a very large pool of computing resources in a highly economical and efficient fashion.

The graph charts the cost per user of running just one software application against the number of users, using different kinds of computer resources. The one application runs in different computing environments, starting with inefficient dedicated servers all the way up to massively scaled grids.

An important point to note is that the Y-axis of user populations is logarithmic. That means the curve is much less steep than it would be on a proportional scale of equal steps; drawn proportionally, the graph would need miles of paper.
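To see how much a logarithmic axis compresses, consider a minimal Python sketch (the user counts here are illustrative, not read off the graph): each tenfold jump in users moves the plotted point by only one equal step.

    import math

    # Illustrative user populations spanning the graph's range.
    for users in (1, 10, 100, 1_000, 10_000, 100_000, 1_000_000, 10_000_000):
        # On a log10 axis, every tenfold increase occupies one equal step.
        print(f"{users:>10,} users -> log10 position {math.log10(users):.0f}")

Ten million users ends up just seven steps away from a single user, instead of ten million units away.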

Cloud computing economies of scale.

Note the following:

  • One end of the X-axis shows data center costs between $1 and $50 per user per annum: an extremely low cost per user.

  • The other end of the X-axis shows data center costs between $1,000 and $5,000 per user per annum: an extremely high cost per user.

Basically, on the left, you have very efficient use of computer resources and, on the right, very inefficient use of resources.
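Those per-user figures are just total spend divided by the population served. A minimal Python sketch, using hypothetical dollar amounts, makes the arithmetic concrete:

    def cost_per_user(annual_infrastructure_cost: float, users: int) -> float:
        """Annual cost per user: total spend divided by the population it serves."""
        return annual_infrastructure_cost / users

    # Hypothetical figures for illustration only.
    # A dedicated server serving a handful of users: the expensive right end.
    print(cost_per_user(5_000, 5))               # 1000.0 dollars per user per annum
    # A grid amortized across a huge population: the cheap left end.
    print(cost_per_user(10_000_000, 2_000_000))  # 5.0 dollars per user per annum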

Points on the line indicate the kind of computing resources that serve specific group sizes:

  • Inefficient servers: The cost of managing a single server in a data center runs to thousands of dollars per year; per user, this is as expensive as computing ever gets.

  • Virtual machines: Applications whose user numbers can't make full use of a whole server are virtualized: the physical server is split into several virtual machines, each running its own application.

  • Efficient servers (and small clusters): User populations from the hundreds up to about 1,000 can be served reasonably efficiently by one server or a few servers. If each server runs only one application, the servers can be highly efficient, yielding a relatively low cost per user.

  • Mainframes and large Unix clusters: They're shown separately on the graph only for reasons of space. Both can handle very large database applications serving thousands to tens of thousands of users.

  • Grids: From the hundreds of thousands to a million users, you're in the area where Software as a Service (SaaS) vendors such as Salesforce.com operate. The business applications that SaaS vendors offer present a thorny scaling problem because they're transactional database applications.

  • Large grids: Above one million concurrent users, the workload is still very heavy and is possible only with a scale-out approach, in which a single workload expands across more identical, inexpensive resources in a grid (see the sketch after this list).

  • Massively scaled grid: This is for user populations in the tens of millions. Example: Each query on Google search is resolved by a purpose-built grid of up to 1,000 servers; Google routes queries to many such grids.
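The scale-out arithmetic behind large grids is simple division: keep adding identical nodes until the grid covers the load. Here's a minimal Python sketch; the per-node capacity figure is hypothetical, chosen only for illustration:

    import math

    def nodes_needed(concurrent_users: int, users_per_node: int) -> int:
        """Scale out: add identical, inexpensive nodes until the load is covered."""
        return math.ceil(concurrent_users / users_per_node)

    # Hypothetical capacity: each node handles 2,000 concurrent users.
    USERS_PER_NODE = 2_000

    for load in (1_000_000, 5_000_000, 10_000_000):
        print(f"{load:>10,} concurrent users -> {nodes_needed(load, USERS_PER_NODE):,} nodes")

Note that the grid grows by adding more of the same cheap hardware, not by swapping in a bigger machine; that's what distinguishes scale-out from scale-up.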

The dotted box indicates the traditional domain and kinds of resources of corporate computing. The same servers used in corporate environments could be used just as easily in scaled-out arrangements, where workloads aren't at all mixed.

The reduction in per-user costs doesn't, at the moment, come from using different computer equipment or different operating systems: It comes from running a small number of workloads (or even just one) and scaling them up as much as possible. That's how cloud computing reduces costs dramatically.
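The effect is easy to see as a consolidation calculation. In this hypothetical Python sketch, a mixed-workload shop runs twenty lightly loaded servers, one application each, while a provider packs the same number of users for a single workload onto two heavily utilized servers:

    SERVER_ANNUAL_COST = 4_000  # hypothetical all-in cost per server per year

    def per_user_cost(servers: int, users: int) -> float:
        return servers * SERVER_ANNUAL_COST / users

    # Mixed workload: 20 applications on 20 mostly idle servers, 50 users each.
    print(per_user_cost(servers=20, users=20 * 50))  # 80.0 dollars per user
    # Single workload, scaled out: the same 1,000 users on 2 busy servers.
    print(per_user_cost(servers=2, users=1_000))     # 8.0 dollars per user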

No corporation that runs a mixed workload is ever going to achieve cloud computing's economies of scale.
