Workload Complexities in a Hybrid Cloud Environment
Workload complexities are more prevalent in a hybrid cloud environment than in a single cloud architecture. In a hybrid cloud world, many more applications and services must run across multiple geographies.
Some workloads may be permanent and need to run constantly, such as an online commerce site or a control system that manages a critical environmental process. Virtualized workloads add another level of complexity. Business services and various application models are also added into the mix.
In a hybrid cloud environment, your workloads may be running on different clouds, on different kinds of infrastructure, using different operating systems. You’re bringing together workloads from different environments that often have to behave as though they’re a unified system.
What is the connection between workloads and workload management in the cloud? It’s actually at the center of determining whether you have a well-performing cloud environment or not. This is true whether you are a service provider offering a public or private cloud to customers, or you’re managing a private cloud that serves internal customers as well as external customers and partners.
You may think that all you have to do is get some automation software (to automatically schedule resources and to perform other functions associated with allocating resources) and you’re set. When you look at workloads from an operational perspective, however, it becomes clear that many issues must be taken into account in creating an overall hybrid cloud environment that both performs at a quality level and meets security and governance requirements. Nor is this a static requirement; from an operational perspective, organizations need to be able to change workload management dynamically as business requirements change.
APIs: Key to managing cloud workloads
Application programming interfaces (APIs) enable a software product or service to communicate with another product or service. For example, if you’re a software developer who has written a spreadsheet program and you want to allow another developer to add specialized functions to enhance your application, you can provide that developer with an API that enables them to write to your application. The API specifies how one application works with another. It provides the rules and the interfaces. The developer doesn’t need to know the nitty-gritty of your application because the API abstracts the way these programs work together.
An API also provides an abstracted way to exchange data and services. Because of this abstraction, the API can hide things from developers. For example, you don’t want an outside developer to learn the details of your internal security, so those details of the system are hidden. The API allows the developer to execute only the intended task.
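To make the abstraction idea concrete, here is a minimal sketch in Python. The class, method names, and the internal check are all hypothetical, invented for illustration: the point is that outside code can only call the public methods, while internal details (here, a validation step and a private token) stay hidden.

```python
# Hypothetical sketch of an API surface that hides internal details.
# Nothing here corresponds to a real product's API.

class SpreadsheetAPI:
    """Public interface an outside developer is allowed to use."""

    def __init__(self):
        self._auth_token = "internal-secret"  # hidden implementation detail
        self._registry = {}                   # also internal

    def add_function(self, name, func):
        """Register a custom function -- the only task the API permits."""
        self._validate(name)
        self._registry[name] = func

    def evaluate(self, name, *args):
        """Run a previously registered function by name."""
        return self._registry[name](*args)

    def _validate(self, name):
        # Internal security check; not part of the public contract.
        if not name.isidentifier():
            raise ValueError("invalid function name")

api = SpreadsheetAPI()
api.add_function("double", lambda x: x * 2)
print(api.evaluate("double", 21))  # 42
```

The outside developer never touches `_auth_token` or `_validate`; the API lets them execute only the intended task.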
APIs are important for managing workloads in a cloud environment. The Amazon Elastic Compute Cloud environment offers a rich set of APIs that allows customers to build their own workloads on top of Amazon’s compute and storage services. In fact, every company that offers a foundational cloud service such as IaaS (Infrastructure as a Service), SaaS (Software as a Service), and PaaS (Platform as a Service) develops APIs for its customers.
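As a flavor of what such an API looks like in practice, the sketch below builds the parameters for launching an EC2 instance with boto3, AWS’s official Python SDK. The image ID is a made-up placeholder, and the actual call is shown commented out because running it requires AWS credentials and incurs real charges.

```python
# Hedged sketch: launching a workload through Amazon EC2's API.
# "ami-12345678" is a hypothetical machine image ID.

launch_params = {
    "ImageId": "ami-12345678",   # which machine image to boot (placeholder)
    "InstanceType": "t2.micro",  # size of the compute resource
    "MinCount": 1,               # launch at least one instance...
    "MaxCount": 1,               # ...and at most one
}

# With credentials configured, the real call would look like this:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# response = ec2.run_instances(**launch_params)
# print(response["Instances"][0]["InstanceId"])

print(launch_params)
```

The customer’s workload definition is just structured data handed to the provider’s API; the provider decides how to map it onto physical resources.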
Everything is great as long as you manage your workload within the environment where you created it or where you will deploy it. However, different APIs aren’t always compatible. For example, one API may be built to support a 32-bit operating system, while the cloud environment the developer wants to move the workload to supports a 64-bit implementation. How do you manage workloads across incompatible environments?
The necessity of a standard workload layer
No standard API allows the developer to work with different cloud models provided by different cloud vendors. What is actually needed is a standard layer that creates compatibility among cloud workloads. In service orientation, the XML model allows for interoperability among business services. There’s no equivalent model for the hybrid cloud.
You can find ways to work around complicated problems. Offerings such as cloud management provider RightScale’s platform, IBM’s Workload Deployer, and BMC’s Control-M provide customizable templates that allow developers to make allowances for the differences in APIs and thus deploy and migrate workloads.
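The template approach can be sketched in a few lines. This is an illustrative pattern, not the actual mechanism of RightScale, Workload Deployer, or Control-M: the workload is described once in a neutral template, and per-cloud translators absorb the API differences.

```python
# Illustrative sketch of the template workaround (all names hypothetical):
# describe the workload once, then translate it per provider.

template = {"cpu_cores": 2, "memory_gb": 4, "os": "linux-64"}

def to_provider_a(t):
    # Provider A's imaginary API sizes instances by name.
    size = "small" if t["cpu_cores"] <= 2 else "large"
    return {"instance_size": size, "image": t["os"]}

def to_provider_b(t):
    # Provider B's imaginary API takes raw resource numbers.
    return {"vcpus": t["cpu_cores"],
            "ram_mb": t["memory_gb"] * 1024,
            "platform": t["os"]}

print(to_provider_a(template))
print(to_provider_b(template))
```

The developer edits one template; migrating the workload to another cloud means running it through a different translator rather than rewriting the deployment from scratch.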
Portability of workloads
Discussing APIs and standards is essential because workload management is fundamental to the operation of the hybrid cloud. In a hybrid cloud environment, being able to move workloads around and optimize them based on the business problem being addressed is critical. Although workloads are abstracted, they are still built on specific middleware and operating systems.
Workloads must be tuned to perform well in a specific hardware environment. In today’s hybrid computing world, a lot of manual intervention is needed to achieve workload portability. However, we anticipate future standards and well-defined approaches that will make hybrid cloud workload management a reality.
The advent of hybrid computing will lead to the evolution of a new component in cloud computing: the hybrid service workload broker. This broker will provide a layer that examines the infrastructure of the underlying cloud-based service and offers a consistent, predictable way to handle different workloads as though they were built the same way. We expect that this broker will provide the hybrid workload management the market will demand. When standards evolve, the need for part of this layer will go away, but broad adoption of standards takes time.
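The broker idea described above can be sketched as a simple dispatch layer. Every name below is hypothetical, invented to illustrate the pattern: callers see one consistent `deploy` interface, while the broker examines the workload and routes it to a cloud-specific back end.

```python
# Hypothetical sketch of a hybrid service workload broker:
# one consistent interface in front of dissimilar cloud back ends.

class CloudBackend:
    def deploy(self, workload):
        raise NotImplementedError

class PublicCloudBackend(CloudBackend):
    def deploy(self, workload):
        # A real back end would call the public cloud's API here.
        return f"public:{workload}"

class PrivateCloudBackend(CloudBackend):
    def deploy(self, workload):
        # A real back end would target the internal data center here.
        return f"private:{workload}"

class WorkloadBroker:
    """Presents one deploy() call; picks a suitable back end per workload."""

    def __init__(self):
        self.backends = {"public": PublicCloudBackend(),
                         "private": PrivateCloudBackend()}

    def deploy(self, workload, sensitive=False):
        # Illustrative policy: sensitive workloads stay on the private cloud.
        target = "private" if sensitive else "public"
        return self.backends[target].deploy(workload)

broker = WorkloadBroker()
print(broker.deploy("commerce-site"))               # public:commerce-site
print(broker.deploy("hr-records", sensitive=True))  # private:hr-records
```

When real standards arrive, the per-cloud back ends shrink or disappear, but the routing policy in the broker remains useful.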