Managing Workloads in a Hybrid Cloud Model
Several principles apply when managing workloads in a hybrid cloud model. Management, in this context, refers to how resources are assigned to process workloads. Assignments might be based on resource availability, business priorities, or event scheduling.
In the era of unified mainframe computing, workload management was relatively straightforward. When a task needed to be executed, a job was scheduled to run on that system. The instructions for running that task or job were typically written in a complex job-control language. This set of commands helped the IT organization carefully plan the execution of workloads.
If a mission-critical workload required a long time to run, instructions could be established to pause that workload and allow another workload to run. When the second workload finished executing its task, the long-running workload could resume. If the workload depended on another task to complete its work, a command could be issued to locate and execute that task and then add the result to the workload.
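The pause-and-resume pattern described above can be sketched as a simple priority-based preemptive scheduler. This is an illustrative model only; the job names, priorities, and durations are hypothetical, and real mainframe workload managers are far more sophisticated.

```python
import heapq

def run_jobs(jobs, quantum=10):
    """Simulate preemptive scheduling.

    jobs: list of (priority, name, remaining_time); a lower priority
    value runs first. Each pass runs the highest-priority job for one
    quantum and then re-queues it, so a long-running job is paused
    whenever a more urgent job is waiting and resumes afterward.
    Returns a log of (job name, time slice) entries in execution order.
    """
    heap = list(jobs)
    heapq.heapify(heap)
    log = []
    while heap:
        prio, name, remaining = heapq.heappop(heap)
        time_slice = min(quantum, remaining)
        log.append((name, time_slice))
        remaining -= time_slice
        if remaining > 0:
            heapq.heappush(heap, (prio, name, remaining))  # resume later
    return log

# A short billing job (priority 0) preempts a long payroll job (priority 1).
schedule = run_jobs([(1, "payroll", 25), (0, "billing", 10)])
print(schedule)
```

Running this shows the higher-priority job completing first while the long-running job is split across several time slices, mirroring the stop-and-resume behavior the job-control commands made possible.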
Consider the following principles as you start to think about managing workloads in a cloud model:
Understand processing requirements. You need to understand how your computing resources execute your workloads both on average and at peak demand. IT organizations typically engineer their computing resources to meet the peak workload.
Use modeling resources. You need to work out how much CPU, disk, and memory is needed to execute your workloads. Generally, you create some sort of model to do this. Your model might be a simple linear one that calculates the amount of CPU per service, or it might be something more complex.
Determine the capacity you need. Optimize your resources based on required response time, the number of services, and whatever other variables matter for what you're trying to accomplish with your workload.
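The modeling and capacity principles above can be sketched as a simple linear model: assume each request consumes a fixed amount of CPU and memory, so total demand scales with request volume. All of the per-request costs, load figures, and the headroom factor here are hypothetical assumptions for illustration, not measurements.

```python
def required_capacity(requests_per_sec, cpu_cores_per_req=0.002,
                      mem_mb_per_req=0.5, headroom=1.3):
    """Estimate the resources needed for a given load.

    A linear model: total demand = per-request cost * request rate,
    padded by a headroom factor so the estimate isn't sized exactly
    to the modeled load. All default values are hypothetical.
    """
    return {
        "cpu_cores": requests_per_sec * cpu_cores_per_req * headroom,
        "memory_mb": requests_per_sec * mem_mb_per_req * headroom,
    }

average = required_capacity(1_000)  # typical daytime load (assumed)
peak = required_capacity(5_000)     # peak load, the provisioning target

print(average)
print(peak)
```

Comparing the two results makes the trade-off in the first principle concrete: engineering for the peak means provisioning several times the average requirement, which is exactly the excess capacity a cloud model lets you avoid holding permanently.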
The challenge in managing any workload is making sure that it can be executed and delivered at the right performance level. This is not difficult when you're dealing with applications running on a single server. As IT infrastructures become more complex and heterogeneous, however (as in the hybrid cloud), it becomes much harder to do.