They may sound mundane, but job scheduling and automated workflows are increasingly important in today’s IT environment. As we look at these capabilities in detail, it’s important to note a major trend: they are rapidly becoming part of a wider, systematic approach to automation.
Indeed, Gartner has retired its Magic Quadrant for workload automation, advising that IT operations leaders should now evaluate workload automation in the context of broader data center, application, and process automation efforts.
Reasons for Investing
As you know, IT organizations are managing increasingly complex processes that depend on one another and span technological and departmental boundaries. The major reasons they are being driven to invest in IT automation include efficiency, cost reduction, risk mitigation, and predictability.
Due to the sophistication of current IT infrastructure, organizations need to look at their automation solutions end to end. Batch processing, runbook automation, application release automation, and even the data center itself have long been treated as discrete initiatives, leaving them as isolated silos. Organizations have to step back and understand the big-picture impact that these automation silos can have on both the IT organization and the business as a whole.
With that in mind, job scheduling and workload automation are key components of a larger, unified solution.
Today’s organization needs to ensure that IT tasks that the business depends on occur on a regular basis and with a high degree of certainty. A sophisticated unified solution should provide operators with a single pane of glass through which they can manage, execute and monitor batch processes – regardless of the server or platform on which those processes run.
Take job scheduling. In its traditional form, job scheduling is a mature market. It is currently undergoing a transformation toward an IT workload automation broker technology. Increasingly, job schedulers are capable of orchestrating the integration of real-time business activities with traditional background IT processing across different operating system platforms and business application environments.
Why is this important?
Most operating systems provide basic job scheduling capabilities, notably at and batch, cron, and the Windows Task Scheduler. Web hosting services offer scheduling through a control panel or a webcron solution. Many programs, such as DBMSs, backup tools, ERP systems, and BPM suites, also include their own job-scheduling capabilities.
However, operating system or program-supplied job scheduling will not usually provide the ability to schedule beyond a single OS instance or outside the remit of the specific program. That means you end up with dozens, hundreds, or even thousands of unique, standalone scheduled jobs running without overall monitoring or problem resolution.
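As a concrete illustration, a single Linux host might run a nightly task via a crontab entry like the one below (the script path is hypothetical). Nothing in the entry knows about jobs on any other machine, and nothing monitors or restarts the job if it fails:

```
# min hour day month weekday  command
0 2 * * * /usr/local/bin/nightly_backup.sh
```

Multiply this by every server, application, and team in the organization, and the lack of central visibility becomes obvious.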
Clearly, that is a problem in need of a solution. These solutions do exist and they all have a number of capabilities in common. Basic features you should insist upon in your job scheduler software are:
- Interfaces that help to define workflows and/or job dependencies
- Automatic submission of job and batch executions
- Interfaces to monitor the results and status of the executions
- Priorities and/or queues to control the execution order of unrelated jobs
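To make the basic feature set concrete, here is a minimal sketch of a dependency-aware scheduler in Python. It is an illustration of the concepts above, not a production tool: jobs are registered with prerequisites, only jobs whose dependencies have completed are eligible to run, and a priority value orders unrelated jobs that are ready at the same time.

```python
class Scheduler:
    """Minimal sketch: runs registered jobs respecting dependencies and priorities."""

    def __init__(self):
        self.jobs = {}      # job name -> callable
        self.deps = {}      # job name -> set of prerequisite job names
        self.priority = {}  # job name -> int (lower value runs first among ready jobs)

    def add_job(self, name, func, deps=(), priority=10):
        self.jobs[name] = func
        self.deps[name] = set(deps)
        self.priority[name] = priority

    def run(self):
        done, results = set(), {}
        pending = set(self.jobs)
        while pending:
            # A job is ready when all of its prerequisites have completed.
            ready = [j for j in pending if self.deps[j] <= done]
            if not ready:
                raise RuntimeError("circular or missing dependencies: %s" % sorted(pending))
            ready.sort(key=lambda j: self.priority[j])  # priority breaks ties among ready jobs
            job = ready[0]
            results[job] = self.jobs[job]()
            done.add(job)
            pending.remove(job)
        return results
```

A real product adds far more (agents, calendars, monitoring interfaces), but the dependency and priority mechanics are the same in spirit.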
Going beyond the basic functionality, these days you should also expect:
Real-time scheduling based on external, unpredictable events. Native features should accommodate multiple methods for automating a workload, such as schedules, triggers and dependencies.
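Event-based triggering can be as simple as waiting for an external signal, such as a file landing from a partner system, before submitting a job. The following Python sketch (the polling approach and function name are illustrative, not from any specific product) shows the idea:

```python
import os
import time

def run_when_file_arrives(path, job, poll_seconds=0.5, timeout=60.0):
    """Poll for an external event (a trigger file appearing) before running the job."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return job()  # event observed: submit the dependent job
        time.sleep(poll_seconds)
    raise TimeoutError("trigger file never arrived: %s" % path)
```

Commercial schedulers typically use efficient native event hooks rather than polling, but the trigger-then-run pattern is the same.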
Automatic restart and recovery in the event of failures. While some batch job failures merit dedicated investigation, many can be resolved simply by resubmitting. Some systems let users re-run processes without involving IT operations staff. Once a job completes successfully, the scheduler resolves the incident and logs it appropriately.
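The resubmit-on-failure behavior described above boils down to a bounded retry loop. A minimal sketch in Python (function name and parameters are illustrative):

```python
import logging
import time

def run_with_retry(job, max_attempts=3, delay_seconds=0):
    """Run a job, resubmitting on failure up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = job()
            # Success resolves the incident; log it for the audit trail.
            logging.info("job succeeded on attempt %d", attempt)
            return result
        except Exception as exc:
            logging.warning("attempt %d failed: %s", attempt, exc)
            if attempt == max_attempts:
                raise  # escalate: this failure merits human investigation
            time.sleep(delay_seconds)
```

In practice a scheduler would also apply backoff between attempts and distinguish transient errors from ones that should escalate immediately.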
Alerting and notification to operations personnel. Today’s systems can deliver an even higher level of service by discovering trends and patterns within your automated batch processes. Are you dealing with isolated job failures? Or are job failures on a particular agent machine causing wider-ranging issues? A good alert management reporting facility arms you with the statistics needed to make informed decisions.
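Distinguishing isolated failures from a problematic agent machine is essentially an aggregation over failure events. A small Python sketch of the idea (the event format and threshold are assumptions for illustration):

```python
from collections import Counter

def flag_problem_agents(failure_events, threshold=3):
    """failure_events: iterable of (agent, job) tuples for failed runs.

    Returns the set of agent machines whose failure count meets the
    threshold, i.e. candidates for a machine-level problem rather than
    isolated job failures.
    """
    counts = Counter(agent for agent, _job in failure_events)
    return {agent for agent, n in counts.items() if n >= threshold}
```

A real alert-management facility would also weight by job priority and time window, but the statistical grouping is the core of trend detection.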
Generation of incident reports. You can gain significant efficiencies in your IT organization by configuring the system to automatically generate new incidents based upon failure alerts for the highest priority jobs.
Audit trails for regulatory compliance purposes. Centralized access controls and native reports make it easy to respond to auditor inquiries.
Consolidation Makes Sense
These advanced capabilities can be built by in-house developers, but they are also provided by suppliers who specialize in systems-management software and who typically bring deeper experience with the many different APIs in vendors’ toolsets.
Given the complexity of a modern IT environment – and the business necessity of maintaining a holistic approach to IT services and automation across the enterprise – the evolutionary shift to consolidate multiple automation tools into a unified solution makes perfect sense.
Doing so lays the foundation for a policy-driven automation strategy that drives governance, visibility and control, and as a result, improved service levels to the business.
Keep in mind that a chain is only as strong as its weakest link, so be sure to evaluate the strengths of each key component of any solution you are considering.