How a Good IT Operations Decision Engine Enables You to Manage Your Service Uptime with Confidence

Consider the decision engine. It’s a core aspect of an IT operations platform by virtue of the processes that it enables. Let’s take a look at why, from the business perspective of a managed service provider (MSP), it’s an essential element.

The decision engine is where it all comes together. Log entries that exceed a given threshold, state changes of machines and services, and alerts and warnings are all passed through correlation logic, tying widely disparate events into one overall picture. The decision engine knows the topology of the network – the connections and interdependencies of the devices within it.
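As a minimal illustration of how topology knowledge drives correlation (the device names and adjacency map below are hypothetical, not drawn from any real platform), a dependency graph lets an engine decide which events are merely downstream symptoms of another event:

```python
# Hypothetical topology: each device maps to the devices that depend on it.
TOPOLOGY = {
    "core-router-1": ["switch-a", "switch-b"],
    "switch-a": ["web-server-1"],
    "switch-b": ["db-server-1"],
    "web-server-1": [],
    "db-server-1": [],
}

def downstream(device, topology):
    """Return every device affected if `device` fails (transitive closure)."""
    affected, stack = set(), [device]
    while stack:
        node = stack.pop()
        for child in topology.get(node, []):
            if child not in affected:
                affected.add(child)
                stack.append(child)
    return affected

def correlate(events, topology):
    """Group raw events: an event on a device that sits downstream of
    another event's device is treated as a symptom of the same incident."""
    devices = {e["device"] for e in events}
    roots = [d for d in devices
             if not any(d in downstream(other, topology)
                        for other in devices if other != d)]
    return {root: sorted(downstream(root, topology) & devices)
            for root in roots}
```

With events on the router and on both servers, this sketch collapses three alarms into a single incident rooted at the router, mirroring the "one overall picture" described above.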

When a service becomes unavailable, the decision engine knows what hardware is involved and what is affected upstream and downstream. It knows what other software services are impacted across the service delivery infrastructure.

The decision engine then takes these correlated events and automatically pinpoints the root cause of the service issue – working through different scenarios until it identifies the exact reason for the failure. It might single out the network device, the server, the app and so on. This takes just seconds.
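The "working through different scenarios" behaviour can be sketched as an ordered ruleset, where each rule pairs a diagnostic probe with the root cause it implies. Everything here (class names, probes, the example profile) is a hypothetical illustration, not the platform's actual logic:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DiagnosticRule:
    name: str
    probe: Callable[[], bool]   # True means this layer is healthy
    root_cause: str             # conclusion if the probe fails

def run_profile(rules):
    """Walk the ruleset in order and stop at the first failing layer:
    the scenario-by-scenario narrowing described above."""
    for rule in rules:
        if not rule.probe():
            return rule.root_cause
    return "no fault found - service issue may have cleared"

# Hypothetical profile for a web service outage: network is fine,
# but the server probe fails, so the server is singled out.
profile = [
    DiagnosticRule("network path", lambda: True, "network device failure"),
    DiagnosticRule("server reachable", lambda: False, "server down"),
    DiagnosticRule("app responding", lambda: True, "application fault"),
]
# run_profile(profile) -> "server down"
```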

Root Cause Is Just the Beginning

The root cause process is powered by advanced logic profiles, intelligent rulesets that provide automated diagnostics and troubleshooting for specific IT services and technologies. The single root-cause incident identified for each service issue is actionable: it triggers automations that remediate the underlying problem.

This is key. The impact on your business is determined by the mean time to repair (MTTR) of the affected service. Getting to the bottom of what caused an outage is critical, but those business processes are still dead in the water until the issue is resolved. Automations are key to having resolution begin as soon as the cause is known.

An easy-to-overlook but vitally important automation is simply confirming that the outage exists. This process of “validation” ensures that the alarms or alerts that triggered the event were indeed real and that the service is still unavailable. Many times, issues clear on their own: they may initially be caused by a busy processor or a storage array that hung for a moment, then resumed normal operations. In a situation such as this, without proper validation, an unnecessary remediation workflow could be kicked off – one that could cause additional service interruptions, like a domino effect.
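A validation step like the one described can be sketched as a simple re-probe loop. Here `check_service` and `remediate` are hypothetical stand-ins for whatever probe and remediation workflow a real platform would supply:

```python
import time

def validate_and_remediate(check_service, remediate,
                           retries=3, delay_seconds=10):
    """Re-probe the service a few times; only remediate if the outage
    persists. Transient blips (a busy CPU, a briefly hung storage array)
    clear on their own and never trigger the workflow."""
    for _ in range(retries):
        if check_service():          # service answered: alarm was transient
            return "cleared - no action taken"
        time.sleep(delay_seconds)
    remediate()                      # outage confirmed: safe to act
    return "confirmed - remediation started"
```

The design choice is deliberately conservative: the workflow only fires after repeated confirmation, which is exactly what prevents the domino effect mentioned above.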

The workflows that can be automated are limitless. They can perform additional correlation, open and update tickets, collect information from end devices for ticket enrichment, trigger re-polling of network nodes and much more. They are built from vendor best practices, from hard-won experience of the entire network and systems engineering teams. Properly built, they leverage institutional knowledge from across the organization. They are certainly not limited to real-time problem resolution routines.
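One common way to build such reusable workflows is as a pipeline of small steps sharing a context, so institutional knowledge can be recombined freely. The step names, ticket structure and the `INC-0001` placeholder below are illustrative assumptions only; a real platform would call its ticketing and polling APIs here:

```python
def open_ticket(ctx):
    ctx["ticket"] = {"id": "INC-0001", "notes": []}   # placeholder ticket ID
    return ctx

def enrich_from_device(ctx):
    # In practice: pull logs and configuration from the affected end device.
    ctx["ticket"]["notes"].append(f"device state for {ctx['device']}")
    return ctx

def trigger_repoll(ctx):
    ctx["ticket"]["notes"].append(f"re-polled {ctx['device']}")
    return ctx

def run_workflow(steps, context):
    """Each step takes and returns the shared context, so workflows can be
    assembled from the same building blocks in any order."""
    for step in steps:
        context = step(context)
    return context

result = run_workflow([open_ticket, enrich_from_device, trigger_repoll],
                      {"device": "web-server-1"})
```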

The Value of Automations

From an MSP’s business perspective, here’s why automating IT processes is so important.

  • Greater Speed and Efficiency. Manual processes are inherently slow. In the past, this was less of a concern because the pace of IT itself was slower. That’s no longer the case: IT is expected to respond instantly to business needs, and when it doesn’t, it acts as a brake on the business itself.
  • Improved Service Availability. When mission-critical services go down – such as e-commerce portals, contact centers or supply chain management systems – productivity, revenues and customer confidence all suffer. And yet, many IT organizations still rely on manual processes to keep these services up and running. That’s a huge limitation.
  • Increased Accuracy. Whenever someone makes a manual change to resolve an issue, it’s easy to get it wrong. Even if changes are vetted and reviewed by peers, ultimately it’s a stressed-out, overworked human who actually makes the change – and humans make mistakes. In fact, errors made during routine maintenance – and during problem resolution – are responsible for a significant proportion of service outages. Even if the service isn’t immediately affected, errors still have to be detected and corrected, resulting in a significant amount of rework. By automating configuration and provisioning processes, by contrast, IT organizations can dramatically increase accuracy, reducing the potential for mistakes. Unlike humans, automated processes do things repeatably, reliably and consistently. When the same type of change needs to be made over and over again, automation dramatically reduces both risks and costs.
  • Enhanced Visibility. When IT organizations use manual processes, visibility is a major problem. There’s no easy way to track activities when information is spread out over innumerable emails and spreadsheets. Even when an IT organization uses some sort of recording system – for example, a ticketing system – the system still relies on manual updates, leading to incomplete and inconsistent data. As a result, it’s incredibly difficult to measure, analyze and improve processes – or to meet regulatory and internal compliance requirements.

Automated processes deliver dramatically increased visibility. For instance, when an incident management process is automated, every step is recorded. This makes it simple to analyze trends, identify process bottlenecks and drive proactive processes (such as problem management) to prevent incidents from reoccurring. Ultimately, this leads to improved service quality, reduced incident volumes and lower operational costs. And incident management is just one example.

This is why a good, reliable, state-of-the-art decision engine provides core functionality. It triggers strong, knowledge-based automated workflows that let you look at your IT infrastructure in a new way – with confidence.

Success Stories

“The Optanix single unified platform replaced multiple point tools, reducing the TCO.”