In our last blog entry, we discussed the growing necessity of deploying a capable Security Information and Event Management (SIEM) tool to effectively manage your IT infrastructure. But as we noted, log-centric SIEMs make it difficult to detect and investigate today’s complex threats in a timely manner because they don’t provide full visibility across an enterprise.
While some solutions provide wider visibility by collecting data across more capture points (for example, logs, packets, network flows and endpoints), computing platforms (physical, virtual and cloud) and threat intelligence sources, they can also hit operational limits.
Especially if your business has begun to rely on IoT devices and services, the tidal wave of data can overwhelm your analysis platform. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements, but management of that data will require decentralization.
The majority of current IoT data processing solutions transfer the data to the cloud for processing. This is mainly because existing data analytics approaches are designed to deal with a large volume of data, but not real-time data processing and dispatching. With millions of things generating data, transferring all of that to the cloud is neither scalable nor suitable for real-time decision making.
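The alternative is to reduce data close to where it is produced. As a minimal sketch, assuming a hypothetical `summarize_window` helper and an arbitrary threshold, an edge node can collapse a window of raw sensor readings into a compact summary and forward only that, rather than every individual reading:

```python
import statistics

def summarize_window(readings, threshold=80.0):
    """Reduce a window of raw sensor readings to a compact summary.

    Only the summary (including any threshold breaches) is sent
    upstream, instead of every individual reading.
    """
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "breaches": [r for r in readings if r > threshold],
    }

# One window of readings becomes a single small upstream message.
window = [42.0, 55.5, 81.2, 47.3]
print(summarize_window(window))
```

The upstream payload stays the same size no matter how many readings arrive in the window, which is what makes local reduction scale where raw forwarding does not.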
Sometimes Management is Important
The proper management of those devices is relevant to many scenarios. For example, there are applications such as health monitoring and emergency response that require low latency. Any delay caused by transferring data to the cloud and back to the application can seriously impact their performance.
This has led to the concept of fog computing, where cloud services are extended to the edge of the network to decrease the latency and network congestion. The first level of network performance management needs to happen here.
Additionally, several challenges need to be addressed to realize the full potential of edge and IoT paradigms for real-time analytics. The first and most critical problem is designing resource management techniques that determine which modules of analytics applications are pushed to each edge device to minimize the latency and maximize the throughput.
While various vendors are pushing differing approaches, the necessity of having edge-based analytics and triage is indisputable.
Help is Already There
Fortunately, other implementation paradigms will help the situation. For example, MQTT (Message Queuing Telemetry Transport) is a mature, ISO-standard (ISO/IEC 20922) publish-subscribe messaging protocol. It was originally designed for connections to remote locations where a small code footprint was required or network bandwidth was limited. It has since found a natural home in IoT and edge environments, because its lightweight framing lets huge numbers of devices report telemetry without creating unnecessary congestion.
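The heart of MQTT's publish-subscribe model is topic matching: a broker delivers a message to every subscriber whose topic filter matches the message's topic, with "+" matching a single level and "#" matching everything that remains. This is a minimal, broker-free sketch of that matching rule, not a full MQTT client:

```python
def topic_matches(filter_, topic):
    """Minimal sketch of MQTT topic-filter matching:
    '+' matches exactly one topic level, '#' matches the rest."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("site/+/temperature", "site/rack1/temperature"))  # True
print(topic_matches("site/#", "site/rack1/humidity"))                 # True
```

In practice a production deployment would use an established client library and broker; the point here is only how a subscription like "site/#" lets one monitoring module receive telemetry from an arbitrary number of devices.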
Similarly, alternative protocols such as the Advanced Message Queuing Protocol (AMQP), the Streaming Text Oriented Messaging Protocol (STOMP), the IETF Constrained Application Protocol (CoAP) and the Web Application Messaging Protocol (WAMP) will all help to ease network congestion.
On the application side, monitoring tools that pay attention only to metrics that change state or exceed a threshold can disregard vast amounts of traffic.
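That filtering idea can be sketched in a few lines. Assuming a hypothetical `DeltaMonitor` class and a single threshold per metric, samples are forwarded only when a metric crosses the threshold in either direction; everything at steady state is dropped at the edge:

```python
class DeltaMonitor:
    """Forward a metric only when it changes alarm state;
    steady-state samples are dropped at the edge."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.alarmed = {}  # metric name -> currently in alarm?

    def observe(self, metric, value):
        was_alarmed = self.alarmed.get(metric, False)
        now_alarmed = value > self.threshold
        self.alarmed[metric] = now_alarmed
        if now_alarmed != was_alarmed:
            return f"{metric}: {'ALARM' if now_alarmed else 'CLEAR'} at {value}"
        return None  # no state change, nothing sent upstream

mon = DeltaMonitor(threshold=90)
for v in (45, 95, 97, 60):
    event = mon.observe("cpu", v)
    if event:
        print(event)
```

Four samples produce only two upstream events (the crossing into alarm and the crossing back out), which is exactly the reduction the paragraph above describes.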
On the deployment side, discovering assets, configuring monitoring profiles and aggregating data will not only need to be platform and protocol-agnostic, but also driven by automated workflows.
The Advent of Layered Monitoring
There is also the tiered approach, where first-tier monitoring modules in the edge and fog “triage” the performance metrics of the IoT infrastructure and only pass on significant events. Indeed, solutions where intelligent event correlation and remediation capabilities are incorporated into the fog modules will gain widespread acceptance.
The only messages that need to go upstream are summaries such as "we had this issue" and "we resolved it." The action is still noted and logged for long-term analysis, but the central management applications don't need to act on every issue from the periphery.
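A first-tier triage module of this kind might look like the following sketch (the event shapes, the `triage` function and the remediation table are all hypothetical): known issues are fixed locally and reported upstream as a one-line summary, while anything the edge cannot handle is escalated in full.

```python
def triage(event, remediations):
    """First-tier triage: attempt a local remediation for the event.
    Handled incidents go upstream as a short summary; unhandled
    incidents are escalated with their full detail."""
    fix = remediations.get(event["type"])
    if fix:
        fix(event)
        return {"type": event["type"], "status": "resolved locally"}
    return {"type": event["type"], "status": "escalated", "detail": event}

# Hypothetical remediation table; a real one might restart a service,
# rotate a log, fail over a link, etc.
remediations = {"service_down": lambda event: None}

print(triage({"type": "service_down", "host": "edge-a"}, remediations))
print(triage({"type": "disk_full", "host": "edge-b"}, remediations))
```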
This combination of layered monitoring, real-time event correlation and alerting will let network administrators keep their network safe from both security threats and performance issues, and by design it will scale with your growing footprint.