I often get client inquiries about the latency required to support real-time analytics and operational workloads. The challenge is that "real time" can mean anything from milliseconds to minutes depending on the use case. At its core, real-time data is data made available immediately (or almost immediately) to support operational and analytical workloads. It may come from transactional systems, clickstreams, log streams, sensors, social media, devices, or events. Organizations typically use real-time data for use cases such as fraud detection, customer experience, asset monitoring, inventory control, internet-of-things systems, and patient monitoring, as well as a variety of analytics.
Minimal latency
Businesses use operational data to run their operations and systems, primarily in real time. For example, real-time GPS data can help track fleets, optimize routes, and provide delivery estimates. For mission-critical applications, operational data is generally expected to be accessible within a narrow window of 1–2 seconds. Even when operational applications are not mission-critical but still need real-time data, the criteria remain stringent, typically requiring data to be available in under 60 seconds.
“Near real time”
Unlike operational use cases, real-time analytics involves moving, aggregating, and processing data, which necessarily adds time. Latency accumulates while extracting, transferring, loading, and preparing data for analytics. Based on client interactions, organizations commonly target data accessibility of under 15 minutes for real-time analytics derived from transactional systems. For streaming sources such as clickstreams, log streams, and sensors, organizations aim to make data available for use in under 5 minutes.
Use-case-specific
The acceptable latency for analytical and operational data depends on the specific use case. Here are some real-time latencies for operational and analytical workloads that customers have mentioned during our interactions:
| Use case | Workload | Typical latency |
| --- | --- | --- |
| Fraud detection | Operational | <1 second |
| Patient monitoring | Operational | <1 second |
| Internet-of-things insights | Operational | <5 seconds |
| Customer service/experience | Operational | <10 seconds |
| Customer analytics | Analytics | <5 minutes |
| Social media analytics | Analytics | <5 minutes |
| Analytics dashboard | Analytics | <10 minutes |
| Business intelligence | Analytics | <15 minutes |
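As a rough illustration, targets like those above could be encoded as latency budgets and checked against observed end-to-end latency in a data pipeline. This is a minimal sketch under stated assumptions: the use-case names, the specific thresholds, and the `within_budget` helper are hypothetical, not prescribed values.

```python
from datetime import timedelta

# Illustrative latency budgets, loosely following the table above.
# These thresholds are examples, not industry standards.
LATENCY_BUDGETS = {
    "fraud_detection": timedelta(seconds=1),
    "patient_monitoring": timedelta(seconds=1),
    "iot_insights": timedelta(seconds=5),
    "customer_service": timedelta(seconds=10),
    "customer_analytics": timedelta(minutes=5),
    "social_media_analytics": timedelta(minutes=5),
    "analytics_dashboard": timedelta(minutes=10),
    "business_intelligence": timedelta(minutes=15),
}

def within_budget(use_case: str, observed: timedelta) -> bool:
    """Return True if the observed end-to-end data latency
    (event time to availability) meets the use case's budget."""
    return observed < LATENCY_BUDGETS[use_case]

# Example: a fraud-detection event available 200 ms after it occurred
print(within_budget("fraud_detection", timedelta(milliseconds=200)))
```

In practice, the observed latency would be computed by comparing an event's source timestamp with the time the record becomes queryable downstream; monitoring this gap per use case is what makes a latency target actionable.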
New Technologies Can Reduce Data Latency
Supporting truly real-time analytics is often not straightforward, especially with growing data volumes, disparate data silos, and legacy systems. New and emerging technologies, such as translytical data platforms and data fabric, can help reduce the friction of data collection and processing latencies. For example, a translytical platform can run multiple workloads on a single platform, eliminating the need for data movement and helping deliver analytics in seconds.
Originally posted on Forrester