Speed layer

The speed layer in a Lambda Architecture provides near-real-time processing of events. Because events must be processed in near real time, only a limited amount of processing can be performed, and only on a limited volume of data. This processing may also include machine learning or complex event processing (CEP) algorithms suited to near-real-time scenarios.

The term near-real-time is relative and can mean different things for different people and different scenarios. For a customer reservation, for instance, it may mean on the order of 2-3 seconds, whereas for a use case such as a recommendation engine it may mean a few minutes.

In terms of the Lambda Architecture, this layer receives the same event/message that is also captured by the batch layer, but each layer derives a very different meaning from the data once it is processed, and the two complement each other in realizing a use case.

The speed layer generally comprises stream processing of events received from the acquisition layer, typically with a messaging middleware in between to provide guaranteed delivery and loosely coupled integration with the acquisition layer.
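
As a minimal sketch of this arrangement, assuming Kafka (named later in this section) as the messaging middleware, a speed-layer component could poll a topic and maintain a simple in-memory running count per event key as its near-real-time view. The broker address, topic name, and consumer group id below are illustrative; the batch layer could consume the same topic independently under its own group id.

```scala
import java.util.Properties
import java.time.Duration
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.jdk.CollectionConverters._
import scala.collection.mutable

// Illustrative speed-layer consumer; broker, topic, and group id are assumptions.
val props = new Properties()
props.put("bootstrap.servers", "broker:9092")
props.put("group.id", "speed-layer") // batch layer would use its own group id
props.put("key.deserializer",
  "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer",
  "org.apache.kafka.common.serialization.StringDeserializer")

val consumer = new KafkaConsumer[String, String](props)
consumer.subscribe(List("events").asJava)

// Near-real-time view: incremental counts per event key, held in memory.
val realTimeView = mutable.Map.empty[String, Long].withDefaultValue(0L)

while (true) {
  val records = consumer.poll(Duration.ofMillis(500))
  for (record <- records.asScala) {
    realTimeView(record.key) += 1 // update the incremental view as events arrive
  }
}
```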

Some of the early frameworks in this space were Apache Storm, Apache Flume, and Apache Kafka with consumer-based stream processing. Apache Storm in particular remained a popular choice, but more recently Spark Streaming and Apache Flink have gained good adoption for their simplicity and their support for parallel and pipelined processing on a scale-out architecture. There are specific differences in the way each of these frameworks operates, some of which are explained next.
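
To give a flavor of how such a framework expresses speed-layer logic, the following sketch uses Spark's Structured Streaming API to count events per key over one-minute windows, reading from a Kafka topic. The topic name, broker address, and console sink are assumptions made for illustration, not a prescribed setup.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder
  .appName("speed-layer-sketch")
  .getOrCreate()

// Read the event stream from Kafka (broker and topic are illustrative).
val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "events")
  .load()
  .selectExpr("CAST(key AS STRING) AS key", "timestamp")

// Windowed aggregation: count events per key in one-minute windows,
// tolerating events that arrive up to two minutes late.
val counts = events
  .withWatermark("timestamp", "2 minutes")
  .groupBy(window(col("timestamp"), "1 minute"), col("key"))
  .count()

// Emit updated counts continuously; a real deployment would write to a
// serving store instead of the console.
counts.writeStream
  .outputMode("update")
  .format("console")
  .start()
  .awaitTermination()
```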