Spark Streaming
Shown in the next image is a simplified view of the Spark Streaming process. Spark was originally designed for faster processing of batches of data in Hadoop and was later adapted for near-real-time use cases as Spark Streaming, retaining the fundamental building blocks and patterns of the batch engine. The primary building blocks of Spark Streaming are DStreams, Receivers, and Resilient Distributed Datasets (RDDs). Because the core engine remains batch-oriented, Spark Streaming handles near-real-time data as a series of micro-batches, each covering a configurable batch interval. This batch interval introduces latency into Spark stream-based processing, so its near-real-time behavior is on the order of a few seconds rather than a fraction of a second.
Figure 08: Spark streaming
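As a minimal sketch of these building blocks, the snippet below creates a StreamingContext with a batch interval and a receiver-based DStream reading from a TCP socket. The application name, the 5-second interval, and the localhost:9999 endpoint are illustrative choices, not taken from the book.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BatchIntervalSketch {
  def main(args: Array[String]): Unit = {
    // local[2]: one thread for the receiver, one for processing.
    val conf = new SparkConf().setAppName("BatchIntervalSketch").setMaster("local[2]")

    // The batch interval (an illustrative 5 seconds) is fixed when the
    // StreamingContext is created; every DStream on this context is
    // cut into micro-batches of that duration.
    val ssc = new StreamingContext(conf, Seconds(5))

    // A receiver-based DStream: a receiver task listens on the socket
    // and buffers incoming records until the current interval closes.
    val lines = ssc.socketTextStream("localhost", 9999)

    // Print the first records of each 5-second micro-batch.
    lines.print()

    ssc.start()             // start the receiver and the batch scheduler
    ssc.awaitTermination()  // run until stopped externally
  }
}
```

This is also where the latency floor mentioned above comes from: no record can be processed before the micro-batch containing it closes at the end of its interval.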
As shown in the figure, Spark Streaming consumes a live data stream for near-real-time processing. The Spark Streaming components divide the incoming stream into micro-batches, which are then submitted to the core Spark engine; the engine processes each micro-batch and emits a corresponding batch of processed results.
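To make this flow concrete, the sketch below (again with an illustrative socket source and 5-second interval) applies word-count transformations to the stream; foreachRDD exposes each micro-batch as an ordinary RDD, showing that the core Spark engine processes it like any batch job.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MicroBatchFlowSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MicroBatchFlowSketch").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5))

    val lines = ssc.socketTextStream("localhost", 9999)

    // DStream transformations are templates: at the end of every batch
    // interval they are applied by the core Spark engine to that
    // interval's data, producing one batch of processed results.
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // foreachRDD makes the micro-batch model explicit: each interval's
    // data arrives here as a plain RDD, processed like a batch job.
    counts.foreachRDD { (rdd, time) =>
      println(s"--- batch at $time: ${rdd.count()} distinct words ---")
      rdd.take(10).foreach(println)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```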