Streams Flashcards

1
Q

What is stream processing and how does it differ from batch processing?

A

Stream processing is the real-time analysis and processing of data as it’s generated. Unlike batch processing, which processes data after collection and storage, stream processing provides immediate insights and decisions based on live data.

2
Q

What are the three phases of stream processing?

A

Stream processing typically involves three phases: Ingest, where data is collected from various sources; Process, where data is filtered, transformed, and aggregated; and Output, where processed data is sent to different destinations.

3
Q

How does a stream processor handle data ingestion?

A

A stream processor ingests data from diverse sources like sensors, devices, and applications. The data usually comes in the form of events, which are messages describing occurrences or actions.

4
Q

What role does a stream processor play in real-time analytics?

A

A stream processor analyzes and processes data in real time, applying predefined rules for data transformation and aggregation. This enables immediate insights and decision-making, crucial in scenarios like fraud detection in banking.

5
Q

How do stream processors ensure data processing flexibility?

A

Stream processors can process data as soon as it’s ingested without waiting for the entire dataset. They also can output data to multiple destinations simultaneously, offering flexibility and efficiency in data handling.

6
Q

What is the importance of message brokers in stream processing architectures?

A

Message brokers act as intermediaries between data producers and consumers, decoupling them for scalability and resilience. They enable asynchronous communication, allowing producers to send messages without waiting for consumer processing.
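
As a rough illustration of that decoupling, the sketch below uses Python’s in-process queue standing in for a real broker such as Kafka or RabbitMQ: the producer publishes and moves on without waiting for the consumer.

```python
import queue
import threading
import time

# A minimal sketch of broker-style decoupling using an in-process queue.
# A real broker plays the same role across processes and machines.
broker = queue.Queue()

def producer():
    for i in range(5):
        broker.put({"event_id": i, "payload": f"reading-{i}"})  # fire and forget
        time.sleep(0.1)  # the producer never waits for the consumer

def consumer():
    while True:
        event = broker.get()
        print("processed", event)
        broker.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
broker.join()  # wait until every published event has been consumed
```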

7
Q

How can Amazon Kinesis be used for stream processing on AWS?

A

Amazon Kinesis is a managed service for real-time data streaming. It can ingest, process, and output large streams of data at scale. Users can create Kinesis applications to process data in the stream and configure data sinks to receive processed data.
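
A minimal producer-side (ingest) sketch with boto3, assuming a Kinesis data stream named `transactions` already exists; the stream name, region, and record fields are hypothetical.

```python
import json
import boto3

# Hypothetical stream name and region; the data stream must already exist.
kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"transaction_id": "tx-123", "amount": 42.5}

kinesis.put_record(
    StreamName="transactions",                 # assumed stream name
    Data=json.dumps(record).encode("utf-8"),   # payload bytes
    PartitionKey=record["transaction_id"],     # decides which shard receives the record
)
```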

8
Q

What additional AWS services can enhance stream processing?

A

AWS Lambda can be used for complex data transformations. Amazon S3 can store processed data for long-term analysis. Amazon SNS can send alerts on specific events like fraud detection, and Amazon Redshift or Athena can be used for data warehousing and analytics.
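
A hedged sketch of how these pieces might fit together: a Lambda handler triggered by a Kinesis stream applies a toy fraud rule and publishes an alert to an SNS topic. The topic ARN and the amount threshold are assumptions for illustration only.

```python
import base64
import json
import boto3

sns = boto3.client("sns")
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:fraud-alerts"  # hypothetical ARN

def handler(event, context):
    """Lambda handler invoked with a batch of Kinesis records."""
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded inside the event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("amount", 0) > 10_000:          # toy fraud rule
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject="Suspicious transaction",
                Message=json.dumps(payload),
            )
```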

9
Q

Why might microservices architecture be beneficial in a streaming environment?

A

In a streaming environment, microservices offer scalability, reliability, and ease of debugging. They allow the system to be flexible, with each microservice handling a specific task and communicating via message queues.

10
Q

What are the considerations for storing processed messages in stream processing?

A

Storing all processed messages enables complete data tracking and auditing, aiding in troubleshooting and performance analysis. However, this can be expensive, so some opt to store only failed messages to reduce costs while maintaining some level of auditing capability.

11
Q

How does stream processing contribute to real-time fraud detection?

A

Stream processing allows for the immediate analysis of transaction data, enabling the detection of fraudulent activities in real time. By processing each transaction as it occurs, suspicious patterns can be identified and addressed promptly.

12
Q

What is the role of edge computing in stream processing?

A

In edge computing, data processing is performed closer to the data source, reducing latency. This is particularly useful in stream processing for real-time analytics in IoT and other applications where immediate data processing is crucial.

13
Q

How do stream processors handle large-scale data from sources like IoT devices or social media feeds?

A

Stream processors can handle high volumes of data by efficiently ingesting, processing, and routing data from various sources like IoT devices or social media feeds, ensuring scalable and timely data management.

14
Q

What is the significance of data transformation in stream processing?

A

Data transformation in stream processing involves modifying and standardizing data formats, enriching data, and extracting valuable information. This enhances the quality and usability of the data for downstream applications and analytics.

15
Q

How do stream processing systems achieve fault tolerance and high availability?

A

Stream processing systems achieve fault tolerance and high availability through techniques like data replication, checkpointing, and automatic failover, ensuring continuous operation and data integrity in case of system failures.

16
Q

Why is load management important in stream processing, and how is it achieved?

A

Effective load management ensures balanced data processing and prevents bottlenecks. It’s achieved through techniques like partitioning data streams, scaling resources dynamically, and employing efficient data routing strategies.

17
Q

How does stream processing facilitate real-time decision making in business applications?

A

By processing data streams instantly, stream processing enables businesses to make timely decisions based on current data, such as dynamic pricing adjustments, instant customer feedback analysis, or operational optimizations.

18
Q

What are the challenges associated with implementing stream processing?

A

Implementing stream processing can be challenging due to the need for managing large-scale data ingestion, ensuring data quality, handling variable data rates, maintaining state across streams, and integrating with existing systems.

19
Q

How do stream processing and batch processing complement each other in data analytics?

A

While stream processing handles real-time data analysis, batch processing is used for comprehensive analysis of accumulated data. Together, they provide a complete view of data analytics, covering both immediate insights and in-depth historical analysis.

20
Q

What considerations should be made when choosing a stream processing technology or platform?

A

When choosing a stream processing technology, consider factors like scalability, ease of integration, support for different data sources, processing latency, fault tolerance, and the ability to handle specific data processing requirements.

21
Q

How does stream processing handle time-sensitive data?

A

Stream processing is designed to handle time-sensitive data by processing it immediately as it arrives. This is crucial in scenarios like monitoring systems, real-time analytics, or live data feeds, where timely data processing is essential.

22
Q

What is event-driven architecture in the context of stream processing?

A

In event-driven architecture, components react to and process data as events, which are triggered by actions or changes in data. This architecture is integral to stream processing, enabling responsive, real-time data handling.

23
Q

How is data consistency maintained in stream processing?

A

Data consistency in stream processing is maintained through techniques like ensuring idempotence (processing the same data multiple times without changing the result), using exactly-once processing semantics, and maintaining state consistency across distributed systems.
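
A minimal sketch of the idempotence idea, using an assumed `event_id` field to deduplicate redelivered messages; a real system would keep this state in durable storage rather than an in-memory set.

```python
processed_ids = set()   # in production this would be durable state (a database or changelog)

def apply_once(event, state):
    """Idempotent apply: processing the same event twice leaves the result unchanged."""
    if event["event_id"] in processed_ids:
        return state                    # duplicate delivery, ignore it
    processed_ids.add(event["event_id"])
    state["total"] = state.get("total", 0) + event["amount"]
    return state

state = {}
event = {"event_id": "e1", "amount": 10}
apply_once(event, state)
apply_once(event, state)                # redelivered message; total stays 10
print(state)                            # {'total': 10}
```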

24
Q

What is the role of windowing in stream processing?

A

Windowing in stream processing involves grouping incoming data into windows based on time or size criteria, allowing for the processing of data in batches within a stream, which is useful for aggregations or temporal analysis.
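
A small sketch of tumbling (fixed, non-overlapping) windows, assuming event timestamps in seconds; each event is assigned to exactly one 60-second bucket and counted there.

```python
from collections import defaultdict

WINDOW_SECONDS = 60

def window_start(timestamp):
    """Map an event timestamp (in seconds) to the tumbling window it belongs to."""
    return timestamp - (timestamp % WINDOW_SECONDS)

counts = defaultdict(int)
for ts, _payload in [(12, "click"), (47, "click"), (75, "click")]:
    counts[window_start(ts)] += 1

print(dict(counts))   # {0: 2, 60: 1}: two events in [0, 60), one in [60, 120)
```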

25
Q

How do stream processing systems handle varying data formats?

A

Stream processing systems often include data normalization and transformation capabilities to handle varying data formats, ensuring that data from different sources can be integrated and processed uniformly.

26
Q

Why is backpressure management important in stream processing?

A

Backpressure management is crucial for handling scenarios where data is generated faster than it can be processed. It involves controlling the flow of data to prevent system overload and ensure stable operation.

27
Q

How does stream processing contribute to predictive analytics?

A

Stream processing enables predictive analytics by providing real-time data that can be used to make forward-looking predictions. This is useful in areas like fraud detection, demand forecasting, and proactive maintenance.

28
Q

What is the impact of stream processing on database performance?

A

Stream processing can offload real-time data handling from databases, reducing the load on them and improving overall performance. It can also enrich and preprocess data before it is stored, optimizing database storage.

29
Q

How do stream processing frameworks differ in terms of features and use cases?

A

Different stream processing frameworks offer varied features like processing latency, fault tolerance, scalability, and ease of use. The choice of framework often depends on specific use cases, system requirements, and existing technology stacks.

30
Q

What are some best practices for designing a robust stream processing system?

A

Best practices include ensuring scalability, handling variable load, implementing effective error handling and retry mechanisms, maintaining data quality, and integrating monitoring and alerting systems for operational visibility.

31
Q

How is aggregation performed in stream processing?

A

Aggregation in stream processing involves combining multiple data elements in a stream to produce a summary or comprehensive view. Common aggregation operations include counting, summing, averaging, or finding min/max values over a specific window of time or data set.
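
A minimal sketch of incremental aggregation: each value updates a running count, sum, min, and max as it streams past, so nothing needs to be buffered. The class and values are illustrative.

```python
class RunningAggregate:
    """Incremental count/sum/min/max so each event is touched only once."""
    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.minimum = float("inf")
        self.maximum = float("-inf")

    def add(self, value):
        self.count += 1
        self.total += value
        self.minimum = min(self.minimum, value)
        self.maximum = max(self.maximum, value)

    @property
    def average(self):
        return self.total / self.count if self.count else 0.0

agg = RunningAggregate()
for reading in [3.0, 7.5, 1.2]:
    agg.add(reading)
print(agg.count, agg.average, agg.minimum, agg.maximum)
```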

32
Q

What are time windows in stream aggregation?

A

Time windows are used in stream aggregation to define the time frame over which data is aggregated. They can be fixed (tumbling windows), overlapping (sliding windows), or session-based, where the window is defined by periods of activity.
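
A rough sketch of how an event timestamp might be assigned to tumbling versus sliding windows (integer timestamps in seconds, illustrative only): a tumbling window claims the event once, while overlapping sliding windows can claim it several times.

```python
def tumbling_window(ts, size):
    """The single fixed window [start, start + size) that contains ts."""
    return ts - (ts % size)

def sliding_windows(ts, size, slide):
    """All window start times (>= 0) whose [start, start + size) interval contains ts."""
    return [start for start in range(0, ts + 1, slide) if start <= ts < start + size]

print(tumbling_window(65, 60))       # 60 -> the event belongs to window [60, 120)
print(sliding_windows(65, 60, 30))   # [30, 60] -> it falls into two overlapping windows
```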

33
Q

How do stream processing systems handle late-arriving data in aggregations?

A

Stream processing systems handle late-arriving data by using techniques like watermarking, which allows a certain threshold of delay for data, or by updating the aggregates once the late data arrives, ensuring accuracy in the aggregation results.
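
A simplified sketch of watermarking: windows whose end falls behind the watermark are treated as closed, and later events for them are dropped (real systems may instead emit corrections). The lateness threshold and window size are assumptions.

```python
ALLOWED_LATENESS = 10   # assumed: seconds of delay tolerated before a window closes
WINDOW_SIZE = 60

windows = {}            # window start -> running count
watermark = 0           # "no event older than this is expected any more"

def on_event(ts):
    global watermark
    watermark = max(watermark, ts - ALLOWED_LATENESS)
    start = ts - (ts % WINDOW_SIZE)
    if start + WINDOW_SIZE <= watermark:
        print(f"event at t={ts} is too late; window [{start}, {start + WINDOW_SIZE}) is closed")
        return
    windows[start] = windows.get(start, 0) + 1

for ts in [5, 62, 130, 58]:   # 58 arrives after the watermark has moved past its window
    on_event(ts)
print(windows)                # {0: 1, 60: 1, 120: 1}
```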

34
Q

What is the role of state management in stream aggregation?

A

State management is crucial in stream aggregation for maintaining the state of ongoing aggregations. It tracks the current state of data as it passes through the system, allowing for accurate and consistent aggregations over time.

35
Q

How do micro-batching and aggregation relate in stream processing?

A

Micro-batching in stream processing involves processing small, discrete batches of streaming data. It is often used in aggregation to group a set of incoming data points, allowing for batch-like processing within a streaming context.

36
Q

What challenges are associated with aggregation in stream processing?

A

Challenges in stream aggregation include managing out-of-order data, handling large volumes of data in real-time, ensuring low-latency processing, and maintaining state consistency across distributed systems.

37
Q

How is window aggregation optimized in high-throughput scenarios?

A

In high-throughput scenarios, window aggregation is optimized by parallelizing the processing, using efficient data structures for state management, and applying techniques like pre-aggregation to reduce the volume of data that needs to be processed.

38
Q

Why is aggregation important in real-time analytics?

A

Aggregation is vital in real-time analytics as it condenses large volumes of streaming data into meaningful summaries, trends, and patterns, enabling quick insights and decision-making based on current data.

39
Q

What are the considerations for choosing an aggregation strategy in stream processing?

A

Choosing an aggregation strategy involves considering factors like the nature of the data, the required latency of results, the scale of data processing, and the specific business or analytical objectives of the aggregation.

40
Q

Why is Kafka optimized for high throughput?

A

Kafka is designed to move a large volume of data efficiently, similar to a large pipe moving liquid. Its architecture and design choices enable it to handle high data throughput, meaning it can process a significant amount of data quickly.

41
Q

What is the impact of Kafka’s reliance on sequential I/O?

A

Kafka’s use of sequential I/O, which involves reading and writing data blocks one after the other, enhances its performance. Unlike random access, sequential access is faster on hard drives because it doesn’t require moving the drive’s read/write arm to different locations on the disk.

42
Q

How does the append-only log contribute to Kafka’s performance?

A

Kafka uses an append-only log as its primary data structure. This means new data is continuously added to the end of the file, creating a sequential access pattern that is significantly faster compared to random writes.
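
A toy append-only log in Python, illustrating the idea rather than Kafka’s actual on-disk format: records are only ever appended (never updated in place), so writes are sequential and each record gets a monotonically increasing offset.

```python
class AppendOnlyLog:
    """Toy append-only log: sequential writes, offsets for later reads."""
    def __init__(self, path):
        self.path = path
        self.offsets = []                   # byte position of each record

    def append(self, record: bytes) -> int:
        with open(self.path, "ab") as f:    # "ab" = append-only
            self.offsets.append(f.tell())
            f.write(record + b"\n")
        return len(self.offsets) - 1        # logical offset of the new record

    def read(self, offset: int) -> bytes:
        with open(self.path, "rb") as f:
            f.seek(self.offsets[offset])
            return f.readline().rstrip(b"\n")

log = AppendOnlyLog("events.log")
first = log.append(b'{"user": "a", "action": "click"}')
print(log.read(first))
```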

43
Q

What are the benefits of sequential access in Kafka’s context?

A

Sequential access in Kafka allows for much faster read and write speeds compared to random access. On modern hardware, sequential writes can reach hundreds of megabytes per second, making Kafka’s data handling highly efficient.

44
Q

Why is Kafka cost-effective in terms of storage?

A

Kafka’s design allows it to use hard disks, which are cheaper and offer more capacity compared to SSDs. This makes Kafka cost-effective for retaining large amounts of messages over extended periods without significant performance penalties.

45
Q

How does Kafka’s zero-copy principle enhance its efficiency?

A

Kafka’s zero-copy principle optimizes data transfer from disk to network. It minimizes unnecessary data copying and system calls, allowing direct data transfer from the OS cache to the network interface card buffer, enhancing overall efficiency.

46
Q

What is the role of the sendfile() system call in Kafka’s performance?

A

The sendfile() system call in Kafka enables direct data transfer from the OS cache to the network interface card buffer. It reduces the number of data copies and system calls, making the data transfer process more efficient.
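
A minimal, Linux-only sketch of the same idea using Python’s `os.sendfile`, which wraps the sendfile() system call: bytes move from the page cache straight to the socket without passing through application memory. The surrounding socket setup is omitted, so this is illustrative rather than a complete server.

```python
import os
import socket

def serve_file(conn: socket.socket, path: str) -> None:
    """Stream a file to a connected socket via sendfile(), skipping user-space copies."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # os.sendfile(out_fd, in_fd, offset, count) returns how many bytes were sent
            sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
```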

47
Q

What is DMA, and how does it contribute to Kafka’s performance?

A

DMA (Direct Memory Access) allows data to be transferred directly to the network card buffer without involving the CPU. This process is highly efficient and plays a critical role in Kafka’s ability to transfer data quickly.

48
Q

How do modern network cards enhance Kafka’s data transfer process?

A

Modern network cards support DMA, which facilitates efficient data copying directly from the OS cache to the network card buffer. This hardware optimization aligns well with Kafka’s zero-copy principle.

49
Q

Why are sequential I/O and zero-copy important for Kafka?

A

Sequential I/O and the zero-copy principle are key to Kafka’s high performance. They enable Kafka to utilize modern hardware efficiently, achieving high throughput and fast data processing.

50
Q

How does Kafka’s design support high durability of data?

A

Kafka ensures high data durability by replicating data across multiple nodes in a Kafka cluster. This replication means that even if a node fails, the data is still available on other nodes, ensuring its persistence and durability.

51
Q

What is the role of partitioning in Kafka’s scalability?

A

Kafka partitions topics (streams of messages) across multiple servers. This partitioning allows Kafka to handle more data by distributing the load, thereby enhancing scalability and parallel processing capabilities.
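
A rough sketch of key-based partitioning: hashing the message key picks a partition, so all messages with the same key land on (and stay ordered within) the same partition. Kafka’s default partitioner uses murmur2; md5 here is only for illustration, and the partition count is assumed.

```python
import hashlib

NUM_PARTITIONS = 6   # assumed partition count for the topic

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Hash the message key so the same key always maps to the same partition."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

for user in ["alice", "bob", "carol"]:
    print(user, "-> partition", partition_for(user))
```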

52
Q

How does Kafka handle large-scale data distribution efficiently?

A

Kafka efficiently handles large-scale data distribution by using a publish-subscribe model, where data producers send messages to topics and consumers retrieve messages they’re interested in. This decouples data production from consumption, enabling efficient and scalable distribution.

53
Q

Why is Kafka suitable for real-time data processing applications?

A

Kafka’s ability to handle high-throughput, low-latency data transfers makes it suitable for real-time data processing applications. It can process and deliver large volumes of data with minimal delay, essential for real-time analytics and decision-making.

54
Q

How does Kafka’s log compaction feature contribute to its efficiency?

A

Kafka’s log compaction feature allows it to maintain only the latest value for each key in its log, reducing the size of data stored and transmitted. This is particularly useful for stateful applications where only the most recent data is relevant.
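
A simplified illustration of the compaction idea (not Kafka’s actual algorithm): only the most recent value for each key survives.

```python
def compact(log):
    """Keep only the latest value per key, preserving the order of surviving records."""
    latest = {}
    for offset, (key, value) in enumerate(log):
        latest[key] = (offset, value)
    survivors = sorted(latest.items(), key=lambda item: item[1][0])
    return [(key, value) for key, (offset, value) in survivors]

log = [("user-1", "addr A"), ("user-2", "addr B"), ("user-1", "addr C")]
print(compact(log))   # [('user-2', 'addr B'), ('user-1', 'addr C')]
```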

55
Q

What is the significance of Kafka’s consumer groups for parallel processing?

A

Kafka’s consumer groups allow multiple consumers to work together as a group to consume data from topics. This enables parallel processing as different consumers in the group can read different partitions of a topic simultaneously, increasing throughput.
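
A minimal consumer sketch using the third-party kafka-python client, with an assumed topic name and group id. Starting a second process with the same `group_id` makes the brokers split the topic’s partitions between the two consumers, which is what gives the parallelism.

```python
from kafka import KafkaConsumer   # third-party kafka-python package (assumed installed)

consumer = KafkaConsumer(
    "payments",                         # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="fraud-checkers",          # consumers sharing this id split the partitions
    auto_offset_reset="earliest",
)

for message in consumer:
    # Each record of the topic is delivered to exactly one consumer in the group.
    print(message.partition, message.offset, message.value)
```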

56
Q

How does Kafka ensure message ordering within partitions?

A

Kafka guarantees that messages within a single partition are kept in the order they were appended (their offset order). This consistent, reliable ordering within a partition is crucial for many data processing tasks and applications.

57
Q

What makes Kafka’s storage mechanism efficient for large data volumes?

A

Kafka’s storage mechanism is efficient for handling large data volumes due to its use of immutable log files and sequential disk I/O. This minimizes disk seek time and maximizes throughput, even with very large data sets.

58
Q

How does Kafka support stream processing frameworks like Apache Flink or Spark Streaming?

A

Kafka is often used as a data source for stream processing frameworks like Apache Flink or Spark Streaming. Its ability to provide high-throughput, durable, and real-time data streams makes it an ideal backbone for complex stream processing tasks.

59
Q

What are Kafka Connect and Kafka Streams, and how do they enhance Kafka’s capabilities?

A

Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems. Kafka Streams is a client library for building applications and microservices where the input and output data are stored in Kafka topics. Both tools extend Kafka’s capabilities for data integration and processing.

60
Q

The SOLID Principles

S — Single Responsibility

A

If a Class has many responsibilities, it increases the possibility of bugs, because making changes to one of its responsibilities could affect the others without you knowing.

Goal

This principle aims to separate behaviours so that if bugs arise as a result of your change, they won’t affect other unrelated behaviours.
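
A small illustrative sketch of this principle in Python (hypothetical classes): generating a report and saving it are separate responsibilities, so a change to storage cannot break generation.

```python
class ReportGenerator:
    """Only knows how to build the report content."""
    def generate(self, data: dict) -> str:
        return f"Sales total: {sum(data.values())}"

class ReportSaver:
    """Only knows how to persist a report; changing storage doesn't touch generation."""
    def save(self, report: str, path: str) -> None:
        with open(path, "w") as f:
            f.write(report)

report = ReportGenerator().generate({"mon": 10, "tue": 12})
ReportSaver().save(report, "report.txt")
```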

61
Q

The SOLID Principles

O — Open-Closed

A

Classes should be open for extension, but closed for modification.
Changing the current behaviour of a Class will affect all the systems using that Class.

If you want the Class to perform more functions, the ideal approach is to add to the functions that already exist, NOT change them.

Goal

This principle aims to extend a Class’s behaviour without changing the existing behaviour of that Class. This is to avoid causing bugs wherever the Class is being used.
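
A short illustrative sketch (hypothetical classes): new behaviour is added by creating new subclasses rather than editing existing code.

```python
from abc import ABC, abstractmethod

class DiscountRule(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountRule):
    def apply(self, price: float) -> float:
        return price

class SeasonalDiscount(DiscountRule):
    def apply(self, price: float) -> float:
        return price * 0.9

def checkout(price: float, rule: DiscountRule) -> float:
    # New discount types arrive as new subclasses; this function never changes.
    return rule.apply(price)

print(checkout(100.0, SeasonalDiscount()))  # 90.0
```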

62
Q

The SOLID Principles

L — Liskov Substitution

A

If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program.
When a child Class cannot perform the same actions as its parent Class, this can cause bugs.

If you have a Class and create another Class from it, it becomes a parent and the new Class becomes a child. The child Class should be able to do everything the parent Class can do. This process is called Inheritance.

The child Class should be able to process the same requests and deliver the same result as the parent Class or it could deliver a result that is of the same type.

For example, suppose the parent Class delivers Coffee (it could be any type of coffee). It is acceptable for the child Class to deliver a Cappuccino, because that is a specific type of Coffee, but it is NOT acceptable for it to deliver Water.

If the child Class doesn’t meet these requirements, it no longer behaves like its parent and violates this principle.

Goal

This principle aims to enforce consistency so that the parent Class or its child Class can be used in the same way without any errors.
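
A small sketch of the coffee example in code (illustrative class names): a child may return a more specific Coffee, but returning something that is not a Coffee breaks substitution.

```python
class Coffee:
    """Parent type: any coffee."""

class Cappuccino(Coffee):
    """A specific kind of coffee; still a Coffee."""

class CoffeeMachine:
    def brew(self) -> Coffee:
        return Coffee()

class CappuccinoMachine(CoffeeMachine):
    def brew(self) -> Coffee:
        return Cappuccino()        # fine: callers expecting Coffee still get Coffee

class WaterDispenser(CoffeeMachine):
    def brew(self):
        return "water"             # violates the principle: not a Coffee at all

def serve(machine: CoffeeMachine) -> Coffee:
    cup = machine.brew()
    assert isinstance(cup, Coffee), "substituting this child broke the caller"
    return cup

serve(CappuccinoMachine())         # works
# serve(WaterDispenser())          # would trip the assertion
```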

63
Q

The SOLID Principles

I — Interface Segregation

A

Clients should not be forced to depend on methods that they do not use.
When a Class is required to perform actions that are not useful, it is wasteful and may produce unexpected bugs if the Class does not have the ability to perform those actions.

A Class should perform only actions that are needed to fulfil its role. Any other action should be removed completely or moved somewhere else if it might be used by another Class in the future.

Goal

This principle aims at splitting a set of actions into smaller sets so that a Class executes ONLY the set of actions it requires.
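
A brief illustrative sketch (hypothetical interfaces): splitting one large interface into small ones means a simple printer is never forced to implement scanning.

```python
from abc import ABC, abstractmethod

class Printable(ABC):
    @abstractmethod
    def print_document(self, doc: str) -> None: ...

class Scannable(ABC):
    @abstractmethod
    def scan_document(self) -> str: ...

class SimplePrinter(Printable):
    # Not forced to pretend it can scan.
    def print_document(self, doc: str) -> None:
        print("printing", doc)

class OfficeMachine(Printable, Scannable):
    def print_document(self, doc: str) -> None:
        print("printing", doc)
    def scan_document(self) -> str:
        return "scanned contents"
```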

64
Q

The SOLID Principles

D — Dependency Inversion

A

Firstly, let’s define the terms used here more simply:

High-level Module (or Class): a Class that executes an action with a tool.

Low-level Module (or Class): the tool that is needed to execute the action.

Abstraction: an interface that connects the two Classes.

Details: how the tool works.

This principle says a Class should not be fused with the tool it uses to execute an action. Rather, it should be fused to the interface that will allow the tool to connect to the Class.

It also says that both the Class and the interface should not know how the tool works. However, the tool needs to meet the specification of the interface.

Goal

This principle aims at reducing the dependency of a high-level Class on the low-level Class by introducing an interface.
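
A short illustrative sketch (hypothetical classes): the high-level class depends only on the abstraction, and the low-level tools can be swapped without touching it.

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """The abstraction both sides depend on."""
    @abstractmethod
    def send(self, text: str) -> None: ...

class EmailSender(MessageSender):          # low-level detail
    def send(self, text: str) -> None:
        print("emailing:", text)

class SmsSender(MessageSender):            # another detail, swappable
    def send(self, text: str) -> None:
        print("texting:", text)

class OrderService:                        # high-level class
    def __init__(self, sender: MessageSender) -> None:
        self.sender = sender               # depends on the interface, not the tool

    def confirm(self, order_id: str) -> None:
        self.sender.send(f"Order {order_id} confirmed")

OrderService(EmailSender()).confirm("42")
OrderService(SmsSender()).confirm("43")
```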