Introduction to Data Engineering Flashcards

1
Q

What is Data Engineering?

A

Data Engineering is the discipline concerned with collecting, storing, and retrieving data at scale. It involves designing, building, and maintaining architectures such as databases and large-scale processing systems.

2
Q

What are the key responsibilities of a Data Engineer?

A

Data Engineers manage and organize data: they transform and cleanse it and ensure its integrity for analysis. They also develop, construct, test, and maintain data architectures.

3
Q

Which programming languages are commonly used in Data Engineering?

A

Common programming languages in Data Engineering include SQL for database management, Python for data manipulation and analysis, and sometimes Java or Scala, particularly in big data environments.

4
Q

What is the significance of ‘big data’ in Data Engineering?

A

In Data Engineering, ‘big data’ refers to large, complex data sets. The significance lies in the engineer’s ability to prepare and process this data for analysis, which can lead to valuable insights for businesses and organizations.

5
Q

What are some common tools and technologies used in Data Engineering?

A

Common tools include SQL for database querying, Hadoop and Apache Spark for big data processing, ETL (Extract, Transform, Load) tools, and data warehousing solutions like Amazon Redshift or Google BigQuery.

6
Q

How does the skill set of a Data Engineer differ from that of a Data Scientist?

A

Data Engineers typically have strong software engineering skills with expertise in database design and large-scale processing systems. Data Scientists, on the other hand, have skills in statistics, machine learning, and data visualization.

7
Q

In what way does the output of a Data Analyst differ from that of a Data Scientist?

A

Data Analysts usually provide more straightforward, descriptive analytics and reporting based on existing data, whereas Data Scientists deliver more complex predictive and prescriptive insights, often creating models to predict future trends.

8
Q

What are the key responsibilities of a Data Analyst?

A

Data Analysts primarily focus on processing and performing statistical analysis on existing datasets. They interpret data, analyze results, and provide ongoing reports, often using tools like Excel, SQL, or BI tools.

9
Q

What is the main role of a Data Scientist?

A

A Data Scientist’s main role is to analyze and interpret complex data to help make informed decisions. They use advanced statistical techniques, machine learning, and predictive modeling to uncover insights and trends.

10
Q

What is a Data Pipeline?

A

A data pipeline is a set of actions that extract data from various sources, transform it into a format suitable for analysis, and load it into a final storage system.
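
To make this concrete, here is a minimal sketch of such a pipeline in Python, using only the standard library. The file name events.csv, the column names, and the table schema are all hypothetical.

```python
import csv
import sqlite3

def extract(path):
    """Extract: read raw rows from a CSV source."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Transform: normalize fields and drop incomplete records."""
    for row in rows:
        if not row.get("user_id"):
            continue  # skip rows missing a required field
        yield (int(row["user_id"]), row["event"].strip().lower())

def load(records, db_path="warehouse.db"):
    """Load: write the cleaned records into a SQLite table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS events (user_id INTEGER, event TEXT)")
    con.executemany("INSERT INTO events VALUES (?, ?)", records)
    con.commit()
    con.close()

# Chain the three stages into one pipeline run.
load(transform(extract("events.csv")))
```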

11
Q

What are the key components of a Data Pipeline?

A

Key components include data sources, data extraction tools, data transformation processes, data storage destinations, and often orchestration tools to manage the workflow.

12
Q

How do you start building a Data Pipeline?

A

Begin by defining the data sources and the type of data you need. Next, decide how to extract this data, determine the transformations needed to make the data useful, and choose where to store the processed data.

13
Q

What is ETL in the context of Data Pipelines?

A

ETL stands for Extract, Transform, Load. It’s a process where data is extracted from various sources, transformed into a format that can be analyzed, and then loaded into a data warehouse or other systems.

14
Q

What is a Real-Time Data Pipeline?

A

A real-time data pipeline processes data as soon as it is generated, with minimal latency, enabling immediate analysis and decision-making.

15
Q

How does a Real-Time Data Pipeline differ from a Batch Data Pipeline?

A

Unlike batch data pipelines, which process data in periodic batches, real-time pipelines handle data continuously and immediately as it arrives.
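
The difference is easiest to see in code. Below is a toy Python sketch with a simulated event source (a real pipeline would read from a broker or log instead): the batch version buffers events and processes them together, while the streaming version handles each event the moment it arrives.

```python
import time

def event_stream():
    """Stand-in for a real source such as a message queue."""
    for i in range(10):
        yield {"id": i, "value": i * 2}
        time.sleep(0.1)  # events trickle in over time

def run_batch(batch_size=5):
    """Batch style: buffer events, process them together periodically."""
    batch = []
    for event in event_stream():
        batch.append(event)
        if len(batch) >= batch_size:
            print("processing batch:", [e["id"] for e in batch])
            batch.clear()

def run_streaming():
    """Streaming style: process every event immediately on arrival."""
    for event in event_stream():
        print("processing event:", event["id"])

run_batch()
run_streaming()
```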

16
Q

What are common use cases for Real-Time Data Pipelines?

A

Common use cases include fraud detection, live financial trading analysis, social media monitoring, and real-time advertising and recommendation systems.

17
Q

What are key technologies used in Real-Time Data Pipelines?

A

Technologies often used include Apache Kafka for data ingestion, Apache Flink and Apache Storm for stream processing, and real-time databases like Apache Cassandra.

18
Q

What are the challenges of building Real-Time Data Pipelines?

A

Challenges include ensuring low-latency processing, handling variable data loads, maintaining data quality and consistency, and integrating with different systems and technologies.

19
Q

What is Big Data?

A

Big data refers to complex and voluminous data sets that traditional data processing software cannot manage effectively. It’s characterized by high volume, high velocity, and high variety.

20
Q

What are the 3 Vs of Big Data?

A

The 3 Vs are Volume (large amounts of data), Velocity (speed of data in and out), and Variety (range of data types and sources).

21
Q

How is Big Data used?

A

Big data is used for predictive analytics, user behavior analytics, and other advanced data analytics methods that extract value from data.

22
Q

What are some challenges associated with Big Data?

A

Challenges include data capture, storage, analysis, search, sharing, transfer, visualization, querying, updating, and information privacy.

23
Q

What is Apache Hadoop?

A

Apache Hadoop is an open-source framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines.
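
With Hadoop's Streaming interface, a MapReduce job can be written as two small scripts that read stdin and write tab-separated key/value lines to stdout. Here is the classic word-count example sketched in Python; the exact hadoop-streaming invocation varies by installation.

```python
# mapper.py -- emit one "word<TAB>1" line per word seen
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop sorts mapper output by key before this runs
import sys
from itertools import groupby

pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
for word, group in groupby(pairs, key=lambda kv: kv[0]):
    print(f"{word}\t{sum(int(count) for _, count in group)}")
```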

24
Q

What is Apache Spark?

A

Apache Spark is an open-source, distributed computing system that offers speed, ease of use, and a sophisticated analytics toolkit. It performs up to 100 times faster than Hadoop MapReduce for certain applications.
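
A short PySpark sketch of distributed aggregation (assumes pyspark is installed; the sales.csv file and its region and amount columns are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a Spark session (local mode here; a cluster in production).
spark = SparkSession.builder.appName("demo").getOrCreate()

# Read a CSV file into a distributed DataFrame.
df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# The aggregation is planned and executed in parallel across partitions.
df.groupBy("region").agg(F.sum("amount").alias("total")).show()

spark.stop()
```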

25
Q

What are NoSQL Databases?

A

NoSQL databases are non-tabular databases that store data differently than relational tables. These databases come in a variety of types based on their data model, such as key-value, document, wide-column, and graph formats.
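
For instance, in a document database such as MongoDB, records are flexible documents rather than rows in a fixed schema. A sketch using the pymongo driver (assumes a MongoDB server on localhost; the database, collection, and documents are made up):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# Documents in the same collection need not share a schema.
db.users.insert_one({"name": "Ada", "email": "ada@example.com"})
db.users.insert_one({"name": "Grace", "roles": ["admin"], "active": True})

# Query by field value; no fixed columns or JOINs involved.
for user in db.users.find({"name": "Ada"}):
    print(user)
```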

26
Q

What is a Data Lake?

A

A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale. It can store data in its native format and run different types of analytics to extract insights.

27
Q

What is Apache Kafka?

A

Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and later donated to the Apache Software Foundation. It is used for building real-time data pipelines and streaming apps.
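
A minimal producer/consumer sketch using the third-party kafka-python client (assumes a broker at localhost:9092; the topic name clicks is hypothetical):

```python
from kafka import KafkaConsumer, KafkaProducer

# Producer: append a message to a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("clicks", b'{"user": 42, "page": "/home"}')
producer.flush()

# Consumer: read messages from the topic as they arrive.
consumer = KafkaConsumer(
    "clicks",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)
    break  # stop after one message in this sketch
```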

28
Q

What is the purpose of Machine Learning in Big Data?

A

In big data, machine learning is used to analyze large volumes of data to identify patterns and make predictions. It automates analytical model building and can uncover insights that might not be evident from manual analysis.
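
As a toy illustration of automated model building, the scikit-learn sketch below learns a threshold from labeled examples instead of hand-coding the rule (the data is synthetic; at true big-data scale the same idea runs on distributed tooling such as Spark MLlib):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: one feature, labels follow a threshold pattern.
X = np.array([[1], [2], [3], [10], [11], [12]])
y = np.array([0, 0, 0, 1, 1, 1])

# The model learns the pattern from the data rather than being told it.
model = LogisticRegression().fit(X, y)

# Apply the learned pattern to unseen values.
print(model.predict([[2.5], [10.5]]))  # expected: [0 1]
```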

29
Q

What is OLTP (Online Transaction Processing)?

A

OLTP is a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval. It is characterized by a large number of short online transactions.

30
Q

What are the key characteristics of OLTP systems?

A

OLTP systems are optimized for managing transactional data: high volumes of short, fast transactions, quick query processing, and data integrity in multi-access environments.

31
Q

How does OLTP differ from OLAP (Online Analytical Processing)?

A

OLTP is focused on transactional processing, handling a large number of small transactions like updating a sales record. OLAP, on the other hand, is focused on analytical processing, suitable for complex queries and data analysis.
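
The contrast shows up directly in the queries each system runs. A sketch using Python's built-in sqlite3 module (the sales table and its columns are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("east", 100.0), ("west", 250.0), ("east", 75.0)],
)

# OLTP-style: a short transaction touching a single row.
con.execute("UPDATE sales SET amount = 110.0 WHERE id = 1")
con.commit()

# OLAP-style: an aggregate query scanning the whole history.
for row in con.execute("SELECT region, SUM(amount), COUNT(*) FROM sales GROUP BY region"):
    print(row)
```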

32
Q

What is the importance of OLTP in businesses?

A

OLTP systems are crucial in businesses as they support critical day-to-day transactional tasks in sectors like banking, retail, manufacturing, and any field that requires constant data processing.

33
Q

What are some common features of OLTP databases?

A

Common features include rapid transaction processing, high data availability, support for multi-user environments, atomicity, consistency, isolation, and durability (ACID) properties, and frequent but short database transactions.
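
Atomicity is the easiest of the ACID properties to demonstrate: every statement in a transaction takes effect, or none do. A sketch with Python's sqlite3 module (the accounts table is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100.0), ("bob", 50.0)])
con.commit()

# A transfer must be atomic: both updates commit together, or neither does.
try:
    with con:  # the 'with' block wraps both statements in one transaction
        con.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        con.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    print("transfer rolled back, balances unchanged")

print(con.execute("SELECT * FROM accounts ORDER BY name").fetchall())
```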

34
Q

What is an OLAP Database?

A

An OLAP database is designed for fast, effective analysis of data. It enables complex queries, data analysis, and reporting, often dealing with large volumes of data from multiple sources.

35
Q

How does an OLAP Database differ from an OLTP Database?

A

While OLTP (Online Transaction Processing) databases are optimized for transactional processing with high volumes of short transactions, OLAP (Online Analytical Processing) databases are designed for complex queries and data analysis.

36
Q

What are the key features of OLAP Databases?

A

OLAP databases feature multidimensional data models, allowing for complex analytical and ad-hoc queries with rapid execution. They often provide advanced capabilities like data aggregation and summarization.

37
Q

Why are OLAP Databases important in Business Intelligence?

A

OLAP databases are crucial in Business Intelligence for their ability to quickly analyze data from multiple perspectives, assisting in decision-making, trend analysis, and forecasting.
