Exam questions Flashcards Preview


Flashcards in Exam questions Deck (107):
1

What is the storage transfer service?

The Storage Transfer Service is a tool for importing data from online sources into GCS.

2

When would you use the storage transfer service?

When transferring data from an online source (HTTP(S), Amazon S3 etc.) to a data sink (Eg. GCS).

3

What is GCS?

Google Cloud Storage, Google's online file storage service.

4

What is a bucket?

It's similar to a partition (file system) on a hard drive. A segregated place to store files.

5

Which access rights do you need to use the storage transfer service?

Editor or owner of the project that creates the transfer job. Viewers can view info/jobs.

6

When do you use storage transfer service vs. gsutil?

On-premise, use gsutil. Use STS when transferring from remote cloud storage providers.

7

What is Google Transfer Appliance?

High-capacity storage servers rented from Google. Used to transfer data too large to send over the network.

8

When would you use the Google Transfer Appliance?

When transferring your data over the Internet takes more than a few weeks. (Evaluate data / speed)

9

How is Google Transfer Appliance priced?

100TB: $300
480TB: $1800

Late fees apply if the appliance is kept beyond the rental period.

10

How is the storage transfer service priced?

No extra costs for the transfer service itself. Other fees apply, like:
- GCS storage/bandwidth ($0.01/GB regional or ~$0.1 inter-regional)
- Data source's pricing
- Data insertion costs (PUT operations).

11

What is nearline storage?

Storage for infrequently used data (accessed about once a month). Compared to coldline: higher cost per GB, lower retrieval (bandwidth) cost.

12

What is coldline storage?

Storage for rarely used data, eg. backups. Low cost per GB, but high bandwidth cost.

13

What's the price difference between nearline/coldline storage?

Nearline: $0.01/GB
Coldline: $0.007/GB

14

What is data egress?

Data leaving a location, eg. downloads to the Internet or transfers across regions. Egress traffic is typically billed.

15

When would you use GCS as your storage platform?

If
- your data is not structured
- you don't need mobile SDKs

16

When would you use Google Firebase as your storage platform?

When
- your data is (un)structured,
- your data is non-relational
- your main workload is not analytics
- you need mobile SDKs.

17

When would you use Cloud Spanner as your storage platform?

When
- your data is relational,
- you don't primarily need analytics
- you need horizontal scaling.

18

What is horizontal scaling?

When you add more machines.

19

What is vertical scaling?

When you add more power to an existing machine.

20

When would you use Cloud SQL as your storage platform?

When
- your data is relational
- you don't primarily need analytics
- you don't need to scale horizontally.

21

When would you use Cloud Datastore as your storage platform?

When
- your data is structured
- your data is non-relational
- your primary workload is not analytics

22

When would you use Cloud Bigtable as your storage platform?

When
- your data is structured
- your data is non-relational
- your workload is analytics
- you need low latency

23

When would you use BigQuery as your storage platform?

When
- your data is structured
- your data is non-relational
- your workload is analytics
- you don't need low latency

24

Describe Google Cloud Storage.

A scalable, fully-managed, highly reliable, and cost-efficient object / blob store.

25

What is CBT (Cloud Bigtable)?

A scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.

26

What is cloud datastore?

A scalable, fully-managed NoSQL document database for your web and mobile applications.

27

What is cloud sql?

A fully-managed MySQL and PostgreSQL database service that is built on the strength and reliability of Google’s infrastructure.

28

What is cloud spanner?

Mission-critical, relational database service with transactional consistency, global scale and high availability.

29

What is bigquery?

A scalable, fully-managed Enterprise Data Warehouse (EDW) with SQL and fast response times.

30

What is OLAP short for?

Online analytical processing.

31

Give some examples of unused data.

- Google Street View data.
- Emails.
- Parking footage.
- Purchase history.

32

What are some barriers to big data analysis?

- Unstructured data
- Too large data amounts
- Data quality
- Too fast data streams

33

Big data problems are often called counting problems. What's the difference between easy and hard counting problems?

Hard problems:
Difficult to quantify "fitness". Eg. vision analysis or natural language processing.

Easy problems:
Straightforward problems but large data amounts.

34

Is one petabyte large?

Depends on data type and funds. A petabyte is a lot of text, but not necessarily a lot of pictures or video.

BUT a large amount of data does not necessarily mean long processing times.

35

Describe how MapReduce works.

Split the data into small, parallelizable chunks. The output is then aggregated later.

36

What is the difference between typical development with Dataproc and typical Spark/Hadoop?

Dataproc manages all the setup necessary in Spark/Hadoop.

Spark/Hadoop has a lot of setup, config and optimization.

37

What are some drawbacks of managing a Hadoop cluster yourself?

- Difficult to scale/add new hardware.
- Less than 100% utilization -> bigger cost.
- Downtime when upgrading/redistributing tasks.

38

What is a cluster?

A setup of master and worker nodes for crunching big data tasks. Data is centralized in master nodes and distributed (mapped) to worker nodes.

39

Why use nearby zones?

- Lower latency
- Egress (exporting) data might incur costs

40

What's the difference between cluster masters and nodes?

Master: Contains and splits data so workers can work in chunks. This is called mapping. Aggregates data later in reducing.

Worker: Compute power attached to a master node. Receives data and processes it. Workers might be configured as preemptible and can disappear from the cluster.

41

What is a preemptive worker?

Unused compute capacity from Google may be allocated and utilized at a discount. Think of last-minute airplane tickets. Preemptible workers can be revoked when someone else requests that capacity.

42

What can images do for Dataproc?

Clusters can be installed with different versions of software stack.

43

What is the gcloud tool?

A command-line program for interfacing with Google Cloud services, including creating Dataproc clusters and submitting jobs.

44

What is pyspark?

A python interface to the Spark framework for distributed computing.

45

What is the hamburger stack?

The three-line menu icon in the top-left corner of the Google Cloud web console.

46

How can you make custom machines? What can be changed?

Through the web console or command line.

CPU and memory can be changed.

47

How are data and processing structured in MapReduce?

The data and operations are separated.

48

How is data stored in a Hadoop system?

Data is typically split into multiple parts on the Hadoop file system (HDFS). This is called sharding.

49

What is sharding?

Splitting data into several chunks for processing.
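The idea can be sketched in a few lines of Python; `shard` here is a hypothetical helper for illustration, not a GCP API:

```python
def shard(data, num_shards):
    """Split data into num_shards roughly equal chunks (a toy illustration of sharding)."""
    size = (len(data) + num_shards - 1) // num_shards  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

print(shard(list(range(10)), 3))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```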

50

What is the traditional way of storing data on Hadoop vs. Google's way?

Traditional: Sharded data is transferred to each node separately.

Google's way: Data is stored in Google Cloud Storage.

51

Describe a traditional Google workflow.

Ingest -> process -> analysis
using
Pub/Sub -> Dataflow -> BigQuery

52

What's a problem with keeping data on Hadoop nodes?

If a node dies, its data must be moved.

53

How should you move data to Hadoop on Dataproc?

1) Move data to GCS.
2) Update prefixes (hdfs:// to gs://).
3) Start using Hadoop on Dataproc as usual.

54

How do you install software to a Dataproc stack?

1) Write an init script.
2) Upload it to GCS.
3) Provide it when creating a Dataproc cluster.

55

What is Hadoop?

Apache Hadoop is an open-source software framework for distributed storage and processing of very large datasets using the MapReduce programming model.

56

What is Apache Pig?

Apache Pig is an abstraction over MapReduce. It's a tool/platform used to analyze larger datasets with a data flow representation.

57

What is PySpark?

PySpark is a Python library for interacting with Spark.

58

What is Spark?

Spark is a big data platform similar to Hadoop.

59

What is BigQuery?

BigQuery is a data warehouse for data analysis. It's built to run large SQL statements. It supports streaming ingestion of data, which offers real-time analysis.

60

What is DataFlow?

DataFlow is a service for transforming and enriching data in stream and batch modes.

61

In statistics, what is accuracy?

How many items did you get right out of the total?

(TP + TN) /
(TP + TN + FP + FN)

62

Assume you're using a device to test for infected people in a village. Describe what recall is in this case.

Recall: Out of the people who were actually infected, what percentage did the device catch?

TP / (TP + FN)

63

Assume you're using a device to test for infected people in a village. Describe what precision is in this case.

Precision: Out of the people who tested positive, what percentage was actually infected?

TP / (TP + FP)
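The metrics from the last three cards (accuracy, recall, precision) can be computed directly from confusion-matrix counts; a small Python sketch with made-up numbers:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, recall and precision from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)     # of the actually infected, how many did we find?
    precision = tp / (tp + fp)  # of the positive tests, how many were right?
    return accuracy, recall, precision

# Hypothetical counts: 80 TP, 90 TN, 10 FP, 20 FN
acc, rec, prec = metrics(80, 90, 10, 20)
print(acc, rec, prec)  # 0.85, 0.8, ~0.889
```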

64

What is dataproc?

Dataproc is Google's managed Hadoop service.

65

What is the core idea behind MapReduce?

Split data into smaller chunks. Run operations on these chunks in parallel (mapping a function to the data). Aggregate the results of these functions (reducing).

Eg. parallelize the squaring of every number in a list, then summing the results.
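The square-then-sum example as a toy Python sketch; it runs sequentially here, but each shard could be mapped on a different worker:

```python
from functools import reduce

data = list(range(1, 6))                  # [1, 2, 3, 4, 5]
shards = [data[:2], data[2:4], data[4:]]  # "shard" the input

# Map: square every number in each shard (on a real cluster, in parallel)
mapped = [[x * x for x in shard] for shard in shards]

# Reduce: aggregate the partial results into one sum
total = reduce(lambda acc, shard: acc + sum(shard), mapped, 0)
print(total)  # 1 + 4 + 9 + 16 + 25 = 55
```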

66

What is a cluster in dataproc?

A combination of a master node and several worker nodes.

67

Where is data stored in dataproc?

Data is best stored in GCS; workers then read it directly (via the Cloud Storage connector) as needed.

68

When would you use dataproc over BigQuery?

When you need to run other things than SQL, eg. machine learning algorithms.

69

What is the sql code for a windowing function?

SELECT
<function> OVER (
PARTITION BY <column>
[ORDER BY <column>]
<frame clause>
)

Eg.
SELECT
AVG(value)
OVER (ORDER BY value
ROWS BETWEEN 10 PRECEDING AND CURRENT ROW)
FROM Dataset;
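The moving average that this query computes can be mimicked in plain Python; `running_avg` is an illustrative helper, not part of BigQuery:

```python
def running_avg(values, preceding=10):
    """Moving average over each value and up to `preceding` rows before it,
    mirroring ROWS BETWEEN 10 PRECEDING AND CURRENT ROW."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - preceding):i + 1]
        out.append(sum(window) / len(window))
    return out

print(running_avg([1, 2, 3, 4], preceding=2))  # [1.0, 1.5, 2.0, 3.0]
```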

70

When would you use windowing functions in bigquery?

For instance when calculating running averages and analysis of time series.

71

What is a UDF?

User defined function.

72

How can you optimize BigQuery queries

- Select only the columns you need.
- Big joins first, small joins later.

73

Does Dataflow support compressed files?

Yes, TextIO supports them.

74

Where should you store data when using Dataflow for batch processing?

"Anywhere", but GCS is a great place to start. BigQuery also works if you have structured data.

75

What is ParDo?

A function in Dataflow for operating on data in parallel.

76

In Dataflow, what is the difference between GroupBy and Combine?

Combine uses predefined functions optimized for one task (eg. sums).

GroupBy is slower, but lets you write the aggregation function yourself.

77

What is a side input in Dataflow?

A side input is like another set of parameters. This is typically done with "views". You can combine two flows into one.

78

What is softmax?

An ML term. The softmax function takes in a vector and rescales it so the outputs sum to 1. Think classification probabilities in a neural network.
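A minimal Python sketch of softmax; subtracting the max before exponentiating is a standard numerical-stability trick, not part of the definition:

```python
import math

def softmax(v):
    """Exponentiate each element, then normalize so the outputs sum to 1."""
    exps = [math.exp(x - max(v)) for x in v]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # three probabilities summing to 1.0, largest first
```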

79

What is argmax?

An ML term. The argmax function takes in a vector and returns the index of the largest value. Represented as a one-hot vector, the output has 1 in the cell with the highest value and 0 everywhere else.

80

How many neurons should you use in a neural network?

Only as many as needed. Unused neurons' weights tend towards 0 (not activated).

81

What is a typical machine learning workflow? (5)

Collect data
Organize
Prepare/preprocess
Number crunching
Deployment

82

How much data do you need for a neural network?

Enough to cover every case you want to predict.

83

How can you confuse a neural network? (2)

Conflicting examples with the same label - eg. photos of clouds vs. cartoon clouds.
Outliers - only a problem when there are too few of them.

84

What is a regression problem?

A problem on (semi-)continuous data. Eg. house pricing.

85

What is logistic regression?

Classification problems - is this a cat or a dog?

86

What is MSE?

Mean squared error, often used as a fitness/error function in neural networks.

87

How do you calculate the mean squared error?

1) For each value in the dataset, sum (real_i - predicted_i)^2 == (Y_n - y_n)^2.
2) Divide by the number of data points.
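Those two steps in Python, with Y_n as the actual value and y_n as the prediction:

```python
def mse(actual, predicted):
    """Mean squared error: average of (Y_n - y_n)^2 over all data points."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

print(mse([3.0, 5.0], [2.0, 7.0]))  # ((3-2)^2 + (5-7)^2) / 2 = 2.5
```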

88

When do you use MSE?

To evaluate the error in a regression problem.

89

What is cross entropy?

An error function for logistic regression.

90

How do you calculate cross entropy?

1) Sum (y_n * log(Y_n) + (1 - y_n) * log(1 - Y_n)) over all data points.
2) Divide by the number of data points ( |Y| ) and negate the result.
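A Python sketch of binary cross entropy, taking Y_n as the predicted probability and y_n as the true 0/1 label:

```python
import math

def cross_entropy(y_true, y_pred):
    """Binary cross entropy:
    -(1/N) * sum(y_n * log(Y_n) + (1 - y_n) * log(1 - Y_n))."""
    n = len(y_true)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_pred)) / n

# Confident, correct predictions give a low loss
print(cross_entropy([1, 0], [0.9, 0.1]))  # -log(0.9), ~0.105
```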

91

What is a confusion matrix?

For binary classification, a confusion matrix is a 2x2 table of 4 values: the number of true positives, true negatives, false positives and false negatives.

92

What does a confusion matrix help you with?

The confusion matrix helps illustrate the performance of your ML model.

93

What is an unbalanced dataset?

Balanced datasets have a roughly even distribution of categories/values. Unbalanced datasets are skewed.

94

What is thresholding in ML?

A value to separate positive/negative guesses in a classification model.

Eg, the guess is that the image is 75% cat. Should this be counted as a cat? Threshold might be at 80%.
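The cat example as a one-line Python check; the 80% threshold is the hypothetical value from the card:

```python
def classify(score, threshold=0.8):
    """Count a prediction as positive only if its score clears the threshold."""
    return score >= threshold

print(classify(0.75))  # False: 75% "cat" is below the 80% threshold
print(classify(0.85))  # True
```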

95

What is an epoch in ML?

Feeding the neural network your whole dataset once.

96

What is training loss in ML?

How badly the neural network performs when comparing predicted output to the true output. Synonym: error. It is the target function to minimize.

97

What is batch size in ML?

How many examples are shown to the neural network before backpropagation.

98

What is feature engineering?

Studying and selecting which features to use/not use in a machine learning model.

99

What is one-hot encoding?

A way of giving the model classification data.

Eg. for a rating feature from 1-5:
[3] vs. [0, 0, 1, 0, 0]
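A minimal one-hot encoder for a 1-based category value, as in the rating example:

```python
def one_hot(value, num_classes):
    """One-hot encode a 1-based category value into a vector of length num_classes."""
    vec = [0] * num_classes
    vec[value - 1] = 1
    return vec

print(one_hot(3, 5))  # [0, 0, 1, 0, 0]
```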

100

What is a sparse input?

Input where only a few elements are non-zero (activated) at a time. Eg. one-hot encoding is sparse.

101

What is dense input?

Input where most elements are non-zero (activated) at the same time.

102

What is a hyperparameter?

A tunable part of an ML model not related to its inputs (examples). Eg. how many neurons to use or learning rate.

103

What is hyperparameter tuning?

Trying out several hyperparameters to find a good combination.

104

What is streaming?

Processing of unbounded data, eg. data coming in over time.

105

What are the three Vs of streaming analytics?

Volume - lots of data.
Velocity - data is generated quickly.
Variety - unstructured, lots of different kinds.

106

What's the difference between tight and loose coupling?

Tight coupling: sender and receiver communicate directly, so both must be available and keep up.

Loose coupling: a message buffer sits between sender and receiver.

107

What are senders/receivers called in pub/sub?

Sender: publisher.
Receiver: subscriber.