# Processing Hadoop Jobs with Dataproc on Google Cloud - Flashcards

1
Q
What is Dataproc in Google Cloud?
A

Dataproc is Google Cloud's managed Spark and Hadoop service. It lets you use open-source data tools for batch processing, querying, streaming, and machine learning.

2
Q
What are the benefits of using Dataproc for cluster management?
A

Dataproc automates cluster creation and management, so clusters are easy to operate and can be turned off when not in use, which saves money.

3
Q
How does Dataproc compare to traditional on-premises solutions?
A

Compared to traditional on-premises products and competing cloud services, Dataproc offers advantages for clusters of all sizes: low cost, fast cluster operations, a familiar open-source ecosystem, and built-in integration with other Google Cloud services.

4
Q
Is there a need to learn new tools when switching to Dataproc?
A

No. Dataproc uses the same open-source tools and APIs as existing Spark and Hadoop deployments, so existing projects can be moved to Dataproc without redevelopment.

5
Q
What popular tools are updated and supported on Dataproc?
A

Popular tools like Spark, Hadoop, Pig, and Hive are frequently updated and supported.

6
Q
How is Dataproc priced?
A

Dataproc costs $0.01 per virtual CPU in the cluster per hour, on top of the other Google Cloud resources the cluster uses. For example, a 24-vCPU cluster running for two hours incurs a Dataproc fee of 24 × $0.01 × 2 = $0.48, plus the underlying Compute Engine costs.

7
Q
What are the cost benefits of Dataproc?
A

Clusters can include lower-priced preemptible instances, and billing is per-second with a one-minute minimum billing period.
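
As a rough sketch, preemptible secondary workers can be requested at cluster-creation time with the gcloud CLI; the cluster name, region, and counts below are hypothetical:

```sh
# Create a cluster with 2 standard workers plus 4 cheaper preemptible
# secondary workers (all names and counts are placeholders).
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --num-workers=2 \
  --secondary-worker-type=preemptible \
  --num-secondary-workers=4
```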

8
Q
How fast can Dataproc clusters start, scale, and shut down?
A

Dataproc clusters start, scale, and shut down quickly, with each operation taking an average of 90 seconds or less.

9
Q
How flexible are Dataproc clusters?
A

Clusters can be created and scaled rapidly, offering various virtual machine types, sizes, numbers of nodes, and networking options.

10
Q
Does Dataproc allow the use of open-source tools?
A

Yes. Dataproc works with existing Spark and Hadoop tools, libraries, and documentation, and the native versions of Spark, Hadoop, Pig, and Hive are frequently updated.

11
Q
How does Dataproc integrate with other Google Cloud services?
A

Dataproc provides built-in integration with Cloud Storage, BigQuery, and Cloud Bigtable, so data can be kept safely off-cluster. Cloud Logging and Cloud Monitoring are also available, making Dataproc part of a complete data platform.

12
Q
How does Dataproc handle ETL processes?
A

Dataproc can perform ETL processes effortlessly, for example by loading raw log data directly into BigQuery for business reporting.

13
Q
How is cluster management handled in Dataproc?
A

Users can easily interact with clusters and Spark or Hadoop jobs without the need for an administrator or special software. Cluster management can be done through the Cloud Console, Cloud SDK, or Dataproc REST API.

14
Q
What is the significance of image versioning in Dataproc?
A

Dataproc supports image versioning, allowing switching between different versions of Apache Spark, Apache Hadoop, and other tools.
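
For illustration, an image version can be pinned when the cluster is created; the version string below is just an example of the `major.minor-OS` format, and the cluster name and region are placeholders:

```sh
# Pin the cluster to a specific Dataproc image release, which fixes the
# bundled Spark/Hadoop versions (version string is illustrative).
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --image-version=2.1-debian11
```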

15
Q
What measures are there to ensure high availability in Dataproc?
A

Clusters can run in high-availability mode with three primary nodes, and jobs can be set to restart on failure.

16
Q
What developer tools does Dataproc offer?
A

Dataproc offers multiple ways to manage a cluster, including the Cloud Console, Cloud SDK, RESTful APIs, and SSH access.

17
Q
What are initialization actions in Dataproc?
A

Initialization actions enable the installation or customization of settings and libraries when creating a cluster. They allow for cluster customization by specifying executables or scripts to run on all nodes in the cluster immediately after setup.
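
A minimal sketch, assuming a setup script has already been staged in a Cloud Storage bucket (the bucket, script path, cluster name, and region are placeholders):

```sh
# Run the staged script on every node immediately after provisioning.
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --initialization-actions=gs://my-bucket/scripts/install-libs.sh
```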

18
Q
What are optional components in Dataproc?
A

Optional components can be selected when deploying a cluster, including Anaconda, Hive, Jupyter notebook, Zeppelin notebook, Druid, Presto, and ZooKeeper. These components enhance the capabilities of the cluster.
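
As an illustration, components are selected with a flag at deploy time; the cluster name and region are placeholders, and component availability varies by image version:

```sh
# Enable optional components when the cluster is deployed.
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --optional-components=JUPYTER,ZEPPELIN,ZOOKEEPER
```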

19
Q
Can a Dataproc cluster contain both preemptible and non-preemptible secondary workers?
A

No, a Dataproc cluster can contain either preemptible secondary workers or non-preemptible secondary workers, but not both.

20
Q
Should a Dataproc cluster be treated as short-lived or long-lived?
A

Treat a Dataproc cluster as short-lived rather than long-lived: spin up a cluster when compute processing is required for a job, then shut it down when the job is done.
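
One sketch of the short-lived pattern, with hypothetical names; `--max-idle` optionally deletes the cluster after a period of inactivity:

```sh
# Spin up a cluster for a job, then tear it down when the work is done.
gcloud dataproc clusters create my-cluster --region=us-central1 \
  --max-idle=30m   # optional: auto-delete after 30 idle minutes
# ... submit the job here ...
gcloud dataproc clusters delete my-cluster --region=us-central1 --quiet
```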

21
Q
How should data storage be managed in Dataproc?
A

Persistent data storage should be connected to other Google Cloud products rather than relying solely on native HDFS on the cluster.

22
Q
How can Cloud Storage be used in place of HDFS in Dataproc?
A

Cloud Storage, accessed through the Cloud Storage connector (which implements the Hadoop file-system interface), can be used in place of native HDFS. Existing Hadoop code can be adapted by changing the path prefix from hdfs:// to gs://.
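
An illustrative before/after with a hypothetical bucket; on a Dataproc cluster the same Hadoop command works against either file system:

```sh
# Before: cluster-local HDFS
hadoop fs -ls hdfs:///data/events/
# After: Cloud Storage via the Cloud Storage connector
hadoop fs -ls gs://my-bucket/data/events/
```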

23
Q
What are some alternatives for off-cluster storage and large analytical workloads in Dataproc?
A

Consider using Cloud Bigtable for HBase off-cluster storage, and BigQuery for large analytical workloads instead of relying on Hadoop directly.

24
Q
What are the key stages in a Dataproc workflow?
A

A Dataproc workflow moves through a sequence of stages: setup, configuration, optimization, utilization, and monitoring.

25
Q
How can a cluster be created in Dataproc?
A

Setup means creating a cluster, which can be done through the Cloud Console, the command line (the gcloud command), YAML files, Terraform configurations, or the REST API.
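
A minimal command-line sketch; the cluster name, region, zone, and machine shapes are placeholders:

```sh
# Create a standard cluster: one primary node and two workers.
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --zone=us-central1-a \
  --master-machine-type=n1-standard-4 \
  --worker-machine-type=n1-standard-4 \
  --num-workers=2
```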

26
Q
What types of clusters can be created in Dataproc?
A

Clusters can be single VMs, standard with a single primary node, or high availability with three primary nodes.
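
For illustration, the three modes map to creation flags (cluster names and region are placeholders):

```sh
# Single VM: one node acts as both primary node and worker.
gcloud dataproc clusters create tiny-cluster --region=us-central1 \
  --single-node
# High availability: three primary nodes.
gcloud dataproc clusters create ha-cluster --region=us-central1 \
  --num-masters=3
# Omitting both flags yields the standard single-primary-node cluster.
```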

27
Q
How can regions and zones be specified in Dataproc?
A

Users can specify the region and zone explicitly, or choose the global region and let the service select the zone.

28
Q
What optional components can be included in a Dataproc cluster?
A

Optional components from the Hadoop ecosystem, such as Anaconda, Hive WebHCat, Jupyter notebook, and Zeppelin notebook, can be included.

29
Q
How can a Dataproc cluster be customized?
A

Cluster properties, user labels, and metadata can be defined to customize the cluster further.
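
A hedged sketch of the three customization hooks; the property, label, and metadata values are purely illustrative:

```sh
# Set a Spark property, attach reporting labels, and pass metadata
# that startup scripts can read (all values are placeholders).
gcloud dataproc clusters create my-cluster --region=us-central1 \
  --properties=spark:spark.executor.memory=4g \
  --labels=env=dev,team=analytics \
  --metadata=startup-config=v1
```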

30
Q
What VM options are available for worker nodes in Dataproc?
A

Worker nodes, including preemptible nodes, have separate VM options for CPU, memory, and storage.
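
As a sketch, worker shape and storage are set with creation flags; the machine type, disk size, and secondary-worker count below are illustrative:

```sh
# Choose worker CPU/memory via machine type and size the boot disk.
gcloud dataproc clusters create my-cluster --region=us-central1 \
  --worker-machine-type=n1-highmem-8 \
  --worker-boot-disk-size=500GB \
  --num-secondary-workers=4
```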

31
Q
How can resource utilization and cluster startup be optimized in Dataproc?
A

Custom machine types and custom images can be used for optimized resource utilization and faster cluster startup.
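
An illustrative sketch of both options; the custom machine type follows Compute Engine's `custom-{vCPUs}-{memoryMiB}` naming, and the image URI is a placeholder:

```sh
# Right-size workers with a custom machine type (6 vCPUs, 22.5 GB RAM).
gcloud dataproc clusters create my-cluster --region=us-central1 \
  --worker-machine-type=custom-6-23040
# Start from a pre-baked custom image to shorten cluster startup.
gcloud dataproc clusters create my-cluster-2 --region=us-central1 \
  --image=projects/my-project/global/images/my-dataproc-image
```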

32
Q
How can jobs be submitted in Dataproc?
A

Jobs can be submitted through the Console, the gcloud command, the REST API, or orchestration services such as Dataproc workflow templates and Cloud Composer.
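
A sketch using the stock SparkPi example that ships on Dataproc images; the cluster name and region are placeholders:

```sh
# Submit the bundled SparkPi job to an existing cluster.
gcloud dataproc jobs submit spark \
  --cluster=my-cluster --region=us-central1 \
  --class=org.apache.spark.examples.SparkPi \
  --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
  -- 1000
```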

33
Q
How can a Dataproc cluster be monitored?
A

Monitoring can be done using Cloud Monitoring, allowing the creation of custom dashboards with graphs and setting up alert policies for notifications.

34
Q
What metrics can be monitored using Cloud Monitoring in Dataproc?
A

Metrics related to HDFS, YARN, and cluster performance can be monitored using Cloud Monitoring.

35
Q
What are the key features of Dataproc on Google Cloud?
A

Key features include a seamless transition, low cost, super-fast operations, resizable clusters, an open-source ecosystem, integration with other data platforms, a managed environment, image versioning, developer tools, and cluster customization options.

36
Q
How does Dataproc integrate with Google’s BigQuery service?
A

Dataproc provides built-in integration with BigQuery and can load raw log data directly into BigQuery for business reporting, performing ETL processes effortlessly.

37
Q
How can you interact with Spark or Hadoop jobs on Dataproc?
A

Users can interact with Spark or Hadoop jobs through the Cloud Console, Cloud SDK, or Dataproc REST API, without the need for an administrator or special software.

38
Q
How can one manage cost-effectiveness with Dataproc?
A

Idle clusters can be turned off easily to save costs, and billing is per-second with a one-minute minimum billing period.

39
Q
What does it mean that Dataproc supports image versioning?
A

This means that Dataproc allows switching between different versions of Apache Spark, Apache Hadoop, and other tools for maximum flexibility and compatibility.

40
Q
What is the significance of Dataproc’s managed environment feature?
A

This means users can interact with clusters and Spark or Hadoop jobs without needing an administrator or special software. It simplifies cluster management.

41
Q
What are the ways to submit jobs to Dataproc?
A

Jobs can be submitted through the Console, the gcloud command, the REST API, or orchestration services such as Dataproc workflow templates and Cloud Composer.

42
Q
What are the primary node options in Dataproc?
A

Clusters can be single VMs, standard with a single primary node, or high availability with three primary nodes.

43
Q
How does Dataproc support customization with initialization actions?
A

Initialization actions enable the installation or customization of settings and libraries when creating a cluster, allowing executables or scripts to run on all nodes in the cluster immediately after setup.

44
Q
What are some optional components that can be included when deploying a cluster in Dataproc?
A

Optional components include Anaconda, Hive, Jupyter notebook, Zeppelin notebook, Druid, Presto, and ZooKeeper.

45
Q
What is the recommended practice for storing data in a Dataproc cluster?
A

It is recommended to connect persistent data storage to other Google Cloud products rather than relying solely on native HDFS on the cluster.

46
Q
How can Cloud Storage be used in a Dataproc workflow?
A

Cloud Storage, accessed through the Cloud Storage connector, can be used as an alternative to native HDFS for storage. The path prefix in existing Hadoop code can be changed from hdfs:// to gs:// to use Cloud Storage.

47
Q
How does Dataproc handle large analytical workloads?
A

Consider using BigQuery for large analytical workloads instead of relying on Hadoop directly in Dataproc.

48
Q
How can alerts and monitoring be set up in Dataproc?
A

Monitoring can be done using Cloud Monitoring, which allows for the creation of custom dashboards with graphs and setting up alert policies for notifications.

49
Q
What options are available for worker nodes in Dataproc?
A

Worker nodes, including preemptible nodes, have separate VM options for CPU, memory, and storage.

50
Q
How can the performance of a Dataproc cluster be optimized?
A

By using custom machine types and custom images for optimized resource utilization and faster cluster startup.