Study Guide, Practice Questions Flashcards

Google Cloud Certified Professional Cloud Architect Study Guide, 2nd Edition by Dan Sullivan

1
Q

Building for Builders LLC manufactures equipment used in residential and commercial building. Each of its 500,000 pieces of equipment in use around the globe has IoT devices collecting data about the state of equipment. The IoT data is streamed from each device every 10 seconds. On average, 10 KB of data is sent in each message. The data will be used for predictive maintenance and product development. The company would like to use a managed service in Google Cloud. What would you recommend?
A. Apache Cassandra
B. Cloud Bigtable
C. BigQuery
D. Cloud SQL

A

Option B is correct. Bigtable is the best option for streaming IoT data, since it supports low-latency writes and is designed to scale to support petabytes of data.
Option A is incorrect because Apache Cassandra is not a managed database in GCP.
Option C is incorrect because BigQuery is a data warehouse. While it is a good option for analyzing large volumes of data, Bigtable is a better option for ingesting the data.
Option D is incorrect. Cloud SQL is a managed relational database. The use case does not require a relational database, and Bigtable’s scalability is a better fit with the requirements.
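As a rough illustration of why Bigtable fits this ingestion pattern, the sketch below writes one telemetry message with the google-cloud-bigtable Python client. The project, instance, table, column family, and row-key design are placeholders, not details from the question.

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("equipment-telemetry").table("device-state")

    # A row key combining device ID and timestamp keeps each device's
    # readings together and supports efficient range scans.
    row = table.direct_row(b"device#000123#2024-05-01T12:00:00Z")
    row.set_cell("telemetry", "payload", b'{"hydraulic_psi": 2150}')
    row.commit()

Each write is a small, low-latency mutation, which is exactly the access pattern Bigtable is optimized for at this scale.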

2
Q

You have developed a web application that is becoming widely used. The front end runs in Google App Engine and scales automatically. The backend runs on Compute Engine in a managed instance group. You have set the maximum number of instances in the backend managed instance group to five. You do not want to increase the maximum size of the managed instance group or change the VM instance type, but there are times the front end sends more data than the backend can keep up with and data is lost. What can you do to prevent the loss of data?
A. Use an unmanaged instance group.
B. Store ingested data in Cloud Storage.
C. Have the front end write data to a Cloud Pub/Sub topic, and have the backend read from that topic.
D. Store ingested data in BigQuery.

A

The correct answer is C. A Cloud Pub/Sub topic would decouple the front end and backend, provide a managed and scalable message queue, and store ingested data until the backend can process it.
Option A is incorrect. Switching to an unmanaged instance group will mean that the instance group cannot autoscale.
Option B is incorrect. You could store ingested data in Cloud Storage, but it would not be as performant as the Cloud Pub/Sub solution.
Option D is incorrect because BigQuery is a data warehouse and not designed for this use case.
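A minimal sketch of the decoupling described above, using the google-cloud-pubsub Python client; the project and topic names are placeholders. The front end publishes messages, and Pub/Sub buffers them until the backend subscribers can pull and acknowledge them at their own pace.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "ingest-topic")

    # The front end publishes and moves on; Pub/Sub retains the message
    # until a backend subscriber acknowledges it.
    future = publisher.publish(topic_path, b'{"user_id": 42, "payload": "..."}')
    print(future.result())  # message ID returned once Pub/Sub has stored the message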

3
Q

You are setting up a cloud project and want to assign members of your team different roles that have appropriate permissions for their responsibilities. What GCP service would you use to do that?
A. Cloud Identity
B. Identity and Access Management (IAM)
C. Cloud Authorizations
D. LDAP

A

The correct answer is B. IAM is used to manage roles and permissions.
Option A is incorrect. Cloud Identity is a service for creating and managing identities.
Option C is incorrect. There is no GCP service with that name at this time.
Option D is incorrect. LDAP is not a GCP service.

4
Q

You would like to run a custom stateless container in a managed Google Cloud service. What are your three options?
A. App Engine Standard, Cloud Run, and Kubernetes Engine
B. App Engine Flexible, Cloud Run, and Kubernetes Engine
C. Compute Engine, Cloud Functions, and Kubernetes Engine
D. Cloud Functions, Cloud Run, and App Engine Flexible

A

The correct answer is B. You can run custom stateless containers in App Engine Flexible, Cloud Run, and Kubernetes Engine.
Option A is incorrect because App Engine Standard does not support custom containers.
Option C is incorrect because Compute Engine is not a managed service and Cloud Functions does not support custom containers.
Option D is incorrect because Cloud Functions does not support custom containers.

5
Q

PhotosForYouToday prints photographs and ships them to customers. The front-end application uploads photos to Cloud Storage. Currently, the back end runs a cron job that checks Cloud Storage buckets every 10 minutes for new photos. The product manager would like to process the photos as soon as they are uploaded. What would you use to cause processing to start when a photo file is saved to Cloud Storage?
A. A Cloud Function
B. An App Engine Flexible application
C. A Kubernetes pod
D. A cron job that checks the bucket more frequently

A

The correct answer is A. A Cloud Function can respond to a create file event in Cloud Storage and start processing when the file is created.
Option B is incorrect because an App Engine Flexible application cannot directly respond to a Cloud Storage write event.
Option C is incorrect. Kubernetes pods are the smallest compute unit in Kubernetes and are not designed to directly respond to Cloud Storage events.
Option D is incorrect because it does not guarantee that photos will be processed as soon as they are created.
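As an illustration, a background Cloud Function written in Python receives the Cloud Storage object-finalize event and can start processing immediately; the bucket and the processing logic here are hypothetical.

    def process_photo(event, context):
        """Triggered when a new object is finalized in the configured bucket."""
        bucket = event["bucket"]
        name = event["name"]
        print(f"Processing new photo gs://{bucket}/{name}")
        # Print-preparation or transcoding logic would go here.

The function is attached to the bucket's finalize event at deployment time, so no polling cron job is needed.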

6
Q

The chief financial officer of your company believes that you are spending too much money to run an on-premises data warehouse and wants to migrate to a managed cloud solution. What GCP service would you recommend for implementing a new data warehouse in GCP?
A. Compute Engine
B. BigQuery
C. Cloud Dataproc
D. Cloud Bigtable

A

The correct answer is B. BigQuery is a managed analytics database designed to support data warehouses and similar use cases.
Option A is incorrect. Compute Engine is not a managed service.
Option C is incorrect. Cloud Dataproc is a managed Hadoop and Spark service.
Option D is incorrect. Bigtable is a NoSQL database well suited for large-volume, low-latency writes and limited ranges of queries. It is not suitable for the kind of ad hoc querying commonly done with data warehouses.
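To illustrate the data warehouse use case, the sketch below runs an ad hoc SQL aggregation with the google-cloud-bigquery Python client; the project, dataset, table, and columns are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")
    query = """
        SELECT region, SUM(revenue) AS total_revenue
        FROM `my-project.warehouse.sales`
        GROUP BY region
        ORDER BY total_revenue DESC
    """
    for row in client.query(query).result():
        print(row.region, row.total_revenue)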

7
Q

A government regulation requires you to keep certain financial data for seven years. You are not likely to ever retrieve the data, and you are only keeping it to comply with regulations. There are approximately 500 TB of financial data for each year that you are required to save. What is the most cost-effective way to store this data?
A. Cloud Storage multiregional storage
B. Cloud Storage Nearline storage
C. Cloud Storage Archive storage
D. Cloud Storage persistent disk storage

A

The correct answer is C. Cloud Storage Archive is the lowest-cost option, and it is designed for data that is accessed less than once per year. Options A and B are incorrect because they cost more than Archive storage.
Option D is incorrect because there is no such service.
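A minimal sketch, assuming the google-cloud-storage Python client, of creating a bucket with the Archive storage class and uploading a file; the bucket name, location, and file path are placeholders.

    from google.cloud import storage

    client = storage.Client(project="my-project")
    bucket = client.bucket("financial-records-archive")
    bucket.storage_class = "ARCHIVE"
    client.create_bucket(bucket, location="us-central1")

    # Objects written to the bucket inherit the Archive storage class.
    bucket.blob("2024/ledger-q1.csv").upload_from_filename("ledger-q1.csv")

A bucket retention policy or lifecycle rule could also be added to help enforce the seven-year retention period.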

8
Q

Global Games Enterprises Inc. is expanding from North America to Europe. Some of the games offered by the company collect personal information. With what additional regulation will the company need to comply when it expands into the European market?
A. HIPAA
B. PCI-DSS
C. GDPR
D. SOX

A

The correct answer is C. The GDPR is a European Union directive protecting the personal information of EU citizens.
Option A is incorrect. HIPAA is a U.S. healthcare regulation.
Option B is incorrect. PCI-DSS is a payment card data security regulation; if Global Games Enterprises Inc. is accepting payment cards in North America, it is already subject to that regulation.
Option D is incorrect. SOX is a U.S. regulation that applies to some publicly traded companies; the company may already be subject to it, and expanding to Europe will not change its status.

9
Q

Your team is developing a Tier 1 application for your company. The application will depend on a PostgreSQL database. Team members do not have much experience with PostgreSQL and want to implement the database in a way that minimizes their administrative responsibilities for the database. What managed service would you recommend?
A. Cloud SQL
B. Cloud Dataproc
C. Cloud Bigtable
D. Cloud PostgreSQL

A

The correct answer is A. Cloud SQL is a managed database service that supports PostgreSQL.
Option B is incorrect. Cloud Dataproc is a managed Hadoop and Spark service.
Option C is incorrect. Cloud Bigtable is a NoSQL database.
Option D is incorrect. There is no service called Cloud PostgreSQL in GCP at this time.
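As a sketch of how little database administration the team takes on, the example below connects to a Cloud SQL for PostgreSQL instance using the cloud-sql-python-connector and SQLAlchemy packages; the instance connection name, credentials, and database name are placeholders.

    import sqlalchemy
    from google.cloud.sql.connector import Connector

    connector = Connector()

    def getconn():
        # Instance connection name format is "project:region:instance".
        return connector.connect(
            "my-project:us-central1:tier1-postgres",
            "pg8000",
            user="app_user",
            password="change-me",
            db="appdb",
        )

    pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)
    with pool.connect() as conn:
        print(conn.execute(sqlalchemy.text("SELECT version()")).scalar())

Backups, patching, and failover are handled by the managed service rather than the team.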

10
Q

What is a service-level indicator?
A. A metric collected to indicate how well a service-level objective is being met
B. A type of log
C. A type of notification sent to a sysadmin when an alert is triggered
D. A visualization displayed when a VM instance is down

A

The correct answer is A. A service-level indicator is a metric used to measure how well a service is meeting its objectives. Options B and C are incorrect. It is not a type of log or a type of notification.
Option D is incorrect. A service-level indicator is not a visualization, although the same metrics may be used to drive the display of a visualization.
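For example, a simple availability SLI can be computed as the fraction of successful requests over a measurement window; the counts below are made up.

    successful_requests = 99_950
    total_requests = 100_000

    sli = successful_requests / total_requests      # 0.9995
    slo_target = 0.999                              # the corresponding SLO
    print(f"SLI = {sli:.4%}, SLO met: {sli >= slo_target}")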

11
Q

Developers at MakeYouFashionable have adopted agile development methodologies. Which tool might they use to support CI/CD?
A. Google Docs
B. Jenkins
C. Apache Cassandra
D. Clojure

A

The correct answer is B. Jenkins is a popular CI/CD tool.
Option A is incorrect. Google Docs is a collaboration tool for creating and sharing documents.
Option C is incorrect. Cassandra is a NoSQL database.
Option D is incorrect. Clojure is a Lisp-like programming language that runs on the Java virtual machine (JVM).

12
Q

You have a backlog of audio files that need to be processed using a custom application. The files are stored in Cloud Storage. If the files were processed continuously on three n2-standard-4 instances, the job could complete in two days. You have 30 days to deliver the processed files, after which they will be sent to a client and deleted from your systems. You would like to minimize the cost of processing. What might you do to help keep costs down?
A. Store the files in Coldline storage.
B. Store the processed files in multiregional storage.
C. Store the processed files in Cloud CDN.
D. Use preemptible VMs.

A

The correct answer is D. Use preemptible VMs, which cost significantly less than standard VMs.
Option A is incorrect. Coldline storage is not appropriate for files that are actively used.
Option B is incorrect. Storing files in multiregional storage will cost more than regional storage, and there is no indication from the requirements that they should be stored multiregionally.
Option C is incorrect. There is no indication that the processed files need to be distributed to a global user base.
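A rough sketch of creating a preemptible worker VM with the google-cloud-compute Python client; the project, zone, image, and instance name are placeholders, and error handling is omitted.

    from google.cloud import compute_v1

    def create_preemptible_worker(project: str, zone: str, name: str) -> None:
        instance = compute_v1.Instance()
        instance.name = name
        instance.machine_type = f"zones/{zone}/machineTypes/n2-standard-4"
        # Preemptible instances cost much less but can be reclaimed by Google.
        instance.scheduling = compute_v1.Scheduling(
            preemptible=True, automatic_restart=False
        )

        boot_disk = compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
                disk_size_gb=50,
            ),
        )
        instance.disks = [boot_disk]
        instance.network_interfaces = [
            compute_v1.NetworkInterface(network="global/networks/default")
        ]

        operation = compute_v1.InstancesClient().insert(
            project=project, zone=zone, instance_resource=instance
        )
        operation.result()  # wait for the create operation to finish

    create_preemptible_worker("my-project", "us-central1-a", "audio-worker-1")

Because the job has a 30-day window, work can simply be retried on a new preemptible instance if one is reclaimed.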

13
Q

You have joined a startup selling supplies to visual artists. One element of the company’s strategy is to foster a social network of artists and art buyers. The company will provide e-commerce services for artists and earn revenue by charging a fee for each transaction. You have been asked to collect more detailed business requirements. What might you expect as an additional business requirement?
A. The ability to ingest streaming data
B. A recommendation system to match buyers to artists
C. Compliance with SOX regulations
D. Natural language processing of large volumes of text

A

The correct answer is B. This is an e-commerce site matching sellers and buyers, so a system that recommends artists to buyers can help increase sales.
Option A is incorrect. There is no indication of any need for streaming data.
Option C is incorrect. This is a startup, and it is not likely subject to SOX regulations.
Option D is incorrect. There is no indication of a need to process large volumes of text.

14
Q

You work for a manufacturer of specialty die cast parts for the aerospace industry. The company has built a reputation as the leader in high-quality, specialty die cast parts, but recently the number of parts returned for poor quality is increasing. Detailed data about the manufacturing process is collected throughout every stage of manufacturing. To date, the data has been collected and stored but not analyzed. There is a total of 20 TB of data. The company has a team of analysts familiar with spreadsheets and SQL. What service might you recommend for conducting preliminary analysis of the data?
A. Compute Engine
B. Kubernetes Engine
C. BigQuery
D. Cloud Functions

A

The correct answer is C. BigQuery is an analytics database that supports SQL. Options A and B are incorrect because although they could be used to run analytics applications, such as Apache Hadoop or Apache Spark, it would require more administrative overhead. Also, the team members working on this are analysts, but there is no indication that they have the skills or desire to manage analytics platforms.
Option D is incorrect. Cloud Functions is for running short programs in response to events in GCP.

15
Q

A client of yours wants to run an application in a highly secure environment. They want to use instances that will only run boot components verified by digital signatures. What would you recommend they use in Google Cloud?
A. Preemptible VMs
B. Managed instance groups
C. Cloud Functions
D. Shielded VMs

A

The correct answer is D. Shielded VMs include secure boot, which only runs digitally verified boot components.
Option A is incorrect. Preemptible VMs are interruptible instances that cost less than standard VMs.
Option B is incorrect. Managed instance groups are sets of identical VMs that are managed as a single entity.
Option C is incorrect. Cloud Functions is a managed service for running programs in response to events in GCP.

16
Q

You have installed the Google Cloud SDK. You would now like to work on transferring files to Cloud Storage. What command-line utility would you use?
A. bq
B. gsutil
C. cbt
D. gcloud

A

The correct answer is B. gsutil is the command-line utility for working with Cloud Storage.
Option A is incorrect. bq is the command-line utility for working with BigQuery.
Option C is incorrect. cbt is the command-line utility for working with Cloud Bigtable.
Option D is incorrect. gcloud is used to work with most GCP services but not Cloud Storage.

17
Q

Kubernetes pods sometimes need access to persistent storage. Pods are ephemeral; they may shut down for reasons outside the control of the application running in the pod. What mechanism does Kubernetes use to decouple pods from persistent storage?
A. PersistentVolumes
B. Deployments
C. ReplicaSets
D. Ingress

A

The correct answer is A. A PersistentVolume is Kubernetes’ way of representing storage allocated or provisioned for use by a pod.
Option B is incorrect. Deployments are a type of controller consisting of pods running the same version of an application.
Option C is incorrect. A ReplicaSet is a controller that manages the number of pods running in a deployment.
Option D is incorrect. An Ingress is an object that controls external access to services running in a Kubernetes cluster.
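As an illustration, the sketch below uses the official kubernetes Python client to request storage through a PersistentVolumeClaim, which a pod can then mount by name; cluster credentials are assumed to be available via kubeconfig, and the claim name and size are placeholders.

    from kubernetes import client, config

    config.load_kube_config()  # assumes kubectl is already configured for the cluster

    pvc_manifest = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "app-data"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "10Gi"}},
        },
    }
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc_manifest
    )

Because the pod references the claim rather than a specific disk, the underlying PersistentVolume survives pod restarts and rescheduling.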

18
Q

An application that you support has been missing service-level objectives, especially around database query response times. You have reviewed monitoring data and determined that a large number of database read operations is putting unexpected load on the system. The database uses PostgreSQL, and it is running in Compute Engine. You have tuned SQL queries, and the performance is still not meeting objectives. Of the following options, which would you try next?
A. Migrate to a NoSQL database.
B. Move the database to Cloud SQL.
C. Use read replicas.
D. Move some of the data out of the database to Cloud Storage.

A

The correct answer is C. Use read replicas to reduce the number of reads against the primary persistent storage system that is supporting both reads and writes.
Option A is incorrect. The application is designed to work with a relational database, and there is no indication that a NoSQL database is a better option overall.
Option B is incorrect. Simply moving the database to a managed service will not change the number of read operations, which is the cause of the poor performance.
Option D is incorrect. Moving data to Cloud Storage will not reduce the number of reads, and Cloud Storage does not support SQL.

19
Q

You are running a complicated stream processing operation using Apache Beam. You want to start using a managed service. What GCP service would you use?
A. Cloud Dataprep
B. Cloud Dataproc
C. Cloud Dataflow
D. Cloud Identity

A

The correct answer is C. Cloud Dataflow is an implementation of the Apache Beam stream processing framework. Cloud Dataflow is a fully managed service.
Option A is incorrect. Cloud Dataprep is used to prepare data for analysis.
Option B is incorrect. Cloud Dataproc is a managed Hadoop and Spark service.
Option D is incorrect. Cloud Identity is an authentication service.
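A minimal Apache Beam pipeline in Python that can run unchanged on Cloud Dataflow by selecting the Dataflow runner; the project, region, and bucket paths are placeholders.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner="DataflowRunner",          # use "DirectRunner" for local testing
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/temp",
    )

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.txt")
            | "CountLines" >> beam.combiners.Count.Globally()
            | "Format" >> beam.Map(str)
            | "Write" >> beam.io.WriteToText("gs://my-bucket/output/line-count")
        )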

20
Q

Your team has had several incidents in which Tier 1 and Tier 2 services were down for more than one hour. After conducting a few retrospective analyses of the incidents, you have determined that you could identify the causes of incidents faster if you had a centralized log repository. What GCP service could you use for this?
A. Cloud Logging
B. Cloud Monitoring
C. Cloud SQL
D. Cloud Trace

A

The correct answer is A. Cloud Logging is a centralized logging service.
Option B is incorrect. Cloud Monitoring collects and manages performance metrics.
Option C is incorrect. Cloud SQL is used for regional, relational databases.
Option D is incorrect. Cloud Trace is a service for distributed tracing of application performance.
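A small sketch using the google-cloud-logging Python client: services write structured entries to the centralized repository, and during an incident the same repository can be filtered across all services. The log name, payload, and filter are placeholders.

    from google.cloud import logging as cloud_logging

    client = cloud_logging.Client(project="my-project")

    # A service writes a structured entry.
    client.logger("checkout-service").log_struct(
        {"event": "db_timeout", "order_id": "A-1001"}, severity="ERROR"
    )

    # During incident triage, search recent errors across every service.
    for entry in client.list_entries(
        filter_='severity>=ERROR AND timestamp>="2024-01-01T00:00:00Z"'
    ):
        print(entry.log_name, entry.payload)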

21
Q

A Global 2000 company has hired you as a consultant to help architect a new logistics system. The system will track the location of parts as they are shipped between company facilities in Europe, Africa, South America, and Australia. Anytime a user queries the database, they must receive accurate and up-to-date information; specifically, the database must support strong consistency. Users from any facility may query the database using SQL. What GCP service would you recommend?
A. Cloud SQL
B. BigQuery
C. Cloud Spanner
D. Cloud Dataflow

A

The correct answer is C. Cloud Spanner is a globally scalable, strongly consistent relational database that can be queried using SQL.
Option A is incorrect because it will not scale to the global scale as Cloud Spanner will.
Option B is incorrect. The requirements describe an application that will likely have frequent updates and transactions. BigQuery is designed for analytics and data warehousing.
Option D is incorrect. Cloud Dataflow is a stream and batch processing service.
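To illustrate the SQL access and strong consistency, the sketch below queries a Cloud Spanner database with the google-cloud-spanner Python client; the instance, database, table, and column names are placeholders.

    from google.cloud import spanner

    client = spanner.Client(project="my-project")
    database = client.instance("logistics-instance").database("parts-db")

    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(
            "SELECT part_id, current_location FROM PartShipments WHERE part_id = @id",
            params={"id": "P-1001"},
            param_types={"id": spanner.param_types.STRING},
        )
        for row in results:
            print(row)

Reads use Spanner's default strong reads, so users at every facility see the latest committed data.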

22
Q

A database architect for a game developer has determined that a NoSQL document database is the best option for storing players’ possessions. What GCP service would you recommend?
A. Cloud Firestore
B. Cloud Storage
C. Cloud Dataproc
D. Cloud Bigtable

A

The correct answer is A. Cloud Firestore is a managed document NoSQL database in GCP.
Option B is incorrect. Cloud Storage is an object storage system, not a document NoSQL database.
Option C is incorrect. Cloud Dataproc is a managed Hadoop and Spark service.
Option D is incorrect. Cloud Bigtable is a wide-column NoSQL database, not a document database.
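A minimal sketch with the google-cloud-firestore Python client storing a player's possessions as a document; the collection, document ID, and fields are a hypothetical schema.

    from google.cloud import firestore

    db = firestore.Client(project="my-project")

    db.collection("players").document("player-42").set(
        {"possessions": [{"item": "sword", "level": 3}, {"item": "shield", "level": 1}]}
    )

    snapshot = db.collection("players").document("player-42").get()
    print(snapshot.to_dict())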

23
Q

A major news agency is seeing increasing readership across the globe. The CTO is concerned that long page-load times will decrease readership. What might the news agency try in order to reduce page-load times for readers around the globe?
A. Regional Cloud Storage
B. Cloud CDN
C. Fewer firewall rules
D. Virtual private network

A

The correct answer is B. Cloud CDN is GCP’s content delivery network, which distributes static content globally.
Option A is incorrect. Reading from regional storage can still have long latencies for readers outside of the region.
Option C is incorrect. Firewall rules do not impact latency in any discernible way.
Option D is incorrect because VPNs are used to link on-premises networks to Google Cloud.

24
Q

What networking mechanism allows different VPC networks to communicate using private IP address space, as defined in RFC 1918?
A. ReplicaSets
B. Custom subnets
C. VPC network peering
D. Firewall rules

A

The correct answer is C. VPC peering allows different VPCs to communicate using private networks.
Option A is incorrect. ReplicaSets are used in Kubernetes; they are not related to VPCs.
Option B is incorrect. Custom subnets define network address ranges for regions.
Option D is incorrect. Firewall rules control the flow of network traffic.

25
Q

You have been tasked with setting up disaster recovery infrastructure in the cloud that will be used if the on-premises data center is not available. What network topology would you use for a disaster recovery environment?
A. Meshed topology
B. Mirrored topology
C. Gated egress topology
D. Gated ingress topology

A

The correct answer is B. With a mirrored topology, the public cloud and private on-premises environments mirror each other.
Option A is incorrect. In a mesh topology, all systems in the cloud and private networks can communicate with each other.
Option C is incorrect. In a gated egress topology, on-premises service APIs are made available to applications running in the cloud without exposing them to the public internet.
Option D is incorrect. In a gated ingress topology, cloud service APIs are made available to applications running on-premises without exposing them to the public internet.

26
Q

You have been tasked with interviewing line-of-business owners about their needs for a new cloud application. Which of the following do you expect to find?
A. A comprehensive list of defined business and technical requirements
B. That their business requirements do not have a one-to-one correlation with technical requirements
C. Business and technical requirements in conflict
D. Clear consensus on all requirements

A

The correct answer is B. Business requirements are high-level, business-oriented requirements that are rarely satisfied by meeting a single technical requirement.
Option A is incorrect because business sponsors rarely have sufficient understanding of technical requirements to provide a comprehensive list.
Option C is incorrect because business requirements constrain technical options but should not be in conflict.
Option D is incorrect because there is rarely a clear consensus on all requirements. Part of an architect’s job is to help stakeholders reach a consensus.

27
Q

You have been asked by stakeholders to suggest ways to reduce operational expenses as part of a cloud migration project. Which of the following would you recommend?
A. Managed services, preemptible machines, access controls
B. Managed services, preemptible machines, autoscaling
C. NoSQL databases, preemptible machines, autoscaling
D. NoSQL databases, preemptible machines, access controls

A

The correct answer is B. Managed services relieve DevOps work, preemptible machines cost significantly less than standard VMs, and autoscaling reduces the chances of running unnecessary resources. Options A and D are incorrect because access controls will not help reduce costs, but they should be used anyway. Options C and D are incorrect because there is no indication that a NoSQL database should be used.

28
Q

Some executives are questioning your recommendation to employ continuous integration/continuous delivery (CI/CD). What reasons would you give to justify your recommendation?
A. CI/CD supports small releases, which are easier to debug and enable faster feedback.
B. CI/CD is used only with preemptible machines and therefore saves money.
C. CI/CD fits well with waterfall methodology but not agile methodologies.
D. CI/CD limits the number of times code is released.

A

The correct answer is A. CI/CD supports small releases, which are easier to debug and enable faster feedback.
Option B is incorrect, as CI/CD does not use only preemptible machines.
Option C is incorrect because CI/CD works well with agile methodologies.
Option D is incorrect, as there is no limit to the number of times new versions of code can be released.

29
Q

The finance director has asked your advice about complying with a document retention regulation. What kind of service-level objective (SLO) would you recommend to ensure that the finance director will be able to retrieve sensitive documents for at least the next seven years? When a document is needed, the finance director will have up to seven days to retrieve it. The total storage required will be approximately 100 TB.
A. High availability SLO
B. Durability SLO
C. Reliability SLO
D. Scalability SLO

A

The correct answer is B. The finance director needs to have access to documents for seven years. This requires durable storage.
Option A is incorrect because the access does not have to be highly available; as long as the finance director can access the document in a reasonable period of time, the requirement can be met.
Option C is incorrect because reliability is a measure of being available to meet workload demands successfully.
Option D is incorrect because the requirement does not specify the need for increasing and decreasing storage to meet the requirement.

30
Q

You are facilitating a meeting of business and technical managers to solicit requirements for a cloud migration project. The term incident comes up several times. Some of the business managers are unfamiliar with this term in the context of IT. How would you describe an incident?
A. A disruption in the ability of a DevOps team to complete work on time
B. A disruption in the ability of the business managers to approve a project plan on schedule
C. A disruption that causes a service to be degraded or unavailable
D. A personnel problem on the DevOps team

A

The correct answer is C. An incident in the context of IT operations and service reliability is a disruption that degrades or stops a service from functioning. Options A and B are incorrect; incidents are not related to scheduling.
Option D is incorrect; in this context, incidents are about IT services, not personnel.

31
Q

You have been asked to consult on a cloud migration project that includes moving private medical information to a storage system in the cloud. The project is for a company in the United States. What regulation would you suggest that the team review during the requirements-gathering stages?
A. General Data Protection Regulations (GDPR)
B. Sarbanes–Oxley (SOX)
C. Payment Card Industry Data Security Standard (PCI DSS)
D. Health Insurance Portability and Accountability Act (HIPAA)

A

The correct answer is D. HIPAA governs, among other things, privacy and data protections for private medical information.
Option A is incorrect, as GDPR is a European Union regulation.
Option B is incorrect, as SOX is a U.S. financial reporting regulation.
Option C is incorrect, as PCI DSS is a payment card industry regulation.

32
Q

You are in the early stages of gathering business and technical requirements. You have noticed several references about needing up-to-date and consistent information regarding product inventory and support for SQL reporting tools. Inventory is managed on a global scale, and the warehouses storing inventory are located in North America, Africa, Europe, and Asia. Which managed database solution in Google Cloud would you include in your set of options for an inventory database?
A. Cloud Storage
B. BigQuery
C. Cloud Spanner
D. Microsoft SQL Server

A

The correct answer is C. Cloud Spanner is a globally consistent, horizontally scalable relational database.
Option A is incorrect. Cloud Storage does not support SQL.
Option B is incorrect because BigQuery is an analytical database used for data warehousing and related operations.
Option D is incorrect; Microsoft SQL Server is a Cloud SQL database option, and Cloud SQL is a managed database, but Cloud SQL scales regionally, not globally.

33
Q

A developer at Mountkirk Games is interested in how architects decide which database to use. The developer describes a use case that requires a document store. The developer would rather not manage database servers or have to run backups. What managed service would you suggest the developer consider?
A. Cloud Firestore
B. Cloud Spanner
C. Cloud Storage
D. BigQuery

A

The correct answer is A. Cloud Firestore is a managed document database and a good fit for storing documents.
Option B is incorrect because Cloud Spanner is a relational database and globally scalable. There is no indication that the developer needs a globally scalable solution, which implies higher cost.
Option C is incorrect, as Cloud Storage is an object storage system, not a managed database.
Option D is incorrect because BigQuery is an analytical database designed for data warehousing and similar applications.

34
Q

Members of your company’s legal team are concerned about using a public cloud service because other companies, organizations, and individuals will be running their systems in the same cloud. You assure them that your company’s resources will be isolated and not network-accessible to others because of what networking resource in Google Cloud?
A. CIDR blocks
B. Direct connections
C. Virtual private clouds
D. Cloud Pub/Sub

A

The correct answer is C. VPCs isolate cloud resources from resources in other VPCs, unless VPCs are intentionally linked.
Option A is incorrect because a CIDR block has to do with subnet IP addresses.
Option B is incorrect, as direct connections are for transmitting data between a data center and Google Cloud; they do not protect resources in the cloud.
Option D is incorrect because Cloud Pub/Sub is a messaging service, not a networking service.

35
Q

A startup has recently migrated to Google Cloud using a lift-and-shift migration. They are now considering replacing a self-managed MySQL database running in Compute Engine with a managed service. Which Google Cloud service would you recommend that they consider?
A. Cloud Dataproc
B. Cloud Dataflow
C. Cloud SQL
D. PostgreSQL

A

The correct answer is C. Cloud SQL offers a managed MySQL service. Options A and B are incorrect, as neither is a database. Cloud Dataproc is a managed Hadoop and Spark service. Cloud Dataflow is a stream and batch processing service.
Option D is incorrect, because PostgreSQL is another relational database, but it is not a managed service. PostgreSQL is an option in Cloud SQL, however.

36
Q

Which of the following requirements from a customer make you think the application should run in Compute Engine and not App Engine?
A. Dynamically scale up or down based on workload
B. Connect to a database
C. Run a hardened Linux distro on a virtual machine
D. Don’t lose data

A

The correct answer is C. In Compute Engine, you create virtual machines and choose which operating system to run. All other requirements can be realized in App Engine.

37
Q

Mountkirk Games wants to store player game data in a time-series database. Which Google Cloud managed database would you recommend?
A. Bigtable
B. BigQuery
C. Cloud Storage
D. Cloud Dataproc

A

The correct answer is A. Cloud Bigtable is a scalable, wide-column database designed for low-latency writes, making it a good choice for time-series data.
Option B is incorrect because BigQuery is an analytic database not designed for the high volume of low-latency writes that will need to be supported. Options C and D are not managed databases.

38
Q

The original videos captured during helicopter races by the Helicopter Racing League are transcoded, and the transcoded versions are stored for frequent access. The original captured videos are not used for viewing but are stored in case they are needed for unanticipated reasons. The files require high durability but are not likely to be accessed more than once in a five-year period. What type of storage would you use for the original video files?
A. BigQuery Long Term Storage
B. BigQuery Active Storage
C. Cloud Storage Nearline class
D. Cloud Storage Archive class

A

The correct answer is D. Cloud Storage Archive class is the most cost-effective option and meets durability requirements.
Option C is incorrect; Cloud Storage Nearline class would meet durability requirements, but since the videos are likely accessed less than once per year, Cloud Storage Archive class would meet durability requirements and cost less. Options A and B are incorrect because videos are large binary objects best stored in object storage, not an analytical database such as BigQuery.

39
Q

The game analytics platform for Mountkirk Games requires analysts to be able to query up to 10 TB of data. What is the best managed database solution for this requirement?
A. Cloud Spanner
B. BigQuery
C. Cloud Storage
D. Cloud Dataprep

A

The correct answer is B. This is a typical use case for BigQuery, and it fits well with its capabilities as an analytic database.
Option A is incorrect, as Cloud Spanner is best used for transaction processing on a global scale. Options C and D are not managed databases. Cloud Storage is an object storage service; Cloud Dataprep is a tool for preparing data for analysis.

40
Q

EHR Healthcare business requirements frequently discuss the need to improve observability in their systems. Which of the following Google Cloud Platform services could be used to help improve observability?
A. Cloud Build and Artifact Registry
B. Cloud Pub/Sub and Cloud Dataflow
C. Cloud Monitoring and Cloud Logging
D. Cloud Storage and Cloud Pub/Sub

A

The correct answer is C. Cloud Monitoring collects metrics, and Cloud Logging collects event data from infrastructure, services, and other applications that provide insight into the state of those systems. Cloud Build and Artifact Registry are important CI/CD services. Cloud Pub/Sub is a messaging service, Cloud Dataflow is a batch and stream processing service, and Cloud Storage is an object storage system; none of these directly supports improved observability.
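For example, an application can improve its observability by writing a custom metric with the google-cloud-monitoring Python client, which Cloud Monitoring can then chart and alert on; the metric type and value below are hypothetical.

    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    project_name = "projects/my-project"

    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/ehr/active_sessions"
    series.resource.type = "global"

    now = time.time()
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
    )
    series.points = [
        monitoring_v3.Point({"interval": interval, "value": {"int64_value": 42}})
    ]
    client.create_time_series(name=project_name, time_series=[series])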

41
Q

In the TerramEarth case study, the volume of data and compute load will be most affected by what characteristics of the TerramEarth systems?
A. The number of dealers and customers
B. The number of vehicles, the number of sensors on vehicles, network connectivity, and the types of data collected
C. The type of storage used
D. Compliance with regulations

A

Option B is correct. The amount of data generated per vehicle, which is determined by the amount and frequency of data collected by each sensor on the vehicle, is the most likely to impact data size and processing. Network connectivity will also affect compute load if connectivity is unreliable, which leads to periods when data is not transmitted and will have to be sent in larger batches at a later time. The total amount of computing workload will not change but will be delayed when that workload is processed.
Option A is incorrect because the volume of data related to dealers and customers is not going to be as large as the data generated by vehicles. Also, the number of dealers is in the hundreds while the number of vehicles is in the millions.
Option C is incorrect. The type of storage used does not influence the amount of data the application needs to manage or the amount of computing resources needed.
Option D is incorrect. Compliance with regulations may have some effect on security controls and monitoring, but it will not influence compute and storage resources in a significant way.

42
Q

You are advising a customer on how to improve the availability of a data storage solution. Which of the following general strategies would you recommend?
A. Keeping redundant copies of the data
B. Lowering the network latency for disk writes
C. Using a NoSQL database
D. Using Cloud Spanner

A

The correct answer is A. Redundancy is a general strategy for improving availability.
Option B is incorrect because lowering network latency will not improve availability of the data storage system. Options C and D are incorrect because there is no indication that either a NoSQL or a relational database will meet the overall storage requirements of the system being discussed.

43
Q

A team of data scientists is analyzing archived data sets. Their statistical model building procedures run in batches. If the model building system is down for up to 30 minutes per day, it does not adversely impact the data scientists’ work. What is the minimal percentage availability among the following options that would meet this requirement?
A. 99.99 percent
B. 99.90 percent
C. 99.00 percent
D. 99.999 percent

A

The minimum percentage availability that meets the requirements is option C, which allows for up to 14.4 minutes of downtime per day. All other options would allow for less downtime, but that is not called for by the requirements.
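The downtime allowances behind these percentages follow from simple arithmetic over the 1,440 minutes in a day:

    minutes_per_day = 24 * 60    # 1,440 minutes

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        allowed_downtime = minutes_per_day * (1 - availability)
        print(f"{availability:.3%} -> {allowed_downtime:.2f} minutes/day")

    # 99.000% -> 14.40 minutes/day, which fits within the 30-minute allowance;
    # the higher availability levels allow even less downtime than is needed.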

44
Q

Your development team has recently triggered three incidents that resulted in service disruptions. In one case, an engineer mistyped a number in a configuration file and in the other cases specified an incorrect disk configuration. What practices would you recommend to reduce the risk of these types of errors?
A. Continuous integration/continuous deployment
B. Code reviews of configuration files
C. Vulnerability scanning
D. Improved access controls

A

The correct answer is B. A code review is a software engineering practice that requires an engineer to review code with another engineer before deploying it.
Option A would not solve the problem, as continuous integration reduces the amount of effort required to deploy new versions of software. Options C and D are both security controls, which would not help identify misconfigurations.

45
Q

Your company is running multiple VM instances that have not had any downtime in the past several weeks. Recently, several of the physical servers suffered disk failures. The applications running on the servers did not have any apparent service disruptions. What feature of Compute Engine enabled that?
A. Preemptible VMs
B. Live migration
C. Canary deployments
D. Redundant array of inexpensive disks

A

The correct answer is B, Live migration, which moves running VMs to different physical servers without interrupting the state of the VM.
Option A is incorrect because preemptible VMs are low-cost VMs that may be taken back by Google at any time.
Option C is incorrect, as canary deployments are a deployment strategy, not a feature of Compute Engine.
Option D is incorrect, as arrays of disks are not directly involved in preserving the state of a VM and moving the VM to a functioning physical server.

46
Q

You have deployed an application on a managed instance group. Occasionally the application experiences an intermittent malfunction and then resumes normal operation. Which of these is a reasonable explanation for what is happening?
A. The application shuts down when the instance group time-to-live (TTL) threshold is reached.
B. The application shuts down when the health check fails.
C. The VM shuts down when the instance group TTL threshold is reached and a new VM is started.
D. The VM shuts down when the health check fails and a new VM is started.

A

Option D is correct. When a health check fails, the failing VM is replaced by a new VM that is created using the instance group template to configure the new VM. Options A and C are incorrect, as TTL is not used to detect problems with application functioning.
Option B is incorrect because the application is not shut down when a health check fails.

47
Q

An online gaming company is growing its user base in North America, Europe, and Asia. Executives are concerned that players in Europe and Asia will have a degraded experience if the game backend runs only in North America. What would you suggest to improve latency and game experience for users in Europe and Asia?
A. Use Cloud Spanner to have a globally consistent, horizontally scalable relational database.
B. Create instance groups running the game backend in multiple regions across North America, Europe, and Asia. Use global load balancing to distribute the workload.
C. Use Standard Tier networking to ensure that data sent between regions is routed over the public internet.
D. Use a Cloud Memorystore cache in front of the database to reduce database read latency.

A

The correct answer is B. Creating instance groups in multiple regions and routing workload to the closest region using global load balancing will provide the most consistent experience for users in different geographic regions.
Option A is incorrect because Cloud Spanner is a relational database and does not affect how game backend services are run except for database operations.
Option C is incorrect, as routing traffic over the public internet means traffic will experience the variance of public internet routes between regions.
Option D is incorrect. A cache will reduce the time needed to read data, but it will not affect network latency when that data is transmitted from a game backend to the player’s device.

48
Q

What configuration changes are required to ensure high availability when using Cloud Storage or Cloud Filestore?
A. A sufficiently long TTL must be set.
B. A health check must be specified.
C. Both a TTL and health check must be specified.
D. Nothing. Both are managed services. GCP manages high availability.

A

The correct answer is D. Users do not need to make any configuration changes when using Cloud Storage or Cloud Filestore. Both are fully managed services. Options A and C are incorrect because TTLs do not need to be set to ensure high availability. Options B and C are incorrect because users do not need to specify a health check for managed storage services.

49
Q

The finance director at your company is frustrated with the poor availability of an on-premises finance data warehouse. The data warehouse uses a commercial relational database that only scales by buying larger and larger servers. The director asks for your advice about moving the data warehouse to the cloud and if the company can continue to use SQL to query the data warehouse. What GCP service would you recommend to replace the on-premises data warehouse?
A. Bigtable
B. BigQuery
C. Cloud Datastore
D. Cloud Storage

A

The best answer is B. BigQuery is a serverless, fully managed analytic database that uses SQL for querying. Options A and C are incorrect because both Bigtable and Cloud Datastore are NoSQL databases.
Option D, Cloud Storage, is not a database, and it does not meet most of the requirements listed.

50
Q

TerramEarth has determined that it wants to use Cloud Bigtable to store equipment telemetry received from vehicles in the field. It has also concluded that it wants two clusters in different regions. Both clusters should be able to respond to read and write requests. What kind of replication should be used?
A. Primary–hot primary
B. Primary–warm primary
C. Primary–primary
D. Primary read–primary write

A

The correct answer is C. Primary-primary replication keeps both clusters synchronized with write operations so that both clusters can respond to queries. Options A, B, and D are not actual replication options.
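As a sketch of the primary-primary setup, the google-cloud-bigtable Python admin client below creates one instance with clusters in two regions; both clusters then serve reads and writes, and Bigtable replicates between them. Instance, cluster, and zone names are placeholders.

    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance(
        "vehicle-telemetry", instance_type=enums.Instance.Type.PRODUCTION
    )

    cluster_us = instance.cluster("telemetry-us", location_id="us-central1-b", serve_nodes=3)
    cluster_eu = instance.cluster("telemetry-eu", location_id="europe-west1-b", serve_nodes=3)

    operation = instance.create(clusters=[cluster_us, cluster_eu])
    operation.result(timeout=300)   # wait for the instance and clusters to be created

An app profile with multi-cluster routing would then let clients automatically use the nearest available cluster.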

51
Q

Your company is implementing a hybrid cloud computing model. Line-of-business owners are concerned that data stored in the cloud may not be available to on-premises applications. The current network connection is using a maximum of 40 percent of bandwidth. What would you suggest to mitigate the risk of that kind of service failure?
A. Configure firewall rules to improve availability.
B. Use redundant network connections between the on-premises data center and Google Cloud.
C. Increase the number of VMs allowed in Compute Engine instance groups.
D. Increase the bandwidth of the network connection between the data center and Google Cloud.

A

Option B is correct. A redundant network connection would mitigate the risk of losing connectivity if a single network connection went down.
Option A is incorrect, as firewall rules are a security control and would not mitigate the risk of network connectivity failures.
Option C may help with compute availability, but it does not improve network availability.
Option D does not improve availability, and additional bandwidth is not needed.

52
Q

A team of architects in your company is defining standards to improve availability. In addition to recommending redundancy and code reviews for configuration changes, what would you recommend including in the standards?
A. Use of access controls
B. Use of managed services for all compute requirements
C. Use of Cloud Monitoring to alert on changes in application performance
D. Use of Bigtable to collect performance monitoring data

A

The correct answer is C. Cloud Monitoring should be used to monitor applications and infrastructure to detect early warning signs of potential problems with applications or infrastructure.
Option A is incorrect because access controls are a security control and not related to directly improving availability.
Option B is incorrect because managed services may not meet all requirements and so should not be required in a company’s standards.
Option D is incorrect because collecting and storing performance monitoring data does not improve availability.

53
Q

Why would you want to run long-running, compute-intensive backend computation in a different managed instance group from the web servers supporting a minimal user interface?
A. Managed instance groups can run only a single application.
B. Managed instance groups are optimized for either compute or HTTP connectivity.
C. Compute-intensive applications have different scaling characteristics from those of lightweight user interface applications.
D. There is no reason to run the applications in different managed instance groups.

A

The correct answer is C. The two applications have different scaling requirements. The compute-intensive backend may benefit from VMs with a large number of CPUs that would not be needed for web serving. Also, the front end may be able to reduce the number of instances when users are not actively using the user interface, but long compute jobs may still be running in the background. Options A and B are false statements.
Option D is incorrect for the reasons explained in reference to Option C.

54
Q

An instance group is adding more VMs than necessary and then shutting them down. This pattern is happening repeatedly. What would you do to try to stabilize the addition and removal of VMs?
A. Increase the maximum number of VMs in the instance group.
B. Decrease the minimum number of VMs in the instance group.
C. Increase the time autoscalers consider when making decisions.
D. Decrease the cooldown period.

A

The correct answer is C. The autoscaler may be adding VMs because it has not waited long enough for recently added VMs to start and begin to take on load. Options A and B are incorrect because changing the minimum and maximum number of VMs in the group does not affect the rate at which VMs are added or removed.
Option D is incorrect because it reduces the time available for new instances to initialize, so it may actually make the problem worse.
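A rough sketch, using the google-cloud-compute Python client, of configuring an autoscaler with a longer cool-down (initialization) period so that newly created VMs have time to start taking load before the next scaling decision; the project, zone, and managed instance group names are placeholders.

    from google.cloud import compute_v1

    project, zone, mig = "my-project", "us-central1-a", "backend-mig"

    autoscaler = compute_v1.Autoscaler(
        name="backend-autoscaler",
        target=f"projects/{project}/zones/{zone}/instanceGroupManagers/{mig}",
        autoscaling_policy=compute_v1.AutoscalingPolicy(
            min_num_replicas=2,
            max_num_replicas=10,
            cool_down_period_sec=180,   # wait 3 minutes before counting a new VM's metrics
            cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
                utilization_target=0.6
            ),
        ),
    )

    compute_v1.AutoscalersClient().insert(
        project=project, zone=zone, autoscaler_resource=autoscaler
    ).result()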

55
Q

A clothing retailer has just developed a new feature for its customer-facing web application. Customers can upload images of their clothes, create montages from those images, and share them on social networking sites. Images are temporarily saved to locally attached drives as the customer works on the montage. When the montage is complete, the final version is copied to a Cloud Storage bucket. The services implementing this feature run in a managed instance group. Several users have noted that their final montages are not available even though they saved them in the application. No other problems have been reported with the service. What might be causing this problem?
A. The Cloud Storage bucket is out of storage.
B. The locally attached drive does not have a filesystem.
C. The users experiencing the problem were using a VM that was shut down by an autoscaler, and a cleanup script did not run to copy the latest version of the montage to Cloud Storage.
D. The network connectivity between the VMs and Cloud Storage has failed.

A

The correct answer is C. If the server is shut down without a cleanup script, then data that would otherwise be copied to Cloud Storage could be lost when the VM shuts down.
Option A is incorrect because buckets do not have a fixed amount of storage.
Option B is incorrect because, if that were the case, the service would fail for all users, not just several of them.
Option D is incorrect because if there was a connectivity failure between the VM and Cloud Storage, there would be more symptoms of such a failure.

56
Q

Your development team has implemented a new application using a microservices architecture. You would like to minimize DevOps overhead by deploying the services in a way that will autoscale. You would also like to run each microservice in containers. What is a good option for implementing these requirements in Google Cloud Platform?
A. Run the containers in Cloud Functions.
B. Run the containers in Kubernetes Engine.
C. Run the containers in Cloud Dataproc.
D. Run the containers in Cloud Dataflow.

A

The correct answer is B. The requirements are satisfied by the Kubernetes container orchestration capabilities.
Option A is incorrect, as Cloud Functions do not run containers.
Option C is incorrect because Cloud Dataproc is a managed service for Hadoop and Spark.
Option D is incorrect, as Cloud Dataflow is a managed service for stream and batch processing using the Apache Beam model.

57
Q

TerramEarth is considering building an analytics database and making it available to equipment designers. The designers require the ability to query the data with SQL. The analytics database manager wants to minimize the cost of the service. What would you recommend?
A. Use BigQuery as the analytics database, and partition the data to minimize the amount of data scanned to answer queries.
B. Use Bigtable as the analytics database, and partition the data to minimize the amount of data scanned to answer queries.
C. Use BigQuery as the analytics database, and use data federation to minimize the amount of data scanned to answer queries.
D. Use Bigtable as the analytics database, and use data federation to minimize the amount of data scanned to answer queries.

A

The correct answer is A. BigQuery should be used for an analytics database. Partitioning allows the query processor to limit scans to partitions that might have the data selected in a query. Options B and D are incorrect because Bigtable does not support SQL. Options C and D are incorrect because federation is a way of making data from other sources available within a database; it does not limit the data scanned in the way that partitioning does.
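As an illustration of the recommended approach, the sketch below creates a time-partitioned BigQuery table with the google-cloud-bigquery Python client; queries that filter on the partitioning column then scan only the relevant partitions. The project, dataset, table, and schema are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    table = bigquery.Table(
        "my-project.telemetry.sensor_readings",
        schema=[
            bigquery.SchemaField("vehicle_id", "STRING"),
            bigquery.SchemaField("reading_ts", "TIMESTAMP"),
            bigquery.SchemaField("value", "FLOAT"),
        ],
    )
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY, field="reading_ts"
    )
    client.create_table(table)

A query that filters on reading_ts then reads only the partitions for those days, which keeps scan costs down.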

58
Q

Line-of-business owners have decided to move several applications to the cloud. They believe the cloud will be more reliable, but they want to collect data to test their hypothesis. What is a common measure of reliability that they can use?
A. Mean time to recovery
B. Mean time between failures
C. Mean time between deployments
D. Mean time between errors

A

The correct answer is B. Mean time between failures is a measure of reliability.
Option A is a measure of how long it takes to recover from a disruption. Options C and D are incorrect because the time between deployments or errors is not directly related to reliability.

59
Q

A group of business executives and software engineers are discussing the level of risk that is acceptable for a new application. Business executives want to minimize the risk that the service is not available. Software engineers note that the more developer time dedicated to reducing risk of disruption, the less time they have to implement new features. How can you formalize the group’s tolerance for risk of disruption?
A. Request success rate
B. Uptime of service
C. Latency
D. Throughput

A

The correct answer is A. Request success rate is a measure of how many requests were successfully satisfied.
Option B is incorrect because at least some instances of an application may be up at any time, so it does not reflect the capacity available. Options C and D are not relevant measures of risk.

60
Q

Your DevOps team recently determined that it needed to increase the size of persistent disks used by VMs running a business-critical application. When scaling up the size of available persistent storage for a VM, what other step may be required?
A. Adjusting the filesystem size in the operating system
B. Backing up the persistent disk before changing its size
C. Changing the access controls on files on the disk
D. Updating disk metadata, including labels

A

The correct answer is A. The persistent storage may be increased in size, but the operating system may need to be configured to use that additional storage.
Option B is incorrect because while backing up a disk before operating on it is a good practice, it is not required.
Option C is incorrect because changing storage size does not change access control rules.
Option D is incorrect because any disk metadata that needs to change when the size changes is updated by the resize process.

61
Q

You are consulting for a client that is considering moving some on-premises workloads to the Google Cloud Platform. The workloads are currently running on VMs that use a specially hardened operating system. Application administrators will need root access to the operating system as well. The client wants to minimize changes to the existing configuration. Which GCP compute service would you recommend?
A. Compute Engine
B. Kubernetes Engine
C. App Engine Standard
D. App Engine Flexible

A

The correct answer is A. Compute Engine instances meet all of the requirements: they can run VMs with minimal changes and application administrators can have root access.
Option B would require the VMs to be deployed as containers.
Option C is incorrect because App Engine Standard is limited to applications that can execute in a language-specific runtime.
Option D is incorrect, as App Engine Flexible runs containers, not VMs.

62
Q

You have just joined a startup company that analyzes healthcare data and makes recommendations to healthcare providers to improve the quality of care while controlling costs. The company must comply with privacy regulations. A compliance consultant recommends that your company control its encryption keys used to encrypt data stored on cloud servers. You agree with the consultant but also want to minimize the overhead of key management. What GCP service should the company use?
A. Use default encryption enabled on Compute Engine instances.
B. Use Google Cloud Key Management Service to store keys that you create and use them to encrypt storage used with Compute Engine instances.
C. Implement a trusted key store on premises, create the keys yourself, and use them to encrypt storage used with Compute Engine instances.
D. Use an encryption algorithm that does not use keys.

A

The best option is B. It meets the requirement of creating and managing the keys without requiring your company to deploy and manage a secure key store.
Option A is incorrect because it does not meet the requirements.
Option C requires more setup and maintenance than option B.
Option D does not exist, at least for strong encryption.

63
Q

A colleague complains that the availability and reliability of GCP VMs is poor because their instances keep shutting down without them issuing shutdown commands. No instance has run for more than 24 hours without shutting down for some reason. What would you suggest your colleague check to understand why the instances may be shutting down?
A. Make sure that the Cloud Operations agent is installed and collecting metrics.
B. Verify that sufficient persistent storage is attached to the instance.
C. Make sure that the instance availability is not set to preemptible.
D. Ensure that an external IP address has been assigned to the instance.

A

Option C is correct. The description of symptoms matches the behavior of preemptible instances.
Option A is wrong because collecting performance metrics will not cause or prevent shutdowns.
Option B is incorrect because shutdowns are not triggered by insufficient storage.
Option D is incorrect, as the presence or absence of an external IP address would not affect shutdown behavior.

64
Q

Your company is working on a government contract that requires all instances of VMs to have a virtual Trusted Platform Module. What Compute Engine configuration option would you enable or disable on your instance?
A. Trusted Module Setting
B. Shielded VMs
C. Preemptible VMs
D. Disable live migration

A

Option B is correct. Shielded VMs include the vTPM along with Secure Boot and Integrity Monitoring.
Option A is incorrect; there is no such option. Options C and D are not related to vTPM functionality.

65
Q

You are leading a lift-and-shift migration to the cloud. Your company has several load-balanced clusters that use VMs that are not identically configured. You want to make as few changes as possible when moving workloads to the cloud. What feature of GCP would you use to implement those clusters in the cloud?
A. Managed instance groups
B. Unmanaged instance groups
C. Flexible instance groups
D. Kubernetes clusters

A

The correct answer is B. Unmanaged instance groups can have nonidentical instances.
Option A is incorrect, as all instances are configured the same in managed instance groups.
Option C is incorrect because there is no such thing as a flexible instance group.
Option D is incorrect because Kubernetes clusters run containers, not VMs, and would require changes that are not required if the cluster is migrated to an unmanaged instance group.

66
Q

Your startup has a stateless web application written in Python 3.7. You are not sure what kind of load to expect on the application. You do not want to manage servers or containers if you can avoid it. What GCP service would you use?
A. Compute Engine
B. App Engine
C. Kubernetes Engine in Standard Mode
D. Cloud Dataproc

A

The correct answer is B. The requirements call for a PaaS. Second-generation App Engine Standard supports Python 3.7, and it does not require users to manage VMs or containers.
Option A is incorrect because you would have to manage VMs if you used Compute Engine.
Option C is incorrect, as you would have to create containers to run in Kubernetes Engine in Standard Mode.
Option D is incorrect because Cloud Dataproc is a managed Hadoop and Spark service, and it is not designed to run Python web applications.

67
Q

Your department provides audio transcription services for other departments in your company. Users upload audio files to a Cloud Storage bucket. Your application transcribes the audio and writes the transcript file back to the same bucket. Your process runs every day at midnight and transcribes all files in the bucket. Users are complaining that they are not notified if there is a problem with the audio file format until the next day. Your application has a program that can verify the quality of an audio file in less than two seconds. What changes would you make to the workflow to improve user satisfaction?
A. Include more documentation about what is required to transcribe an audio file successfully.
B. Use Cloud Functions to run the program to verify the quality of the audio file when the file is uploaded. If there is a problem, notify the user immediately.
C. Create a Compute Engine instance and set up a cron job that runs every hour to check the quality of files that have been uploaded into the bucket in the last hour. Send notices to all users who have uploaded files that do not pass the quality control check.
D. Use the App Engine Cron service to set up a cron job that runs every hour to check the quality of files that have been uploaded into the bucket in the last hour. Send notices to all users who have uploaded files that do not pass the quality control check.

A

The correct answer is B. This solution notifies users immediately of any problem and does not require any servers.
Option A does not solve the problem of reducing time to notify users when there is a problem. Options C and D solve the problem but do not notify users immediately.
Option C also requires you to manage a server.
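
A rough sketch of option B as a background Cloud Function (Python runtime) triggered when an object is finalized in the bucket; verify_audio and notify_user are hypothetical placeholders for the existing quality-check program and the notification step:

    def verify_audio(bucket: str, name: str) -> bool:
        # Placeholder for the existing quality-check program (runs in under two seconds).
        return True

    def notify_user(bucket: str, name: str, reason: str) -> None:
        # Placeholder: email or message the uploader.
        print(f"notify uploader of gs://{bucket}/{name}: {reason}")

    def check_uploaded_audio(event, context):
        """Triggered by object finalization on the upload bucket."""
        bucket = event["bucket"]
        name = event["name"]

        # Skip transcript files written back to the same bucket by the nightly job.
        if not name.lower().endswith((".wav", ".mp3", ".flac")):
            return

        if not verify_audio(bucket, name):
            notify_user(bucket, name, reason="audio file failed the quality check")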

68
Q

You have inherited a monolithic C++ application that you need to keep running. There will be minimal changes, if any, to the code. The previous developer who worked with this application created a Dockerfile and image container with the application and needed libraries. You’d like to deploy this in a way that minimizes your effort to maintain it. How would you deploy this application?
A. Create an instance in Compute Engine, install Docker, install the Cloud Monitoring agent, and then run the Docker image.
B. Create an instance in Compute Engine, but do not use the Docker image. Install the application and needed libraries. Install the Cloud Monitoring agent. Run the application directly in the VM, not in a container.
C. Use App Engine Flexible to run the container image. App Engine will monitor as needed.
D. Use App Engine Standard to run the container image. App Engine will monitor as needed.

A

The correct answer is C. App Engine Flexible requires the least effort: it will run the container, perform health checks, and collect performance metrics. Options A and B are incorrect because provisioning and managing Compute Engine instances is more effort than using App Engine Flexible.
Option D is incorrect because you cannot run a custom container in App Engine Standard.

69
Q

You have been asked to give a presentation on Kubernetes. How would you explain the difference between the cluster master and nodes?
A. Cluster masters manage the cluster and run core services such as the controller manager, API server, scheduler, and etcd. Nodes run workload jobs.
B. The cluster manager is an endpoint for API calls. All services needed to maintain a cluster are run on nodes.
C. The cluster manager is an endpoint for API calls. All services needed to maintain a cluster are run on nodes, and workloads are run on a third kind of server, a runner.
D. Cluster masters manage the cluster and run core services such as the controller manager, API server, scheduler, and etcd. Nodes monitor the cluster master and restart it if it fails.

A

The correct answer is A. Cluster masters run core services for the cluster, and nodes run workloads. Options B and C are incorrect, as the cluster manager is not just an endpoint for APIs. Also, there is no runner node type.
Option D is incorrect because nodes do not monitor cluster masters.

70
Q

External services are not able to access services running in a Kubernetes cluster. You suspect a controller may be down. Which type of controller would you check?
A. Pod
B. Deployment
C. Ingress Controller
D. Service Controller

A

Option C is correct. Ingress Controllers are needed by Ingress objects, which are objects that control external access to services running in a Kubernetes cluster.
Option A is incorrect, as pods are the lowest level of computational unit, and they run one or more containers.
Option B is incorrect, as deployments are collections of pods that run an application in a cluster.
Option D is incorrect, as services do not control access from external services.

71
Q

You are planning to run stateful applications in Kubernetes Engine. What should you use to support stateful applications?
A. Pods
B. StatefulPods
C. StatefulSets
D. PersistentStorageSet

A

The correct answer is C. StatefulSets deploy pods with unique IDs, which allows Kubernetes to support stateful applications by ensuring that clients can always use the same pod.
Option A is incorrect, as pods are always used for both stateful and stateless applications. Options B and D are incorrect because they are not actually components in Kubernetes.

72
Q

Every time a database administrator logs into a Firebase database, you would like a message sent to your mobile device. Which compute service could you use that would minimize your work in deploying and running the code that sends the message?
A. Compute Engine
B. Kubernetes Engine
C. Cloud Functions
D. Cloud Dataflow

A

Option C is correct because Cloud Functions can detect authentications to Firebase and run code in response. Sending a message would require a small amount of code, and this can run in Cloud Functions. Options A and B would require more work to set up a service to watch for a login and then send a message.
Option D is incorrect, as Cloud Dataflow is a stream and batch processing platform not suitable for responding to events in Firebase.

73
Q

Your team has been tasked with deploying infrastructure for development, test, staging, and production environments in region us-west1. You will likely need to deploy the same set of environments in two additional regions. What service would allow you to use an infrastructure-as-code (IaC) approach?
A. Cloud Dataflow
B. Deployment Manager
C. Identity and Access Manager
D. App Engine Flexible

A

The correct answer is B. Deployment Manager is Google Cloud's infrastructure-as-code (IaC) service.
Option A is incorrect because Cloud Dataflow is a stream and batch processing service.
Option C, Identity and Access Management, is an authentication and authorization service.
Option D, App Engine Flexible, is a PaaS offering that allows users to customize their own runtimes using containers.

74
Q

An IoT startup collects streaming data from industrial sensors and evaluates the data for anomalies using a machine learning model. The model scales horizontally. The data collected is buffered in a server for 10 minutes. Which of the following is a true statement about the system?
A. It is stateful.
B. It is stateless.
C. It may be stateful or stateless; there is not enough information to determine.
D. It is neither stateful nor stateless.

A

The correct answer is A. This application is stateful. It collects and maintains data about sensors in servers and evaluates that data.
Option B is incorrect because the application stores data about a stream, so it is stateful.
Option C is incorrect because there is enough information.
Option D is incorrect because the application stores data about the stream, so it is stateful.

75
Q

Your team is designing a stream processing application that collects temperature and pressure measurements from industrial sensors. Someone on the team suggests using a Cloud Memorystore cache. What could that cache be used for?
A. A SQL database
B. As a memory cache to store state data outside of instances
C. An extraction, transformation, and load service
D. A persistent object storage system

A

The correct answer is B. Of the four options, a cache is most likely used to store state data. If instances are lost, state information is not lost as well.
Option A is incorrect; Memorystore is not a SQL database.
Option C is incorrect because Memorystore does not provide extraction, transformation, and load services.
Option D is incorrect because Memorystore is not a persistent object store.

76
Q

A distributed application is not performing as well as expected during peak load periods. The application uses three microservices. The first of the microservices has the ability to send more data to the second service than the second service can process and keep up with. This causes the first microservice to wait while the second service processes data. What can be done to decouple the first service from the second service?
A. Run the microservices on separate instances.
B. Run the microservices in a Kubernetes cluster.
C. Write data from the first service to a Cloud Pub/Sub topic and have the second service read the data from the topic.
D. Scale both services together using MIGs.

A

Option C is the correct answer. Using a queue between the services allows the first service to write data as fast as needed, while the second service reads data as fast as it can. The second service can catch up after peak load subsides. Options A, B, and D do not decouple the services.
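
A minimal sketch of option C with the google-cloud-pubsub client library (the project, topic, and subscription names are hypothetical). The first service publishes as fast as it produces data; the second pulls at whatever rate it can sustain and catches up after the peak:

    from google.cloud import pubsub_v1

    PROJECT = "my-project"              # hypothetical
    TOPIC = "service-one-output"        # hypothetical
    SUBSCRIPTION = "service-two-input"  # hypothetical

    # First microservice: publish without waiting for the consumer.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT, TOPIC)
    publisher.publish(topic_path, data=b'{"order_id": 123}').result()

    # Second microservice: pull and process at its own pace.
    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

    def process(data: bytes) -> None:
        print("processing", data)       # placeholder for the real work

    def handle(message):
        process(message.data)
        message.ack()                   # acknowledge only after successful processing

    streaming_pull = subscriber.subscribe(sub_path, callback=handle)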

77
Q

A colleague has suggested that you use the Apache Beam framework for implementing a highly scalable workflow. Which Google Cloud service would you use?
A. Cloud Dataproc
B. Cloud Dataflow
C. Cloud Dataprep
D. Cloud Memorystore

A

Option B is the correct answer. Cloud Dataflow is Google Cloud's managed implementation of Apache Beam.
Option A, Cloud Dataproc, is a managed Hadoop and Spark service.
Option C, Cloud Dataprep, is a data preparation tool for analysis and machine learning.
Option D, Cloud Memorystore, is a managed cache service.

78
Q

Your manager wants more data on the performance of applications running in Compute Engine, specifically, data on CPU and memory utilization. What Google Cloud service would you use to collect that data?
A. Cloud Dataprep
B. Cloud Monitoring
C. Cloud Dataproc
D. Cloud Memorystore

A

Option B is the correct answer. Cloud Monitoring is Google Cloud's monitoring service.
Option A, Cloud Dataprep, is a data preparation tool for analysis and machine learning.
Option C, Cloud Dataproc, is a managed Hadoop and Spark service.
Option D, Cloud Memorystore, is a managed cache service.

79
Q

You are receiving alerts that CPU utilization is high on several Compute Engine instances. The instances are all running a custom C++ application. When you receive these alerts, you deploy an additional instance running the application. A load balancer automatically distributes the workload across all of the instances. What is the best option to avoid having to add servers manually when CPU utilization is high?
A. Always run more servers than needed to avoid high CPU utilization.
B. Deploy the instances in a MIG, and use autoscaling to add and remove instances as needed.
C. Run the application in App Engine Standard.
D. Whenever you receive an alert, add two instances instead of one.

A

The correct answer is B. Managed instance groups can autoscale, so this option would automatically add or remove instances as needed. Options A and D are not as cost-efficient as option B.
Option C is incorrect because App Engine Standard does not provide a C++ runtime.

80
Q

A retailer has sales data streaming into a Cloud Pub/Sub topic from stores across the country. Each time a sale is made, data is sent from the point of sale to Google Cloud. The data needs to be transformed and aggregated before it is written to BigQuery. Which of the following services would you use to perform that processing and write data to BigQuery?
A. Firebase
B. Cloud Dataflow
C. Cloud Memorystore
D. Cloud Datastore

A

Option B is correct. Cloud Dataflow is designed to support stream and batch processing, and it can write data to BigQuery.
Option A is incorrect, as Firebase is GCP’s mobile development platform.
Option D is incorrect; Datastore is a NoSQL database.
Option C is incorrect because Cloud Memorystore is a managed cache service. This is an ETL operation, so Cloud Data Fusion is also a viable solution, but it was not included in the options.
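
A rough Apache Beam sketch of such a pipeline, runnable on Cloud Dataflow by supplying the DataflowRunner; the topic, table, and the transformation itself are hypothetical:

    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

    options = PipelineOptions()   # add --runner=DataflowRunner, project, region, etc.
    options.view_as(StandardOptions).streaming = True

    def to_row(message: bytes) -> dict:
        sale = json.loads(message.decode("utf-8"))
        # Hypothetical transformation: keep only the fields the warehouse needs.
        return {"store_id": sale["store_id"], "amount": float(sale["amount"])}

    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/sales")
            | "Transform" >> beam.Map(to_row)
            | "Write" >> beam.io.WriteToBigQuery(
                "my-project:retail.sales",
                schema="store_id:STRING,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )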

81
Q

Auditors have determined that several of the microservices deployed on Kubernetes clusters in your GCP and on-premises clusters do not perform authentication in ways that comply with security requirements. You want developers to be able to deploy microservices without having to spend a lot of time developing and testing authentication mechanisms. What managed service in GCP would you use to reduce the need for developers to implement authentication mechanisms with each new service?
A. Kubernetes Services
B. Anthos Service Mesh
C. Kubernetes Ingress
D. Anthos Config Management

A

The correct answer is B. The Anthos Service Mesh provides a common framework for performing common operations, such as monitoring, networking, and authentication, on behalf of services so individual services do not have to implement those operations.
Option A is incorrect; a Kubernetes Service is an abstraction for accessing applications running in a Kubernetes cluster.
Option C is incorrect; Kubernetes Ingress is used for enabling access to Kubernetes services from external clients.
Option D is incorrect; the Anthos Config Management service controls cluster configuration by applying configuration specifications to selected components of a cluster based on attributes such as namespaces, labels, and annotations. Anthos Config Management includes the Policy Controller, which is designed to enforce business logic rules on API requests to Kubernetes.

82
Q

You need to store a set of files for an extended period. Anytime the data in the files needs to be accessed, it will be copied to a server first, and then the data will be accessed. Files will not be accessed more than once a year. The set of files will all have the same access controls. What storage solution would you use to store these files?
A. Cloud Storage Archive
B. Cloud Storage Nearline
C. Cloud Filestore
D. Bigtable

A

The correct answer is A. The Cloud Storage Archive storage class is designed for long-term storage of infrequently accessed objects.
Option B is not the best answer because Nearline should be used with objects that are accessed less often than once in 30 days. Archive class storage is more cost-effective and still meets the requirements.
Option C is incorrect. Cloud Filestore is a network filesystem, and it is used to store data that is actively used by applications running on Compute Engine VM and Kubernetes Engine clusters.
Option D is incorrect; Bigtable is a NoSQL database that is not designed for file storage.
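
A minimal sketch of option A with the google-cloud-storage library (the bucket name and location are hypothetical): create a bucket whose default storage class is ARCHIVE and upload the files to it. A single bucket works here because the whole set of files shares the same access controls.

    from google.cloud import storage

    client = storage.Client()

    bucket = storage.Bucket(client, name="example-long-term-records")  # hypothetical name
    bucket.storage_class = "ARCHIVE"          # default class for new objects
    client.create_bucket(bucket, location="us-central1")

    # Uploaded objects inherit the bucket's default storage class.
    blob = bucket.blob("2023/contracts.tar.gz")
    blob.upload_from_filename("contracts.tar.gz")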

83
Q

You are uploading files in parallel to Cloud Storage and want to optimize load performance. What could you do to avoid creating hotspots when writing files to Cloud Storage?
A. Use sequential names or time stamps for files.
B. Do not use sequential names or time stamps for files.
C. Configure retention policies to ensure that files are not deleted prematurely.
D. Configure lifecycle policies to ensure that files are always using the most appropriate storage class.

A

The correct answer is B. Do not use sequential names or time stamps if uploading files in parallel. Files with sequentially close names will likely be assigned to the same server. This can create a hotspot when writing files to Cloud Storage.
Option A is incorrect, as this could cause hotspots. Options C and D affect the lifecycle of files once they are written and do not impact upload efficiency.
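
One common approach, sketched below, is to prefix each object name with a short hash so that names uploaded in parallel are spread across the keyspace; the naming scheme is illustrative, not something Cloud Storage prescribes:

    import hashlib

    def spread_name(original_name: str) -> str:
        """Prefix the object name with a short hash so parallel uploads with
        otherwise sequential names are distributed across the keyspace."""
        prefix = hashlib.md5(original_name.encode("utf-8")).hexdigest()[:6]
        return f"{prefix}/{original_name}"

    # Sequential, timestamp-like names become well distributed:
    for name in ("log-20240101-0001.csv", "log-20240101-0002.csv"):
        print(spread_name(name))   # e.g., "3f2a1c/log-20240101-0001.csv"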

84
Q

As a consultant on a cloud migration project, you have been asked to recommend a strategy for storing files that must be highly available even in the event of a regional failure. What would you recommend?
A. BigQuery
B. Cloud Datastore
C. Multiregional Cloud Storage
D. Regional Cloud Storage

A

The correct answer is C. Multiregional Cloud Storage replicates data to multiple regions. In the event of a failure in one region, the data would be retrieved from another region. Options A and B are incorrect because those are databases, not file storage systems.
Option D is incorrect because it does not meet the requirement of providing availability in the event of a single region failure.

85
Q

As part of a migration to Google Cloud Platform, your department will run a collaboration and document management application on Compute Engine virtual machines. The application requires a filesystem that can be mounted using operating system commands. All documents should be accessible from any instance. What storage solution would you recommend?
A. Cloud Storage
B. Cloud Filestore
C. A document database
D. A relational database

A

The correct answer is B. Cloud Filestore is a network-attached storage service that provides a filesystem that is accessible from Compute Engine. Filesystems in Cloud Filestore can be mounted using standard operating system commands.
Option A, Cloud Storage, is incorrect because it does not provide a filesystem. Options C and D are incorrect because databases do not provide filesystems.

86
Q

Your team currently supports seven MySQL databases for transaction processing applications. Management wants to reduce the amount of staff time spent on database administration. What GCP service would you recommend to help reduce the database administration load on your teams?
A. Bigtable
B. BigQuery
C. Cloud SQL
D. Cloud Filestore

A

The correct answer is C. Cloud SQL is a managed database service that supports MySQL, SQL Server, and PostgreSQL.
Option A is incorrect because Bigtable is a wide-column NoSQL database, and it is not a suitable substitute for MySQL.
Option B is incorrect because BigQuery is optimized for data warehouse and analytic databases, not transactional databases.
Option D is incorrect, as Cloud Filestore is not a database.

87
Q

Your company is developing a new service that will have a global customer base. The service will generate large volumes of structured data and require the support of a transaction processing database. All users, regardless of where they are on the globe, must have a consistent view of data. What storage system will meet these requirements?
A. Cloud Spanner
B. Cloud SQL
C. Cloud Storage
D. BigQuery

A

The correct answer is A. Cloud Spanner is a managed database service that supports horizontal scalability across regions.
Option B is incorrect because Cloud SQL cannot scale globally.
Option C is incorrect, as Cloud Storage does not meet the database requirements.
Option D is incorrect because BigQuery is not designed for transaction processing systems.

88
Q

Your company is required to comply with several government and industry regulations, which include encrypting data at rest. What GCP storage services can be used for applications subject to these regulations?
A. Bigtable and BigQuery only
B. Bigtable and Cloud Storage only
C. Any of the managed databases, but no other storage services
D. Any GCP storage service

A

The correct answer is D. All data in GCP is encrypted when at rest. The other options are incorrect because they do not include all GCP storage services.

89
Q

As part of your role as a data warehouse administrator, you occasionally need to export data from the data warehouse, which is implemented in BigQuery. What command-line tool would you use for that task?
A. gsutil
B. gcloud
C. bq
D. cbt

A

The correct answer is C. The bq command-line tool is used to work with BigQuery.
Option A, gsutil, is the command-line tool for working with Cloud Storage, and option D, cbt, is the command-line tool for working with Bigtable.
Option B, gcloud, is the command-line tool for most other GCP services.

90
Q

Another task that you perform as data warehouse administrator is granting authorizations to perform tasks with the BigQuery data warehouse. A user has requested permission to view table data but not change it. What role would you grant to this user to provide the needed permissions but nothing more?
A. dataViewer
B. admin
C. metadataViewer
D. dataOwner

A

The correct answer is A. dataViewer allows a user to list projects and tables and get table data and metadata. Options B and D would enable the user to view data but would grant more permissions than needed, including the ability to change the data.
Option C does not grant permission to view data in tables.

91
Q

A developer is creating a set of reports and is trying to minimize the amount of data each query returns while still meeting all requirements. What bq command-line option will help you understand the amount of data returned by a query without actually executing the query?
A. --no-data
B. --estimate-size
C. --dry-run
D. --size

A

The correct answer is C. --dry-run returns an estimate of the number of bytes that would be processed if the query were executed. The other choices are not actually bq command-line options.
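
The same estimate is available programmatically by setting dry_run on a query job configuration in the google-cloud-bigquery client library (the query below is hypothetical); this is the client-library counterpart of the bq flag:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

    query_job = client.query(
        "SELECT report_date, SUM(amount) FROM `my-project.sales.orders` GROUP BY report_date",
        job_config=job_config,
    )

    # With dry_run=True no data is read; only the estimate is returned.
    print(f"This query would process {query_job.total_bytes_processed} bytes.")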

92
Q

A team of developers is choosing between using NoSQL or a relational database. What is a feature of NoSQL databases that is not available in relational databases?
A. Fixed schemas
B. ACID transactions
C. Indexes
D. Flexible schemas

A

The correct answer is D. NoSQL databases have flexible schemas. The other options specify features that are found in relational databases. ACID transactions and indexes are found in some NoSQL databases as well.

93
Q

A group of venture capital investors has hired you to review the technical design of a service that will be developed by a startup company seeking funding. The startup plans to collect data from sensors attached to vehicles. The data will be used to predict when a vehicle needs maintenance and before the vehicle breaks down. Thirty sensors will be on each vehicle. Each sensor will send up to 5 KB of data every second. The startup expects to start with hundreds of vehicles, but it plans to reach 1 million vehicles globally within 18 months. The data will be used to develop machine learning models to predict the need for maintenance. The startup is considering using a self-managed relational database to store the time-series data but wants your opinion. What would you recommend for a time-series database?
A. Continue to plan to use a self-managed relational database.
B. Use Cloud SQL.
C. Use Cloud Spanner.
D. Use Bigtable.

A

The correct answer is D. Bigtable is the best option for storing streaming data because it provides low-latency writes and can store petabytes of data. The database would need to store petabytes of data if the number of users scales as planned.
Option A is a poor choice because a self-managed relational database will be difficult to scale, is not the best type of database for the volume of time-series data the company anticipates, and would require more administrative effort than a managed service.
Option B will not scale to the volume of data expected.
Option C, Cloud Spanner, could scale to store the volumes of data, but it is not optimized for low-latency writes of streaming data.
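
For illustration, writing a single sensor reading to Bigtable with the google-cloud-bigtable client might look like the sketch below (the instance, table, column family, and row-key scheme are hypothetical); combining vehicle, sensor, and timestamp in the row key is a common time-series pattern:

    import datetime
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    instance = client.instance("vehicle-telemetry")   # hypothetical instance ID
    table = instance.table("sensor-readings")         # hypothetical table ID

    now = datetime.datetime.utcnow()
    row_key = f"vehicle123#sensor07#{now:%Y%m%d%H%M%S}".encode("utf-8")

    row = table.direct_row(row_key)
    row.set_cell("metrics", "temperature", b"87.5", timestamp=now)
    row.set_cell("metrics", "pressure", b"31.2", timestamp=now)
    row.commit()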

94
Q

A Bigtable instance increasingly needs to support simultaneous read and write operations. You’d like to separate the workload so that some nodes respond to read requests and others respond to write requests. How would you implement this to minimize the workload on developers and database administrators?
A. Create two instances, and separate the workload at the application level.
B. Create multiple clusters in the Bigtable instance, and use Bigtable replication to keep the clusters synchronized.
C. Create multiple clusters in the Bigtable instance, and use your own replication program to keep the clusters synchronized.
D. It is not possible to accomplish the partitioning of the workload as described.

A

The correct answer is B, create multiple clusters in the instance and use Bigtable replication. Options A and C are not correct, as they require developing custom applications to partition data or keep replicas synchronized.
Option D is incorrect because the requirements can be met.

95
Q

As a database architect, you’ve been asked to recommend a database service to support an application that will make extensive use of JSON documents. What would you recommend to minimize database administration overhead while minimizing the work required for developers to store JSON data in the database?
A. Cloud Storage
B. Cloud Firestore
C. Cloud Spanner
D. Cloud SQL

A

The correct answer is B. Cloud Firestore is a managed document database, which is a kind of NoSQL database that uses a flexible JSON-like data structure.
Option A is incorrect because Cloud Storage is an object store, not a database. Options C and D are not good fits because the JSON data would have to be mapped to relational structures to take advantage of the full range of relational features. There is no indication that additional relational features are required.
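
A short sketch of storing a JSON-like document with the google-cloud-firestore client (the collection and field names are hypothetical); nested and optional fields require no schema changes:

    from google.cloud import firestore

    db = firestore.Client()

    # Documents map naturally from JSON-like dictionaries, and fields can vary per document.
    db.collection("orders").document("order-1001").set({
        "customer": {"name": "Ada", "tier": "gold"},
        "items": [{"sku": "A-12", "qty": 2}, {"sku": "B-7", "qty": 1}],
        "total": 41.50,
    })

    snapshot = db.collection("orders").document("order-1001").get()
    print(snapshot.to_dict())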

96
Q

Your Cloud SQL database is experiencing high query latency. You could vertically scale the database to use a larger instance, but you do not need additional write capacity. What else could you try to reduce the number of reads performed by the database?
A. Switch to Cloud Spanner.
B. Use Cloud Bigtable instead.
C. Use Cloud Memorystore to create a database cache that stores the results of database queries. Before a query is sent to the database, the cache is checked for the answer to the query.
D. Add read replicas to the Cloud SQL database.

A

The correct answer is D. Configuring a read-only replica for the database will likely require only a configuration change to the applications that use the database. The turnaround on configuration changes is usually a lot faster than for code changes, which would be required to use a cache, such as Cloud Memorystore.
Option C is incorrect because it would require code changes to the application to read from the cache, which requires programmer time. It is a viable solution, but it is not the best solution available.
Option A is not a good choice because it would require a database migration, and there is no indication that the scale of Cloud Spanner is needed.
Option B is not a good choice because Bigtable is a NoSQL database and may not meet the database needs of the application.
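
For comparison, option C's cache-aside pattern would look roughly like the sketch below, using the standard redis client against a Memorystore endpoint (the host, key scheme, and run_sql_query helper are hypothetical). It works, but unlike adding a read replica it requires this code change in every application that queries the database:

    import json
    import redis

    cache = redis.Redis(host="10.0.0.3", port=6379)    # hypothetical Memorystore endpoint

    def run_sql_query(sql: str) -> list:
        # Placeholder for the existing Cloud SQL query path.
        return []

    def cached_query(sql: str, ttl_seconds: int = 300) -> list:
        key = f"query:{sql}"
        hit = cache.get(key)
        if hit is not None:
            return json.loads(hit)                      # served from Memorystore
        rows = run_sql_query(sql)                       # fall through to Cloud SQL
        cache.setex(key, ttl_seconds, json.dumps(rows))
        return rows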

97
Q

You would like to move objects stored in Cloud Storage automatically from regional storage to Nearline storage when the object is six months old. What feature of Cloud Storage would you use?
A. Retention policies
B. Lifecycle policies
C. Bucket locks
D. Multiregion replication

A

Option B is correct. Lifecycle policies allow you to specify an action, like changing storage class, after an object reaches a specified age.
Option A is incorrect, as retention policies prevent premature deleting of an object.
Option C is incorrect. This is a feature used to implement retention policies.
Option D is incorrect; multiregion replication does not control changes to storage classes.
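
A minimal sketch of such a rule with the google-cloud-storage library (the bucket name is hypothetical); the rule moves objects to Nearline once they are roughly six months (180 days) old:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("example-reports")       # hypothetical bucket

    # Lifecycle rule: change the storage class to NEARLINE after 180 days.
    bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=180)
    bucket.patch()

    for rule in bucket.lifecycle_rules:
        print(rule)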

98
Q

A customer has asked for help with a web application. Static data served from a data center in Chicago in the United States loads slowly for users located in Australia, South Africa, and Southeast Asia. What would you recommend to reduce latency?
A. Distribute data using Cloud CDN.
B. Use Premium Network from the server in Chicago to client devices.
C. Scale up the size of the web server.
D. Move the server to a location closer to those users.

A

The correct answer is A. Cloud CDN distributes copies of static data to points of presence around the globe so that it can be closer to users.
Option B is incorrect. Premium Network routes data over the internal Google network, but it does not extend to client devices.
Option C will not help with latency.
Option D is incorrect because moving the location of the server might reduce the latency for some users, but it would likely increase latency for other users, as they could be located anywhere around the globe.

99
Q

A data pipeline ingests performance monitoring data about a fleet of vehicles using Cloud Pub/Sub. The data is written to Cloud Bigtable to enable queries about specific vehicles. The data will also be written to BigQuery and BigQuery ML will be used to build predictive models about failures in vehicle components. You would like to provide high throughput ingestion and exactly-once delivery semantics when writing data to BigQuery. How would you load that data into BigQuery?
A. BigQuery Transfer Service
B. Cloud Storage Transfer Service
C. BigQuery Storage Write API
D. BigQuery Load Jobs

A

The correct answer is C. The BigQuery Storage Write API provides high-throughput ingestion and exactly-once delivery semantics. The BigQuery Transfer Service and BigQuery Load Jobs are used for batch loading, not streaming loading. Cloud Storage Transfer Service is used to load data into Cloud Storage, not BigQuery.

100
Q

Your team has deployed a VPC with default subnets in all regions. The lead network architect at your company is concerned about possible overlap in the use of private addresses. How would you explain how you are dealing with the potential problem?
A. You inform the network architect that you are not using private addresses at all.
B. When default subnets are created for a VPC, each region is assigned a different IP address range.
C. You have increased the size of the subnet mask in the CIDR block specification of the set of IP addresses.
D. You agree to assign new IP address ranges on all subnets.

A

The correct answer is B. Default subnets are each assigned a distinct, nonoverlapping IP address range.
Option A is incorrect, as default subnets use private addresses.
Option C is incorrect because increasing the size of the subnet mask does not necessarily prevent overlaps.
Option D is an option that would also ensure nonoverlapping addresses, but it is not necessary given the stated requirements.

101
Q

A data warehouse service running in GCP has all of its resources in a single project. The e-commerce application has resources in another project, including a database with transaction data that will be loaded into the data warehouse. The data warehousing team would like to read data directly from the database using extraction, transformation, and load processes that run on Compute Engine instances in the data warehouse project. Which of the following network constructs could help with this?
A. Shared VPC
B. Regional load balancing
C. Direct peering
D. Cloud VPN

A

The correct answer is A. A Shared VPC allows resources in one project to access the resources in another project.
Option B is incorrect, as load balancing does not help with network access. Options C and D are incorrect because those are mechanisms for hybrid cloud computing. In this case, all resources are in GCP, so hybrid networking is not needed.

102
Q

An intern working with your team has changed some firewall rules. Prior to the change, all Compute Engine instances on the network could connect to all other instances on the network. After the change, some nodes cannot reach other nodes. What might have been the change that causes this behavior?
A. One or more implied rules were deleted.
B. The default-allow-internal rule was deleted.
C. The default-allow-icmp rule was deleted.
D. The priority of a rule was set higher than 65535.

A

The correct answer is B. The default-allow-internal rule allows ingress connections for all protocols and ports among instances in the network.
Option A is incorrect because implied rules cannot be deleted, and the implied rules alone would not be enough to enable all instances to connect to all other instances.
Option C is incorrect because that rule governs the ICMP protocol for management services, like ping.
Option D is incorrect because 65535 is the largest number/lowest priority allowed for firewall rules.

103
Q

The network administrator at your company has asked that you configure a firewall rule that will always take precedence over any other firewall rule. What priority would you assign?
A. 0
B. 1
C. 65534
D. 65535

A

The correct answer is A. 0 is the highest priority for firewall rules. All the other options are incorrect because they have priorities that are not guaranteed to enable the rule to take precedence.

104
Q

During a review of a GCP network configuration, a developer asks you to explain CIDR notation. Specifically, what does the 8 mean in the CIDR block 172.16.10.2/8?
A. 8 is the number of bits used to specify a host address.
B. 8 is the number of bits used to specify the subnet mask.
C. 8 is the number of octets used to specify a host address.
D. 8 is the number of octets used to specify the subnet mask.

A

The correct answer is B. 8 is the number of bits used to specify the subnet mask.
Option A is wrong because 24 is the number of bits available to specify a host address. Options C and D are wrong, as the integer does not indicate an octet.
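
The split can be checked with Python's standard ipaddress module: for 172.16.10.2/8, 8 bits form the network mask and the remaining 24 bits address hosts.

    import ipaddress

    network = ipaddress.ip_network("172.16.10.2/8", strict=False)

    print(network)                  # 172.0.0.0/8
    print(network.netmask)          # 255.0.0.0 (8 mask bits)
    print(network.num_addresses)    # 16777216 = 2**24 addresses in the block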

105
Q

Several new firewall rules have been added to a VPC. Several users are reporting unusual problems with applications that did not occur before the firewall rule changes. You’d like to debug the firewall rules while causing the least impact on the network and doing so as quickly as possible. Which of the following options is best?
A. Set all new firewall priorities to 0 so that they all take precedence over other rules.
B. Set all new firewall priorities to 65535 so that all other rules take precedence over these rules.
C. Disable one rule at a time to see whether that eliminates the problems. If needed, disable combinations of rules until the problems are eliminated.
D. Remove all firewall rules and add them back one at a time until the problems occur and then remove the latest rule added back.

A

The correct answer is C. Disabling a firewall rule allows you to turn off the effect of a rule quickly without deleting it.
Option A is incorrect because it does not help isolate the rule or rules causing the problem, and it may introduce new problems because the new rules may take precedence in cases where they did not before.
Option B is not helpful because alone it would not help isolate the problematic rule or rules.
Option D is incorrect because it will leave the VPC with only implied rules. Adding back all rules could be time-consuming, and having no rules could cause additional problems.

106
Q

An executive wants to understand what changes in the current cloud architecture are required to run compute-intensive machine learning workloads in the cloud and have the models run in production using on-premises servers. The models are updated daily. There is no network connectivity between the cloud and on-premises networks. What would you tell the executive?
A. Implement additional firewall rules.
B. Use global load balancing.
C. Use hybrid-cloud networking.
D. Use regional load balancing.

A

The correct answer is C. Hybrid networking is needed to enable the transfer of data to the cloud to build models and then transfer models back to the on-premises servers.
Option A is incorrect because firewall rules restrict or allow traffic on a network; they do not link networks. Options B and D are incorrect because load balancing does not link networks.

107
Q

To comply with regulations, you need to deploy a disaster recovery site that has the same design and configuration as your production environment. You want to implement the disaster recovery site in the cloud. Which topology would you use?
A. Gated ingress topology
B. Gated egress topology
C. Handover topology
D. Mirrored topology

A

The correct answer is D. With mirrored topology, public cloud and private on-premises environments mirror each other. Options A and B are not correct because gated topologies are used to allow access to APIs in other networks without exposing them to the public internet.
Option C is incorrect because that topology is used to exchange data and have different processing done in different environments.

108
Q

Network engineers have determined that the best option for linking the on-premises network to GCP resources is by using an IPSec VPN. Which GCP service would you use in the cloud?
A. Cloud IPSec
B. Cloud VPN
C. Cloud Interconnect IPSec
D. Cloud VPN IKE

A

The correct answer is B. Cloud VPN implements IPSec VPNs. All other options are incorrect because they are not names of actual services available in GCP.

109
Q

Network engineers have determined that a link between the on-premises network and GCP will require an 8 Gbps connection. Which option would you recommend?
A. Cloud VPN
B. Partner Interconnect
C. Direct Interconnect
D. Hybrid Interconnect

A

The correct answer is B. Partner Interconnect provides between 50 Mbps and 10 Gbps connections.
Option A, Cloud VPN, provides up to 3 Gbps connections.
Option C, Direct Interconnect, provides 10 or 100 Gbps connections.
Option D is not an actual GCP service name.

110
Q

Network engineers have determined that a link between the on-premises network and GCP will require a connection between 60 Gbps and 80 Gbps. Which hybrid-cloud networking services would best meet this requirement?
A. Cloud VPN
B. Cloud VPN and Direct Interconnect
C. Direct Interconnect and Partner Interconnect
D. Cloud VPN, Direct Interconnect, and Partner Interconnect

A

The correct answer is C. Both Direct Interconnect and Partner Interconnect can be configured to support between 60 Gbps and 80 Gbps. All other options are wrong because Cloud VPN supports a maximum of 3 Gbps.

111
Q

The director of network engineering has determined that any links to networks outside of the company data center will be implemented at the level of BGP routing exchanges. What hybrid-cloud networking option should you use?
A. Direct peering
B. Indirect peering
C. Global load balancing
D. Cloud IKE

A

The correct answer is A. Direct peering allows customers to connect their networks to a Google network point of access and exchange Border Gateway Protocol (BGP) routes, which define paths for transmitting data between networks. Options B and D are not the names of GCP services.
Option C is not correct because global load balancing does not link networks.

112
Q

A startup is designing a social site dedicated to discussing global political, social, and environmental issues. The site will include news and opinion pieces in text and video. The startup expects that some stories will be exceedingly popular, and others won’t be, but they want to ensure that all users have a similar experience with regard to latency, so they plan to replicate content across regions. What load balancer should they use?
A. HTTP(S)
B. SSL Proxy
C. Internal TCP/UDP
D. TCP Proxy

A

The correct answer is A. HTTP(S) load balancers are global and will route HTTP traffic to the region closest to the user making a request.
Option B is incorrect, as SSL Proxy is used for non-HTTPS SSL traffic.
Option C is incorrect because it does not support external traffic from the public internet.
Option D is incorrect, as TCP Proxy is used for non-HTTP(S) traffic.

113
Q

As a developer, you foresee the need to have a load balancer that can distribute load using only private RFC 1918 addresses. Which load balancer would you use?
A. Internal TCP/UDP
B. HTTP(S)
C. SSL Proxy
D. TCP Proxy

A

The correct answer is A. Only Internal TCP/UDP supports load balancing using private IP addressing. The other options are all incorrect because they cannot load balance using private IP addresses.

114
Q

After a thorough review of the options, a team of developers and network engineers have determined that the SSL Proxy load balancer is the best option for their needs. What other GCP service must they have to use the SSL Proxy load balancer?
A. Cloud Storage
B. Cloud VPN
C. Premium Tier networking
D. TCP Proxy Load Balancing

A

The correct answer is C. All global load balancers require the Premium Tier network, which routes all data over the Google global network and not the public internet.
Option A is incorrect, as object storage is not needed.
Option B is incorrect because a VPN is not required.
Option D is incorrect, as that is another kind of global load balancer that would require Premium Tier networking.

115
Q

You want to access Cloud Storage APIs from a Compute Engine VM that has only an internal IP address. What GCP service would you use to enable that access?
A. Private Service Connect for Google APIs
B. Dedicated Interconnect
C. Partner Interconnect
D. HA VPN

A

The correct answer is A. Private Service Connect for Google APIs allows for access to Google Cloud APIs without requiring an external IP address. The other options are all for hybrid cloud computing connecting on-premises devices to a VPC.

116
Q

A company is migrating an enterprise application to Google Cloud. When running on-premises, application administrators created user accounts that were used to run background jobs. There was no actual user associated with the account, but the administrators needed an identity with which to associate permissions. What kind of identity would you recommend using when running that application in GCP?
A. Google-associated account
B. Cloud Identity account
C. Service account
D. Batch account

A

Option C, a service account, is the best choice for an account that will be associated with an application or resource, such as a VM. Both options A and B should be used with actual users.
Option D is not a valid type of identity in GCP.

117
Q

You are tasked with managing the roles and privileges granted to groups of developers, quality assurance testers, and site reliability engineers. Individuals frequently move between groups. Each group requires a different set of permissions. What is the best way to grant access to resources that each group needs?
A. Create a group in Google Groups for each of the three groups: developers, quality assurance testers, and site reliability engineers. Add the identities of each user to their respective group. Assign predefined roles to each group.
B. Create a group in Google Groups for each of the three groups: developers, quality assurance testers, and site reliability engineers. Assign permissions to each user and then add the identities to their respective group.
C. Assign each user a Cloud Identity, and grant permissions directly to those identities.
D. Create a G Suite group for each of the three groups: developers, quality assurance testers, and site reliability engineers. Assign permissions to each user and then add the identities to their respective group.

A

The correct answer is A. The identities should be assigned to groups and predefined roles assigned to those groups. Assigning roles to groups eases administrative overhead because users receive permissions when they are added to a group. Removing a user from a group removes permissions from the user, unless the user receives that permission in another way. Options B, C, and D are incorrect because you cannot assign permissions directly to a user.

118
Q

You are making a presentation on Google Cloud security to a team of managers in your company. Someone mentions that to comply with regulations, the organization will have to follow several security best practices, including least privilege. They would like to know how GCP supports using least privilege. What would you say?
A. GCP provides a set of three broad roles: owner, editor, and viewer. Most users will be assigned viewer unless they need to change configurations, in which case they will receive the editor role, or if they need to perform administrative functions, in which case they will be assigned owner.
B. GCP provides a set of fine-grained permissions and predefined roles that are assigned those permissions. The roles are based on commonly grouped responsibilities. Users will be assigned only the predefined roles needed for them to perform their duties.
C. GCP provides several types of identities. Users will be assigned a type of identity most suitable for their role in the organization.
D. GCP provides a set of fine-grained permissions and custom roles that are created and managed by cloud users. Users will be assigned a custom role designed specifically for that user’s responsibilities.

A

The correct answer is option B. Fine-grained permissions and predefined roles help implement least privilege because each predefined role has only the permissions needed to carry out a specific set of responsibilities.
Option A is incorrect. Basic roles are coarse-grained and grant more permissions than often needed.
Option C is incorrect. Simply creating a particular type of identity does not by itself associate permissions with users.
Option D is not the best option because it requires more administrative overhead than option B, and it is a best practice to use predefined roles as much as possible and only create custom roles when a suitable predefined role does not exist.

119
Q

In the interest of separating duties, one member of your team will have permission to perform all actions on logs. You will also rotate the duty every 90 days. How would you grant the necessary permissions?
A. Create a Google Group, assign roles/logging.admin to the group, add the identity of the person who is administering the logs at the start of the 90-day period, and remove the identity of the person who administered logs during the previous 90 days.
B. Assign roles/logging.admin to the identity of the person who is administering the logs at the start of the 90-day period, and revoke the role from the identity of the person who administered logs during the previous 90 days.
C. Create a Google Group, assign roles/logging.privateLogViewer to the group, add the identity of the person who is administering the logs at the start of the 90-day period, and remove the identity of the person who administered logs during the previous 90 days.
D. Assign roles/logging.privateLogViewer to the identity of the person who is administering the logs at the start of the 90-day period, and revoke the role from the identity of the person who administered logs during the previous 90 days.

A

The correct answer is A. A group should be created for administrators and granted the necessary roles, which in this case is roles/logging.admin. The identity of the person responsible for a period should be added at the start of the period, and the person who was previously responsible should be removed from the group.
Option B is not the best option because it assigns roles to an identity, which is allowed but not recommended. If the team changes strategy and wants to have three administrators at a time, roles would have to be granted and revoked for multiple identities rather than a single group. Options C and D are incorrect because roles/logging.privateLogViewer does not grant administrative access.

120
Q

Your company is subject to several government and industry regulations that require all personal healthcare data to be encrypted when persistently stored. What must you do to ensure that applications processing protected data encrypt it when it is stored on disk or SSD?
A. Configure a database to use database encryption.
B. Configure persistent disks to use disk encryption.
C. Configure the application to use application encryption.
D. Nothing. Data is encrypted at rest by default.

A

The correct answer is D. You do not need to configure any settings to have data encrypted at rest in GCP. Options A, B, and C are all incorrect because no configuration is required.

121
Q

Data can be encrypted at multiple levels, such as at the platform, infrastructure, and device levels. Data may be encrypted multiple times before it is written to persistent storage. At the device level, how is data encrypted in GCP?
A. AES256 or AES128 encryption
B. Elliptic curve cryptography
C. Data Encryption Standard (DES)
D. Blowfish

A

The correct answer is A. At the device level, GCP encrypts data with AES256 or AES128.
Option B is incorrect because it is an asymmetric encryption algorithm that requires the use of a pair of keys, and Google’s key management options only support the use of a single key to manage encryption.
Option C is incorrect. DES is a weak and obsolete encryption algorithm that is easily broken by today’s methods.
Option D is incorrect. Blowfish is a strong encryption algorithm designed as a replacement for DES and other weak encryption algorithms, but it is not used in GCP.

122
Q

In GCP, each data chunk written to a storage system is encrypted with a data encryption key. The key is kept close to the data that it encrypts to ensure low latency when retrieving the key. How does GCP protect the data encryption key so that an attacker who gained access to the storage system storing the key could not use it to decrypt the data chunk?
A. Writes the data encryption key to a hidden location on disk
B. Encrypts the data encryption key with a key encryption key
C. Stores the data encryption key in a secure Cloud SQL database
D. Applies an elliptic curve encryption algorithm for each data encryption key

A

The correct answer is B. The data encryption key is encrypted using a key encryption key.
Option A is incorrect. There are no hidden locations on disk that are inaccessible from a hardware perspective.
Option C is incorrect. Keys are not stored in a relational database.
Option D is incorrect. An elliptic curve encryption algorithm is not used.
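
The pattern described here, wrapping a data encryption key (DEK) with a key encryption key (KEK), is often called envelope encryption. A minimal illustrative sketch in Python, using the third-party cryptography package rather than anything GCP-specific:

    # Illustrative envelope encryption: the DEK encrypts the data chunk,
    # and the KEK encrypts (wraps) the DEK. Requires the 'cryptography' package.
    from cryptography.fernet import Fernet

    kek = Fernet(Fernet.generate_key())      # key encryption key, held in a KMS in practice
    dek_bytes = Fernet.generate_key()        # data encryption key, stored near the data
    dek = Fernet(dek_bytes)

    ciphertext = dek.encrypt(b"data chunk")  # data encrypted with the DEK
    wrapped_dek = kek.encrypt(dek_bytes)     # DEK encrypted with the KEK

    # An attacker with access to the storage system sees only ciphertext and
    # wrapped_dek; without the KEK, the wrapped DEK is useless for decryption.
    recovered_dek = Fernet(kek.decrypt(wrapped_dek))
    assert recovered_dek.decrypt(ciphertext) == b"data chunk"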

123
Q

Data can be encrypted at different layers of the OSI network stack. Google Cloud may encrypt network data at multiple levels. What protocol is used at layer 7?
A. IPSec
B. TLS
C. ALTS
D. ARP

A

The correct answer is C. Layer 7 is the application layer, and Google uses ALTS at that level. Options A and B are incorrect. IPSec and TLS are used by Google but not at layer 7.
Option D is incorrect. ARP is an address resolution protocol, not a security protocol.

124
Q

After reviewing security requirements with compliance specialists at your company, you determine that your company will need to manage its own encryption keys. Keys may be stored in the cloud. What GCP service would you recommend for storing keys?
A. Cloud Datastore
B. Cloud Firestore
C. Cloud KMS
D. Bigtable

A

The correct answer is C. Cloud KMS is the key management service in GCP. It is designed specifically to store keys securely and manage the lifecycle of keys. Options A and B are incorrect. They are both document databases and are not suitable for low-latency, highly secure key storage.
Option D is incorrect. Bigtable is designed for low-latency, high-write volume operations over variable structured data. It is not designed for secure key management.
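
As a rough sketch of how an application would use Cloud KMS, the following assumes the google-cloud-kms Python client library and an existing key ring and key; the project, location, and key names are placeholders:

    # Hedged sketch: encrypt a small payload with a customer-managed key in Cloud KMS.
    from google.cloud import kms

    client = kms.KeyManagementServiceClient()
    key_name = client.crypto_key_path(
        "my-project", "us-central1", "my-key-ring", "my-key"   # placeholder names
    )
    response = client.encrypt(request={"name": key_name, "plaintext": b"sensitive data"})
    print(response.ciphertext)   # store the ciphertext; the key material never leaves KMS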

125
Q

The finance department of your company has notified you that logs generated by any finance application will need to be stored for five years. It is not likely to be accessed, but it has to be available if needed. If it were needed, you would have up to three days to retrieve the data. How would you recommend storing that data?
A. Keep it in Cloud Logging.
B. Export it to Cloud Storage and store it in Archive class storage.
C. Export it to BigQuery and partition it by year.
D. Export it to Cloud Pub/Sub using a different topic for each year.

A

The correct answer is B. Cloud Storage Archive class is the best option for maintaining archived data such as log data. Also, since the data is not likely to be accessed, Archive storage would be the most cost-effective option.
Option A is incorrect because Cloud Logging does not retain log data for five years.
Option C is not the best option since the data does not need to be queried, and it is likely not structured sufficiently to be stored efficiently in BigQuery.
Option D is incorrect. Cloud Pub/Sub is a messaging service, not a long-term data store.
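
The export itself is typically done with a log sink whose destination is a Cloud Storage bucket created with the Archive storage class. A hedged sketch, assuming the google-cloud-storage and google-cloud-logging client libraries; the bucket name, sink name, and filter are placeholders:

    # Hedged sketch: archive finance logs to an Archive-class bucket via a log sink.
    from google.cloud import logging, storage

    storage_client = storage.Client()
    bucket = storage_client.bucket("finance-log-archive")     # placeholder bucket name
    bucket.storage_class = "ARCHIVE"                          # cold, lowest-cost class
    storage_client.create_bucket(bucket, location="us-central1")

    sink = logging.Client().sink(
        "finance-log-sink",                                   # placeholder sink name
        filter_='logName:"finance"',                          # placeholder filter
        destination="storage.googleapis.com/finance-log-archive",
    )
    sink.create()   # the sink's writer identity still needs write access to the bucket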

126
Q

The legal department in your company notified software development teams that if a developer can deploy to production, then that developer cannot be allowed to perform the final code review before deploying to production. This is an example of which security best practice?
A. Defense in depth
B. Separation of duties
C. Least privilege
D. Encryption at rest

A

The correct answer is B. The duties of the development team are separated so that no one person can both approve a deployment and execute a deployment.
Option A is incorrect. Defense in depth is the use of multiple security controls to mitigate the same risk.
Option C is incorrect because least privilege applies to a set of permissions granted for a single task, such as deploying to production.
Option D is incorrect. Encryption at rest is not related to the scenario described in the question.

127
Q

A startup has hired you to advise on security and compliance related to their new online game for children ages 10 to 14. Players will register to play the game, which includes collecting the name, age, and address of the player. Initially, the company will target customers in the United States. With which regulation would you advise them to comply?
A. HIPAA/HITECH
B. SOX
C. COPPA
D. GDPR

A

The correct answer is C. The service will collect personal information of children under 13 in the United States, so COPPA applies.
Option A is incorrect because HIPAA and HITECH apply to protected healthcare data.
Option B is incorrect because SOX applies to financial data.
Option D is incorrect because GDPR applies to citizens of the European Union, not the United States.

128
Q

The company for which you work is expanding from North America to set up operations in Europe, starting with Germany and the Netherlands. The company offers online services that collect data on users. With what regulation must your company comply?
A. HIPAA/HITECH
B. SOX
C. COPPA
D. GDPR

A

The correct answer is D. The service will collect personal information from citizens of the European Union, so GDPR applies.
Option A is incorrect because HIPAA and HITECH apply to protected healthcare data.
Option B is incorrect because SOX applies to financial data.
Option C is incorrect, as it applies to children in the United States.

129
Q

Enterprise Self-Storage Systems is a company that recently acquired a startup software company that provides applications for small and midsize self-storage companies. The company is concerned that the business strategy of the acquiring company is not aligned with the software development practices of the software development teams of the acquired company. What IT framework would you recommend the company follow to better align business strategy with software development?
A. ITIL
B. TOGAF
C. Porter's Five Forces Model
D. Ansoff Matrix

A

The correct answer is A. ITIL is a framework for aligning business and IT strategies and practices.
Option B is incorrect because TOGAF is an enterprise architecture framework.
Option C is incorrect because Porter's Five Forces Model is used to assess competitiveness.
Option D is incorrect because the Ansoff Matrix is used to summarize growth strategies.

130
Q

As an SRE, you are assigned to support several applications. In the past, these applications have had significant reliability problems. You would like to understand the performance characteristics of the applications, so you create a set of dashboards. What kind of data would you display on those dashboards?
A. Metrics and time-series data measuring key performance attributes, such as CPU utilization
B. Detailed log data from syslog
C. Error messages output from each application
D. Results from the latest acceptance tests

A

The correct answer is A. If the goal is to understand performance characteristics, then metrics, particularly time-series data, will show the values of key measurements associated with performance, such as utilization of key resources.
Option B is incorrect because detailed log data describes significant events but does not necessarily convey resource utilization or other performance-related data.
Option C is incorrect because errors are types of events that indicate a problem but are not helpful for understanding normal, baseline operations.
Option D is incorrect because acceptance tests measure how well a system meets business requirements but do not provide point-in-time performance information.

131
Q

After determining the optimal combination of CPU and memory resources for nodes in a Kubernetes cluster, you want to be notified whenever CPU utilization exceeds 85 percent for 5 minutes or when memory utilization exceeds 90 percent for 1 minute. What would you have to specify to receive such notifications?
A. An alerting condition
B. An alerting policy
C. A logging message specification
D. An acceptance test

A

The correct answer is B. Alerting policies are sets of conditions, notification specifications, and selection criteria for determining resources to monitor.
Option A is incorrect because one or more conditions are necessary but not sufficient.
Option C is incorrect because a log message specification describes the content written to a log when an event occurs.
Option D is incorrect because acceptance tests are used to assess how well a system meets business requirements; they are not related to alerting.
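
To make the distinction concrete, the alerting policy is the object that bundles the conditions together with notification channels. A hedged sketch of the policy described in the question, written as a Python dictionary that mirrors the Cloud Monitoring alerting policy structure (the metric filters and channel name are placeholders):

    # Hedged sketch of an alerting policy; conditions alone are not enough.
    alert_policy = {
        "displayName": "GKE node CPU and memory pressure",
        "combiner": "OR",
        "conditions": [
            {
                "displayName": "CPU utilization above 85% for 5 minutes",
                "conditionThreshold": {
                    "filter": 'metric.type="compute.googleapis.com/instance/cpu/utilization"',
                    "comparison": "COMPARISON_GT",
                    "thresholdValue": 0.85,
                    "duration": "300s",
                },
            },
            {
                "displayName": "Memory utilization above 90% for 1 minute",
                "conditionThreshold": {
                    "filter": 'metric.type="agent.googleapis.com/memory/percent_used"',
                    "comparison": "COMPARISON_GT",
                    "thresholdValue": 90,
                    "duration": "60s",
                },
            },
        ],
        "notificationChannels": ["projects/my-project/notificationChannels/123"],  # placeholder
    }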

132
Q

A compliance review team is seeking information about how your team handles high-risk administration operations, such as granting operating system users root privileges. Where could you find data that shows your team tracks changes to user privileges?
A. In metric time-series data
B. In alerting conditions
C. In audit logs
D. In ad hoc notes kept by system administrators

A

The correct answer is C. Audit logs would contain information about changes to user privileges, especially privilege escalations such as granting root or administrative access.
Options A and B are incorrect, as neither records detailed information about access control changes.
Option D may have some information about user privilege changes, but notes may be changed and otherwise tampered with, so on their own they are insufficient sources of information for compliance review purposes.

133
Q

Release management practices contribute to improving reliability by which one of the following?
A. Advocating for object-oriented programming practices
B. Enforcing waterfall methodologies
C. Improving the speed and reducing the cost of deploying code
D. Reducing the use of stateful services

A

The correct option is C. Release management practices reduce manual effort to deploy code. This allows developers to roll out code more frequently and in smaller units and, if necessary, quickly roll back problematic releases.
Option A is incorrect because release management is not related to programming paradigms.
Option B is incorrect because release management does not require waterfall methodologies.
Option D is incorrect. Release management does not influence the use of stateful or stateless services.

134
Q

A team of software engineers is using release management practices. They want developers to check code into the central team code repository several times during the day. The team also wants to make sure that the code that is checked is functioning as expected before building the entire application. What kind of tests should the team run before attempting to build the application?
A. Unit tests
B. Stress tests
C. Acceptance tests
D. Compliance tests

A

The correct answer is A. These are tests that check the smallest testable unit of code. These tests should be run before any attempt to build a new version of an application.
Option B is incorrect because a stress test could be run on the unit of code, but it is more than what is necessary to test if the application should be built.
Option C is incorrect because acceptance tests are used to confirm that business requirements are met; a build that only partially meets business requirements is still useful for developers to create.
Option D is incorrect because "compliance tests" is a fictitious term and not an actual class of tests used in release management.
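
A unit test exercises the smallest testable piece of code in isolation and runs quickly enough to gate every build. A minimal example using Python's built-in unittest module; the function under test is hypothetical:

    # Minimal unit tests for a hypothetical pricing helper, run before any build.
    import unittest

    def apply_discount(price, percent):
        """Return the price reduced by the given percentage, rounded to cents."""
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_discount(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)

        def test_zero_discount_returns_original_price(self):
            self.assertEqual(apply_discount(59.99, 0), 59.99)

    if __name__ == "__main__":
        unittest.main()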

135
Q

Developers have just deployed a code change to production. They are not routing any traffic to the new deployment yet, but they are about to send a small amount of traffic to servers running the new version of code. What kind of deployment are they using?
A. Blue/Green deployment
B. Before/After deployment
C. Canary deployment
D. Stress deployment

A

The correct answer is C. This is a canary deployment.
Option A is incorrect because Blue/Green deployment uses two fully functional environments and all traffic is routed to one of those environments at a time. Options B and D are incorrect because they are not actual names of deployment types.

136
Q

You have been hired to consult with an enterprise software development that is starting to adopt Agile and DevOps practices. The developers would like advice on tools that they can use to help them collaborate on software development in the Google Cloud. What version control software might you recommend?
A. Jenkins and Cloud Source Repositories
B. Syslog and Cloud Build
C. GitHub and Cloud Build
D. GitHub and Cloud Source Repositories

A

The correct answer is D. GitHub and Cloud Source Repositories are version control systems.
Option A is incorrect because Jenkins is a CI/CD tool, not a version control system.
Option B is incorrect because neither Syslog nor Cloud Build is a version control system.
Option C is incorrect because Cloud Build is not a version control system.

137
Q

A startup offers a software-as-a-service solution for enterprise customers. Many of the components of the service are stateful, and the system has not been designed to allow incremental rollout of new code. The entire environment has to be running the same version of the deployed code. What deployment strategy should they use?
A. Rolling deployment
B. Canary deployment
C. Stress deployment
D. Blue/Green deployment

A

The correct answer is D. A Blue/Green deployment is the kind of deployment that allows developers to deploy new code to an entire environment before switching traffic to it. Options A and B are incorrect because they are incremental deployment strategies.
Option C is not an actual deployment strategy.

138
Q

A service is experiencing unexpectedly high volumes of traffic. Some components of the system are able to keep up with the workload, but others are unable to process the volume of requests. These services are returning a large number of internal server errors. Developers need to release a patch as soon as possible that provides some relief for an overloaded relational database service. Both memory and CPU utilization are near 100 percent. Horizontally scaling the relational database is not an option, and vertically scaling the database would require too much downtime. What strategy would be the fastest to implement?
A. Shed load
B. Increase connection pool size in the database
C. Partition the workload
D. Store data in a Pub/Sub topic

A

The correct option is A. The developers should create a patch to shed load.
Option B would not solve the problem, since more connections would allow more clients to connect to the database, but CPU and memory are saturated, so no additional work can be done.
Option C could be part of a long-term architecture change, but it could not be implemented quickly.
Option D could also be part of a longer-term solution: buffering requests in a Cloud Pub/Sub topic would let the database process them at a rate allowed by its available resources, but it could not be implemented as quickly as shedding load.
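
A load-shedding patch can be as simple as refusing work beyond a fixed concurrency budget so that the database is never asked to do more than it can finish. A minimal, framework-agnostic sketch; the budget and handler are hypothetical:

    # Minimal load-shedding sketch: reject requests beyond a concurrency budget
    # instead of letting them pile up on a saturated database.
    import threading

    MAX_IN_FLIGHT = 50                       # hypothetical budget sized to DB capacity
    _slots = threading.Semaphore(MAX_IN_FLIGHT)

    def handle_request(query_fn):
        if not _slots.acquire(blocking=False):
            # Shed the request immediately; callers receive a retryable error.
            return {"status": 503, "body": "overloaded, retry later"}
        try:
            return {"status": 200, "body": query_fn()}
        finally:
            _slots.release()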

139
Q

A service has detected that a downstream process is returning a large number of errors. The service automatically slows down the number of messages it sends to the downstream process. This is an example of what kind of strategy?
A. Load shedding
B. Upstream throttling
C. Rebalancing
D. Partitioning

A

The correct answer is B. This is an example of upstream or client throttling.
Option A is incorrect because load is not shed; rather, it is just delayed.
Option C is incorrect. There is no rebalancing of load, such as might be done on a Kafka topic.
Option D is incorrect. There is no mention of partitioning data.
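
A client-side (upstream) throttle typically watches the downstream error rate and widens the delay between sends while errors persist. A small sketch of the idea; the send function and delay bounds are hypothetical:

    # Sketch of upstream throttling: the sender slows down as downstream errors rise.
    import time

    def send_with_throttle(messages, send_fn, base_delay=0.01, max_delay=5.0):
        delay = base_delay
        for msg in messages:
            ok = send_fn(msg)                        # hypothetical downstream call
            if ok:
                delay = max(base_delay, delay / 2)   # recover speed gradually
            else:
                delay = min(max_delay, delay * 2)    # back off while errors continue
            time.sleep(delay)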

140
Q

A team of early career software engineers has been paired with an architect to work on a new software development project. The engineers are anxious to get started coding, but the architect objects to that course of action because there has been insufficient work prior to development. What steps should be completed before beginning development according to SDLC?
A. Business continuity planning
B. Analysis and design
C. Analysis and testing
D. Analysis and documentation

A

The correct answer is B. Analysis defines the scope of the problem and assesses options for solving it. Design produces the high-level and detailed plans that guide development.
Option A is incorrect, as business continuity planning is not required before development, though it can occur alongside development.
Option C is incorrect because testing occurs after software is developed. Similarly, option D is incorrect because documentation comes after development as well.

141
Q

In an analysis meeting, a business executive asks about research into COTS. What is this executive asking about?
A. Research related to deciding to build versus buying a solution
B. Research about a Java object relational mapper
C. A disaster planning protocol
D. Research related to continuous operations through storms (COTS), a business continuity practice

A

The correct answer is A. COTS stands for commercial off-the-shelf, so the question is about research related to the question of buy versus build.
Option B is incorrect, as COTS is not an ORM. Options C and D are both incorrect. COTS is not about business continuity or disaster recovery.

142
Q

Business decision-makers have created a budget for software development over the next three months. There are more projects proposed than can be funded. What measure might the decision-makers use to choose projects to fund?
A. Mean time between failures (MTBF)
B. Recovery time objectives (RTO)
C. Return on investment (ROI)
D. Marginal cost displacement

A

Option C is correct. ROI is a measure used to compare the relative value of different investments.
Option A is a measure of reliability and availability.
Option B is a requirement related to disaster recovery.
Option D is a fictitious measure.

143
Q

A team of developers is working on a backend service to implement a new business process. They are debating whether to use arrays, lists, or hash maps. In what stage of the SDLC are these developers at present?
A. Analysis
B. High-level design
C. Detailed design
D. Maintenance

A

The correct answer is C because questions about data structure choices are not usually addressed until the detailed design stage.
Option A is incorrect, as analysis is about scoping a problem and choosing a solution approach.
Option B is incorrect because high-level design is dedicated to identifying subcomponents and how they function together.
Option D is incorrect because the maintenance phase is about keeping software functioning.

144
Q

An engineer is on call for any service-related issues with a service. In the middle of the night, the engineer receives a notification that a set of APIs is returning HTTP 500 error codes to most requests. What kind of documentation would the engineer turn to first?
A. Design documentation
B. User documentation
C. Operations documentation
D. Developer documentation

A

The correct answer is C. In the middle of the night the primary goal is to get the service functioning properly. Operations documentation, like runbooks, provides guidance on how to start services and correct problems.
Option A is incorrect because design documentation may describe why design decisions were made; it does not contain distilled information about running the service.
Option B is incorrect, as user documentation is for customers of the service.
Option D is incorrect because although developer documentation may eventually help the engineer understand the reason why the service failed, it is not the best option for finding specific guidance on getting the service to function normally.

145
Q

As a developer, you write code in your local environment, and after testing it, you commit it or write it to a version control system. From there it is automatically incorporated with the baseline version of code in the repository. What is the process called?
A. Software continuity planning
B. Continuous integration (CI)
C. Continuous development (CD)
D. Software development lifecycle (SDLC)

A

The correct answer is B. This is an example of continuous integration because code is automatically merged with the baseline application code.
Option A is not an actual process.
Option C is not an actual process; "continuous development" should not be confused with continuous delivery or continuous deployment.
Option D is incorrect because the software development life cycle includes continuous integration and much more.

146
Q

As a consulting architect, you have been asked to help improve the reliability of a distributed system with a large number of custom microservices and dependencies on third-party APIs running in a hybrid cloud architecture. You have decided that at this level of complexity, you can learn more by experimenting with the system than by studying documents and code listings. So, you start by randomly shutting down servers and simulating network partitions. This is an example of what practice?
A. Irresponsible behavior
B. Integration testing
C. Load testing
D. Chaos engineering

A

The correct answer is D. This is an example of chaos engineering. Netflix’s Simian Army is a collection of tools that support chaos engineering.
Option A is incorrect because this is a reasonable approach to improving reliability, assuming that the practice is transparent and coordinated with others responsible for the system.
Option B is incorrect. This is not a test to ensure that components work together. It is an experiment to see what happens when some components do not work.
Option C is incorrect. This is not a test of the system's ability to process increasingly demanding workloads, which is the purpose of load testing.

147
Q

There has been a security breach at your company. A malicious actor outside of your company has gained access to one of your services and was able to capture data that was passed into the service from clients. Analysis of the incident finds that a developer included a private key in a configuration file that was uploaded to a version control repository. The repository is protected by several defensive measures, including role-based access controls and network-level controls that require VPN access to reach the repository. As part of backup procedures, the repository is backed up to a cloud storage service. The folder that stores the backup was mistakenly granted public access privileges for up to three weeks before the error was detected and corrected. During the post-mortem analysis of this incident, one of the objectives should be to
A. Identify the developer who uploaded the private key to a version control repository. They are responsible for this incident.
B. Identify the system administrator who backed up the repository to an unsecured storage service. They are responsible for this incident.
C. Identify the system administrator who misconfigured the storage system. They are responsible for this incident.
D. Identify ways to better scan code checked into the repository for sensitive information and perform checks on cloud storage systems to identify weak access controls.

A

The correct answer is D. The goal of the post-mortem is to learn how to prevent this kind of incident again (fix the problem, not the blame). Options A, B, and C are all wrong because they focus on blaming a single individual for an incident that occurred because of multiple factors. Also, laying blame does not contribute to finding a solution. In cases where an individual’s negligence or lack of knowledge is a significant contributing factor, then other management processes should be used to address the problem. Post-mortems exist to learn and to correct technical processes.

148
Q

You have just been hired as a cloud architect for a large financial institution with global reach. The company is highly regulated, but it has a reputation for being able to manage IT projects well. What practices would you expect to find in use at the enterprise level that you might not find at a startup?
A. Agile methodologies
B. SDLC
C. ITIL
D. Business continuity planning

A

The correct answer is C. ITIL is a set of enterprise IT practices for managing the full range of IT processes, from planning and development to security and support. Options A and B are likely to be found in all well-run software development teams.
Option D may not be used at many startups, but it should be.

149
Q

A software engineer asks for an explanation of the difference between business continuity planning and DR planning. What would you say is the difference?
A. There is no difference; the terms are synonymous.
B. They are two unrelated practices.
C. DR is a part of business continuity planning, which includes other practices for continuing business operations in the event of an enterprise-level disruption of services.
D. Business continuity planning is a subset of disaster recovery.

A

The correct answer is C. Disaster recovery is a part of business continuity planning. Options A and B are wrong. They are neither the same nor are they unrelated.
Option D is incorrect because it has the relationship backward.

150
Q

In addition to ITIL, there are other enterprise IT process management frameworks. Which other standard might you reference when working on enterprise IT management issues?
A. ISO/IEC 20000
B. Java Coding Standards
C. PEP-8
D. ISO/IEC 27002

A

The correct answer is A. ISO/IEC 20000 is a service management standard. Options B and C are incorrect. They are programming language–specific standards for Java and Python, respectively.
Option D is incorrect. ISO/IEC 27002 is a security standard.

151
Q

A minor problem repeatedly occurs with several instances of an application that causes a slight increase in the rate of errors returned. Users who retry the operation usually succeed on the second or third attempt. By your company’s standards, this is considered a minor incident. Should you investigate this problem?
A. No. The problem is usually resolved when users retry.
B. No. New feature requests are more important.
C. Yes. But only investigate if the engineering manager insists.
D. Yes. Since it is a recurring problem, there may be an underlying bug in code or weakness in the design that should be corrected.

A

The correct answer is D. There may be an underlying bug in code or weakness in the design that should be corrected. Options A and B are incorrect because it should be addressed, since it adversely impacts customers.
Option C is incorrect because software engineers and architects can recognize a customer-impacting flaw and correct it.

152
Q

A CTO of a midsize company hires you to consult on the company’s IT practices. During preliminary interviews, you realize that the company does not have a business continuity plan. What would you recommend they develop first with regard to business continuity?
A. Recovery time objectives (RTO)
B. An insurance plan
C. A disaster plan
D. A service management plan

A

The correct answer is C. A disaster plan documents a strategy for responding to a disaster. It includes information such as where operations will be established, which services are the highest priority, what personnel are considered vital to recovery operations, as well as plans for dealing with insurance carriers and maintaining relationships with suppliers and customers.
Option A is incorrect. Recovery time objectives cannot be set until the details of the recovery plan are determined.
Option B is incorrect because you cannot decide what risk to transfer to an insurance company before understanding what the risks and recovery objectives are.
Option D is incorrect. A service management plan is part of an enterprise IT process structure.

153
Q

A developer codes a new algorithm and tests it locally. They then check the code into the team’s version control repository. This triggers an automatic set of unit and integration tests. The code passes, and it is integrated into the baseline code and included in the next build. The build is released and runs as expected for 30 minutes. A sudden spike in traffic causes the new code to generate a large number of errors. What might the team decide to do after the post-mortem analysis of this incident?
A. Fire the developer who wrote the algorithm.
B. Have at least two engineers review all of the code before it is released.
C. Perform stress tests on changes to code that may be sensitive to changes in load.
D. Ask the engineering manager to provide additional training to the engineer who revised the algorithm.

A

The correct answer is C.
Option A is not correct because blaming engineers and immediately imposing severe consequences is counterproductive. It will tend to foster an environment that is not compatible with agile development practices.
Option B is incorrect because this could be highly costly in terms of engineers’ time, and it is unlikely to find subtle bugs related to the complex interaction of multiple components in a distributed system.
Option D is incorrect because, while additional training may be part of the solution, that is for the manager to decide. Post-mortems should be blameless, and suggesting that someone be specifically targeted for additional training in a post-mortem implies some level of blame.

154
Q

Your company’s services are experiencing a high level of errors. Data ingest rates are dropping rapidly. Your data center is located in an area prone to hurricanes, and these events are occurring during peak hurricane season. What criteria do you use to decide to invoke your disaster recovery plan?
A. When your engineering manager says to invoke the disaster recovery plan
B. When the business owner of the service says to invoke the disaster recovery plan
C. When the disaster plan criteria for invoking the disaster recovery plan are met
D. When the engineer on call says to invoke the disaster recovery plan

A

The correct answer is C. The criteria for determining when to invoke the disaster recovery plan should be defined before a team might have to deal with a disaster. Options A, B, and D are all incorrect because the decision should not be left to the sole discretion of an individual manager, service owner, or engineer. A company policy should be in place for determining when to invoke a DR plan.

155
Q

You have been asked to help with a new project kickoff. The project manager has invited engineers and managers from teams directly working on the project. They have also invited members of teams that might use the service to be built by the project. What is the motivation of the project manager for inviting these various participants?
A. To communicate with stakeholders
B. To meet compliance requirements
C. To practice good cost control measures
D. To solicit advice on building team skills

A

The correct answer is A. Each of the individuals invited to the meeting has an interest in the project.
Option B is incorrect since there is no mention of compliance requirements and regulations do not typically dictate meeting structures. Options C and D are incorrect, as there is no discussion of cost or skill building.

156
Q

A junior engineer asks you to explain some terms often used in meetings. In particular, the engineer wants to know the difference between a project and a program. How would you explain the difference?
A. There is no difference; the two terms are used interchangeably.
B. A project is part of a program, and programs span multiple departments; both exist to execute organizational strategy.
C. A program is part of a project, and projects span multiple departments; both exist to execute organizational strategy.
D. A project is used only to describe software development efforts, while a program can refer to any company initiative.

A

Option B is correct. A project is part of a program, and programs span multiple departments; both exist to execute organizational strategy.
Option A is incorrect because the words do mean different things.
Option C is incorrect because programs are not part of projects.
Option D is incorrect because projects do not refer only to software engineering efforts.

157
Q

An architect writes a post for an internal blog describing the pros and cons of two approaches to improving the reliability of a widely used service. This is an example of what stage of stakeholder management?
A. Identifying stakeholders
B. Determining their roles and scope of interests
C. Developing a communications plan
D. Communicating with and influencing stakeholders

A

The correct answer is D. This is an example of communicating with stakeholders and influencing their opinions about options.
Option A is incorrect, as the stakeholders are not identified here.
Option B is incorrect because there is no discussion of individuals’ roles and scope of interest.
Option C is incorrect because the architect did not publish a plan.

158
Q

Your company provides a SaaS product used by mobile app developers to capture and analyze log messages from mobile devices in real time. Another company begins to offer a similar service but includes alerting based on metrics as well as log messages. This prompts the executives to change strategy from developing additional log analysis features to developing alerting features. This is an example of a change prompted by which one of the following?
A. Individual choice
B. Competition
C. Skills gap
D. Unexpected economic factors

A

The correct answer is B. This is a change because of the introduction of a competitive product with more features.
Option A is incorrect. This is not a change prompted by the actions of an individual, such as someone leaving the company.
Option C is incorrect because a skills gap did not trigger the change, although there may be a skills gap on the team that now has to implement alerting.
Option D is incorrect. There is no mention of economic factors, such as a recession.

159
Q

In May 2018, the EU began enforcement of a new privacy regulation known as the GDPR. This required many companies to change how they manage personal information about citizens of the EU. This is an example of what kind of change?
A. Individual choice
B. Competition
C. Skills gap
D. Regulation

A

The correct answer is D. The changes were prompted by a new regulation.
Option A is incorrect. This is not a change prompted by the actions of an individual, such as someone leaving the company.
Option B is incorrect, as there is no mention of competitive pressures.
Option C is incorrect. A skills gap did not trigger the change, although there may be a skills gap on the team that now has to implement the changes required for compliance.

160
Q

A program manager asks for your advice on managing change in projects. The program manager is concerned that there are multiple changes underway simultaneously, and it is difficult to understand the impact of these changes. What would you suggest as an approach to managing this change?
A. Stop making changes until the program manager can understand their potential impacts.
B. Communicate more frequently with stakeholders.
C. Implement a Plan-Do-Study-Act methodology.
D. Implement cost control measures to limit the impact of simultaneous changes.

A

The correct option is C. The program manager should use a change management methodology to control and better understand changes.
Option A is incorrect. A program manager may not be able to stop some changes, such as changes due to regulatory changes, without adverse consequences.
Option B is incorrect because it does not solve the problem presented but may be part of a solution that includes using a change management strategy.
Option D is incorrect, as cost controls will not help the program manager understand the impact of changes.

161
Q

A company for whom you consult is concerned about the potential for startups to disrupt its industry. The company has asked for your help implementing new services using IoT, cloud computing, and AI. There is a high risk that this initiative will fail. This is an example of which one of the following?
A. Typical change management issues
B. A digital transformation initiative
C. A project in response to a competitor’s product
D. A cost management initiative

A

The correct answer is B. This is an example of a digital transformation initiative that is attempting fundamental changes in the way that the company delivers value to its customers.
Option A is incorrect. This is not a typical change management issue because it involves the entire enterprise introducing multiple new technologies.
Option C is incorrect. The scope of this initiative is in response to more than a single competitor.
Option D is incorrect. This is not a cost management initiative.

162
Q

You and another architect in your company are evaluating the skills possessed by members of several software development teams. This exercise was prompted by a new program to expand the ways that customers can interact with the company. This will require a significant amount of mobile development. This kind of evaluation is an example of which part of team skill management?
A. Defining skills needed to execute programs and projects defined by organizational strategy
B. Identifying skill gaps on a team or in an organization
C. Working with managers to develop plans to develop skills of individual contributors
D. Helping recruit and retain people with the skills needed by the team

A

The correct answer is B. This exercise is an attempt to identify a skills gap, in this case a gap in mobile development skills.
Option A is incorrect. This is not about defining skills needed, as that has already been done.
Option C is incorrect because it is premature to develop a plan until the gaps are understood.
Option D is incorrect because there is no mention of hiring additional engineers.

163
Q

You and an engineering manager in your company are creating a schedule of training courses for engineers to learn mobile development skills. This kind of planning is an example of which part of team skill management?
A. Defining skills needed to execute programs and projects defined by organizational strategy
B. Identifying skill gaps on a team or in an organization
C. Working with managers to develop plans to develop skills of individual contributors
D. Helping recruit and retain people with the skills needed by the team

A

The correct answer is C. This is an example of developing the skills of individual contributors.
Option A is incorrect. This is not about defining skills needed.
Option B is incorrect. This is not about identifying skills gaps, as that has already been done.
Option D is incorrect because it does not entail recruiting.

164
Q

After training engineers on the latest mobile development tools and techniques, managers determine that the teams do not have a sufficient number of engineers to complete software development projects in the time planned. The managers ask for your assistance in writing job advertisements reaching out to your social network. These activities are an example of which part of team skill management?
A. Defining skills needed to execute programs and projects defined by organization strategy
B. Identifying skill gaps on a team or in an organization
C. Working with managers to develop plans to develop skills of individual contributors
D. Helping recruit and retain people with the skills needed by the team

A

The correct answer is D. This is an example of recruiting.
Option A is incorrect, as this is not about defining skills needed.
Option B is incorrect. This is not about identifying skills gaps, as that has already been done.
Option C is incorrect because it does not entail planning training and skill development.

165
Q

A team of consultants from your company is working with a customer to deploy a new offering that uses several services that your company provides. They are making design decisions about how to implement authentication and authorization and want to discuss options with an architect. This is an example of which aspect of customer success management?
A. Customer acquisition
B. Marketing and sales
C. Professional services
D. Training and support

A

The correct answer is C. This is an example of professional services because it involves custom support and development for customers.
Option A is incorrect because the customer is already acquired.
Option B is incorrect because there is no marketing or sales involved.
Option D is incorrect because this is a consulting engagement and not a training activity.

166
Q

Customers are noticing delays in receiving messages from an alerting service that your company provides. They call your company and provide details that are logged into a central database and reviewed by engineers who are troubleshooting the problem. This is an example of which aspect of customer success management?
A. Customer acquisition
B. Marketing and sales
C. Professional services
D. Training and support

A

The correct answer is D. This is an example of training and support because logging and troubleshooting customer-reported problems are support activities.
Option A is incorrect because the customer is already acquired.
Option B is incorrect because there is no marketing or sales involved.
Option C is incorrect because this is not a consulting engagement.

167
Q

As an architect, you have been invited to attend a trade conference in your field of expertise. In addition to presenting at the conference, you will spend time at your company’s booth in the exhibit hall, where you will discuss your company’s products with conference attendees. This is an example of what aspect of customer success management?
A. Customer acquisition
B. Marketing and sales
C. Professional services
D. Training and support

A

The correct answer is B. This is an example of marketing and sales because the booth is a marketing activity.
Option A is incorrect because customers are rarely acquired at trade shows. The marketing activities at a trade show may lead to customer acquisition at a later date, however.
Option C is incorrect because this is not a consulting engagement.
Option D is incorrect because this does not involve training and support activities.

168
Q

A group of executives has invited you to a meeting to represent architects in a discussion about identifying projects and programs that require funding and prioritizing those efforts based on the company’s strategy and needs. This is an example of what aspect of cost management?
A. Resource planning
B. Cost estimating
C. Cost budgeting
D. Cost control

A

The correct answer is A. This is an example of resource planning because it involves prioritizing projects and programs. Options B and C are incorrect because there is no cost estimating or budgeting done in the meeting.
Option D is incorrect because it does not involve expenditure approvals or reporting.

169
Q

An engineer has been tasked with creating reports to help managers track spending. This is an example of what aspect of cost management?
A. Resource planning
B. Cost estimating
C. Cost budgeting
D. Cost control

A

The correct answer is D. This effort involves reporting on expenditures.
Option A is incorrect because there is no review of proposed projects or discussion of priorities. Options B and C are incorrect because there is no cost estimating or budgeting involved in this task.

170
Q

A team of developers is tasked with developing an enterprise application. They have interviewed stakeholders and collected requirements. They are now designing the system and plan to begin implementation next. After implementation, they will verify that the application meets specifications. They will not revise the design once coding starts. What application development methodology is this team using?
A. Extreme programming
B. Agile methodology
C. Waterfall methodology
D. Spiral methodology

A

The correct answer is C. This is an example of waterfall methodology because each stage of the software development life cycle is performed once and never revisited.
Option A is incorrect. Extreme programming is a type of agile methodology.
Option B is incorrect because there is no tight collaboration, rapid development and deployment, and frequent testing.
Option D is incorrect because the steps of the software development life cycle are not repeated with each iteration focused on defining a subset of work and identifying risks.

171
Q

A team of developers is tasked with developing an enterprise application. They have interviewed stakeholders and set a scope of work that will deliver a subset of the functionality needed. Developers and stakeholders have identified risks and ways of mitigating them. They then proceed to gather requirements for the subset of functionalities to be implemented. That is followed by design, implementation, and testing. There is no collaboration between developers and stakeholders until after testing, when developers review results with stakeholders and plan the next iteration of development. What application development methodology is this team using?
A. Extreme programming
B. Agile methodology
C. Waterfall methodology
D. Spiral methodology

A

The correct answer is D. This is an example of spiral methodology because each stage of the software development life cycle is repeated in a cyclical manner, and each iteration begins with scoping work and identifying risks.
Option A is incorrect. Extreme programming is a type of agile methodology.
Option B is incorrect because there is no tight collaboration, rapid development and deployment, and frequent testing.
Option C is incorrect because the steps of the software development life cycle are repeated.

172
Q

A team of developers is tasked with developing an enterprise application. They meet daily with stakeholders to discuss the state of the project. The developers and stakeholders have identified a set of functionalities to be implemented over the next two weeks. After some design work, coding begins. A new requirement is discovered, and developers and stakeholders agree to prioritize implementing a feature to address this newly discovered requirement. As developers complete small functional units of code, they test it. If the code passes the tests, the code unit is integrated with the version-controlled codebase. What application development methodology is this team using?
A. Continuous integration
B. Agile methodology
C. Waterfall methodology
D. Spiral methodology

A

The correct answer is B. This is an example of an agile methodology because developers and stakeholders work closely together, development is done in small units of work that include frequent testing and release, and the team is able to adapt to changes in requirements without following a rigid linear or cyclical process.
Option A is incorrect. Continuous integration is not an application development methodology.
Option C is incorrect because this is not a rigid, linear process; the team revisits requirements and adapts to changes rather than completing each stage only once.
Option D is incorrect because the steps of the software development life cycle are not repeated with each iteration focused on defining a subset of work and identifying risks.

173
Q

You are a developer at a startup company that is planning to release its first version of a new mobile service. You have discovered a design flaw that generates and sends more data to mobile devices than is needed. This is increasing the latency of messages between mobile devices and backend services running in the cloud. Correcting the design flaw will delay the release of the service by at least two weeks. You decide to address the long latency problem by coding a workaround that does not send the unnecessary data. The design flaw is still there and is generating unnecessary data, but the service can ship under these conditions. This is an example of what?
A. Incurring technical debt
B. Paying down technical debt
C. Shifting risk
D. Improving security

A

The correct answer is A. You are incurring technical debt by making a suboptimal design and coding choice in order to meet other requirements or constraints. The code will need to be refactored in the future.
Option B is incorrect. This is not an example of refactoring suboptimal code.
Option C is incorrect, as there is no shifting or transferring of risk.
Option D is incorrect. There is no mention that this change would improve the confidentiality, integrity, or availability of the service.

174
Q

You are a developer at a startup company that has just released a new service. During development, you made suboptimal coding choices to keep the project on schedule. You are now planning your next two weeks of work, which you decide will include implementing a feature the product manager wanted in the initial release but was postponed to a release occurring soon after the initial release. You also include time to refactor code that was introduced to correct a bug found immediately before the planned release date. That code blocks the worst impact of the bug, but it does not correct the flaw. Revising that suboptimal code is an example of what?
A. Incurring technical debt
B. Paying down technical debt
C. Shifting risk
D. Improving security

A

The correct answer is B. You are paying down technical debt by changing suboptimal code that was intentionally used to mitigate but not correct a bug.
Option A is incorrect. This is not an example of incurring technical debt because you are not introducing suboptimal code in order to meet other requirements or constraints.
Option C is incorrect. There is no shifting or transferring of risk.
Option D is incorrect. There is no mention that this change would improve the confidentiality, integrity, or availability of the service.

175
Q

As a developer of a backend service for managing inventory, your manager has asked you to include a basic API for the inventory service. You plan to follow best-practice recommendations. What is the minimal set of API functions that you would include?
A. Create, read, update, and delete
B. List, get, create, update, and delete
C. Create, delete, and list
D. Create and delete

A

The correct answer is B. The standard API operations are list, get, create, update, and delete. Options A, C, and D are incorrect because they are all missing at least one of the standard functions.
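
As a concrete sketch of those five operations, here is a minimal in-memory inventory API using Flask; the resource name and fields are hypothetical, and persistence and authentication are omitted for brevity:

    # Minimal sketch of the standard list, get, create, update, and delete operations.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    items, next_id = {}, 1

    @app.route("/items", methods=["GET"])                    # list
    def list_items():
        return jsonify(list(items.values()))

    @app.route("/items/<int:item_id>", methods=["GET"])      # get
    def get_item(item_id):
        if item_id not in items:
            abort(404)
        return jsonify(items[item_id])

    @app.route("/items", methods=["POST"])                   # create
    def create_item():
        global next_id
        item = {"id": next_id, **(request.get_json(silent=True) or {})}
        items[next_id] = item
        next_id += 1
        return jsonify(item), 201

    @app.route("/items/<int:item_id>", methods=["PUT"])      # update
    def update_item(item_id):
        if item_id not in items:
            abort(404)
        items[item_id] = {"id": item_id, **(request.get_json(silent=True) or {})}
        return jsonify(items[item_id])

    @app.route("/items/<int:item_id>", methods=["DELETE"])   # delete
    def delete_item(item_id):
        items.pop(item_id, None)
        return "", 204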

176
Q

A junior developer asks your advice about handling errors in API functions. The developer wants to know what kind of data and information should be in an API error message. What would you recommend?
A. Return HTTP status 200 with additional error details in the payload.
B. Return a status code from the standard 400s and 500s HTTP status codes, with no additional error details in the response body.
C. Return error details in the payload, and do not return a code.
D. Define your own set of application-specific error codes.

A

The correct answer is B. The API should return a standard status code used for errors, that is, a code from the 400s or 500s, and no other details, in order to reduce the exposure of information that could pose a security risk.
Option A is incorrect. 200 is the standard HTTP success code.
Option C is incorrect because it does not return a standard error code.
Option D is incorrect because HTTP APIs should follow broadly accepted conventions so that users of the API can process standard error messages and not have to learn application-specific error messages.
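
In practice this often amounts to registering generic error handlers that return the standard code and nothing the caller does not need. A short Flask sketch of that approach:

    # Sketch of returning standard HTTP error codes with terse, generic bodies.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.errorhandler(404)
    def not_found(_error):
        # Standard 4xx code; no stack traces or internal details in the body.
        return jsonify({"error": "resource not found"}), 404

    @app.errorhandler(500)
    def internal_error(_error):
        return jsonify({"error": "internal error"}), 500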

177
Q

A junior developer asks your advice about verifying authorizations in API functions. The developer wants to know how they can allow users of the API to make assertions about what they are authorized to do. What would you recommend?
A. Use JSON Web Tokens (JWTs)
B. Use API keys
C. Use encryption
D. Use HTTPS instead of HTTP

A

The correct answer is A. JWTs are a standard way to make assertions securely.
Option B is incorrect. API keys can be used for authentication, but they do not carry assertions.
Option C is incorrect. Encryption does not specify authentication information.
Option D is incorrect. HTTPS does not provide for assertions.
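
A small sketch of verifying a JWT and reading the caller's asserted claims, assuming the third-party PyJWT package and a shared signing secret (both placeholders; production systems commonly use asymmetric keys instead):

    # Hedged sketch: verify a JWT's signature and read the roles it asserts.
    import jwt

    SECRET = "shared-signing-secret"   # placeholder; prefer RS256 with public keys in practice

    token = jwt.encode({"sub": "user-123", "roles": ["inventory.read"]}, SECRET, algorithm="HS256")

    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if "inventory.read" in claims.get("roles", []):
        print("caller may read inventory")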

178
Q

Your startup has released a new online game that includes features that allow users to accumulate points by playing the game. Points can be used to make in-game purchases. You have discovered that some users are using bots to play the game programmatically much faster than humans can play the game. The use of bots is unauthorized in the game. You modify the game API to prevent more than 10 function calls per user, per minute. This is an example of what practice?
A. Encryption
B. Defense in depth
C. Least privileges
D. Resource limiting

A

The correct answer is D. This is an example of resource limiting, specifically rate limiting, because it puts a cap on the number of function calls allowed by a user during a specified period of time.
Option A is incorrect. This is not encryption.
Option B is incorrect because defense in depth requires at least two distinct security controls.
Option C is incorrect. The solution does not limit privileges based on a user’s role. In this case, most users are players. They continue to have the same privileges that they had before resource limiting was put in place.
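
A minimal per-user, fixed-window limiter that captures the 10-calls-per-minute rule might look like the following; it is in-memory only, whereas a real service would keep counters in a shared store:

    # Sketch of a per-user fixed-window rate limit: 10 calls per user per minute.
    import time
    from collections import defaultdict

    LIMIT, WINDOW_SECONDS = 10, 60.0
    _calls = defaultdict(list)    # user_id -> timestamps of recent calls

    def allow_call(user_id):
        now = time.monotonic()
        recent = [t for t in _calls[user_id] if now - t < WINDOW_SECONDS]
        _calls[user_id] = recent
        if len(recent) >= LIMIT:
            return False          # caller should receive HTTP 429 Too Many Requests
        recent.append(now)
        return True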

179
Q

A team of developers is creating a set of tests for a new service. The tests are defined using a set of conditions or input values and expected output values. The tests are then executed by reading the test data source, and for each test the software being tested is executed, and the output is compared to the expected value. What kind of testing framework is this?
A. Data-driven testing
B. Hybrid testing
C. Keyword-driven testing
D. Model-based testing

A

The correct answer is A. This is an example of data-driven testing because the input data and expected output data are stated as part of the test.
Option B is incorrect because this testing approach does not include two or more frameworks.
Option C is incorrect because it does not include a set of detailed instructions for executing the test.
Option D is incorrect. No simulator is used to generate inputs and expected outputs.
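
pytest's parametrize decorator is one common way to express data-driven tests: each row supplies the inputs and the expected output, and the same test body runs once per row. The function under test here is hypothetical:

    # Data-driven test sketch with pytest: rows of inputs and expected outputs.
    import pytest

    def shipping_cost(weight_kg, express):
        """Hypothetical function under test."""
        return weight_kg * (4.0 if express else 2.0)

    @pytest.mark.parametrize(
        "weight_kg, express, expected",
        [
            (1.0, False, 2.0),
            (1.0, True, 4.0),
            (2.5, False, 5.0),
        ],
    )
    def test_shipping_cost(weight_kg, express, expected):
        assert shipping_cost(weight_kg, express) == expected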

180
Q

Your company is moving an enterprise application to Google Cloud. The application runs on a cluster of virtual machines, and workloads are distributed by a load balancer. Your team considered revising the application to use containers and the Kubernetes Engine, but they decide not to make any unnecessary changes before moving the application to the cloud. This is an example of what migration strategy?
A. Lift and shift
B. Move and improve
C. Rebuild in the cloud
D. End of life

A

The correct answer is A. This is a lift-and-shift migration because only required changes are made to move the application to the cloud. Options B and C are incorrect because there is no new development in this migration.
Option D is not a valid type of migration strategy.

181
Q

As a consultant to an insurance company migrating to the Google Cloud, you have been asked to lead the effort to migrate data from AWS S3 to Cloud Storage. Which transfer method would you consider first?
A. Google Transfer Service
B. gsutil command line
C. Google Transfer Appliance
D. Cloud Dataproc

A

The correct answer is A. The Google Transfer Service executes jobs that specify source and target locations. It is the recommended method for transferring data from other clouds.
Option B could be used, but it is not the recommended practice, so it should not be the first option considered.
Option C is incorrect. The Google Transfer Appliance has to be installed in your data center, so it is not an option for migrating data from another public cloud.
Option D is incorrect. Cloud Dataproc is a managed Hadoop and Spark service. It is not used for data migrations.

182
Q

You are a consultant to an insurance company migrating to GCP. Five petabytes of business-sensitive data need to be transferred from the on-premises data center to Cloud Storage. You have a 10 Gbps network connection between the on-premises data center and Google Cloud. What transfer option would you recommend?
A. gsutil
B. gcloud
C. Cloud Transfer Appliance
D. Cloud Transfer Service

A

The correct answer is C. The Cloud Transfer Appliance should be used. Sending 5 PB over a 10 Gbps network would take approximately two months. Options A and D are not correct because they would use the 10 Gbps network, which would take too long and consume network resources.
Option B is incorrect. gcloud is used to manage many GCP services; it is not used to transfer data from on-premises data centers to Cloud Storage.
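
The "approximately two months" figure follows from simple arithmetic: 5 PB is about 4 x 10^16 bits, and a 10 Gbps link moves at most 10^10 bits per second, so even a fully dedicated link needs roughly 46 days, and a realistically shared link considerably longer. A quick back-of-the-envelope check, with the utilization factor as an assumption:

    # Back-of-the-envelope transfer time for 5 PB over a 10 Gbps link.
    data_bits = 5 * 10**15 * 8       # 5 PB expressed in bits
    link_bps = 10 * 10**9            # 10 Gbps
    utilization = 0.7                # assumed: the link is not fully dedicated

    seconds = data_bits / (link_bps * utilization)
    print(f"{seconds / 86400:.0f} days")   # about 66 days, i.e., roughly two months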

183
Q

You are migrating a data warehouse from an on-premises database to BigQuery. You would like to write a script to perform some of the migration steps. What component of the GCP SDK will you likely need to use to create the new data warehouse in BigQuery?
A. cbt
B. bq
C. gsutil
D. kubectl

A

The correct answer is B. bq is the GCP SDK component used to manage BigQuery.
Option A is incorrect. cbt is used to manage Bigtable.
Option C is incorrect. gsutil is used to work with Cloud Storage.
Option D is incorrect. kubectl is used to work with Kubernetes.
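For example, a minimal sketch of creating the new data warehouse objects with bq; the project, dataset, table, and schema names are placeholder assumptions:
# Create a dataset, then a table with a simple inline schema
bq mk --dataset my_project:sales_dw
bq mk --table my_project:sales_dw.orders \
    order_id:STRING,order_date:DATE,total:NUMERIC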

184
Q

You are setting up a new laptop that is configured with a standard set of tools for developers and architects, including some GCP SDK components. You will be working extensively with the GCP SDK and want to know specifically which components are installed and up-to-date. What command would you run on the laptop?
A. gsutil component list
B. cbt component list
C. gcloud component list
D. bq component list

A

The correct answer is C. gcloud is the utility that manages SDK components.
Option A is incorrect. gsutil is for working with Cloud Storage.
Option B is incorrect. cbt is for working with Bigtable.
Option D is incorrect. bq is used for working with BigQuery.
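For reference, the corresponding SDK commands (note that the installed command group uses the plural, components):
# List installed SDK components and their status, then bring them up-to-date
gcloud components list
gcloud components update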

185
Q

Your midsize company has decided to assess the possibility of moving some or all of its enterprise applications to the cloud. As the CTO, you have been tasked with determining how much it would cost and what the benefits of a cloud migration would be. What would you do first?
A. Take inventory of applications and infrastructure, document dependencies, and identify compliance and licensing issues.
B. Create a request for proposal from cloud vendors.
C. Discuss cloud licensing issues with enterprise software vendors.
D. Interview department leaders to identify their top business pain points.

A

The correct answer is A. Before migrating to the cloud, one of the first steps is understanding your own infrastructure, dependencies, compliance issues, and licensing structure.
Option B is incorrect. Without an understanding of what you want from a cloud vendor, it is not possible to create a request for proposal.
Option C is incorrect. It is too early to discuss licensing if you don’t understand your current licensing situation and what licensing you want to have in the cloud.
Option D is incorrect. Interviewing department leaders is a reasonable thing for a CTO to do, but it is too broad; discussions should instead focus on understanding your infrastructure and workloads so that you can complete the specific task assigned to you, which is determining the cost and benefits of a cloud migration.

186
Q

You are working with a colleague on a cloud migration plan. Your colleague would like to start migrating data. You have completed an assessment but no other preparation work. What would you recommend before migrating data?
A. Migrating applications
B. Conducting a pilot project
C. Migrating all identities and access controls
D. Redesigning relational data models for optimal performance

A

The correct answer is B. Conducting a pilot project will provide an opportunity to learn about the cloud environment.
Option A is incorrect, as applications should be migrated after data.
Option C is incorrect. There is no need to migrate all identities and access controls until you understand how you will define identities, roles, and groups and if you will be integrating an existing identity provider.
Option D is incorrect. There is no reason given that would warrant redesigning a relational database as part of the migration.

187
Q

As the CTO of your company, you are responsible for approving a cloud migration plan for services that include a wide range of data. You are reviewing a proposed plan that includes a data migration plan. Network and security plans are being developed in parallel and are not yet complete. What should you look for as part of the data migration plan?
A. Database configuration details, including IP addresses and port numbers
B. Specific firewall rules to protect databases
C. An assessment of data classifications and regulations relevant to the data to be migrated
D. A detailed description of current backup operations

A

The correct answer is C. You should be looking for recognition that data classifications and relevant regulations need to be considered and addressed.
Option A is incorrect. Database and network administrators will manage database configuration details when additional information on database implementations is known.
Option B is incorrect. It is not necessary to specify specific firewall rules at this stage since network migration issues are still under development.
Option D is incorrect. Current backup operations are not relevant to the migration plan any more than any other routine operational procedures.

188
Q

A client of yours is prioritizing applications to move to the cloud. One system written in Java is a Tier 1 production system that must be available 24/7; it depends on three Tier 2 services that are running on premises, and two other Tier 1 applications depend on it. Which of these factors is least important from a risk assessment perspective?
A. The application is written in Java.
B. The application must be available 24/7.
C. The application depends on three Tier 2 services.
D. Two other Tier 1 applications depend on it.

A

The correct answer is A. Java is a widely used, widely supported language for developing a range of applications, including enterprise applications. There is little risk moving a Java application from an on-premises platform to the cloud. All other options are considerable factors in assessing the risk of moving the application.

189
Q

As part of a cloud migration, you will be migrating a relational database to the cloud. The database has strict SLAs, and it should not be down for more than a few seconds a month. The data stores approximately 500 GB of data, and your network has 100 Gbps bandwidth. What method would you consider first to migrate this database to the cloud?
A. Use a third-party backup and restore application.
B. Use the MySQL data export program and copy the export file to the cloud.
C. Set up a replica of the database in the cloud, synchronize the data, and then switch traffic to the instance in the cloud.
D. Transfer the data using the Google Transfer Appliance.

A

The correct answer is C. Because of the strict SLAs, the database should not be down as long as would be required if a MySQL export were used. Also, the problem statement did not say what kind of relational database it is. Options A and B would leave the database unavailable longer than allowed or needed.
Option D is not needed because of the small data volume, and it would require the database to be down longer than allowed by the SLA.

190
Q

Your company is running several third-party enterprise applications. You are reviewing the licenses and find that they are transferrable to the cloud, so you plan to take advantage of that option. This form of licensing is known as which one of the following?
A. Compliant licensing
B. Bring-your-own-license
C. Pay-as-you-go license
D. Metered pricing

A

The correct answer is B. This is an example of bring-your-own-license.
Option A is a fictitious term. Options C and D both refer to pay based on usage in the cloud.

191
Q

Your company is running several third-party enterprise applications. You are reviewing the licenses and find that they are not transferrable to the cloud. You research your options and see that the vendor offers an option to pay based on your level of use of the application in the cloud. What is this option called?
A. Compliant licensing
B. Bring-your-own-license
C. Pay-as-you-go license
D. Incremental payment licensing

A

The correct answer is C. This is an example of pay-as-you-go licensing. Options A and D are fictitious terms.
Option B is incorrect. You are not using a license that you own in this scenario.

192
Q

You have been asked to brief executives on the networking aspects of the cloud migration. You want to begin at the highest level of abstraction and then drill down into lower-level components. What topic would you start with?
A. Routes
B. Firewalls
C. VPCs
D. VPNs

A

The correct answer is C. VPCs are the highest-level networking abstraction and constitute a collection of network components. Options A, B, and D are incorrect because they are lower-level components.

193
Q

You have created a VPC in Google Cloud, and subnets were created automatically. What range of IP addresses would you not expect to see in use with the subnets?
A. 10.0.0.0 to 10.255.255.255
B. 172.16.0.0 to 172.31.255.255
C. 192.168.0.0 to 192.168.255.255
D. 201.1.1.0 to 201.2.1.0

A

The correct answer is D. That range is not within the RFC 1918 private address space, which is what automatically created subnets use. Options A, B, and C are all incorrect because they are private address ranges and may be used with subnets.

194
Q

During migration planning, you learn that traffic to the subnet containing a set of databases must be restricted. What mechanism would you plan to use to control the flow of traffic to a subnet?
A. IAM roles
B. Firewall rules
C. VPNs
D. VPCs

A

The correct answer is B. Firewall rules are used to control the flow of traffic.
Option A is incorrect because IAM roles are used to assign permissions to identities, such as users or service accounts.
Option C is incorrect. A VPN is a network link between Google Cloud and on-premises networks.
Option D is incorrect. VPCs are high-level abstractions grouping lower-level network components.
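As an illustration, a minimal sketch of an ingress rule that allows only application-tier traffic to reach database instances; the network name, tag, ranges, and port are placeholder assumptions:
# Allow PostgreSQL traffic from the app subnet to instances tagged "db";
# all other ingress traffic is blocked by the implied deny rule
gcloud compute firewall-rules create allow-app-to-db \
    --network=prod-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:5432 --source-ranges=10.0.1.0/24 --target-tags=db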

195
Q

During migration planning, you learn that some members of the network management team will need the ability to manage all network components, but others on the team will only need read access to view the state of the network. What mechanism would you plan to use to control the user access?
A. IAM roles
B. Firewall rules
C. VPNs
D. VPCs

A

The correct answer is A. IAM roles are used to assign permissions to identities, such as users or service accounts. These permissions are assigned to roles, which are assigned to users.
Option B is incorrect. Firewall rules are used to control the flow of traffic between subnets.
Option C is incorrect. A VPN is a network link between Google Cloud and on-premises networks.
Option D is incorrect. VPCs are high-level abstractions grouping lower-level network components.
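For example, a sketch of granting the predefined network roles to two groups; the project and group names are placeholders:
# Full network management for one group, read-only visibility for the other
gcloud projects add-iam-policy-binding my-project \
    --member=group:net-admins@example.com --role=roles/compute.networkAdmin
gcloud projects add-iam-policy-binding my-project \
    --member=group:net-viewers@example.com --role=roles/compute.networkViewer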

196
Q

Executives in your company have decided that the company should not route its GCP-only traffic over public internet networks. What Google Cloud service would you plan to use to geographically distribute the workload of an enterprise application?
A. Global load balancing
B. Simple network management protocol
C. Content delivery network
D. VPNs

A

The correct answer is A. Global load balancing is the service that would route traffic to the nearest healthy instance using Premium Network Tier.
Option B is incorrect. SNMP is a management protocol, and it does not enable global routing. Options C and D are wrong because they are network services but do not enable global routing.

197
Q

Executives in your company have decided to expand operations from just North America to Europe as well. Applications will be run in several regions. All users should be routed to the nearest healthy server running the application they need. What Google Cloud service would you plan to use to meet this requirement?
A. Global load balancing
B. Cloud Interconnect
C. Content delivery network
D. VPNs

A

The correct answer is A. Global load balancing will route traffic to the nearest healthy instance.
Option B is incorrect. Cloud Interconnect is a way to implement hybrid computing.
Option C is incorrect. Content delivery networks are used to distribute content to reduce latency when delivering that content.
Option D is incorrect. VPNs link on-premises data centers to Google Cloud.

198
Q

Executives in your company have decided that the company should expand its service offerings to a global market. Your company distributes educational video content online. Maintaining low latency is a top concern. What type of network service would you expect to use to ensure low-latency access to content from around the globe?
A. Routes
B. Firewall rules
C. Content delivery network
D. VPNs

A

The correct answer is C. A content delivery network would be used to distribute video content globally to reduce network latency.
Option A is incorrect. Routes are used to control traffic flow and are not directly related to reducing latency of content delivery, although a poorly configured set of routes could cause unnecessarily long latencies.
Option B is incorrect. Firewalls will not reduce latency.
Option D is incorrect because VPNs are used to link on-premises data centers to Google Cloud.

199
Q

Your company is using Google Cloud for development and testing. You require less than 2 Gbps network bandwidth. You want the lowest-cost and easiest-to-administer option. What networking option would you use to implement hybrid cloud networking?
A. Cloud Interconnect
B. Cloud VPN
C. Direct peering
D. CIDR block

A

Option B is correct. Cloud VPN is a GCP service that provides virtual private networks between GCP and on-premises networks. Cloud VPN is implemented using IPSec VPNs and supports bandwidths of up to 3 Gbps per VPN connection.
Option A is incorrect because Cloud Interconnect is used when at least 10 Gbps bandwidth is required.
Option C is incorrect. Direct peering is a direct connection to a Google network point of access that would require more administration than a Cloud VPN solution.
Option D is incorrect. CIDR blocks are address ranges for IP addressing.

200
Q

Working with a network engineer, you have determined that a new application to be deployed to the us-west1 region should use a regional load balancer. What are two options for regional load balancers?
A. Network TCP/UDP and Internal TCP/UDP
B. Internal TCP/UDP and HTTP(S) Load Balancing
C. Regional Proxy Load Balancing and Network TCP/UDP
D. Regional Proxy Load Balancing and Internal TCP/UDP

A

The correct answer is A. Network TCP/UDP and Internal TCP/UDP are the two regional load balancers in GCP.
Option B is incorrect. HTTP(S) Load Balancing is a global load balancer. Options C and D are incorrect because there is no Regional Proxy Load Balancing at this time.

201
Q

A client of yours has a technical requirement that states as their systems scale, no ingested data should be lost due to processing backlogs. What GCP service would you use to meet that requirement?
A. Cloud Dataprep
B. Cloud Dataproc
C. Cloud Dataflow
D. Cloud Pub/Sub

A

The correct answer is D. Cloud Pub/Sub can buffer data in a topic until the services are ready to process the data.
Option A is incorrect. Cloud Dataprep is a service for preparing data for analysis, such as machine learning.
Option B is incorrect. Cloud Dataproc is a Hadoop and Spark managed service.
Option C is incorrect. Cloud Dataflow is an implementation of Apache Beam, a stream and batch processing service.
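As a sketch, decoupling ingestion from processing with Pub/Sub might look like this; the topic and subscription names are placeholders:
# Producers publish to the topic; the processing service pulls at its own pace
gcloud pubsub topics create ingest-events
gcloud pubsub subscriptions create ingest-workers --topic=ingest-events \
    --ack-deadline=60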

202
Q

Google Cloud implements two implied firewall rules for virtual private clouds. What type of operations do these two rules enforce?
A. Block incoming and outgoing SMTP traffic
B. Block all incoming traffic, allow all outgoing traffic
C. Block all outgoing traffic, allow all incoming traffic
D. Block incoming SMTP traffic, allow outgoing UDP traffic

A

The correct answer is B. The implied rules block all incoming traffic and allow all outgoing traffic.
Option A is incorrect. The implied rules are not limited to SMTP traffic.
Option C is incorrect because it is the opposite of what is actually implemented.
Option D is incorrect. The implied rules are not protocol-specific: all incoming traffic, not just SMTP, is blocked, and all outgoing traffic, not just UDP, is allowed.

203
Q

A client of yours wants to improve business agility and speed of innovation and they want to be able to rapidly provision new resources. Which of the following would help meet that objective?
A. Cloud SQL
B. Cloud Storage
C. Cloud Deployment Manager
D. Cloud Machine Learning Engine

A

The correct answer is C. The Cloud Deployment Manager uses declarative infrastructure specifications to deploy infrastructure.
Option A is incorrect. Cloud SQL is a managed database service.
Option B is incorrect. Cloud Storage is an object store.
Option D is incorrect. Cloud Machine Learning Engine is a managed service for running machine learning models.

204
Q

A retailer notices that their traffic patterns are highest in the mornings and weekend evenings. They also notice that during other times, 75 percent of their VM capacity is idle. What GCP mechanism could you use to optimize the number of virtual machine instances running at any time?
A. Unmanaged instance groups
B. Managed instance groups
C. Persistent disks
D. Cloud Functions

A

The correct answer is B. Managed instance groups can be autoscaled.
Option A is incorrect because unmanaged instance groups cannot be autoscaled.
Option C is incorrect. Persistent disks do not increase CPU availability.
Option D is incorrect. Cloud Functions does not run virtual machines for GCP users.
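For example, a hedged sketch of enabling autoscaling on an existing managed instance group; the group name, zone, and thresholds are assumptions:
# Scale between 2 and 10 instances based on CPU utilization
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.65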

205
Q

A healthcare client needs to transmit data securely from client devices to an application running in GCP. The application runs in multiple regions. Data must be encrypted between client devices and the load balancer in GCP. What load balancer would you recommend?
A. TCP Proxy load balancing
B. SSL Proxy load balancing
C. Network TCP/UDP
D. Internal TCP/UDP

A

The correct answer is B. SSL Proxy load balancer is a global load balancer that terminates SSL/TLS traffic at the load balancer and distributes traffic across the set of backend servers.
Option A is incorrect because TCP Proxy load balancing is not recommended for SSL traffic. Options C and D are incorrect because they are both regional load balancers.

206
Q

An online gaming company wants to reduce latency to all customers while increasing their global footprint. Which of the following would you expect to see in an architecture designed to meet these two requirements?
A. Cloud CDN
B. Global load balancing
C. BigQuery
D. Cloud Pub/Sub

A

The correct answer is B. To reduce latency and expand global presence, the company should run game servers in regions around the globe. This will require global load balancing.
Option A is incorrect. Cloud CDN is used to distribute static content, not dynamic content like updates to game scenes.
Option C is incorrect. BigQuery is an analytics database.
Option D is incorrect. Cloud Pub/Sub is a message queue.

207
Q

A data analytics company wants to replace its MySQL databases running on premises with a managed service that provides autoscaling, provides low-latency load balancing, and does not require them to manage servers. What GCP service would you recommend for this?
A. Cloud SQL
B. Cloud BigQuery
C. Cloud Datastore
D. Cloud Storage

A

The correct answer is A. A MySQL database can be run in Cloud SQL.
Option B is incorrect. BigQuery is an analytical database.
Option C is incorrect. Cloud Datastore is a NoSQL document database.
Option D is incorrect. Cloud Storage is an object storage system, not a relational database.

208
Q

A company with a fleet of vehicles collects 120 fields of data per second for a total of 10 TB of data per day from the vehicles. The company wants to stream all data directly to Google Cloud in near real time. Executives want to minimize data loss and are concerned that backend processors may not be able to keep up with the load at all times. What would you recommend to mitigate the risk of losing data on ingest?
A. Deploy enough virtual machines to handle peak capacity load
B. Write data to a Cloud Pub/Sub topic and have the backend application read from the topic
C. Write data to a Cloud SQL database table and provide an API for the backend application to call to get the data
D. Write the data to a Cloud Dataproc database table and provide an API for the backend application to call to get the data

A

The correct answer is B. Write the data to a Cloud Pub/Sub topic that can buffer the data until the backend services can process it.
Option A is incorrect, as that might solve the problem, but it is costly and not necessary.
Option C is incorrect. Cloud SQL would not scale to the volume of data ingested, and an API is not the best way to move large volumes of streaming data between services.
Option D is incorrect. Cloud Dataproc is a managed Hadoop and Spark service; while Hadoop could have Hive tables, those tables are not designed for high volumes of low latency writes.

209
Q

A company with a fleet of delivery vehicles wants to be able to stock replacement parts preemptively so they can reduce unplanned downtime of their vehicles by 75 percent. In addition to collecting data, this will require developing predictive models using statistical and machine learning techniques. What managed service would you recommend for that?
A. Cloud Bigtable
B. Cloud Storage
C. Vertex AI
D. Cloud Datastore

A

The correct answer is C. Vertex AI is a managed service for machine learning. Options A and B are incorrect, because although they are useful for storing data, they do not provide managed machine learning services.
Option D is incorrect. Cloud Datastore is a NoSQL database.

210
Q

You repeatedly receive alerting notifications from Cloud Monitoring that the instances in a managed instance group are running with high CPU utilization. You determine that during peak periods, you should have two more instances running than you do now. What change would you implement to address this problem?
A. Manually add two instances to the managed instance group
B. Modify the managed instance group template to increase the maximum number of instances
C. Change to an unmanaged instance group so that the autoscaler can add two more instances
D. Grant the roles/compute.add-more-instances privilege to the service account associated with the managed instance group

A

The correct answer is B. You would have to modify the managed instance group template.
Option A is incorrect. You should not manually add instances to the managed instance group.
Option C is incorrect. Unmanaged instance groups do not autoscale.
Option D is incorrect. There is no roles/compute.add-more-instances privilege.

211
Q

A financial institution is using machine learning models to predict fraud. The company wants to update the machine learning models monthly. Machine learning engineers train the models with 90 percent of the data from the last 12 months and 10 percent of training data is 13 to 24 months old. All training data is stored in Cloud Storage. Data older than 24 months is not needed. The amount of training data retrieved from Cloud Storage is randomly selected and represents less than 0.05 percent of all potential training data stored. Most potential training data is never retrieved. Which of the following options is best if you want to minimize the cost of storage while also minimizing administrative overhead?
A. Store all data in Cloud Storage Standard class dual region storage.
B. Store all data in Cloud Datastore and execute a monthly cron job to delete data older than 24 months.
C. Store data from the most recent 12 months in Standard storage and data from 13 to 24 months ago in Nearline storage. Use lifecycle policies to move data from Standard to Nearline storage after 12 months and delete data older than 24 months.
D. Store data in Cloud Spanner and execute a monthly cron job to delete data older than 24 months.

A

The correct answer is C. Store recent data in standard storage, migrate data to nearline storage after 12 months, and automatically delete after 24 months.
Option A is incorrect. There is no indication that dual region storage is needed, and it costs more than storing data in a single region. Options B and D are incorrect. There is no indication that you would benefit from storing data in a relational or NoSQL database, and running a cron job requires someone on the team to monitor the cron job and address any problems running the job.
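As an illustration, a minimal lifecycle configuration implementing option C might look like the following; the bucket name is a placeholder:
# lifecycle.json: move Standard objects to Nearline after 365 days, delete after 730
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"age": 365, "matchesStorageClass": ["STANDARD"]}},
    {"action": {"type": "Delete"}, "condition": {"age": 730}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://training-data-bucket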

212
Q

A company uses three projects in GCP for sales, finance, and analytics. The analytics project needs to access transactional databases in both sales and finance. What mechanism would you use to make resources in one project available to another project?
A. Subnet
B. Firewall
C. Shared VPC
D. Shared Project VPN

A

The correct answer is C. A Shared VPC is a method for making resources in one VPC available to other VPCs. Options A and B are incorrect, as neither subnets nor firewalls makes project resources available to other projects.
Option D is incorrect. There is no such thing as a Shared Project VPN.

213
Q

Your organization has several VPCs and would like to avoid sending data over the public internet when sending network traffic between VPCs. You would also like to avoid any egress charges. Which of the following networking options should you use?
A. VPC Network Peering
B. Firewall
C. Subnet
D. Shared Project VPN

A

The correct answer is A. VPC Network Peering enables different VPC networks to communicate using private IP address space, as defined in RFC 1918. VPC network peering is used as an alternative to using external IP addresses or using VPNs to link networks. Options B and C are incorrect. Neither subnets nor firewalls enable different VPCs to communicate.
Option D is incorrect. There is no such thing as a Shared Project VPN.
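For reference, a sketch of peering two VPCs; peering must be configured from both sides, and the network and project names are placeholders:
gcloud compute networks peerings create a-to-b --network=vpc-a \
    --peer-project=project-b --peer-network=vpc-b
gcloud compute networks peerings create b-to-a --network=vpc-b \
    --peer-project=project-a --peer-network=vpc-a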

214
Q

To comply with security regulations, you have added firewall rules to limit inbound and outbound traffic. You notice that two of the rules can apply to the same situation and you want one to take precedence. What would you do to ensure that the correct rule takes precedence?
A. Specify the top flag when defining the firewall rule
B. Set the priority of the rule to take precedence with an integer value less than the priority value of the other rule
C. Set the priority of the rule to take precedence with an integer value greater than the priority value of the other rule
D. There is no way to do that; one of the rules must be removed

A

The correct answer is B. Firewall rules have priority values between 0 and 65535, where 0 is the highest priority and 65535 is the lowest priority.
Option A is incorrect. There is no top flag on a firewall rule.
Option C is incorrect. Larger integers have lower priority.
Option D is incorrect. There is a way to specify precedence.
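For example, a sketch where a higher-priority (lower-numbered) rule takes precedence over a broader deny rule; the names, ranges, and ports are assumptions, and the source ranges shown are Google Cloud's published health-check ranges:
# Priority 900 wins over priority 1000 when both rules match the same traffic
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=prod-vpc --priority=900 --direction=INGRESS --action=ALLOW \
    --rules=tcp:80 --source-ranges=130.211.0.0/22,35.191.0.0/16
gcloud compute firewall-rules create deny-http \
    --network=prod-vpc --priority=1000 --direction=INGRESS --action=DENY \
    --rules=tcp:80 --source-ranges=0.0.0.0/0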

215
Q

A network engineer is troubleshooting a new network configuration and believes that one or more firewall rules may be misconfigured. The network engineer would like to enable traffic currently blocked by a firewall rule. What should the network engineer do to enable traffic during troubleshooting?
A. Disable the firewall rule using the enforcement status
B. Delete the firewall rule
C. Create a new subnet without the firewall rule in question
D. Delete the subnet

A

The correct answer is A. The network engineer can disable the firewall rule without deleting it.
Option B is incorrect because although it would allow traffic to flow, disabling is preferred to deleting when troubleshooting.
Option C is incorrect because it would not enable the flow of traffic currently blocked by the firewall rule.
Option D would make it impossible to troubleshoot the firewall rule since the subnet would not exist.
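For reference, disabling and later re-enabling a rule during troubleshooting; the rule name is a placeholder:
gcloud compute firewall-rules update suspect-rule --disabled
# ...troubleshoot, then restore enforcement...
gcloud compute firewall-rules update suspect-rule --no-disabled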

216
Q

You are working with network engineers to set up several subnets on a VPC. You have outlined your plan with stakeholders who have questions about the decisions you have made. One question is about the meaning of parts of an IP address in CIDR notation. How would you explain the purpose of the number 20 in the IP address: 172.16.0.0/20?
A. 20 is the number of bits used to identify the subnet.
B. 20 is the number of bits used for host addresses.
C. 20 is a reference to a firewall rule ID.
D. 20 is a reference to a subnet ID.

A

The correct answer is A. The number after the slash is the number of bits used to identify the subnet. The remaining bits are used for host addresses.
Option B is incorrect. The number of bits used for host addresses in this case is 32-20, or 12. Options C and D are incorrect because 20 is not an identifier of a firewall rule or a subnet.
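For example, a /20 range leaves 12 host bits, or about 4,096 addresses per subnet (GCP reserves four of them). A sketch of creating such a subnet, with placeholder network, region, and subnet names:
gcloud compute networks subnets create app-subnet \
    --network=prod-vpc --region=us-west1 --range=172.16.0.0/20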

217
Q

A data warehouse architect would like your advice for moving a data warehouse to GCP. The architect plans to migrate from a PostgreSQL database to BigQuery. The plan assumes that an ETL process will write data to Cloud Storage. A process running in GCP will load the data into BigQuery. What kind of topology is this?
A. Mirrored topology
B. Meshed topology
C. Gated egress topology
D. Handover topology

A

The correct answer is D. In a handover topology, applications running on premises upload data to a shared storage service, such as Cloud Storage, and then a service running in GCP consumes and processes that data.
Option A is incorrect. In a mirrored topology, environments mirror each other.
Option B is incorrect. In a meshed topology, all systems within the cloud and private network can communicate with each other.
Option C is incorrect. In a gated egress topology, on-premises APIs are made available to applications running in the cloud.

218
Q

Your company is implementing a hybrid cloud. Large volumes of data will be transferred between the on-premises data center and GCP. You have determined that you will need 100 Gbps of bandwidth to meet networking service-level agreements. What networking option would you choose to link the on-premises data center to GCP using a single link, assuming all other preconditions for use are met?
A. Cloud VPN
B. Cloud VPC
C. Cloud Interconnect direct connection
D. Cloud Interconnect partner interconnect

A

The correct answer is C. To provide 100 Gbps, you will need a direct connection between the on-premises data center and a Google Cloud access point.
Option A is incorrect. Cloud VPN provides 3 Gbps of bandwidth.
Option B is incorrect. A Cloud VPC is not a way to link a remote data center to GCP.
Option D is incorrect. A partner interconnect can be configured with capacities from 50 Mbps to 10 Gbps, so the requirement could be met with multiple links but not with a single link.

219
Q

Regulations require that your company’s virtual machine instances cannot run on a physical server with virtual machines of any other cloud customer. What Compute Engine options would you use to ensure that your VMs do not run with other customer’s VMs?
A. Persistent disks
B. Standard boot image
C. Sole tenancy
D. Shielded VMs

A

The correct answer is C. You would select the sole tenancy option when creating the VM.
Option A is incorrect. Persistent disks are for storage and not related to sole tenancy.
Option B is incorrect. A standard boot image is for specifying an image to run when creating a VM.
Option D is incorrect. Shielded VMs have additional security controls but do not enforce sole tenancy.
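For reference, a hedged sketch of placing a VM on a sole-tenant node; the node type, resource names, and zone are assumptions:
# Create a sole-tenant node template and node group, then place the VM on it
gcloud compute sole-tenancy node-templates create isolated-template \
    --node-type=n1-node-96-624 --region=us-central1
gcloud compute sole-tenancy node-groups create isolated-group \
    --node-template=isolated-template --target-size=1 --zone=us-central1-a
gcloud compute instances create regulated-vm \
    --zone=us-central1-a --node-group=isolated-group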

220
Q

You are using a set of preemptible virtual machines to process a backlog of video analysis jobs. You notice that some of the machines have not been preempted. How long can a preemptible VM run before it is preempted?
A. 12 hours
B. 24 hours
C. 36 hours
D. 48 hours

A

The correct answer is B. Preemptible VMs will run up to 24 hours.
Option A is incorrect. Preemptible VMs can run longer than 12 hours.
Options C and D are incorrect because preemptible VMs do not run that long.

221
Q

Regulations require that your virtual machine instances run only on servers that support Virtual Trusted Platform Module (vTPM). What Compute Engine option would you use to ensure that this is the case?
A. Preemptible VMs
B. Custom machine type
C. Shielded VMs
D. Kubernetes pods

A

The correct answer is C. Shielded VMs include vTPM, along with secure boot and integrity monitoring.
Option A is incorrect. Preemptible VMs are lower cost and may be preempted by Google.
Option B is incorrect. Custom machine types are used to specify custom vCPU and memory configurations.
Option D is incorrect. Kubernetes pods are not related to Compute Engine security.
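For example, a sketch of creating a Shielded VM with vTPM enabled; the image, zone, and instance name are assumptions, and the chosen image must support Shielded VM features:
gcloud compute instances create shielded-vm --zone=us-central1-a \
    --image-family=debian-11 --image-project=debian-cloud \
    --shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring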

222
Q

You are helping a startup plan the release of a new application running on Compute Engine. The company expects slow adoption at first, but the pace of adoption will increase. They do not want to provision for peak capacity but want to have enough instances available to meet the load requirements at all times. Which of the following could help with this?
A. Autoscaling managed instance groups
B. Autoscaling unmanaged instance groups
C. Custom machine types
D. Preemptible VMs with load balancing

A

The correct answer is A. Autoscaling and managed instance groups can help ensure that enough VMs are available as needed.
Option B is incorrect. Unmanaged instance groups do not autoscale.
Option C is incorrect. Custom machine types would not adjust the number of VMs according to load.
Option D is incorrect. This option does not adjust the number of VMs either.

223
Q

A managed instance group has been running in the us-west1 region for three months. Your company’s customer base is growing, and you now need to deploy a similar managed instance group in the europe-west2 region. You copy the instance template from the us-west1 region to europe-west2 region, but when you try to use it in the new region, the instance group fails to create. Which of the following would you check first?
A. Specifying zonal resources in the instance group template
B. Using GPUs with VMs
C. Incorrectly specified private IP address
D. Typo in the machine image specification

A

The correct answer is A. If a zonal resource is specified in a managed instance template, it must be changed before using the template in the new region.
Option B is incorrect. Using GPUs with VMs is allowed.
Options C and D are incorrect because those errors would have generated errors in the original region.

224
Q

As a database administrator, you are tasked with configuring a VM to run a relational database in Compute Engine. You want the highest IOPS possible when reading and writing to storage while still having regional disks since zonal disks are not sufficient for your availability requirements. What kind of storage would you use?
A. Persistent SSD drives
B. Persistent HDD drives
C. Cloud Storage bucket
D. PersistentVolumes

A

The correct answer is A. Persistent SSD drives provide the highest IOPS for this use case. Extreme persistent disks have higher IOPS but are zonal, not regional, resources.
Option B is incorrect because hard disk drives have lower IOPS than SSDs.
Option C is incorrect. A Cloud Storage bucket is an object store.
Option D is incorrect. PersistentVolumes are persistent storage for Kubernetes.
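For reference, a hedged sketch of creating a regional SSD persistent disk replicated across two zones; the disk name, size, region, and zones are assumptions:
gcloud compute disks create db-data --region=us-west1 \
    --replica-zones=us-west1-a,us-west1-b --type=pd-ssd --size=500GB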

225
Q

Risk managers in your company have determined that no third parties may create or manage encryption keys used to encrypt your company’s data. Third parties can be used to store keys. What service would you use in Google Cloud to store encryption keys that are created in your on-premises data center?
A. Identity and Access Management (IAM)
B. Cloud Key Management Service
C. Cloud Identity
D. Shielded VMs

A

The correct answer is B. The Cloud Key Management Service is a service for managing encryption keys.
Option A is incorrect. IAM is for managing identities and authorizations.
Option C is incorrect. Cloud Identity is used for authentication.
Option D is incorrect. Shielded VMs are instances with additional security controls.
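For reference, a sketch of the Cloud KMS resources involved; the names and location are placeholders, and importing a key created on premises additionally requires a KMS import job whose exact options depend on the wrapping method chosen:
gcloud kms keyrings create app-keyring --location=us-central1
gcloud kms keys create app-key --keyring=app-keyring \
    --location=us-central1 --purpose=encryption
# An externally created key is then wrapped and brought in via the
# gcloud kms import-jobs and gcloud kms keys versions import command groups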

226
Q

Developers have a Python 3.7 application that they want to run in a platform-as-a-service. Which of the following GCP services will meet those requirements?
A. App Engine Standard First Generation
B. App Engine Standard Second Generation
C. Compute Engine
D. Cloud Dataflow

A

The correct answer is B. App Engine Standard Second Generation is a platform-as-a-service and supports Python 3.7.
Option A is incorrect because App Engine Standard First Generation does not support Python 3.7.
Option C is incorrect because Compute Engine is not a platform-as-a-service.
Option D is incorrect. Cloud Dataflow is not a platform-as-a-service.

227
Q

A startup has an application implemented using a set of Docker containers. The product manager wants to run the containers in a platform-as-a-service in Google Cloud. Of the following options, which would you recommend?
A. App Engine Standard First Generation
B. App Engine Standard Second Generation
C. App Engine Flexible
D. Cloud Functions

A

The correct answer is C. App Engine Flexible is a platform-as-a-service that supports custom containers. Cloud Run could also be used but is not a listed option. Options A and B are incorrect because App Engine Standard does not support custom containers.
Option D is incorrect. Cloud Functions does not run custom containers.

228
Q

A group of Python developers are running a set of microservices in App Engine. In addition to front-end, business logic, and backend services, the application requires an ancillary process to run every night. What App Engine service could they use to schedule a nightly job?
A. App Engine Cron Service
B. Cloud Pub/Sub
C. App Engine Task Queues
D. Cloud Build

A

The correct answer is A. The App Engine Cron Service runs tasks at regular times or intervals.
Option B is incorrect. Cloud Pub/Sub is a messaging queue.
Option C is incorrect. App Engine Task Queues is used to run jobs asynchronously or in the background.
Option D is incorrect. Cloud Build is used to build container images.
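For example, a minimal cron.yaml for the nightly job, deployed alongside the app; the URL and schedule are placeholders:
cat > cron.yaml <<'EOF'
cron:
- description: "nightly ancillary job"
  url: /tasks/nightly
  schedule: every day 02:00
EOF
gcloud app deploy cron.yaml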

229
Q

You notice a problem with a Kubernetes cluster and believe that some nodes may not be communicating with the cluster master. What Kubernetes component would you check to solve the node to cluster master communication problem?
A. etcd
B. Controller manager
C. kubelet
D. cron job

A

The correct answer is C. kubelet is an agent that runs on nodes and communicates with the cluster master.
Option A is incorrect. etcd is a distributed key value store.
Option B is incorrect. Controller manager is a service running on the cluster master.
Option D is incorrect. cron is a utility service for scheduling jobs to run.

230
Q

A Kubernetes cluster is used to run a set of containerized microservices. Your application has a high availability SLA, and you want to ensure that Kubernetes will monitor and replace pods that are not functioning properly. What do you need to do to ensure that Kubernetes monitors pod health and replaces unhealthy pods?
A. Select the high availability option when creating the cluster.
B. Deploy extra nodes.
C. Deploy extra pods.
D. Nothing. Kubernetes monitors health and replaces unhealthy pods by default.

A

The correct answer is D. Kubernetes monitors pod health and replaces unhealthy pods without requiring special setup.
Option A is incorrect. There is no need to select a high availability option. Options B and C are incorrect because there is no need to deploy extra pods or nodes, as Kubernetes scales the number of pods and nodes as needed.

231
Q

A mobile app uploads images to Cloud Storage. When a file is uploaded, metadata about the file must be written to a Cloud Firestore database, and an upload message must be written to a Pub/Sub topic. The mobile app developer wants to use a managed, serverless solution to perform this processing. Which of the following would you recommend?
A. Cloud Storage Nearline
B. Cloud Functions
C. Cloud Dataflow
D. Cloud Dataproc

A

The correct answer is B. Cloud Functions is a managed serverless product that can respond to events in the cloud, such as creating a file in Cloud Storage.
Option A is incorrect. Cloud Storage Nearline is a class of object storage.
Option C is incorrect. Cloud Dataflow is a stream and batch processing service that does not respond to events.
Option D is incorrect. Cloud Dataproc is a managed Hadoop and Spark service.
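As a sketch, a first-generation Cloud Function triggered by file uploads might be deployed like this; the function name, bucket, runtime, and entry point are assumptions:
# The function body would write file metadata to Firestore and publish to Pub/Sub
gcloud functions deploy process-upload --runtime=python39 \
    --trigger-resource=uploads-bucket \
    --trigger-event=google.storage.object.finalize \
    --entry-point=process_upload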

232
Q

A DevOps team is trying to improve its processes. In the past, a team member would manually create compute and storage resources in GCP. Now they are moving to specify infrastructure as code. What GCP service can they use to set up cloud environments?
A. Cloud Build
B. Deployment Manager
C. Cloud Dataflow
D. Cloud Dataprep

A

The correct answer is B. Deployment Manager is a service that allows you to specify infrastructure as code.
Option A is incorrect. Cloud Build is used for working with containers.
Option C is incorrect. Cloud Dataflow is a stream and batch processing service.
Option D is incorrect. Cloud Dataprep is a tool for preparing data for analysis, such as machine learning. Terraform is another infrastructure-as-code platform but was not included in the set of options to choose from.
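For illustration, a minimal Deployment Manager configuration and deployment command; the resource names, zone, and image are assumptions:
cat > env.yaml <<'EOF'
resources:
- name: dev-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
EOF
gcloud deployment-manager deployments create dev-env --config=env.yaml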

233
Q

A team of Java developers has created a backend application for processing streaming data from IoT devices. Each IoT device is assigned to a single server, and all data from that device goes to the same server. The data is stored in server memory where it is read by other services. Recently, several servers failed while processing data and data was lost. What is one way to improve the availability of data in this system while preserving the lowest latency reads possible with the least amount of change to the application?
A. Store data in Cloud Storage
B. Store data in Cloud Datastore
C. Store data in Cloud Memorystore
D. Store data in Cloud Build

A

The correct answer is C. Data can be stored in a Cloud Memorystore cache. If a single node fails, data is still available in the cache. Options A and B are incorrect because they would not provide the lowest latency possible.
Option D is incorrect because Cloud Build is a service for building containers.
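For reference, creating a Memorystore for Redis instance is a single command; the instance name, size, and region are placeholder assumptions:
gcloud redis instances create iot-cache --size=5 --region=us-central1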

234
Q

A data quality check program runs in App Engine Standard. Each time a data file is uploaded to a Cloud Storage bucket, the data quality check program runs on the newly uploaded file. You have created a program that runs on a Compute Engine instance that checks the bucket every three minutes for new files and invokes the App Engine program on any new files. You would rather not have to check the bucket continuously. Which of the following would run the data quality check program on demand?
A. Write a Cloud Function that triggers on file upload and writes a message to a Cloud Pub/Sub topic. Create a push subscription to send message data to an endpoint URL that will run the data quality control program in App Engine.
B. Write a Cloud Function that triggers on file upload and writes a message to a Cloud Pub/Sub topic. Create a pull subscription to send message data to an endpoint URL that will run the data quality control program in App Engine.
C. Write a Cloud Run service that triggers on file upload and writes a message to a Cloud Pub/Sub topic. Create a push subscription to send message data to an endpoint URL that will run the data quality control program in App Engine.
D. Write an App Engine Flexible application that triggers on file upload and writes a message to a Cloud Pub/Sub topic. Create a pull subscription to send message data to an endpoint URL that will run the data quality control program in App Engine.

A

The correct answer is A. A push subscription will push message data to an App Engine application endpoint and run the data quality program.
Option B is incorrect. If you used a pull subscription, the data quality control program would have to check the message queue for new messages. Options C and D are incorrect. Cloud Run and App Engine Flexible applications do not trigger on events.
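As an illustration, a sketch of the push subscription wiring; the topic, subscription, and endpoint URL are placeholders:
gcloud pubsub topics create file-uploads
gcloud pubsub subscriptions create dq-check-push --topic=file-uploads \
    --push-endpoint=https://my-project.appspot.com/run-dq-check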

235
Q

As part of a migration to the cloud, your team has decided to use Google Cloud managed services to replace as many self-managed services as possible. Your team currently runs an Apache Flink cluster that implements the Apache Beam stream and batch processing model. Which GCP service would you use to replace Apache Flink?
A. Cloud Dataprep
B. Cloud Dataflow
C. Cloud Build
D. Cloud Datastore

A

The correct answer is B, Cloud Dataflow, which is a service that implements Apache Beam.
Option A is incorrect. Cloud Dataprep is a tool for preparing data for analysis.
Option C is incorrect. Cloud Build is used to build containers.
Option D is incorrect. Cloud Datastore is a NoSQL document database.

236
Q

You have several applications running in Compute Engine, App Engine, and Kubernetes Engine. When you review the metrics collected by Cloud Monitoring, you notice that only metrics from App Engine and Kubernetes Engine applications are available. What might be the cause of this?
A. Compute Engine applications are written in a version of Java not supported by Cloud Monitoring.
B. The Cloud Monitoring ops agent has not been installed on Compute Engine instances.
C. Cloud Monitoring does not monitor Compute Engine; it monitors only App Engine and Kubernetes Engine.
D. Compute Engine metrics are stored locally and must be queried using a different interface.

A

The correct answer is B. The Cloud Monitoring ops agent probably hasn’t been installed.
Option A is incorrect. This behavior is unrelated to the language used to write applications running in Compute Engine.
Option C is incorrect. Cloud Monitoring can monitor Compute Engine instances.
Option D is incorrect. Cloud Monitoring metrics collected from Compute Engine instances are not stored locally or viewed with a different interface.
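For reference, the documented way to add the Ops Agent to a Linux Compute Engine instance is a short download-and-install script, run on each VM (verify the current instructions for your OS version):
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install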

237
Q

As part of a migration to Google Cloud, you are mapping identities from your on-premises systems to identities that you will use in the cloud. Google Cloud provides several different types of identities. Which of the following is not a valid identity type in GCP?
A. Google account
B. Service account
C. Cloud IoT account
D. Cloud Identity domain

A

The correct answer is C. Cloud IoT is not a valid identity type. Options A, B, and D are all incorrect because they are valid types of identities in GCP.

238
Q

Several services running in your on-premises data center use the usernames and passwords of users to perform tasks. Your team has had several incidents because tasks fail to execute since a password was changed by a user and the application was not updated. In addition, auditors have criticized the way that you store passwords for use by applications. You are moving applications to the Google Cloud and have an opportunity to use another method. What would you suggest?
A. Use a Cloud Identity domain for all applications
B. Use an IoT account identity type
C. Use a different service account for each application
D. Use a project name instead of a username to authenticate

A

The correct answer is C. You could use a service account that is granted its own set of access controls instead of relying on using a person’s set of permissions.
Option A is incorrect. A Cloud Identity domain is a type of identity that does not solve the problem.
Option B is incorrect. There is no IoT account type.
Option D is incorrect. Project names cannot be used for authentication.
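For example, a sketch of creating a dedicated service account for one application and granting it only the access it needs; the names and role are placeholders:
gcloud iam service-accounts create batch-runner --display-name="Batch task runner"
gcloud projects add-iam-policy-binding my-project \
    --member=serviceAccount:batch-runner@my-project.iam.gserviceaccount.com \
    --role=roles/storage.objectViewer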

239
Q

The number of users who need access to your cloud resources is increasing. You would prefer not to assign roles to individual identities. Instead, you would like assign IAM roles in bulk. What could you use to do that?
A. Google Identity account.
B. A Google email account.
C. A Google Group.
D. You can’t. This is a security feature of GCP.

A

The correct answer is C. A Google Group can be used to create sets of users and assign them roles. Options A and B are incorrect. A Google Identity account and Google email account are for a single user.
Option D is incorrect. You can create sets of users and assign roles to them using Google Groups.

240
Q

You are administering a data warehouse running in BigQuery. A new team of developers wants to add new tables to the existing data warehouse. What permission will they need to create tables?
A. bigquery.tables.create
B. Any of the basic roles
C. Create table
D. table-create-bigquery

A

The correct answer is A. This is the only option that follows the IAM permission naming convention, which starts permission names with a resource type and ends with an action, in this case create.
Option B is incorrect. The Viewer basic role does not grant permission to create resources.
Option C is incorrect. Create table is not a permission in GCP.
Option D is incorrect. table-create-bigquery is not a permission in GCP.

241
Q

You have determined that several developers using Cloud Pub/Sub will need the pubsub.subscriptions.consume permission. You expect other developers to need the permission in the future. At some point in the future, it is likely that some developers who need the permission now will no longer need it. Which way would you choose to assign this permission to those developers?
A. Grant the permission to each developer individually
B. Grant the permission to a group, and add each developer to the group
C. Grant a role with the permission to a group, and add each developer to the group
D. Grant a role with the permission to each developer individually

A

The correct answer is C. As needs for the permission change, users can be added or removed from the group. Options A and B are incorrect. You do not assign permissions directly to a user or group; permissions are assigned to roles.
Option D is incorrect because although you could assign the role to individuals, access controls are easier to administer when groups are used.

242
Q

Google Cloud provides predefined roles that are organized around commonly performed tasks, such as administering a server or querying a database. Which of the following is a role that would grant administration privileges for BigQuery?
A. admin.bigquery
B. roles/bigquery.admin
C. bigquery-admin
D. admin-bigquery

A

The correct answer is B. It is the only option that follows the role naming convention of starting role names with roles/. Options A, C, and D are incorrect. They are not properly formed role names.

243
Q

You have to assign a system administrator a set of permissions to perform a limited set of operations on a service. You have found existing roles that have all the permissions needed, but they also have other permissions that are not needed. What is one way to grant the permissions needed while following the principle of least privilege?
A. Assign the needed permissions directly to the system administrator’s identity
B. Create a custom role with only the needed permissions
C. Create a Google Group with only the needed permissions
D. Create a project with only the needed permissions

A

The correct answer is B. Create a custom role with only the needed permissions.
Option A is incorrect. Permissions are not assigned directly to identities. Options C and D are incorrect. Permissions are not assigned to Google Groups or projects.
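For example, a hedged sketch of a custom role containing only the needed permissions; the role ID, project, and permissions are assumptions:
gcloud iam roles create limitedOperator --project=my-project \
    --title="Limited operator" \
    --permissions=compute.instances.start,compute.instances.stop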

244
Q

You have created a policy specifying users that have a particular role. You set the policy on a folder in the resource hierarchy that contains only individual resources. Which of the following will inherit that policy?
A. Individual resources
B. Projects within a different folder
C. Organization that contains the folder
D. Different projects created by the same user

A

The correct option is A. When a policy is assigned to a folder, it is inherited by projects within that folder and individual resources within those projects.
Option B is incorrect. Projects in different folders do not inherit the policy.
Option C is incorrect. An organization does not inherit policies from folders within the organization.
Option D is incorrect. Different projects created by the same user do not inherit the policy.

245
Q

You have been asked to give a presentation on Google Cloud security controls. Some stakeholders are particularly interested in encryption. They have heard the term envelope encryption but do not understand what that refers to. How would you describe envelope encryption?
A. Envelope encryption is the process of storing keys in a Cloud Storage sub-bucket structure called an envelope.
B. Envelope encryption is the process of storing keys in a Cloud Key Management Service structure called an envelope.
C. Envelope encryption is the process of encrypting a data encryption key with another key called a key encryption key.
D. Envelope encryption is the process of encrypting a data encryption key with a Google public key.

A

The correct answer is C. Envelope encryption is the process of encrypting a data encryption key with another key called a key encryption key. Options A and B are incorrect. Keys are not stored in a structure called an envelope in either of those services.
Option D is incorrect. Data encryption keys are not encrypted with publicly known keys. That would defeat the purpose of encrypting the data encryption key.

246
Q

You are managing several applications running in Google Cloud, and that includes being responsible for meeting SLAs. You have set up monitoring and alerting on all services. Members of the DevOps team are complaining that they are receiving too many alert notifications for things that do not require them to make any changes; the systems are running as expected. What might you do to reduce the number of alert notifications?
A. Monitor only backend services
B. Randomly select 20 percent of services to monitor
C. Review conditions and change thresholds
D. Use Cloud Logging instead of Cloud Monitoring

A

The correct answer is C. You should review conditions and change thresholds so that false alerts are not generated for situations that do not warrant human intervention. Options A and B are incorrect. You should monitor all services.
Option D is incorrect. You should use Cloud Logging in addition to Cloud Monitoring.

247
Q

An application generates a large volume of logs that are collected in Cloud Logging. You have gained some insights from the logs using Cloud Logging text search. You would like to perform structured queries using SQL. How can you accomplish this?
A. Export log data to BigQuery and use SQL queries
B. Export log data to Cloud Firestore and use SQL queries
C. Export log data to Cloud Storage and use SQL queries
D. Use SQL queries directly in Cloud Logging

A

The correct answer is A. Log data can be exported from Cloud Logging to BigQuery.
Option B is incorrect. Cloud Firestore does not support SQL.
Option C is incorrect. Cloud Storage does not support SQL.
Option D is incorrect. SQL queries are not available in Cloud Logging.
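As a sketch, exporting logs to BigQuery uses a log sink; the dataset, sink name, and filter are placeholders, and the sink's writer identity must also be granted write access to the dataset:
bq mk --dataset my-project:app_logs
gcloud logging sinks create app-logs-to-bq \
    bigquery.googleapis.com/projects/my-project/datasets/app_logs \
    --log-filter='resource.type="gae_app"'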

248
Q

A team of developers has made changes to several calculations in an application. They want to roll out the new version of the calculations to a small group of users at first. What kind of deployment would you recommend they use?
A. Blue/Green deployment
B. Complete deployment
C. Canary deployment
D. Rolling deployment

A

The correct answer is C. A canary deployment is used to roll out a change to a subset of users.
Option A is incorrect. A Blue/Green deployment involves switching all users from one version of an application to another.
Option B is incorrect. Complete deployment rolls out changes to all users.
Option D is incorrect. A rolling deployment incrementally updates servers over time.

249
Q

Your manager is questioning the amount of money spent each month for Compute Engine services. The manager believes that you are not using resources efficiently and could meet SLAs with fewer instances. Which of the following is the best option to collect data to show the manager CPU and memory utilization rates?
A. Collect log data into a central repository and review startup and shutdown log messages
B. Install Cloud Monitoring agents on VMs and collect metrics on CPU and memory utilization
C. Run Cloud Debugger to record utilization data
D. Write a custom script that runs as a cron job and records CPU utilization and memory utilization every minute to a local file on each VM

A

Option B is correct. Cloud Monitoring collects instance and application metrics.
Option A is incorrect. Collecting logs and reviewing them will not provide the measurements needed to understand CPU and memory utilization.
Option C is incorrect. Cloud Debugger is a debugging tool.
Option D could meet your needs, but it would take time to write the script to collect data, the script would have to be maintained, and additional programs would be needed to consolidate and aggregate the data collected.

250
Q

A DevOps team has had several incidents resulting in a loss of service for an important application. The team believes the incidents were caused by exhausting CPU and memory resources. What GCP service could they use to receive notifications when resources are approaching the maximum utilization you consider safe?
A. Cloud Dataproc
B. Cloud Trace
C. Cloud Monitoring Alerting
D. Cloud Logging

A

The correct answer is C. Cloud Monitoring Alerting allows one to implement alerting policies that describe what resources to monitor, what metrics to monitor, and when to send a notification.
Option A is incorrect. Cloud Dataproc is a managed Hadoop and Spark service.
Option B is incorrect. Cloud Trace is used for distributed tracing of application performance.
Option D is incorrect. Cloud Logging can provide useful information, but it does not send notifications.

251
Q

A new developer has joined your team of software engineers. You explain to the new developer your procedures for releasing code, which include which of the following types of tests to find bugs in the smallest units of code before incorporating code into production code?
A. Integration tests
B. Load tests
C. Unit tests
D. Acceptance tests

A

The correct answer is C. A unit test is a test of the smallest unit of code.
Option A is incorrect. Integration tests examine a combination of units of code.
Option B is incorrect. Load tests put workload on the full system to identify bugs related to high workloads.
Option D is incorrect. Acceptance tests are used to verify that a system meets business requirements.
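A minimal sketch of a unit test in Python's unittest framework; the discount() function and expected values are hypothetical:

import unittest

def discount(price: float, percent: float) -> float:
    # The "smallest unit of code" under test.
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(discount(100.0, 10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()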

252
Q

Your startup is offering paid and free versions of a service. The free version provides for up to 100 API calls a day to the service, while the paid version allows for up to 5,000 calls a day. Once a customer exceeds those limits, further requests are discarded and not processed. This is an example of what kind of overload strategy?
A. Shedding load
B. Upstream throttling
C. Downstream throttling
D. Cascading failure

A

The correct answer is A. This is an example of load shedding because data is dropped without being processed.
Option B is incorrect. This is not a case of reducing the amount of workload sent to downstream services.
Option C is incorrect. There is no such strategy as downstream throttling.
Option D is incorrect. This is not a failure of any kind; rather, it is a planned and intended response to prevent excessive workload.
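An illustrative sketch of load shedding at the application level; the tiers, limits, and in-memory counter are hypothetical (a production service would track counts in a shared store):

from collections import defaultdict

DAILY_LIMITS = {"free": 100, "paid": 5000}
calls_today = defaultdict(int)

def process(request) -> None:
    pass  # placeholder for the real service logic

def handle_request(customer_id: str, tier: str, request) -> bool:
    calls_today[customer_id] += 1
    if calls_today[customer_id] > DAILY_LIMITS[tier]:
        return False  # shed the load: discard the request without processing it
    process(request)
    return True

print(handle_request("customer-1", "free", {"q": "status"}))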

253
Q

The CFO of your company has heard that your development team practices something called incident management. The CFO is concerned that this will cut into development time and delay the release of new features. How would you explain the purpose of incident management?
A. To identify the developer who introduces bugs so that they can receive additional training or be terminated.
B. During a service outage, incident management practices seek to correct problems in the service and restore operations as soon as possible.
C. During a service outage, an incident commander reviews all code released in the last build.
D. To monitor infrastructure and applications in order to ensure adequate resources are available for the workload the system is processing.

A

The correct answer is B. Incident management practices seek to correct problems and restore services.
Option A is incorrect. The goal is not to cast blame but to resolve the issue that is disrupting the service.
Option C is incorrect. An incident commander might review code, but that does not capture the full extent of incident management.
Option D is incorrect. Monitoring data may help to identify problems in services, but incident management is not about using those metrics for workload planning.

254
Q

You have been invited to a team meeting about developing a new service. The topics on the agenda include identifying the scope of the problem to address, evaluating options, and discussing the cost and benefits of various options. In what phase of the software development lifecycle is this team working?
A. Documentation
B. Analysis
C. Deployment
D. Design

A

The correct answer is B. The steps described are part of the analysis phase.
Option A is incorrect. Documentation is written in later stages.
Option C is incorrect. No code has been developed to deploy.
Option D is incorrect. The team is not at a point where it can make design decisions since the scope of work has not been defined.

255
Q

A group of executive stakeholders in your company is trying to prioritize software development and other IT projects. They are considering a lift-and-shift enterprise application migration to GCP, upgrading networking between data centers, and developing software to support a new line of business. What business value–focused metrics might the executives use to compare the value of the different projects?
A. Service-level agreement
B. Total cost of ownership (TCO)
C. Return on investment (ROI)
D. Price to sales

A

The correct answer is C. Return on investment measures the value of the benefit gained by an investment.
Option A is incorrect. A service-level agreement is not related to measuring the value of an investment.
Option B is a useful measure, but when comparing different kinds of investments, ROI is a better measure because it considers both the cost and the benefit of the investment.
Option D is incorrect. Price to sales is a measure used in equity investment, and it is calculated as a company’s market capitalization (total value of outstanding stock) divided by the annual sales revenue.

256
Q

After several months of intensive development, you have completed a major upgrade of an enterprise application. The application uses a microservices architecture and several managed services, and it runs in a hybrid cloud environment. You have performed unit test, integration test, and acceptance tests and have not found any significant problems; however, you understand complicated and complex systems fail in unexpected ways. What other technique could you use to understand how the system might fail?
A. Use pre-mortem analysis
B. Conduct an incident management review before there is any problem in production
C. Employ chaos engineering practices
D. Have a different team of developers review the application code

A

The correct answer is C. Chaos engineering introduces failures into a system to understand the consequences of component failures better.
Option A is incorrect. There is no such thing as a pre-mortem analysis.
Option B is incorrect. Incident management reviews are done after an incident, not before.
Option D is incorrect because although having additional code review may have some benefits, it will not help understand complex failure scenarios.

257
Q

A midsize business has acquired three startups over the past 18 months. Each organization had its own IT management practices. The CIO has decided that the companies will all standardize on the same set of service management practices that include general management practices, service management practices, and technical management practices. What standards of practice would you recommend they follow?
A. HIPAA
B. GDPR
C. ITIL
D. Waterfall methodology

A

The correct answer is C. The practices described constitute the ITIL framework for IT management. Options A and B are incorrect, as they are government regulations.
Option D is incorrect. Waterfall methodology is a software development methodology, not a comprehensive set of IT management practices.

258
Q

TerramEarth is revising some IT practices. One area of particular concern is disaster recovery. Stakeholders and architects have developed a plan that includes plans for deploying a disaster recovery environment in the cloud, detailed description of roles, and criteria for declaring a disaster and switching to the disaster recovery environment. What is missing from this plan?
A. A description of the contents of the code repository
B. A plan to review code before deploying in the disaster recovery environment
C. A list of alert policies that should be added to the disaster recovery environment
D. A description of how to test the disaster recovery plan, and a schedule to perform such tests

A

The correct answer is D. A disaster recovery plan should include testing the plan.
Option A is incorrect. There is no need for a description of the contents of the code repository; it may be changing frequently anyway.
Option B is incorrect. You should not be reviewing code during a disaster recovery operation; if code review is needed, it should be done as part of the standard software development process.
Option C is incorrect. The disaster recovery environment should mirror the production environment as closely as possible.

259
Q

You are conducting a training session for new software developers. One of the participants asks what exactly does the term stakeholder mean, and why would software developers need to understand the concept?
A. A stakeholder is anyone who pays for a project.
B. A stakeholder is anyone with an interest in or influence over a project.
C. A stakeholder is a synonym for customer.
D. A stakeholder is a synonym for employer.

A

The correct answer is B. A stakeholder is someone with an interest in a project or influence over a project.
Option A is incorrect. Someone who pays for a project is a stakeholder, but they are not the only stakeholders. Options C and D are incorrect. Customers and employers are not synonyms for stakeholder, although a customer or an employer may be a stakeholder in a project.

260
Q

Executives at TerramEarth are planning multiple projects at the same time, including deploying wireless communications to more equipment, implementing machine learning and predictive analytics initiatives, and implementing new systems for collecting and analyzing feedback from dealers. Together, these projects implement TerramEarth’s business strategy for the next 12 months. This collection of projects is also known as which one of the following?
A. Portfolio
B. Stakeholder
C. ROI
D. Agile methods

A

The correct answer is A. A portfolio is a set of projects that implement a business strategy.
Option B is incorrect, as a stakeholder is someone with an interest in or influence over a project.
Option C is incorrect. ROI is a measure of the value of an investment.
Option D is incorrect. Agile methods are practices commonly used in software development.

261
Q

An online gaming company recently received audit reports on their financial and IT practices. The auditors noted that the company had insufficient cost management controls for a company growing as fast as it had. The auditors recommended more emphasis on two tasks related to cost management. What might those tasks have been?
A. Disaster recovery planning and incident management
B. Resource planning and cost estimating
C. Incident management and customer success management
D. Cost budgeting and separation of duties

A

The correct answer is B. Resource planning and cost estimating are two types of cost management practices.
Option A is incorrect. Disaster recovery planning and incident management are not cost management practices.
Option C is incorrect. Incident management and customer success management are not cost management practices.
Option D is incorrect. Cost budgeting is a cost management function, but separation of duties is not; it is a security best practice.

262
Q

Stakeholders at a retail company believe that their current software development practices are slowing the pace of software innovation. Executives are hearing complaints from product managers that new ways of engaging with customers are taking too long to implement. The stakeholders are advocating for more collaboration between the product managers and developers as well as more focus on working software and less on detailed documentation. What software development methodology would work well under these conditions?
A. Waterfall methodology
B. Agile methodology
C. Cost management
D. Spiral methodology

A

The correct answer is B. Agile methodologies favor collaboration and rapid software development.
Option A is incorrect. Waterfall methodology does not meet the requirements.
Option C is incorrect. Cost management is not a software development methodology.
Option D is incorrect, although the iterative nature of spiral methodology makes it closer to meeting the requirements than waterfall methodology.

263
Q

Mountkirk Games plans to make some services and data available to business partners using APIs. An architect assigned to the project is reviewing a design provided by several engineers that have not worked on APIs before. The architect notices that some of the proposed API methods are not standard. Which of the following could be the proposed nonstandard set of API methods?
A. List, Get, Update
B. Create, Get, Delete
C. Read, Write, Update
D. Create, Update, Delete

A

The correct answer is C. Read and Write are not standard resource methods. The standard methods are List, Get, Create, Update, and Delete, and options A, B, and D contain only standard methods.

264
Q

A long-established insurance company is considering moving some workloads to the cloud. The company is risk averse, and it wants to proceed in ways that minimize risk. A team of developers and IT system owners have identified a service, which if it were disrupted, would not have an adverse impact in the short term. Which cloud migration strategy would best fit with their risk management approach?
A. Rebuild in the cloud
B. Move and improve
C. Lift and shift
D. Spiral methodology

A

The correct answer is C. Lift and shift is the lowest-risk approach since it minimizes change.
Option A is not the best fit because rebuilding presents more opportunities for delays and unanticipated issues.
Option B is better than rebuilding completely, but it still introduces changes that increase risk.
Option D is not a cloud migration strategy. It is a software development methodology.

265
Q

You have been tasked with building a data warehousing prototype in Google Cloud using BigQuery. You plan to upload approximately 50 GB of data to Cloud Storage from your on-premises data warehouse. You will load data only once for this prototype. You have access to a 1 Gbps network. What method would you choose?
A. Transfer using gsutil
B. Transfer using bq
C. Transfer using Storage Transfer Service
D. Transfer using Cloud Transfer Appliance

A

The correct answer is A. Transferring 50 GB of data over a 1 Gbps network will take less than 10 minutes (see the quick calculation after this answer), and gsutil is the Cloud SDK command-line tool for working with Cloud Storage.
Option B is incorrect. bq is the command-line tool for working with BigQuery, and the data is being loaded to Cloud Storage.
Option C is incorrect. Storage Transfer Service is used for scheduling transfers and moving data from other clouds or from other HTTP/HTTPS locations.
Option D is incorrect. Cloud Transfer Appliance is used for transfer of 10 TB or more from a data center.
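A quick back-of-the-envelope calculation supporting the answer (ignoring protocol overhead and contention):

data_bits = 50 * 8 * 10**9   # 50 GB expressed in bits
link_bps = 1 * 10**9         # 1 Gbps network
minutes = data_bits / link_bps / 60
print(f"{minutes:.1f} minutes")  # roughly 6.7 minutes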

266
Q

Mountkirk Games plans to make some services and data available to business partners using APIs. They will rate limit the number of API calls that the business partners can make. What is a standard authentication method that they could use with their APIs?
A. JSON web tokens (JWT)
B. API key
C. Key encryption key
D. Data encryption key

A

The correct answer is B. API keys are used to authenticate the caller of API methods.
Option A is incorrect. JSON Web Tokens are used for authorization, not authentication.
Option C is incorrect. Key encryption keys are used to encrypt data encryption keys.
Option D is incorrect. Data encryption keys are used to encrypt data.

267
Q

A data analytics consultancy is currently running 20 Apache Hadoop/Spark servers in their colocated data center. The cloud migration architect recommends a lift-and-shift migration. If the company follows the architect's recommendation, where will they run their Apache Hadoop/Spark servers?
A. Compute Engine
B. App Engine Standard
C. Cloud Functions
D. Cloud Dataprep

A

The correct answer is A. Lift and shift minimizes changes and runs services in the cloud in an environment as similar to the original environment as possible. By using Compute Engine, the company can deploy 20 virtual machines and run Hadoop/Spark servers similarly to the way that they are run in the colocation data center.
Option B is incorrect. App Engine Standard runs applications in a limited number of runtimes.
Option C is incorrect. Cloud Functions cannot run server-based software such as Hadoop/Spark.
Option D is incorrect because Cloud Dataprep is a managed service for preparing data for analysis and machine learning.

268
Q

A retailer’s product catalog is limited in functionality. The database schema is highly structured and difficult to change. The product catalog team has documented requirements for a new catalog, and they include minimizing the time developers spend on database administration, allowing for flexible schemas, and the ability to scale with minimal intervention by engineers. What database would you recommend?
A. PostgreSQL running in Compute Engine
B. MySQL running in Compute Engine
C. Cloud Firestore
D. BigQuery

A

The correct answer is C. Cloud Firestore is a managed NoSQL document database that supports flexible schemas and will scale without intervention.
Option A is incorrect. The company uses MySQL, not PostgreSQL, and running a relational database in Compute Engine would still leave developers with significant database administration overhead.
Option B is incorrect because we know that the current database schema is difficult to change, and the company uses MySQL. Thus, using a MySQL solution in Compute Engine would not improve schema flexibility or reduce database administration overhead.
Option D is incorrect. BigQuery is an analytical database and not suitable for a product catalog.

269
Q

A logistics company has approximately 50 TB of historical vehicle monitoring data stored in an on-premises data center. It plans to use that data with Vertex AI to build predictive maintenance models, but first it must migrate the data to Cloud Storage. What method would you recommend?
A. Copy using gsutil
B. Copy using gcloud
C. Transfer using Storage Transfer Service
D. Transfer using Cloud Transfer Appliance

A

The correct answer is D. Since the volume of data is over 10 TB and the data is in an on-premises data center, the Cloud Transfer Appliance should be used.
Option A is incorrect. gsutil is used for copying smaller volumes of data from on-premises systems.
Option B is incorrect. gcloud is not used to copy data; it is the command-line utility for many other GCP services.
Option C is incorrect. Storage Transfer Service is used for repeated transfers and data transfers from other clouds or HTTP/HTTPS locations.

270
Q

You have a client who uses three RabbitMQ servers for messaging, social notifications, and events. The architect managing their migration to Google Cloud wants to replace RabbitMQ with a managed service. What would you recommend?
A. Cloud Storage
B. Cloud Pub/Sub
C. Cloud Composer
D. Cloud Build

A

The correct answer is B. Cloud Pub/Sub is a managed messaging queue service that can be used in place of RabbitMQ in this case.
Option A is incorrect. Cloud Storage is an object storage system.
Option C is incorrect. Cloud Composer is a managed workflow service that implements Apache Airflow.
Option D is incorrect. Cloud Build is a service for building container images.
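A minimal sketch of publishing a message with the Cloud Pub/Sub client library for Python; the project ID, topic name, and payload are hypothetical:

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "social-notifications")

# Publish a message; attributes such as origin are optional metadata.
future = publisher.publish(topic_path, b'{"event": "friend_request"}', origin="web")
print("Published message ID:", future.result())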

271
Q

As part of a migration to GCP, you are assisting a client with migrating a legacy document repository to the cloud. Cloud Storage is a good fit with the requirements because of its support for lifecycle management. You are uploading files in parallel over a 10 Gbps network, but the files are not loading as fast as you would expect given the network bandwidth. What might cause a slower-than-expected data upload?
A. You are writing to buckets that do not have globally unique names, which is delaying name resolution.
B. You are loading files with sequentially close names (in this case they start with timestamps), and the files are likely assigned to the same server creating a hotspot during write operations.
C. You have incorrectly configured access controls, and this is slowing write operations.
D. You are using personally identifiable information (PII), including IP addresses, in the filenames.

A

The correct answer is B. Using sequentially similar names or timestamps with parallel loads can lead to similarly named files assigned to the same server, which creates a hotspot.
Option A is incorrect. Buckets must have globally unique names.
Option C is incorrect. If access controls were misconfigured, it would prevent writes completely, not just slow them down.
Option D is incorrect. Using personally identifiable information in a filename is not recommended, but that alone will not slow an upload unless the filenames are sequential as well.
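A small sketch of one common workaround: prepend a short hash to otherwise sequential object names so that parallel uploads spread across servers (the naming scheme is hypothetical):

import hashlib

def distributed_object_name(filename: str) -> str:
    # A short hash prefix breaks up lexically adjacent, timestamp-based names.
    prefix = hashlib.md5(filename.encode()).hexdigest()[:6]
    return f"{prefix}_{filename}"

print(distributed_object_name("20240101T000000_scan.pdf"))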

272
Q

TerramEarth equipment is used around the world. In the future, it plans to store vehicle metrics in Cloud Storage. The data will be used for at least 18 months for analysis and training machine learning models. No data should be lost if a data center, zone, or region is inaccessible. The business requirements specifically call for georedundant storage. Which GCP service would you use to meet this requirement?
A. Cloud Storage Standard class using a multiregion
B. Cloud Storage Standard class using regional storage with an HTTP(S) global load balancer in front of Cloud Storage
C. Cloud Storage Nearline
D. Cloud Storage Coldline

A

The correct answer is A. Cloud Storage Standard class storage in a multi-region location is georedundant and meets the requirement.
Option B is incorrect. You do not run a load balancer in front of Cloud Storage. Options C and D are incorrect because the data is being actively used. It should not be in Nearline storage or Coldline storage, which are designed for infrequently accessed data.

273
Q

Your startup has released a service to a small set of beta customers. The initial feedback is positive, except for indications that the service is not always available when customers expect it to be. You have analyzed several incidents that severely interrupted services, and you have determined that a failure in the relational database was the root cause of the failure in each case. You have decided to revise your architecture to replace the single instance of a PostgreSQL database running in Compute Engine with a high availability relational database. You want to deploy a high availability database as fast as possible, and you want to minimize database administration overhead going forward. What would you recommend?
A. Cloud Firestore
B. Cloud SQL
C. Cloud Bigtable
D. Cloud Spanner

A

The correct answer is B. Cloud SQL Second Generation is a managed relational database service that supports high availability databases. Options A and C are incorrect. Cloud Firestore and Cloud Bigtable are NoSQL databases.
Option D is incorrect. Cloud Spanner is a managed relational database designed to scale horizontally on a global scale. There is no indication that this level of scalability is needed by the startup, and Cloud SQL costs less, so Cloud SQL is a better option.

274
Q

A user of BigQuery would like to perform ad hoc analysis of some IoT data stored in Cloud Storage. The user will need to create, delete, and modify datasets in BigQuery. Which of the following predefined roles would you assign to that person?
A. dataViewer
B. dataAdmin
C. dataOwner
D. dataJobUser

A

The correct answer is C. A dataOwner has the permissions of dataViewer and dataEditor but can also create, modify, and delete datasets.
Option A is incorrect. dataViewer can only perform read operations, such as listing tables and getting table data.
Option B is incorrect. dataAdmin grants more permissions than explicitly needed, and there is another predefined role, dataOwner, that has the minimal set of permissions needed. Following the principle of least privilege, it is better to assign dataOwner than dataAdmin in this case.
Option D is incorrect. There is no dataJobUser role; there is, however, a jobUser role, which grants permissions related to running jobs, including queries.

275
Q

A startup is providing data collection and analysis services to hospitals. The first service to roll out is an IoT data collection service. The requirements for the data store include at most 20 ms latency and the ability to scale to petabytes of data. The startup wants to focus on building machine learning models to provide a competitive advantage. Which of the following database systems would you recommend?
A. Apache Cassandra
B. Cloud Bigtable
C. Cloud SQL
D. Cloud Storage

A

The correct answer is B. Bigtable is a managed NoSQL database with sub 10 ms latency that scales to petabytes of storage.
Option A is incorrect. An Apache Cassandra system would need to be managed by members of the startup, thus taking them away from their focus on building machine learning models.
Option C is incorrect. Cloud SQL is not designed for sub 20 ms latency or scaling to petabytes of storage.
Option D is incorrect. Cloud Storage is an object storage system, not a database system.

276
Q

The current database used to store players' possessions in an online game is difficult to maintain and update. Company executives at the gaming company want to know what options are available in GCP for replacing it. Key requirements are a flexible schema, support for ACID transactions, and indexes. Data is highly denormalized, so joins are not required. What managed database service would you recommend?
A. Cloud Firestore
B. Cloud Storage
C. Cloud Dataproc
D. Cloud SQL

A

The correct answer is A, Cloud Firestore, which supports all of the key requirements.
Option B is incorrect. Cloud Storage is not a managed database service; it is an object storage service.
Option C is incorrect. Cloud Dataproc is not a managed database service; it is a managed Hadoop and Spark service.
Option D is incorrect. Cloud SQL is a relational database, so it has more structured schemas than a NoSQL database such as Cloud Firestore. Also, joins are not needed, so there is no requirement to use a database that supports joins.

277
Q

A financial services company is required by regulations to keep logs of equity traders’ messaging conversations. The company has collected logs and kept them in an on-premises object storage system. Recently, however, due to an operator error, several days of logs were deleted. The CIO has decided to store logs in the cloud and use any features available to help ensure that there is no more accidental loss of logs. What storage system and feature of that storage system would you recommend?
A. Cloud Storage and retention policies
B. Cloud Storage and access control policies
C. Cloud Firestore and retention policies
D. Cloud Firestore and access control policies

A

The correct answer is A. Cloud Storage is an object storage service, and retention policies ensure that objects are not deleted before some specified time.
Option B is incorrect because although access controls should be used, alone they would not prevent an administrator from accidentally deleting data. Options C and D are incorrect because Cloud Firestore is a NoSQL database, which is not appropriate for archiving log files.
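A minimal sketch of applying a retention policy with the Cloud Storage client library for Python; the bucket name and seven-year period are hypothetical:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("trader-messaging-logs")  # hypothetical bucket

# Objects cannot be deleted or overwritten until they are older than this period.
bucket.retention_period = 7 * 365 * 24 * 60 * 60  # seconds
bucket.patch()
print("Retention period set to", bucket.retention_period, "seconds")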

278
Q

Your company has organized the use of GCP resources into multiple projects across two organizations. This was done for administrative and billing reasons; however, there is a need to share data and resources across projects, including between projects in different organizations. What would you recommend using to enable sharing resources across projects and organizations?
A. Cloud VPN
B. Cloud Interconnect
C. VPC Network Peering
D. Shared VPC

A

The correct answer is C. VPC network peering allows for connecting VPCs between organizations.
Options A and B are incorrect. Those are ways to implement network links between Google Cloud and on-premises data centers.
Option D is incorrect because Shared VPCs are limited to sharing VPCs in projects within the same organization.

279
Q

You have created a VPC for a group of data scientists who will be analyzing data using specialized statistical tools that they have developed. They will be running their tools on instances in Compute Engine. As you are setting up the environment, you are considering what firewall rules to implement. Some data scientists prefer to use Linux desktops, while others prefer Windows clients. What default firewall rule or rules will allow both Linux and Windows users to access resources?
A. default-allow-rdp
B. default-allow-ssh
C. default-allow-rdp and default-allow-ssh
D. default-allow-rdp and default-allow-icmp

A

The correct answer is C. default-allow-rdp will allow Windows users to use the Remote Desktop Protocol (RDP) to access instances, and default-allow-ssh will allow Linux users to use SSH to access instances. Options A and B are incorrect because each is missing a necessary firewall rule.
Option D is incorrect. default-allow-icmp allows ingress ICMP traffic, but it does not provide the RDP or SSH access that clients need.

280
Q

You are reviewing a proposed design for a new service that will be available to customers in North America and Europe initially and eventually to customers across the globe. The service requires high availability so that a failure in one region should not disrupt services. The design includes the use of multiple projects, Cloud Spanner, load balancing using Network TCP/UDP load balancers, and TLS-based encryption for data in transit. What part of this design fails to meet requirements?
A. Cloud Spanner is not suitable for global scale.
B. Network TCP/UDP is not a global load balancer.
C. TLS-based encryption is not appropriate for data in transit.
D. It does not include the use of a NoSQL database.

A

The correct answer is B. This design requires the use of a global load balancer, so Network TCP/UDP load balancing should not be used.
Option A is incorrect because Cloud Spanner is a global, horizontally scalable relational database.
Option C is incorrect because TLS is appropriate for encrypting data in transit.
Option D is incorrect because there is no requirement for a NoSQL database.

281
Q

Your team is using Kubernetes Engine for several services. A project stakeholder has heard that Kubernetes can automatically detect an unhealthy pod and replace it with a healthy pod. The stakeholder questions how that would impact any data the unhealthy pod had persisted to storage. What mechanism would you describe to the stakeholder as the way Kubernetes manages that situation?
A. ReplicaSets
B. Deployments
C. PersistentVolumeClaims
D. Ingress

A

The correct answer is C. A PersistentVolumeClaim is a logical way to link a pod to persistent storage; a PersistentVolume is Kubernetes' way of representing storage allocated or provisioned for use by a pod.
Option A is incorrect. ReplicaSets are controllers that manage the number of pods running in a deployment.
Option B is incorrect. A deployment is a set of pods running the same version of an application.
Option D is incorrect. An Ingress is an object for controlling external access to services running in a cluster.
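A rough sketch of creating a PersistentVolumeClaim with the official Kubernetes Python client; the claim name, namespace, and 10Gi size are hypothetical:

from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-state-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
# A pod that mounts this claim can be replaced without losing the data on the volume.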

282
Q

Your DevOps team is automating several processes. Some of the processes need to run on a regular schedule, and some need to run in response to an event, such as a file being uploaded to Cloud Storage or a background process writing a message to a Cloud Pub/Sub topic. What two GCP services could you use to help automate these processes?
A. Cloud Functions and Cloud Pub/Sub
B. Cloud Functions and App Engine Cron Service
C. App Engine Cron Service and Cloud Pub/Sub
D. App Engine Cron Service and Cloud Dataprep

A

The correct option is B. Cloud Functions can be used to run event-triggered processes, and App Engine Cron Service can be used for regularly scheduled jobs. Options A and C are incorrect because Cloud Pub/Sub is a message queue service, and it is not used for running processes.
Option D is incorrect because Cloud Dataprep is a tool for preparing data for analysis.
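A minimal sketch of a first-generation background Cloud Function triggered by an object being uploaded to a Cloud Storage bucket; the processing step is hypothetical:

def handle_upload(event, context):
    """Runs each time an object is finalized in the trigger bucket."""
    bucket = event["bucket"]
    name = event["name"]
    # Hypothetical processing step for the uploaded file.
    print(f"Processing gs://{bucket}/{name} (event ID {context.event_id})")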

283
Q

A distributed application that you are running needs to store state information. The application runs in a managed instance group. On several occasions, an instance in the managed instance groups fails and is replaced by another instance. This causes a loss of the state data that was stored in the unhealthy instance’s memory. What GCP service might you use to mitigate the risk of losing state when an instance fails?
A. Cloud Memorystore
B. Cloud Composer
C. Cloud Dataprep
D. Cloud Build

A

The correct answer is A. Cloud Memorystore can be used to cache state data outside of instances, so that if an instance fails, state data is not lost.
Option B is incorrect. Cloud Composer is a workflow service based on Apache Airflow.
Option C is incorrect. Cloud Dataprep is a tool for preparing data for analysis.
Option D is incorrect. Cloud Build is a service for building container images.
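A minimal sketch of externalizing state to Memorystore for Redis using the standard redis client for Python; the host IP and key names are hypothetical:

import redis

# Connect to the Memorystore instance's private IP from within the VPC.
cache = redis.Redis(host="10.0.0.3", port=6379)

cache.set("session:user-42", '{"step": 7, "partial_result": 1200}')
state = cache.get("session:user-42")  # survives replacement of the serving instance
print(state)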

284
Q

A web application front end accepts data from a user, and that data is written to a relational database. The front end does not need an acknowledgment from the database, but it waits for the database operation to return before doing other processing. The database can sometimes lag behind the front-end application leading to long write operations. How could you reduce the time that the front-end application must wait when it sends data to the database?
A. Add more memory to front-end instances
B. Add more memory to backend instances
C. Write data to a Cloud Pub/Sub topic and have a database application pull from the topic
D. Write data to BigQuery

A

The correct answer is C. You could decouple the front end and database by writing to a Cloud Pub/Sub topic. The front end can then continue processing without waiting for the database. Options A and B are incorrect because there is no indication that memory is a limiting resource.
Option D is incorrect. BigQuery is an analytical database, not a relational database.
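A rough sketch of the backend side of that design: a worker pulls buffered writes from a Pub/Sub subscription and applies them to the database at its own pace. The project, subscription name, and apply_to_database() helper are hypothetical:

from google.cloud import pubsub_v1

def apply_to_database(payload: bytes) -> None:
    print("writing", payload)  # placeholder for the real database insert

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "db-writes-sub")

response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 10})
ack_ids = []
for received in response.received_messages:
    apply_to_database(received.message.data)
    ack_ids.append(received.ack_id)

if ack_ids:
    # Acknowledge only after the writes succeed so unprocessed data is redelivered.
    subscriber.acknowledge(request={"subscription": subscription_path, "ack_ids": ack_ids})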

285
Q

You are migrating a stream processing application currently running on Apache Flink using the Apache Beam interface. You want to move it to a managed service in GCP. What service would you choose?
A. Cloud SQL
B. Cloud Dataflow
C. Cloud Data Fusion
D. Cloud Dataprep

A

The correct answer is B. Cloud Dataflow is a managed Apache Beam service for stream and batch processing.
Option A is incorrect. Cloud SQL is a managed relational database service.
Option C is incorrect. Cloud Data Fusion is a managed service for building and managing ETL pipelines.
Option D is incorrect. Cloud Dataprep is a tool for preparing data for analysis.
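A minimal batch-flavored sketch of an Apache Beam pipeline that runs on Cloud Dataflow by selecting the Dataflow runner; the bucket paths, project, and parsing step are hypothetical, and a streaming version would read from Pub/Sub and add windowing:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",       # swap to "DirectRunner" for local testing
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "CountColumns" >> beam.Map(lambda fields: len(fields))
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/column-counts")
    )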

286
Q

You are migrating a log analysis process from an on-premises Hadoop cluster. You’d like to use a managed service in GCP. Which service would you recommend?
A. Cloud Spanner
B. Cloud Dataproc
C. Cloud Dataflow
D. Cloud Dataprep

A

The correct answer is B. Cloud Dataproc is a managed Hadoop and Spark service in GCP.
Option A is incorrect. Cloud Spanner is a horizontally scalable, global-scale relational database.
Option C is incorrect. Cloud Dataflow is a stream and batch processing framework based on Apache Beam.
Option D is incorrect. Cloud Dataprep is a tool for preparing data for analysis.

287
Q

Your company currently uses Prometheus and Grafana for collecting and visualizing performance metrics. As part of your migration to GCP, you’d like to switch to the GCP service for collecting and analyzing metrics. What service is that?
A. IAM
B. Cloud Monitoring
C. Cloud Trace
D. Cloud Logging

A

The correct option is B. Cloud Monitoring is the managed service for collecting and visualizing performance metrics.
Option A is incorrect. IAM is an identity and access control management service.
Option C is incorrect. Cloud Trace is for distributed tracing.
Option D is incorrect. Cloud Logging is for collecting and searching logs.

288
Q

Your company needs to store legacy application data files. The total volume is 10 TB, but each file is 2 GB or less. Each file, on average, will be accessed once a month at most. You’d like to minimize storage costs. What storage option would you choose?
A. Cloud Spanner
B. Cloud Storage Standard class storage
C. Cloud Storage Nearline class storage
D. Cloud Storage Archive class storage

A

The correct option is C. Cloud Storage Nearline is designed for data that is retrieved once a month or less and costs less than Standard class storage.
Option A is incorrect. Cloud Spanner is a relational database and not suitable for storing files.
Option B is incorrect because Cloud Storage Standard class storage is more expensive than Cloud Storage Nearline.
Option D is incorrect because Archive class storage is designed for data that is accessed less than once per year.

289
Q

A team of developers is creating a collaboration support tool for your company. They would like to use a file system that is accessible from Compute Engine and Kubernetes Engine. What GCP service would you recommend?
A. Cloud Storage
B. Cloud Filestore
C. Cloud Dataflow
D. Cloud Bigtable

A

The correct answer is B. Cloud Filestore is a network-attached storage service that provides a file system accessible from Compute Engine and Kubernetes Engine.
Option A is incorrect. Cloud Storage is an object storage system and does not provide file system services. (You could use Cloud Storage FUSE for file system-like access, but that was not included in the option, and it is not a true POSIX file system.)
Option C is incorrect. Cloud Dataflow is a stream and batch processing service.
Option D is incorrect. Cloud Bigtable is a NoSQL database.

290
Q

Data scientists have developed a large data store using HBase. They would like to avoid the administrative overhead of managing a Hadoop cluster for HBase. They are open to migrating the data to a managed service. Which of the following would you recommend?
A. Cloud Bigtable
B. Cloud Firestore
C. Cloud SQL
D. BigQuery

A

The correct answer is A. Bigtable is a scalable, NoSQL database with an HBase interface.
Option B is incorrect. Cloud Firestore is a document NoSQL database.
Option C is incorrect. Cloud SQL is a managed relational database.
Option D is incorrect. BigQuery is an analytical database using SQL for querying and does not provide an HBase interface.

291
Q

Which of the following is a property that guarantees that when a transaction executes, the database is left in a state that complies with constraints, such as uniqueness requirements and referential integrity (which ensures that foreign keys reference a valid primary key)?
A. Consistency
B. Atomicity
C. Scalability
D. Durability

A

The correct answer is A. Consistency is a property that guarantees when a transaction executes, the database is left in a consistent state as described in the question.
Option B is incorrect. Atomicity ensures that all steps in a transaction complete or no steps take effect.
Option C is incorrect. Scalability is related to provisioning sufficient resources for a workload.
Option D is incorrect. Durability ensures that once a transaction is committed, its effects are not lost, for example through replication or backups.

292
Q

BigQuery uses the concept of which of the following for organizing tables and views?
A. Shard
B. Dataset
C. Tablespace
D. Data store

A

The correct option is B. Datasets are used to organize tables and views in BigQuery.
Option A is incorrect. A shard is a concept in horizontal scaling.
Option C is incorrect. Tablespaces are a structure used in some relational databases.
Option D is incorrect. A data store is a general term for data management systems.

293
Q

You have a data warehouse in BigQuery. Data is less likely to be accessed the older it gets. What feature of BigQuery would you recommend using?
A. Time-partitioned tables
B. Space-partitioned tables
C. Wide-column database structure
D. Network database structure

A

The correct option is A. Time-partitioned tables can be used. Options B, C, and D are not actually options in BigQuery.
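A minimal sketch of creating a time-partitioned table with the BigQuery client library for Python; the dataset, column names, and two-year partition expiration are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.warehouse.events",
    schema=[
        bigquery.SchemaField("event_ts", "TIMESTAMP"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
    expiration_ms=2 * 365 * 24 * 60 * 60 * 1000,  # drop partitions after roughly two years
)
client.create_table(table)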

294
Q

You would like to migrate from an on-premises relational database to a managed database service that can scale horizontally and distributes data automatically among multiple regions. Which managed database service would you recommend?
A. Cloud SQL
B. Cloud Spanner
C. Cloud Bigtable
D. Cloud Firestore

A

The correct option is B. Cloud Spanner is a horizontally scalable relational database that automatically distributes data to multiple regions.
Option A is incorrect. Cloud SQL is not horizontally scalable. Options C and D are incorrect. They are not relational databases.

295
Q

An application requires the use of the SSL/TLS protocol when transmitting data; this is not HTTPS traffic. The application is distributed and, because of high availability requirements, deployed in multiple regions. Which type of global load balancer would you recommend?
A. HTTPS load balancer
B. TCP Proxy load balancer
C. SSL Proxy load balancer
D. Network TCP/UDP

A

The correct answer is C. The SSL Proxy load balancer is recommended for non-HTTPS traffic.
Option A is incorrect. This is non-HTTPS traffic.
Option B is incorrect. TCP Proxy load balancers are recommended for non-HTTPS and non-SSL traffic.
Option D is incorrect. Network TCP/UDP is not a global load balancer.

296
Q

Reviewing network design documentation, you notice several IP addresses with a /12 suffix, such as 172.16.0.0/12. What does this tell you about the IP addresses on that network?
A. 12 bits are used to specify the subnet mask.
B. 12 bits are used to specify the host address.
C. 20 bits are used to specify the subnet address.
D. 20 bits are used to identify the VPN.

A

The correct option is A. The /12 indicates that 12 bits are used for the subnet mask.
Option B is incorrect. 20 bits are used to specify the host address in this case.
Option C is incorrect. 20 bits are not used for the subnet mask.
Option D is incorrect. IP CIDR blocks are not used to specify a VPN.
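A quick check of what a /12 block implies, using Python's standard ipaddress module:

import ipaddress

network = ipaddress.ip_network("172.16.0.0/12")
print(network.netmask)        # 255.240.0.0 -> 12 bits of network/subnet mask
print(network.num_addresses)  # 1048576 -> the remaining 20 bits identify hosts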

297
Q

All systems within all clouds and private networks can communicate with each other using which of the following topologies?
A. Mirrored
B. Meshed
C. Gated egress
D. Gated ingress

A

The correct answer is B. In a meshed topology, all clouds and private networks can communicate.
Option A is incorrect. In a mirrored topology, the public cloud and private on-premises environments mirror each other. Options C and D are incorrect. Gated egress and ingress topologies control access to APIs.

298
Q

You need to convert a large number of video files to a new format. You have an application running in Compute Engine that is converting the files. Initial tests indicate that you will overrun your budget by 40 percent using standard instances. What could you try to cut costs?
A. Use additional storage
B. Use high memory configurations machine types
C. Run a cluster of servers using managed instance groups
D. Use preemptible VMs

A

The correct answer is D. Preemptible machines cost up to 80 percent less than standard VM instances. Options A and B are incorrect because adding resources will not lower the cost of the machines, and there is no indication that the instances are resource limited.
Option C is incorrect. Running instances in a managed instance group does not lower the costs of the instances.