A. Start Here PCA Beginner Topics Flashcards

1
Q

What is a GCP Service that handles streaming and batch data?

A

Cloud DataFlow

2
Q

What does DLP stand for and how is it used?

A

Data Loss Prevention and it is used to sanitize data and remove sensitive information

3
Q

App Engine is what type of service?

A

PaaS (Platform as a Service)

4
Q

Compute Engine (GCE) is what type of service?

A

IaaS (Infrastructure as a Service)

5
Q

What are the FireStore Components?

A

Field
Collection Group
Document
Document ID

6
Q

What are the Cloud DataStore Components?

A

Kind
Entity
Property
Key

7
Q

If a Compute Engine Application exists in a single VPC across three regions and your application must communicate over VPN to your company’s on-premise network then how many VPN Gateways are required?

A

Three Cloud VPN gateways are required. Cloud VPN gateways are bound to a single region, so create a Cloud VPN gateway in each region.

8
Q

What type of migration model does Dress4Win state in their business requirements?

A

Lift and Shift

9
Q

What are the 5 sequential steps for cloud migration?

A

1. Assess
2. Pilot
3. Move Data
4. Move Applications
5. Cloudify & Optimize

10
Q

Dynamic Routing uses a _________ to automatically discover new subnet routes

A

Cloud Router

11
Q

The 4 layers of the GCP Cloud Resource Hierarchy

A

1. Organization
2. Folders
3. Projects
4. Resources

12
Q

Which network interconnect method connects your network to a GCP VPC over a public internet encrypted tunnel?

A

Cloud VPN

13
Q

Command to create a new storage bucket

A

gsutil mb -l {location} -c {storage class} gs://BucketName
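A filled-in sketch of the same command; the location, storage class, and bucket name are illustrative (bucket names must be globally unique):

```shell
# Create a standard-class bucket in us-central1 (example values).
gsutil mb -l us-central1 -c standard gs://my-example-bucket-12345

# Confirm the bucket exists.
gsutil ls -b gs://my-example-bucket-12345
```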

14
Q

Cloud Router uses this protocol to handle dynamic routing between locations

A

BGP (Border Gateway Protocol)

15
Q

Where can you export Stackdriver logs to (not counting customer locations)

A

1. Cloud Storage
2. Cloud Pub/Sub
3. BigQuery

16
Q

What is the max speed of a single Cloud VPN tunnel (non-peered)

A

1.5 Gbps

17
Q

Every load balancer must have a ___ and a ____

A

Frontend || Backend

18
Q

Role necessary to link a project to a billing account

A

Billing Account User

19
Q

How many VPN tunnels can you create in a single Cloud VPN gateway

A

8

20
Q

What is the default, implied status of all egress traffic in a VPC firewall

A

Allow All

21
Q

Google Cloud Storage holds what type of data?

A

Unstructured

22
Q

This service is required to setup dynamic routing over a Cloud VPN Service

A

Cloud Router

23
Q

Where does Cloud Dataprep load data from?

A

Cloud Storage and BigQuery

24
Q

The two methods of permissions for Google Cloud Storage

A

1. IAM: Identity and Access Management
2. ACL: Access Control List

25
Q

This database service is ideal for low-latency storage of time-series data

A

Cloud BigTable

26
Q

Relational Databases

A

Cloud SQL
Cloud Spanner

27
Q

Non-Relational Databases

A

Cloud DataStore
Cloud FireStore
Cloud BigTable

28
Q

Data Warehouse

A

BigQuery

29
Q

This managed database is a no-ops petabyte-scale data warehouse that queries data in standard SQL Format

A

BigQuery

30
Q

Retention period for data access logs

A

30 days

31
Q

______ Roles apply to the entire project.

A

Primitive

32
Q

An HTTP load balancer can forward traffic by ____ and ____

A

Location || Content

33
Q

Which GCP load balancers are multi-regional in scope?

A

1. HTTP(S) Load Balancer
2. TCP Proxy
3. SSL Proxy

34
Q

VPC subnets can exist in more than one _____

A

zone (in the same region)

35
Q

Which connection protocol does the Cloud VPN service use?

A

IPSEC

36
Q

This IAM member allows public/anonymous access to a resource

A

allUsers

37
Q

Google account type for members of an organization WITHOUT access to Google apps

A

Cloud Identity Domain

38
Q

What type of managed database is ideal for web and mobile applications?

A

Cloud DataStore

39
Q

More lightweight container image option to run on GKE

A

Alpine Linux

40
Q

The name for the modular components of a Cloud Deployment Manager Configuration

A

Templates

41
Q

GCP service for providing a ‘single pane of glass’ for monitoring resources and alerts across GCP projects and AWS

A

StackDriver Monitoring

42
Q

VPC firewall rules are applied on a per-instance basis

A

True

43
Q

What layer of the Cloud Resource Hierarchy are chargeable resources hosted in?

A

Projects

44
Q

Which networking interconnect option connects your business directly to Google, but not directly to GCP VPC?

A

Peering

45
Q

The 3 Primitive Roles and the types of access they give:

A

1. Owner: Full project access, including billing and assigning IAM roles
2. Editor: Full access minus billing and IAM access
3. Viewer: View only

46
Q

Google account type for a collection of individual Google Accounts

A

Google Groups

47
Q

When to use Dataproc over Data Flow

A

When using Hadoop/Spark workflows

48
Q

Another term for mapping Cloud Identity to Active Directory to duplicate account information.

A

Federation

49
Q

What is a pod on GKE?

A

Smallest deployable unit. Contains one or more containers that run on nodes

50
Q

The three IAM Role Types

A

1. Primitive
2. Predefined
3. Custom

51
Q

Two format options for Cloud Deployment Manager template files

A

Jinja
Python

52
Q

The five (non-beta) Stackdriver services

A

1. Logging
2. Trace
3. Monitoring
4. Error Reporting
5. Debug

53
Q

Cloud Storage can act as a block-level SAN replacement (True/False)

A

False; you would need to use a persistent disk for a direct SAN replacement

54
Q

The two Memcache service levels

A

1. Dedicated
2. Shared

55
Q

GCP service for asynchronous messaging, used for streaming data ingest

A

Cloud Pub/Sub

56
Q

In a Shared VPC network, the ____ project hosts the VPC components, and the ___ project uses hosted VPC resources

A

Host
Service

57
Q

This managed database is ideal for NoSQL purposes, is NoOps in setup/maintenance, and is ideal for mobile save game state

A

Cloud DataStore

58
Q

What is a service account?

A

1. Assigned to an application or a server
2. Authenticated with a service account key
3. Both a member and a resource

59
Q

How to easily apply VPC firewall rules to individual instances instead of the entire network

A

Network Tags
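As an illustrative sketch (rule name, tag, instance name, and zone are assumptions), a firewall rule scoped by network tag applies only to instances carrying that tag:

```shell
# Rule applies only to VMs tagged "web-server", not the whole network.
gcloud compute firewall-rules create allow-http-web \
    --network=default \
    --allow=tcp:80,tcp:443 \
    --target-tags=web-server

# Attach the tag to an instance so the rule starts applying to it.
gcloud compute instances add-tags my-vm --tags=web-server --zone=us-central1-a
```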

60
Q

Admin Activity Logs are ____ by default

A

Enabled

61
Q

When are un-managed instance groups useful?

A

Migrating grouped servers to the cloud with minimal disruption in workflow

62
Q

____ provides a direct physical connection to connect your on-premises network to a Google Cloud VPC network.

A

Cloud Interconnect

63
Q

How to optimize your CDN cache performance:

A

Improve your cache hit ratio

64
Q

Collection of statements that define who has access to what resource on GCP

A

IAM Policy

65
Q

This application is required to configure a Cloud Storage bucket as a mounted disk on a GCE instance.

A

Cloud Storage FUSE (gcsfuse)

66
Q

a managed instance group is created from an ____

A

Instance Template

67
Q

Permissions for working with VPC networks fall under this service.

A

Compute Engine

68
Q

What are the 5 load balancer options in GCP

A

1. Internal
2. Network
3. HTTP(S)
4. TCP Proxy
5. SSL Proxy

69
Q

How to add subnets in other regions to the same VPC network:

A

No configuration necessary

70
Q

What are the two database structure formats we discussed in this course?

A

Relational (SQL) || Non-Relational (NoSQL)

71
Q

An export in Stackdriver Logging requires what components to setup?

A

A filter to select log entries
A destination to export the filtered logs
A sink, which pairs the filter with its destination

72
Q

Format of Deployment Manager configuration files

A

YAML format
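A minimal sketch of such a configuration (resource name, zone, machine type, and image are assumptions); Deployment Manager configs are a YAML `resources` list:

```yaml
# Hypothetical Deployment Manager config declaring one VM.
resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
```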

73
Q

GCP’s service that is built on Apache Beam, used for processing both batch and streaming data

A

Cloud DataFlow

74
Q

Retention period for admin activity logs

A

400 days

75
Q

This type of disk is directly connected to a GCE instance and must be set up on instance creation

A

Local SSD

76
Q

Where can billing data be exported?

A

1. Cloud Storage
2. BigQuery

77
Q

Which are the benefits of quotas?

A

1. Protection against unexpected spikes in resource usage
2. Prevention of runaway consumption due to error or malicious intent

78
Q

What could be the cause if an Instance Group VMs keep restarting every minute?

A

1. A failing health check
2. A firewall that does not allow proper access to the instance group VMs (subnet, tag) from the load balancer IP range

79
Q

MountKirk Games is looking to migrate how many environments to the cloud?

A

Two environments, with different storage for each service:
1. Game backend on Google Compute Engine (GCE)
2. Analytics

80
Q

What would fulfill the MountKirk technical requirement for “connecting a transactional database service to manage user profiles and game state”?

A

Cloud Datastore - NoSQL transactional database - perfect for game user-profiles and game states

81
Q

What would fulfill the MountKirk technical requirement “Store game activity in a timeseries database service for future analysis”?

A

Store in BigQuery.
BigQuery vs. BigTable:
- BigQuery is a lot more managed
- There is no requirement for low-latency analytics response time (which would call for BigTable)
- BigQuery's response time is measured in seconds, and it scales efficiently
- BigQuery reading from BigTable is also a possible answer

82
Q

What would fulfill the MountKirk technical requirement “As the System scales, ensure that data is not lost due to processing backlogs. “?

A

1. HTTP Load Balancer: automatically scales to meet demand
2. Managed Instance Groups: also auto-scale
3. Pub/Sub: buffers late/slow data

83
Q

What would fulfill the MountKirk technical requirement “Run hardened Linux Distro”?

A

Managed Instance groups with custom images

84
Q

What would fulfill the MountKirk technical requirement “Process incoming (streaming) data on the fly directly from the game servers?

A

Connect services (Stackdriver logs/metrics, GCE game servers) with Pub/Sub
Process with DataFlow

85
Q

What would fulfill the MountKirk technical requirement “Process data that arrives late because of slow mobile networks” ?

A

Pub/Sub: scales and buffers messages
DataFlow: accounts for late/out-of-order data

86
Q

What would fulfill the MountKirk technical requirement “Allow queries to access at least 10 TB of historical data.”?

A

BigQuery - SQL Queries against data

87
Q

What would fulfill the MountKirk technical requirement “Process files that are regularly uploaded by users’ mobile devices”?

A

Upload to Cloud Storage
Process via DataFlow

88
Q

What would fulfill the Dress4Win technical requirement equivalent of “MySQL”?

A

Data center » GCP; MySQL » Cloud SQL (lift and shift)
5 TB of data fits within Cloud SQL's 10 TB size limit
Single region, since there is no global footprint requirement
Migration:
1. Create a replica server managed by Cloud SQL
2. Once the replica is synced, update applications to point to the replica
3. Promote the replica to a stand-alone instance

89
Q

What would fulfill the Dress4Win technical requirement “Redis 3 server Cluster” ?

A

Two options:
1. Run a Redis server on Compute Engine
2. Use the new Memorystore managed Redis database

90
Q

What would fulfill the Dress4Win technical requirement “40 Web Application servers providing micro-services based APIs and static content. “Tomcat - Java”, “Nginx”, “4 core CPUs”,”32 GB of RAM”?

A

The existing environment has lots of idle time.
Use managed instance groups with autoscaling and custom machine types (fits lift and shift).
Alternatively, re-architect for GKE/GAE microservices deployments in future phases.

91
Q

What would fulfill the Dress4Win technical requirement “20 Apache Hadoop/Spark servers:”?

A

Cloud Dataproc connecting to Cloud Storage

92
Q

What would fulfill the Dress4Win technical requirement “3 RabbitMQ servers for messaging, social notifications, and events:”?

A

Pub/Sub is the likely replacement.
The same environment can also be deployed on a Compute Engine instance group (lift and shift).

93
Q

What would fulfill the Dress4Win technical requirement “Jenkins, monitoring, bastion hosts, security scanners”?

A

No managed service equivalents.
Use GCE instances with custom machine types.
Consider the Marketplace as well.

94
Q

What would fulfill the Dress4Win technical requirement “iSCSI for VM hosts/Fiber channel SAN - Backup for MySQL databases” ?

A

SAN/iSCSI requires block storage.
Use persistent disks working in a SAN cluster.

95
Q

What would fulfill the Dress4Win technical requirement “NAS - image storage, logs, backups”?

A

Cloud Storage is a direct replacement, with effectively infinite scale in a single bucket.
Persistent disks are also an option.

96
Q

What would fulfill the TerramEarth business requirement “Decrease unplanned vehicle downtime to less than 1 week”?

A

Convert to 100% cellular connectivity

97
Q

What would fulfill the TerramEarth business requirement “Support the dealer network with more data on how their customers use their equipment to better position new products and services”?

A

Share insights with Data Studio

98
Q

What would fulfill the TerramEarth business requirement “Have the ability to partner with different companies – especially with seed and fertilizer suppliers in the fast-growing agricultural business – to create compelling joint offerings for their customers”?

A

- Share insights with Data Studio
- BigQuery / ML analytics to predict customer needs
- Tech lead will enable partnerships

99
Q

What would fulfill the TerramEarth technical requirement “expand beyond a single datacenter to decrease latency to American midwest and east coast”?

A

Multi-regional/global services

100
Q

What would fulfill the TerramEarth technical requirement “create a backup strategy”?

A

Regular BigQuery Exports to Cloud Storage

101
Q

What would fulfill the TerramEarth technical requirement “Increase the security of data transfer from equipment to the datacenter”?

A

- Cloud Endpoints: manage and protect APIs
- Cloud IoT Core: also provides managed security
- Customer-supplied encryption keys

102
Q

What would fulfill the TerramEarth technical requirement “Improve data warehouse”?

A

- Cloud DataFlow: transform incoming streaming data to the preferred format
- Alternatively, stage in Cloud Storage, clean with Cloud Dataprep, and run a DataFlow-backed job into BigQuery

103
Q

What would fulfill the TerramEarth technical requirement “Use Customer and equipment data to anticipate customer needs”?

A

Pair BigQuery with machine learning services for predictive analytics

104
Q

_______ provides visual notebooks for working with BigQuery/Cloud ML Engine data for ML/analytics?

A

Datalab

105
Q

What does CSEKs stand for?

A

Customer-supplied encryption keys

106
Q

What does CMEK stand for?

A

Customer-managed encryption keys

107
Q

What is a use case for a .boto file?

A

Use a .boto configuration file to supply the customer-supplied encryption key, then use gsutil to upload the files
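A sketch of the relevant .boto fragment; the key value is a placeholder and must be replaced with a real base64-encoded AES-256 key:

```
[GSUtil]
encryption_key = <base64-encoded-AES-256-key>
```

With this in place, gsutil transparently supplies the key on `gsutil cp` uploads so objects are encrypted with your customer-supplied key.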

108
Q

______ works with global HTTP(S) load balancers to deliver defense against DDoS attacks.

A

Cloud Armor

109
Q

_________ will allow VMs on your subnet that lack external IP addresses to access Google APIs and services

A

Private Google Access

110
Q

Resources not hosted on GCP should use a _____

A

CSEK (Customer-Supplied Encryption Key)

111
Q

Subnets are ________ resources

A

Regional

112
Q

An IAM Policy Consists of a ____________

A

List of Bindings

113
Q

What role gives you permission to set up a Shared VPC

A

Shared VPC Admin Role

114
Q

Based on MountKirk Games’ technical requirements, what GCP services/infrastructure will they use to host their game backend?

A

Managed Instance Group on Compute Engine

115
Q

What is Google Container Engine?

A

Google Container Engine (GKE) is the older name for Google Kubernetes Engine, Google's managed Kubernetes container orchestration service

116
Q

What does the HTTP status error response 401 mean?

A

Unauthorized

117
Q

You want to enable your running Google Kubernetes cluster to scale as demand for your application changes. What should you do?

A

Update the existing Kubernetes Engine cluster with the following command: gcloud container clusters update CLUSTER_NAME --enable-autoscaling --min-nodes=1 --max-nodes=10

118
Q

Your company places a high value on being responsive and meeting customer needs quickly. Their primary business objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced. Which two actions can you take?

A

1. Use source code security analyzers as part of the CI/CD pipeline
2. Run a vulnerability security scanner as part of your continuous integration / continuous delivery (CI/CD) pipeline

119
Q

What are 2 characteristics of GCP VPC subnets?

A

1. Each subnet is regional and spans the zones within its region, providing a high-availability environment.
2. By default, all subnets can route between each other, whether they are private or public.

120
Q

What is the minimum CIDR size for a subnet?

A

/29

121
Q

Which of TerramEarth’s legacy enterprise processes in their existing data centers would experience significant change as a result of increased Google Cloud Platform adoption?

A

Capacity planning, utilization measurement, data center expansion

122
Q

You have a mission-critical database running on an instance on Google Compute Engine. You need to automate a database backup once per day to another disk. The database must remain fully operational and functional and can have no downtime. How can you best perform an automated backup of the database with minimal downtime and minimal costs?

A

Use a cron job to schedule your application to back up the database to another persistent disk.

123
Q

Once a month, TerramEarth's vehicles are serviced and the data is downloaded from the maintenance port. The data analysts want to query the huge amount of data collected from these vehicles and analyze the overall condition of the vehicles. TerramEarth's management is looking for a solution that is cost-effective and would scale for future requirements.

A

Load the data from Cloud Storage to BigQuery and run queries on BigQuery

124
Q

Your company’s architecture is shown in the diagram. You want to automatically and simultaneously deploy new code to each Google Container Engine cluster. Which method should you use?

A

Use an automation tool, such as Jenkins

125
Q

BigQuery Best practices for controlling cost

A

1. Avoid SELECT *; query only the columns that you need
2. Use the --dry_run flag in the CLI to preview queries and estimate their cost before running them
3. If possible, partition your BigQuery tables by date
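For example, a dry-run sketch (project, dataset, and table names are assumptions); bq reports the bytes the query would process without executing it, so no cost is incurred:

```shell
# --dry_run estimates bytes processed; the query itself never runs.
bq query --use_legacy_sql=false --dry_run \
    'SELECT name, created_at FROM `my-project.my_dataset.my_table` WHERE created_at > "2020-01-01"'
```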

126
Q

The security team has disabled external SSH access into production virtual machines in GCP. The operations team needs to remotely manage the VMs and other resources. What can they do?

A

Grant the operations team access to use Google Cloud Shell

127
Q

Dress4Win has asked you to recommend machine types they should deploy their application servers to. How should you proceed?

A

Recommend that Dress4Win deploy into production with the smallest instances available, monitor them over time, and scale the machine type up until the desired performance is reached.

128
Q

What is Google’s continuous integration solution?

A

Cloud Build

129
Q

Kubernetes Engine offers integrated support for two types of ________ for a publicly accessible application:

A

Cloud Load Balancing

130
Q

URL maps are used with the following Google Cloud products:

A

1. External HTTP(S) Load Balancing
2. Internal HTTP(S) Load Balancing
3. Traffic Director

131
Q

Your customer is moving an existing corporate application from an on-premises data center to the Google Cloud Platform. The business owner requires minimal user disruption. There are strict security team requirements for storing passwords. What authentication strategy should they use?

A

Federate authentication via SAML 2.0 to the existing Identity Provider

132
Q

You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to BigQuery. What should you do to fix the script?

A

Run your script on a new virtual machine with the BigQuery access scope enabled. The error is most likely caused by an access scope issue: a new instance gets the Compute Engine default service account, but access to most services, including BigQuery, is not enabled by default.

133
Q

As part of migrating plans to the cloud, Dress4Win wants to set up a managed logging and monitoring system so they can understand and manage workload based on traffic spikes and patterns. They want to ensure that:
- The infrastructure can be notified when it needs to scale up and down to handle the daily workload
- Their administrators are notified automatically when their application reports errors
- They can filter their aggregated logs down to debug one piece of the application across many hosts
Which Google Stackdriver features should they use?

A

Monitoring, Logging, Debug, Error Report

134
Q

You work in a small company where everyone should be able to view the resources of a specific project. You want to grant them access following Google’s recommended practices. What should you do?

A

Create a new Google Group and add all users to the group. Use “gcloud projects add-iam-policy-binding” with the Project Viewer role and Group email address
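A concrete sketch of that command; the project ID and group address are assumptions:

```shell
# Grant the Project Viewer role to the whole group in one binding.
gcloud projects add-iam-policy-binding my-project \
    --member="group:all-staff@example.com" \
    --role="roles/viewer"
```

Binding the role to the group means membership changes are handled in the group, not in IAM policy.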

135
Q

One of your primary business objectives is being able to trust the data stored in your application. You want to log all changes to the application data. How can you design your logging system to verify the authenticity of your logs?

A

Digitally sign each timestamp and log entry and store the signature. To verify the authenticity of your logs if they are tampered with or forged, you can hash each timestamp or log entry to generate a digest, then digitally sign the digest with a private key to produce a signature. Anybody with your public key can verify that signature to confirm that it was made with your private key, and can tell if the timestamp or log entry was modified. You can put the signature files into a folder separate from the log files; this separation lets you enforce granular security policies.
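A runnable sketch of the idea using openssl; the file names and log line are illustrative, and in practice the private key would live in a KMS rather than on local disk:

```shell
# A hypothetical log entry to protect.
echo '2020-06-01T12:00:00Z user=alice action=update-record' > entry.log

# Generate an RSA keypair (for illustration only).
openssl genpkey -algorithm RSA -out priv.pem
openssl pkey -in priv.pem -pubout -out pub.pem

# Hash the entry and sign the digest with the private key.
openssl dgst -sha256 -sign priv.pem -out entry.sig entry.log

# Anyone holding only the public key can confirm the entry is untampered.
openssl dgst -sha256 -verify pub.pem -signature entry.sig entry.log   # prints "Verified OK"
```

Storing entry.sig in a separate folder from entry.log lets you apply different access policies to logs and their signatures, as the answer describes.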

136
Q

Mountkirk is setting up its backend platform for a new game. They expect the new game to become popular once it is released. The platform must adhere to their technical requirements. Select the Google Cloud services that would fulfill all their requirements.

A

Managed Instance Group with autoscaling enabled, Cloud Datastore, BigQuery, DataFlow:
1. Dynamically scale up or down based on game activity: Managed Instance Group with autoscaling
2. Connect to a transactional database service to manage user profiles and game state: Cloud Datastore, which is good for user profiles that deliver a customized experience based on the user's past activities and preferences (gaming)
3. Store game activity in a time-series database service for future analysis: BigQuery is good for time-series data unless 'low latency' is specified, in which case BigTable would be a better fit
4. As the system scales, ensure that data is not lost due to processing backlogs: DataFlow can handle late-arriving and out-of-order data
5. Run hardened Linux distro: Managed Instance Group with a hardened Linux distribution

137
Q

How are subnetworks different than the legacy networks?

A

Each subnetwork controls the IP address range used for instances that are allocated to that subnetwork

138
Q

What is the command to use multi-threaded uploads?

A

gsutil -m cp -r dir gs://my-bucket

139
Q

You have a collection of media files over 5GB each that you need to migrate to Google Cloud Storage. The files are in your on-premises data center. What migration method can you use to help speed up the transfer process?

A

Use parallel composite uploads to break each file into smaller chunks and transfer them simultaneously: gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp bigfile gs://yourbucket

140
Q

What are the flags to start a recursive upload?

A

The -R and -r options are synonymous; they cause directories, buckets, and bucket subdirectories to be copied recursively.

141
Q

What are two business risks of migrating to Cloud Deployment Manager?

A

1) Cloud Deployment Manager only supports the automation of Google Cloud resources. 2) Cloud Deployment Manager can be used to permanently delete cloud resources.

142
Q

Dress4Win wants to do penetration security scanning on the test and development environment deployed to the cloud. The scanning should be performed from an end-user perspective as much as possible. How should they conduct penetration testing?

A

Use the on-premises scanners to conduct penetration testing on the cloud environments routing traffic over the public internet.

143
Q

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

A

Tests should include directly testing the Google Cloud Platform (GCP) Infrastructure

144
Q

Your company collects and stores security camera footage in Google Cloud Storage. Within the first 30 days, the footage is processed regularly for threat detection, object detection, trend analysis, and suspicious behavior detection. You want to minimize the cost of storing all the data. How should you store the videos?

A

Use Google Cloud Regional Storage for the first 30 days, and then move to Coldline Storage.

145
Q

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?

A

In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.

146
Q

What is the command to resize a GCE disk?

A

gcloud compute disks resize [DISK_NAME] --size [DISK_SIZE]
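Combined with the previous card, the full no-downtime flow looks like this sketch (disk name, zone, and device path are illustrative):

```shell
# Grow the persistent disk in place (sizes can only be increased, never decreased).
gcloud compute disks resize my-data-disk --size 500GB --zone us-central1-a

# Then, inside the VM, grow the ext4 filesystem to fill the enlarged disk.
sudo resize2fs /dev/sdb
```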

147
Q

You are migrating your existing data center environment to Google Cloud Platform. You have 1 petabyte Storage Area Network (SAN) that needs to be migrated. What GCP service will this data map to?

A

Persistent Disk. SAN data uses block storage, which maps directly to a persistent disk on GCP for equivalent storage.

148
Q

What type of storage does a SAN map to in GCP?

A

Persistent Disk

149
Q

What type of storage does a NAS map to in GCP

A

Persistent Disk or Cloud Storage

150
Q

Your company plans to host a large donation website on Google Cloud Platform. You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on GCP, which managed service should you use?

A

Cloud Pub/Sub for capturing the writes and draining the queue to write to the database.
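The buffering layer can be stood up with a few commands; a sketch (topic, subscription, and message content are illustrative):

```shell
# Buffer incoming donation writes in a topic; a worker drains the
# subscription and performs the actual database writes at its own pace.
gcloud pubsub topics create donation-writes
gcloud pubsub subscriptions create db-writer-sub \
    --topic donation-writes --ack-deadline 60

# The application publishes a write instead of hitting the database directly:
gcloud pubsub topics publish donation-writes --message '{"donor":"anon","amount":25}'
```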

151
Q

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

A

They should add additional unit tests and production scale load tests on their cloud staging environment

152
Q

Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team. Which three actions should you take? Choose 3 answers

A

1) Identify whether a live migration event of the failed server occurred, using the activity log. 2) Use gcloud or the Cloud Console to connect to the serial console and observe the logs. 3) Adjust the Stackdriver timeline to match the failure time and observe the batch server metrics.

153
Q

Your company runs several databases on a single MySQL instance. They need to take backups of a specific database at regular intervals. The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk performance. How should you configure the storage?

A

Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.
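A sketch of that backup flow (mount path, database, and bucket names are illustrative):

```shell
STAMP=$(date +%F)
BACKUP="/mnt/disks/local-ssd/mydb-${STAMP}.sql.gz"

# Dump only the one database; writing to the Local SSD keeps the
# backup I/O off the main data disk.
mysqldump --single-transaction mydb | gzip > "$BACKUP"

# Move the finished dump off the (ephemeral) Local SSD into Cloud Storage.
gsutil mv "$BACKUP" gs://my-backup-bucket/mysql/
```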

154
Q

You have created a Kubernetes Engine cluster named 'mycluster'. You've realized that you need to change the machine type for the cluster from n1-standard-1 to n1-standard-4. How do you make this change?

A

You must create a new node pool in the same cluster and migrate the workload to the new pool. You cannot change the machine type for an individual node pool after creation; you need to create a new node pool and migrate your workload over.
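A sketch of the migration, assuming the existing pool is named default-pool (pool and cluster names are illustrative):

```shell
# Add a pool with the larger machine type.
gcloud container node-pools create larger-pool \
    --cluster mycluster --machine-type n1-standard-4

# Move workloads off the old pool: cordon each node, then drain it.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Finally remove the old n1-standard-1 pool.
gcloud container node-pools delete default-pool --cluster mycluster
```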

155
Q

Every server in the payment-processing application network sends its logs to Stackdriver Monitoring and Stackdriver Logging, using _____________ servers to securely transmit the log data.

A

Squid Proxy

156
Q

You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading. Where should you store the data?

A

Google Cloud Bigtable
- A scalable, fully managed NoSQL wide-column database suitable for both real-time access and analytics workloads
- Low-latency read/write access
- High-throughput analytics
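One way to sketch the time-series layout is with the cbt CLI (project, instance, table, and row-key scheme are all illustrative):

```shell
# Create a table and a column family for the sensor readings.
cbt -project my-project -instance weather createtable sensor-readings
cbt -project my-project -instance weather createfamily sensor-readings stats

# Row key = sensor ID + timestamp, the usual Bigtable time-series keying scheme.
cbt -project my-project -instance weather set sensor-readings \
    sensor42#2024-05-01T12:00:00 stats:temp=21.4

# Scan all readings for one sensor via a key prefix.
cbt -project my-project -instance weather read sensor-readings prefix=sensor42
```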

157
Q

You need to take streaming data from thousands of Internet of Things (IoT) devices, ingest it, run it through a pipeline, and store it for analysis. You want to run SQL queries against your data for analysis. What services in which order should you use for this task?

A

Cloud Pub/Sub, Cloud Dataflow, BigQuery
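One way to wire this up without writing pipeline code is the Google-provided Pub/Sub-to-BigQuery Dataflow template; a sketch (project, topic, dataset, and region names are illustrative):

```shell
# Ingest: a topic the devices publish events to.
gcloud pubsub topics create iot-events

# Destination dataset in BigQuery for SQL analysis.
bq mk iot_analytics

# Process: run the provided streaming template from Pub/Sub into BigQuery.
gcloud dataflow jobs run events-to-bq \
    --gcs-location gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region us-central1 \
    --parameters inputTopic=projects/my-project/topics/iot-events,outputTableSpec=my-project:iot_analytics.events
```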

158
Q

Your company has developed a series of LAMP stack applications, that are required to be scalable and fast and that are often updated by the IT teams. Which of the following actions allow you to facilitate the process of managing the various configurations in production, staging, and development ?

A

1) Create deployments using Deployment Manager. 2) Use labels for your resources. 3) Organize resources according to your standard and set up/reuse configurations and templates. 4) Use references, template properties, and outputs.

159
Q

You have been asked to set up a Disaster Recovery solution for a non-critical database server with multiple disks. The application can be stopped for hours without creating major issues. The data must be recoverable to the beginning of the last day. The solution must be simple and inexpensive. What would you advise?

A

Custom Image, Regional SSD persistent disks, and daily snapshots stored to Cloud Storage

160
Q

You have several Python apps in App Engine Standard. You want to adopt continuous deployment, and you want to handle the process in the best way possible. You need to deploy a new release for two apps: myapp-a and myapp-b. myapp-a has some deeply tested bug-fix updates; the main requirement is that the transition to its new version has to be smooth and without any disruptions. myapp-b has new features and updates, and you want to do A/B testing, introducing the new version to only 50% of the traffic. What are the correct and best commands to execute?

A

1) gcloud app services set-traffic myapp-b --splits 1=.5,2=.5 --split-by cookie
2) Add warmup requests and issue: gcloud app services set-traffic myapp-a --splits 2=1 --migrate

161
Q

Your team is developing a social engagement app in Node.js on the App Engine Flexible Environment. Among the various features required, there is an online chat between related and connected users. Which of the following features should you use or activate to accomplish what is required?

A

1) Session affinity
2) WebSockets

162
Q

An e-commerce system is operating in an “App Engine Flex” with Node.js and has to perform many operations while registering orders. You have been asked to find a way to “decouple the service” with a procedure that will send an e-mail to the customer with an order confirmation, at the end.

A

Use Cloud Tasks and define an appropriate worker service.

163
Q

You have a Cloud Function that sometimes fails because of an error that is still not well identified. The error happens randomly, sometimes it occurs and sometimes it doesn’t. Is there a method to minimize the effect while the developers are looking for the solution?

A

Use the retry-on-failure option.

164
Q

In your organization, you have 2 projects: projA and projB. You have never created a VPC in your projects. Which network configuration do you actually have?

A

1) A global default VPC. 2) A route for Internet connection and a route for each subnet/region. 3) A set of firewall rules, with incoming traffic from outside networks blocked.

165
Q

You created a new development environment project and you don’t want to manage a Network. So, you delete the default network because it may consume unwanted resources. What is most likely expected to happen?

A

1) You cannot create a VM. 2) You are free to create Cloud Functions. 3) You may create a Storage Bucket. Any compute operations require a network; serverless technologies are free from infrastructure, so no server means no network is needed.

166
Q

A ______ should be used when you only need to allow outgoing traffic to get updates (while blocking all incoming traffic except for the data coming back from update request).

A

NAT

167
Q

A _______should be used when you want a user(s) to SSH or RDP into the private server.

A

Bastion host

168
Q

_________ are instances that sit within your public subnet and are typically accessed using SSH or RDP. They act as 'jump' servers, allowing you to use SSH or RDP to log in to other instances in a private subnet

A

Bastion Hosts

169
Q

A ___ instance, like a bastion host, lives in your public subnet. A ___ instance, however, allows your private instances outgoing connectivity to the Internet, while at the same time blocking inbound traffic from the Internet.

A

NAT

170
Q

The ___ ___ can detect and extract text from images. There are two annotation features that support optical character recognition (OCR): TEXT_DETECTION detects and extracts text from any image; DOCUMENT_TEXT_DETECTION also extracts text from an image, but the response is optimized for dense text and documents. The JSON includes page, block, paragraph, word, and break information.

A

VISION API

171
Q

Your team has created a set of applications that will run in GKE clusters. IT management wants to activate and standardize a simple but effective security system. You have prepared a list of possibilities and features that you can use. You realize that some choices must be discarded because they are not safe enough or even wrong. Which solutions would you recommend?

A

1) In the cluster, the nodes will be assigned internal RFC 1918 IP addresses only. 2) Use service accounts and store the keys as a Kubernetes secret. 3) Use Workload Identity.

172
Q

_______ _______, is the new way for GKE applications to authenticate and consume other Google Cloud services.

A

Workload Identity

173
Q

_________ ______ lets users inspect the state of an application, at any code location, without stopping or slowing down the running app. It has a user interface similar to that of the popular Chrome DevTools.

A

StackDriver Debugger

174
Q

You are a consultant for a client company and the management wants to migrate its systems to the cloud. The customer is concerned about cost control. They send you communication with a series of hypotheses and questions that you must solve. Which of the required possibilities are correct?

A

1) Is it possible to create separate budgets for projects and resources? 2) Is it possible to have notifications? 3) Is there a way to have a programmatic interface?

175
Q

You're reviewing an application that sometimes executes some SQL queries with unacceptable response times. You need to find a way to scope the problem and identify the causes. Which of the following methods would you suggest?

A

Use Stackdriver Logs and set up a metric. You can set a metric that accurately identifies the log lines related to queries. You can also create an alert that promptly notifies you when the problem appears, so you can review all the related logs and information at the right time.

176
Q

Dress4Win business is growing strongly. The management wants to accelerate cloud migration in the most convenient and scalable way. They did a test with GCE and it went well. Now they also want to evaluate GKE before making the final decision in order to optimize the price/performance ratio. What actions would you recommend for this general test?

A
- Use the Cloud SQL MySQL service
- Set up a Pod for the application server and start using Cloud Build
- Use a DB server with high availability
177
Q

Dress4Win wants to support failover of the production environment to the cloud during an emergency. After several tests, you are developing the final plan for disaster recovery and hot failover of the on-premises production environment to the cloud. You have planned network, storage, and infrastructure. Which of the following actions would be in your final plan?

A
- Prepare a custom image of the DB server, stopping the instance first
- Configure replication between your on-premises database server and the cloud DB
- Set up Cloud VPN and DNS
178
Q

TerramEarth is in the process of creating a faster transmission of the gzipped CSV files. It has deployed 5G devices in its vehicles with the goal of keeping unplanned vehicle downtime to a minimum. You are planning to:
- Acquire files directly, from vehicles or from the service points, to the cloud
- Transform the data and get statistical figures immediately
- Store everything in the data warehouse and in the data lake in the most suitable way
- Use the current work routines, whenever possible
Which of the following steps contains your solution?

A
- Pub/Sub
- Cloud Dataflow
- Cloud Storage
- BigQuery
179
Q

You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?

A

Google Cloud Bigtable - the data is IoT in nature and will be used for analytics.

180
Q

Over time, you’ve created 5 snapshots of a single instance. To save space you delete snapshots number 3 and 4. What has happened to the fifth snapshot?

A
The data from snapshots 3 and 4 necessary for continuity is transferred to snapshot 5.
181
Q

One of your clients is using customer-managed encryption, which of the following statements are true when you are applying a customer-managed encryption key to an object.

A
- The encryption key is used to encrypt the object's data
- The encryption key is used to encrypt the object's CRC32C checksum
- The encryption key is used to encrypt the object's MD5 hash
The remaining metadata for the object, including the object's name, is encrypted using standard server-side keys.
182
Q

What permission allows read access to custom images from GCE?

A
  • compute.images.useReadOnly (permission)
183
Q

What role allows access to custom images from GCE?

A
  • roles/compute.imageUser (role)
184
Q

What role allows access to snapshots from GCE?

A
roles/compute.storageAdmin (role)
185
Q

What permission allows read access to snapshots from GCE?

A
compute.snapshots.useReadOnly (permission)
186
Q

What role allows for disk access from GCE?

A
roles/compute.storageAdmin (role)
187
Q

What permission allows read access for disks from GCE?

A
  • compute.disks.useReadOnly (permission)
188
Q

You need to regularly create disk-level backups of the root disk of a critical instance. These backups need to be able to be converted into new instances that can be used in different projects. How should you do this?

A
- Create snapshots, turn the snapshot into a custom image, and share the image across projects
- Create snapshots and share them to other projects
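A sketch of the snapshot-to-shared-image flow (disk, image, zone, and the other project's service account are illustrative):

```shell
# Snapshot the critical instance's root disk.
gcloud compute disks snapshot critical-root-disk \
    --snapshot-names critical-backup --zone us-central1-a

# Turn the snapshot into a reusable custom image.
gcloud compute images create critical-image --source-snapshot critical-backup

# Let identities in another project create instances from the image.
gcloud compute images add-iam-policy-binding critical-image \
    --member serviceAccount:sa@other-project.iam.gserviceaccount.com \
    --role roles/compute.imageUser
```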
189
Q

Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4 TB, and large updates are frequent. Replication requires RFC 1918 private address space. Which networking approach would be the best choice?

A
- Google Cloud Dedicated Interconnect
- Google Cloud Partner Interconnect
"The database is 4 TB, and large updates are frequent" makes Dedicated/Partner Interconnect a suitable solution.
190
Q

You are using Dataflow to ingest a large amount of data and later you send the data to BigQuery for analysis, but you realize the data is dirty. What would be the best choice to clean the data in the stream with a serverless approach?

A
Fetch the data from BigQuery and create one more pipeline; clean the data with Dataflow and send it back to BigQuery.
191
Q

You have a long-running job that one of your employees has permissions to start. You don’t want that job to be terminated when the employee who last started that job leaves the company. What would be the best way to address the concern in this scenario?

A
- Create a service account.
- Grant the Service Account User permission to the employees who need to start the job. Also grant the Compute Instance Admin permission to that service account.
192
Q

Your company is using BigQuery for data analysis; many users have access to this service and the dataset. You want to know which user has run what query. What would be the best way to get the required information?

A

Go to the "Query history"; it has information about which user has run what query.

193
Q

A power generation company is looking to use the Google Cloud platform to monitor a power station. They have installed several IoT sensors in the power station like temperature sensors, smoke detectors, motion detectors, etc. Sensor data will be continuously streamed to the cloud. There it has to be handled by different components for real-time monitoring and alerts, analysis, and performance improvement. What Google Cloud Architecture would serve this purpose?

A

Cloud IoT Core receives data from IoT devices, transforms the requests, and redirects them to a Cloud Pub/Sub topic. After the data lands in Cloud Pub/Sub, it is retrieved by a streaming job running in Cloud Dataflow that transforms the data and sends it to BigQuery for analysis.
Cloud IoT Core > Cloud Pub/Sub > Cloud Dataflow > BigQuery

194
Q

Using the principle of least privilege and allowing for maximum automation, what steps can you take to store audit logs for long-term access and to allow access for external auditors’ view?

A
- Export audit logs to Cloud Storage via an export sink
- Generate a signed URL to the Stackdriver export destination for auditors to access
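A sketch of the two steps (sink, bucket, filter, key file, and object names are illustrative):

```shell
# Export audit logs to a bucket via a logging sink.
gcloud logging sinks create audit-archive \
    storage.googleapis.com/my-audit-bucket \
    --log-filter 'logName:"cloudaudit.googleapis.com"'

# Time-limited read access for an external auditor, no IAM account needed.
gsutil signurl -d 7d service-account-key.json \
    gs://my-audit-bucket/audit-export.json
```

The signed URL honors least privilege: the auditor gets temporary read access to specific objects without being granted any project role.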
195
Q

MountKirk Games needs to build out their streaming data analytics pipeline to feed from their game backend application. What GCP services in which order will achieve this?

A

Cloud Pub/Sub - Cloud Dataflow - BigQuery

196
Q

___ ________ ______ create a security perimeter around data stored in API-based GCP services such as Google Cloud Storage, BigQuery, and Bigtable. This helps mitigate data exfiltration risks stemming from stolen identities, IAM policy misconfigurations, malicious insiders, and compromised virtual machines.

A

VPC Service Controls

197
Q

You are helping the QA team roll out a new load-testing tool to test the scalability of your primary cloud services that run on Google Compute Engine with Cloud Bigtable. What three requirements should they include?

A
- Instrument the load-testing tool and the target services with detailed logging and metrics collection
- Create a separate Google Cloud project to use for the load-testing environment
- Ensure that the load tests validate the performance of Cloud Bigtable
198
Q

Your company places a high value on being responsive and meeting customer needs quickly. Their primary business objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced. Which two actions can you take?

A
- Use source code security analyzers as part of the CI/CD pipeline.
- Run a vulnerability security scanner as part of your continuous-integration/continuous-delivery (CI/CD) pipeline.
199
Q

You have a mission-critical database running on an instance on Google Compute Engine. You need to automate a database backup once per day to another disk. The database must remain fully operational and functional and can have no downtime. How can you best perform an automated backup of the database with minimal downtime and minimal costs?

A
  • Use a cron job to schedule your application to backup the database to another persistent disk
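A sketch of such a cron entry (paths, schedule, and database name are illustrative; note that `%` must be escaped as `\%` inside crontab lines):

```shell
# /etc/cron.d/db-backup — nightly dump at 02:00 to a second persistent disk.
# --single-transaction keeps the database fully operational during the dump.
0 2 * * * root mysqldump --single-transaction mydb | gzip > /mnt/backup-disk/mydb-$(date +\%F).sql.gz
```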
200
Q

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on cellular connections. What should you do?

A

Directly transfer the files to a different “Google Cloud Regional bucket” location in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional Bucket.

201
Q

Ensure the following requirements are met:
- Provide the ability for real-time analytics of the inbound biometric data
- Ensure processing of the biometric data is highly durable, elastic, and parallel
- The results of the analytic processing should be persisted for data mining

A

Utilize Cloud Pub/Sub to collect the inbound sensor data, analyze the data with Dataflow, and save the results to BigQuery. (BigQuery = data mining features)

202
Q

Your infrastructure runs on another cloud and includes a set of multi-TB enterprise databases that are backed up nightly both on-premises and also to the cloud. You need to create a redundant backup to Google Cloud. You are responsible for performing “scheduled monthly disaster recovery drills”. You want to create a cost-effective solution. What should you do?

A
- Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Nearline storage bucket as a final destination.
"Regular data transfers, so you should use the Storage Transfer Service." "Transfer Appliance is more for one-time bulk transfers."
203
Q
Your Cloud SQL deployment must meet the following requirements:
- Do not run out of storage/disk space
- Keep average CPU usage under 80%
- Keep replication lag under 60 seconds
A

1) Enable the automatic storage increase feature for your Cloud SQL instance. 2) Create an alert in Stackdriver when CPU usage exceeds 80%, and change the instance type to reduce CPU usage. 3) Create an alert in Stackdriver for replication lag, and shard the database to reduce replication time.

204
Q

You have a website hosted on App Engine. After a recent update, you are receiving reports that some portions of the site take up to 20 seconds to load. The slow loading times occurred after the recent update. Which two actions should you perform to troubleshoot?

A

1) Roll back to a previous version of your app using the version management feature in App Engine. 2) Use Stackdriver Trace and Logging to troubleshoot latency issues with your website, and diagnose in a testing environment.

205
Q

When would you use Storage Transfer Service for migrating data?

A
- Transfer from an on-premises location to a Google Cloud Storage bucket
- Transfer from an AWS S3 bucket to a Google Cloud Storage bucket
- Transfer from a publicly available web resource to a Google Cloud Storage bucket
206
Q

_______.______._______ permissions is needed to create the transfer and __________.__________._______ permissions is needed on the target dataset.

A
- bigquery.transfers.update
- bigquery.datasets.update
207
Q

The _____.______ predefined Cloud IAM role includes _________.________._______ and _______.________.________ permissions

A
- bigquery.admin
- bigquery.transfers.update
- bigquery.datasets.update
208
Q

What does the error code 429 mean?

A

Too Many Requests
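HTTP 429 signals rate limiting, and the standard client response is retry with exponential backoff. A minimal sketch in shell (the retried command and the limits are illustrative):

```shell
# Retry a command with exponential backoff: 1s, 2s, 4s, 8s between attempts.
retry_with_backoff() {
  local attempt=0 max=5 delay=1
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $max attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))   # double the wait after each failure
  done
}

# Usage: retry_with_backoff curl -fsS https://example.com/api
```

Adding random jitter to the delay is also common, to avoid retry storms from many synchronized clients.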

209
Q

What is the flag used for GCE to make the VM preemptible?

A

--preemptible

210
Q

If you are using a preemptible machine and you want to use a shutdown script, how would you do this?

A

Under Management > Metadata, enter "shutdown-script-url" as the key, and for the value use a Cloud Storage URL (best practice): gs://learning-gcp-229815/shutdown.sh
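The same setup can be done from the CLI; a sketch (instance name and script are illustrative, the bucket is the one from the card):

```shell
# Stage the script in the bucket referenced by the metadata key.
gsutil cp shutdown.sh gs://learning-gcp-229815/

# Create a preemptible VM that runs the script on preemption/shutdown.
gcloud compute instances create batch-worker-1 \
    --preemptible \
    --metadata shutdown-script-url=gs://learning-gcp-229815/shutdown.sh
```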

211
Q

Your company has developed a series of LAMP stack applications, that are required to be scalable and fast and that are often updated by the IT teams. Which of the following actions allow you to facilitate the process of managing the various configurations in production, staging, and development? (4)

A
- Create deployments using Deployment Manager
- Use labels for your resources
- Organize resources according to your standard and set up/reuse configurations and templates
- Use references, template properties, and outputs
212
Q

Your team has created a set of applications that will run in GCP. IT management wants to activate and standardize a simple but effective security system. You have prepared a list of possibilities and features that you can use. You realize that some choices must be discarded because they are not safe enough or even wrong. What solutions would you recommend in the end?

A
- Service accounts related to your applications
- Service accounts related to your VMs
- Service accounts related to your K8s clusters
213
Q

Cloud DataStore

A
- User profiles
- Game state
- A scalable, fully managed NoSQL document database for your web and mobile applications
214
Q

Cloud BigTable

A
- High-throughput analytics
- Native time-series support
- Geospatial datasets
- Low-latency read/write access
215
Q

RTO

A

Recovery Time Objective- Maximum acceptable length of time that your application can be offline

216
Q

RPO

A

Recovery Point Objective- Maximum acceptable length of time during which data might be lost from your application due to a major incident

217
Q

Your company is using BigQuery for data analysis, many users have access to this service and the data set, you want to know which user has run what query, what would be the best way to get the required information?

A

Go to Query history it has information about which user has run what query.

218
Q

Horizontally scalable transactional DB

A

Cloud Spanner

219
Q

Access to audit logs and perform analytics using SQL

A

Stackdriver Logging + BigQuery

220
Q

Health-check is failing

A

Check Firewall rule(s)

221
Q

Scale down to Zero Web Application

A

App Engine Standard

222
Q

How Compute Engine can access BigQuery?

A

Access Scope (Default Service Account) OR IAM (Custom Service Account)
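The custom-service-account route can be sketched as follows (project, account, and instance names are illustrative; the cloud-platform scope defers authorization to IAM):

```shell
# Dedicated service account with only the BigQuery access it needs.
gcloud iam service-accounts create bq-reader --display-name "BQ reader"
gcloud projects add-iam-policy-binding my-project \
    --member serviceAccount:bq-reader@my-project.iam.gserviceaccount.com \
    --role roles/bigquery.dataViewer

# Attach it to the VM with the broad scope; IAM now controls what it can do.
gcloud compute instances create analyst-vm \
    --service-account bq-reader@my-project.iam.gserviceaccount.com \
    --scopes cloud-platform
```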

223
Q

Analyst knows SQL

A

BigQuery

224
Q

A managed instance group spreads and balances workloads across ____ zones in a region by default.

A

3

225
Q

_______________ improve your application availability by spreading your instances across three zones.

A

Regional managed instance groups

226
Q

A ___________ image is a baked image that has everything set and tested and is ready for production use.

A

Golden

227
Q

3 Cloud Pub/Sub Use Cases

A
- Balancing workloads in network clusters
- Refreshing distributed caches
- Implementing asynchronous workflows
228
Q

Connection draining delays the termination of an instance until existing connections are closed. Which of the following are also true about connection draining?

A
- Minimizes interruption for users
- New connections to the instance are prevented
- The instance preserves existing sessions until they end OR a designated timeout is reached (1 to 3600 seconds)
229
Q

Google Cloud Platform has several unique and innovative benefits when it comes to billing and resource control. What are these benefits? (3)

A
- Sub-hour billing (billed for 10 minutes and thereafter every minute on VMs)
- Sustained-use discounts
- Compute Engine custom machine types
230
Q

Your customer has decided to run Windows on GCE, and the customer also likes to use PowerShell. What detail about startup scripts should you notify them of for Windows?

A

A startup script is specified through the metadata server

231
Q

What is the name of the two “Managed” Instance Group types that are supported in GCP?

A
- Managed Instance Group (Zonal)
- Managed Instance Group (Regional)
232
Q

A ____ __________ ______ provides a single global IP address for an application.

A

global forwarding rule

233
Q

What are the two benefits for developers to use Cloud Endpoints?

A
- Exposes an API for a front-end client (mobile or web application) to make use of cloud-based application services
- Frees developers from writing a wrapper to access App Engine resources from a mobile or web client
234
Q

Google Cloud Deployment Manager allows you to create and manage cloud resources with simple templates. What are some other features?

A

Repeatable Deployment Process, Declarative Language, Parallel Deployment, Schema Files

235
Q

Which identifier can you specify yourself, but GCP can also generate for you?

A

Project ID

236
Q

Cloud DNS pricing includes a monthly charge per zone plus usage costs based on

A

Query Traffic

237
Q

With Continuous ________, revisions are deployed to a production environment automatically without explicit approval from a developer, making the entire software release process automated

A

Deployment

238
Q

______ ________ is a DevOps software development practice where code changes are automatically built, tested, and prepared for release to production.

A

Continuous delivery

239
Q

What are three benefits of using DevOps in a Production Environment?

A
- Automate software releases
- Improve developer productivity
- Find bugs quicker
240
Q

_____ ___ enables integration with other tools such as compression and partial resource request/reply (access to specific fields in the data) so you don’t have to transfer the whole object to get a tiny part of it. There is no Python API for Cloud Storage.

A

JSON API

241
Q

What would be some reasons to use GCP platforms Transfer Appliance?

A
- It would take more than 1 week to transfer the data online
- You have more than 60 TB of data
242
Q

Google recommends using the "____" technique, an iterative interrogation technique to help identify the root cause of a problem and get past the apparent surface cause. What is the technique named?

A

“5 Whys”

243
Q

What are 2 facts of Cloud SQL?

A
- Cloud SQL is limited to a maximum of 10 TB of data
- Cloud SQL will scale up to 4,000 concurrent connections
244
Q

What are the two ways to isolate microservices in GCP?

A

Service Isolation/Project Isolation

245
Q

What is the name of the design process that Google uses?

A

12 Factor Design

246
Q

Measuring helps ensure:

A
- Making design choices
- Testing and validation
- Monitoring
247
Q

What is the name of the design process Google uses?

A

12 Factor Design

248
Q

What are some disadvantages of Microservices?

A
- Management overhead
- Isolation
- Resource overhead
249
Q

_____ are a concept that comes from user experience (UX) design, originated in marketing, and represent user and group goals and behaviors.

A

User personas

250
Q

TerramEarth Case Study: Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?

A

Capture all operating data, train machine learning models that identify ideal operations, and “run locally” to make operational adjustments automatically

251
Q

Your company wants to control IAM policies for different departments. The departments must be independent from each other; however, you want to centrally manage the IAM policies for each individual department. How should you approach this?

A

Use a single Organization with a Folder for each department. This is the best structure: one organization for the entire company, with each department organized into a folder inside it. You can then apply a single IAM policy to each department folder, and that policy will be applied to any projects or subfolders inside of it.
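The inheritance this answer relies on can be sketched in Python. This is an assumed toy model, not the Cloud IAM API: the effective bindings on a project are the union of bindings granted at the organization, folder, and project levels.

```python
# Toy model of IAM policy inheritance down the resource hierarchy:
# the effective policy on a project is the union of bindings set on
# the organization, its folder, and the project itself.

from collections import defaultdict

def effective_bindings(*levels: dict) -> dict:
    """Merge role -> members bindings from ancestor to descendant."""
    merged = defaultdict(set)
    for bindings in levels:  # e.g. org, folder, project
        for role, members in bindings.items():
            merged[role] |= set(members)
    return dict(merged)

org = {"roles/viewer": ["group:all-staff@example.com"]}
finance_folder = {"roles/editor": ["group:finance@example.com"]}
project = {"roles/owner": ["user:lead@example.com"]}

policy = effective_bindings(org, finance_folder, project)
print(sorted(policy))  # ['roles/editor', 'roles/owner', 'roles/viewer']
```

Granting a role on the folder is enough for every project beneath it, which is exactly why the folder-per-department layout keeps management central.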

252
Q

compute.xpnAdmin

A

Shared VPC Admin
Organization-level role
Configure Shared VPC
Associate service projects with host projects
Grant Network User role

253
Q

compute.networkUser

A

Network User
Project-level role
Create resources to use the shared VPC
Discover shared VPC assets
Requires a project admin role (Project Owner, Editor, Compute Engine Admin)

254
Q

Sharing and moving images requires ________ _______ ______ _____

A

Compute Engine Image User role
Example: A user in Project A wants to use images from Project B. The user in Project A must have the Compute Engine Image User role granted for Project B.
The role grants access to all images in the project.
For managed instance groups, Project A’s service account must be granted the role on Project B.

255
Q

How do you set the active project from the gcloud CLI?

A

gcloud config set project PROJECT_ID

256
Q

Retrieve IAM policy and download in YAML format

A

gcloud projects get-iam-policy PROJECT_ID > FILENAME.yaml

257
Q

Update IAM Policy from file

A

gcloud projects set-iam-policy PROJECT_ID FILENAME.yaml

258
Q

Add a single binding

A

gcloud projects add-iam-policy-binding PROJECT_ID --member user:bob@gmail.com --role roles/editor
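Under the hood this command performs a read-modify-write on the policy’s bindings list. A hedged Python sketch of that logic follows — this is not the gcloud source, and `add_binding` is a hypothetical helper named for illustration.

```python
# Sketch of what add-iam-policy-binding does to the policy document:
# find the binding for the role and append the member, or create a new
# binding. Idempotent, like the real command.

def add_binding(policy: dict, member: str, role: str) -> dict:
    for binding in policy.setdefault("bindings", []):
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

policy = {"bindings": [{"role": "roles/viewer", "members": ["user:ann@example.com"]}]}
add_binding(policy, "user:bob@gmail.com", "roles/editor")
print(len(policy["bindings"]))  # 2
```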

259
Q

Instance Template = ________

A

Global

260
Q

Instance Group = ______

A

Regional

261
Q

Cloud Functions scale down to _

A

0

262
Q

Set default region

A

gcloud config set compute/region us-central1

263
Q

Set default zone

A

gcloud config set compute/zone us-central1-a

264
Q

The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis.The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss. Which process should you implement?

A

Append metadata to the file body. Compress individual files. Name files with a random prefix pattern. Save files to one bucket.
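The random-prefix recommendation exists because Cloud Storage distributes load across object-name ranges, so near-sequential names (such as timestamps) can hotspot a single range. A small illustrative sketch, assuming an MD5-based prefix; `prefixed_name` is a hypothetical helper, not a Google API.

```python
# Hash-prefix object names so that writes spread across name ranges
# instead of clustering on one sequential (timestamp) range.

import hashlib

def prefixed_name(sequential_name: str, prefix_len: int = 6) -> str:
    prefix = hashlib.md5(sequential_name.encode()).hexdigest()[:prefix_len]
    return f"{prefix}_{sequential_name}"

# Adjacent timestamps get unrelated prefixes:
for s in range(3):
    print(prefixed_name(f"2024-01-01T00:00:{s:02d}-event.log"))
```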

265
Q

What are the steps for an architect to develop a solution from business objectives?

A

As a cloud architect, your goal is to assess a business objective at hand and distill the needs into business requirements and technical requirements. You’ll then use those requirements to architect a solution. You won’t always get all the requirements laid out so easily, and oftentimes you’ll spend hours upon hours diving into meetings with various stakeholders to gather pieces of data that start to formulate the building blocks of the solution. You’ll also need to be aware of future technical and strategic implications, ensuring that you’re building a solution that won’t encounter any major roadblocks.

A cloud architect would approach designing a solution in a very similar way that a software architect would: Schedule your scoping meetings. Gather your requirements—business requirements and technical requirements. Do some research for more nuanced requirements and understand the constraints of the system. Put together a high-level design diagram. Work through the high-level design and then break it down into deeper design diagrams. Ensure that you’re aligned with business and technology stakeholders along the way. Work toward a final draft and get the necessary approvals if you’re not the accountable stakeholder.

Then you’ve got a reference architecture that your development teams can use to begin building. You’ll probably want to reuse this pattern for similar use cases and continually refine it. You should also have a solid understanding of Agile best practices and the overall software development life cycle, understanding how code gets built and the various deployment methods (such as blue-green deployments, canary deployments, continuous integration/continuous delivery [CI/CD] deployment).

266
Q

What are the differences between functional requirements and nonfunctional requirements?

A

*Functional requirements The “what”—What is the system supposed to do? For example, my system needs to extract data from this API and load it into a storage bucket.

*Nonfunctional requirements The “How”—How should my system perform? What are the constraints? For example, my system needs to transform the data to a certain format before it’s loaded, or it needs to process at least X amount of data objects per second.

267
Q

What are the principles of Operational Excellence?

A

Operational excellence is the principle of building a foundation that successfully enables reliability across your infrastructure by efficiently running, managing, and monitoring systems that deliver business value. Three key strategies drive this principle:

  • Automating build, test, and deployment by using continuous integration and continuous deployment pipelines. This enables customers to programmatically do rapid deployment and iterations based on a continuous feedback loop.
  • Monitoring business objective metrics by defining, measuring, and alerting on key metrics. Data needs to be measured and output to your business leaders to give insight into where they have the competitive edge and where they can further optimize or reassess.

  • Conducting disaster recovery testing proactively and periodically. Disaster can strike a company in so many different ways, often causing financial and reputational business harm. This is overlooked too many times by customers until disaster strikes and ends up costing them exponentially more than it would cost had they been prepared.

268
Q

What is the core principle of Security, Privacy, and Compliance?
What are the key strategies that drive it?

A

Security, Privacy, and Compliance
It is critical for any customer doing business in the cloud to ensure their intellectual property is protected and their customers are safe from malicious activity. This is a core principle of Google’s system design. Four key strategies drive this principle:

  • Implementing least privileges with identity and authorization controls. Centralizing your identity management system and designing your access management structure in a way that allows users to do only what they’re intended to do, while ensuring nonrepudiation (a user cannot deny their activity) and audit logs that are available to be consumed by automated and manual detection mechanisms.
  • Building a layered security approach. Also known as defense-in-depth, this involves the implementation of a variety of security controls at each level of the infrastructure and applications designed on top of the infrastructure. The idea is to assume that any security control can be breached, and when it is breached, several other layers of defense are available to protect intellectual property.
  • Automating deployment of sensitive tasks. Humans continue to be the weakest link in performance and security of administrative tasks. By automating the deployment of these tasks, you can eliminate the dependency on humans.
  • Implementing security monitoring. Part of a strong security model is to prevent, detect, and respond to malicious activity. By implementing automated tools to monitor your infrastructure, you can gather data to continue protecting your weak points and prevent malicious activity from occurring in your environment and harming your business.
269
Q

What are the principles of Reliability?

Who is it defined by?
How much should you use?
How do you create redundancy?

A

Reliability
Google sees reliability as the most important feature of any application. Without reliability, users begin to churn (stop using the product). Google suggests 15 strategies to achieve reliability; here are three key ones:

  • Reliability is defined by the user. Many data points can capture all sorts of important factors in your workload, but truly measuring with key performance indicators (KPIs) requires an understanding of user actions and of the metrics that define the success of those actions.
  • Use sufficient reliability. There’s no need to overinvest in reliability if you’re meeting user satisfaction. Figure out what sort of availability keeps your users happy and retained, and ensure that you continually assess reliability as your infrastructure grows.
  • Create redundancy. Always assume that if you depend on a single point to provide a function, that point can and will fail someday. When building your infrastructure and applications, always try to leverage resource redundancy across resources that can fail independently.
270
Q

What are the principles for performance and cost optimization?

A

Performance and Cost Optimization
Managing the performance of your applications and the associated costs is a balancing act, as highly performant environments often end up costing more to maintain. Understanding where you’ve met your minimum performance requirements and where you need to optimize cost is an important principle for system design. These three strategies are relevant here:

  • Evaluate performance requirements. Determine the minimum performance you need from your applications.
  • Use scalable design patterns. Leverage automatically scaling products and services where applicable to minimize cost to what is necessary.
  • Identify and implement cost-saving approaches. Understand the priority of each of your services with respect to its application to your business objectives. Use these priorities to optimize for service availability and cost.
271
Q

What is the problem with Pub/Sub being a global service that might affect a customer in Europe?

A

Pub/Sub is a global service, and its contract agreements state that, because of its global availability, if a region goes down, Pub/Sub will route data through other regions to maintain its high availability. This conflicts with your GDPR compliance requirements, so you may want to choose Kafka in this case, which offers you more control over the architecture versus a managed service. (Note that I threw a curveball here, because Pub/Sub now supports controlling where your message data is stored.)

272
Q

What principles explain why role-based access control (RBAC) has become the standard for IAM over the last 15 years or so?

How does the RBAC model work for your organization?

A

Least privilege and separation (or segregation) of duties are two of the building blocks of security for computing systems (it also applies to physical security) focused on access management. The principle of least privilege states that we should provision an individual user or a system account with the least amount of privileges needed to perform their job functions so that we minimize the threat surface. Separation of duties is the idea that you should separate all of the duties needed to perform a critical business function across multiple users so that in the event one user is compromised, a whole system or process is not compromised.

With the RBAC model, you define roles for your organization and your users, whether they are individuals, groups, or service accounts, and for each of these roles, you define the least amount of permissions necessary to enable the user to do their job.

273
Q

What is Attribute-based access control (ABAC)?

A

Attribute-based access control (ABAC) is becoming more prevalent in the cloud, offering users the ability to allow or deny authorization to a resource based on certain conditions—such as blocking a user from accessing an application outside of work hours, or blocking access from a user in another country. You won’t see this on the exam, but leveraging ABAC will continue improving the layers of defense for your enterprise and should be considered for your most critical applications.

274
Q

What are entitlements and privilege creep?

A

Entitlements refers to the process of granting users privileges and the scope of the privileges that are granted.
Privilege creep occurs when a user accumulates too many privileges over time, which violates the principle of least privilege. Privilege creep can result from a user’s various promotions, job transitions, or requests for one-time access privileges that are not removed.

275
Q

What is Google Cloud Resource Manager?

A

Google Cloud Resource Manager is a mechanism that provides resource containers such as organizations, folders, and projects and enables you to group and hierarchically organize GCP resources into those containers. With the Cloud Resource Manager API, you are also able to programmatically manage these resource containers. Having the ability to manage all of your projects centrally is an important function for the administrators of the cloud.

276
Q

Every time you interact with a resource, you’ll need to identify the what?

A

Every time you interact with a resource, you’ll need to identify the project info for every single request you make. You can use either the projectId or the projectNumber field. By default, the creator of the project is granted the owner role for any newly created projects.

277
Q

It’s important that you know the difference between a mutable and immutable object.
What are the differences?

A

Mutable objects can be changed after they’ve been created. Immutable objects cannot be changed after they’ve been created. An example of a mutable object is the project resource; you can change it as you please after it’s created. An example of an immutable object is a file that is uploaded to Google Cloud Storage; once you upload the file, it cannot be changed throughout its storage lifetime.
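The same distinction can be shown in Python terms as an analogy only: a list is mutable, a tuple is not, much like a project resource versus an uploaded GCS object.

```python
# Mutable vs. immutable, by analogy: a list (like a project resource)
# can change after creation; a tuple (like an uploaded GCS object)
# cannot be modified for its lifetime.

project_labels = ["env:dev"]        # mutable: can change after creation
project_labels.append("team:data")  # fine

uploaded_object = ("report.csv", b"col1,col2\n")  # immutable once "uploaded"
try:
    uploaded_object[1] = b"tampered"
except TypeError as e:
    print("immutable:", e)
```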

278
Q

Cloud Identity and Access Management (Cloud IAM) is what?

A

Cloud Identity and Access Management (Cloud IAM) is an enterprise-grade access control service that enables administrators to authorize who can take actions on certain resources and what conditions must exist before they can take action.

The goal of a cloud administrator is to have full visibility and fine-grained, centrally managed, access control capabilities that span all cloud resources.

Quite often, enterprises have massively complex organizational structures with hundreds of working groups, many projects, and very intricate access control requirements. Cloud IAM gives you the ability to manage all the access policies across your organization with its built-in auditing capabilities. You can create and manage your IAM policies through the Cloud Console, the API, or the command line.

279
Q

Remember the difference between Cloud Identity and Cloud IAM.

A

Cloud Identity is the source of truth for handling authentication by creating or synchronizing user accounts, setting up single sign-on (SSO), and leveraging 2-Step Verification (2SV), and it is managed from the Admin console at admin.google.com.

Cloud IAM handles access management, including creating and managing all the roles for your applications and environments, following the role-based access control (RBAC) model.

280
Q

What is a member?

A

A member can be a Google account (human users), a service account (programmatic account for applications and virtual machines), a Google group, or a Google Workspace or Cloud Identity domain that can access a resource. The identifier for a member is the e-mail address associated with the type of account or the domain name associated with the Google Workspace or Cloud Identity domain. It’s common to hear the term “users” used as a blanket statement to cover members. Just remember that Google Cloud refers to all of these as members, and each is treated distinctively according to the member’s appropriate title.

281
Q

Who typically uses Google accounts to interact with the cloud?

A

Google accounts are typically developers, administrators, and other users who interact with Google Cloud. Any e-mail that is associated with a Google account, including Gmail addresses, can be an identity.

282
Q

In GCP, there are two types of service account keys.

A

*GCP-managed keys These keys are used by Google Cloud APIs such as Google App Engine, Google Cloud Storage, and Google Compute Engine. You can’t download these keys, and they’re automatically rotated approximately once a week and are used for signing for a maximum of two weeks. (I won’t get into Kubernetes here because it takes a much deeper dive with respect to key management than what this exam covers. So you can research that on your own.)

*User-managed keys These keys are created, downloadable, and managed by your users. Once these keys are deleted from a service account, you can no longer use them to authenticate. If you’re using user-managed keys, you need to think about some of the top key management requirements such as key rotation, storage, distribution, revocation, recovery, and access.

283
Q

What are IAM Policies?

A

As discussed earlier, an IAM policy is a configuration that binds together one or more members and roles for the purpose of enforcing only approved access patterns through a collection of statements. These policies are represented by a Cloud IAM Policy object, which consists of a list of bindings. A binding binds a list of members to a role.

284
Q

What are IAM Conditions?

A

IAM Conditions is an attribute-based access control model that lets you define the conditions in which a user is able to be authorized to access a resource. Today there are a few conditions that you can implement, but GCP continues to evolve and grow this feature with new request attributes. Conditions are added to your role bindings of your Cloud IAM policy, and you use the Common Expression Language (CEL) to specify the expression in the Cloud IAM Condition.
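Here is a sketch of what a conditional role binding looks like as data. The field names follow the IAM policy binding structure; the member and expression values are made up for illustration, and CEL itself is evaluated by IAM at request time, not by this snippet.

```python
# Shape of a conditional role binding inside an IAM policy: the
# condition holds a CEL expression that IAM evaluates per request.

binding = {
    "role": "roles/storage.objectViewer",
    "members": ["user:contractor@example.com"],
    "condition": {
        "title": "expires-end-of-2025",
        "description": "Allow access only before end of 2025",
        "expression": 'request.time < timestamp("2026-01-01T00:00:00Z")',
    },
}
print(binding["condition"]["title"])  # expires-end-of-2025
```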

285
Q

What are Admin Activity audit logs?

A

Admin Activity audit logs These contain log entries for API calls and other administrative actions that can modify configurations or metadata of resources. Log entries are generated, for example, when a user makes a modification to an IAM policy.

286
Q

What are Data Access audit logs?

A

Data Access audit logs These contain API calls that read the configuration or metadata of resources and user-driven API calls against user-provided resource data. The logs are disabled by default because they are large, and large means expensive. It’s highly recommended, however, that you enable these logs in your most critical environments.

287
Q

System Event audit logs

A

System Event audit logs
These contain administrative actions that modify the configuration of resources. They’re generated by systems, not by user actions.

288
Q

BeyondCorp is a zero trust access model that abstracts the authentication aspect of logging into the cloud away from the network and to the users and devices.

A

BeyondCorp is a zero trust access model that abstracts the authentication aspect of logging into the cloud away from the network and to the users and devices. The goal of a zero trust security access model is not to trust anything inside or outside an organization’s network perimeters and to use a higher level of scrutiny to determine whether a user is who they say they are and if they are on a safe device to access a resource. Zero trust security models are great, so learn more about them at your leisure!

289
Q

What is an edge point of presence (POP)?

A

An edge point of presence (POP) is a location where Google connects its network to the rest of the Internet via peering. Edge nodes, also known as content delivery network (CDN) POPs, are points at which content can be cached and served locally to end users. The user journey starts when a user opens an application built on Google’s infrastructure, and then their user request is routed to an edge network location that will provide the lowest latency. The edge network receives the request and passes it to the nearest Google data center, and the data center generates a response optimized for that user that can come from the data center, an edge POP, and edge nodes.

290
Q

What are Google’s various Network Tiers?

A

Since Google owns its entire network end to end, it is able to offer its customers the concept of network service tiers. Google offers two network service tiers: a default Premium network service tier and Standard network service tier.

291
Q

What are the differences between the premium tier and the standard tier?

A

The Premium tier is the default setting for GCP customers, offering users access to high-performance networking using Google’s entire global network, as described previously. In the Premium tier, Google uses cold potato routing, a form of network traffic routing in which Google will hold onto all network packets through the entire life cycle until they reach their destination. Once inbound traffic reaches Google’s POP, Google will use cold potato routing to hold onto packets until they reach their destination. Outbound traffic will be routed through Google’s network until it gets to an edge POP nearest to the user.

The Standard tier is a more cost-effective, lower-performance network that does not offer access to some features of Cloud networking (such as the ability to use a global load balancer) to save money. In the Standard tier, Google uses a hot potato routing method, whereby Google will offload your network traffic as fast as possible and hand it off to the public Internet to save you money.

292
Q

What is Private Google Access?

A

Private Google Access enables instances without external IP addresses to access resources outside of their network inside Google Cloud. This was created to avoid security concerns for sensitive workloads where you did not want to have to assign your VM an external IP address and have it communicate over the Internet to communicate with Google Cloud’s managed services. You can extend this access through your on-premises network if you have a VPN or an interconnect setup.

Private Google Access enables you to use Google managed services across your corporate intranet without having to access the Internet and exposing your GCP resources to Internet attackers. Remember that RFC 1918 is your friend from a security perspective. The fewer public IP addresses you are dealing with, the lower your external attack surface area. Private Google Access can be used to maintain a high security posture using private IP addressing within your network, while leveraging Google services and resources that live outside it but still inside Google Cloud. Google Cloud acts as a sort of DMZ in that you can leverage these services to bring assets (such as data files into GCS, public repositories in BigQuery, Git repositories in Cloud Source Repositories, and so on) into your ecosystem without actually exposing your network to the outside world. This significantly reduces your exposure to a network attack. You don’t always need to access the Internet from your environment to keep it up-to-date. You can have the Internet come to you via Google Cloud services and then access those resources indirectly through Private Google Access.

Imagine a scenario in which you need to install software on your VMs, and the content is located in an on-premises file server, but your organization does not permit or have connectivity (VPN or Interconnect) between your Google Cloud environment and your on-premises network. Moreover, your Google environment needs to install this software securely without accessing the Internet as per security requirements. You can enable this capability securely using Private Google Access. You can establish a secure TLS session from your on-premises environment to GCS when you need to upload your files using gsutil. Then you can set up Private Google Access on the GCP project’s subnet that your VM is located on, and just use the gsutil tool to download your content to the VM from GCS on GCP. It’s like a secret backroad to Google Cloud public endpoints, and Google acts as your security guard.

293
Q

What is Private Service Access?

A

Private Service Access enables you to connect to Google and third-party services that are located on other VPC networks owned by Google or third parties. So imagine a company that offers a service to you, hosted on its internal network on GCP, and you want to connect to that service without having to go over the public Internet. This is where you’d want to leverage Private Service Access. This sounds a bit more risky, but Google has many controls to ensure privacy between tenants to prevent security incidents. Still, you should assess each offering with the lens of your organization’s risk tolerance. This security model is similar to how AWS implemented a dedicated GovCloud for government agencies wanting a more secure cloud. The idea is that by never leaving the Google Cloud network and using trusted and certified third parties, you are able to leverage best-of-breed Software as a Service (SaaS) services or partner services without having to host these applications yourself in your own cloud.

294
Q

What is a Cloud Router?

A

Cloud Router is a managed service that dynamically exchanges routes between your VPC and your on-premises network using the Border Gateway Protocol (BGP) via a Dedicated Interconnect or Cloud VPN. This router is the foundational element of using the Cloud VPN and Cloud Interconnect.

BGP is a very complex exterior gateway path-vector routing protocol that advertises routing and reachability information among autonomous systems on the Internet. Virtually the whole Internet runs on BGP. There are two flavors of BGP you should be aware of—external BGP (eBGP) and internal BGP (iBGP).

295
Q

What are the bandwidth constraints of the various connectivity options?

A

Cloud VPN only supports up to 3 Gbps per tunnel, Partner Interconnect supports up to 10 Gbps, and Dedicated Interconnect supports up to 100 Gbps. If you get a question on the exam about speed, privacy, and connecting between on-premises to GCP—you know what to do.
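A quick back-of-the-envelope use of these limits: how many ~3 Gbps Cloud VPN tunnels you would need to aggregate to approximate a target throughput. This is a sketch; real tunnel aggregation also depends on ECMP routing and traffic mix.

```python
# Per-option limits from the card, and a helper to see how many VPN
# tunnels a target throughput would take -- at which point an
# Interconnect tier usually makes more sense.
import math

LIMITS_GBPS = {
    "cloud_vpn_tunnel": 3,
    "partner_interconnect": 10,
    "dedicated_interconnect": 100,
}

def vpn_tunnels_needed(target_gbps: float) -> int:
    return math.ceil(target_gbps / LIMITS_GBPS["cloud_vpn_tunnel"])

print(vpn_tunnels_needed(10))   # 4 tunnels to reach ~10 Gbps
print(vpn_tunnels_needed(100))  # 34 -- use Dedicated Interconnect instead
```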

296
Q

What concepts do you need to think about with regards to protecting networks?

A

When you think about the concept of prevention, detection, and response with respect to networks, you and your cloud provider both have roles to play in the shared responsibility matrix. The way you design your network, the network patterns that are risk-assessed and approved, the exceptions with mitigating controls, and all the configurations associated with it are preventative controls. Setting up your network according to best practices, as I’ve outlined in this chapter, is all preventative work—it will prevent bad behavior.

297
Q

What is a Google Front End?

A

We discussed a bit about the GFE earlier in the chapter. The GFE is a smart reverse-proxy that is on a global control plane in more than 80 locations around the world, all interconnected through Google’s global network. When a user tries to access an address on Google’s infrastructure, the GFE authenticates the user and employs TLS to ensure confidentiality and integrity. The load balancing algorithm is applied at the GFE servers to find an optimal backend, and then the connections are terminated at the GFE. After the traffic is proxied and GFE has determined an optimal backend, it will leverage a gRPC call to send the request to the backend. You get DoS protection from the GFE and DDoS protection from the GLB. It’s basically a traffic cop that decides who to route where in an orderly fashion.

298
Q

What are the restrictions and limitations of VPC firewall rules?

A

You use VPC firewall rules to allow or deny connections to or from your VMs based on a configuration that you specify. These rules are always enforced. In GCP, every VPC network functions as a distributed firewall, where rules are set at the network level and connections are allowed or denied on a per-instance basis. You can set rules between your instances and other networks and also between instances within your network. Firewall rules can apply to an ingress or egress connection, but not both. The only action you can take is to allow or deny. (I wish “New phone… Who dis?” was an option. That would be nice if you’re testing a rule.) You must select a VPC when you’re creating a firewall rule, because they’re distributed firewalls across VPCs. You can’t share a rule among VPC networks; you’d have to define a separate rule in other VPC networks. When it comes to shared VPC, your firewall rules are centralized and managed by the host project. Firewall rules are stateful, meaning that after a session is established, bidirectional traffic flow will be permitted.

299
Q

What are implicit firewall rules?

A

You need to consider a few implied rules, which are firewall rules that are built in to VPC Firewall by default; you cannot change them, but if you need to make a pattern that goes against the implied rule, you can give it a higher priority and it will take precedence. Implied rules were created to prevent a default, new project from being exposed to the world. There are two rules:

*Implicit Ingress Deny This rule will deny any ingress traffic to your VPC by default. You don’t want to open up your VPC to the outside world unless you really need to!

*Implicit Egress Allow This rule will allow your instances inside your VPC to send traffic to any destination if it meets Internet access criteria (that is, it has an external IP or uses NAT).
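The decision logic the implied rules create can be sketched as a priority-ordered evaluation (lower priority number wins, with the implied rules sitting at 65535). This is an illustrative model, not the actual VPC data plane.

```python
# Toy model of VPC firewall evaluation: explicit rules are checked by
# priority (lower number wins); the implied rules at priority 65535
# deny all ingress and allow all egress.

IMPLIED = [
    {"priority": 65535, "direction": "ingress", "action": "deny"},
    {"priority": 65535, "direction": "egress", "action": "allow"},
]

def evaluate(rules, direction, port):
    candidates = [r for r in rules + IMPLIED
                  if r["direction"] == direction and port in r.get("ports", [port])]
    return min(candidates, key=lambda r: r["priority"])["action"]

rules = [{"priority": 1000, "direction": "ingress", "action": "allow", "ports": [443]}]
print(evaluate(rules, "ingress", 443))  # allow  (explicit rule wins)
print(evaluate(rules, "ingress", 22))   # deny   (implied ingress deny)
print(evaluate(rules, "egress", 8080))  # allow  (implied egress allow)
```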

300
Q

How do VPC Service Controls come into play?

A

This is where VPC Service Controls come into play. They enable security administrators to define a security perimeter around managed services, such as GCS and BigQuery, to mitigate data exfiltration risks and keep data private inside of your defined perimeter. When you create this service perimeter, it effectively isolates resources of multitenant services and constrains the data from these services to fall under the enforcement boundaries that you create to protect your VPC. VPC Service Controls are awesome, but they are quite complex when you’re implementing them, so be patient and deliberate with your use case here.

301
Q

What is Identity-Aware Proxy?

A

Google follows a zero trust access model called BeyondCorp that shifts access controls from the network perimeter to individual users and devices, challenging users through a highly sophisticated authentication and authorization mechanism. This enables employees, contractors, and other users to work more securely from virtually anywhere in the world without the need for a traditional VPN. Identity Aware Proxy is one of the building blocks to building a zero trust model. Identity-Aware Proxy (IAP) is a mechanism to control access to your cloud-based and on-premises applications and VMs on GCP that uses identity and context to determine whether a user should be granted access.

302
Q

Memory-optimized machine types

A

Memory-optimized machine types are optimized for memory-intensive workloads, and they offer more memory per core than other machine types (up to 12TB of RAM). There are two families of memory-optimized machines: M1 and M2. M1 and M2 machines are optimized for ultra-high-memory workloads—think of large in-memory databases such as SAP HANA, Redis, or in-memory analytics.

303
Q

What are Compute-optimized machine types for?

A

Compute-optimized machine types are optimized for compute-intensive workloads. They offer more performance per core than other machine types. These machine types offer Intel Scalable processors and up to 3.8GHz of sustained all core turbo, which essentially means that the chips will be able to run all of their cores at a consistent maximum rate. There is one family of compute-optimized machine types, C2, and they’re typically used for high-performance computing, gaming, and single-threaded applications that are heavily CPU-intensive.

304
Q

Shielded VMs

A

Shielded VMs are a security feature designed to offer a verifiable integrity of your VM instances, so that you can be sure your instances are not compromised by boot- or kernel-level malware or rootkits. These are designed for more highly sensitive workloads or for organizations that have stricter compliance requirements. Shielded VMs leverage Secure Boot, with a virtual Trusted Platform Module (vTPM), and integrity monitoring to ensure that your virtual machine has not been tampered with.

305
Q

Confidential VMs

A

Confidential VMs are a breakthrough technology developed by Google Cloud that encrypts data while it is in use. This technology has never before been available to this extent. Encryption traditionally has been possible only for data at rest or data in transit. When you’re actively using data or an application is processing data, you would have to decrypt the data in CPU and memory for your system to do anything with it. With Confidential VMs, GCE is able to work on encrypted data without having to decrypt it. This is possible by leveraging the Secure Encrypted Virtualization feature of second-generation AMD EPYC CPUs. Basically, the CPU natively encrypts and decrypts all the in-process memory. So if a bad actor were able to get a memory dump from your system, they would not be able to forensically make sense of anything.

306
Q

How can a managed instance group (MIG) provide high availability with a single instance?

A

You can create MIGs with a minimum and maximum instance count set to 1. This enables your MIG to guarantee that at all times one instance of your VM is up and running within a region. This is a very cost-effective way to enable high availability without incurring the extra costs of having to keep two or more instances running at all times.

307
Q

What is OS Login, and how does it improve security over managing SSH keys manually?

A

OS Login is a mechanism to simplify SSH access management by linking your SSH users in Linux to their respective Google identities in Cloud Identity. If you aren’t using OS Login, your users will need to have separate credentials to log in to their respective VMs. You use OS Login for the same reason you’d want to use single sign-on (SSO) to simplify access management for your users. It enables you to manage the full life cycle of your Linux accounts through the governance of your Google identities. That way, you can manage identity and access management (IAM) permissions centrally through Cloud IAM, you can do automatic permission updates, and you can also import existing Linux accounts from your on-premises Active Directory and LDAP servers to ensure that they are synchronized for VMs across your environments.

You can also enable an organization policy to enforce OS Login to prevent a malicious user who does not have proper authorizations through a Google identity from getting direct SSH access to your VMs. It’s a lot more difficult to manage the full SSH key life cycle if you are manually provisioning privileged users on your VM instances.

308
Q

What are the difficulties of API management?

A

Creating, publishing, and managing APIs is quite a challenge for many organizations. If you’re in the business of creating APIs, whether internal or external, it’s not easy to govern and manage all of your API endpoints in a consistent manner. This is where using API management platforms comes in handy. Modern API management platforms offer the tools to develop, secure, publish, and manage your APIs in a consistent and often policy-based manner. API management platforms such as Apigee are fully developed platforms that will handle the full API life cycle, whereas other solutions such as Cloud Endpoints are not in parity when it comes to features and functionality.

309
Q

What is Apigee?

A

Apigee, a Google acquisition, is a full end-to-end OpenAPI-compliant API management platform that enables you to manage the full API life cycle in any cloud, including multi- and hybrid-cloud environments. This incredibly robust product was ranked by Forrester as a leader in API management solutions in Q3 2020.

The most useful capability that Apigee offers is the fact that it is an API proxy. As a result, for API clients, it presents your business services as managed “facades” for backend services. Technically, your backend need not support HTTP, be modernized, or even understand the concept of microservices. Apigee can act as a translation layer between the modern client-facing REST API presentation layer your business is exposing to its clients and whatever new or old technologies you have lurking in the back corners of your enterprise. Furthermore, as a proxy, it can control the end-to-end security, transaction rates, access controls, and so on, for your business-exposed APIs, enabling a company to modernize its business face without necessarily having to reinvent its backend.

310
Q

Cloud Endpoints

A

Cloud Endpoints is an API management platform that enables you to secure, manage, and monitor your APIs on Google Cloud.
This greatly reduces the need to manage all of your APIs manually in Google Cloud. You can build all of your API documentation in a developer-accessible portal.
With Cloud Endpoints, you can choose from three options: OpenAPI, gRPC, or Cloud Endpoints Frameworks for the App Engine standard environment. This offering gives your development teams the ability to focus on developing their APIs instead of building custom frameworks.

311
Q

Why do you need to secure your APIs?

A

APIs expose application logic and sensitive data by their intended design, and this continues to become a target for attackers. Products such as Apigee and Cloud Endpoints can prevent a lot of these flaws by default or provide you the ability to configure your APIs in a consistent, secure manner.

312
Q

What are your tools to secure your APIs?

A

Here are some recommendations to leverage when you’re thinking about securing your APIs:

  • Classify your APIs and design a reference architecture for the required controls and approved patterns based on the classification of the API. For example, you can have public-facing APIs, internal APIs, and partner-facing APIs, each with a different level of security controls.
  • Implement rate limiting to prevent denial-of-service attacks.
  • Be aware of excessive data exposure, and think about how you can filter unnecessary data before it’s displayed back to the end user of the API.
  • Use a common configuration and monitor your APIs for security misconfigurations.
  • Validate your inputs—ensure that your API is not the subject of injection flaws, such as SQL injections, to avoid malicious code being executed by an attacker.
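The rate-limiting recommendation above can be sketched with a token bucket, a common algorithm for the job. This is an illustrative sketch, not how Apigee or Cloud Endpoints implement it; all names and parameters are hypothetical.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: allow at most `rate` requests
    per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond with HTTP 429 Too Many Requests

# A bucket of capacity 2 rejects the third request in a rapid burst.
bucket = TokenBucket(rate=1.0, capacity=2)
print([bucket.allow(), bucket.allow(), bucket.allow()])  # [True, True, False]
```

In a real deployment you would enforce this at the API gateway layer rather than in application code, so every endpoint gets a consistent policy.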
313
Q

What is Block Storage?

A

Block storage refers to a data system in which the data is broken up into blocks and then stored across a distributed system to maximize efficiency. These storage blocks get unique identifiers so that they can be referenced in order to piece the data back together. Block storage systems are great when you need to retrieve and manipulate data quickly through a layer of abstraction (such as an operating system) that is responsible for accessing the data points across the blocks. The downside to block storage, however, is that because your data is split and chunked across blocks, there is no ability to leverage metadata.

314
Q

What is object storage?

A

Object storage refers to a data system with a flat structure that contains objects, and within objects are data, metadata, and a unique identifier. This type of storage is very versatile, enabling you to store massive amounts of unstructured data and still maintain simple data accessibility. You can use object storage for anything—unstructured data, large data sets, you name it. However, because of the nature of this flat system, you’ll need to manage your metadata effectively to be able to keep your objects accessible. In Google Cloud, object storage is associated with technologies such as Google Cloud Storage.

315
Q

What is a BigQuery Dataset?

A

BigQuery datasets are top-level containers that are used to organize and control access to tables and views. As you can imagine, tables consist of rows and columns of data records. Like other databases, each table is defined by a table schema, which should be created in the most efficient way. Views are virtual tables defined by a query. You can create authorized views to share query results with only a particular set of users and groups without giving them access to the tables directly.

316
Q

What are the five key elements in Pub/Sub?

A
  • Publisher The client that creates messages and publishes them to a specific topic
  • Message The data that moves through Pub/Sub
  • Topic A named resource that represents a feed of messages
  • Subscription A named resource that receives all specified messages on a topic
  • Subscriber The client that receives messages from a subscription
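The five elements can be illustrated with a minimal in-memory sketch. This models the concepts only, not the actual Cloud Pub/Sub client library; all class and function names are hypothetical.

```python
class Topic:
    """A named resource that represents a feed of messages."""
    def __init__(self, name):
        self.name = name
        self.subscriptions = []

class Subscription:
    """A named resource that receives all messages published to its topic
    and delivers them to its subscriber (a callback here)."""
    def __init__(self, name, topic, callback):
        self.name = name
        self.callback = callback
        topic.subscriptions.append(self)

def publish(topic, message):
    """The publisher creates a message and publishes it to a topic;
    each subscription delivers it to its subscriber."""
    for sub in topic.subscriptions:
        sub.callback(message)

received = []
orders = Topic("orders")                               # topic
Subscription("orders-audit", orders, received.append)  # subscription + subscriber
publish(orders, {"order_id": 42})                      # publisher sends a message
print(received)  # [{'order_id': 42}]
```

Note that real Pub/Sub decouples these steps: delivery is asynchronous, and subscribers pull or receive pushes rather than being invoked synchronously.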
317
Q

What are pros/cons of External Key Manager?

A

To counter the limited adoption of customer-supplied encryption keys (CSEK), Google launched a new service, External Key Manager (EKM), which enables organizations to supply their own keys through a supported external key management partner to protect their data in the cloud. With EKM, your keys are not stored in Google; they are stored with a third party, and you typically own and control their location, distribution, access, and management. It’s a fairly new service that will take some time to be supported across the spectrum, but EKM is the most promising solution for customers that need to fully own their encryption keys. In my opinion, you should use either default encryption or EKM and nothing in between, but it’s going to take some time before the product is fully mature and compatible.

318
Q

When would you use Cloud Spanner, Cloud SQL, BigQuery, Bigtable, Firestore, or Memorystore?

A
  • If you need SQL queries via an OLTP system, use Cloud Spanner or Cloud SQL.
  • If you need interactive querying via an OLAP system, use BigQuery.
  • If you need a strong NoSQL database for analytical workloads such as time-series data and IoT data, use Bigtable.
  • If you need to store structured data in a document database with support for ACID transactions and SQL-like queries, use Cloud Firestore.
  • If you need in-memory data storage, use Memorystore.
319
Q

What is the DevOps Philosophy?

A

The philosophy of DevOps challenges decades of software development, where the people, process, and technology were built on longstanding belief systems based on clearly defined roles for developers, QA, and operations teams, and a rigid structure around the deployment process. Traditional software development uses a factory model of moving work through a conveyor belt of roles (processes), which produces a consistent level of quality as an output. This has frequently proved to be a false axiom, however, as the individuals involved were often the reason why processes either succeeded or failed.

320
Q

There are five key pillars of success in the DevOps philosophy:

A

*Reduce organizational silos

*Accept failure as normal

*Implement gradual changes

*Leverage tooling and automation

*Measure everything

321
Q

What is IAC?

A

IaC is the practice of writing the elements of your infrastructure in code form, which can be interpreted by tools such as Terraform, Ansible, and Google Deployment Manager. You’re basically treating your infrastructure as you would treat your software—with clear code, source code repositories, approved patterns and strong change management, misconfiguration detection and prevention, and rapid deployment.
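Google Deployment Manager, one of the tools named above, accepts templates written in Python: a generate_config(context) function returns the resources to create. A minimal sketch follows; the resource name and properties are illustrative, not a complete, deployable instance definition.

```python
def generate_config(context):
    """Deployment Manager calls this with a `context` object carrying the
    deployment's properties; it returns the resources to provision."""
    zone = context.properties["zone"]
    return {
        "resources": [{
            "name": "web-vm",
            "type": "compute.v1.instance",
            "properties": {
                "zone": zone,
                "machineType": f"zones/{zone}/machineTypes/e2-medium",
            },
        }]
    }

# Local smoke test with a stand-in for the Deployment Manager context:
class FakeContext:
    properties = {"zone": "us-central1-a"}

config = generate_config(FakeContext())
print(config["resources"][0]["name"])  # web-vm
```

Because the template is just code, it can live in a source repository, go through code review, and be scanned for misconfigurations like any other software.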

322
Q

What are the benefits of using IAC?

A

*Having the ability to spin up any scale infrastructure by deploying a template speeds up the entire task of deploying infrastructure by an unmatched margin.

*Managing configurations for thousands of servers, services, and beyond can and will always lead to humans making errors and misconfigurations, which could cause a whole slew of issues for your applications, from operational issues to security issues. Using IaC enables you to centrally manage these configurations, scan them for deviances, and govern them centrally.

*Eliminating the need to manually provision new servers and services minimizes the time to deploy your applications. By using IaC, if your application goes through a massive iteration and you need to attach a new database, Pub/Sub sink, or VMs, you can modify your templates rather than having to plan to perform these activities as part of your deployment.

*Freeing up development time for your team to focus on building, testing, and managing your applications, rather than constantly provisioning infrastructure manually, saves money.

*Stop looking at infrastructure as immutable. For high-velocity IT development teams to work as efficiently as possible, give them their own temporary environments to test their new code. Then destroy the environment when they are done. The whole point of public cloud is on-demand infrastructure. This can be achieved only via automation and IaC.

323
Q

What are the benefits of using Blue-Green Deployment?

A

Blue-green deployment involves creating two identical environments—a blue environment and green environment—so that when a release is deployed to one environment, the other environment can be held as a reserve. The idea is that you can deploy the release to one environment and switch all your users over to the new release, while still maintaining your old environment in case you need to fall back to it, without having to do a full rollback. As you can imagine, the infrastructure costs double, and if the application footprint is too large, this is not always a feasible deployment strategy.
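The switch-over and instant fallback at the heart of blue-green can be sketched as a router that flips all traffic between the two environments at once. This is a conceptual sketch; the names are hypothetical, and in practice the flip happens at a load balancer or DNS layer.

```python
class BlueGreenRouter:
    def __init__(self):
        self.environments = {"blue": "v1", "green": None}
        self.live = "blue"  # all users hit the live environment

    def deploy(self, version):
        idle = "green" if self.live == "blue" else "blue"
        self.environments[idle] = version  # release goes to the idle env
        self.live = idle                   # switch every user at once

    def rollback(self):
        # The previous environment is still intact, so fallback is instant.
        self.live = "green" if self.live == "blue" else "blue"

router = BlueGreenRouter()
router.deploy("v2")
print(router.live, router.environments[router.live])  # green v2
router.rollback()
print(router.live, router.environments[router.live])  # blue v1
```

The cost trade-off described above is visible here: both environments must exist at full size for the flip and the fallback to work.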

324
Q

What is a Rolling Deployment?

A

In a rolling deployment, you maintain one production environment that may consist of many servers with a load balancer in front of them. When you deploy your application, you stagger the deployment across servers, so that some servers run the new application version and others continue to host the old version. This enables you to test real-life traffic and load and potentially identify issues before the application is fully deployed. If you do have an issue, you can just divert all of your users to the servers that do not have the latest release (rather than having to roll back servers entirely). This can be a complicated process, especially around major changes, because the support team managing your application will have to understand how to troubleshoot both users on the older versions as well as users who’ve been routed to the new version.
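The staggering described above can be sketched as updating servers in small batches behind the load balancer, stopping early if an updated server fails its health check. Illustrative only; server names and the health-check signature are hypothetical.

```python
def rolling_deploy(servers, new_version, batch_size, healthy):
    """Update `servers` (a dict of name -> version) in batches; stop
    the rollout early if an updated server fails its health check."""
    names = list(servers)
    for i in range(0, len(names), batch_size):
        batch = names[i:i + batch_size]
        for name in batch:
            servers[name] = new_version
        if not all(healthy(name) for name in batch):
            return False  # remaining servers keep serving the old version
    return True

# web-3 fails its health check, so web-4 is never updated.
fleet = {"web-1": "v1", "web-2": "v1", "web-3": "v1", "web-4": "v1"}
ok = rolling_deploy(fleet, "v2", batch_size=1, healthy=lambda name: name != "web-3")
print(ok, fleet["web-4"])  # False v1
```

The mixed-version fleet left behind after a failed batch is exactly the support complication the card describes: some users are on the new version, others on the old.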

325
Q

Canary Deployment

A

Canary deployment involves making the new release available to a subset of users before other users. It is similar to a rolling deployment in the sense that some of your users will get access to the new release before others, but in a canary deployment, you’re targeting users, not servers. Your infrastructure costs will be higher with this type of deployment because you are maintaining two sets of infrastructure, though your usage on the infrastructure where you target your canary users probably won’t be too high if your application is designed to scale on demand.
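Because a canary targets users rather than servers, a common sketch is to route a stable percentage of users by hashing their ID, so the same user always lands in the same group. This is illustrative; real systems typically do this with feature flags or load-balancer traffic splitting.

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    """Deterministically place about `percent`% of users in the canary group."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100 < percent

def pick_version(user_id: str) -> str:
    """Route a request based on the user, not the server handling it."""
    return "v2-canary" if in_canary(user_id, percent=10) else "v1-stable"

# The same user gets the same answer on every request.
print(pick_version("alice") == pick_version("alice"))  # True
```

Hashing rather than random assignment matters: a user who flip-flopped between versions on every request would see inconsistent behavior.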

326
Q

What are A/B deployments?

A

A/B deployment is more focused on testing different changes on end users to understand which they prefer. The idea here is to have half of your users work with version A, while the other half works with version B. It’s a way for you to understand how your customers are using your new version and derive insights from their usage patterns to drive customer happiness.

327
Q

What is the Container Registry?

A

Container Registry is a private Docker repository in which you can store, manage, and secure your Docker container images. You can also perform vulnerability analysis and manage access control to the container images. With Container Registry, you can integrate your CI/CD pipelines to design fully automated Docker pipelines. When you’re using the Container Registry, you can automatically build and push images to your private registries immediately upon committing code to your source code repository tools such as Cloud Source Repositories, GitHub, or BitBucket.

328
Q

What are the three key themes for optimizing your day 2 operations?

A

Design resilient services
Standardize deployments and incorporate automation. Architectural standards and deploying with automation help you standardize your builds, tests, and deployments. Use managed services wherever possible.

Operational tools portfolio
Focus on improving operational excellence rather than managing the tools themselves. Understanding what tools are available and how they can help you identify and resolve an issue will reduce the impact and return your applications to operation faster.

Improve your cloud operations
This theme covers monitoring performance, detecting anomalies, and securing your cloud against abuse. Google Cloud operational tooling provides insights that help you understand what is happening faster and, in turn, improve the efficiency of your operations.

329
Q

What are webhooks?

A

Webhooks are one of a few ways web applications can communicate with each other.

A webhook allows you to send real-time data from one application to another whenever a given event occurs.

For example, let’s say you’ve created an application using the Foursquare API that tracks when people check into your restaurant. You ideally want to be able to greet customers by name and offer a complimentary drink when they check in.

What a webhook does is notify you any time someone checks in, so you’d be able to run any processes that you had in your application once this event is triggered.

The data is then sent over the web from the application where the event originally occurred, to the receiving application that handles the data.

330
Q

How do webhooks work?

A

This exchange of data happens over the web through a “webhook URL.”

A webhook URL is provided by the receiving application, and acts as a phone number that the other application can call when an event happens.

Only it’s more complicated than a phone number, because data about the event is sent to the webhook URL in either JSON or XML format. This is known as the “payload.”
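A concrete sketch of that payload and what the receiver does with it: the URL, event name, and fields below are all hypothetical, but they show the shape of what arrives at a webhook URL.

```python
import json

# Hypothetical JSON payload POSTed to a webhook URL such as
# https://example.com/webhooks/checkins (URL and fields are illustrative).
payload = json.dumps({
    "event": "user.checkin",
    "data": {"user": "Ada", "venue": "Main St. Restaurant"},
})

def handle_webhook(body: str) -> str:
    """Parse the payload and run whatever process the event triggers."""
    event = json.loads(body)
    if event["event"] == "user.checkin":
        return f"Greet {event['data']['user']} and offer a drink"
    return "ignored"

print(handle_webhook(payload))  # Greet Ada and offer a drink
```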

332
Q

why should you use a webhook?

A

Imagine you run a membership site. Every time a customer pays you through a payment gateway like Stripe, you have to manually input their details into your membership management application. This is just so the user can log in.

This, of course, quickly becomes tedious as the rate of new members increases. If only Stripe and your membership software could communicate with one another; that way, anyone who pays through Stripe would be added as a member automatically.

Using a webhook is one way to make this happen.

Let’s assume that both Stripe and the membership management software have webhook integrations. You’d then be able to set up the Stripe integration to automatically transfer a user’s information over, each time payment is made.

[Figure: webhooks flow]

Webhooks are an incredible tool that saves you a lot of work. And the fact that they’re so popular means that you’ll be able to integrate most of the web apps you currently use.

For example, connecting your email marketing software with other applications through a webhook can open up a lot of possibilities:

You can use a webhook to connect a payment gateway with your email marketing software so that a user gets an email whenever a payment bounces.

You can use webhooks to sync customer data in other applications. For example, if a user changes their email address you can ensure that the change is reflected in your CRM as well.

You can also use webhooks to send information about events to external databases or data warehouses like Amazon’s Redshift or Google Big Query for further analysis.

333
Q

What is the key difference between a webhook and an API?
You’ll often hear APIs and webhooks mentioned together. And while they’re similar in what they can help you achieve – they’re not the same thing.

A

As mentioned earlier, webhooks are just one of the ways that different applications use to communicate with each other, and another is through an application programming interface (API).

Their use cases are very similar, but the prime difference between APIs and webhooks is in how they receive data.

334
Q

What is the difference between the way a webhook gets data and the way an API does?

A

With an API, you get data through a process known as “polling.” This is when your application periodically makes a request to an API server to check for new data.

A webhook, on the other hand, allows the provider to send (i.e., “push”) data to your application as soon as an event occurs. This is why webhooks are sometimes referred to as “reverse APIs.”

APIs need to pull data from a server periodically to stay up to date, but with webhooks, the server can push this data over to you the instant something happens.

To use a real world analogy, APIs would be likened to you repeatedly calling a retailer to ask if they’ve stocked up on a brand of shoes you like.

Webhooks would then be like asking the retailer to call you whenever they have the shoes in stock, which frees up time on both sides.

Webhooks are less resource-intensive because they save you time on constantly polling (checking) for new data.
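The polling-versus-push difference can be sketched side by side. Illustrative only; the function names are hypothetical.

```python
# A toy event source that both styles consume.
events = []

def poll_for_new_data(seen_count):
    """API style: the client must repeatedly ask whether there is new data."""
    return events[seen_count:]

subscribers = []

def on_event(callback):
    """Webhook style: register a callback for the provider to push to."""
    subscribers.append(callback)

def fire(event):
    events.append(event)
    for callback in subscribers:  # pushed immediately, no polling needed
        callback(event)

pushed = []
on_event(pushed.append)
fire("shoes-in-stock")
print(poll_for_new_data(0))  # ['shoes-in-stock']  (found only when we ask)
print(pushed)                # ['shoes-in-stock']  (delivered the moment it happened)
```

The polling client burns a request on every check, even when nothing has changed; the push callback runs exactly once per event.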

335
Q

If webhooks are easier to set up, less resource-intensive, and faster than APIs, why use APIs at all?

A

Well, APIs are still popular for a number of reasons:

  • Not every application supports webhook integrations.
  • Sometimes you only want to know about the end result, rather than every event (i.e., every permutation) that changed an object.
  • Webhooks can only notify you about an event, so if you want to make a change based on new information, you’ll need an API.
  • A webhook payload may not contain all the data you need about an event.

So APIs are still really useful, which is why a lot of applications support both APIs and webhooks.

Bottom line is, if your goal is to transfer data between two services, webhooks are the way to go.

336
Q

With Cloud Monitoring, when will we be able to connect to an internal webhook (VPC has connection with on-premises via interconnect)?

A

For connecting to internal webhooks, you can route them through Pub/Sub. Uptime checks work against private endpoints as well. See Creating custom notifications with Cloud Monitoring and Cloud Run for more information.

337
Q

How can a Google Cloud customer deliver alert notifications through Cloud Pub/Sub?

A

In the example, two monitoring alerting policies are created using Terraform: one is based on the GCE instance CPU usage_time metric and the other is based on the GCE instance disk read_bytes_count metric. Both alert policies use Cloud Monitoring Pub/Sub notification channels to send alert notifications. A Cloud Pub/Sub push subscription is configured for each Cloud Pub/Sub notification channel. The push endpoints of the Cloud Pub/Sub push subscriptions are pointed to the Cloud Run service we implement so that all the alert notifications sent to the Cloud Pub/Sub notification channels are forwarded to the Cloud Run service. The Cloud Run service is a simple HTTP server that transforms the incoming Cloud Pub/Sub messages into Google Chat messages and sends them to the configured Google Chat rooms via their incoming webhook URLs.

https://cloud.google.com/blog/products/operations/write-and-deploy-cloud-monitoring-alert-notifications-to-third-party-services
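The transformation step in that Cloud Run service can be sketched as follows. A Pub/Sub push request wraps the alert in an envelope whose message.data field is base64-encoded; the alert field names below follow the Cloud Monitoring notification shape but should be treated as illustrative, as should the simple Google Chat {"text": ...} message format.

```python
import base64
import json

def pubsub_to_chat(envelope: dict) -> dict:
    """Decode a Pub/Sub push envelope and shape the alert into a
    Google Chat webhook message ({"text": ...})."""
    data = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
    incident = json.loads(data)["incident"]
    return {"text": f"Alert: {incident['policy_name']} is {incident['state']}"}

# Simulate the push request body that a Cloud Monitoring alert
# forwarded through Pub/Sub might deliver.
alert = {"incident": {"policy_name": "High CPU", "state": "open"}}
envelope = {"message": {"data": base64.b64encode(json.dumps(alert).encode()).decode()}}
print(pubsub_to_chat(envelope))  # {'text': 'Alert: High CPU is open'}
```

The HTTP server part of the service would simply parse the request body as JSON, call a function like this, and POST the result to the chat room’s incoming webhook URL.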

338
Q

When we migrate legacy applications, clients are also legacy and are not compliant to all OWASP rules. We need to permit certain runtime exceptions and block SQL injection kind of rules. These are not always known upfront and are detected Day 2 onwards. What are the ways to quickly mitigate them?

A

You have control over which rules are applied in your security policies. You can turn off the few rules that are blocking your legacy clients. Instructions are available in how to write your own security policies:

You can configure Google Cloud Armor security policies, rules, and expressions by using the Google Cloud console, the Google Cloud CLI, or the REST API. When you use the gcloud CLI to create security policies, use the --type flag to specify whether the security policy is a backend security policy or an edge security policy.

Configure Google Cloud Armor security policies

https://cloud.google.com/armor/docs/configure-security-policies#atomic-update
