ACE Failed Topic Review Flashcards

1
Q

What services fall under ‘Networking’?

A

1) Virtual Private Cloud (VPC)
2) Cloud Load Balancing
3) Cloud CDN
4) Cloud Interconnect
5) Cloud DNS
6) Network Service Tiers (alpha)

2
Q

What services fall under ‘Big Data’?

A

1) BigQuery
2) Cloud Dataflow
3) Cloud Dataproc
4) Cloud Datalab
5) Cloud Dataprep (beta)
6) Cloud Pub/Sub
7) Genomics
8) Google Data Studio (beta)

3
Q

What services fall under ‘Data Transfer’?

A

1) Google Transfer Appliance
2) Google Storage Transfer Service
3) Google BigQuery Data Transfer Service

4
Q

What services fall under ‘Machine Learning’?

A

1) Cloud Machine Learning Engine
2) Cloud Job Discovery (beta)
3) DialogFlow Enterprise Edition
4) Cloud Natural Language
5) Cloud Speech API
6) Cloud Translation API
7) Cloud Vision API
8) Cloud Video Intelligence

5
Q

What services or features fall under ‘Identity & Security’?

A

1) Cloud IAM
2) Cloud Identity-Aware Proxy
3) Cloud Data Loss Prevention API (beta)
4) Security Key Enforcement
5) Cloud Key Management Service
6) Cloud Resource Manager
7) Cloud Security Scanner

6
Q

What activities are required to set up a cloud solution environment?

A

Setting up accounts and projects:
-Creating a resource hierarchy
-Applying organizational policies to the resource hierarchy
-Granting members IAM roles within a project
-Managing users and groups in Cloud Identity
-Enabling APIs within projects
-Provisioning and setting up products in Google Cloud’s operations suite

Managing billing configuration:
-Creating one or more billing accounts
-Linking projects to a billing account
-Establishing billing budgets and alerts
-Setting up billing exports

Installing and configuring the command line interface (CLI), specifically the Cloud SDK (e.g., setting the default project)
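
A minimal sketch of this initial setup with the gcloud CLI; the project ID, billing account ID, and API name are hypothetical placeholders, and the billing command may require the beta component depending on your SDK version:

gcloud auth login   # obtain credentials for your user account
gcloud config set project my-project-id   # set the default project for subsequent commands
gcloud services enable compute.googleapis.com   # enable an API within the project
gcloud beta billing projects link my-project-id --billing-account=0X0X0X-0X0X0X-0X0X0X   # link the project to a billing account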

7
Q

What are the steps to create a resource hierarchy?

A

Create Organization
Create folders (one for each department)
Create projects in appropriate folders

8
Q

Purpose of the Google Cloud resource hierarchy is two-fold:

A

Provide a hierarchy of ownership, which binds the lifecycle of a resource to its immediate parent in the hierarchy.

Provide attach points and inheritance for access control and organization policies.

9
Q

Who owns the project resource in the hierarchy?

A

With an organization resource, project resources belong to your organization instead of the employee who created the project. This means that the project resources are no longer deleted when an employee leaves the company; instead they will follow the organization resource’s lifecycle on Google Cloud.

10
Q

Where can you set an IAM policy on a resource?

A

You can set an IAM policy at the organization level, the folder level, the project level, or (in some cases) the resource level.
Resources inherit the policies of the parent resource. If you set a policy at the organization level, it is inherited by all its child folder and project resources, and if you set a policy at the project level, it is inherited by all its child resources.

11
Q

How do you determine the effective policy for a resource?

A

The effective policy for a resource is the union of the policy set on the resource and the policy inherited from its ancestors.
This inheritance is transitive: resources inherit policies from the project, which inherits policies from the organization resource.
In effect, organization-level policies also apply at the resource level.

12
Q

What happens to inherited resource permissions when you move a project to a new location?

A

IAM policy hierarchy follows the same path as the Google Cloud resource hierarchy. If you change the resource hierarchy, the policy hierarchy changes as well.

moving a project resource from one folder resource to another will change the inherited permissions. Permissions that were inherited by the project resource from the original parent resource will be lost when the project resource is moved to a new folder resource. Permissions set at the destination folder resource will be inherited by the project resource as it is moved.
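
A minimal sketch of such a move with gcloud; the project ID and folder ID are hypothetical placeholders, and on older SDK versions the command may live under gcloud beta projects move:

gcloud projects move my-project-id --folder=123456789012   # re-parent the project under a new folder; it now inherits that folder's IAM policy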

13
Q

What types of users can create an organization resource?

A

Google Workspace and Cloud Identity customers can create organization resources.
Each Google Workspace or Cloud Identity account is associated with one organization resource.
When an organization resource exists, it is the top of the Google Cloud resource hierarchy, and all resources that belong to an organization are grouped under the organization resource.

14
Q

What prerequisites are required to create folder resources?

A

An organization resource is required as a prerequisite to use folders. Folder resources and their child project resources are mapped under the organization resource.

15
Q

What is the benefit of having Google Cloud organization and folder resources?

A

Organization and folder resources allow companies to map their organization onto Google Cloud.
These provide logical attachment points for access management policies (IAM) and Organization policies.

16
Q

Are organization resources required for Google Cloud?

A

Google Cloud users are not required to have an organization resource, but some features of Resource Manager will not be usable without one.

The organization resource is closely associated with a Google Workspace or Cloud Identity account.

When a user with a Google Workspace or Cloud Identity account creates a Google Cloud project resource, an organization resource is automatically provisioned for them.

17
Q

What restrictions apply when a managed user (Google Workspace or Cloud Identity) creates a project?

A

If a user specifies an organization resource and they have the right permissions, the project is assigned to that organization.
Otherwise, it will default to the organization resource the user is associated with.

18
Q

What happens when you adopt Cloud Identity for an IAM hierarchy?

A

When you adopt Cloud Identity, you create a Cloud Identity account for each of your users and groups.

You can then use Identity and Access Management (IAM) to manage access to Google Cloud resources for each Cloud Identity account.

19
Q

Are you able to migrate projects from one organization to another?

A

Yes. Check the services you use to see what they allow for migrated project resources.
You need IAM permissions to move a project resource.
If need be, the project can be moved back.
Use import and export folders to stage the migration.

20
Q

Where can you set an IAM Policy?

A

You can set an IAM policy at the organization level, the folder level, the project level, or (in some cases) the resource level.

Resources inherit the policies of the parent resource.

If you set a policy at the organization level, it is inherited by all its child folder and project resources, and if you set a policy at the project level, it is inherited by all its child resources.

21
Q

Can you remove a permission that was granted at a higher level resource?

A

Roles are always inherited, and there is no way to explicitly remove a permission for a lower-level resource that is granted at a higher level in the resource hierarchy.

22
Q

If you change the Google Cloud resource hierarchy, what happens to the policy hierarchy?

A

The IAM policy hierarchy follows the same path as the Google Cloud resource hierarchy. If you change the resource hierarchy, the policy hierarchy changes as well. For example, moving a project into an organization resource will update the project’s IAM policy to inherit from the organization resource’s IAM policy.

23
Q

What happens when a project moves from one folder resource to another?

A

Moving a project resource from one folder resource to another will change the inherited permissions. Permissions that were inherited by the project resource from the original parent resource will be lost when the project resource is moved to a new folder resource. Permissions set at the destination folder resource will be inherited by the project resource as it is moved.

24
Q

How do you use projects for organizing resources?

A

Use projects to group resources that share the same trust boundary. For example, resources for the same product or microservice can belong to the same project.

25
Q

Why should you audit your allow policies?

A

Audit your allow policies to ensure compliance. Audit logs contain all setIamPolicy() calls, so you can trace when an allow policy has been created or modified.
Audit the ownership and the membership of the Google groups used in allow policies.

26
Q

How do you limit project creation in your organization?

A

If you want to limit project creation in your organization, change the organization-level IAM policy to grant the Project Creator role only to a group that you manage.

Remove the default Project Creator grant that is set up for the whole domain when the organization resource is created.
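
A minimal sketch of adjusting the organization-level IAM bindings with gcloud; the organization ID, domain, and group address are hypothetical placeholders:

gcloud organizations remove-iam-policy-binding 123456789012 --member=domain:example.com --role=roles/resourcemanager.projectCreator   # drop the default grant to everyone in the domain
gcloud organizations add-iam-policy-binding 123456789012 --member=group:project-creators@example.com --role=roles/resourcemanager.projectCreator   # grant the role to a managed group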

27
Q

What are the benefits of the Organization Policy Service?

A

Centralize control to configure restrictions on how your organization’s resources can be used.
Define and establish guardrails for your development teams to stay within compliance boundaries.
Help project owners and their teams move quickly without worry of breaking compliance.

28
Q

What are common use cases for the organization policies?

A

Organization policies are made up of constraints that allow you to:

Limit resource sharing based on domain.
Limit the usage of Identity and Access Management service accounts.
Restrict the physical location of newly created resources.

29
Q

What are the differences between Identity and Access Management and the Organization Policy Service?

A

Identity and Access Management focuses on who, and lets the administrator authorize who can take action on specific resources based on permissions.

Organization Policy focuses on what, and lets the administrator set restrictions on specific resources to determine how they can be configured.

30
Q

What is a constraint for an organization policy service?

A

A constraint is a particular type of restriction against a Google Cloud service or a list of Google Cloud services. Think of the constraint as a blueprint that defines what behaviors are controlled. This blueprint is then applied to a resource hierarchy node (folder, project, or org) as an organization policy, which implements the rules defined in the constraint. The Google Cloud service mapped to that constraint and associated with that resource hierarchy node will then enforce the restrictions configured within the organization policy.
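
A minimal sketch of listing and enforcing a boolean constraint with gcloud; the organization ID is a hypothetical placeholder and the constraint shown is just one illustrative example:

gcloud resource-manager org-policies list --organization=123456789012   # see which constraints are set on the organization node
gcloud resource-manager org-policies enable-enforce compute.disableSerialPortAccess --organization=123456789012   # enforce a boolean constraint as an organization policy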

31
Q

What are the different storage classes for any workload?

A

Save costs without sacrificing performance by storing data across different storage classes. You can start with a class that matches your current use, then reconfigure later for cost savings (see the gsutil example after the class list).

Standard Storage: Good for “hot” data that’s accessed frequently, including websites, streaming videos, and mobile apps.

Nearline Storage: Low cost. Good for data that can be stored for at least 30 days, including data backup and long-tail multimedia content.

Coldline Storage: Very low cost. Good for data that can be stored for at least 90 days, including disaster recovery.

Archive Storage: Lowest cost. Good for data that can be stored for at least 365 days, including regulatory archives.
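
A minimal gsutil sketch of choosing a storage class and later changing it; the bucket, object, and location names are hypothetical placeholders:

gsutil mb -c nearline -l us-central1 gs://my-example-bucket   # create a bucket with Nearline as its default class
gsutil rewrite -s coldline gs://my-example-bucket/backup.tar   # rewrite an existing object into the Coldline class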

32
Q

What are the different persistent disk types?

A

Standard persistent disks (pd-standard) are backed by standard hard disk drives (HDD).

Balanced persistent disks (pd-balanced) are backed by solid-state drives (SSD). They are an alternative to SSD persistent disks that balance performance and cost.

SSD persistent disks (pd-ssd) are backed by solid-state drives (SSD).

Extreme persistent disks (pd-extreme) are backed by solid-state drives (SSD). With consistently high performance for both random access workloads and bulk throughput, extreme persistent disks are designed for high-end database workloads. Unlike other disk types, you can provision your desired IOPS. For more information, see Extreme persistent disks.
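
A minimal sketch of creating disks of these types with gcloud; the disk names, sizes, and zone are hypothetical placeholders:

gcloud compute disks create standard-disk --type=pd-standard --size=500GB --zone=us-central1-a   # HDD-backed disk
gcloud compute disks create ssd-disk --type=pd-ssd --size=100GB --zone=us-central1-a   # SSD-backed disk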

33
Q

How can you protect against data loss on persistent disks?

A

You can create snapshots of persistent disks to protect against data loss due to user error. Snapshots are incremental and take only minutes to create, even if you snapshot disks that are attached to running instances.
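
A minimal sketch of snapshotting a disk with gcloud; the disk name, snapshot name, and zone are hypothetical placeholders:

gcloud compute disks snapshot my-data-disk --snapshot-names=my-data-disk-snap1 --zone=us-central1-a   # create an incremental snapshot of the disk
gcloud compute snapshots list   # confirm the snapshot was created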

34
Q

What are the six steps to set up a cloud solution?

A
35
Q

What are examples of Global Resources?

A

Images
Snapshots
VPC Network
Firewalls
Routes

36
Q

What are examples of regional resources?

A

Static external IP Addresses
Subnets

37
Q

What are examples of zonal resources?

A

Compute Instances (VMs)
Persistent Disks

38
Q

What is the Billing Account Creator role authorized to do?

A

Create new self-service billing accounts.

39
Q

What is the Billing Account Administrator role authorized to do?

A

Manage self-service billing accounts, but not create new ones.

40
Q

What is the difference between billing account user and billing account viewer?

A

Billing Account User: can link projects to a billing account.
Billing Account Viewer: can view billing account cost information and transactions.

41
Q

How can you achieve billing alerts?

A

You can get alerts via Pub/Sub or email.
Alerts can be based on a fixed threshold or a percentage of the previous month's bill.

42
Q

Where can you export billing data from GCP?

A

To BigQuery
As CSV in Cloud Storage

43
Q

Before you can utilize any features within GCP, you must set up ?????? to associate a payment method to all services and resources that are not free within the GCP.
This requires what role?

A

A billing account (billing account information with a payment method).
Role required: Billing Account User.

44
Q

Workspaces

A

Cloud Monitoring requires an organizational tool to monitor and collect information. In GCP, that tool is called a Workspace. The Workspace brings together Cloud Monitoring resources from one or more GCP projects.
The Workspace collects metric data from one or more monitored projects; however, the data remains project bound.
The data is pulled into the Workspace and then displayed.

45
Q

To create an organization policy you choose a ?

A

A constraint, which is a particular type of restriction against either a Google Cloud service or a group of Google Cloud services.
You configure that constraint with your desired restrictions and apply it as the organization policy.

46
Q

What type of policy do you create when you assign roles to users?

A

Allow policy
You can grant roles to users by creating an allow policy, which is a collection of statements that define who has what type of access. An allow policy is attached to a resource and is used to enforce access control whenever that resource is accessed.

47
Q

How do you grant access to resources?

A

You can grant access to Google Cloud resources by using allow policies, also known as Identity and Access Management (IAM) policies, which are attached to resources. You can attach only one allow policy to each resource. The allow policy controls access to the resource itself, as well as any descendants of that resource that inherit the allow policy.

48
Q

What is another name for an IAM Policy?

A

Also known as allow policy

49
Q

There are two types of service accounts:

A

There are two types of service accounts: user-managed service accounts and Google-managed service accounts. Users can create up to 100 service accounts per project. When you create a project that has the Compute Engine API enabled, a Compute Engine service account is created automatically. Similarly, if you have an App Engine application in your project, GCP will automatically create an App Engine service account. Both the Compute Engine and App Engine service accounts are granted editor roles on the projects in which they are created. You can also create custom service accounts in your projects, and each service account can have up to 10 user-managed keys.
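
A minimal sketch of creating and using a user-managed service account with gcloud; the account name, project ID, and role are hypothetical placeholders:

gcloud iam service-accounts create my-app-sa --display-name="My app service account"   # create a user-managed service account
gcloud projects add-iam-policy-binding my-project-id --member=serviceAccount:my-app-sa@my-project-id.iam.gserviceaccount.com --role=roles/storage.objectViewer   # grant it a role on the project
gcloud iam service-accounts keys create key.json --iam-account=my-app-sa@my-project-id.iam.gserviceaccount.com   # create a key (counts toward the per-service-account key limit)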

50
Q

What are the core concepts and event types used with Cloud Pub/Sub?

A

Topic. A named resource to which messages are sent by publishers.
Subscription. A named resource representing the stream of messages from a single, specific topic, to be delivered to the subscribing application. For more details about subscriptions and message delivery semantics, see the Subscriber Guide.
Message. The combination of data and (optional) attributes that a publisher sends to a topic and is eventually delivered to subscribers.
Message attribute. A key-value pair that a publisher can define for a message. For example, key iana.org/language_tag and value en could be added to messages to mark them as readable by an English-speaking subscriber.
Publisher. An application that creates and sends messages to a topic(s).
Subscriber. An application with a subscription to a topic(s) to receive messages from it.
Acknowledgement (or “ack”). A signal sent by a subscriber to Pub/Sub after it has received a message successfully. Acked messages are removed from the subscription’s message queue.
Push and pull. The two message delivery methods. A subscriber receives messages either by Pub/Sub pushing them to the subscriber’s chosen endpoint, or by the subscriber pulling them from the service.
Event types: Published. There is only one type of event that is triggered in Cloud Pub/Sub, and that is when a message is published.
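
A minimal gcloud sketch of these concepts; the topic and subscription names are hypothetical placeholders:

gcloud pubsub topics create my-topic   # create a topic
gcloud pubsub subscriptions create my-sub --topic=my-topic   # attach a pull subscription to the topic
gcloud pubsub topics publish my-topic --message="hello"   # publish a message (the only Pub/Sub event type)
gcloud pubsub subscriptions pull my-sub --auto-ack   # pull the message and acknowledge it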

51
Q

What is a post-mortem and why is it used?

A

Post-mortems are reviews of incidents or projects with the goal of improving services or project practices. Incidents are disruptions to services.

Major incidents are often the result of two or more failures within a system.
Post-mortems help developers better understand application failure modes and learn ways to mitigate risks of similar incidents.
Post-mortems are best conducted without assigning blame.

52
Q

Understand how and when to use the GCP SDK.

A

The GCP SDK is a set of command-line tools for managing Google Cloud resources. These commands allow you to manage infrastructure and perform operations from the command line instead of the console. The GCP SDK components are especially useful for automating routine tasks and for viewing information about the state of your infrastructure.

53
Q

Cloud migrations are inherently about incrementally changing existing infrastructure to use cloud services to deliver information services.

A

You will need to plan a migration carefully to minimize the risk of disrupting services while maximizing the likelihood of successfully moving applications and data to the cloud. For many organizations, cloud computing is a new approach to delivering information services. These organizations may have built large, complex infrastructures running a wide array of applications using on-premises data centers. Now those same organizations want to realize the advantages of cloud computing.

54
Q

What are the four stages of migration planning?

A

During the assessment phase, take inventory of applications and infrastructure. During the planning stage, you will define fundamental aspects of your cloud services, including the structure of the resource hierarchy as well as identities, roles, and groups. You will also migrate one or two applications in an effort to learn about the cloud and develop experience running applications in the cloud. In the deployment phase, data and applications are moved in a logical order that minimizes the risk of service disruption. Finally, once data and applications are in the cloud, you can shift your focus to optimizing the cloud implementation.

55
Q

Understand how to assess the risk of migrating an application.

A

Considerations include service-level agreements, criticality of the system, availability of support, and quality of documentation. Consider other systems on which the migrating system depends. Consider other applications that depend on the migrating system. Watch for challenging migration operations, such as performing a database replication and then switching to a cloud instance of a database.

56
Q

Understand how to map licensing to the way you will use the licensed software in the cloud.

A

Operating system, application, middleware services, and third-party tools may all have licenses. There are a few different ways to pay for software running in the cloud. In some cases, the cost of licensing is included with cloud service charges. In other cases, you may have to pay for the software directly in one of two ways. You may have an existing license that can be used in the cloud, known as the BYOL model, or you may purchase a license from the vendor specifically for use in the cloud. In other cases, software vendors will charge based on usage, much like cloud service pricing.

57
Q

Know the steps involved in planning a network migration.

A

Network migration planning can be broken down into four broad categories of planning tasks: VPCs, access controls, scaling, and connectivity. Planning for each of these will help identify potential risks and highlight architecture decisions that need to be made. Consider how you will use networks, subnets, IP addresses, routes, and VPNs. Plan for linking on-premises networks to the Google Cloud using either VPNs or Cloud Interconnect.

58
Q

How do you manage the default organization roles?

A

When an organization resource is created, all users in your domain are granted the Billing Account Creator and Project Creator roles by default. These default roles allow your users to start using Google Cloud immediately, but are not intended for use in regular operation of your organization resource.
The next step is to designate a Billing Account Creator and Project Creator for regular operations and to remove the roles that were assigned by default on the organization resource.

Adding a Billing Account Creator and Project Creator
To migrate existing billing accounts into an organization resource, a user must have the Billing Account Creator IAM role. Users with the Project Creator role are able to create and manage Project resources.

Remove default roles from the organization resource

After you designate your own Billing Account Creator and Project Creator roles, you can remove these roles from the organization resource to restrict those permissions to specifically designated users.

59
Q

Billing to BigQuery

A

Tools for monitoring, analyzing, and optimizing cost have become an important part of managing development. Billing export to BigQuery enables you to export your daily usage and cost estimates automatically throughout the day to a BigQuery dataset you specify. You can then access your billing data from BigQuery.
Regular file export to CSV and JSON is also available. However, if you use regular file export, you should be aware that regular file export captures a smaller dataset than export to BigQuery. For more information about regular file export and the data it captures, see Export Billing Data to a File.

60
Q

Billing Account

A

A billing account is used to define who pays for a given set of resources. A billing account includes a payment instrument (setup in payment profile), to which costs are charged, and access control that is established by Cloud Platform Identity and Access Management (IAM) roles.

A billing account can be linked to one or more projects. Project usage is charged to the linked billing account. Projects that are not linked to a billing account cannot use GCP services that aren’t free.

61
Q

Billing API

A

You can configure Billing on Google Cloud Platform (GCP) in a variety of ways to meet different needs.
GCP resources are the fundamental components that make up all GCP services, such as Google Compute Engine virtual machines (VMs), Google Cloud Pub/Sub topics, Google Cloud Storage buckets, and so on. For billing and access control purposes, resources exist at the lowest level of a hierarchy that also includes projects and an organization.
Projects: All lower level resources are parented by projects, which are the middle layer in the hierarchy of resources. You can use projects to represent logical projects, teams, environments, or other collections that map to a business function or structure. Any given resource can only exist in one project.
An organization is the top of the hierarchy of resources. All resources that belong to an organization are grouped under the organization node, to provide insight into and access control over every resource in the organization.
For more information on projects and organizations, see the Cloud Resource Manager documentation.
A billing account can be linked to one or more projects. Project usage is charged to the linked billing account. Projects that are not linked to a billing account cannot use GCP services that aren’t free.

62
Q

Backend Bucket

A

Backend buckets allow you to use Google Cloud Storage buckets with HTTP(S) Load Balancing.
An HTTP(S) load balancer can direct traffic from specified URLs to either a backend bucket or a backend service. For example, the load balancer can send requests for a path such as /static to a Cloud Storage bucket and all other requests (dynamic content) to your VM instances.
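
A minimal gcloud sketch of creating a backend bucket; the bucket names are hypothetical placeholders and the URL-map wiring is omitted:

gsutil mb gs://my-static-assets   # Cloud Storage bucket holding the static content
gcloud compute backend-buckets create static-backend --gcs-bucket-name=my-static-assets --enable-cdn   # expose the bucket as an HTTP(S) load balancer backend, optionally with Cloud CDN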

63
Q

What is Image Baking

A

Image baking is the process of preparing a custom image with your applications and configuration already installed, so that new instances boot ready to run.
Manual baking: You can create a simple custom image by creating a new VM instance from a public image, configuring the instance with the applications and settings that you want, and then creating a custom image from that instance.

Use this method if you can configure your images from scratch manually rather than using automated baking or importing existing images.

You can create a simple custom image using the following steps:
Create an instance from a public image.
Connect to the instance.
Customize the instance for your needs.
Stop the instance.
Create a custom image from the boot disk of that instance. This process requires you to delete the instance but keep the boot disk.
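
A minimal gcloud sketch of the final steps; the instance name, image name, and zone are hypothetical placeholders, and the boot disk is assumed to carry the instance's name:

gcloud compute instances delete my-base-vm --zone=us-central1-a --keep-disks=boot   # delete the instance but keep its boot disk
gcloud compute images create my-baked-image --source-disk=my-base-vm --source-disk-zone=us-central1-a   # create the custom image from that boot disk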

64
Q

What are ways to provide connectivity from customer premise to the Google Cloud

A

Cloud Interconnect: extends your on-premises network to Google's network through a highly available, low-latency connection. You can use Dedicated Interconnect to connect directly to Google, or Partner Interconnect to connect to Google through a supported service provider. Together with Cloud VPN, Cloud Interconnect offers enterprise-grade connections that let you directly connect your on-premises network to your Virtual Private Cloud.
Direct Peering: Google allows you to establish a direct peering connection between your business network and Google's. With this connection you can exchange Internet traffic between your network and Google's at one of Google's broad-reaching edge network locations. Direct peering is done by exchanging BGP routes between Google and the peering entity. Once in place, you can use it to reach all of Google's services, including the full suite of Google Cloud Platform products.
Carrier Peering: allows you to obtain enterprise-grade network services that connect your infrastructure to Google by using a service provider. When connecting to Google through a service provider, you can get connections with higher availability and lower latency, using one or more links. Work with your service provider to get the connection you need.
CDN Interconnect: allows select CDN providers to establish direct interconnect links with Google's edge network at various locations.

65
Q

What is a Container Registry?

A

Container Registry is a private container image registry that runs on Google Cloud Platform. Container Registry supports Docker Image Manifest V2 and OCI image formats.
Many people use Dockerhub as a central registry for storing public Docker images, but to control access to your images you need to use a private registry such as Container Registry.
You can access Container Registry through secure HTTPS endpoints, which allow you to push, pull, and manage images from any system, VM instance, or your own hardware.

Additionally, you can use the Docker credential helper command-line tool to configure Docker to authenticate directly with Container Registry.

Detect vulnerabilities in early stages of the software deployment cycle. Make certain your container images are safe to deploy. Constantly refreshed database helps ensure your vulnerability scans are up-to-date with new malware.

66
Q

What is a Data Lake?

A

A data lake is a storage repository that holds a vast amount of raw data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data.

67
Q

Data Pipeline

A

In computing, a pipeline, also known as a data pipeline, is a set of data processing elements connected in series, where the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion.

68
Q

What is Dataflow, Cloud Dataflow?

A

Google Cloud Dataflow is a fully managed service for strongly consistent, parallel data-processing pipelines. It provides an SDK for Java with composable primitives for building data-processing pipelines for batch or continuous processing. This service manages the life cycle of Google Compute Engine resources of the processing pipeline(s). It also provides a monitoring user interface for understanding pipeline health.

69
Q

What is Dataprep, Cloud Dataprep?

A

Cloud Dataprep by Trifacta is an intelligent data service for visually exploring, cleaning, and preparing structured and unstructured data for analysis. Cloud Dataprep is serverless and works at any scale. There is no infrastructure to deploy or manage. Easy data preparation with clicks and no code.

70
Q

What is Dataproc, Cloud Dataproc?

A

Google Cloud Dataproc is a fast, easy to use, managed Spark and Hadoop service for distributed data processing. It provides management, integration, and development tools for unlocking the power of rich open source data processing tools. With Cloud Dataproc, you can create Spark/Hadoop clusters sized for your workloads precisely when you need them.

71
Q

Cloud DNS

A

Google Cloud DNS is a high-performance, resilient, global Domain Name System (DNS) service that publishes your domain names to the global DNS in a cost-effective way.

DNS is a hierarchical distributed database that lets you store IP addresses and other data, and look them up by name. Google Cloud DNS lets you publish your zones and records in the DNS without the burden of managing your own DNS servers and software. It provides a RESTful API to publish and manage DNS records for your applications and services.
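
A minimal gcloud sketch of publishing a zone and one record; the zone name, domain, and IP address are hypothetical placeholders:

gcloud dns managed-zones create my-zone --dns-name="example.com." --description="example zone"   # create a public managed zone
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add --zone=my-zone --name="www.example.com." --type=A --ttl=300 "203.0.113.10"
gcloud dns record-sets transaction execute --zone=my-zone   # commit the record change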

72
Q

DNS Load Balancing

A

DNS load balancing is the practice of configuring a domain in the Domain Name System (DNS) such that client requests to the domain are distributed across a group of server machines.
DNS load balancing relies on the fact that most clients use the first IP address they receive for a domain. In most Linux distributions, DNS by default sends the list of IP addresses in a different order each time it responds to a new client, using the round‑robin method. As a result, different clients direct their requests to different servers, effectively distributing the load across the server group.

Unfortunately, this simple implementation of DNS load balancing has inherent problems that limit its reliability and efficiency. Most significantly, DNS does not check for server or network outages or errors, and so always returns the same set of IP addresses for a domain even if servers are down or inaccessible.

73
Q

Encryption Keys

A

Cloud Storage always encrypts your data on the server side, before it is written to disk, at no additional charge. Besides this standard behavior, there are additional ways to encrypt your data when using Cloud Storage:
Customer-supplied encryption keys: You can create and manage your own encryption keys for server-side encryption, which act as an additional encryption layer on top of the standard Cloud Storage encryption.
Customer-managed encryption keys: You can generate and manage your encryption keys using Cloud Key Management Service, which act as an additional encryption layer on top of the standard Cloud Storage encryption. Your encryption keys are stored within Cloud KMS. The project that holds your encryption keys can then be independent from the project that contains your buckets, thus allowing for better separation of duties.

74
Q

Filestore

A

Cloud Filestore is a scalable and highly available shared file service fully managed by Google (a file share mountable by multiple VMs, similar to AWS EFS). Cloud Filestore provides persistent storage ideal for shared workloads. It is best suited for enterprise applications requiring persistent, durable, shared storage that is accessed by NFS or requires a POSIX-compliant file system.

75
Q

What are the groups of gcloud commands?

A

gcloud app - manage your App Engine deployments

gcloud auth - manage oauth2 credentials for the Google Cloud SDK

gcloud bigtable - manage your Cloud Bigtable storage

gcloud builds - create and manage builds for Google Cloud Build

gcloud components - list, install, update, or remove Google Cloud SDK components

gcloud composer - create and manage Cloud Composer Environments
Cloud Composer is a managed Apache Airflow service that helps you create, schedule, monitor and manage workflows

gcloud compute - create and manipulate Google Compute Engine resources

gcloud config - view and edit Cloud SDK properties

gcloud container - deploy and manage clusters of machines for running containers

gcloud dataflow - manage Google Cloud Dataflow jobs

gcloud dataproc - create and manage Google Cloud Dataproc clusters and jobs

gcloud datastore - manage your Cloud Datastore indexes

gcloud debug - commands for interacting with the Cloud Debugger

gcloud deployment-manager - manage deployments of cloud resources

gcloud dns - manage your Cloud DNS managed-zones and record-sets

gcloud docker - enable Docker CLI access to Google Container Registry

gcloud domains - manage domains for your Google Cloud projects (custom domains)

gcloud endpoints - create, enable and manage API services

gcloud firebase - work with Google Firebase

gcloud functions - manage Google Cloud Functions

gcloud iam - manage IAM service accounts and keys

gcloud iot - manage Cloud IoT resources

gcloud kms - manage cryptographic keys in the cloud

gcloud logging - manage Cloud Logging (e.g., gcloud logging read to read log entries)

gcloud ml - use Google Cloud machine learning capabilities (vision speech..)

gcloud ml-engine - manage Cloud ML Engine jobs and models

gcloud organizations - create and manage Google Cloud Platform Organizations

gcloud projects - create and manage project access policies

gcloud pubsub - manage Cloud Pub/Sub topics and subscriptions

gcloud redis - manage Cloud Memorystore Redis resources

gcloud services - list, enable and disable APIs and services

gcloud source - cloud git repository commands

gcloud spanner - command groups for Cloud Spanner

gcloud sql - create and manage Google Cloud SQL databases

gcloud topic - gcloud supplementary help

76
Q

What is gsutil? What kind of jobs can you do with it?

A

gsutil is a Python application that lets you access Cloud Storage from the command line. You can use gsutil to do a wide range of bucket and object management tasks, including:
Creating and deleting buckets.
Uploading, downloading, and deleting objects.
Listing buckets and objects.
Moving, copying, and renaming objects.
Editing object and bucket ACLs.
For a complete list of guides to completing tasks with gsutil, see Cloud Storage How-to Guides.
gsutil Quickstart shows you how to set up a Google Cloud Platform project, enable billing, install gsutil, and run basic commands with the tool.
acl - Get, set, or change bucket and object ACLs (gsutil acl set | get | ch)

gsutil cat -h gs://bucket/meeting_notes/2012_Feb/*.txt

//Concatenate a sequence of objects into a new composite object
gsutil compose obj1 obj2

//config - Obtain credentials and create configuration file
gsutil config -f (create a token with full-control access for storage resources)

//cors - Get or set a CORS JSON document for one or more buckets

cp - Copy files and objects

defacl - Get, set, or change default ACL on buckets

defstorageclass - Get or set the default storage class on buckets

du - Display object size usage

hash - Calculate file hashes

iam - Get, set, or change bucket and/or object IAM permissions

kms - Configure Cloud KMS encryption

label - Get, set, or change the label configuration of a bucket

lifecycle - Get or set lifecycle configuration for a bucket

logging - Configure or retrieve logging on buckets

ls - List providers, buckets, or objects
gsutil ls -l gs://bucket/*.txt

mb - Make buckets

rsync - Synchronize content of two buckets/directories

setmeta - Set metadata on already uploaded objects

signurl - Create a signed URL (e.g., for uploading a plain-text file via HTTP PUT)
gsutil signurl -m PUT -d 1h -c text/plain gs:///

stat - Display object status

test - Run gsutil unit/integration tests (for developers)

update - Update to the latest gsutil release

version - Print version info about gsutil

versioning - Enable or suspend versioning for one or more buckets

web - Set a main page and/or error page for one or more buckets

78
Q

What are health checks?

A

A health checker polls instances at specified intervals.

Instances that do not respond successfully to a specified number of consecutive probes are marked as UNHEALTHY.
No new connections are sent to such instances, though existing connections are allowed to continue. The health checker continues to poll unhealthy instances. If an instance later responds successfully to a specified number of consecutive probes, it is marked HEALTHY again and can receive new connections.
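
A minimal gcloud sketch of creating an HTTP health check; the name, port, and thresholds are hypothetical placeholders:

gcloud compute health-checks create http my-http-check --port=80 --check-interval=10s --timeout=5s --unhealthy-threshold=3 --healthy-threshold=2   # mark an instance UNHEALTHY after 3 failed probes, HEALTHY again after 2 successes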

79
Q

Ingress traffic

A

Ingress traffic is network traffic that originates from outside of the network’s routers and proceeds toward a destination inside of the network.
For example, an email message that is considered ingress traffic will originate somewhere outside of an enterprise's LAN, pass over the Internet, and enter the company's LAN before it is delivered to the recipient.

80
Q

What are all the instance group types?

A

You can create and manage groups of virtual machine (VM) instances so that you don’t have to individually control each instance in your project. Compute Engine offers two different types of instance groups: managed and unmanaged instance groups.
A managed instance group uses an instance template to create a group of identical instances. You control a managed instance group as a single entity.
A zonal managed instance group, which contains instances from the same zone.
A regional managed instance group, which contains instances from multiple zones across the same region.
Unmanaged instance groups are groups of dissimilar instances that you can arbitrarily add and remove from the group. Unmanaged instance groups do not offer autoscaling, rolling update support, or the use of instance templates so Google recommends creating managed instance groups whenever possible. Use unmanaged instance groups only if you need to apply load balancing to your pre-existing configurations or to groups of dissimilar instances.
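
A minimal gcloud sketch of creating a zonal managed instance group from a template; the names, machine type, image family, size, and zone are hypothetical placeholders:

gcloud compute instance-templates create web-template --machine-type=e2-medium --image-family=debian-11 --image-project=debian-cloud   # template describing the identical instances
gcloud compute instance-groups managed create web-mig --template=web-template --size=3 --zone=us-central1-a   # zonal managed instance group of 3 VMs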

81
Q

Istio

A

Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, with few or no code changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality:
Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

The main components of Istio are:
Pilot
Citadel
Mixer

82
Q

Kafka

A

Apache Kafka is a distributed streaming platform. A streaming platform has three key capabilities:
Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
Store streams of records in a fault-tolerant durable way.
Process streams of records as they occur.
Kafka is generally used for two broad classes of applications:
Building real-time streaming data pipelines that reliably get data between systems or applications
Building real-time streaming applications that transform or react to the streams of data

Kafka is similar in purpose to Pub/Sub, and Confluent Cloud is available on GCP. Confluent Cloud is a fully managed streaming service based on Apache Kafka. Led by the creators of Kafka (Jay Kreps, Neha Narkhede, and Jun Rao), Confluent provides enterprises with a real-time streaming platform built on a reliable, scalable ecosystem of products that place Kafka at their core.

83
Q

Knative is?

A

Knative is an essential set of components to build and run serverless applications on Kubernetes. Knative offers features like scale-to-zero, autoscaling, in-cluster builds, and eventing framework for cloud-native applications on Kubernetes. Whether on-premises, in the cloud, or in a third-party data center, Knative codifies the best practices shared by successful real-world Kubernetes-based frameworks.
Main parts: Build, Serving, and Eventing (based on CloudEvents).

84
Q

Kubernetes Secrets are?

A

Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys.
A Kubernetes secret is a simple object that’s stored securely (e.g. encrypted at rest) by the orchestrator and can contain arbitrary data in key-value format.
The value is base64 encoded, so we can also store binary data like certificates. Kubernetes makes it easy to consume secrets by letting you simply mount them onto your container, either as an environment variable (not recommended) or as a file.
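
A minimal kubectl sketch of creating and inspecting a secret; the names and values are hypothetical placeholders:

kubectl create secret generic db-credentials --from-literal=username=appuser --from-literal=password=s3cr3t   # store key-value pairs as a secret
kubectl get secret db-credentials -o yaml   # values appear base64 encoded
# consume it in a pod spec via env.valueFrom.secretKeyRef or by mounting it as a volume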

85
Q

What is the difference between Google Global and regional balancing?

A

Google global load balancing is implemented entirely in software, done by Google Front Ends (GFEs). The GFEs are distributed globally and load balance traffic in sync with each other by working with Google’s other software-defined systems and global control plane.
Google regional load balancing is implemented entirely in software. Your instances are in a single GCP region and traffic is distributed to instances within a single region.
https://www.ianlewis.org/en/google-cloud-platform-http-load-balancers-explaine
https://cloud.google.com/load-balancing/docs/https/adding-a-backend-bucket-to-content-based-load-balancing

86
Q

How does the http-s load balancer work

A

A complete HTTP load balancer is structured as follows:

-A global forwarding rule directs incoming requests to a target HTTP proxy.

-The target HTTP proxy checks each request against a URL map to determine the appropriate backend service for the request.

-The backend service directs each request to an appropriate backend based on serving capacity, zone, and instance health of its attached backends.

The health of each backend instance is verified using an HTTP health check, an HTTPS health check, or an HTTP/2 health check. If the backend service is configured to use an HTTPS or HTTP/2 health check, the request will be encrypted on its way to the backend instance.
Sessions between the load balancer and the instance can use the HTTP, HTTPS, or HTTP/2 protocol. If you use HTTPS or HTTP/2, each instance in the backend services must have an SSL certificate.
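
A minimal gcloud sketch of wiring these pieces together for plain HTTP; the backend service is assumed to already exist, and all names are hypothetical placeholders:

gcloud compute url-maps create web-map --default-service=web-backend-service   # URL map pointing at a backend service
gcloud compute target-http-proxies create web-proxy --url-map=web-map   # target proxy that checks requests against the URL map
gcloud compute forwarding-rules create web-rule --global --target-http-proxy=web-proxy --ports=80   # global forwarding rule sending port 80 traffic to the proxy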

87
Q

How does Network Load Balancing Work?
What type of connections does it use?

A

Use Network Load Balancing to balance the load on your systems based on incoming IP protocol data, such as address, port, and protocol type.
Network Load Balancing uses forwarding rules that point to target pools, which list the instances available for load balancing and define which type of health check should be performed on these instances. See Setting Up Network Load Balancing for more information.
Network Load Balancing is a regional, non-proxied load balancer. You can use it to load balance UDP traffic, and TCP and SSL traffic on ports that are not supported by the SSL proxy and TCP proxy load balancers.
A network load balancer is a pass-through load balancer (direct server return (DSR), direct routing). It does not proxy connections from clients; that is:
The IP packets are forwarded unmodified to the VM; there is no address or port translation.
The VM treats the load balancer IP as one of its own IPs. This makes it fast.

88
Q

How does Load Balancing SSL Proxy work?

A

Google Cloud SSL Proxy Load Balancing terminates user SSL (TLS) connections at the load balancing layer, then balances the connections across your instances using the SSL or TCP protocols. Cloud SSL proxy is intended for non-HTTP(S) traffic. For HTTP(S) traffic, HTTP(S) load balancing is recommended instead.
SSL Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, then proxied over IPv4 to your backends. Load balancing service that can be deployed globally. You can deploy your instances in multiple regions, and the load balancer automatically directs traffic to the closest region that has capacity.

89
Q

What is Load Balancing TCP Proxy?

A

Google Cloud Platform (GCP) TCP Proxy Load Balancing allows you to use a single IP address for all users around the world. GCP TCP proxy load balancing automatically routes traffic to the instances that are closest to the user.
Note that global load balancing requires that you use the Premium Tier of Network Service Tiers, which is the default tier. Otherwise, load balancing is handled regionally.
Cloud TCP Proxy Load Balancing is intended for non-HTTP traffic. For HTTP traffic, HTTP Load Balancing is recommended instead. For proxied SSL traffic, use SSL Proxy Load Balancing.
TCP Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, then proxied over IPv4 to your backends.

90
Q

What is the Machine Learning Engine?

A

Cloud Machine Learning Engine is a managed service that enables you to easily build machine learning models with the powerful TensorFlow framework. It provides scalable training and prediction services that work on large scale datasets.
Cloud ML Engine offers training and prediction services, which can be used together or individually. Cloud ML Engine is a proven service used by enterprises to solve problems ranging from identifying clouds in satellite images, ensuring food safety, and responding four times faster to customer emails.
You can host trained TensorFlow, scikit-learn, and XGBoost models on Cloud ML Engine so that you can send them prediction requests and manage your models and jobs using GCP services.

91
Q

What Network Service Tiers does Google have?

A

Network Service Tiers: Network Service Tiers enable you to select different quality networks (tiers) for outbound traffic to the internet: the Standard Tier primarily utilizes third party transit providers while the Premium Tier leverages Google’s private backbone and peering surface for egress.

92
Q

What are the two different project types in a Shared VPC scenario?

A

Host projects and service projects.
In a Shared VPC scenario, the host project contains a common Shared VPC network usable by VMs in service projects. With Shared VPC, the VLAN attachments and Cloud Routers for an interconnect need to be created only in the Shared VPC host project. Because VMs in the service projects use the Shared VPC network, Service Project Admins do not need to create other VLAN attachments or Cloud Routers in the service projects themselves.
See also: Shared VPC.

93
Q

What is Schema Auto-Detection with Big Query?

A

Schema auto-detection is available when you load data into BigQuery, and when you query an external data source.
When auto-detection is enabled, BigQuery starts the inference process by selecting a random file in the data source and scanning up to 100 rows of data to use as a representative sample. BigQuery then examines each field and attempts to assign a data type to that field based on the values in the sample.
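
A minimal sketch of loading data with auto-detection using the bq tool; the dataset, table, and file path are hypothetical placeholders:

bq load --autodetect --source_format=CSV my_dataset.my_table gs://my-example-bucket/data.csv   # BigQuery samples rows from the file and infers column types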

94
Q

What is AI in GCP

A

Cloud Artificial Intelligence (AI)
Google Cloud AI offers cloud services for businesses and individuals to leverage pre-trained models for custom artificial intelligence tasks through the use of REST APIs. It also exposes services for developing custom models for domain use cases such as AutoML Vision for image classification and object detection tasks and AutoML tables to deploy AI models on structured data.

Google Cloud AI services in the drawing include Cloud AutoML (train custom machine learning models leveraging transfer learning), Cloud Machine Learning Engine (for large-scale distributed training and deployment of machine learning models), Cloud TPU (to quickly train large-scale models), Video Intelligence (train custom video models), Cloud Natural Language API (extract/analyze text from documents), Cloud Speech API (transcribe audio to text), Cloud Vision API (classification/segmentation of images), Cloud Translate API (translate from one language to another), and Cloud Video Intelligence API (extract metadata from video files).

95
Q

What are the 3 Vs of data?

A

Data science encompasses the tools and techniques for extracting information from data. Data science techniques draw extensively from the field of mathematics, statistics, and computation. However, data science is now encapsulated into software packages and libraries, thus making them easily accessible and consumable by the software development and engineering communities. This is a major factor to the rise of intelligence capabilities now integrated as a major staple in software products across all sorts of domains.

This chapter discusses broadly the opportunities for data science and big data analytics integration as part of the transformation portfolio of businesses and institutions and gives an overview of the data science process as a reusable template for fulfilling data science projects.

The Challenge of Big Data
The turn of the twenty-first century saw an expansion of data epitomized by the so-called 3Vs of big data: volume, velocity, and variety. Volume refers to the increasing size of data, velocity the speed at which data is acquired, and variety the diverse types of data that are available. For others, this becomes 5Vs with the inclusion of value and veracity to mean the usefulness of data and the truthfulness of data, respectively. We have observed data volume blowout from the megabyte (MB) to the terabyte (TB) scale and now exploding past the petabyte (PB). We have to find new and improved means of storing and processing this ever-increasing dataset. Initially, this challenge of storage and data processing was addressed by the Hadoop ecosystem and other supporting frameworks, but even these have become expensive to manage and scale, and this is why there is a pivot to cloud-managed, elastic, secure, and high-availability data storage and processing capabilities.

On the other hand, for most applications and business use cases, there is a need to carry out real-time analysis on data due to the vast amount of data created and available at a given moment. Previously, getting insights from data and unlocking value had been down to traditional analysis on batch data workloads using statistical tools such as Excel, Minitab, or SPSS. But in the era of big data, this is changing, as more and more businesses and institutions want to understand the information in their data at a real-time or at worst near real-time pace.

Another vertical to the big data conundrum is that of variety. Formerly, a pre-defined structure had to be imposed on data in order to store it easily and to make data analysis easy. However, a wide diversity of datasets is now collected and stored, such as spatial maps, image data, video data, audio data, text data from emails and other documents, and sensor data. As a matter of fact, a far larger amount of the datasets in the wild are unstructured. This led to the development of unstructured or semi-structured databases such as Elasticsearch and Solr.

96
Q

What are the components of the GKE Control Plane?

A

Also known as the master node, the control plane is the brain behind the Kubernetes system and is responsible for the deployment of containers, schedulers, worker nodes, and everything that runs in the cluster. The control plane manages the cluster using several components that handle various specific tasks. Because specific components manage each task, the operation of the cluster is smooth, and there is no double handling of processes by the same component.

The main components of the control plane are as follows:

API Server: This is the component that sends and receives requests and instructions to all nodes and clients that connect and interact with the cluster.

Scheduler: This is responsible for deploying containers (also known as pods) on worker nodes that have the Docker runtime installed.

Etcd: This holds the database with the cluster configuration.

Controller manager: This manages all running components and objects in the cluster, like nodes, pods, and services.

97
Q

What are the components of a GKE Worker Nodes?

A

Worker nodes are the server hosts that the containers (pods) run on. Each node has the Docker runtime installed and runs the actual containers. The nodes are controlled and managed by the master nodes, and all communication to and from them goes through the API server.

Each worker node has the following components installed:

Kubelet: This manages, starts, stops, and checks the health of all containers on the host; in other words, it is responsible for the lifecycle of each container on the node.

Kube-proxy: This manages all networking operations on the node that include load-balancer and network proxy.

Container runtime: This includes all the runtime libraries that the Docker engine requires to run containers.
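
To inspect the worker nodes and their per-node components (kubelet version, container runtime, and so on), a quick check is (NODE_NAME is a placeholder):

kubectl get nodes -o wide
kubectl describe node NODE_NAME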

98
Q

What are the backend components that make up a Kubernetes cluster?

A

Objects

Pods: In the world of Kubernetes, pods are a logical grouping of containers; a pod can be a single container or multiple containers that make up what we call a deployment.

Volumes: For our pods to access persistent storage and dynamic configuration files, we need a storage volume that is available across the deployment and cluster regardless of the state of the cluster.

Services: A service is a group of containers that form a deployment of back-end and front-end servers (pods). For example, a WordPress deployment will have a front-end container for the actual WordPress application and a back-end container running MySQL database that is mapped to a persistent storage volume.

Namespaces: Kubernetes namespaces help us break the cluster down into logical environments that do not cross-reference one another or share the same resources.

99
Q

What is the difference between a replica, a deployment, and a pod?

A

A Deployment is a uniformly managed set of Pod instances, all based on the same Docker image. A Pod instance is called a Replica. The Deployment controller uses multiple Replicas to achieve high scalability, by providing more compute capacity than is otherwise possible with a single monolithic Pod, and in-cluster high availability, by diverting traffic away from unhealthy Pods (with the aid of the Service controller, as we will see in Chapter 4) and restarting—or recreating—them when they fail or get stuck.

As per the definition given here, a Deployment may appear simply as a fancy name for “Pod cluster,” but “Deployment” is not actually a misnomer; the Deployment controller’s true power lies in its actual release capabilities—the deployment of new Pod versions with near-zero downtime to its consumers (e.g., using blue/green or rolling updates) as well as the seamless transition between different scaling configurations, while preserving compute resources at the same time.
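
As a rough illustration (the deployment name and image are arbitrary), you can create a Deployment with multiple replicas and then scale it; each Pod returned by kubectl get pods is one replica of the Deployment:

kubectl create deployment web --image=nginx --replicas=3
kubectl scale deployment web --replicas=5
kubectl get pods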

100
Q

Which NoSQL databases need to be provisioned?

A

Datastore has no provisioning requirements.
BigTable requires the user to provision the instance and nodes.
A Cloud Bigtable instance is a container for Bigtable clusters. An instance that has more than one cluster uses replication. You can create clusters in up to 8 regions, with as many clusters in each region as there are zones.
Plan your configuration:

Optional: If you plan to enable replication, do the following:

Identify your use case for replication.
Determine the region or regions that your instance should be in, based on your use case and the location of your application and traffic.
Decide how you’ll use application profiles to route incoming requests.
Optional: If you want to use customer-managed encryption keys (CMEK) instead of the default Google-managed encryption, complete the tasks under Creating a CMEK-enabled instance and have your CMEK key ID ready before you create your new instance.
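
A hedged example of provisioning a Bigtable instance with a single cluster (the names, zone, and node count are placeholders, and flag syntax may vary slightly by gcloud version):

gcloud bigtable instances create my-instance \
--display-name="My instance" \
--cluster-config=id=my-cluster,zone=us-east1-b,nodes=3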

101
Q

What are the differences between Bigtable components?

A

BigTable has components:
Instance, Clusters, Nodes

Instances
A Bigtable instance is a container for your data. Instances have one or more clusters, located in different zones. Each cluster has at least 1 node.

A table belongs to an instance, not to a cluster or node. If you have an instance with more than one cluster, you are using replication. This means you can’t assign a table to an individual cluster or create unique garbage collection policies for each cluster in an instance. You also can’t make each cluster store a different set of data in the same table.

Clusters
A cluster represents the Bigtable service in a specific location. Each cluster belongs to a single Bigtable instance, and an instance can have clusters in up to 8 regions. When your application sends requests to a Bigtable instance, those requests are handled by one of the clusters in the instance.

Each cluster is located in a single zone. An instance can have clusters in up to 8 regions where Bigtable is available. Each zone in a region can contain only one cluster. For example, if an instance has a cluster in us-east1-b, you can add a cluster in a different zone in the same region, such as us-east1-c, or a zone in a separate region, such as europe-west2-a.

The number of clusters that you can create in an instance depends on the number of zones in the regions that you choose. For example, if you create a cluster in every zone in 8 regions that each have 4 zones, your instance has 32 clusters. For a list of zones and regions where Bigtable is available, see Bigtable locations.

Bigtable instances that have only 1 cluster do not use replication.
Nodes
Each cluster in an instance has 1 or more nodes, which are compute resources that Bigtable uses to manage your data.

Behind the scenes, Bigtable splits all of the data in a table into separate tablets. Tablets are stored on disk, separate from the nodes but in the same zone as the nodes. A tablet is associated with a single node.

102
Q

Which database does automatic sharding?

A

Cloud Spanner optimizes performance by automatically sharding the data based on request load and size of the data. As a result, you can spend less time worrying about how to scale your database and instead focus on scaling your business.

103
Q

What are the storage classes and location types?

A

You can choose among different location types: Multi-region is where you would store files used by applications worldwide. Dual-region is best when your files need to be accessed from two associated regions. Regional is best for any internal jobs that require storage in a single region.

There are multiple storage classes that you can choose from when creating a bucket:

Standard Storage is used for data that is regularly accessed or stored only for a short period of time.

Nearline is for backups, with a minimum storage duration of 30 days.

Coldline is for disaster recovery: data you will rarely access but need to keep for when regulatory authorities ask for it, with a minimum storage duration of 90 days.

Archive is for data archival, the coldest class, with a minimum storage duration of 365 days.
Accessing data from Standard storage is free. Retrieval from Nearline costs some money, retrieval from Coldline costs more, and retrieval from Archive costs the most.
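
For example (the bucket name is hypothetical), you can pick the storage class and location when creating the bucket:

gsutil mb -c nearline -l us-west2 gs://my-backup-bucket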

104
Q

To export Cloud Billing data to BigQuery, take the following steps:

A

Create a project where the Cloud Billing data will be stored, and enable billing on the project (if you have not already done so).

Configure permissions on the project and on the Cloud Billing account.
To enable and configure the export of Google Cloud billing usage cost data to a BigQuery dataset, you need the following permissions:

For Cloud Billing, you need either the Billing Account Costs Manager role or the Billing Account Administrator role on the target Cloud Billing account.

For BigQuery, you need the BigQuery User role for the Cloud project that contains the BigQuery dataset to be used to store the Cloud Billing data.

To enable and configure the export of Cloud Billing pricing data, you need the following permissions:

For Cloud Billing, you need the Billing Account Administrator role on the target Cloud Billing account.

For BigQuery, you need the BigQuery Admin role for the Cloud project that contains the BigQuery dataset to be used to store the Cloud Billing pricing data.

For the Cloud project containing the target dataset, you need the resourcemanager.projects.update permission. This permission is included in the roles/editor role.

Enable the BigQuery Data Transfer Service API (required to export your pricing data).

Create a BigQuery dataset in which to store the data.

Enable Cloud Billing export of cost data and pricing data to be written into the dataset.
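
A minimal sketch of the CLI side of these steps, assuming a hypothetical project and dataset name (the export itself is then enabled on the Billing export page in the console):

gcloud services enable bigquerydatatransfer.googleapis.com
bq mk --dataset my-billing-project:billing_export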

105
Q

What role is necessary to deploy a new app engine app

A

App Engine Deployer
(roles/appengine.deployer)

Read-only access to all application configuration and settings.

To deploy new versions, you must also have the Service Account User (roles/iam.serviceAccountUser) role on the App Engine default service account, and the Cloud Build Editor (roles/cloudbuild.builds.editor) and Cloud Storage Object Admin (roles/storage.objectAdmin) roles on the project.

106
Q

How can you configure a project across all your CLI commands, gcloud, gsutil, and bq?

A

gcloud config set sets the specified property in your active configuration only. A property governs the behavior of a specific aspect of Google Cloud CLI such as the service account to use or the verbosity level of logs. To set the property across all configurations, use the --installation flag. For more information regarding creating and using configurations, see gcloud topic configurations.
To view a list of properties currently in use, run gcloud config list.

To unset properties, use gcloud config unset.

Google Cloud CLI comes with a default configuration. To create multiple configurations, use gcloud config configurations create, and gcloud config configurations activate to switch between them.

Note: If you are using Cloud Shell, your gcloud command-line tool preferences are stored in a temporary tmp folder, set for your current tab only, and do not persist across sessions. For details on how to make these configurations persist, refer to the Cloud Shell guide on setting gcloud command-line tool preferences: https://cloud.google.com/shell/docs/configuring-cloud-shell#gcloud_command-line_tool_preferences.
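
For example, to set the default project used by gcloud, gsutil, and bq in the active configuration (the project ID is a placeholder), add --installation to apply it across all configurations:

gcloud config set project my-project-id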

107
Q

What is the single-timestamp row pattern for storing new serialized time-series data?

A

Single-timestamp rows
In this pattern, you create a row for each new event or measurement instead of adding cells to columns in existing rows. The row key suffix is the timestamp value. Tables that follow this pattern tend to be tall and narrow, and each column in a row contains only one cell.

Important: To avoid hotspots, never use a timestamp value as a row key prefix.
Single-timestamp serialized
In this pattern, you store all the data for a row in a single column in a serialized format such as a protocol buffer (protobuf). This approach is described in more detail on Designing your schema.

Advantages of this pattern include the following:

Storage efficiency

Speed

Disadvantages include the following:

The inability to retrieve only certain columns when you read the data

The need to deserialize the data after it’s read

Use cases for this pattern include the following:

You are not sure how you will query the data or your queries might fluctuate.

Your need to keep costs down outweighs your need to be able to filter data before you retrieve it from Bigtable.

Each event contains so many measurements that you might exceed the 100 MB per-row limit if you store the data in multiple columns.

108
Q

How do you get audit logs from GKE into BigQuery?

A

You need to be a project owner.
Enable Cloud Logging for the specific log types:

System
Workload
API Server
Scheduler
Controller Manager

Create a sink tied to a BigQuery dataset.
You can route log entries from Cloud Logging to BigQuery using sinks. When you create a sink, you define a BigQuery dataset as the destination. Logging sends log entries that match the sink’s rules to partitioned tables that are created for you in that BigQuery dataset.

BigQuery table schemas for data received from Cloud Logging are based on the structure of the LogEntry type and the contents of the log entry payloads. Cloud Logging also applies rules to shorten BigQuery schema field names for audit logs and for certain structured payload fields.

Logging sinks stream logging data into BigQuery in small batches, which lets you query data without running a load job. For details, see Streaming data into BigQuery. For pricing information, see the streaming inserts section found in BigQuery pricing: Data ingestion pricing.
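
A hedged sketch of creating such a sink from the CLI (the sink, project, and dataset names are placeholders); after creating the sink, grant its writer identity the BigQuery Data Editor role on the dataset so entries can be written:

gcloud logging sinks create gke-audit-sink \
bigquery.googleapis.com/projects/my-project/datasets/gke_logs \
--log-filter='resource.type="k8s_cluster"'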

109
Q

How do you setup a GKE Cluster that can autoscale?

A

Set up the cluster autoscaler to automatically resize your Standard Google Kubernetes Engine (GKE) cluster's node pools based on the demands of your workloads. When demand is high, the cluster autoscaler adds nodes to the node pool. When demand is low, the cluster autoscaler scales back down to a minimum size that you designate. This can increase the availability of your workloads when you need it, while controlling costs.
With Autopilot clusters, you don’t need to worry about provisioning nodes or managing node pools because node pools are automatically provisioned through node auto-provisioning, and are automatically scaled to meet the requirements of your workloads.
To add a node pool with autoscaling to an existing cluster, use the following command:

Create a pool with autoscaling enabled.

gcloud container node-pools create POOL_NAME \
--cluster=CLUSTER_NAME \
--enable-autoscaling \
--min-nodes=MIN_NODES \
--max-nodes=MAX_NODES \
--region=COMPUTE_REGION
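
To enable autoscaling on an existing node pool instead, a sketch along the same lines is:

gcloud container clusters update CLUSTER_NAME \
--enable-autoscaling \
--min-nodes=MIN_NODES \
--max-nodes=MAX_NODES \
--node-pool=POOL_NAME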

Drawbacks of GKE Cluster Autoscaler
Limited cluster size of 15,000 nodes and 150,000 pods.
Cluster autoscaler does not currently support local persistent volumes.
During a scale-up event, cluster autoscaler only balances across zones.
Custom scheduling with different filters is not possible.
The cluster autoscaler cannot always scale down completely; sometimes an extra node remains after scaling down.

110
Q

What should you use to update, list, resize, upgrade, delete, or create GKE container clusters?

A

gcloud container clusters list
gcloud container clusters describe - describe an existing cluster for running containers
gcloud container clusters create - create a cluster for running containers
gcloud container clusters resize - resizes an existing cluster for running containers
gcloud container clusters update - update cluster settings for an existing container cluster
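gcloud container clusters upgrade - upgrade the Kubernetes version of an existing container cluster
gcloud container clusters delete - delete an existing cluster for running containers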

111
Q

What’s the difference between ClusterIP and NodePort?
Which is internal and which is external?

A

The Service types differ in how they expose your workloads:

ClusterIP (default): Internal clients send requests to a stable internal IP address.

NodePort: Clients send requests to the IP address of a node on one or more nodePort values that are specified by the Service.

LoadBalancer: Clients send requests to the IP address of a network load balancer.

112
Q

How do you estimate BigQuery costs for queries?

A

Use "bq query --dry_run" to determine the number of bytes read by the query. Use this number in the Pricing Calculator.
Issue a query dry run

When you run a query in the bq command-line tool, you can use the --dry_run flag to estimate the number of bytes read by the query. You can also use the dryRun parameter when submitting a query job using the API or client libraries.

Dry runs do not use query slots, and you are not charged for performing a dry run. You can use the estimate returned by a dry run to calculate query costs in the pricing calculator.
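
For example (the project, dataset, and table names are hypothetical), the dry run reports how many bytes the query would process; plug that number into the Pricing Calculator:

bq query --use_legacy_sql=false --dry_run 'SELECT name FROM `my-project.mydataset.mytable` LIMIT 1000'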

113
Q

How do you list grantable roles for a resource?

A

List grantable roles for a project:

gcloud iam list-grantable-roles //cloudresourcemanager.googleapis.com/projects/PROJECT_ID

How do I list all grantable roles within my GCP environment at the organization level? I am using… gcloud iam list-grantable-roles but everywhere I read it says I must specify the resource I want to check. I want it to check all resources.

Roles can be used in two ways. Applied to identities and applied to resources. When applied to resources (your example) you must specify the resource because resources only support a subset of all possible roles.

114
Q

What are these commands for:
gcloud iam roles create

A

Command to create new custom roles. You can use this command in two ways:
By providing a YAML file that contains the role definition
By using flags to specify the role definition. When creating a custom role, you must specify whether it applies to the organization level or project level by using the --organization=ORGANIZATION_ID or --project=PROJECT_ID flags. The example below creates a custom role at the project level.
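
A hedged project-level example using flags (the role ID, title, and permissions are illustrative):

gcloud iam roles create instanceViewer --project=my-project \
--title="Instance Viewer" \
--description="Read-only access to instances" \
--permissions=compute.instances.get,compute.instances.list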

115
Q

What gcloud command could you use to create a policy binding?

A

1 - gcloud iam policies create POLICY_ID --attachment-point=ATTACHMENT_POINT --kind=KIND --policy-file=POLICY_FILE [GCLOUD_WIDE_FLAG …]
Requires a policy file
Path to the file that contains the policy, in JSON or YAML format. For valid syntax, see https://cloud.google.com/iam/help/deny/policy-syntax.
The gcloud iam command group lets you manage Google Cloud Identity & Access Management (IAM) service accounts and keys.
Cloud IAM authorizes who can take action on specific resources, giving you full control and visibility to manage cloud resources centrally.

2 - gcloud projects add-iam-policy-binding
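
A sketch of the second, more common form (the project, member, and role are examples):

gcloud projects add-iam-policy-binding my-project \
--member="user:jane@example.com" \
--role="roles/viewer"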

116
Q

What are some ways to design resilient systems?

A

Use live migration
Distribute your VMs
Use zone-specific internal DNS names
Use managed instance groups to create homogeneous groups of VMs
Use startup and shutdown scripts
Back up your data

117
Q

How can you invoke a shutdown script on an instance?

A

Shutdown script invocation
Shutdown scripts are triggered by certain Advanced Configuration and Power Interface (ACPI) events, such as restarts or stops. There are many ways to restart or stop an instance, but only some ways trigger the shutdown script to run. A shutdown script runs as part of the following actions:

When an instance shuts down due to an instances.delete request or an instances.stop request to the API.
When a preemptible instance shuts down as part of the preemption process.

When an instance shuts down through a request to the guest operating system, such as sudo shutdown or sudo reboot.

When you shut down an instance manually through the Google Cloud console or the gcloud compute tool.

The shutdown script won’t run if the instance is reset using instances().reset.
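
A shutdown script is attached through the shutdown-script metadata key; a hedged example for an existing VM (the VM and file names are placeholders):

gcloud compute instances add-metadata my-vm \
--metadata-from-file=shutdown-script=shutdown.sh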

118
Q

How can you monitor and ensure that MIG instances are functioning properly?

A

Two things
Health Check
AutoHealing

Checking the status
You can verify that a VM is created and its application is responding by inspecting the current health state of each VM, by checking the current action on each VM, or by checking the group’s status.

Checking whether VMs are healthy
If you have configured an application-based health check for your MIG, you can review the health state of each managed instance.
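
A rough sketch of wiring the two together (the health check and MIG names, port, and delay are placeholders):

gcloud compute health-checks create http my-health-check --port=80 --check-interval=30s
gcloud compute instance-groups managed update my-mig \
--health-check=my-health-check \
--initial-delay=300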

119
Q

What load balancer would you use for SSL traffic from HTTPS sessions?

A

External HTTP(S) Load Balancing is a proxy-based Layer 7 load balancer that enables you to run and scale your services behind a single external IP address. External HTTP(S) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms (such as Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, and so on), as well as external backends connected over the internet or via hybrid connectivity. For details, see Use cases.

Do not use External SSL Proxy Load Balancing
This is a reverse proxy load balancer that distributes SSL traffic other than HTTPS

What options?
Global external HTTP(S) load balancer. This is a global load balancer that is implemented as a managed service on Google Front Ends (GFEs). It uses the open-source Envoy proxy to support advanced traffic management capabilities such as traffic mirroring, weight-based traffic splitting, request/response-based header transformations, and more.

It runs on Google's global network and control plane.
Regional external HTTP(S) load balancer. This is a regional load balancer that is implemented as a managed service on the open-source Envoy proxy. It includes advanced traffic management capabilities such as traffic mirroring, weight-based traffic splitting, request/response-based header transformations, and more.

120
Q
A

Set a metadata tag on all the PDF file objects with key Content-Type and value application/pdf.

121
Q

How do you control how objects stored in a GCS bucket are displayed when served from a bucket-hosted website?

A

Use Metadata - specifically content type

Objects stored in Cloud Storage have metadata associated with them. Metadata identifies properties of the object, as well as specifies how the object should be handled when it’s accessed. Metadata exists as key:value pairs. For example, the storage class of an object is represented by the metadata entry storageClass:STANDARD. storageClass is the key for the metadata, and all objects have such a key associated with them. STANDARD specifies the value this specific object has, and the value varies from object to object.

Content-Type
The most commonly set metadata is Content-Type (also known as media type), which lets browsers render the object properly. All objects have a value specified in their Content-Type metadata, but this value does not have to match the underlying type of the object. For example, if the Content-Type is not specified by the uploader and cannot be determined, it is set to application/octet-stream or application/x-www-form-urlencoded, depending on how you uploaded the object. For a list of valid content types, see the IANA Media Types page.
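
For example (the bucket name is hypothetical), you can set Content-Type on all PDF objects with gsutil:

gsutil setmeta -h "Content-Type:application/pdf" gs://my-bucket/*.pdf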

122
Q

What do you need to do to run a datastore emulator in your environment?

A

Install the Datastore emulator, which provides local emulation of the production Datastore environment on your workstation, by running gcloud components install cloud-datastore-emulator.
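
A minimal sketch of installing and starting the emulator; running $(gcloud beta emulators datastore env-init) sets the environment variables that point client libraries at the emulator:

gcloud components install cloud-datastore-emulator
gcloud beta emulators datastore start
$(gcloud beta emulators datastore env-init)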

123
Q

What are the two methods to deploy an update to a managed instance group?

A

Rolling Update
Deploys to all instances, rolling the update out incrementally.
rolling-action replace with max-unavailable set to 0 and max-surge set to 1
gcloud compute instance-groups managed rolling-action start-update updates instances in a managed instance group, according to the given versions and the given update policy.

--max-surge=MAX_SURGE
Maximum additional number of instances that can be created during the update process. This can be a fixed number (e.g. 5) or a percentage of size to the managed instance group (e.g. 10%). Defaults to 0 if the managed instance group has stateful configuration, or to the number of zones in which it operates otherwise.

--max-unavailable=MAX_UNAVAILABLE
Maximum number of instances that can be unavailable during the update process. This can be a fixed number (e.g. 5) or a percentage of size to the managed instance group (e.g. 10%). Defaults to the number of zones in which the managed instance group operates.

Canary update
Deploys to a subset of instances so you can test incrementally.
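
A hedged example of each (the MIG and template names are placeholders):

Rolling update:
gcloud compute instance-groups managed rolling-action start-update my-mig \
--version=template=my-new-template \
--max-surge=1 --max-unavailable=0

Canary update:
gcloud compute instance-groups managed rolling-action start-update my-mig \
--version=template=my-old-template \
--canary-version=template=my-new-template,target-size=10%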

124
Q

How do I filter Google Cloud logs?

A

You can use the filter menus in the Query pane to add resource, log name, and log severity parameters to the query-editor field.

Use filter menus
Resource: Lets you specify the resource.type and associated resource. …
Log name: Lets you specify the logName. …
Severity: Lets you specify the severity
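
The same filters can be used from the CLI; for example:

gcloud logging read 'resource.type="gce_instance" AND severity>=ERROR' --limit=10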

125
Q

You want to create a Google Cloud Storage regional bucket logs-archive in the Los Angeles region (us-west2). You want to use Coldline storage class to minimize costs and you want to retain files for 10 years. Which of the following commands should you run to create this bucket?

A

gsutil mb -l us-west2 -c coldline --retention 10y gs://logs-archive
(us-west2 is the Los Angeles region, Coldline is the storage class, and the retention period is 10 years.)

126
Q

What does the gsutil rewrite command do for the user?

A

The gsutil rewrite command rewrites cloud objects, applying the specified transformations to them. The transformation(s) are atomic for each affected object and applied based on the input transformation flags. Object metadata values are preserved unless altered by a transformation. At least one transformation flag, -k or -s, must be included in the command.

The -k flag is supported to add, rotate, or remove encryption keys on objects. For example, the command:

gsutil rewrite -k -r gs://bucket
updates all objects in gs://bucket with the current encryption key from your boto config file, which may either be a base64-encoded CSEK or the fully-qualified name of a Cloud KMS key.

gsutil cp gs://bucket/object#123 gs://bucket/object
gsutil rewrite -k gs://bucket/object
You can use the -s option to specify a new storage class for objects. For example, the command:

gsutil rewrite -s nearline gs://bucket/foo
rewrites the object, changing its storage class to nearline.

127
Q

How do you authenticate a service account from the gcloud command line?

A

NAME
gcloud auth activate-service-account - authorize access to Google Cloud with a service account

SYNOPSIS
gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE [--password-file=PASSWORD_FILE | --prompt-for-password] [GCLOUD_WIDE_FLAG …]

To allow gcloud (and other tools in Google Cloud CLI) to use service account credentials to make requests, use this command to import these credentials from a file that contains a private authorization key, and activate them for use in gcloud. gcloud auth activate-service-account serves the same function as gcloud auth login but uses a service account rather than Google user credentials.
For more information on authorization and credential types, see: https://cloud.google.com/sdk/docs/authorizing.

Key File

To obtain the key file for this command, use either the Google Cloud Console or gcloud iam service-accounts keys create. The key file can be .json (preferred) or .p12 (legacy) format. In the case of legacy .p12 files, a separate password might be required and is displayed in the Console when you create the key.
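
A hedged end-to-end example (the service account and file names are placeholders):

gcloud iam service-accounts keys create key.json \
--iam-account=my-sa@my-project.iam.gserviceaccount.com
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com --key-file=key.json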

128
Q

You want to deploy a Managed Instance Group for a cost sensitive application and only have 1 instance run.
What settings should you set for the MIG?

A

Enable autoscaling on the Managed Instance Group (MIG) and set minimum instances to 1 and maximum instances to 1.
Now the MIG will autoheal and replace a bad instance but won't scale beyond a single instance.

129
Q
A

Google Cloud VPN securely connects your existing network to your Google Cloud Platform (GCP) network through an IPsec VPN connection. Therefore, only resources that are connected to GCP networks can communicate through Cloud VPN tunnels.

App Engine Flexible Environment is based on Google Compute Engine and consequently can connect to your remote network via Cloud VPNs. As described in this article, you can specify network settings in your app.yaml configuration file of your GAE Flexible application.

The GAE Standard environment is not able to use Cloud VPN.
Vanilla App Engine does not use fixed IPs, so even if you could create a tunnel into the same network (which to my knowledge you can't), you wouldn't be able to send a request to the App Engine instance; you simply wouldn't know where to send the request. If you use the flexible environment it's a different story: the flexible environment uses Compute Engine instances.

130
Q

How do you export audit logs somewhere other than Cloud Logging so you can view them there?

A

You have to create a sink.
The sink destination can be:
Pub/Sub
Cloud Storage
BigQuery dataset

131
Q

How do you modify VM settings so that they restart if they crash or are stopped by the system?

A

Set the host maintenance policy of a VM using its maintenance behavior, restart behavior, and host error detection time.
You can change the host maintenance policy of a VM when you first create a VM or after the VM is created, by using the setScheduling method. To configure a VM’s maintenance behavior, restart behavior, and host error detection time, use the onHostMaintenance, automaticRestart, and hostErrorTimeoutSeconds properties. Compute Engine configures all VMs with default values unless you specify otherwise.

onHostMaintenance: determines the behavior when a maintenance event occurs that might cause your VM to reboot.

MIGRATE: causes Compute Engine to live migrate an instance when there is a maintenance event. This is the default value.
TERMINATE: stops a VM instead of migrating it.
automaticRestart: determines the behavior when a VM crashes or is stopped by the system.

true: Compute Engine restarts an instance if the instance crashes or is stopped. This is the default value.
false: Compute Engine does not restart a VM if the VM crashes or is stopped.
hostErrorTimeoutSeconds (Preview): Sets the maximum amount of time, in seconds, that Compute Engine waits to restart or terminate a VM after detecting that the VM is unresponsive.

[Default] unset, Compute Engine waits up to 5.5 minutes (330 seconds) before restarting an unresponsive VM.
Number of seconds between 90 and 330, in increments of 30, which sets how long Compute Engine waits before restarting an unresponsive VM.
All VMs are configured with default values unless you explicitly specify otherwise. During host events, depending on the configured host maintenance policy, VMs that do not support live migration are terminated or automatically restarted.
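
A sketch of setting these options on an existing VM (the VM name and zone are placeholders):

gcloud compute instances set-scheduling my-vm --zone=us-central1-a \
--maintenance-policy=MIGRATE \
--restart-on-failure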

132
Q

Executing shell commands on your container

To troubleshoot some issues, you might have to access the container to execute commands directly on the container itself.
You can access a container through a bash shell

A

Execute shell commands using one of the following methods:
Use kubectl exec to open a bash command shell where you can execute commands.

kubectl exec -it pod-name -- /bin/bash
The following example gets a shell to the suitecrm-0 pod:

kubectl exec -it suitecrm-0 -- /bin/bash
Use kubectl exec to execute commands directly.

kubectl exec -it pod-name -- /bin/bash -c "command(s)"

133
Q

How can you find out the IP addresses of all your instances?

A

gcloud compute instances list displays all Compute Engine instances in a project.
EXAMPLES
To list all instances in a project in table form, run:

gcloud compute instances list

134
Q

What command would you use to initialize your gcloud environment.

A

gcloud init - initialize or reinitialize gcloud

SYNOPSIS
gcloud init [--no-browser] [--console-only, --no-launch-browser] [--skip-diagnostics] [GCLOUD_WIDE_FLAG …]

gcloud init launches an interactive Getting Started workflow for the gcloud command-line tool. It performs the following setup steps:
Authorizes gcloud and other SDK tools to access Google Cloud using your user account credentials, or from an account of your choosing whose credentials are already available.

Sets up a new or existing configuration.
Sets properties in that configuration, including the current project and optionally, the default Google Compute Engine region and zone you’d like to use.
gcloud init can be used for initial setup of gcloud and to create new or reinitialize gcloud configurations. More information about configurations can be found by running gcloud topic configurations.

Persistence
Properties set by gcloud init are local and persistent, and are not affected by remote changes to the project. For example, the default Compute Engine zone in your configuration remains stable, even if you or another user changes the project-level default zone in the Cloud Platform Console.

135
Q

How do you export all entities from a datastore database to GCS?

A

Use the gcloud datastore export command to export all entities in your database.

gcloud datastore export gs://bucket-name --async

136
Q

What would you use to view all the datasets in a BigQuery warehouse?

A

BigQuery uses the bq command line and the command to list datasets is bq ls. bq dir is not a valid bq command.

137
Q

What are the 3 primary Service types that expose workloads in a Kubernetes cluster?

A

In GKE, services are used to expose pods to the outside world. There are multiple types of services. The three common types are - NodePort, ClusterIP, and LoadBalancer (there are two more service types - ExternalName and Headless, which are not relevant in this context). We do not want to create a Cluster IP as this is not accessible outside the cluster. And we do not want to create NodePort as this results in exposing a port on each node in the cluster; and as we have multiple replicas, this will result in them trying to open the same port on the nodes which fail. The compute engine instance in pt-network needs a single point of communication to reach GKE, and you can do this by creating a service of type LoadBalancer. The LoadBalancer service is given a public IP that is externally accessible.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
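
For example (the deployment name and ports are illustrative), you can expose a deployment through a LoadBalancer Service; the EXTERNAL-IP column of kubectl get service shows the load balancer's public IP:

kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
kubectl get service my-app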

138
Q

You want to review IAM users and their assigned roles in the production GCP project. How do you do that?

A

In Google Cloud Platform there is no single command that can do this. Permissions via roles are assigned to resources. Organizations, Folders, Projects, Databases, Storage Objects, KMS keys, etc can have IAM permissions assigned to them. You must scan (check IAM permissions for) every resource to determine the total set of permissions that an IAM member account has.

You must go to the console and use the IAM > Members page, not the IAM Roles page.

139
Q

What would you need to do to prevent service account creation across a project?

A

Preventing creation of service accounts
You can prevent the creation of service accounts by enforcing the constraints/iam.disableServiceAccountCreation organization policy constraint in an organization, project, or folder.

Before you enforce this constraint, consider the following limitations:

If you enforce this constraint in a project, or in all projects within an organization, then some Google Cloud services cannot create default service accounts. As a result, if the project runs workloads that need to impersonate a service account, the project might not contain a service account that the workload can use.

To address this issue, you can enable service account impersonation across projects. When you enable this feature, you can create service accounts in a centralized project, then attach the service accounts to resources in other projects.

Some features, such as workload identity federation, require you to create service accounts.

If you do not use workload identity federation, consider using organization policy constraints to block federation from all identity providers.
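
A hedged example of enforcing this constraint on a project (the project ID is a placeholder):

gcloud resource-manager org-policies enable-enforce iam.disableServiceAccountCreation --project=my-project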

140
Q

What would you do to enable a user to manage or use service accounts?

A

Grant the Service Account User role (roles/iam.serviceAccountUser) at the project level for all service accounts in the project, or at the service account level.

Granting the Service Account User role to a user for a project gives the user access to all service accounts in the project, including service accounts that might be created in the future.

Granting the Service Account User role to a user for a specific service account gives a user access to only that service account.

This role’s permissions include the iam.serviceAccounts.actAs permission.

Users granted the Service Account User role on a service account can use it to indirectly access all the resources to which the service account has access.
For example, if a service account has been granted the Compute Admin role (roles/compute.admin), a user that has been granted the Service Account User role (roles/iam.serviceAccountUser) on that service account can act as the service account to start a Compute Engine instance.
In this flow, the user impersonates the service account to perform any tasks using its granted roles and permissions.

141
Q

What are best practices when managing service accounts?

A

Service accounts represent your service-level security. The security of the service is determined by the people who have IAM roles to manage and use the service accounts, and people who hold private external keys for those service accounts. Best practices to ensure security include the following:

Use the IAM API to audit the service accounts, the keys, and the allow policies on those service accounts.

If your service accounts don’t need external keys, delete them.

If users don’t need permission to manage or use service accounts, then remove them from the applicable allow policy.

Make sure that service accounts have the fewest permissions possible.

Use default service accounts with caution, because they are automatically granted the Editor (roles/editor) role on the project.

142
Q

What are dangers of default service accounts?

A

When you enable or use some Google Cloud services, they create user-managed service accounts that enable the service to deploy jobs that access other Google Cloud resources. These accounts are known as default service accounts.

If your application runs in a Google Cloud environment that has a default service account, your application can use the credentials for the default service account to call Google Cloud APIs. Alternatively, you can create your own user-managed service account and use it to authenticate. For details, see Finding credentials automatically.

143
Q

How do you set a startup script using a metadata key?

A

A startup script is a file that performs tasks during the startup process of a virtual machine (VM) instance. Startup scripts can apply to all VMs in a project or to a single VM. Startup scripts specified by VM-level metadata override startup scripts specified by project-level metadata, and startup scripts only run when a network is available. This document describes how to use startup scripts on Linux VM instances. For information about how to add a project-level startup script, see gcloud compute project-info add-metadata.

For Linux startup scripts, you can use a bash or non-bash file. To use a non-bash file, designate the interpreter by adding a #! to the top of the file. For example, to use a Python 3 startup script, add #! /usr/bin/python3 to the top of the file.

If you specify a startup script by using one of the procedures in this document, Compute Engine does the following:

Copies the startup script to the VM

Sets run permissions on the startup script

Runs the startup script as the root user when the VM boots

Metadata keys and their uses:
startup-script: Passes a bash or non-bash startup script that is stored locally or added directly and that is up to 256 KB in size.
startup-script-url: Passes a bash or non-bash startup script that is stored in Cloud Storage and that is greater than 256 KB in size. The string you enter here is used as-is to run gsutil. If your startup-script-url contains space characters, then don't replace the spaces with %20 or add double quotes ("") to the startup-script-url string.
gcloud compute instances create VM_NAME \
--image-project=debian-cloud \
--image-family=debian-10 \
--scopes=storage-ro \
--metadata=startup-script-url=CLOUD_STORAGE_URL
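
For an existing VM, you can attach a local startup script through metadata as well (the VM and file names are placeholders):

gcloud compute instances add-metadata my-vm \
--metadata-from-file=startup-script=startup.sh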

144
Q

What permissions does a default service account have?

A

When you create a new Compute Engine instance, it is automatically configured with the following access scopes:

Read-only access to Cloud Storage:
https://www.googleapis.com/auth/devstorage.read_only
Write access to write Compute Engine logs:
https://www.googleapis.com/auth/logging.write
Write access to publish metric data to your Google Cloud projects:
https://www.googleapis.com/auth/monitoring.write
Read-only access to Service Management features required for Google Cloud Endpoints(Alpha):
https://www.googleapis.com/auth/service.management.readonly
Read/write access to Service Control features required for Google Cloud Endpoints(Alpha):
https://www.googleapis.com/auth/servicecontrol
Write access to Cloud Trace allows an application running on a VM to write trace data to a project.
https://www.googleapis.com/auth/trace.append

145
Q

What are the default firewall rules that get allowed when you create an automode VPC?

A

Ingress allow rules for RDP (tcp:3389), SSH (tcp:22), ICMP, and internal traffic between instances on the network.

146
Q

How do you clone a project?

A

You can’t clone a project.

147
Q

How do you create a standard Compute Engine instance with an N1 machine type and 4 vCPUs?

A

The vCPU count is encoded in the machine type name:

gcloud compute instances create --machine-type=n1-standard-4 server-1

148
Q

What is a lifecycle policy?

A

What is Lifecycle Policy?
It is used to create rules like setting a Time to Live (TTL) for objects, or “downgrading” storage classes of objects to help save money.

You can add a lifecycle management configuration to a bucket. You will have to add certain conditions so once an object meets the criteria of any of the rules, Cloud Storage automatically performs a specified action on the object.

Here are some example use cases:

Adding a rule to downgrade the storage class for objects older than 180 days to Coldline Storage.

Adding a rule for Deletion of objects created before Dec 01, 2020.

There are two types of actions that can be specified in a lifecycle rule: Delete and SetStorageClass.

Delete: It deletes an object when the object meets the conditions specified in the lifecycle policy.

SetStorageClass: It changes the storage class of an object when a certain object meets the specified conditions in the lifecycle policy.
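
A minimal sketch of applying such rules with gsutil (the bucket name is hypothetical); save the rules as lifecycle.json, mirroring the two example rules above, then apply them:

{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 180}},
    {"action": {"type": "Delete"}, "condition": {"createdBefore": "2020-12-01"}}
  ]
}

gsutil lifecycle set lifecycle.json gs://my-bucket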

149
Q

What are the basic features, failover options, and read options for Cloud SQL?

A

What is Google Cloud SQL?
Google Cloud SQL is a relational database service whose main offerings are relational and transactional data (heavily used in banking). For example, without database transactions your bank could not offer to transfer money from one account to another: what if the transfer of $100 didn't result in the money arriving in the destination account? Your bank just lost $100. That's why SQL is required; it has features like commit and rollback.

A classic relational database requires a lot of setup, management, configuration, maintenance, and administration. To get rid of all of this, GCP provides a platform where you can easily manage and administer your database instances.

It offers MySQL, PostgreSQL, and SQL Server as a fully managed service, it offers a database that is capable of handling terabytes of data (up to 30 TB). You always have an option of running your own DB server in a VM machine but then you have management overhead.

It provides read replica, external replica, and failover features; if there is an outage, it will fail over to another zone.

A backup option is there, either scheduled or on-demand.

You can scale vertically by changing machine type or horizontally like Read replica.

Customer data is always encrypted, whether on Google's internal network, in database tables, or in backups.

Cloud SQL is compatible with other Google services like app engine, compute engine, or external applications also like MySQL workbench.

It reduces maintenance costs and automates Database provisioning, backups, patches, and capacity increases ensuring 99.95% availability.

It provides you with High Availability with automatic failover.

Data is always encrypted at rest or in transit

It helps you focus on your app rather than management.

Architectural Diagram:

The primary instance writes logs to the system database every second in terms of the heartbeat signal, in any case, if heartbeats aren’t detected for 60 seconds, a failover process is initiated. This may also occur if the zone containing the primary instance experiences an outage. In case of failover, the standby instance serves as a backup database from the secondary zone.

150
Q

What is the difference between layer 4 and layer 7 load balancer?

A

Layer 4-LBs act almost as transport layer-aware routers that do no packet manipulation and are faster than Layer 7-LBs that perform a number of manipulation to packets and also have session affinity feature ensuring connections that result from the same source are always served from the same backend. Layer 7-LBs are more common and are often always software whereas Layer 4 - Load Balancers are less common, and tend to be implemented in dedicated hardware.

One important note about Layer 7-LBs is their ability to terminate SSL traffic. This is a limitation for most Layer 4-LBs, as they cannot determine whether incoming packets are wrapped in SSL and therefore fail to terminate SSL traffic. L7 load balancers can have CA certificates installed within them to verify the authenticity of the service, instead of the backends having to store and handle them. The processing strain from having to encrypt and decrypt such requests is pushed onto the Layer 7 load balancers, which decrypt the data and re-encrypt the packet for transmission to the backend server. This often results in higher latency and can be problematic at times.

Within Layer 7 - Load Balancers, the packet is inspected, although this can be a costly process in terms of latency, it has additional features like balancing traffic based on content. For example, your company has a pool of backends that have been fitted with some high-end instances optimized for video processing. Another pool may contain low-power CPUs that are optimized for static websites. Layer 7 - Load Balancers can use the URL path e.g. whizlabs.com/courses to serve the most appropriate backend to send incoming traffic to the ones with high-end instances, whereas requests to a different URL such as whizlabs.com/blogs can be transferred to the low-power instances, all thanks to the Layer 7 - Load Balancers ability to intelligently split traffic.

Another interesting feature of Layer 7 - Load Balancers is the fact of session affinity or connection stickiness. It is the tendency for a connection where the traffic from the same source continues to be served from the same backend. So if your IP is 35.145.224.101 and you connect to Youtube servers, that are configured with Layer 7-LBs, there is a high chance your tutorial on ‘How to get GCP Certified Profession’, is being served by the exact same server even if you switch to any other video. This way you receive an uninterrupted consistent connection, which improves the quality of service. Session affinity provides a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy.

151
Q

How do you create a GKE cluster
named example-cluster
4 nodes
Load balancer

A

gcloud container clusters create example-cluster --cloud-run-config=load-balancer-type=INTERNAL --num-nodes=4

152
Q

What does the 0.0.0.0/0 all-zeros CIDR block mean?

A

0.0.0.0/0 defines an IP block containing all possible IP addresses.

It is commonly used in routing to depict the default route as a destination subnet.
For firewalls in GCP, it matches all addresses in the IPv4 address space. As a route, it is present on most hosts, directed towards a local router.

153
Q

What steps are required to ensure SSH working for a newly migrated instance?

A

TCP port 22 open on an ingress firewall rule (the protocol is TCP, not UDP)
SSH key added to the instance or project SSH keys metadata
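
A sketch of the firewall part (the rule and network names are placeholders):

gcloud compute firewall-rules create allow-ssh \
--network=my-vpc --direction=INGRESS \
--allow=tcp:22 --source-ranges=0.0.0.0/0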

154
Q
A

Cool-down period: How long to wait before collecting information from a new instance. This should be at least the time it takes to initialize the instance.

The cool down period is also known as the application initialization period. While an application is initializing on an instance, the instance’s usage data might not reflect normal circumstances. So the autoscaler uses the cool down period for scaling decisions in the following ways:

For scale-in decisions, the autoscaler considers usage data from all instances, even an instance that is still within its cool down period. The autoscaler recommends to remove instances if the average utilization from all instances is less than the target utilization.
For scale-out decisions, the autoscaler ignores usage data from instances that are still in their cool down period.
If you enable predictive mode, the cool down period informs the predictive autoscaler to scale out further in advance of anticipated load, so that applications are initialized when the load arrives. For example, if you set the cool down period to 300 seconds, then predictive autoscaler creates VMs 5 minutes ahead of forecasted load.
Specify a cool down period to indicate how long it takes applications on your instance to initialize. By default, the cool down period is 60 seconds.

155
Q

What does the gsutil acl ch (change) command do versus the acl set command?

A

Ch
The “acl ch” (or “acl change”) command updates access control lists, similar in spirit to the Linux chmod command. You can specify multiple access grant additions and deletions in a single command run; all changes will be made atomically to each object in turn. For example, if the command requests deleting one grant and adding a different grant, the ACLs being updated will never be left in an intermediate state where one grant has been deleted but the second grant not yet added. Each change specifies a user or group grant to add or delete, and for grant additions, one of R, W, O (for the permission to be granted). A more formal description is provided in a later section; below we provide examples.

Ch Examples
Examples for “ch” sub-command:

Grant anyone on the internet READ access to the object example-object:

gsutil acl ch -u AllUsers:R gs://example-bucket/example-object
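
By contrast, acl set replaces the entire ACL rather than editing it; for example, applying the private canned ACL:

gsutil acl set private gs://example-bucket/example-object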

156
Q

How would you enable the Compute Engine API in Cloud Shell?

A

gcloud services - list, enable and disable APIs and services

gcloud services enable compute.googleapis.com
SYNOPSIS
gcloud services GROUP | COMMAND [GCLOUD_WIDE_FLAG …]
DESCRIPTION
The gcloud services command group lets you manage your project’s access to services provided by Google and third parties.

157
Q

What is VM Metadata and how do you query it?

A

Query VM metadata

Every VM stores its metadata on a metadata server. Use these instructions to query these metadata values. For more information about metadata, see VM metadata.

You can query for default VM metadata, such as the VM’s host name, instance ID, and service account information programmatically from within a VM. For a list of default metadata values, see Default VM metadata values.

You can also query any custom metadata such as startup and shutdown scripts programmatically from within a VM, or, you can use the Google Cloud console or Google Cloud CLI.

This document shows how to complete the following tasks:

Query a single metadata entry
Query a metadata directory listing
Monitor metadata changes using the wait-for-change feature
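
For example, from inside a VM you can query a single entry with curl against the metadata server:

curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname"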

158
Q

What's the best way to set up name resolution for new VMs in your new subnet?

A

In a nutshell, Private DNS zones provide a simple-to-manage internal DNS solution for your private networks on GCP. This GCP-native and managed private zone capability removes the need to provision and manage additional software and compute resources, simplifying management for network administrators. Since DNS queries for private zones are restricted to a private network, hostile agents can't get internal network information.
Cloud DNS private zones offer flexibility in your configurations by allowing multiple zones to be attached to a single VPC network. Additionally, support for split horizons allows you to have a private zone share the same name as a public zone while resolving to different IP addresses in each zone.

DNS peering allows one network to forward DNS requests to another network.
When GCP networks are peered, they do not automatically share private DNS zones, DNS policies, or even internal DNS records. Cloud DNS peering provides a second method for sharing DNS data. You can configure all or a portion of the DNS namespace to be sent from one VPC to another and, once there, it will respect the DNS policies or matching zones defined in the peered network.

Run the dns managed-zones create command:

gcloud dns managed-zones create NAME \
--description=DESCRIPTION \
--dns-name=DNS_SUFFIX \
--labels=LABELS \
--visibility=private \
--networks=VPC_NETWORK
Replace the following:

NAME: a name for your zone
DESCRIPTION: a description for your zone
DNS_SUFFIX: the DNS suffix for your zone, such as example.com