Review Section Flashcards
What is an AWS connector project?
An AWS connector project is a Google Cloud project that lets Cloud Monitoring read metrics for a specific AWS account. In a typical setup, a Google Cloud project has the AWS connector project as a monitored project; the connector project reads the metrics from the AWS account and then stores those metrics.
An AWS connector project lets you read metrics from an AWS account.
The AWS connector project is created when you connect your AWS account to Google Cloud. For information about these steps, see Connect your AWS account to Google Cloud.
To display your AWS account metrics in multiple Google Cloud projects, connect your AWS account to Google Cloud, and then follow the steps in Add AWS connector projects to a metrics scope.
How can you expand the set of metrics that a Google Cloud project can access?
By default, a Google Cloud project has visibility only to the metrics it stores. However, you can expand the set of metrics that a project can access by adding other Google Cloud projects to the project’s metrics scope. The metrics scope defines the set of Google Cloud projects whose metrics the current Google Cloud project can access.
What are the best practices for scoping projects when you have multiple projects you want to monitor?
We recommend that you use a new Cloud project or one without resources as the scoping project when you want to view metrics for multiple Cloud projects or AWS accounts.
When a metrics scope contains monitored projects, to chart or monitor only those metrics stored in the scoping project, you must specify filters that exclude metrics from the monitored projects. The requirement to use filters increases the complexity of chart and alerting policy configurations, and it increases the possibility of a configuration error. The recommendation ensures that these scoping projects don’t generate metrics, so there are no metrics in the projects to chart or monitor.
As an example of this recommendation, suppose a scoping project named AllEnvironments is created and the Staging and Production projects are then added as monitored projects. To view or monitor the combined metrics for all projects, you use the metrics scope for the AllEnvironments project. To view or monitor only the metrics stored in the Staging project, you use the metrics scope for that project.
What are Stackdriver Groups?
You can use Stackdriver Groups to organize a subset of the resources your team cares about, such as the resources that make up one microservice.
Users within a Workspace all have common view permissions, so that everyone on the team collaborating on an application’s dashboard or debugging an incident generated from an alerting policy will have the same view.
How do you organize Cloud Operations Workspaces by environment?
Organizing by environment means that Workspaces are aligned to environments such as development, staging, and production. In this case, projects are included in separate Workspaces based on their function in the environment. For example, splitting the projects along development and staging/production environments would result in two Workspaces: one for development and one for staging/production, as shown.
What is a metric?
Operations Suite supports creating alerts based on predefined metrics.
A metric is a defined measurement of a resource, taken at regular intervals. Metrics use mathematical calculations to measure outcomes.
Examples of the calculations available through Operations Suite, and specifically the Stackdriver (Monitoring) API, include maximum, minimum, and mean. Each of these calculations might evaluate CPU utilization, memory usage, or network activity.
What is a workspace?
Workspaces
Cloud Monitoring requires an organizational tool to monitor and collect information. In GCP, that tool is called a Workspace.
The Workspace brings together Cloud Monitoring resources from one or more GCP projects. It can even bring in third-party account data from other cloud providers, including Amazon Web Services.
The Workspace collects metric data from one or more monitored projects; however, the data remains project bound. The data is pulled into the Workspace and then displayed.
What are the rules regarding provisioning a workspace?
A Workspace can manage and monitor data for one or more GCP projects.
A project, however, can only be associated with a single Workspace.
Before you create a new Workspace, you need to identify who in the organization has the required roles in a given project. Creating a Workspace requires one of the following roles on the host project:
*Monitoring Editor
*Monitoring Admin
*Project Owner
What are the GCP best practices for workspaces when you have to monitor multiple projects?
Create a separate project to manage all the monitoring activity across multiple Workspaces.
You can add or merge Workspaces, but each project can only be assigned to one Workspace.
What are the 3 types of GKE clusters (zonal and regional)?
Single-zone clusters
A single-zone cluster has a single control plane running in one zone. This control plane manages workloads on nodes running in the same zone.
Multi-zonal clusters
A multi-zonal cluster has a single replica of the control plane running in a single zone, and has nodes running in multiple zones. During an upgrade of the cluster or an outage of the zone where the control plane runs, workloads still run. However, the cluster, its nodes, and its workloads cannot be configured until the control plane is available. Multi-zonal clusters balance availability and cost for consistent workloads. If you want to maintain availability and the number of your nodes and node pools are changing frequently, consider using a regional cluster.
Regional clusters
A regional cluster has multiple replicas of the control plane, running in multiple zones within a given region. Nodes in a regional cluster can run in multiple zones or a single zone depending on the configured node locations. By default, GKE replicates each node pool across three zones of the control plane’s region. When you create a cluster or when you add a new node pool, you can change the default configuration by specifying the zone(s) in which the cluster’s nodes run. All zones must be within the same region as the control plane.
https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters
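For example, a regional cluster with an explicit set of node locations might be created roughly like this (the cluster name, region, and zones are illustrative placeholders):
gcloud container clusters create my-regional-cluster \
    --region us-central1 \
    --node-locations us-central1-a,us-central1-b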
What Cloud Storage systems are there for granting users permission to access your buckets and objects?
Cloud Storage offers two systems for granting users permission to access your buckets and objects: IAM and Access Control Lists (ACLs). These systems act in parallel: in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission.
IAM - grant permissions at the bucket and project levels.
ACLs - used only by Cloud Storage and have limited permission options, per-object basis.
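A rough sketch of the difference using gsutil (the bucket, object, and user names are hypothetical):
# IAM: grant a user read access to objects across the whole bucket
gsutil iam ch user:jane@example.com:objectViewer gs://my-bucket
# ACL: grant the same user read access to a single object
gsutil acl ch -u jane@example.com:R gs://my-bucket/report.csv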
Uniform bucket-level access
Disables ACLs; access to resources is granted exclusively through IAM. After you enable uniform bucket-level access, you have 90 days to reverse your decision.
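A minimal sketch of enabling and checking uniform bucket-level access with gsutil (the bucket name is a placeholder):
gsutil uniformbucketlevelaccess set on gs://my-bucket
gsutil uniformbucketlevelaccess get gs://my-bucket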
What do you need to do to protect your org after you create a billing account and set up projects?
Why?
When an organization resource is created, all users in your domain are granted the Billing Account Creator and Project Creator roles by default. These default roles allow your users to start using Google Cloud immediately, but are not intended for use in regular operation of your organization resource.
Removing default roles from the organization resource
After you designate your own Billing Account Creator and Project Creator roles, you can remove these default roles from the organization resource to restrict those permissions to specifically designated users. To remove the roles, edit the IAM policy on the organization resource.
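One way to do this is with the gcloud CLI; a sketch, where the organization ID and domain are placeholders:
gcloud organizations remove-iam-policy-binding ORGANIZATION_ID \
    --member="domain:example.com" \
    --role="roles/billing.creator"
gcloud organizations remove-iam-policy-binding ORGANIZATION_ID \
    --member="domain:example.com" \
    --role="roles/resourcemanager.projectCreator"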
You want to deploy an application to a Kubernetes Engine cluster using a manifest file called my-app.yaml.
What command would you use?
kubectl apply -f my-app.yaml
kubectl apply -k dir
Explanation
Part of the app management commands.
The correct answer is to use the “kubectl apply -f” with the name of the deployment file. Deployments are Kubernetes abstractions and are managed using kubectl, not gcloud. The other options are not valid commands. For more information, see https://kubernetes.io/docs/reference/kubectl/overview/.
The command set kubectl apply is used at a terminal’s command-line window to create or modify Kubernetes resources defined in a manifest file. This is called a declarative usage. The state of the resource is declared in the manifest file, then kubectl apply is used to implement that state.
In contrast, the command set kubectl create is the command you use to create a Kubernetes resource directly at the command line. This is an imperative usage. You can also use kubectl create against a manifest file to create a new instance of the resource. However, if the resource already exists, you will get an error.
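A quick illustration of the two usages (the deployment name and image are made up):
# Declarative: create or update resources to match the manifest
kubectl apply -f my-app.yaml
# Imperative: create the resource directly; errors if it already exists
kubectl create -f my-app.yaml
kubectl create deployment my-app --image=gcr.io/my-project/my-app:v1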
Kubernetes Engine collects application logs by default when the log data is written where?
app logs: STDOUT and STDERR
In addition to cluster audit logs and logs for the worker nodes, GKE automatically collects application logs written to either STDOUT or STDERR. If you’d prefer not to collect application logs, you can choose to collect only system logs. Collecting system logs is critical for production clusters because it significantly accelerates the troubleshooting process. No matter how you plan to use logs, GKE and Cloud Logging make it simple: start your cluster, deploy your applications, and your logs appear in Cloud Logging.
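For instance, a cluster that collects only system logs could be created roughly like this (cluster name and zone are placeholders; the --logging flag accepts a list of log components):
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --logging=SYSTEM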
Where does GKE collect Cluster logs?
By default, GKE clusters are natively integrated with Cloud Logging (and Monitoring). When you create a GKE cluster, both Monitoring and Cloud Logging are enabled by default. That means you get a monitoring dashboard specifically tailored for Kubernetes and your logs are sent to Cloud Logging’s dedicated, persistent datastore, and indexed for both searches and visualization in the Cloud Logs Viewer.
If you have an existing cluster with Cloud Logging and Monitoring disabled, you can still enable logging and monitoring for the cluster. That’s important because with Cloud Logging disabled, a GKE-based application temporarily writes logs to the worker node, which may be removed when a pod is removed, or overwritten when log files are rotated. Nor are these logs centrally accessible, making it difficult to troubleshoot your system or application.
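Re-enabling logging and monitoring on an existing cluster might look something like this sketch (names and values are illustrative):
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --logging=SYSTEM,WORKLOAD \
    --monitoring=SYSTEM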
Where would you view your GKE logs?
Cloud Logging, and its companion tool Cloud Monitoring, are full-featured products that are both deeply integrated into GKE. You can view your GKE logs in several ways:
Cloud Logging console – You can see your logs directly from the Cloud Logging console by using the appropriate logging filters to select the Kubernetes resources such as cluster, node, namespace, pod or container logs. Here are some sample Kubernetes-related queries to help get you started.
GKE console – In the Kubernetes Engine section of the Google Cloud Console, select the Kubernetes resources listed in Workloads, and then the Container or Audit Logs links.
Monitoring console – In the Kubernetes Engine section of the Monitoring console, select the appropriate cluster, nodes, pod or containers to view the associated logs.
gcloud command line tool – Using the gcloud logging read command, select the appropriate cluster, node, pod and container logs.
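As a sketch of that last option (the cluster name is hypothetical):
gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"' \
    --limit=20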
What is the difference between Regional and global IP addresses?
When you list or describe IP addresses in your project, Google Cloud labels addresses as global or regional, which indicates how a particular address is being used. When you associate an address with a regional resource, such as a VM, Google Cloud labels the address as regional; when you associate it with a global resource, such as a global load balancer’s forwarding rule, the address is labeled as global. Regions are Google Cloud regions, such as us-east4 or europe-west2.
For more information about global and regional resources, see Global, regional, and zonal resources in the Compute Engine documentation.
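For example, creating one of each with gcloud (the address names and region are illustrative):
# Regional address, used by regional resources such as a VM's external IP
gcloud compute addresses create my-regional-ip --region=us-east4
# Global address, used by global resources such as a global load balancer
gcloud compute addresses create my-global-ip --global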
As a developer using GCP, you will need to set up a local development environment. You will want to authorize the use of gcloud commands to access resources. What commands could you use to authorize access?
gcloud init
Explanation
gcloud init will authorize access and perform other common setup steps. gcloud auth login will authorize access only. gcloud login and gcloud config login are not valid commands.
You can also run gcloud init to change your settings or create a new configuration.
gcloud init performs the following setup steps:
Authorizes the gcloud CLI to use your user account credentials to access Google Cloud, or lets you select an account if you have previously authorized access
Sets up a gcloud CLI configuration and sets a base set of properties, including the active account from the step above, the current project, and if applicable, the default Compute Engine region and zone
https://cloud.google.com/sdk/docs/initializing
gcloud auth login
Authorize with a user account without setting up a configuration.
gcloud auth login [ACCOUNT] [--no-activate] [--brief] [--no-browser] [--cred-file=CRED_FILE] [--enable-gdrive-access] [--force] [--no-launch-browser] [--update-adc] [GCLOUD_WIDE_FLAG ...]
Obtains access credentials for your user account via a web-based authorization flow. When this command completes successfully, it sets the active account in the current configuration to the account specified. If no configuration exists, it creates a configuration named default.
If valid credentials for an account are already available from a prior authorization, the account is set to active without rerunning the flow.
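To check which accounts already have stored credentials, and which one is currently active:
gcloud auth list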
You have a Cloud Datastore database that you would like to backup. You’d like to issue a command and have it return immediately while the backup runs in the background. You want the backup file to be stored in a Cloud Storage bucket named my-datastore-backup. What command would you use?
gcloud datastore export gs://my-datastore-backup --async
Explanation
The correct command is gcloud datastore export gs://my-datastore-backup --async. Export, not backup, is the datastore command to save data to a Cloud Storage bucket. gsutil is used to manage Cloud Storage, not Cloud Datastore. For more information, see https://cloud.google.com/datastore/docs/export-import-entities.
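Because the export runs asynchronously, you can check on it afterward with the operations commands (the operation name comes from the export’s output):
gcloud datastore operations list
gcloud datastore operations describe OPERATION_NAME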
How do you set up a database for export?
Before you begin
Before you can use the managed export and import service, you must complete the following tasks.
Enable billing for your Google Cloud project. Only Google Cloud projects with billing enabled can use the export and import functionality.
Create a Cloud Storage bucket in the same location as your Firestore in Datastore mode database. You cannot use a Requester Pays bucket for export and import operations.
Assign an IAM role to your user account that grants the datastore.databases.export permission, if you are exporting data, or the datastore.databases.import permission, if you are importing data. The Datastore Import Export Admin role, for example, grants both permissions.
If the Cloud Storage bucket is in another project, give your project’s default service account access to the bucket.
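A rough sketch of these setup steps with gsutil and gcloud (the project, bucket, location, and user are placeholders):
# Create the bucket in the same location as the database
gsutil mb -l us-central1 gs://my-datastore-backup
# Grant the export/import permissions to a user
gcloud projects add-iam-policy-binding my-project \
    --member="user:jane@example.com" \
    --role="roles/datastore.importExportAdmin"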
Authorize with a user account
Use the following gcloud CLI commands to authorize access with a user account:
gcloud init: Authorizes access and performs other common setup steps.
gcloud auth login: Authorizes access only.
During authorization, these commands obtain account credentials from Google Cloud and store them on the local system.
The specified account becomes the active account in your configuration.
The gcloud CLI uses the stored credentials to access Google Cloud. You can have any number of accounts with stored credentials for a single gcloud CLI installation, but only one account is active at a time.
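To switch the active account among those with stored credentials (the account shown is a placeholder):
gcloud config set account jane@example.com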
A manager in your company is having trouble tracking the use and cost of resources across several projects. In particular, they do not know which resources are created by different teams they manage. What would you suggest the manager use to help better understand which resources are used by which team?
Labels are key-value pairs attached to resources and used to manage them. The manager could use a key-value pair with the key ‘team-name’ and the value the name of the team that created the resource. Audit logs do not necessarily have the names of teams that own a resource. Traces are used for performance monitoring and analysis. IAM policies are used to control access to resources, not to track which team created them.
For more information, see
https://cloud.google.com/resource-manager/docs/creating-managing-labels
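As an illustration (the instance, zone, and label value are made up):
# Label an instance with the team that owns it
gcloud compute instances add-labels my-instance \
    --zone=us-central1-a \
    --labels=team-name=payments
# Later, find that team's resources by filtering on the label
gcloud compute instances list --filter="labels.team-name=payments"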
You have created a target pool with instances in two zones which are in the same region. The target pool is not functioning correctly. What could be the cause of the problem?
The target pool is missing a health check.
Target pools must have a health check to function properly. Nodes can be in different zones but must be in the same region. Cloud Monitoring and Cloud Logging are useful but they are not required for the target pool to function properly. Nodes in a pool have the same configuration. For more information, see https://cloud.google.com/load-balancing/docs/target-pools
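A sketch of adding a health check to an existing target pool (the names, region, and threshold values are illustrative):
# Create a legacy HTTP health check
gcloud compute http-health-checks create basic-check \
    --check-interval=5s --healthy-threshold=2 --unhealthy-threshold=3 \
    --port=80 --request-path=/
# Attach it to the target pool
gcloud compute target-pools add-health-checks my-pool \
    --region=us-central1 \
    --http-health-check=basic-check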
What does an external NLB (target pool-based) load balancer look like?
Google Cloud external TCP/UDP Network Load Balancing (after this referred to as Network Load Balancing) is a regional, pass-through load balancer. A network load balancer distributes external traffic among virtual machine (VM) instances in the same region.
You can configure a network load balancer for TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.
A network load balancer can receive traffic from:
Any client on the internet
Google Cloud VMs with external IPs
Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT
What is a target pool?
Target pools
A target pool resource defines a group of instances that should receive incoming traffic from forwarding rules. When a forwarding rule directs traffic to a target pool, Cloud Load Balancing picks an instance from these target pools based on a hash of the source IP and port and the destination IP and port. Each target pool operates in a single region and distributes traffic to the first network interface (nic0) of the backend instance. For more information about how traffic is distributed to instances, see the Load distribution algorithm section in this topic.
The network load balancers are not proxies. Responses from the backend VMs go directly to the clients, not back through the load balancer. The load balancer preserves the source IP addresses of packets. The destination IP address for incoming packets is the regional external IP address associated with the load balancer’s forwarding rule.
For architecture details, see network load balancer with a target pool backend.
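A minimal sketch of the pieces involved, using placeholder names (and the basic-check health check from the earlier example):
# Target pool with a health check
gcloud compute target-pools create my-pool \
    --region=us-central1 \
    --http-health-check=basic-check
# Backend instances
gcloud compute target-pools add-instances my-pool \
    --instances=vm-1,vm-2 \
    --instances-zone=us-central1-a
# Forwarding rule that directs traffic to the pool
gcloud compute forwarding-rules create my-rule \
    --region=us-central1 \
    --ports=80 \
    --target-pool=my-pool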
What are Health checks?
Health checks ensure that Compute Engine forwards new connections only to instances that are up and ready to receive them. Compute Engine sends health check requests to each instance at the specified frequency. After an instance exceeds its allowed number of health check failures, it is no longer considered an eligible instance for receiving new traffic.
To allow for graceful shutdown and closure of TCP connections, existing connections are not actively terminated. However, existing connections to an unhealthy backend are not guaranteed to remain viable for long periods of time. If possible, you should begin a graceful shutdown process as soon as possible for your unhealthy backend.
The health checker continues to query unhealthy instances, and returns an instance to the pool when the specified number of successful checks occur. If all instances are marked as UNHEALTHY, the load balancer directs new traffic to all existing instances.
Network Load Balancing relies on legacy HTTP health checks to determine instance health. Even if your service does not use HTTP, you must run a basic web server on each instance that the health check system can query.
Legacy HTTPS health checks aren’t supported for network load balancers and cannot be used with most other types of load balancers.
A client has asked for your advice about building a data transformation pipeline. The pipeline will read data from Cloud Storage and Cloud Spanner, merge data from the two sources and write the data to a BigQuery data set. The client does not want to manage servers or other infrastructure, if possible. What GCP service would you recommend?
Cloud Data Fusion
Cloud Data Fusion is a managed service that is designed for building data transformation pipelines. https://cloud.google.com/data-fusion/docs/how-to
What is Cloud Data Fusion?
Cloud Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building and managing data pipelines.
The Cloud Data Fusion web UI lets you build scalable data integration solutions to clean, prepare, blend, transfer, and transform data, without having to manage the infrastructure.
Cloud Data Fusion is powered by the open source project CDAP. Throughout this page, there are links to the CDAP documentation site, where you can find more detailed information.