PCA-QA - 71-125 Flashcards
You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce
high availability. What should you do?
D. Create a failover replica instance in the same region, but in a different zone
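A quick sketch of how this looks on the command line. Today, failover replicas have been superseded by the regional (high-availability) configuration; the instance name is a placeholder:

```shell
# Enable HA on an existing Cloud SQL instance ("my-instance" is a
# placeholder). This keeps a standby in a different zone of the same
# region, which is the modern equivalent of a failover replica.
gcloud sql instances patch my-instance --availability-type=REGIONAL
```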
Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application’s performance. What should you do?
C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
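The answer's three steps can be sketched with gcloud; disk, zone, project, and sizing values below are placeholders:

```shell
# 1. Custom image from the existing disk.
gcloud compute images create app-image \
    --source-disk=app-disk --source-disk-zone=us-central1-a

# 2. Instance template from the custom image.
gcloud compute instance-templates create app-template \
    --image=app-image --image-project=my-project --machine-type=e2-medium

# 3. Managed instance group from the template, with autoscaling on CPU.
gcloud compute instance-groups managed create app-group \
    --template=app-template --size=2 --zone=us-central1-a

gcloud compute instance-groups managed set-autoscaling app-group \
    --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```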
Your web application has several VM instances running within a VPC. You want to restrict communications
between instances to only the paths and ports you authorize, but you don’t want to rely on static IP addresses or
subnets because the app can autoscale. How should you restrict communications?
B. Use firewall rules based on network tags attached to the compute instances
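A tag-based rule follows instances wherever the autoscaler places them. A minimal sketch, with network, tags, and port as placeholder values:

```shell
# Allow only web-tagged VMs to reach db-tagged VMs on MySQL's port.
gcloud compute firewall-rules create allow-web-to-db \
    --network=my-vpc --allow=tcp:3306 \
    --source-tags=web --target-tags=db

# Tags are attached to instances, not addresses, so scaling is unaffected.
gcloud compute instances add-tags db-vm-1 --tags=db --zone=us-central1-a
```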
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage
increases and ensure that you don't run out of storage, that CPU usage stays below 75% of your available
cores, and that replication lag stays below 60 seconds. What are the correct steps to meet your requirements?
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds
75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and
shard the database to reduce replication time.
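Step 1 of the answer is a single flag; the alerting policies in steps 2 and 3 are configured in Cloud Monitoring (formerly Stackdriver). The instance name is a placeholder:

```shell
# Let Cloud SQL grow storage automatically as the disk fills.
gcloud sql instances patch crm-db --storage-auto-increase
```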
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This
requires a relational database that can operate on hundreds of terabytes of data. What is the Google-recommended tool for such applications?
D. BigQuery, because it is designed for large-scale processing of tabular data
You have deployed an application to Google Kubernetes Engine (GKE), and are using the Cloud SQL proxy
container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that
the application is reporting database connection issues. Your company policies require a post-mortem. What
should you do?
C. In the GCP Console, navigate to Stackdriver Logging. Consult the logs for GKE and Cloud SQL.
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for
processing and storage. What is the Google-recommended way for your application to authenticate to the
required Google Cloud services?
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
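Granting the role to the VMs' service account looks like this; project and service-account names are placeholders:

```shell
# Let the application VMs publish to Pub/Sub via their attached
# service account -- no keys to manage on the VMs themselves.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:app-vm-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/pubsub.publisher"
```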
You want to establish a Compute Engine application in a single VPC across two regions. The application must
communicate over VPN to an on-premises network.
How should you deploy the VPN?
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway
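A partial sketch using HA VPN, with network and region names as placeholders:

```shell
# One VPN gateway per region on the same VPC.
gcloud compute vpn-gateways create vpn-gw-us --network=my-vpc --region=us-central1
gcloud compute vpn-gateways create vpn-gw-eu --network=my-vpc --region=europe-west1

# Tunnels to the on-premises peer gateway are then created per region with
# `gcloud compute vpn-tunnels create`, plus Cloud Router BGP sessions.
```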
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any
logs older than 45 days should be removed.
You want to optimize storage and follow Google-recommended practices. What should you do?
B. Make the tables time-partitioned, and configure the partition expiration at 45 days
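With partition expiration, BigQuery drops old partitions automatically instead of requiring scheduled DELETEs. The dataset and table names below are placeholders; expiration is given in seconds (45 days × 86,400 s = 3,888,000 s):

```shell
# Day-partitioned log table whose partitions expire after 45 days.
bq mk --table \
    --time_partitioning_type=DAY \
    --time_partitioning_expiration=3888000 \
    mydataset.app1_logs
```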
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load.
What should you do?
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
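The two autoscalers work at different layers: the HPA adds pods, and the cluster autoscaler adds nodes when pods no longer fit. Cluster, zone, deployment, and thresholds below are placeholders:

```shell
# Node-level: enable the cluster autoscaler on the default node pool.
gcloud container clusters update my-cluster --zone=us-central1-a \
    --enable-autoscaling --min-nodes=1 --max-nodes=5

# Pod-level: HorizontalPodAutoscaler targeting 60% CPU utilization.
kubectl autoscale deployment my-app --cpu-percent=60 --min=2 --max=20
```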
You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your
production environment is hosted on-premises. You need to establish a secure, redundant connection between
your on-premises network and the GCP network.
What should you do?
B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure
connection between your networks if Dedicated Interconnect fails.
Case study -
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it’s time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.
Existing Technical Environment -
HipLocal’s environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal’s investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal is configuring their access controls.
Which firewall configuration should they implement?
C. Allow traffic on port 443 for a specific tag.
Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance.
What should you do?
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month. What should you do?
D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
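A sketch of pulling those audit entries from the command line, using the legacy BigQuery audit-log fields; per-user counts can then be aggregated on `protoPayload.authenticationInfo.principalEmail`:

```shell
# Completed query jobs from the last 30 days, as JSON for post-processing.
gcloud logging read \
  'resource.type="bigquery_resource" AND protoPayload.methodName="jobservice.jobcompleted"' \
  --freshness=30d --format=json
```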
You want to automate the creation of a managed instance group. The VMs have many OS package dependencies.
You want to minimize the startup time for new
VMs in the instance group.
What should you do?
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.
Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its
own dataset. Each dataset has multiple tables.
You want analysts from each country to be able to see and query only the data for their respective countries.
How should you configure the access rights?
A. Create a group per country. Add analysts to their respective country-groups. Create a single group ‘all_analysts’, and add all country-groups as members. Grant the ‘all_analysts’ group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
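A sketch of the two grants; project, group addresses, and dataset names are placeholders:

```shell
# Project-level: the umbrella group may run query jobs.
gcloud projects add-iam-policy-binding my-project \
    --member="group:all_analysts@example.com" \
    --role="roles/bigquery.jobUser"

# Dataset-level: per-country read access via the dataset ACL.
bq show --format=prettyjson my-project:us_dataset > acl.json
# ...add {"role": "READER", "groupByEmail": "us_analysts@example.com"}
#    to the "access" array in acl.json, then:
bq update --source acl.json my-project:us_dataset
```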
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20 TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?
B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling and relaunches.
Which feature of Kubernetes should you use to accomplish this?
A. StatefulSets
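A StatefulSet gives each pod a stable, ordinal hostname (web-0, web-1, ...) that survives rescheduling. A minimal sketch, with names and image as placeholders:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # headless Service providing stable DNS names
  replicas: 3               # pods are named web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF
```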
You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio.
What should you do?
A. Customize the cache keys to omit the protocol from the key
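Dropping the protocol from the cache key lets HTTP and HTTPS requests for the same object share one cache entry. The backend-service name is a placeholder:

```shell
# Stop keying cached objects on http:// vs https://.
gcloud compute backend-services update my-backend --global \
    --no-cache-key-include-protocol
```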
Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.
How should you collect these logs from both VMs and services?
B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must
be installed on each instance to collect system logs.
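Installing the (legacy, google-fluentd based) Logging agent on a Linux VM is a two-step script:

```shell
# Download and run Google's agent-repo script, then install the agent.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
```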
You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version.
What should you do?
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
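The deploy-then-split flow in two commands; version IDs, service name, and the 90/10 split are placeholders:

```shell
# Deploy the new version without shifting any traffic to it.
gcloud app deploy --version=v2 --no-promote

# Canary: send 10% of traffic to v2, keep 90% on the current v1.
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1
```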