Pro Network Engineer Cert Flashcards

1
Q

What are the three types of networks?

A

Default, auto mode, and custom

2
Q

What is the default network?

A

It is an auto-mode network with one subnet per region, fixed /20 per region, expandable to /16. Comes with default firewall rules.

3
Q

What is an auto-mode network?

A

One subnet per region, fixed /20 per region, expandable to /16. Regional IP allocation.

4
Q

What is a custom network?

A

No default subnets created, full control of IP ranges, regional IP allocation, expandable to any RFC 1918 size
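A minimal sketch of creating a custom-mode network and one subnet (all names and ranges here are hypothetical, not from the cards):

```shell
# Create a custom-mode network: no subnets are created automatically.
gcloud compute networks create my-custom-net --subnet-mode=custom

# Define a subnet explicitly, choosing the region and IP range yourself.
gcloud compute networks subnets create my-subnet \
  --network=my-custom-net \
  --region=us-central1 \
  --range=10.0.0.0/24
```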

5
Q

Are subnets zonal or regional or global?

A

They are regional - one subnet can span multiple zones

6
Q

What is the first available address in a subnet? What are the ones before it for?

A

.0 is for the network, .1 is for the gateway, so .2 is the first available
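A small sketch of that rule using Python's `ipaddress` module (the function name is illustrative):

```python
import ipaddress

def first_usable(cidr: str) -> str:
    """First VM-assignable address in a subnet's primary range.
    GCP reserves .0 (network) and .1 (subnet gateway)."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + 2)

print(first_usable("10.128.0.0/20"))  # 10.128.0.2
```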

7
Q

Does a VM know its external address?

A

No

8
Q

Are public DNS records published automatically?

A

Nope

9
Q

What is the SLA for Cloud DNS?

A

100%

10
Q

How do you assign multiple IP addresses to a VM? Why would you do this?

A

Multiple addresses can be assigned by attaching multiple NICs. This lets a VM bridge multiple networks or sit on a separate management network.

11
Q

How do you assign a range of IP addresses to a VM? Why would you do this?

A

A range can be assigned through alias IP ranges, which is useful for giving services running on the VM (e.g., containers) their own IP addresses.

12
Q

What is default routing?

A

Every network has a default route for traffic leaving the network, and subnet routes are created automatically so instances can reach the other subnets.

13
Q

Where are firewall rules applied?

A

At the instance level

14
Q

Are firewall rules stateful?

A

Yes

15
Q

What are the default firewall rules?

A

DENY ALL ingress and ALLOW ALL egress

16
Q

How many NICs can a VM have?

A

Every VM can have at least 2. Beyond that, the maximum equals the number of vCPUs, up to a hard limit of 8.

17
Q

When can you add, change, or delete multiple NICs?

A

Only at instance creation

18
Q

Which NIC does internal DNS associate to?

A

nic0

19
Q

What are the restrictions for IPs/networks for multiple NICs

A

Each NIC is on a different network, IP ranges cannot overlap at all, networks must already exist before being configured

20
Q

What are the basic roles for networking? What can they do?

A

Network viewer - read-only access to all networking
Network admin - permissions to create/modify/delete except for firewall rules and SSL certs
Security admin - can create/modify/delete SSL certs and firewall rules

21
Q

What can you specify for targets with firewall rules?

A

All instances, specified target tags, specified service accounts

22
Q

What can you specify for sources with firewall rules?

A

IP ranges, subnets, source tags, and service accounts
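A hedged sketch combining a source and a target option in one rule (all names are hypothetical; note that network tags and service accounts cannot be mixed in the same rule):

```shell
# Allow web traffic from anywhere to instances tagged "web-server".
gcloud compute firewall-rules create allow-web-ingress \
  --network=my-net \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80,tcp:443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=web-server
```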

23
Q

What three roles are needed to provision and manage a shared VPC?

A

Org admin -> Shared VPC Admin -> Service Project Admin

24
Q

Is transitive peering supported?

A

Nope

25
Q

What is the advantage of shared VPC over VPC network peering?

A

Centralized network admin, simplifies internal DNS

26
Q

What is the advantage of VPC network peering over shared VPC?

A

Can be used across organizations, across multiple projects, or within a single project. Allows decentralized network administration, if preferred. Quotas are consumed more slowly because they are spread across multiple projects.

27
Q

Can you peer with a shared VPC?

A

Yes

28
Q

How are DNS names handled across VPC peering?

A

DNS names are NOT transferred across with VPC peering

29
Q

What policies are available for autoscaling a managed instance group?

A

CPU utilization, load balancing capacity, monitoring metrics, and queue-based workloads

30
Q

What are the global load balancing services?

A

HTTP(s) Load Balancer, TCP Proxy, and SSL Proxy

31
Q

What are the regional load balancing services?

A

Network TCP/UDP load balancer and internal TCP/UDP load balancer

32
Q

Where is IPv6 supported?

A

HTTP(s) Load Balancer, TCP Proxy, and SSL Proxy

33
Q

What are the key features of a global HTTP(s) load balancer?

A

Global load balancing, a single anycast IP, autoscaling, backend services with health checks, session affinity (with timeouts), and one or more backends

34
Q

What three things does a backend need to be configured?

A

An instance group, a balancing mode (CPU or RPS), and a capacity scaler (ceiling % of CPU/rate targets)
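A hedged sketch showing those three pieces in one command (names and values are hypothetical):

```shell
# Attach an instance group to a backend service: RATE balancing mode with
# a per-instance target, capped at 80% capacity by the capacity scaler.
gcloud compute backend-services add-backend my-backend-service \
  --global \
  --instance-group=my-mig \
  --instance-group-zone=us-central1-a \
  --balancing-mode=RATE \
  --max-rate-per-instance=100 \
  --capacity-scaler=0.8
```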

35
Q

What is Cloud Armor?

A

Protects load balancers from DDoS attacks; can blacklist or whitelist IPs, configure the deny rule, and set priorities on rules

36
Q

What are the key features of an SSL proxy?

A

Global load balancing for encrypted, non-HTTP traffic, terminates SSL, can do intelligent routing and certificate management, auto security patching

37
Q

What are the key features of a TCP proxy?

A

Global load balancing for non-encrypted, non-HTTP traffic, terminates TCP connections, intelligent routing and security patching

38
Q

What are the key features of a network load balancer?

A

Regional load balancing for TCP/UDP (non-proxied), forwarding rules, has instance groups and target pools

39
Q

What are the key features of an ILB

A

Similar to NLB but internal, has fully distributed software defined load balancing

40
Q

How do L2 connections connect to GCP?

A

They connect a VLAN to a specific GCP network

41
Q

What routing does a VPN support?

A

Static routing or dynamic routes via BGP with a cloud router

42
Q

What is the VPN gateway?

A

A regional resource that uses an external IP address

43
Q

Are any other IPs needed for a VPN setup?

A

Yes. A separate link-local IP address (169.254.x.x) on each side is needed to establish the BGP session for dynamic routing
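A hedged sketch of a Cloud Router with a BGP peer on a link-local address (all names, ASNs, and addresses here are hypothetical):

```shell
# Cloud Router for dynamic routing over a VPN tunnel.
gcloud compute routers create my-router \
  --network=my-net \
  --region=us-central1 \
  --asn=65001

# BGP peer on a link-local address for the tunnel interface.
gcloud compute routers add-bgp-peer my-router \
  --region=us-central1 \
  --peer-name=on-prem-peer \
  --interface=if-tunnel-1 \
  --peer-ip-address=169.254.1.2 \
  --peer-asn=65002
```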

44
Q

What are the SLAs for dedicated interconnect?

A

99.9% with redundant connections in a single metro; 99.99% with redundant connections in two different metros (plus global routing)

45
Q

What is direct peering?

A

Direct connection to Google for access to Google services (non-customer GCP)

46
Q

What do you do if you cannot meet the peering requirements?

A

Carrier peering

47
Q

What is the SLA for peering?

A

None

48
Q

When are you charged for networking?

A

Egress is charged when traffic leaves the zone or region, except for egress to Google services in the same region and to global Google products, which is free

49
Q

What sacrifices are made for the standard network tier?

A

No global load balancing, no global SLA, and more network hops because traffic doesn’t ride GCP’s backbone

50
Q

What is private Google access?

A

VMs with only private IPs can still access Google services (like storage buckets), granted at the subnet level
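Since it is a per-subnet setting, enabling it is a one-line update (subnet name is hypothetical):

```shell
# Turn on Private Google Access for an existing subnet.
gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access
```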

51
Q

What is the benefit of cloud NAT over traditional NAT?

A

It removes a hop: NAT is software-defined and applied at the instance level, so there is no NAT appliance in the data path

52
Q

What is manual mode vs auto mode for the NAT?

A

Manually specify IPs for full control or automatically do it with auto-scaling

53
Q

How do you prevent deployment manager from deploying sequential things in parallel?

A

Add a reference to the previous step in the next step

54
Q

What are VPC Flow Logs?

A

A sample of the flows to/from VMs on the network, aggregated at 5-second intervals with no latency hit, enabled at the subnet level

55
Q

What is included in the VPC flow log?

A

IPs/ports/protocol, plus start/end times, bytes, instance details, vpc details, geography

56
Q

Cymbal needs to connect two on-premises networks to a single VPC network in Google Cloud. One on-premises network supports BGP routing and is located near the us-central1 region. The other on-premises network does not support BGP routing and is located near us-east1. The VPC network has subnets in each of these regions. You will use Cloud VPN to enable private communication between the on-premises networks and the VPC network. Select the configuration that provides the highest availability and the lowest average latency.

Configure the VPC for global dynamic routing mode, create Cloud Routers in each of the 2 regions, connect the office close to us-central1 to the VPC using an HA VPN gateway with dynamic routing in us-central1, and connect the other office via a Classic VPN gateway using static routing in us-east1.

Configure the VPC for global dynamic routing mode, create Cloud Routers in each of the 2 regions, connect each office to its closest region via an HA VPN gateway with dynamic routing in that region.

Configure the VPC for regional dynamic routing mode, create one Cloud Router in the us-central1 region, connect the office close to us-central1 to the VPC using an HA VPN gateway with dynamic routing in us-central1, and connect the other office via a Classic VPN gateway using static routing in us-east1.

Configure the VPC for regional dynamic routing mode, create a Cloud Router in each of the two regions, connect each office to its closest region via an HA VPN gateway with dynamic routing in that region.

A

Option 2: Configure the VPC for global dynamic routing mode, create Cloud Routers in each of the 2 regions, connect each office to its closest region via an HA VPN gateway with dynamic routing in that region.

Explanation:
Global Dynamic Routing Mode: Configuring the VPC for global dynamic routing allows the sharing of routes across all regions, providing seamless and efficient routing between the on-premises networks and the VPC subnets in different regions.

HA VPN Gateways: Using HA (High Availability) VPN gateways with dynamic routing provides better redundancy and reliability than Classic VPN gateways, ensuring that if one VPN tunnel goes down, another one is available, maintaining high availability.

Closest Region Connectivity: By connecting each on-premises network to the closest region (us-central1 and us-east1), you achieve lower latency due to minimized geographical distance, reducing the time it takes for data to travel.

This setup ensures optimized performance and reliability by leveraging high availability VPNs, global dynamic routing, and regionally closest connections.
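A hedged sketch of the building blocks for the selected option, shown for us-central1 (repeat for us-east1; all names and the ASN are hypothetical):

```shell
# Switch the VPC to global dynamic routing so learned routes are
# shared across all regions.
gcloud compute networks update my-vpc --bgp-routing-mode=global

# HA VPN gateway and Cloud Router in the region closest to the office.
gcloud compute vpn-gateways create ha-gw-central \
  --network=my-vpc \
  --region=us-central1

gcloud compute routers create router-central \
  --network=my-vpc \
  --region=us-central1 \
  --asn=65010
```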

57
Q

You are configuring firewall rules for securing a set of microservices (MS1, MS2, MS3) running in separate managed instance groups (MIGs) of VMs in a single subnet of a VPC network. The primary range of the VPC network is 10.128.128.0/20. MS1 will send requests to MS2 on TCP port 8443, MS2 will send requests to MS3 on TCP port 8663, and MS3 will need to send requests to MS1 on TCP port 8883. There will be no other communication to or between these microservices. Select a simple and secure firewall configuration to support this traffic requirement.
Create service accounts (S1, S2, S3) for the microservices and assign those service accounts to the instance template for the MIG used by each microservice, create 3 ingress allow firewall rules, the first for TCP 8443 from source 10.128.128.0/20 to target S2, the second for TCP 8663 from source 10.128.128.0/20 to target S3, the third for TCP 8883 from source 10.128.128.0/20 to target S1.

Create network tags (T1, T2, T3) for the microservices and assign those network tags to the instance template for the MIG used by each microservice, create 3 ingress allow firewall rules, the first for TCP 8443 from source T1 to target T2, the second for TCP 8663 from source T2 to target T3, the third for TCP 8883 from source T3 to target T4.

Create service accounts (S1, S2, S3) for the microservices and assign those service accounts to the instance template for the MIG used by each microservice, create 3 ingress allow firewall rules, the first for TCP 8443 from source S1 to target S2, the second for TCP 8663 from source S2 to target S3, the third for TCP 8883 from source S3 to target S1.

Create network tags (T1, T2, T3) for the microservices and assign those network tags to the instance template for the MIG used by each microservice, create 3 ingress allow firewall rules, the first for TCP 8443 from source 10.128.128.0/20 to target T2, the second for TCP 8663 from source 10.128.128.0/20 to target T3, the third for TCP 8883 from source 10.128.128.0/20 to target T1.

A

Create service accounts (S1, S2, S3) for the microservices and assign those service accounts to the instance template for the MIG used by each microservice, create 3 ingress allow firewall rules:

First rule for TCP 8443 from source S1 to target S2.
Second rule for TCP 8663 from source S2 to target S3.
Third rule for TCP 8883 from source S3 to target S1.
Reasoning:
Service Accounts: Using service accounts provides a more fine-grained level of control and security than using broad IP ranges.
Ingress Rules: Specifying exact sources and targets minimizes exposure and potential attack vectors, ensuring that only the intended microservices communicate with each other on specified ports.
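A hedged sketch of the first rule (MS1 → MS2 on TCP 8443); the other two rules follow the same pattern with the ports and service accounts rotated. Project and account names are hypothetical:

```shell
# Only VMs running as S1 may reach VMs running as S2 on port 8443.
gcloud compute firewall-rules create allow-ms1-to-ms2 \
  --network=my-net \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443 \
  --source-service-accounts=s1@my-project.iam.gserviceaccount.com \
  --target-service-accounts=s2@my-project.iam.gserviceaccount.com
```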

58
Q

Cymbal has a set of VPC Service Controls service perimeters around several projects with BigQuery datasets (each project in its own separate service perimeter) and would like to restrict access to these projects’ BigQuery datasets to VMs in the VPCs of one of these projects (project P1) and for a small set of users to have external access (from a combination of a specific IP range, geo-location, and device type). Select the configuration that satisfies these requirements with minimal configuration.
Create a service perimeter bridge connecting the service perimeters of all the projects and update all the service perimeters to add an access level providing the external access for the specified users.

Update the service perimeter configurations for all the projects to add an ingress rule with an access level to provide the external access for the specified users.

Create a service perimeter bridge connecting the service perimeters of all the projects.

Update the service perimeter configurations for all the projects to add an ingress rule to provide the external access for the specified users, and another ingress rule to provide the access from the VPCs of the specified project P1.

A

The most suitable configuration with minimal setup is:

Option 4:

Update the service perimeter configurations for all the projects to:

Add an ingress rule to provide the external access for the specified users.
Add another ingress rule to provide access from the VPCs of the specified project P1.
Reasoning:
Granular Control: This option allows for granular control by setting specific ingress rules for both external users and internal VMs in the VPC of project P1.
Minimal Configuration: There is no need to create a perimeter bridge or modify multiple access levels across all service perimeters, thus reducing the complexity of configuration.

59
Q

Cymbal has an existing subnet that they’d like to use for a new VPC-native GKE cluster. The subnet primary IP address range is 10.128.128.0/20. Currently there are 1000 other VMs using that subnet and they have taken 1000 of the available IP addresses. The new GKE cluster should support 200,000 pods and 30,000 services. Select the minimal set of configuration steps and the smallest possible IP ranges to enable this.

Create a GKE VPC-native cluster in the subnet, specifying the pod range to be of size /13 and services range to be of size /17.

A

Answer
To support a GKE cluster with 200,000 pods and 30,000 services, you need sufficient IP address ranges for both pods and services:

Pods: To support 200,000 pods, you need a pod IP range of size /13, which provides 524,288 IP addresses (GKE assigns each node a /24 slice of the pod range, so the range must be much larger than the raw pod count).
Services: To support 30,000 services, you need a service IP range of size /17, which provides 32,768 IP addresses.
The option to create a GKE VPC-native cluster in the subnet, specifying a pod range of size /13 and a service range of size /17, provides the required IP space with minimal configuration and uses the smallest possible ranges that meet the need.
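The range arithmetic can be sanity-checked with a one-line helper (the function name is illustrative):

```python
def range_size(prefix_len: int) -> int:
    """Number of IPv4 addresses in a block with the given prefix length."""
    return 2 ** (32 - prefix_len)

print(range_size(13))  # 524288 -- pod range
print(range_size(17))  # 32768  -- services range, covers 30,000 services
```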

60
Q

Cymbal requires restricting access to the Cloud Storage buckets in a project to ensure that the only way the buckets or objects within can be accessed is via users (who also have the necessary IAM role or ACL access to the bucket or object) first connecting to a VM running in a VPC in the project via SSH. They would also like to ensure that users and service accounts are blocked from access to other Google Cloud APIs in the same project from VMs in the project VPCs regardless of whether or not they have access via Cloud IAM roles. Select the approach that can accomplish this with minimal configuration effort and complexity.

Create a VPC service controls service perimeter that includes the project and restricts access to Cloud Storage APIs

Create a VPC service controls service perimeter that includes an ingress rule for all users ingressFrom.identityType: ANY_USER_ACCOUNT, ingressFrom.sources.resource set to the project full path, ingressTo.operations.serviceName is set to storage.googleapis.com, ingressTo.operations.methodSelectors.permission set to google.storage.buckets.get and ingressTo.resources set to "*"

Create a VPC service controls service perimeter that includes the project and restricts access to Cloud Storage APIs and enable VPC accessible services configuring Cloud Storage APIs as accessible.

Update the IAM role bindings for all users with access to the buckets to add an IAM condition of the access level attribute type.

A

To achieve both restrictions with minimal configuration effort:

Create a VPC Service Controls service perimeter that includes the project and restricts access to Cloud Storage APIs, and enable VPC accessible services, configuring Cloud Storage APIs as accessible.

Restricting Cloud Storage with the perimeter ensures the buckets and objects can only be reached from inside the perimeter, i.e., after first connecting via SSH to a VM in one of the project’s VPCs (IAM roles or ACLs are still required on top of this).

Enabling VPC accessible services with only Cloud Storage listed blocks users and service accounts on VMs in the project’s VPCs from reaching any other Google Cloud API, regardless of the IAM roles they hold.

61
Q

How is Cymbal Bank improving efficiency of applying firewall rules uniformly across their four shared VPCs?

Network policies
Hierarchical firewall rules

VPC service controls

Automated firewall rules

A

Correct. Hierarchical firewall rules allow for the same rules to be applied to multiple VPCs across multiple projects, and that is a primary reason Cymbal Bank will use them.

62
Q

A professional cloud network engineer could deploy a variety of network services for their organization to optimize performance, enhance security, and support scalable operations. Here are some key network services:

A

Virtual Private Cloud (VPC):

Subnets: Define segments within a VPC to isolate and manage resources.
VPC Peering: Connects two VPCs for private, high-speed communication.
Shared VPC: Allows multiple projects to share a common VPC network.
Cloud Load Balancing:

Global Load Balancing: Distributes traffic across multiple regions.
Internal Load Balancing: Balances traffic within a VPC.
Cloud VPN:

Site-to-Site VPN: Connects on-premises networks to VPCs securely.
Cloud VPN: Provides encrypted communication between VPCs and external networks.
Interconnect and Peering:

Dedicated Interconnect: Provides high-bandwidth, low-latency connection to Google Cloud.
Partner Interconnect: Connects through a service provider.
Direct Peering: Establishes a private network connection to Google Cloud.
Network Security:

Firewall Rules: Control traffic flow to and from resources.
Cloud Armor: Protects applications from DDoS attacks and other threats.
Private Google Access: Allows access to Google services from private IPs.
Network Services for Applications:

Cloud CDN: Caches content at edge locations to improve performance and reduce latency.
Traffic Director: Manages service-to-service communication and traffic management in microservices.
Monitoring and Management:

Network Monitoring: Tools to monitor and analyze network traffic and performance.
Network Intelligence Center: Provides insights and diagnostics for network performance and health.
DNS Services:

Cloud DNS: Provides scalable and reliable Domain Name System (DNS) services.
Hybrid Connectivity:

Cloud Router: Manages dynamic routing and connects on-premises networks to VPCs using BGP.
Service Perimeters:

VPC Service Controls: Defines and enforces security perimeters around Google Cloud resources to mitigate data exfiltration risks.

63
Q

Cymbal Bank uses Cloud CDN to cache a web-application served from a backend bucket connected to a Cloud Storage bucket. You need to cache all the web-app files with appropriate time to live (TTL) except for the index.html file. The index.html file contains links to versioned files and should always be fetched or re-validated from the origin. Select the configuration option to satisfy these requirements with minimal effort.

Set the Cloud CDN cache mode for the backend bucket to CACHE_ALL_STATIC

Set the Cloud CDN cache mode to USE_ORIGIN_HEADERS, set the Cache-Control metadata for index.html to no-store, and set the Cache-Control headers for all the other files with appropriate TTL values.

Set the Cloud CDN cache mode for the backend bucket to CACHE_ALL_STATIC, and ensure the Cache-Control metadata for index.html is not set or set to no-store, no-cache, or private.

Set the Cloud CDN cache mode for the backend bucket to FORCE_CACHE_ALL, and ensure the Cache-Control metadata for index.html is set to private.

A

To meet Cymbal Bank’s requirements for caching with Cloud CDN, where all files should be cached except for index.html, which should always be fetched or re-validated from the origin, the best approach is:

Set the Cloud CDN cache mode to USE_ORIGIN_HEADERS, set the Cache-Control metadata for index.html to no-store, and set the Cache-Control headers for all the other files with appropriate TTL values.

Here’s why this option is ideal:

USE_ORIGIN_HEADERS: This cache mode ensures that Cloud CDN uses the Cache-Control headers set by the origin (in this case, your backend bucket or Cloud Storage bucket). This allows for precise control over caching behavior directly from the origin.

Cache-Control Metadata for index.html: Setting the Cache-Control metadata for index.html to no-store instructs Cloud CDN not to cache this specific file, ensuring it is always fetched from the origin. This is crucial for a file that contains links to versioned files and needs to reflect the latest version.

TTL for Other Files: For all other files, you can set appropriate TTL values in their Cache-Control headers, which Cloud CDN will respect, thus optimizing caching for these files.
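A hedged sketch of setting that metadata with gsutil (the bucket name and paths are hypothetical):

```shell
# index.html: never cached, always fetched from the origin.
gsutil setmeta -h "Cache-Control:no-store" gs://my-app-bucket/index.html

# Versioned assets: cacheable with a one-hour TTL.
gsutil -m setmeta -h "Cache-Control:public, max-age=3600" \
  "gs://my-app-bucket/static/**"
```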

64
Q

You are designing a load balanced autoscaling front-end for Cymbal Bank. It is intended to be deployed into Google Kubernetes Engine (GKE). You want to use container-native load balancing and autoscale based on the amount of traffic to the service. Select the type of backend and autoscaling that would accomplish this.

A serverless network endpoint group of Kubernetes pods which autoscale using a HorizontalPodAutoscaler.

A zonal network endpoint group of Kubernetes pods which autoscale using a HorizontalPodAutoscaler.

A managed instance group of Kubernetes Engine nodes which autoscale using cluster autoscaling.

A managed instance group of Kubernetes Engine nodes which contain pods that autoscale using a HorizontalPodAutoscaler.

A

To design a load-balanced, autoscaling front-end for Cymbal Bank using Google Kubernetes Engine (GKE) with container-native load balancing, the correct configuration is:

A zonal network endpoint group of Kubernetes pods which autoscale using a HorizontalPodAutoscaler.

Here’s why this option is most suitable:

Zonal Network Endpoint Group (NEG): Container-native load balancing uses zonal NEGs (type GCE_VM_IP_PORT) whose endpoints are the pod IP and port pairs themselves. The load balancer sends traffic directly to pods rather than to node IPs and iptables rules, which removes a hop and improves load distribution and health checking.

HorizontalPodAutoscaler (HPA): HPA automatically adjusts the number of pods based on observed metrics such as CPU utilization or request rate, so the service scales with the amount of traffic it is receiving.

Alternative Options:

Serverless NEG: Serverless NEGs point at Cloud Run, App Engine, or Cloud Functions services, not at Kubernetes pods, so they cannot provide container-native load balancing for GKE.
Managed Instance Group (MIG) of GKE Nodes with Cluster Autoscaling: This scales the underlying VM instances, not the pods, and balances traffic at the node level rather than the container level.
MIG of Nodes Containing HPA-Scaled Pods: Even with HPA, the load balancer still targets node IPs, so this is not container-native.
Thus, a zonal NEG with HPA provides container-native load balancing with traffic-driven autoscaling.

65
Q

Cymbal Bank wants a web application to have global anycast load balancing across multiple regions. The web application will serve static asset files and will also use REST APIs that serve dynamic responses. The load balancer should support HTTP and HTTPS requests and redirect HTTP to HTTPS. The load balancer should also serve all the requests from the same domain name, with different paths indicating static versus dynamic resources. Select the load balancer configuration that would most effectively enable this scenario.

A global external HTTP(S) load balancer with one global forwarding rule, forwarding to one target proxy with one URL map connected to 2 backend services

2 global external HTTP(S) load balancers, each with one global forwarding rule forwarding to one target proxy with one URL map connected to 1 backend service

A global external HTTP(S) load balancer with two global forwarding rules, forwarding to two target proxies, one with URL map and no backend service and the other with URL map and 2 backend services

A global external HTTP(S) load balancer with two global forwarding rules, forwarding to two target proxies, one with URL map and no backend service and the other with URL map, one backend service, and one backend bucket.

A

To meet Cymbal Bank’s requirements for global anycast load balancing of static and dynamic content under one domain with HTTP-to-HTTPS redirection, the most effective configuration is:

A global external HTTP(S) load balancer with two global forwarding rules, forwarding to two target proxies, one with a URL map and no backend service and the other with a URL map, one backend service, and one backend bucket.

Here’s why this configuration is ideal:

Two Forwarding Rules and Two Target Proxies: Redirecting HTTP to HTTPS requires a forwarding rule on port 80 whose target HTTP proxy uses a URL map that only issues the redirect (and therefore needs no backends), plus a forwarding rule on port 443 whose target HTTPS proxy serves the real traffic. A single forwarding rule cannot listen on both ports.

URL Map with Path-Based Routing: The HTTPS proxy’s URL map serves everything under the same domain name and routes by path, sending static paths to one backend and API paths to another.

One Backend Service and One Backend Bucket: The backend bucket serves the static asset files directly from Cloud Storage, while the backend service handles the dynamic REST API responses.

This configuration provides global anycast load balancing, HTTP-to-HTTPS redirection, and path-based routing of static versus dynamic resources under a single domain.

66
Q

Select the list of the resources that must be created or configured to enable packet mirroring.

A packet mirroring policy and a collector instance

A packet mirroring policy, an instance group of collector instances, and firewall rules

A packet mirroring policy, a collector instance, and firewall rules

A packet mirroring policy, An internal TCP/UDP load balancer configured for packet mirroring, an instance group of collector instances, and firewall rules

A

To enable packet mirroring in Google Cloud, you need to create or configure the following resources:

A packet mirroring policy, a collector instance, and firewall rules.

Here’s why each component is required:

Packet Mirroring Policy: Defines what traffic to mirror and how to mirror it. This policy specifies the source and destination of the mirrored traffic.

Collector Instance: The instance where the mirrored traffic will be sent for analysis. This can be a single instance or an instance group, depending on the scale of the traffic and the setup.

Firewall Rules: Necessary to allow the traffic between the mirrored source and the collector instance. Firewall rules must be configured to permit the flow of mirrored packets to the collector.

Additional Information:

Instance Group of Collector Instances: While using a single collector instance is typical for small-scale setups, an instance group may be used for high availability or to handle larger volumes of traffic. However, it’s not always a strict requirement.

Internal TCP/UDP Load Balancer: This is not needed for basic packet mirroring setup. The mirroring policy itself and proper firewall rules are sufficient to route the mirrored traffic to the collector.
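In the gcloud API, the collector destination is the forwarding rule of an internal TCP/UDP load balancer created with the mirroring-collector flag. A hedged sketch (all resource names are hypothetical, and the backend service is assumed to already exist):

```shell
# Internal LB forwarding rule marked as a mirroring collector.
gcloud compute forwarding-rules create collector-fr \
  --region=us-central1 \
  --load-balancing-scheme=internal \
  --network=my-net \
  --subnet=my-subnet \
  --backend-service=collector-backend \
  --ip-protocol=TCP \
  --ports=all \
  --is-mirroring-collector

# Mirroring policy: send traffic from my-subnet to the collector.
gcloud compute packet-mirrorings create my-mirror \
  --region=us-central1 \
  --network=my-net \
  --collector-ilb=collector-fr \
  --mirrored-subnets=my-subnet
```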

67
Q

Cymbal is using Cloud NAT to provide internet connectivity to a group of VMs in a subnet. There are 500 VMs in the subnet and each VM may have up to 1000 internet bound connections simultaneously. What Cloud NAT configuration will support this requirement?

Set the minimum ports per VM to 1000 and the number of IP addresses used by the Cloud NAT Gateway to 8.

Set the minimum ports per VM to 1000 and the number of IP addresses used by the Cloud NAT Gateway to 6.

Set the minimum ports per VM to 2000 and the number of IP addresses used by the Cloud NAT Gateway to 10.

Set the minimum ports per VM to 2000 and the number of IP addresses used by the Cloud NAT Gateway to 8.

A

To support 500 VMs, each with up to 1000 simultaneous internet-bound connections, you need to configure Cloud NAT to handle the required number of NAT ports efficiently. Each NAT IP address can support a certain number of ports.

Here’s the detailed calculation and configuration needed:

Total Number of Connections:

500 VMs × 1000 connections/VM = 500,000 simultaneous connections.
Ports per IP Address:

Each NAT IP address provides 64,512 usable source ports (65,536 total ports minus the 1,024 reserved well-known ports).
Number of IP Addresses Required:

500,000 ports ÷ 64,512 ports per IP ≈ 7.75, which rounds up to 8 IP addresses.
Based on the options provided:

Set the minimum ports per VM to 1000 and the number of IP addresses used by the Cloud NAT Gateway to 8.
A minimum of 1000 ports per VM reserves exactly the 1000 simultaneous connections each VM needs, and 8 NAT IP addresses supply 8 × 64,512 = 516,096 ports, enough to cover the 500,000 required. Six IP addresses (387,072 ports) fall short, and raising the minimum to 2000 ports per VM would double the reservation to 1,000,000 ports, which neither 8 nor 10 IP addresses can supply.
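The sizing arithmetic can be checked with a short calculation (a sketch, assuming 64,512 usable source ports per NAT IP address):

```python
# Cloud NAT sizing sketch using the values from the question.
vms = 500
conns_per_vm = 1000          # simultaneous internet-bound connections per VM
ports_per_nat_ip = 64512     # 65,536 ports minus 1,024 reserved well-known ports

ports_needed = vms * conns_per_vm                   # total ports to reserve
ips_needed = -(-ports_needed // ports_per_nat_ip)   # ceiling division

print(ports_needed, ips_needed)  # 500000 ports -> 8 NAT IP addresses
```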

68
Q

Cymbal Bank would like to protect their services, which are deployed behind an HTTP(S) load balancer, from L7 distributed denial of service (DDoS), SQL injection (SQLi), and cross-site scripting (XSS) attacks. Select the simplest approach to accomplish this in Google Cloud.

Configure Cloud Armor with the appropriate rules.

Configure Google Cloud NAT with the appropriate rules.

Configure Google Cloud WAF with the appropriate rules.

Configure a VM with appropriate scanning and filtering software in front of the HTTP(S) load balancer.

A

To protect services behind an HTTP(S) load balancer from L7 distributed denial of service (DDoS) attacks, SQL injection (SQLi), and cross-site scripting (XSS) attacks, the simplest and most effective approach in Google Cloud is to:

Configure Cloud Armor with the appropriate rules.

Reasons:
Cloud Armor:

Layer 7 Protection: Google Cloud Armor is specifically designed to protect applications from Layer 7 attacks, including DDoS, SQLi, and XSS.
Integration: It integrates seamlessly with Google Cloud HTTP(S) Load Balancers.
Custom Rules: Allows you to define custom security policies and rules to block malicious traffic.
Managed Security: Provides managed DDoS protection and threat intelligence.
Google Cloud NAT:

Primarily used for outbound internet connectivity for VMs and does not provide Layer 7 security.
Google Cloud WAF:

Google Cloud does not have a separate WAF service; Cloud Armor serves as the WAF solution within Google Cloud.
VM with Scanning and Filtering Software:

Requires additional management and complexity compared to Cloud Armor, and does not provide the same level of integration and automation.
Conclusion: Cloud Armor is the managed, natively integrated WAF for Google Cloud HTTP(S) load balancers, making it the simplest way to defend against L7 DDoS, SQLi, and XSS attacks.

69
Q
A
70
Q

You are designing a system in Google Cloud to ensure all traffic being sent between two subnets is passed through a security gateway VM. The VM runs 3rd party software that scans traffic for known attack signatures, then forwards or drops traffic based on the scan results. You want to accomplish this without using public IP addresses. Select a configuration that satisfies these requirements.

Create the 2 subnets in the same VPC. Create a VM running the 3rd party scanning software in one of the subnets. Create custom routes in the VPC to send traffic for each subnet from the opposite subnet through that VM.

Create the 2 subnets in the same VPC. Create 2 VMs running the 3rd party scanning software, with one in each of the subnets. Create custom routes in the VPC to send traffic destined for each subnet originating in the opposite subnet through the VM in the opposite subnet.

Create the 2 subnets in 2 separate VPCs. Create a VM with 2 network interfaces (NICs), with each NIC connected to the subnet in each VPC. Create custom routes in each VPC to send traffic destined for each subnet originating in the opposite subnet through the VM.

Create the 2 subnets in the same VPC. Create a VM running the 3rd party scanning software in each of the subnets. Create custom routes in the VPC to send traffic destined for each subnet originating in the opposite subnet through the VM in its subnet.

A

To ensure that all traffic between two subnets is routed through a security gateway VM for inspection, without using public IP addresses, the configuration that satisfies the requirements is:

Create the 2 subnets in 2 separate VPCs. Create a VM with 2 network interfaces (NICs), with each NIC connected to the subnet in each VPC. Create custom routes in each VPC to send traffic destined for each subnet originating in the opposite subnet through the VM.

Explanation:
Why the Subnets Must Be in Separate VPCs: Within a single VPC, subnet routes are created automatically and always take precedence over custom static routes for destinations inside the VPC. Traffic between two subnets in the same VPC therefore follows the direct subnet route and cannot be forced through a gateway VM, which rules out all of the single-VPC options.

Multi-NIC Gateway VM: A VM with one NIC in each VPC sits in the data path between the two networks. The 3rd party software receives traffic on one interface, scans it, and forwards or drops it via the other interface. The VM must be created with IP forwarding enabled.

Custom Routes: In each VPC, a custom static route for the opposite subnet’s range, with the gateway VM as the next hop, steers all inter-subnet traffic through the VM in both directions.

Internal Addressing Only: The custom routes and the VM’s NICs use internal IP addresses throughout, so no public IP addresses are required.
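The routing mechanics can be sketched with hypothetical gcloud commands (names, ranges, and zone are illustrative): the gateway VM is created with IP forwarding enabled, and a custom static route points the remote subnet’s prefix at the VM.

```shell
# The gateway VM must be created with IP forwarding enabled;
# this setting cannot be changed after creation.
gcloud compute instances create scan-gw \
    --zone=us-central1-a \
    --can-ip-forward \
    --network-interface=subnet=subnet-a,no-address

# Route traffic destined for the other subnet (assumed 10.0.2.0/24)
# through the gateway VM.
gcloud compute routes create to-subnet-b-via-gw \
    --network=vpc-a \
    --destination-range=10.0.2.0/24 \
    --next-hop-instance=scan-gw \
    --next-hop-instance-zone=us-central1-a
```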

71
Q

Select the list of the resources that must be created or configured to enable packet mirroring.

A packet mirroring policy and a collector instance

A packet mirroring policy, an instance group of collector instances, and firewall rules

A packet mirroring policy, a collector instance, and firewall rules

A packet mirroring policy, An internal TCP/UDP load balancer configured for packet mirroring, an instance group of collector instances, and firewall rules

A

To enable packet mirroring in Google Cloud, the following resources must be created or configured:

A packet mirroring policy, an internal TCP/UDP load balancer configured for packet mirroring, an instance group of collector instances, and firewall rules

Explanation:
Packet Mirroring Policy: Defines which traffic is mirrored (the mirrored sources, traffic direction, and any filters) and names the collector destination.

Internal TCP/UDP Load Balancer Configured for Packet Mirroring: Mirrored traffic cannot be sent to a VM directly; the collector destination must be the forwarding rule of an internal TCP/UDP load balancer that is flagged for packet mirroring.

Instance Group of Collector Instances: The load balancer’s backend instance group contains the VMs that receive and process the mirrored packets.

Firewall Rules: An ingress allow rule is required so the mirrored traffic can reach the collector instances; otherwise the mirrored packets are dropped.

The other options are each missing one or more of these required components.
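As a sketch of how these pieces wire together in gcloud (all names are hypothetical, and the backend service with its collector instance group is assumed to already exist), the mirrored traffic is delivered through an internal load balancer forwarding rule flagged as a mirroring collector:

```shell
# Forwarding rule for the collector ILB, flagged for packet mirroring.
gcloud compute forwarding-rules create collector-fr \
    --region=us-central1 \
    --load-balancing-scheme=internal \
    --network=my-vpc --subnet=collector-subnet \
    --backend-service=collector-backend \
    --ports=all \
    --is-mirroring-collector

# The mirroring policy, pointing mirrored subnets at the collector.
gcloud compute packet-mirrorings create mirror-policy \
    --region=us-central1 \
    --network=my-vpc \
    --collector-ilb=collector-fr \
    --mirrored-subnets=app-subnet

# Ingress allow rule so mirrored traffic reaches the collector instances.
gcloud compute firewall-rules create allow-mirrored \
    --network=my-vpc \
    --direction=INGRESS \
    --action=allow --rules=all \
    --source-ranges=10.0.0.0/8 \
    --target-tags=collector
```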

72
Q

Cymbal Bank is connecting one of their Shared VPC networks to their on-premises network via Dedicated Interconnect. Select the recommended approach for configuring their VLAN attachments and Cloud Routers.

Create the VLAN attachments and Cloud Routers in the Shared VPC service projects.

Create the Cloud Routers in the Shared VPC host project and the VLAN attachments in the Shared VPC service projects.

Create the VLAN attachments in the Shared VPC host project and the Cloud Routers in the Shared VPC service projects.

Create the VLAN attachments and Cloud Routers in the Shared VPC host project.

A

For connecting a Shared VPC network to an on-premises network via Dedicated Interconnect, the recommended approach is:

Create the VLAN attachments and Cloud Routers in the Shared VPC host project.

Explanation:
VLAN Attachments: These are configured to handle the connection between Google Cloud and your on-premises network. Since the Shared VPC host project contains the network resources, it’s logical to manage VLAN attachments here.

Cloud Routers: These are responsible for exchanging routes between Google Cloud and your on-premises network. Cloud Routers should be created in the same project as the VLAN attachments to ensure seamless integration and management.

By placing both the VLAN attachments and Cloud Routers in the Shared VPC host project, you centralize the management of network resources and maintain consistency in network configurations.

73
Q

Cymbal Bank is connecting a branch office with an old VPN gateway that doesn’t support BGP. The old VPN gateway only supports IKEv1 and does not support local and remote traffic selectors to be configured as 0.0.0.0/0. Select the configuration option that can satisfy these requirements.

Configure a Classic VPN gateway to connect to the on-premises gateway and to use dynamic routing.

Configure an HA VPN gateway to connect to the on-premises gateway and to use dynamic routing.

Configure a Classic VPN gateway to connect to the on-premises gateway using static routing with a policy-based tunnel with local and remote traffic selectors matching the office VPN but reversed.

Configure a Classic VPN gateway to connect to the on-premises gateway using static routing with a route-based tunnel.

A

For connecting a branch office with an old VPN gateway that doesn’t support BGP and only supports IKEv1, the recommended configuration is:

Configure a Classic VPN gateway to connect to the on-premises gateway using static routing with a policy-based tunnel with local and remote traffic selectors matching the office VPN but reversed.

Explanation:
Classic VPN Gateway: Since the on-premises VPN gateway only supports IKEv1 and does not support BGP, you should use a Classic VPN gateway. This type of gateway supports IKEv1 and can be used with static routing.

Static Routing with Policy-Based Tunnel: Given the constraints of the old VPN gateway, which doesn’t support BGP or local and remote traffic selectors of 0.0.0.0/0, you need to use static routing. A policy-based tunnel configuration will match the old gateway’s requirements and allow for specific traffic selectors.

Local and Remote Traffic Selectors: Since the old VPN gateway doesn’t support configurable traffic selectors of 0.0.0.0/0, using policy-based routing with specific traffic selectors (matching and reversed) is appropriate. This approach enables defining exactly which traffic is allowed through the tunnel.

This configuration ensures compatibility with the old VPN gateway’s limitations and supports the connectivity requirements.
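A hypothetical gcloud sketch of the tunnel creation (gateway name, peer address, secret, and the cloud-side range are illustrative; the traffic selectors mirror the office VPN’s, reversed):

```shell
# Classic VPN, IKEv1, policy-based tunnel with explicit traffic selectors.
gcloud compute vpn-tunnels create office-tunnel \
    --region=us-central1 \
    --target-vpn-gateway=classic-gw \
    --peer-address=203.0.113.10 \
    --ike-version=1 \
    --shared-secret=EXAMPLE_SECRET \
    --local-traffic-selector=10.0.1.0/24 \
    --remote-traffic-selector=192.168.1.0/24
```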

74
Q

Cymbal Bank would like to achieve 99.99% availability for their Dedicated Interconnect link from an on-premises network to their VPC. Select the configuration that will achieve this.

2 Cloud Routers in 2 distinct regions, with the VPC in global dynamic routing mode

1 Cloud Router in one region with the VPC in regional dynamic routing mode

2 Cloud Routers in 2 distinct regions, with the VPC in regional dynamic routing mode

2 Cloud Routers in one region, with the VPC in global dynamic routing mode

A

To achieve 99.99% availability for a Dedicated Interconnect link from an on-premises network to a VPC, the recommended configuration is:

2 Cloud Routers in 2 distinct regions, with the VPC in global dynamic routing mode

Explanation:
2 Cloud Routers in 2 Distinct Regions: Deploying Cloud Routers in separate regions ensures that if one region experiences an issue, the other can still handle the routing, contributing to high availability.

VPC in Global Dynamic Routing Mode: Global dynamic routing mode allows the VPC to use Cloud Router and Interconnect connections across multiple regions, enhancing the resilience and availability of the interconnect link.

This configuration leverages redundancy and regional distribution to maintain high availability and meet the 99.99% SLA for Dedicated Interconnect.

75
Q

You are setting up a Dedicated Interconnect connection and need to provide the highest capacity possible. Select the circuit configuration that achieves this.

2 100 Gbps circuits

8 10 Gbps circuits

1 200 Gbps circuit

8 50 Gbps circuits

A

To provide the highest capacity possible for a Dedicated Interconnect connection, the optimal circuit configuration is:

2 100 Gbps circuits

Explanation:
2 100 Gbps Circuits: This configuration provides a total of 200 Gbps of capacity and, because there are two circuits, continued (reduced) service if one circuit fails.
The other configurations offer less capacity or are not available: 8 10 Gbps circuits total only 80 Gbps, and Dedicated Interconnect circuits are offered only in 10 Gbps and 100 Gbps sizes, so a single 200 Gbps circuit and 50 Gbps circuits do not exist.
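The comparison is simple arithmetic; a sketch covering the two configurations built from circuit sizes that Dedicated Interconnect actually offers (10 Gbps and 100 Gbps):

```python
# Total capacity of the valid circuit configurations, in Gbps.
configs = {
    "2 x 100 Gbps": 2 * 100,
    "8 x 10 Gbps": 8 * 10,
}
best = max(configs, key=configs.get)
print(best, configs[best], "Gbps")  # 2 x 100 Gbps at 200 Gbps
```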

76
Q

Cymbal Bank wants to achieve 99.9% availability with Dedicated Interconnect. You want to support 100 Gbps of throughput, even if a single interconnect connection were to fail. Select the simplest and least expensive configuration that can meet these requirements.

2 100 Gbps connections in separate edge availability zones of the co-location facility, 4 50 Gbps VLAN attachments

2 100 Gbps connections in separate edge availability zones of the co-location facility, 2 100 Gbps VLAN attachments

2 50 Gbps connections in separate edge availability zones of the co-location facility, 4 25 Gbps VLAN attachments

1 200 Gbps connection in a single edge availability zone of the co-location facility, 4 50 Gbps VLAN attachments

A

To achieve 99.9% availability with Dedicated Interconnect and support 100 Gbps of throughput even if a single interconnect connection fails, the simplest and least expensive configuration is:

2 100 Gbps connections in separate edge availability zones of the co-location facility, 4 50 Gbps VLAN attachments

Explanation:
Two connections in separate edge availability zones is the standard 99.9% topology. To keep 100 Gbps of throughput after losing either connection, each connection must be able to carry the full 100 Gbps on its own, which requires 100 Gbps connections. A single VLAN attachment supports at most 50 Gbps, so each connection needs two 50 Gbps attachments, for a total of 4.

Why Not the Other Options?
2 100 Gbps VLAN attachments: A VLAN attachment cannot exceed 50 Gbps, so 100 Gbps attachments are not available.

2 50 Gbps connections with 4 25 Gbps attachments: If one connection fails, only 50 Gbps remains, which does not meet the 100 Gbps requirement.

1 200 Gbps connection in a single edge availability zone: A single connection in one zone cannot meet the 99.9% availability requirement, and 200 Gbps circuits are not offered.

77
Q


You have an HA VPN gateway with 2 interfaces in active/active mode. You would like to reconfigure them to active/passive mode. Select the simplest configuration change that will satisfy this requirement.

Remove the BGP session for one of the HA VPN tunnels.

Disable the BGP session for one of the HA VPN tunnels.

Update the base advertised route priorities for both of the HA VPN tunnels’ BGP sessions.

Update the base advertised route priority for one of the HA VPN tunnel’s BGP sessions.

A

To reconfigure an HA VPN gateway from active/active mode to active/passive mode, the simplest configuration change is:

Update the base advertised route priority for one of the HA VPN tunnel’s BGP sessions.

Explanation:
In active/passive mode, one tunnel is designated as the active tunnel (handling all traffic), while the other is passive (acting as a backup). By updating the base advertised route priority, you can influence which tunnel is preferred for handling traffic. Here’s how:

Update the Base Advertised Route Priority: Raise the numerical priority value (making its routes less preferred) on the passive tunnel’s BGP session, leaving the active tunnel’s session untouched. Because lower values are preferred, all traffic uses the active tunnel, and the passive tunnel carries traffic only if the active tunnel fails.
The other options either do not directly address the configuration change needed or may require additional steps:

Remove or Disable the BGP Session for One of the HA VPN Tunnels: This would effectively take one tunnel out of service but does not configure the remaining tunnel as the active one in an active/passive setup.

Update Base Advertised Route Priorities for Both Tunnels: This achieves the same preference ordering, but it changes two BGP sessions when updating one is sufficient, so it is not the simplest option.
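A hypothetical gcloud sketch of the single change (router, peer, and region names are illustrative; the default priority is 100, so 200 demotes this tunnel):

```shell
# Demote one tunnel's BGP session by advertising its routes with a
# higher (less preferred) base priority, making the other tunnel active.
gcloud compute routers update-bgp-peer ha-vpn-router \
    --region=us-central1 \
    --peer-name=tunnel-1-peer \
    --advertised-route-priority=200
```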

78
Q

You are using the gcloud tool to create a Classic VPN with static routing and a route-based tunnel. The on-premises resources are all in the 192.168.1.0/24 range. You have issued commands to create the VPN gateway, IP addresses, forwarding rules, and the VPN tunnel. Select the correct final resource that must be created.

A route with destination 0.0.0.0/0 and next hop set to the VPN gateway

A Cloud Router with default route advertisements

A route with destination 192.168.1.0/24 and next hop set to the VPN gateway

A Cloud Router with a custom route advertisements including the range 192.168.1.0/24

A

To complete the setup of a Classic VPN with static routing and a route-based tunnel using the gcloud tool, you need to create the following resource:

A route with destination 192.168.1.0/24 and next hop set to the VPN gateway

Explanation:
Static Routing: Since you are using static routing with your Classic VPN, you need to explicitly define routes to ensure traffic is directed through the VPN tunnel.

Destination Route: For traffic to reach the on-premises network (192.168.1.0/24) via the VPN tunnel, you need to configure a route in your Google Cloud VPC.

Next Hop: The next hop for this route should be the VPN gateway. This ensures that traffic destined for the 192.168.1.0/24 network is forwarded to the VPN gateway, which then routes it through the VPN tunnel.

Other options mentioned are not necessary for static routing with Classic VPN:

Route with destination 0.0.0.0/0: This would be used for directing all internet-bound traffic through the VPN, which is not required for your specific on-premises range.

Cloud Router with default or custom route advertisements: Cloud Routers and route advertisements are relevant for dynamic routing with BGP but are not needed for static routing.

Thus, the correct resource to create is a route with the destination 192.168.1.0/24 and the next hop set to the VPN gateway.
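A hypothetical sketch of the final command (network, tunnel, and region names are illustrative). Note that in gcloud the next hop of such a static route is expressed as the VPN tunnel, which belongs to the VPN gateway:

```shell
# Final resource: static route sending on-premises-bound traffic
# into the Classic VPN tunnel.
gcloud compute routes create route-to-onprem \
    --network=my-vpc \
    --destination-range=192.168.1.0/24 \
    --next-hop-vpn-tunnel=office-tunnel \
    --next-hop-vpn-tunnel-region=us-central1
```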

79
Q
A
80
Q
A
81
Q
A
82
Q
A
83
Q

You are using VPC flow logs to analyze traffic arriving at a subnet. You need to capture approximately 10% of the traffic and determine how much traffic originates from outside the subnet. The VPC flow logs have already been enabled for the subnet. You want to use the least expensive process. How should you configure the VPC flow logs?

Configure them with a sampling rate of 1.0 and a filter expression for the connection source and destination IP within the IP range of the subnet.

Configure them with a sampling rate of 0.1 and a filter expression for the connection source and destination IP within the IP range of the subnet.

Configure them with a sampling rate of 0.1 and a filter expression for the connection destination IP within the IP range of the subnet.

Configure them with a sampling rate of 1.0 and a filter expression for the connection destination IP within the IP range of the subnet.

A
Configure them with a sampling rate of 0.1 and a filter expression for the connection destination IP within the IP range of the subnet.

Explanation:
  1. Sampling Rate
    Set the sampling rate to 0.1 so that roughly 10% of flows are sampled. VPC Flow Logs are billed by the volume of logs generated, so sampling at 0.1 instead of 1.0 is the least expensive way to capture approximately 10% of the traffic.
  2. Filter Expression
    Filter on the connection destination IP being within the subnet’s IP range. This captures all traffic arriving at the subnet regardless of where it came from, so you can compare flows whose source IP is inside the range with flows whose source is outside it to determine how much traffic originates from outside the subnet.

Summary
Filtering on both the source and destination IP being within the subnet’s range would capture only internal traffic and discard exactly the externally originated flows you need to measure, and a sampling rate of 1.0 would capture all traffic at roughly ten times the logging cost.
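Since flow logs are already enabled on the subnet, only the sampling and filter settings need updating. A hypothetical gcloud sketch (subnet name, region, and the 10.10.0.0/24 range are illustrative):

```shell
# Sample 10% of flows and keep only traffic arriving at the subnet's range.
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --logging-flow-sampling=0.1 \
    --logging-filter-expr="inIpRange(connection.dest_ip, '10.10.0.0/24')"
```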

84
Q

Cymbal Bank needs to do an analysis to verify which users and groups have been given the Network Admin role for a particular VPC network. You want to use the simplest process. What should you do?

Use the Policy Analyzer with scope set to Organization, and resource set to the VPC, and role set to Network Admin.

Use the Policy Simulator to simulate providing the Network Admin role to each user and group and review the results to determine which identities would have access changes.

Use the Policy Troubleshooter to test each user and group against the VPC and each of the permissions in the Network Admin role.

Use the Policy Analyzer with scope set to Organization, resource set to the VPC, role set to Network Admin, and identity set to all users and groups.

A

To verify which users and groups have been granted the Network Admin role for a particular VPC network in the simplest manner, you should:

Use the Policy Analyzer with scope set to Organization, and resource set to the VPC, and role set to Network Admin.

Explanation:
Policy Analyzer: This tool directly answers “who has what access to which resource” questions. A query needs only the parameters you want to constrain: set the scope to the organization, the resource to the VPC network, and the role to Network Admin. Leaving the identity unset means the results list every user and group that holds the role on that VPC, which is exactly what this analysis requires.

Why Not the Other Options?
Identity Set to All Users and Groups: Explicitly enumerating every identity is unnecessary; a query without an identity already returns all matching identities, so this option adds work without adding information.

Policy Simulator: Simulates the effect of proposed policy changes before they are applied; it does not list current role assignments.

Policy Troubleshooter: Explains why a single identity does or does not have a specific permission on a resource; testing each user and group one at a time would be far more involved than a single Policy Analyzer query.

Using the Policy Analyzer with scope, resource, and role set provides a direct and straightforward way to determine who has the Network Admin role for the VPC, making it the simplest process for this task.

85
Q

Cymbal Bank needs to log all cache hits and misses for their static assets served from Cloud CDN via an HTTP(S) Load balancer backend bucket. What should you do?

Enable logging on the backend bucket.

Configure the logging sample rate on the backend bucket to 1.0.

Use the default behavior, no configuration required.

Enable logging on the backend bucket and configure logging sample rate to 1.0.

A

Use the default behavior, no configuration required.

Explanation:
For backend buckets, Cloud CDN request logging is enabled by default with a sampling rate of 1.0, so every request, including cache hits and misses, is already logged to Cloud Logging. No additional configuration is needed.

86
Q

You are designing a monitoring alert to notify you when a Cloud VPN tunnel approaches the limits for bandwidth. Select the metrics that would be important to include in the alerting policies.

vpn.googleapis.com/network/sent_bytes_count, vpn.googleapis.com/network/received_bytes_count

vpn.googleapis.com/network/dropped_received_packets_count, vpn.googleapis.com/network/dropped_sent_packets_count

vpn.googleapis.com/network/sent_bytes_count, vpn.googleapis.com/network/received_bytes_count, vpn.googleapis.com/network/sent_packets_count, vpn.googleapis.com/network/received_packets_count

vpn.googleapis.com/network/sent_packets_count, vpn.googleapis.com/network/received_packets_count, vpn.googleapis.com/network/dropped_received_packets_count, vpn.googleapis.com/network/dropped_sent_packets_count

A

For designing a monitoring alert to notify you when a Cloud VPN tunnel approaches the limits for bandwidth, you should include the following metrics in your alerting policies:

vpn.googleapis.com/network/sent_bytes_count, vpn.googleapis.com/network/received_bytes_count, vpn.googleapis.com/network/sent_packets_count, vpn.googleapis.com/network/received_packets_count

Explanation:
vpn.googleapis.com/network/sent_bytes_count: This metric tracks the total number of bytes sent through the VPN tunnel, which is crucial for monitoring bandwidth usage.

vpn.googleapis.com/network/received_bytes_count: This metric tracks the total number of bytes received through the VPN tunnel, also important for understanding the bandwidth usage.

vpn.googleapis.com/network/sent_packets_count: This metric measures the number of packets sent, providing additional context to the bandwidth usage.

vpn.googleapis.com/network/received_packets_count: This metric measures the number of packets received, which complements the byte count metrics to give a fuller picture of the tunnel’s performance.

Why Not the Other Options?
vpn.googleapis.com/network/sent_bytes_count, vpn.googleapis.com/network/received_bytes_count: These metrics alone give information on bandwidth usage but do not include packet counts, which can be useful for understanding traffic patterns and detecting issues.

vpn.googleapis.com/network/dropped_received_packets_count, vpn.googleapis.com/network/dropped_sent_packets_count: While useful for diagnosing network issues, these metrics measure packet loss rather than bandwidth usage.

87
Q

You are debugging a Layer 2 Partner Interconnect connection that is indicating a failure to create a BGP session in the Cloud Router for the associated VLAN attachments. Select the most likely cause to investigate when troubleshooting this issue.

Check the route configuration of the VPC the Cloud Router is in.

Check the route advertisement configuration of the Cloud Router.

Check the BGP keepalive timer configuration of the Cloud Router.

Check the ASN configuration of the on-premises router and the Cloud Router.

A

When troubleshooting a Layer 2 Partner Interconnect connection with a failure to create a BGP session in the Cloud Router, the most likely cause to investigate is:

Check the ASN configuration of the on-premises router and the Cloud Router.

Explanation:
ASN Configuration: BGP (Border Gateway Protocol) requires matching ASN (Autonomous System Number) configurations on both ends of the connection. If there is a mismatch in ASN between the Cloud Router and the on-premises router, the BGP session cannot be established. Ensuring that the ASNs are correctly configured and match on both sides is crucial for successful BGP session establishment.
Why Not the Other Options?
Route Configuration of the VPC: While important, route configuration issues typically affect the traffic flow after the BGP session is established, rather than preventing the BGP session from being created.

Route Advertisement Configuration of the Cloud Router: This would affect the routes advertised once the BGP session is established, but it doesn’t directly impact the initial creation of the BGP session.

BGP Keepalive Timer Configuration: This affects the maintenance of the BGP session once it is established, not the initial setup. A misconfigured keepalive timer would lead to issues in maintaining the session rather than its creation.

Therefore, the most pertinent issue to check first when troubleshooting BGP session creation problems is the ASN configuration on both the Cloud Router and the on-premises router.

88
Q

You are trying to debug a connectivity issue between VMs in the same VPC using internal IP addresses. The issue began immediately after configuring routes and firewall rules. What should you do to troubleshoot the problem?

Review the packet loss statistics in the Network intelligence performance dashboard.

Remove static routes one by one in all combinations to determine the problem.

Create and run a Network intelligence connectivity test to determine the problem.

Disable Firewall rules one by one in all combinations to determine the problem.

A

Create and run a Network intelligence connectivity test to determine the problem: This is the most effective first step. Because the issue appeared immediately after routes and firewall rules were changed, a Connectivity Test analyzes the configured path between the two VMs and reports exactly which route or firewall rule is blocking the traffic, without modifying the live configuration.

Why Not the Other Options?
Disable Firewall rules one by one in all combinations: This could eventually reveal a blocking rule, but it is slow, error-prone, and temporarily weakens security.

Remove static routes one by one in all combinations: Equally slow, and removing routes can disrupt other traffic that depends on them.

Review the packet loss statistics in the Network intelligence performance dashboard: Useful for observing loss trends across the network, but it does not pinpoint which specific route or firewall rule is causing the problem.

89
Q
A