GCP Network Deep Flashcards

1
Q

What is a network endpoint group (NEG)?

A

Network endpoint groups (NEGs) are zonal resources that represent collections of IP address and port combinations for GCP resources within a single subnet. Each IP address and port combination is called a network endpoint.

2
Q

What is a Secondary subnet range?

A

A secondary range is an additional CIDR range you assign to a subnet, used for alias IP ranges (for example, GKE Pod and Service IPs).

3
Q

Why would you use tags over service accounts for firewall rules?

A

Tags can be added or removed without restarting the VM, and a VM can have multiple tags.

4
Q

Why would you use service accounts over network tags for firewall rules?

A

Anyone with edit access to a VM can set any tag, so tags are weak as a security control. Service accounts are IAM-governed resources, so firewall rules based on them reflect a verified identity.

5
Q

What are the IP address ranges you need to assign when you build a GKE cluster?

A

Node subnet
Services secondary range
Pods secondary range
Master IP range (for private clusters)
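As a sketch with hypothetical, illustrative CIDR values (not recommendations), these ranges must not overlap one another, which the stdlib `ipaddress` module can check:

```python
import ipaddress
from itertools import combinations

# Hypothetical ranges for a private GKE cluster -- illustrative values only.
ranges = {
    "node subnet": ipaddress.ip_network("10.0.0.0/22"),
    "pods secondary range": ipaddress.ip_network("10.4.0.0/14"),
    "services secondary range": ipaddress.ip_network("10.8.0.0/20"),
    "master range": ipaddress.ip_network("172.16.0.0/28"),  # /28 for private clusters
}

def disjoint(nets):
    """True if no two of the ranges overlap."""
    return not any(a.overlaps(b) for a, b in combinations(nets.values(), 2))
```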

6
Q

What is an Ingress controller?

A

It is a GKE controller that creates and manages an HTTP(S) load balancer on GCP from an Ingress resource. The backends can be NEGs.

7
Q

What are the traits of a cloud network engineer?

A

At least one year of experience, uses the gcloud CLI, uses infrastructure as code (IaC), and works with architects on the network aspects of a design.

8
Q

What is Cymbal Bank's existing infrastructure?

A
9
Q

What is the infrastructure going to look like?

A

4 shared VPCs
4 projects: dev, test, stage, and prod
Each VPC has six subnets across primary and secondary regions.

10
Q

How do you connect the NCC hub with other parts of your organization?

A

VPC spokes: each VPC is a separate spoke
Router appliance spokes
Cloud VPN and Cloud Interconnect (VLAN attachment) spokes

12
Q

What’s a VPC Spoke?

A

VPC spokes let you connect two or more VPC networks to a hub so that the networks exchange IPv4 subnet routes. VPC spokes attached to a single hub can reference VPC networks in the same project or in different projects.

13
Q

What are the 3 tiers in an SDN?

A

Application Layer
Control Layer
Infrastructure Layer

14
Q

What are the Google Cloud networking service categories?

A

Connect
Secure
Scale
Optimize
Modernize

15
Q

What are the different network tiers for GCP?

A

Premium
Standard

16
Q

What are the different connectivity options to connect VPCs to one another or another site?

A

Cloud Interconnect (Dedicated or Partner), which uses colocation
Cloud VPN
VPC Network Peering
Network Connectivity Center hub-and-spoke model

17
Q

What is available with Cloud DNS?

A

Public DNS Zones
Global DNS
Private DNS Zones
Split Horizon DNS
DNS Peering
Security

18
Q

How do you split up your VPCs?

A

Per environment or Per team
But, fewer VPCs are easier to manage and provide better resource utilization

19
Q

What are the two different modes to create subnets?

A

Auto: one subnet per region is created automatically, with default routes and optional pre-populated firewall rules.
Custom: you control which subnets are created and how they are configured.
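A rough illustration of what auto mode does: it carves one /20 per region out of 10.128.0.0/9. The sequential carving below is a simplification; the actual per-region assignments Google uses are fixed and not simply sequential.

```python
import ipaddress

# Auto mode pre-allocates one /20 subnet per region from 10.128.0.0/9.
# Sequential allocation here is illustrative only.
auto_block = ipaddress.ip_network("10.128.0.0/9")
carver = auto_block.subnets(new_prefix=20)

regions = ["us-central1", "europe-west1", "asia-east1"]  # sample regions
regional_subnets = {region: next(carver) for region in regions}
```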

20
Q

What is the difference between primary and secondary subnet CIDR ranges?

A

Primary CIDR range: the core IP range of a subnet, used to assign addresses to VM network interfaces and other resources.
Secondary CIDR range: additional IP ranges associated with a subnet, used for alias IP ranges (for example, letting a VM hold multiple addresses for containers, as GKE does for Pods and Services).
Allocation: secondary ranges must not overlap the primary range or any other range in the network.
Format: also specified in CIDR notation (e.g., 10.2.0.0/16).
Key difference: the primary range provides core addressing for VMs; secondary ranges support additional features like alias IPs.
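A small sketch of the idea with hypothetical addresses: the VM's NIC address comes from the primary range, while an alias IP (e.g., for a container) comes from the secondary range.

```python
import ipaddress

primary = ipaddress.ip_network("10.1.0.0/24")    # subnet's primary range
secondary = ipaddress.ip_network("10.2.0.0/16")  # secondary (alias) range

vm_nic_ip = ipaddress.ip_address("10.1.0.5")     # assigned from primary
vm_alias_ip = ipaddress.ip_address("10.2.3.4")   # alias IP from secondary

assert vm_nic_ip in primary
assert vm_alias_ip in secondary
assert not primary.overlaps(secondary)  # the two ranges must be distinct
```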

21
Q

What are the two ways to determine how many subnets are required?

A

One subnet per application
Create a few large subnets
Recommendation: use large subnets for simplicity
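When sizing subnets it helps to remember that GCP reserves four addresses in every primary IPv4 range (network, default gateway, second-to-last, and broadcast). A quick helper, as a sketch:

```python
import ipaddress

def usable_ips(cidr: str) -> int:
    """Usable addresses in a GCP primary IPv4 subnet range.

    GCP reserves 4 addresses: network, default gateway,
    second-to-last, and broadcast.
    """
    return ipaddress.ip_network(cidr).num_addresses - 4
```

For example, `usable_ips("10.0.0.0/24")` gives 252.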

22
Q

What traffic do VPC firewall rules control?

A

Traffic leaving a VM (egress) and traffic entering a VM (ingress).
Every VPC network also has two implied rules: deny all ingress and allow all egress.

23
Q

What are the differences for VPC firewall rules vs firewall policies?

A

Management Level:

Firewall Rules: Individual and directly applied to VPC networks.
Firewall Policies: Higher-level management tool for organizing and applying rules across multiple networks.
Flexibility:

Firewall Rules: Good for simpler setups where rules are managed individually.
Firewall Policies: Better for complex environments where centralized management of rules is beneficial.
Use Cases:

Firewall Rules: Suitable for straightforward, single-network environments or specific use cases within a single VPC.
Firewall Policies: Ideal for larger organizations or projects needing centralized control over multiple networks and a consistent security posture.
In summary, while firewall rules provide the granular control needed for specific network traffic management, firewall policies offer a way to efficiently manage and apply these rules across multiple networks and projects, facilitating better organization and consistency in complex environments.

24
Q

what are the parts of a firewall policy

A

Priority
Direction
Action
Source/destination filters
Target type
Protocols and ports
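A minimal sketch of how those parts combine at evaluation time, with hypothetical rules: among matching rules the lowest priority number wins, and GCP's implied rules (deny all ingress, allow all egress) effectively sit at the lowest precedence.

```python
# Hypothetical firewall rules; port=None means "any port".
rules = [
    {"priority": 1000, "direction": "INGRESS", "action": "allow", "port": 443},
    {"priority": 900,  "direction": "INGRESS", "action": "deny",  "port": 22},
    {"priority": 1000, "direction": "INGRESS", "action": "allow", "port": 22},
    {"priority": 65535, "direction": "INGRESS", "action": "deny", "port": None},
]

def evaluate(direction: str, port: int) -> str:
    """Return the action of the matching rule with the lowest priority number."""
    matching = [r for r in rules
                if r["direction"] == direction and r["port"] in (port, None)]
    return min(matching, key=lambda r: r["priority"])["action"]
```

Here SSH (port 22) is denied because the deny rule at priority 900 beats the allow rule at 1000.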

25
Q

How are firewall policies hierarchical?

A

Hierarchical firewall policies apply at the organization and folder levels; network firewall policies apply at the VPC level and can be global or regional. Evaluation order: org, then folder, then network.

26
Q

What are the different parts of the shared VPC Network?

A

A Shared VPC (Virtual Private Cloud) network allows multiple Google Cloud projects to share a common VPC network, enabling better management, security, and resource allocation. The Shared VPC architecture is typically composed of several key parts:

Host Project: The host project owns the shared VPC network and its associated subnets, routes, and firewall rules. It is the central management point for the shared VPC.

Service Projects: Service projects are separate projects that use the shared VPC network provided by the host project. Resources like Compute Engine instances, GKE clusters, and App Engine in these service projects can connect to the shared VPC network.

Shared VPC Network: The VPC network that is shared among the host and service projects. It includes subnets, IP address ranges, and routes that are accessible across the participating projects.

Subnets: Subnetworks are specific IP address ranges within the shared VPC network, defined by the host project. These subnets can be accessed by resources in the host and service projects based on permissions.

Firewall Rules: Firewall rules control traffic to and from instances within the shared VPC network. These rules are defined at the VPC network level and apply to all instances across the host and service projects.

Routes: Routes are used to define how traffic is directed within the shared VPC network. This includes default routes, custom routes, and any peering or VPN-related routes.

Service Accounts and IAM Roles: Proper Identity and Access Management (IAM) configuration is crucial. Host and service projects use service accounts and IAM roles to manage permissions, defining which users or services can access and manage VPC resources.

DNS: Cloud DNS configurations can be shared across the shared VPC network, allowing consistent internal DNS resolution for resources across the host and service projects.

Network Peering: Allows the shared VPC network to connect with other VPC networks, both within the same organization or across different organizations, for broader network integration.

Network Connectivity Options: Includes VPN, Cloud Interconnect, or VPC peering configurations that extend the connectivity of the shared VPC network to on-premises networks or other cloud environments.

27
Q

Multiple NICs in a VM require what?

A

Each NIC must be in a different network.
Network interfaces must be configured when you create the VM; you can't add them later.
The networks must exist before you create the VM.
The networks' address ranges can't overlap.
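The constraints above can be pre-checked before creating the VM; a sketch with hypothetical networks and ranges:

```python
import ipaddress
from itertools import combinations

# Hypothetical NIC plan: (nic name, network, subnet range).
nics = [
    ("nic0", "vpc-a", "10.0.0.0/24"),
    ("nic1", "vpc-b", "10.1.0.0/24"),
]

def valid_nic_plan(nics):
    """Each NIC on a different network, and no overlapping ranges."""
    networks = [net for _, net, _ in nics]
    if len(set(networks)) != len(networks):
        return False  # two NICs share a network
    cidrs = [ipaddress.ip_network(c) for _, _, c in nics]
    return not any(a.overlaps(b) for a, b in combinations(cidrs, 2))
```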

28
Q

You want to lower cloud networking cost and have no problem leveraging the public internet for cross-region traffic. Which network service tier is best for you?

A

Standard Tier

28
Q

What decisions should you weigh before choosing the Standard or Premium network tier?

A

Standard is lower cost.
Consider where you deploy backends and whether users are in multiple regions.
Standard routes traffic over the public internet.
Standard has no Cloud CDN or global load balancing.

29
Q

You are designing a virtual machine in the cloud to act as a network gateway between an external public network and a private internal network. To ensure strong security and traffic separation, what technology can you implement?

A

Multiple NICs.
Multiple NICs attached to separate VPC networks achieve the strongest traffic isolation and control for the gateway scenario.

30
Q

You want to improve network performance. You are not comfortable using the public internet to route traffic. Which service tier is the best fit?

A

Premium Tier

31
Q

What are the limitations with VPC Peering?

A

Peering is not transitive: if A peers with B and B peers with C, A cannot reach C without a direct peering. (Peering can span multiple projects, but that is a capability, not a limitation.)
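Non-transitivity can be sketched in a few lines: reachability exists only for explicitly peered pairs (hypothetical network names).

```python
# Peering relationships are pairwise; there is no transit through a middle VPC.
peerings = {("vpc-a", "vpc-b"), ("vpc-b", "vpc-c")}

def can_reach(src: str, dst: str) -> bool:
    """Directly peered networks only -- peering is not transitive."""
    return (src, dst) in peerings or (dst, src) in peerings
```

`can_reach("vpc-a", "vpc-c")` is False even though both peer with vpc-b; an explicit a-to-c peering would be needed.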

32
Q

what permissions are required to administer Shared VPC?

A

The Shared VPC Admin role (roles/compute.xpnAdmin), usually granted at the organization or folder level, to enable host projects and attach service projects. Service project admins need the Network User role (roles/compute.networkUser) on the host project or on specific subnets.

33
Q

How are the subnets arranged in a shared VPC?

A
34
Q

In what situations should you use VPC Network Peering or a Shared VPC?

A
35
Q

What are the advantages of network peering?

A

Network latency: Public IP networking results in higher latency than private networking.

Network security: Service owners do not need to have their services exposed to the public internet and deal with its associated risks.
Network cost: Google Cloud charges egress bandwidth pricing for networks using external IPs to communicate, even if the traffic is within the same zone. If, however, the networks are peered, they can use internal IPs to communicate and save on those egress costs. Regular network pricing still applies to all traffic.

36
Q

Can you migrate a VM to a new network?

A

Yes, but the VM must not be part of a managed instance group (MIG).

37
Q

What are the rules for subnet IP address ranges?

A

Cannot overlap with other subnets.
IP range must be a unique valid CIDR block.
New subnet IP ranges have to fall within valid IP ranges.
Can expand but not shrink.
Auto mode can be expanded from /20 to /16
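The expand-but-not-shrink rule can be illustrated with the stdlib `ipaddress` module (hypothetical values):

```python
import ipaddress

original = ipaddress.ip_network("10.128.0.0/20")  # e.g., an auto-mode subnet
expanded = original.supernet(new_prefix=16)       # expand /20 -> /16

# Expansion preserves every existing address; shrinking would not,
# which is why GCP allows only expansion.
assert original.subnet_of(expanded)
```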

38
Q

What are the different route types in GCP?

A
System-generated routes
Custom routes (static and dynamic)
VPC Network Peering routes
NCC routes
Policy-based routes
39
Q

What is a system-generated default route?

A

When you create a VPC it includes a 0.0.0.0/0 default route.
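Route selection over such a table follows longest-prefix match, with the 0.0.0.0/0 default route catching everything else. A sketch with hypothetical routes (real GCP also breaks ties by route priority):

```python
import ipaddress

# Hypothetical routing table: (destination, next hop).
routes = [
    (ipaddress.ip_network("10.0.0.0/24"), "subnet route"),
    (ipaddress.ip_network("10.1.0.0/16"), "peering route"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-internet-gateway"),
]

def select_route(dest: str) -> str:
    """Longest-prefix match; the default route is the fallback."""
    ip = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in routes if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```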

40
Q

What are some cons of static routes?

A
41
Q

What are the rules with dynamic routes?

A
42
Q

When are routes created?

A

A default route is created when a network is created, which enables traffic delivery to destinations outside the network.
A subnet route is created when a subnet is created; this is what allows VMs on the same network to communicate.

43
Q

If you don’t have custom static routes that meet the routing requirements for Private Google Access, deleting the default route

A

might disable Private Google Access.

44
Q

To set up hybrid deployments for DNS resolution, which type of DNS policy should you use?

A

A DNS Server Policy allows you to configure inbound DNS forwarding from an on-premises environment to a GCP Virtual Private Cloud (VPC) or outbound DNS forwarding from a GCP VPC to on-premises or external DNS servers. This is crucial for hybrid cloud environments where DNS resolution needs to happen across on-premises and cloud networks.

Why Use a DNS Server Policy for Hybrid Deployments?
Inbound Forwarding: Allows on-premises resources to resolve DNS names for resources hosted in GCP.
Outbound Forwarding: Allows GCP resources to resolve DNS names for on-premises resources or external services not hosted in GCP.
How to Set Up a DNS Server Policy for Hybrid Deployments
Create a Cloud DNS Managed Zone: Define a managed zone in GCP with DNS records for your resources.

Configure a DNS Server Policy:

Use inbound forwarding to forward DNS queries from on-premises resources to GCP’s Cloud DNS.
Use outbound forwarding to forward DNS queries from GCP to on-premises DNS servers.
Establish a VPN or Interconnect: Set up a VPN or a dedicated interconnect between GCP and the on-premises network to facilitate DNS query forwarding.

Apply the DNS Server Policy: Attach the DNS server policy to the appropriate network or subnet in GCP to control DNS query flow.

Test the DNS Resolution: Verify that both GCP and on-premises resources can correctly resolve DNS queries across the hybrid environment.

45
Q

You must create a VM that has an IPv6 address. How do you do it?

A

Create the VM in a dual-stack subnet. Dual-stack subnets support both IPv4 and IPv6, allowing you to create VMs with both types of addresses.

46
Q

What are the different ways that Google provides private access or private connectivity?

A

Private Google Access for on-premises hosts lets your on-premises hosts connect to Google APIs and services through the default internet gateway of the VPC network. Your on-premises hosts don't need external IP addresses; instead, they use internal IP addresses.
Private Service Connect lets you connect to a Google or third-party managed VPC network through a service attachment. As with Private Google Access, the connection is internal.
Serverless VPC Access connects serverless products to your VPC network to access Google, third-party, or your own services with internal IP addresses. For example, Cloud Run, App Engine standard, and Cloud Functions environments send packets to the internal IPv4 address of the resource.

Private services access is a private connection between your VPC network and a service producer's VPC network, implemented as a VPC Network Peering connection. The service producer network is created exclusively for you and is not shared with other customers.

47
Q

Why use Google Private Access?

A

Private access uses internal IP addresses, so consumers connect to supported APIs and services over an internal connection. Unless a consumer connects to Google Cloud by using an external connection, private access communication does not go through the public internet.
Access is quicker and more secure.
Choose a private access option based on your needs.
All Google Cloud APIs and services support private access.
You can also set up private access to APIs and services that you publish, and access them from Google Cloud, other public clouds, or on-premises.

48
Q

What is Google Private Service Connect

A

Google Cloud Private Service Connect (PSC) is a service in Google Cloud Platform (GCP) that allows you to securely connect and consume services across different networks without exposing those services to the public internet. It provides a way to connect VPC networks within GCP or to services outside of GCP (such as third-party services or on-premises networks) using a private and managed connection.

Key Features of Private Service Connect
Private Connectivity: Services are accessed over private IP addresses within a VPC network, enhancing security by avoiding the public internet.

Service Isolation: Private Service Connect allows for isolation of services across different teams or organizations, ensuring data privacy and minimizing the attack surface.

Simplified Network Configuration: It abstracts complex network setups, making it easier to manage connections without handling VPNs or load balancers.

Service Directory Integration: It integrates with Service Directory, allowing easy service discovery and management.

Custom DNS and Security Controls: Allows you to customize DNS settings and enforce security controls on the communication paths between services.

Types of Private Service Connect
Private Service Connect offers two main use cases:
Private Service Connect for Producer Services:
Allows a service producer to offer their services privately to consumers.

Private Service Connect for Google APIs and Services:
Allows organizations to privately access Google APIs (e.g., Cloud Storage, BigQuery) using private IP addresses from within their VPC.
This ensures that traffic to Google APIs does not leave the Google network and is not exposed to the public internet.

49
Q

How does Private Service Connect work?

A

Service producer network: the network where the service is hosted. The producer creates a Private Service Connect service attachment in this network.

Service consumer network: the network that consumes the service. The consumer creates a private endpoint in their VPC to connect to the producer's service.

Private endpoint: a private IP address in the consumer's VPC that forwards requests to the service in the producer's network. This endpoint allows traffic to remain within the Google network.

50
Q

Private Services Access

A

Private services access in Google Cloud Platform (GCP) is a feature that allows you to create a private connection between your Virtual Private Cloud (VPC) network and a network owned by Google or a third party. This setup lets you access internal IP addresses of Google-managed services like Cloud SQL, Memorystore, or other third-party services directly from your VPC without traversing the public internet. Private services access enhances security and performance by ensuring that data remains within Google's network, reducing exposure to potential threats from the internet and providing lower-latency connections.

51
Q

Briefly explain the difference between these 3 services private google access, private service connect, and private service access.

A

Private Google Access allows resources in a Google Cloud Virtual Private Cloud (VPC) network that do not have external IP addresses to privately access Google APIs and services. Traffic remains within Google’s internal network, enhancing security by avoiding exposure to the public internet.

Private Service Connect creates private endpoints within a VPC to connect to Google, third-party services, or custom services across different networks. It allows you to access these services privately using internal IP addresses, avoiding public exposure and improving security and performance.

Private Services Access establishes a private connection between your VPC network and Google-managed services (such as Cloud SQL and Memorystore) or third-party networks. It enables you to access these services using private IP addresses, ensuring that communication stays within Google’s network and is not exposed to the internet.

52
Q

You want to provide access to services that you created in a VPC network. The services should be available to other specified VPC networks through endpoints that have internal IP addresses. Some of these VPC networks have subnets with overlapping internal IP addresses. Which product can you use?

Cloud NAT

Private services access

Private Google Access

Private Service Connect

A

Private Service Connect allows you to create private endpoints in your VPC network to provide access to services you created. It enables you to share these services with other VPC networks using internal IP addresses, even if those VPC networks have subnets with overlapping internal IP addresses. This is because Private Service Connect uses endpoint-specific IP addresses and routes traffic securely without the need for overlapping IP address management.

Why Not the Other Options?
Cloud NAT: This is used for allowing outbound internet access for instances without external IP addresses; it does not provide private access to services between VPCs.
Private Services Access: This is used to connect your VPC to Google-managed services (like Cloud SQL) via a private connection, not for custom services in your own VPC.
Private Google Access: This enables VMs without external IP addresses to access Google APIs and services privately, not for providing access to services created within a VPC.
Therefore, Private Service Connect is the best choice for the given scenario.

53
Q

Private services access automatically configures which Google Cloud product?

A

Private Services Access automatically configures VPC Network Peering to implement communication between the producer and consumer VPC networks in Google Cloud.

When you use Private Services Access, it sets up a private connection between a consumer VPC network and a producer’s VPC network, allowing the consumer to access services such as Google-managed services (e.g., Cloud SQL, Memorystore) hosted in the producer’s network using internal IP addresses. The underlying VPC Network Peering allows these two VPC networks to communicate privately without exposing traffic to the public internet.

54
Q

How do you enable Private Google Access for a VPC network?

A

Enable it on all desired subnets in the VPC network.
To enable Private Google Access for a VPC network in Google Cloud Platform (GCP), you need to configure specific settings for the subnets within that VPC network. Here’s how you can enable it:

Go to the VPC Network in the Google Cloud Console: Navigate to VPC network > VPC networks.

Select the Subnet: Choose the subnet within the VPC network where you want to enable Private Google Access.

Edit Subnet Settings: Click on the Edit button for the chosen subnet.

Enable Private Google Access: In the Private Google Access section, select On to enable it for the subnet.

Save Changes: Click Save to apply the changes.

After enabling Private Google Access, any virtual machine (VM) instances or other resources without external IP addresses within the subnet will be able to access Google APIs and services privately, without sending traffic over the public internet. This enhances security by keeping traffic within Google’s internal network.

55
Q

Are Routes for Subnets or VPCs?

A

Routes are for VPCs: In GCP, routes are defined at the VPC network level. This means all subnets within a VPC share the same routing table. While subnets define IP ranges within a VPC, routes govern how traffic flows in and out of these subnets across the VPC.
Subnet Routes as Part of VPC Routing: Though each subnet has its own IP range, routes that determine communication between subnets are managed at the VPC level. You do not create routes for individual subnets; instead, you manage routes that impact the entire VPC network and, consequently, all its subnets.

56
Q

Describe Dynamic Routes

A

Managed by Cloud Router, dynamic routes use the Border Gateway Protocol (BGP) to exchange routing information between GCP VPC networks and on-premises networks.
Dynamic routes allow for automatic updates to the routing table based on network conditions, which is especially useful for hybrid cloud setups.
These routes apply to all subnets in a VPC, again emphasizing that routes are defined for the VPC network as a whole rather than individual subnets.
Dynamic routes are used by:
● Dedicated Interconnect
● Partner Interconnect
● HA VPN tunnels
● Classic VPN tunnels that use dynamic routing
● NCC Router appliances

57
Q

Describe Custom Static Routes

A

Users can create custom static routes to control specific traffic flows within or outside the VPC network.
These routes specify a destination IP range and a next hop. The next hop can be an IP address, an instance, a Cloud VPN tunnel, or a Cloud Router.
Custom static routes are also defined at the VPC level, meaning they apply to all subnets within the VPC unless specifically restricted.

58
Q

System-Generated Routes:

A

When a VPC network is created, GCP automatically generates a set of system routes that define traffic behavior within the network.
For example:
Default Route: A default route (0.0.0.0/0) directs traffic destined for external networks to the internet through a gateway (usually for outbound internet traffic).
Subnet Routes: Each subnet within a VPC automatically gets a route that allows instances within that subnet to communicate with instances in other subnets within the same VPC.
These routes are at the VPC level, and all subnets in the VPC share these system routes.

59
Q

IPv6 Limitations

A

IPv6 support in Google Cloud Platform (GCP) has several limitations that users need to be aware of when designing and deploying cloud architectures. Firstly, while GCP supports IPv6 addresses for resources like VM instances, load balancers, and other managed services, this support is often limited to external (public) IPv6 addresses. For most resources within GCP, internal IPv6 addresses are not natively supported, meaning that internal communication between VMs and other resources within a Virtual Private Cloud (VPC) must still rely on IPv4. This limitation can complicate network planning and may require dual-stack configurations (support for both IPv4 and IPv6) to ensure full connectivity across both internal and external networks.

Another limitation is that IPv6 is not supported for all GCP services. For example, while IPv6 is available for use with some types of load balancers (like the global HTTP(S) load balancer), it is not universally supported across all types of load balancers or other network services such as Cloud NAT, which remains IPv4-only. Additionally, certain network features, such as VPC Peering, Shared VPC, and Private Google Access, do not support IPv6. These constraints mean that network architects and engineers need to carefully plan and potentially work around the limitations when implementing IPv6 in GCP, often leading to increased complexity in managing and maintaining network configurations.

60
Q

BYOIP

A

Bring Your Own IP (BYOIP) in Google Cloud Platform (GCP) allows organizations to use their own publicly routable IP address ranges instead of relying on Google’s IP addresses for their resources. This capability is particularly useful for companies that have established their IP ranges with customers, partners, or regulatory bodies and want to maintain consistency when migrating or deploying services in GCP. With BYOIP, you can bring in your pre-owned IP addresses, have them validated by Google, and then configure them for use with services like Google Cloud Load Balancing, Google Cloud Armor, or Cloud CDN. This feature ensures that the organization retains control over its IP reputation, compliance, and accessibility without any disruption due to IP address changes.

GCP’s BYOIP process involves several steps, including verifying ownership of the IP range through Regional Internet Registry (RIR) documentation and routing requirements. Once Google validates the IP address ownership, the IP range is imported into a GCP project and can be configured for use with various Google Cloud resources. This enables consistent IP addressing across on-premises and cloud environments, helping to streamline hybrid cloud deployments and disaster recovery plans. However, the BYOIP service is primarily designed for public IP addresses and is not applicable for private internal IP ranges within a Virtual Private Cloud (VPC). Additionally, BYOIP currently supports IPv4 addresses, while IPv6 support may have limitations depending on specific service requirements.

61
Q

explain the difference between public and private zones

A

Private zones are used to provide a namespace that is visible only inside the VPC or hybrid network environment. For example, an organization would use a private zone for a domain dev.gcp.example.com, which is reachable only from within the company intranet.
Public zones are used to provide authoritative DNS resolution to clients on the public internet. For example, a business would use a public zone for its external website, cymbal.com, which is accessible directly from the internet.

62
Q

Describe routing policies and DNS routing policies in Google Cloud.

A

In Google Cloud Platform (GCP), routing policies determine how traffic flows between resources within a Virtual Private Cloud (VPC) and to external destinations. GCP offers different types of routes: system-generated routes (such as default internet routes and subnet routes) and custom routes (both static and dynamic). Routing policies are managed at the VPC level, meaning all subnets in a VPC share the same routing table. Static routes can be defined by the user to direct traffic to specific destinations through specified next hops, like a Virtual Machine (VM), a VPN tunnel, or an internet gateway. Dynamic routing is achieved using Cloud Router, which dynamically exchanges routes between VPC networks and on-premises networks using the Border Gateway Protocol (BGP), making it ideal for hybrid cloud architectures.

GCP routing policies can be further customized through regional and global dynamic routing modes. With regional dynamic routing, routes learned via BGP apply only to the region where the Cloud Router is located, making it more suitable for geographically segmented network designs. In contrast, global dynamic routing allows routes to propagate across all regions within a VPC, providing more flexibility for global, distributed applications. A
DNS routing policies let you steer your traffic based on specific criteria. Google Cloud supports three types of DNS routing policies: weighted round robin, geolocation, geofencing and failover.
A weighted round robin routing policy lets you specify different weights per DNS target, and Cloud DNS ensures that your traffic is distributed according to the weights.
You can use this policy to support manual active-active or active-passive configurations. You can also split traffic between production and experimental versions of software.
A geolocation routing policy lets you map traffic originating from source geographies (Google Cloud regions) to specific DNS targets. Use this policy to distribute incoming
requests to different service instances based on the traffic’s origin. You can use this feature with the internet, with external traffic, or with traffic originating within Google
Cloud and bound for internal load balancers. Google Cloud uses the region where queries enter Google Cloud as the source geography.
A failover routing policy lets you set up active-backup configurations. This option is only available for private zones.
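The weighted round robin behavior can be illustrated with a small simulation. The target IPs and weights below are made up, and this is not the Cloud DNS API, just a sketch of how weights translate into a traffic split.

```python
import random

# Made-up DNS targets with an ~80/20 weight split.
TARGETS = {"10.0.1.10": 80, "10.0.2.10": 20}

def pick_target(rng: random.Random) -> str:
    """Pick one target per query, proportional to its weight."""
    ips, weights = zip(*TARGETS.items())
    return rng.choices(ips, weights=weights, k=1)[0]

rng = random.Random(0)
picks = [pick_target(rng) for _ in range(10_000)]
share = picks.count("10.0.1.10") / len(picks)
print(f"share of weight-80 target: {share:.2f}")  # close to 0.80
```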

63
Q

Describe the different private access options in Google Cloud.

A

In GCP, there are several private access options designed to securely connect resources without exposing traffic to the public internet. Private Google Access allows VMs in a VPC to access Google APIs and services privately using internal IP addresses without requiring external IPs. Private Service Connect enables you to securely connect to Google services or third-party services via a private endpoint in your VPC, ensuring that traffic remains private and does not traverse the internet. Private Services Access allows you to connect your VPC network to a network owned by Google or a third party, such as Cloud SQL or Managed Services, using private IP addresses. VPC Peering facilitates private connectivity between VPC networks, allowing traffic between them to remain on Google’s backbone network, while Serverless VPC Access allows serverless environments like Cloud Functions and Cloud Run to connect securely to resources within a VPC. Each of these options provides secure, private communication tailored to different scenarios and needs within the GCP environment.

64
Q

Describe the caveats with Private Google Access.

A
65
Q

Describe Private Service Connect.

A

● With Private Google Access, Google APIs and services can be accessed with internal IP addresses.
● With Private Service Connect, third-party resources and intra-organization published services can also be accessed with internal IP addresses.
● You can access resources through a Private Service Connect endpoint or a backend.
● Private Service Connect is fast and scalable.

66
Q

Describe Private Service Access.

A

Private Service Access in Google Cloud Platform (GCP) is a networking feature that enables private connectivity between a Virtual Private Cloud (VPC) network and a network owned by Google or a third party, such as managed services like Cloud SQL, AI Platform, or partner services. By using Private Service Access, resources in your VPC can communicate with Google-managed services over private IP addresses instead of public IPs, ensuring that data stays within Google’s internal network and does not traverse the public internet. This setup involves reserving an IP range from your VPC subnet and configuring a private connection to the service provider’s network. Private Service Access enhances security and compliance by allowing organizations to keep their data flows private while leveraging Google Cloud’s managed services.

67
Q

Which of the following practices is LEAST likely to improve network security in Google Cloud?

Implementing network firewall rules to control traffic.

Enabling VPC flow logs to monitor network traffic.

Regularly reviewing and updating IAM (Identity and Access Management) permissions.

Assigning public IP addresses to all virtual machines in a VPC.

A

Assigning public IP addresses to all virtual machines in a VPC.
Correct. This is the least likely to improve security and could even degrade it. Assigning public IP addresses to all VMs makes them directly accessible from the internet, increasing their exposure to potential attacks.

68
Q

You are designing a new network infrastructure in Google Cloud to support a global e-commerce application. Which two of the following are key considerations you should prioritize in your network design?
info
Note: To get credit for a multiple-select question, you must select all of the correct options and none of the incorrect ones.

A

Alignment with application and business requirements: This is a key consideration. The network design must support the application’s requirements (e.g., scalability, performance, security) while aligning with the organization’s broader goals (e.g., cost efficiency, compliance).
Resilience and high availability: This is a critical consideration for an e-commerce application, as any downtime can lead to significant revenue loss and impact customer satisfaction. The network design should include redundancy, failover mechanisms, and disaster recovery plans to minimize the impact of outages.

69
Q

You are migrating a large ecommerce company’s existing on-premises data center to Google Cloud. The on-premises network consists of geographically dispersed regional offices, each with its own network segment requiring secure isolation. However, central management and communication between all regional offices are critical for business operations. Which network topology would most effectively address these requirements in Google Cloud?

A

Hub-and-spoke
Correct! This topology establishes a central VPC (the “hub”) in Google Cloud, connecting all regional VPCs (“spokes”) securely. This configuration facilitates centralized management, enforces security policies, and provides a cost-effective and manageable solution for migrating the on-premises network while maintaining regional isolation and communication.

70
Q

You are designing a Google Cloud network for a large financial services company with strict security requirements. The network needs to isolate sensitive customer data from other resources and limit communication between specific network segments. Which of the following network topologies would be most suitable for this scenario?

A

Gated ingress and egress
Correct! This topology allows granular control over incoming and outgoing traffic, enabling isolation of sensitive data and restriction of unauthorized communication between segments.

71
Q

Which Google Cloud service provides defense against infrastructure and application Distributed Denial of Service (DDoS) attacks?

A

Google Cloud Armor is specifically designed to protect against DDoS attacks at both the infrastructure and application layers. It offers features like:
Web Application Firewall (WAF) to filter malicious traffic
Rate limiting to control traffic spikes
DDoS attack detection and mitigation
IP whitelisting and blacklisting

72
Q

Which IAM role contains permissions to create, modify, and delete networking resources, except for firewall rules and SSL certificates?

A

The network administrator role grants permission to create, modify, and delete networking resources, except for firewall rules and SSL certificates.

73
Q

Your company is located in a city where Google Cloud does not have a Dedicated Interconnect location, but you need a private connection to your Google Cloud Virtual Private Cloud (VPC). Which Cloud Interconnect option is most suitable for this scenario?

A

Partner Interconnect allows you to connect to Google’s network through a supported service provider’s facilities, even if there isn’t a Dedicated Interconnect location in your city. This gives you a private connection to your VPC.

74
Q

Which Google Cloud Interconnect option requires the customer to provide their own routing equipment and establish a Border Gateway Protocol (BGP) session with Google’s edge network?

A

Dedicated Interconnect requires customers to provide their own routing equipment, establish a direct physical connection to Google’s network at a colocation facility, and configure a BGP session with Google’s edge routers.

75
Q

In Network Connectivity Center, what are the two main types of spokes that can be connected to a hub?

A

VPC spokes and hybrid spokes are the two main types of Network Connectivity Center spokes. Hybrid spokes include Cloud VPN tunnels, VLAN attachments, and Router appliance instances.

76
Q

What is the purpose of a Cloud Router, and why is that important?

A

Cloud Router enables dynamic routing using BGP, allowing Google Cloud VPCs to learn routes from on-premises networks and other cloud environments.

77
Q

Checklist for securely protecting VPC resources in GCP:

A

Configure Firewall Rules: Control inbound and outbound traffic using specific rules and network tags.

Enable Private Google Access: Allow internal network access to Google APIs without using external IPs.

Use VPC Service Controls: Create security perimeters to prevent data exfiltration and unauthorized access.

Isolate Subnets: Organize resources into multiple subnets with tailored security controls and custom routes.

Encrypt Data: Ensure data in transit is encrypted using TLS and utilize internal IPs.

Apply Least Privilege IAM: Assign minimal permissions necessary for users and services.

Use Service Accounts: Manage permissions securely with service accounts rather than default ones.

Implement Peering and VPNs: Securely connect VPCs with VPC peering and use VPNs for on-premises connections.

Monitor Network Traffic: Enable VPC Flow Logs and use Security Command Center for threat detection.

Regular Audits and Updates: Periodically review configurations and apply security patches.

78
Q

what are the different IAM roles for networks?

A

Network viewer: read-only access to all networking resources.

Network administrator: permission to create, modify, and delete networking resources, except for firewall rules and SSL certificates.

Security administrator: permission to create, modify, and delete firewall rules and SSL certificates.

79
Q

How can you apply firewall rules to networks and resources?

A
80
Q

What parameters can you configure on a firewall rule?

A

● Direction: rules can be applied depending on the connection direction; values can be ingress or egress.
● Source or destination: the source parameter is only applicable to ingress
rules and the destination parameter is only applicable to egress rules. Firewall
targets can be applied to all instances in a network, source tags, and service
accounts, and can be further filtered by IP addresses or ranges.
● Protocol and port: the protocol, such as TCP, UDP, or ICMP and port
number. You can specify a protocol, a protocol and one or more ports, a
combination of protocols and ports, or nothing. If the protocol is not set, the
firewall rule applies to all protocols.
● Action: an action can be set to either allow or deny, and will determine if the
rule permits or blocks traffic.
● Priority: a numerical value from zero to 65,535, which is used to determine
the order the rules are evaluated. Rules are evaluated starting from zero, so a
lower number indicates a higher priority. If you do not specify a priority when
creating a rule, it is assigned a priority of 1000.
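A minimal sketch of the priority evaluation described above: the first matching rule in ascending priority order decides allow or deny. The rule set is invented, and this is a toy model, not the real VPC firewall engine.

```python
# Toy model of firewall rule evaluation; rules below are invented.
# Lower priority number = higher priority; first match wins.
RULES = [
    {"priority": 1000,  "protocol": "tcp", "port": 22,   "action": "allow"},
    {"priority": 900,   "protocol": "tcp", "port": 3389, "action": "deny"},
    {"priority": 65535, "protocol": "any", "port": None, "action": "deny"},  # implied ingress deny
]

def evaluate(protocol: str, port: int) -> str:
    """Return the action of the highest-priority rule matching this traffic."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if rule["protocol"] in (protocol, "any") and rule["port"] in (port, None):
            return rule["action"]
    return "deny"

print(evaluate("tcp", 22))    # allow
print(evaluate("tcp", 3389))  # deny
print(evaluate("udp", 53))    # deny (falls through to the catch-all)
```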

81
Q

What traffic is always blocked by Google?

A
82
Q

Describe the different types of firewall policies

A
83
Q

What does cloud monitoring collect?

A

Cloud Monitoring collects measurements to help you understand how your applications and system services are performing. A collection of these measurements is generically called a metric.

The applications and system services being monitored
are called monitored resources.

Measurements might include the latency of requests to a service, the amount of disk space available on a machine, the number of tables in your SQL database, the number of widgets sold, and so forth.

Resources might include virtual machines, database instances, disks, and so forth

84
Q

How does Cloud IDS work?

A

Cloud IDS is a network security service offered by Google Cloud that provides real-time detection of intrusions, malware, spyware, and command-and-control attacks, with comprehensive monitoring of both internal and external traffic.

Cloud IDS is an intrusion detection service that provides threat detection for
intrusions, malware, spyware, and command-and-control attacks on your network.
Cloud IDS works by creating a Google-managed peered network with mirrored VMs.
Traffic in the peered network is mirrored, and then inspected by Palo Alto Networks
threat protection technologies to provide advanced threat detection.
Cloud IDS provides full visibility into network traffic, including both north-south and
east-west traffic, letting you monitor VM-to-VM communication to detect lateral
movement.
Cloud IDS gives you immediate indications when attackers are attempting to breach
your network, and the service can also be used for compliance validation, like PCI 11.

85
Q

What is Cloud Armor?

A

Google Cloud Armor is a DDoS and application defense service. It delivers defense at scale against infrastructure and web application Distributed Denial of Service (DDoS) attacks using Google’s global infrastructure and security systems. Similar to CDNs, Google Cloud Armor protection is delivered at the edge of Google’s network and can block attacks close to their source before they have a chance of affecting your applications. Google Cloud Armor works with the global external Application Load
Balancer to provide built-in defenses against infrastructure DDoS attacks.
It defends against both network-layer (L3/L4) and application-layer (L7) DDoS attacks, safeguarding your services from being overwhelmed by malicious traffic.
Google Cloud Armor comes with pre-defined rulesets specifically designed to protect against the OWASP Top 10 web application vulnerabilities. These include common threats like SQL injection (SQLi), cross-site scripting (XSS), and insecure
deserialization.

86
Q

What’s the difference between an application load balancer and a network load balancer in GCP?

A

In Google Cloud Platform (GCP), an Application Load Balancer (ALB) operates at the application layer (Layer 7 of the OSI model) and is designed to handle HTTP(S) traffic. It provides advanced routing capabilities based on HTTP(S) attributes such as path-based routing, host-based routing, and SSL termination. This type of load balancer is ideal for web applications, microservices, and scenarios where you need to route traffic to different backend services based on specific URL patterns or headers. ALBs also offer features like session affinity, WebSocket support, and integration with Google Cloud Armor for enhanced security.

On the other hand, a Network Load Balancer (NLB) operates at the transport layer (Layer 4 of the OSI model) and is designed to handle TCP, UDP, and SSL traffic. NLBs provide ultra-fast and low-latency load balancing by forwarding traffic to backend virtual machines (VMs) without inspecting the content of the packets, making them suitable for latency-sensitive applications and non-HTTP workloads. NLBs support regional load balancing, direct server return (DSR), and are ideal for applications requiring high-performance and low-latency connections, such as gaming servers, real-time communication, and database services.

87
Q

Describe a generic cloud load balancer?

A

● Cloud Load Balancing receives client traffic.
● The backend can be a backend service or a backend bucket.
● Backend configuration defines:
○ How traffic is distributed.
○ Which health check to use.
○ If session affinity is used.
○ Which other services are used (such as Cloud CDN or Identity-Aware Proxy).

89
Q

Cloud Load Balancing can talk to several backends.

A

Backend services define how the traffic is distributed, which health check to use, and if session affinity is used. Backend services also define which other Google Cloud services to use, such as Cloud CDN or Identity-Aware Proxy. On the other hand, backend buckets direct incoming traffic to Cloud Storage buckets. Backend buckets are useful in serving static content. We will discuss this in more detail in the upcoming section.
Backends can include a managed instance group or a network endpoint group (NEG).

90
Q

What is a network endpoint group (NEG)?

A

● A NEG is a configuration object that specifies a group of backend endpoints or services.
● A common use case for this configuration is deploying services in GKE.
● There are five types of NEGs:
○ Zonal
○ Internet
○ Serverless
○ Private Service Connect
○ Hybrid connectivity
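As a mental model, a zonal NEG is little more than a named, zone-scoped collection of IP:port endpoints. The values below are invented for illustration; this is not the NEG API.

```python
from typing import NamedTuple

class Endpoint(NamedTuple):
    """One network endpoint: an IP address and port combination."""
    ip: str
    port: int

# A toy zonal NEG: a name, a zone, and a set of endpoints (values invented).
neg = {
    "name": "web-neg",
    "zone": "us-central1-a",  # zonal NEGs live in a single zone
    "endpoints": [Endpoint("10.0.0.2", 8080), Endpoint("10.0.0.3", 8080)],
}

print(len(neg["endpoints"]))       # 2
print(neg["endpoints"][0].port)    # 8080
```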

91
Q

What is hybrid load balancing?

A

● A hybrid strategy lets you extend Cloud Load Balancing to workloads that run on your existing infrastructure outside of Google Cloud.
● This strategy could be:
○ Permanent to provide multiple platforms for your workloads.
○ Temporary as you prepare to migrate your internal or external workload to Google Cloud.

92
Q

How can you use a hybrid load balancer to help

A

The load balancer sends requests to the services that run your workloads. These services are the load balancer endpoints, and they can be located inside or outside of Google Cloud. You configure a load balancer backend service to communicate with the external endpoints by using a hybrid NEG. The external environments can use Cloud Interconnect or Cloud VPN to communicate with Google Cloud. The load balancer must be able to reach each service with a valid IP address and port combination.

93
Q

How can you use URL maps to distribute traffic in a load balancer?

A

- The default backend service is video-site.
- Requests with the exact U

94
Q

You can use hybrid load balancing to connect these environments:

A

You can use hybrid load balancing to connect these environments:

Google Cloud, other public clouds, and on-premises

95
Q

Traffic management for a load balancer is configured in the:

A

In the URL map

In Google Cloud Platform (GCP), a URL map is used to define how traffic is routed to different backend services based on rules, such as path-based or host-based routing. The URL map allows you to specify conditions and actions that determine which backend service should handle specific requests, making it a key component for managing traffic in an HTTP(S) Load Balancer.
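The path-based routing a URL map performs can be sketched as a toy matcher. The backend names and path rules below are invented, and the real URL map also supports host rules, header matches, and more.

```python
# Invented path rules and backend names; a sketch, not the GCP URL map.
PATH_RULES = {
    "/video/*": "video-backend",
    "/images/*": "images-backend",
}
DEFAULT_BACKEND = "web-backend"

def route(path: str) -> str:
    """Return the backend whose path rule matches, else the default backend."""
    for pattern, backend in PATH_RULES.items():
        prefix = pattern.rstrip("*")
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND

print(route("/video/hd/clip1"))   # video-backend
print(route("/images/logo.png"))  # images-backend
print(route("/login"))            # web-backend (default)
```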

96
Q

What are some use cases for internal network load balancers?

A

● Load-balance traffic across multiple VMs that are functioning as gateway or router VMs.
● Use gateway virtual appliances as a next hop for a default route.
● Send traffic through multiple load balancers in two or more directions by using the same set of multi-NIC gateway or router VMs as backends.

97
Q

Describe internal network load balancing to a next hop.

A

Internal Network Load Balancing to Next Hop in Google Cloud Platform (GCP) involves directing traffic from an Internal TCP/UDP Load Balancer to another endpoint, referred to as the “next hop.” This setup is typically used for routing traffic within a Virtual Private Cloud (VPC) or between peered VPCs, supporting private, non-public-facing applications.

When using an Internal TCP/UDP Load Balancer, the load balancer distributes traffic based on the internal IP address and port defined in a forwarding rule. For the “next hop” configuration, this forwarding rule specifies an internal IP address as the next hop destination. The next hop can be another backend service, a VM instance, or another internal load balancer within the same VPC network or in a peered VPC network. This configuration is beneficial for scenarios such as service chaining, where traffic needs to pass through a series of services (like firewalls or logging systems) before reaching its final destination, or for creating multi-tier architectures within a private network.

98
Q
A
99
Q

What are cloud CDN Cached modes?

A

Using cache modes, you can control the factors that determine whether Cloud CDN caches your content.
Cloud CDN offers three cache modes. The cache modes define how responses are cached, whether Cloud CDN respects cache directives sent by the origin, and how cache TTLs are applied.
The available cache modes are USE_ORIGIN_HEADERS, CACHE_ALL_STATIC, and FORCE_CACHE_ALL.
USE_ORIGIN_HEADERS mode requires origin responses to set valid cache directives and valid caching headers.
CACHE_ALL_STATIC mode automatically caches static content that doesn’t have the no-store, private, or no-cache directive. Origin responses that set valid caching directives are also cached.
FORCE_CACHE_ALL mode unconditionally caches responses, overriding any cache directives set by the origin. If you use a shared backend with this mode configured, ensure that you don’t cache private, per-user content (such as dynamic HTML or API responses).
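The three modes can be summarized as a small decision function. This is a simplified sketch, not Google's actual logic: it only looks at a few Cache-Control directives and treats "valid caching directives" as simply containing `public`.

```python
# Simplified sketch of Cloud CDN cache-mode behavior (not Google's
# implementation). `directives` holds the origin response's
# Cache-Control directives.
NEGATIVE = {"no-store", "private", "no-cache"}

def is_cached(mode: str, is_static: bool, directives: set) -> bool:
    if mode == "FORCE_CACHE_ALL":
        return True  # caches unconditionally, overriding origin directives
    blocked = bool(directives & NEGATIVE)
    if mode == "CACHE_ALL_STATIC":
        # static content is cached unless the origin forbids it;
        # responses with valid caching directives are also cached
        return not blocked and (is_static or "public" in directives)
    if mode == "USE_ORIGIN_HEADERS":
        return not blocked and "public" in directives
    return False

print(is_cached("FORCE_CACHE_ALL", False, {"no-store"}))  # True
print(is_cached("CACHE_ALL_STATIC", True, set()))         # True
print(is_cached("USE_ORIGIN_HEADERS", True, set()))       # False
```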

100
Q

CDN Interconnect traffic billing

A

● Ingress traffic is free for all regions.
● Egress traffic rates apply only to data that leaves Compute Engine or Cloud Storage.
● The reduced price applies only to IPv4 traffic.
● Egress charges for CDN Interconnect appear on the invoice as Compute Engine Network Egress via Carrier Peering Network.

101
Q

How does Google Cloud Armor work with Cloud CDN?

A

● Google Cloud Armor inspects for malicious requests.
● Cloud CDN caches content for fast delivery.
● Use Google Cloud Armor with Cloud CDN to protect CDN origin servers from application attacks and security risks.
● Two types of security policies that affect how Google Cloud Armor works:
○ Edge security policy filters requests before content is served from cache.
○ Backend security policies protect requests routed to the backend service.

102
Q

Which of the following best practices helps optimize load balancing cost?
A. Choosing a load balancer based on your traffic type.
B. Choosing a load balancer type that closely matches your traffic patterns.
C. Implementing a caching layer with a content delivery network (CDN).
D. Increasing your timeout periods for load balancer health checks.

A

C. Implementing a caching layer with a content delivery network (CDN).

Implementing a CDN with a caching layer helps reduce the load on the backend servers by serving cached content closer to the end users. This reduces the number of requests that reach the load balancer and the backend infrastructure, which can lower the overall cost associated with load balancing, data transfer, and server utilization. CDNs help in optimizing network traffic and reduce latency, further enhancing performance and cost-efficiency.

103
Q

CDN Interconnect provides:

A direct connection between your origin servers and Google’s Cloud Load Balancing service.

A virtual private network (VPN) tunnel between your VPC network and Google’s global network.

A direct peering connection between third-party content delivery networks (CDNs) and Google’s edge network.

A private connection between your on-premises network and Google Cloud.

A

A direct peering connection between third-party content delivery networks (CDNs) and Google’s edge network.

CDN Interconnect allows third-party CDNs to connect directly to Google’s edge network at various points of presence (PoPs). This setup reduces egress costs and latency by allowing content to be served closer to users, leveraging Google’s private network. It optimizes the performance and cost of delivering content from CDNs when serving users on Google’s network.

104
Q

When you use the internal IP address of the forwarding rule to specify an internal Network Load Balancer next hop, the load balancer can only be:

In the same subnet as the next hop route or a shared VPC network.

In the same VPC network as the next hop route.

In the same VPC network as the next hop route or in a peered VPC network.

In the same subnet as the next hop route.

A

When you use the internal IP address of the forwarding rule to specify an internal Network Load Balancer next hop, the load balancer can only be in the same VPC network as the next hop route or in a peered VPC network. This configuration allows for traffic routing within the same Virtual Private Cloud (VPC) network or between VPC networks that are peered, ensuring proper communication paths and network accessibility.

105
Q

What are the sections of the Google Cloud Network Engineer certification?

A

Designing, Planning, and Prototyping a GCP Network:
Planning VPC networks, subnets, IP addressing, and routing.
Designing hybrid network connectivity (e.g., VPN, Cloud Interconnect).
Planning for network security and compliance.

Implementing Virtual Private Cloud (VPC) Networks:
Configuring VPCs, subnets, routes, and firewall rules.
Managing VPC peering, shared VPC, and network segmentation.
Configuring network logging and monitoring.

Configuring Network Services:
Configuring Google Cloud load balancing (HTTP(S), TCP/UDP, SSL Proxy).
Managing Cloud DNS, Cloud CDN, and Cloud NAT.
Working with hybrid connectivity solutions like Cloud VPN and Dedicated Interconnect.

Implementing Hybrid Connectivity:
Configuring on-premises connectivity to GCP using Cloud VPN and Cloud Interconnect.
Managing Partner Interconnect and Direct Peering.
Troubleshooting and optimizing hybrid network solutions.

Managing, Monitoring, and Optimizing Network Operations:
Monitoring network traffic and performance with Cloud Monitoring.
Troubleshooting network connectivity issues.
Optimizing network performance, costs, and security.

Implementing Network Security:
Configuring network security controls (e.g., firewall rules, Identity and Access Management).
Using Cloud Armor, DDoS protection, and IAM policies for secure access.
Managing Private Google Access, VPC Service Controls, and network policies.

106
Q

How do you prepare a network architecture for your company?

A
  • Identify regions nearby your centers for hybrid connectivity
  • Identify services for each region
  • Identify a high-availability secondary region

107
Q

How can you use shared VPC networks as a major architecture hub in GCP networks?

A

In Google Cloud Platform (GCP), Shared VPC networks enable centralized network management by allowing multiple projects to use a common Virtual Private Cloud (VPC) network. This architecture allows organizations to maintain a “hub-and-spoke” model, where a central “host” project contains the VPC network, and multiple “service” projects attach to this shared network. By doing so, all the resources in the service projects can securely communicate with each other over internal IPs without traversing the public internet, simplifying security, policy enforcement, and network administration.

Shared VPC networks are ideal for large organizations that want to separate billing, access controls, and resource management across different teams or environments while maintaining a unified network architecture. This approach allows centralized management of critical network components such as subnets, firewall rules, routes, and VPNs, and supports hybrid connectivity scenarios. It also enables consistent application of security and compliance policies, thereby reducing complexity and operational overhead.

108
Q

How can you expose applications and other public endpoints with Google Cloud?

A

Cloud Load Balancing: This is a fully managed service that allows you to expose applications using global or regional load balancers. You can use HTTP(S) Load Balancers for web applications, TCP/UDP Load Balancers for non-HTTP traffic, and SSL Proxy or TCP Proxy Load Balancers for secure connections. These load balancers can distribute traffic across multiple backend services, VMs, or Kubernetes clusters while providing features like SSL termination, global routing, and content-based routing.

Google Kubernetes Engine (GKE) with Ingress: For containerized applications running on GKE, you can use Ingress resources to manage external access to the services. Ingress provides HTTP(S) load balancing, SSL termination, and path-based routing for GKE applications. This allows you to define rules that govern access to backend services based on URL paths or hostnames.

Cloud Endpoints and API Gateway: If you’re exposing APIs or microservices, you can use Cloud Endpoints or API Gateway to manage and secure them. These services provide API management features like authentication, monitoring, logging, rate limiting, and versioning, allowing for fine-grained control over public access to your services.

Compute Engine with External IPs: You can directly expose individual virtual machines (VMs) by assigning them external IP addresses. This is suitable for simpler use cases or when you need direct access to VM instances, but it requires careful management of firewall rules and security settings to prevent unauthorized access.

109
Q
A
110
Q

Which benefits does Cymbal Bank achieve by deploying replica resources to multiple zones in a region? Select two answers.

Support for more Google Cloud features

Reduced Latency

Improved Availability

Increased Capacity

A

By deploying replica resources to multiple zones in a region, Cymbal Bank achieves the following benefits:

Improved Availability: Deploying resources across multiple zones ensures that if one zone becomes unavailable due to maintenance or an outage, the replicas in other zones can continue to operate. This redundancy helps maintain service availability and reliability.

Increased Capacity: Deploying replicas in multiple zones can also distribute the load more evenly across the region, effectively increasing the capacity to handle more requests or users. This helps balance the traffic and prevents any single zone from becoming a bottleneck.

These two benefits—improved availability and increased capacity—are key reasons for deploying resources across multiple zones in a region in Google Cloud.

111
Q

Which Google Cloud features will Cymbal Bank use to connect their on-premise networks to Google Cloud? Select two answers.

Direct Peering

Shared VPC

Cloud NAT

Dedicated Interconnect

Cloud VPN

A

To connect their on-premise networks to Google Cloud, Cymbal Bank will use the following Google Cloud features:

Dedicated Interconnect: This service provides a high-bandwidth, private connection between your on-premises network and Google Cloud. It allows for a direct, private connection that doesn’t traverse the public internet, providing improved security, lower latency, and consistent network performance.

Cloud VPN: This service enables secure connections between your on-premises network and your Google Cloud Virtual Private Cloud (VPC) network over the public internet. Cloud VPN uses IPsec tunnels to securely connect the networks, making it a cost-effective solution for secure, encrypted connectivity.

These two options—Dedicated Interconnect and Cloud VPN—are the primary Google Cloud features used to establish hybrid cloud connectivity between on-premises environments and Google Cloud.

112
Q

To use a Shared VPC as a network hub in Google Cloud, several design requirements must be met to ensure a scalable, secure, and well-managed architecture. These requirements include:

A

Host Project Setup: A host project must be created to own the Shared VPC network. This project contains the VPC network, subnets, and common network resources like firewall rules, routes, and interconnects. The host project should be centrally managed by a team with network administration privileges.

Service Projects: Multiple service projects are attached to the Shared VPC network. These service projects host resources (e.g., VM instances, Kubernetes clusters) that communicate over the shared VPC network. Each service project is owned and managed by different teams or departments but relies on the common networking infrastructure provided by the host project.

Centralized Network Management: Centralized management of network policies, firewall rules, and IAM permissions is necessary for security and compliance. The network team managing the host project must control routing, security policies, and connectivity options (like VPN or Interconnect).

Subnet and IP Address Planning: Proper planning of subnets and IP address ranges is crucial to avoid conflicts and ensure optimal utilization of IP space. Subnets should be segmented based on organizational needs and isolated according to security requirements.

Hybrid Connectivity and Peering: For connectivity between on-premises and GCP, consider configuring Cloud VPN, Dedicated Interconnect, or Partner Interconnect. Additionally, use VPC peering if there is a need to connect with other VPCs not part of the Shared VPC.

High Availability and Redundancy: Ensure that critical network components like Cloud NAT, Cloud Load Balancing, and other services are designed with high availability and redundancy across regions and zones to provide a resilient network architecture.

113
Q
A
114
Q
A