Practice Question Flashcards
A developer has created an application based on customer requirements. The customer needs to run the application with the minimum downtime.
Which design approach regarding high-availability applications, Recovery Time Objective, and Recovery Point Objective
must be taken?
A. Active/passive results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
B. Active/passive results in lower RTO and RPO. For RPO, data synchronization between the two data centers does not need to be timely to allow seamless request flow.
C. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers does not need to be timely to allow seamless request flow.
D. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
D. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
Active/Active Configuration:
High Availability: In an active/active setup, multiple instances of the application are running simultaneously across different data centers or availability zones. This configuration ensures that if one instance or data center fails, the other instances continue to serve requests without interruption, resulting in lower downtime.
Lower RTO: Since both data centers are actively handling traffic, the system can continue to operate with minimal recovery time if one data center fails. This leads to a lower Recovery Time Objective (RTO).
Lower RPO: In an active/active setup, data is typically synchronized in real-time or near real-time between the data centers. This minimizes the amount of data that could be lost in the event of a failure, leading to a lower Recovery Point Objective (RPO).
Timely Data Synchronization: For the RPO to be low, data synchronization between the active instances must be timely. This ensures that all data centers have the most up-to-date information, allowing for seamless request flow even if one data center goes down.
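The link between synchronization timeliness and RPO can be sketched with some back-of-the-envelope arithmetic. The intervals below are invented for illustration, not taken from the question:

```python
# Toy model (hypothetical numbers): the worst-case RPO of an active/active
# pair is bounded by how often data is synchronized between data centers.

def worst_case_rpo_seconds(sync_interval_s: float, sync_duration_s: float) -> float:
    """Data written just after one sync completes is unprotected until the
    next sync finishes, so the worst-case loss window is interval + duration."""
    return sync_interval_s + sync_duration_s

# Synchronizing every 5 s, each sync taking 1 s: up to 6 s of data at risk.
print(worst_case_rpo_seconds(5.0, 1.0))    # 6.0
# Near-real-time replication shrinks the window dramatically.
print(worst_case_rpo_seconds(0.5, 0.25))   # 0.75
```

This is why the correct option insists the synchronization "must be timely": the replication interval directly caps how low the RPO can go.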
A cloud native project is being worked on in which all source code and dependencies are written in Python, Ruby, and/or
JavaScript. A change in code triggers a notification to the CI/CD tool to run the CI/CD pipeline.
Which step should be omitted from the pipeline?
A. Deploy the code to one or more environments, such as staging and/or production.
B. Build one or more containers that package up code and all its dependencies.
C. Compile code.
D. Run automated tests to validate the code.
C. Compile code.
Python, Ruby, and JavaScript are interpreted languages, which means they do not require a compilation step before execution. This is different from compiled languages like Java or C++, where source code must be compiled into executable binaries before running.
Other steps in the pipeline such as deploying the code to environments, building containers, and running automated tests are essential for ensuring that the application runs correctly, is packaged with all necessary dependencies, and meets quality standards.
Which two statements are considered best practices according to the 12-factor app methodology for application design?
(Choose two.)
A. Application code writes its event stream to stdout.
B. Application log streams are archived in multiple replicated databases.
C. Application log streams are sent to log indexing and analysis systems.
D. Application code writes its event stream to specific log files.
E. Log files are aggregated into a single file on individual nodes.
A. Application code writes its event stream to stdout.
-The 12-factor methodology suggests that applications should write logs as an unbuffered stream of events to stdout. This allows the environment to handle the storage, indexing, and analysis of logs.
C. Application log streams are sent to log indexing and analysis systems.
-The 12-factor methodology recommends that logs be treated as event streams that can be sent to log indexing and analysis systems for further processing and monitoring. This ensures that logs are centralized and can be analyzed and searched effectively.
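The stdout practice from option A takes only a few lines; a minimal Python sketch (the logger name `myapp` is a placeholder):

```python
import logging
import sys

# Factor XI ("Logs") in miniature: the app writes its event stream,
# unbuffered, to stdout. The execution environment, not the app, is
# responsible for routing that stream to indexing/analysis systems.
logger = logging.getLogger("myapp")          # placeholder app name
handler = logging.StreamHandler(sys.stdout)  # stdout, not a file the app manages
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order accepted")  # one event in the stream
```

Note what is absent: no file paths, no rotation logic, no shipping code. Those concerns belong to the platform, which is exactly why options D and E are wrong.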
An organization manages a large cloud-deployed application that employs a microservices architecture. No notable issues
occur with downtime because the services of this application are redundantly deployed over three or more data center
regions. However, several times a week reports are received about application slowness. The container orchestration logs
show faults in a variety of containers that cause them to fail and then spin up brand new.
Which action must be taken to improve the resiliency design of the application while maintaining current scale?
A. Update the base image of the containers.
B. Test the execution of the application with another cloud services platform.
C. Increase the number of containers running per service.
D. Add consistent “try/catch(exception)” clauses to the code.
D. Add consistent “try/catch(exception)” clauses to the code.
-Adding proper error handling through “try/catch(exception)” clauses can significantly improve the resiliency of the application by allowing it to gracefully handle errors and exceptions without causing the container to crash. This reduces the likelihood of faults that cause containers to fail and restart, leading to improved stability and performance.
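A minimal Python sketch of the pattern (Python spells it try/except); `fetch_profile` and its failure mode are invented for illustration:

```python
# Wrap a fault-prone call so one bad request degrades gracefully instead
# of crashing the process -- and with it, the container.

def fetch_profile(user_id: str) -> dict:
    if user_id == "broken":
        raise ValueError("malformed record")  # simulated fault
    return {"id": user_id}

def handle_request(user_id: str) -> dict:
    try:
        return fetch_profile(user_id)
    except ValueError as exc:
        # Handle the exception and return a safe fallback instead of
        # letting it propagate and kill the container.
        return {"error": str(exc), "id": None}

print(handle_request("alice"))   # {'id': 'alice'}
print(handle_request("broken"))  # {'error': 'malformed record', 'id': None}
```

Without the except clause, the second call would raise, the process would exit, and the orchestrator would spin up a replacement container, which is exactly the churn described in the question.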
How should a web application be designed to work on a platform where up to 1000 requests per second can be served?
A. Use algorithms like random early detection to deny excessive requests.
B. Set a per-user limit (for example, 5 requests/minute/user) and deny the requests from the users who have reached the limit.
C. Only 1000 user connections are allowed; further connections are denied so that all connected users can be served.
D. All requests are saved and processed one by one so that all users can be served eventually.
B. Set a per-user limit (for example, 5 requests/minute/user) and deny the requests from the users who have reached the limit.
Explanation:
Rate Limiting (Per-User Limits): Setting a per-user rate limit is a common and effective strategy for managing high traffic. This ensures that no single user can overwhelm the system with too many requests, thereby preventing abuse and ensuring fair usage for all users.
Prevents Overloading: By limiting the number of requests each user can make within a specific time frame (e.g., 5 requests per minute), you help maintain the application’s performance and availability even under high load conditions.
Fairness and Security: This approach also helps to prevent denial-of-service attacks and ensures that all users have a fair opportunity to access the service.
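A fixed-window per-user limiter, matching the "5 requests/minute/user" example, can be sketched as follows. This is one strategy among several (token bucket is another), and in a real multi-server deployment the counters would live in a shared store such as Redis so the limit holds across servers:

```python
import time
from collections import defaultdict
from typing import Optional

class RateLimiter:
    """Fixed-window counter: at most `limit` requests per user per window."""

    def __init__(self, limit: int = 5, window_s: float = 60.0):
        self.limit = limit
        self.window_s = window_s
        self.counts = defaultdict(int)   # (user, window index) -> request count

    def allow(self, user: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        window = int(now // self.window_s)        # which minute we are in
        key = (user, window)
        if self.counts[key] >= self.limit:
            return False                           # over the per-user limit: deny
        self.counts[key] += 1
        return True

rl = RateLimiter(limit=5, window_s=60.0)
print([rl.allow("alice", now=100.0) for _ in range(6)])
# first 5 allowed, 6th denied
```

Because the denial is per user, a single heavy client is throttled while everyone else continues to be served, which is the fairness property the explanation describes.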
An organization manages a large cloud-deployed application that employs a microservices architecture across multiple data
centers. Reports have been received about application slowness. The container orchestration logs show that faults have been raised in a variety of containers that caused them to fail and then spin up brand new instances. Which two actions can improve the design of the application to identify the faults? (Choose two.)
A. Automatically pull out the container that fails the most over a time period.
B. Implement a tagging methodology that follows the application execution from service to service.
C. Add logging on exception and provide immediate notification.
D. Do a write to the datastore every time there is an application failure.
E. Implement an SNMP logging system with alerts in case a network link is slow.
B. Implement a tagging methodology that follows the application execution from service to service.
C. Add logging on exception and provide immediate notification.
Explanation:
B. Implement a tagging methodology that follows the application execution from service to service:
Distributed Tracing: Implementing a tagging or tracing methodology allows you to track the flow of requests through different services in a microservices architecture. This helps in identifying the exact point of failure or performance bottlenecks across the different services. Tools like Jaeger or Zipkin are often used for this purpose.
C. Add logging on exception and provide immediate notification:
Enhanced Logging and Alerting: Adding logging on exceptions and setting up immediate notifications ensures that faults are captured in real-time. This helps in quickly diagnosing issues when they occur and reducing the time to resolve them. Properly structured logs are essential for understanding the context of failures.
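Both actions can be illustrated together in a toy Python sketch: a correlation ID is generated at the edge and carried from service to service, and each service logs on exception under the same tag. The service names and fault are invented:

```python
import uuid

def edge_service(payload: dict) -> list:
    """Entry point: mints the trace ID and tags every log line with it."""
    trace_id = str(uuid.uuid4())
    logs = [f"[{trace_id}] edge: received {payload['item']}"]
    logs += inventory_service(trace_id, payload)
    return logs

def inventory_service(trace_id: str, payload: dict) -> list:
    try:
        if payload["item"] == "missing":
            raise KeyError("item not in stock")  # simulated fault
        return [f"[{trace_id}] inventory: reserved {payload['item']}"]
    except KeyError as exc:
        # Log on exception under the same tag, so the failing hop is
        # immediately identifiable when logs are aggregated.
        return [f"[{trace_id}] inventory: ERROR {exc}"]

print("\n".join(edge_service({"item": "widget"})))
```

In production the same idea is realized with distributed-tracing headers (e.g. the W3C `traceparent` header) and tools like Jaeger or Zipkin, plus an alerting hook on the ERROR path, rather than hand-built strings.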
Which two situations are flagged by software tools designed for dependency checking in continuous integration
environments, such as OWASP Dependency-Check? (Choose two.)
A. publicly disclosed vulnerabilities related to the included dependencies
B. mismatches in coding styles and conventions in the included dependencies
C. incompatible licenses in the included dependencies
D. test case failures introduced by bugs in the included dependencies
E. buffer overflows to occur as the result of a combination of the included dependencies
A. Publicly disclosed vulnerabilities related to the included dependencies
C. Incompatible licenses in the included dependencies
Explanation:
A. Publicly disclosed vulnerabilities related to the included dependencies:
Dependency checking tools are primarily used to identify known security vulnerabilities in the libraries and packages that your application depends on. These tools cross-reference the dependencies with databases of known vulnerabilities (such as the National Vulnerability Database) to alert you if any of the dependencies are compromised.
C. Incompatible licenses in the included dependencies:
These tools can also scan the licenses of the dependencies to ensure that they are compatible with the project’s licensing terms. Incompatible licenses might impose restrictions that could legally conflict with how the software is distributed or used, making this a critical aspect of dependency management.
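Conceptually, such a tool cross-references a project's pinned dependencies against a vulnerability feed and a license policy. The toy Python sketch below illustrates the idea only; the database entry and the license policy are fabricated:

```python
# Invented lookup data for illustration -- real tools query feeds such as
# the National Vulnerability Database and a project-specific license policy.
KNOWN_VULNS = {
    ("libfoo", "1.2.0"): "CVE-XXXX-0001",   # fabricated identifier
}
INCOMPATIBLE_LICENSES = {"AGPL-3.0"}        # assumed policy, project-specific

def check(deps):
    """deps: list of (name, version, license) tuples. Returns findings."""
    findings = []
    for name, version, license_ in deps:
        if (name, version) in KNOWN_VULNS:
            findings.append(f"{name} {version}: {KNOWN_VULNS[(name, version)]}")
        if license_ in INCOMPATIBLE_LICENSES:
            findings.append(f"{name} {version}: incompatible license {license_}")
    return findings

print(check([("libfoo", "1.2.0", "MIT"), ("libbar", "2.0.0", "AGPL-3.0")]))
```

Note that neither check involves running the dependency's code, which is why coding style (B), test failures (D), and runtime buffer overflows (E) are outside what these tools flag.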
A network operations team is using the cloud to automate some of their managed customer and branch locations. They
require that all of their tooling be ephemeral by design and that the entire automation environment can be recreated without manual commands. Automation code and configuration state will be stored in git for change control and versioning. The engineering high-level plan is to use VMs in a cloud-provider environment, then configure open source tooling onto these VMs to poll, test, and configure the remote devices, as well as deploy the tooling itself.
Which configuration management and/or automation tooling is needed for this solution?
A. Ansible
B. Ansible and Terraform
C. NSO
D. Terraform
E. Ansible and NSO
B. Ansible and Terraform
Explanation:
Ansible:
Ansible is a powerful automation tool that is well-suited for configuration management, application deployment, task automation, and IT orchestration. It can be used to configure the remote devices and deploy the open-source tooling on the VMs.
Ansible is also agentless, meaning it can manage systems over SSH without needing any agent installed on the managed nodes, which aligns well with the need for ephemeral, easy-to-recreate environments.
Terraform:
Terraform is an infrastructure-as-code (IaC) tool that allows you to define and provision data center infrastructure using a declarative configuration language. In this scenario, Terraform would be used to automate the provisioning of VMs in the cloud-provider environment.
Terraform’s ability to store the entire infrastructure configuration in code, which can be versioned in git, makes it an ideal choice for creating and managing ephemeral environments that can be recreated without manual intervention.
An application is hosted on Google Kubernetes Engine. A new JavaScript module is created to work with the existing
application.
Which task is mandatory to make the code ready to deploy?
A. Create a Dockerfile for the code base.
B. Rewrite the code in Python.
C. Build a wrapper for the code to “containerize” it.
D. Rebase the code from the upstream git repo.
A. Create a Dockerfile for the code base.
Explanation:
Google Kubernetes Engine (GKE): GKE is a managed Kubernetes service, and it typically runs containerized applications. To deploy a new JavaScript module to GKE, the code needs to be packaged as a container image.
Dockerfile: A Dockerfile is a script that contains a series of instructions on how to build a Docker image for your application. The Dockerfile will specify the base image (e.g., a Node.js image for JavaScript), copy the necessary files into the container, install dependencies, and define how the application should be run.
Which database type should be used with highly structured data and provides support for ACID transactions?
A. time series
B. document
C. graph
D. relational
D. Relational
Explanation:
Relational databases are designed to handle highly structured data and support ACID (Atomicity, Consistency, Isolation, Durability) transactions. They organize data into tables with predefined schemas, where each table contains rows and columns. Relational databases are widely used for applications that require strict consistency and transactional integrity.
ACID transactions ensure that database operations are processed reliably, maintaining data integrity even in the case of failures or errors. This is critical for applications like financial systems, where precise and consistent data handling is required.
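Atomicity, the A in ACID, can be demonstrated with Python's built-in `sqlite3` module: both legs of a transfer commit together, or neither does. The account data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; rolls back automatically on exception
        conn.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'"
        )
        # ...the matching credit to bob would go here, but we crash first:
        raise RuntimeError("failure mid-transfer")  # simulated crash
except RuntimeError:
    pass

# The debit was rolled back along with the failed transaction.
print(conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone())  # (100,)
```

Without transactional rollback, alice's account would be debited with no matching credit, which is precisely the inconsistency ACID guarantees prevent in systems like financial applications.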
Where should distributed load balancing occur in a horizontally scalable architecture?
A. firewall-side/policy load balancing
B. network-side/central load balancing
C. service-side/remote load balancing
D. client-side/local load balancing
D. client-side/local load balancing
Explanation:
Client-side/local load balancing occurs at the client’s end, where the client is responsible for distributing requests across multiple servers or instances of a service. In a horizontally scalable architecture, this approach can be very efficient because it allows the client to directly interact with the various service instances, reducing the need for centralized load balancers and potentially minimizing latency.
Advantages of Client-Side Load Balancing:
Scalability: Each client independently balances its load across available service instances, which can scale more effectively as the number of clients and services increases.
Reduced Bottleneck: There’s no single point of failure or bottleneck since the load balancing logic is distributed across clients.
Better Performance: Clients can make intelligent decisions based on local conditions, such as choosing the nearest or least loaded instance.
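A minimal client-side load-balancing sketch in Python: the client itself holds the instance list and rotates through it round-robin, so no central balancer sits in the request path. The hostnames are placeholders, and real clients typically layer health checks or least-loaded selection on top of this:

```python
import itertools

class RoundRobinClient:
    """Client-side balancer: the caller picks the next instance locally."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def pick(self) -> str:
        return next(self._cycle)

client = RoundRobinClient(["svc-a:8080", "svc-b:8080", "svc-c:8080"])
print([client.pick() for _ in range(4)])
# ['svc-a:8080', 'svc-b:8080', 'svc-c:8080', 'svc-a:8080']
```

Because every client carries its own copy of this logic, adding clients adds balancing capacity automatically, which is the scalability advantage described above.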
Which two statements about a stateless application are true? (Choose two.)
A. Different requests can be processed by different servers.
B. Requests are based only on information relayed with each request.
C. Information about earlier requests must be kept and must be accessible.
D. The same server must be used to process all requests that are linked to the same state.
E. No state information can be shared across servers.
A. Different requests can be processed by different servers.
B. Requests are based only on information relayed with each request.
Explanation:
A. Different requests can be processed by different servers:
In a stateless application, each request is independent and self-contained, meaning that any server can handle any request without needing information from previous requests. This allows for better scalability and flexibility, as requests can be routed to any available server.
B. Requests are based only on information relayed with each request:
Stateless applications do not retain any session or state information between requests. Instead, all necessary information is sent with each request (e.g., in HTTP headers, parameters, or cookies), making each request self-sufficient.
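Both properties are visible in a toy stateless handler: everything it needs arrives with the request, so any server (here, any call to the function) produces the same answer. The request shape and the `x-user` header name are invented for illustration:

```python
def handle(request: dict) -> dict:
    """Stateless handler: no session lookup, no memory of earlier requests.
    The identity and payload travel inside the request itself."""
    user = request["headers"].get("x-user", "anonymous")  # assumed header name
    return {"status": 200, "body": f"hello {user}"}

req = {"headers": {"x-user": "alice"}, "body": ""}
# Two independent "servers" (calls) give identical answers from the same request.
print(handle(req) == handle(req))  # True
```

This self-sufficiency is what lets a load balancer route each request to any available server, options A and B, whereas options C, D, and E all describe stateful designs.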
Which statement about microservices architecture is true?
A. Applications are written in a single unit.
B. It is a complex application composed of multiple independent parts.
C. It is often a challenge to scale individual parts.
D. A single faulty service can bring the whole application down.
B. It is a complex application composed of multiple independent parts.
Explanation:
Microservices Architecture: This approach to software design structures an application as a collection of loosely coupled services. Each service corresponds to a specific business function, and these services communicate with each other through APIs.
Which two data encoding techniques are supported by gRPC? (Choose two.)
A. XML
B. JSON
C. ASCII
D. ProtoBuf
E. YAML
B. JSON
D. ProtoBuf
Explanation:
ProtoBuf (Protocol Buffers):
ProtoBuf is the default serialization format used by gRPC. It is a language-neutral, platform-neutral, extensible mechanism for serializing structured data, which is both compact and efficient. gRPC relies heavily on ProtoBuf due to its performance and efficiency.
JSON:
gRPC can also support JSON encoding, especially in scenarios where interoperability with web clients is required, such as with gRPC-Web. JSON is widely used for its readability and ease of use in web-based applications.
Refer to the exhibit. Which two functions are performed by the load balancer when it handles traffic originating from the
Internet destined to an application hosted on the file server farm? (Choose two.)
A. Terminate the TLS over the UDP connection from the router and originate an HTTPS connection to the selected server.
B. Terminate the TLS over the UDP connection from the router and originate an HTTP connection to the selected server.
C. Terminate the TLS over the TCP connection from the router and originate an HTTP connection to the selected server.
D. Terminate the TLS over the TCP connection from the router and originate an HTTPS connection to the selected server.
E. Terminate the TLS over the SCTP connection from the router and originate an HTTPS connection to the selected server.
C. Terminate the TLS over the TCP connection from the router and originate an HTTP connection to the selected server.
D. Terminate the TLS over the TCP connection from the router and originate an HTTPS connection to the selected server.
Explanation:
TLS Termination over TCP:
TLS (Transport Layer Security) is commonly used to secure communications over a network. TLS operates over TCP (Transmission Control Protocol), not UDP (User Datagram Protocol) or SCTP (Stream Control Transmission Protocol).
The load balancer typically handles incoming TLS-encrypted connections from clients (originating from the router in this scenario) and can “terminate” the TLS connection. This means the load balancer decrypts the incoming traffic.
Options for Connection to Backend Server:
HTTP Connection to the Server (Option C): After terminating the TLS connection, the load balancer can forward the request to the selected server using plain HTTP, which is unencrypted. This is common in environments where encryption is only needed between the client and the load balancer, and not within the internal network.
HTTPS Connection to the Server (Option D): Alternatively, after terminating the TLS connection, the load balancer can re-encrypt the traffic and forward it to the selected server over HTTPS, ensuring end-to-end encryption.
Which transport protocol is used by gNMI?
A. HTTP/2
B. HTTP 1.1
C. SSH
D. MQTT
A. HTTP/2
Explanation:
gNMI (gRPC Network Management Interface) is a protocol used for network management that leverages gRPC, which in turn uses HTTP/2 as its transport protocol. HTTP/2 provides features like multiplexing, flow control, header compression, and server push, making it well-suited for efficient communication in network management.
A developer plans to create a new bugfix branch to fix a bug that was found on the release branch.
Which command completes the task?
A. git checkout -t RELEASE BUGFIX
B. git checkout -b RELEASE BUGFIX
C. git checkout -t BUGFIX RELEASE
D. git checkout -b BUGFIX RELEASE
D. git checkout -b BUGFIX RELEASE
Explanation:
git checkout -b BUGFIX RELEASE:
This command creates a new branch named BUGFIX based on the RELEASE branch. The -b flag is used to create a new branch, and RELEASE specifies the branch from which the new BUGFIX branch should be created.
Which Git command enables the developer to revert back to f414f31 commit to discard changes in the current working tree?
A. git reset --hard f414f31
B. git reset checkout --hard f414f31
C. git reset --soft f414f31
D. git checkout f414f31
A. git reset --hard f414f31
Explanation:
git reset --hard f414f31:
This command resets the current branch to the commit f414f31 and updates the working tree and the index to match the specified commit. This means that all changes in the working directory will be discarded, effectively reverting the project to the state it was in at the f414f31 commit.