Chapter 1, Introduction to Cloud Native Flashcards
What is cloud native?
Cloud native is an approach to building software applications as a collection of independent, loosely coupled, business-capability-oriented services (microservices) that can run in dynamic environments (public, private, hybrid, or multicloud) in an automated, scalable, resilient, manageable, and observable way.
Page 11
What platform has become the de facto platform for building, running, and sharing containerized applications?
Docker has become the de facto platform for building, running, and sharing containerized applications.
Page 17
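As an illustration of the card above, here is a minimal sketch of running a container programmatically with the Docker SDK for Python (the `docker` package); it assumes a local Docker daemon is available, and the `alpine:3.19` image tag is just an example.

```python
# Minimal sketch: running a container with the Docker SDK for Python
# (the `docker` package). Assumes a local Docker daemon is available.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a short-lived container from a small example image and capture its output.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from a container"],
    remove=True,  # delete the container once it exits
)
print(output.decode().strip())  # -> hello from a container
```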
What is container orchestration?
Container orchestration is the process of managing the containers’ life cycle. When you operate real-world cloud native applications, it’s nearly impossible to manually manage containers. Hence, a container orchestration system is an essential part of building a cloud native architecture.
Page 19
List the key features and capabilities of a container orchestration system.
- Automatic provisioning: Automatically provisions container instances and deploys containers
- High availability: Automatically reprovisions containers when a container or its runtime fails
- Scaling: Automatically adds or removes container instances based on demand to scale the application up or down
- Resource management: Allocates resources among the containers
- Service interfaces and load balancing: Exposes containers to external systems and manages the load coming into the containers
- Networking infrastructure abstractions: Provides a networking overlay to build communication among containers
- Service discovery: Offers a built-in capability to discover services by service name
- Control plane: Provides a single place to manage and monitor a containerized system
- Affinity: Provisions containers close to or far apart from each other, which helps availability and performance
- Health monitoring: Automatically detects failures and provides self-healing
- Rolling upgrades: Coordinates incremental upgrades with zero downtime
- Componentization and isolation: Introduces logical separation between various application domains by using concepts such as namespaces
Page 20
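As a concrete sketch of the scaling and control-plane capabilities in the list above, the snippet below asks Kubernetes (via the official `kubernetes` Python client) to change the desired replica count of a workload; the deployment name `my-service` and namespace `default` are hypothetical, and a reachable cluster with a local kubeconfig is assumed.

```python
# Sketch: declaring a new desired scale for a workload; the orchestrator's
# control plane reconciles the running container instances to match.
from kubernetes import client, config

config.load_kube_config()  # use credentials from the local kubeconfig
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="my-service",               # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},  # desired state; Kubernetes does the rest
)
```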
What has become the de facto container orchestration system in the cloud native landscape?
In the cloud native landscape, Kubernetes has become the de facto container orchestration system.
Page 20
How does Kubernetes simplify container orchestration?
Kubernetes creates an abstraction layer on top of containers to simplify container orchestration by automating the deployment, scaling, fault tolerance, networking, and various other container management requirements.
Page 20
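To make that abstraction concrete, here is a hedged sketch of creating a Deployment with the `kubernetes` Python client: we declare the desired state (image and replica count) and Kubernetes handles scheduling, restarts, and networking behind the abstraction. The names `hello-web` and `nginx:1.25` are illustrative, and a reachable cluster is assumed.

```python
# Sketch: declaring a Deployment; Kubernetes automates placement, restarts,
# and scaling of the underlying containers.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```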
What comprises a Kubernetes cluster?
A Kubernetes cluster comprises a set of nodes that run on virtual or physical machines. These include at least one control-plane node and a number of worker nodes. The control-plane node is responsible for managing and scheduling application instances across the cluster; therefore, the services that the Kubernetes control-plane node runs are known as the Kubernetes control plane.
Page 22
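A small sketch of inspecting that cluster layout with the `kubernetes` Python client is shown below; it assumes a reachable cluster and the common `node-role.kubernetes.io/control-plane` label, which can vary between distributions.

```python
# Sketch: listing the nodes of a cluster and classifying them as
# control-plane or worker nodes based on a common label convention.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    labels = node.metadata.labels or {}
    role = ("control-plane"
            if "node-role.kubernetes.io/control-plane" in labels
            else "worker")
    print(f"{node.metadata.name}: {role}")
```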
What can a microservice of a cloud native application be modeled as?
A given microservice of a cloud native application can be modeled as a serverless function. This programmatic function serves the business capability of a microservice and runs on a cloud infrastructure. With serverless functions, most of the management, networking, resiliency, scalability, and security is already provided by the underlying serverless platform.
Page 23
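As one possible sketch, the function below models a microservice's business capability as a serverless function using AWS Lambda's Python handler convention; the order payload shape is hypothetical, and the platform supplies scaling, resilience, networking, and security around it.

```python
# Sketch: a microservice's business capability packaged as a serverless
# function (AWS Lambda handler convention). The event/payload shape is
# hypothetical; everything outside this function is handled by the platform.
import json

def lambda_handler(event, context):
    order = json.loads(event.get("body", "{}"))
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order.get("id"), "total": total}),
    }
```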
Is using containers mandatory for building cloud native applications?
You may opt to run your microservices without using containers. While containers are not mandatory for building cloud native applications, without them you have to manage the complexities and overhead of running applications directly on top of VMs. For this reason, most real-world implementations of cloud native architecture adopt containers, container orchestration, or higher-level abstractions such as serverless functions.
Page 23
What is Infrastructure as Code (IaC) and its benefits?
Infrastructure as code (IaC) is the technique commonly used to automate the creation of the target environment (dev, staging, or production). With the IaC model, infrastructure (networks, VMs, load balancers, and connection topology) is managed using a declarative model, similar to the source code of an application. With this model, we can repeatedly create the required environment from the descriptor without any manual intervention. This improves the speed and efficiency of the development process while maintaining consistency and reducing management overhead. IaC techniques are therefore an integral part of the continuous delivery pipeline.
Page 25
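The toy sketch below (plain Python, not any real IaC tool) illustrates the declarative, repeatable idea behind IaC: the environment is described as data, and an idempotent apply step reconciles whatever currently exists toward that description, so running it a second time changes nothing.

```python
# Toy illustration of the IaC model: infrastructure described as a declarative
# descriptor, applied idempotently so the same descriptor can recreate the
# environment repeatedly without manual steps. Not a real provisioning tool.
DESIRED = {
    "network": {"cidr": "10.0.0.0/16"},
    "vms": {"web": {"size": "small", "count": 2},
            "db": {"size": "large", "count": 1}},
    "load_balancer": {"targets": ["web"]},
}

def apply(desired: dict, current: dict) -> dict:
    """Reconcile `current` infrastructure toward `desired`."""
    for resource, spec in desired.items():
        if current.get(resource) != spec:
            print(f"reconciling {resource}: {current.get(resource)} -> {spec}")
            current[resource] = spec  # stand-in for a cloud provider API call
    return current

state = apply(DESIRED, {})     # first run creates everything
state = apply(DESIRED, state)  # second run is a no-op: environment already matches
```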
What key capabilities are needed to dynamically manage cloud native applications?
- Autoscaling: Scales the application instances up or down based on the traffic or load
- High availability: In the event of a failure, provides the ability to spawn new instances in the current data center or shift traffic to different data centers
- Resource optimization: Ensures optimal use of resources through dynamic scaling, with no up-front costs and an automated, real-time response to changes in load
- Observability: Enables logs, metrics, and tracing of the cloud native application with central control
- Quality of service (QoS): Enables end-to-end security, throttling, compliance, and versioning across applications
- Central control plane: Provides a central place to manage every aspect of the cloud native application
- Resource provisioning: Manages resource allocations (CPU, memory, storage, network) for each application
- Multicloud support: Provides the ability to manage and run the application across several cloud environments, including private, hybrid, and public clouds (as a given application may require components and services from multiple cloud providers)
Page 27
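As a sketch of the autoscaling capability from the list above, the snippet below registers a HorizontalPodAutoscaler through the `kubernetes` Python client; the target deployment `my-service` is hypothetical, a reachable cluster is assumed, and the CPU threshold is only an example.

```python
# Sketch: autoscaling a workload between 2 and 10 replicas based on CPU load,
# using the autoscaling/v1 HorizontalPodAutoscaler API.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-service-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-service"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # example threshold
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```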
What are the stages of building a cloud native application?
Page 24