Container Hosting Flashcards
Azure Container Registry
Is a fully managed Docker container registry service. It enables developers and organizations to store, manage, and deploy Docker container images securely in the cloud.
ACR integrates seamlessly with other Azure services, such as Azure Kubernetes Service (AKS) and Azure DevOps, providing a comprehensive platform for building, deploying, and managing containerized applications.
-Dockerfile: Defines what the container should look like
-Container Image: Read-only copy of the thing we want to run
-Container: A running instance of a container image
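The relationship above can be illustrated with a minimal Dockerfile sketch (the base image, file paths, and app name here are hypothetical):

```dockerfile
# Dockerfile: describes what the image should look like
# (base image and app name are hypothetical examples)
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
# copy the locally built app into the image
COPY ./publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Building this file produces the read-only container image; running that image produces a container.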
Feature Considerations
-All plans support container storage/management and Azure AD integration (control access to the containers using Azure AD identities)
-Standard includes all core features but with greater storage and throughput
-Premium adds further storage/throughput, plus geo-replication, content trust, and VNet security
-ACR Tasks provides quick, multi-step, and triggered tasks to be automated (patching the underlying OS or frameworks used in your container images, or tasks triggered when source code changes)
Architecture
- Registry: Parent management unit. Defines IAM, networking, pricing, and more
- Repository: Create repositories to store your images, or other artifacts.
- Artifacts: ACR supports Windows/Linux containers, and OCI artifacts
-Open Container Initiative (OCI) is an open governance structure for the standardization of container formats and runtime specifications.
The goal of ACR is to provide a standard private registry that can work with a range of different container hosts
Azure Container Instances
Is a serverless container service, allowing developers to run Docker containers in a lightweight and managed environment without the need to provision or manage virtual machines.
Considerations
-For simple solutions that need to start fast (simple apps, task automation)
-Considered to be like container hosting building blocks (can be used by AKS)
-Doesn’t include any scaling, healing, or other container orchestration capabilities
-Pricing is based on resource allocation, and is charged on a per-second basis
Architecture
- Container Instance: Resources (CPU/Memory) to run Windows or Linux containers
- Connectivity: Can be publicly accessible, or deployed to a VNet
-If you configure public access, define which port the container listens on and what service is running there. When the container instance is created, you can define a DNS name label and are also given a public IP address; either record can be used to provide users with access to the solution
- Storage: Containers are able to mount Azure Files shares for persistent storage
-Containers are considered to be ephemeral (any local state is lost when the container stops)
Container Groups
- Container Group: Deploy more than one container together as part of a single solution
- Hosting: Containers are scheduled to operate on the same container host
-Useful when you have a solution where one component typically runs alongside another
-You can expose these services via port 80 to the end users via DNS or a public IP
-You can also have private connectivity, where services aren’t exposed to the internet
-Containers are able to mount Azure Files shares for persistent storage (per container basis)
- Deployment: Deployed with ARM Templates or a YAML deployment file
-YAML only allows deploying container instances, not other resource types
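A container group YAML file might look like the following minimal sketch, deployed with `az container create --file`. Names, images, and the apiVersion are hypothetical/illustrative; check the ACI YAML reference for the current schema:

```yaml
# Sketch of an ACI container group with two containers and public ingress
apiVersion: '2021-10-01'        # ACI API version; verify against current docs
location: eastus
name: demo-group                # hypothetical names throughout
properties:
  osType: Linux
  containers:
    - name: web
      properties:
        image: myregistry.azurecr.io/web:v1
        ports:
          - port: 80
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
    - name: sidecar             # second container in the same group/host
      properties:
        image: myregistry.azurecr.io/sidecar:v1
        resources:
          requests:
            cpu: 0.5
            memoryInGB: 0.5
  ipAddress:
    type: Public
    dnsNameLabel: demo-group    # yields an FQDN alongside the public IP
    ports:
      - protocol: TCP
        port: 80
type: Microsoft.ContainerInstance/containerGroups
```

Note that only container-instance resources can be declared this way; anything else (VNets, storage accounts) needs ARM templates.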
Azure Kubernetes Service
Is a fully managed container orchestration service. It simplifies the deployment, management, and scaling of containerized applications using Kubernetes, an open-source platform for automating the deployment, scaling, and management of containerized workloads.
-Built for large scale
-A lot to manage (admin overhead)
-Provides a lot of functionality
Discovery and Load Balancing
-AKS takes care of exposing network connectivity, load balancing, and distributing traffic to the containers that make up your solution
Automation
-Example: releasing a v2 of a containerized solution
-We might want to have storage for our containers
-Automate requirements from rolling out to rolling back, automatically provisioning storage, automatically attaching storage, and more
Healing
-It can look at all of the containers, grouped as “pods”, and redeploy and heal them to ensure your solution meets the defined requirements
Scaling
-Deploy more containers to meet demand
Considerations
-Managed Kubernetes cluster including integration with Azure / Azure AD
-Supports VMs and Azure Container Instances to host containers
-Includes a large variety of orchestration, healing, monitoring, and other features
-Only charged for the compute (VMs and ACI) required. Not for the control plane.
Implementation Overview
- Containers: Prepare the containers for your solution. Ensure they are in a registry accessible to AKS
- Cluster: Create an AKS cluster with your required networking, access control, nodes, etc
- Deploy: Deploy your solution using the Kubernetes manifest file (YAML) and kubectl
-kubectl is something we use to interact with the Kubernetes service
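The deploy step above typically uses a Kubernetes Deployment manifest. A minimal sketch (names and image are hypothetical; the image is assumed to live in an ACR the cluster can pull from):

```yaml
# Minimal Deployment manifest for an app pulled from ACR
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical app name
spec:
  replicas: 2               # run two pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myregistry.azurecr.io/web:v1   # image from your registry
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml` against the AKS cluster.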
Architecture
- Cluster: Managed Kubernetes cluster. No charge for the Control Plane
- Nodes: VMs or ACI that host the containers
- Pods: One or more containers are deployed to AKS as a Pod
AKS Networking Overview
Networking
- Network Type
-Kubenet: network to pods via nodes
–Each pod gets an IP address from a special address range called the pod CIDR or the pod address range (different from the VNet address range)
–Pods can initiate outbound connections (NATed through the node), but resources in peered VNets or on-premises networks can’t connect directly to pod IPs
-Azure-CNI: pods connect to VNet
–Each pod gets an IP address from the VNet (you can run out of IP addresses)
–Use this if you want connectivity to peered VNet resources or to networks connected via VPN or ExpressRoute
Services
- Services: Network connectivity is abstracted in Kubernetes as “Services”
-Standalone thing that you create and can be associated with your application
-When we create those services, we do so in something called “Service CIDR” or “Service Network”
-This is a network that needs to have a different address range than your VNet or any other network that you are connecting to
There are a number of different service types:
-Cluster IP: Expose the service on a cluster-internal IP (internal only)
–I want my web app to be able to talk to my API app, using a defined IP address and defined port
-NodePort: Expose the service on a fixed port via the node IP (allows port translation)
-Load Balancer: Expose the service via an Azure (Internal or External) Load Balancer
–Provide access to your pods that exist across multiple nodes
-ExternalName: Map the service to an external DNS name (returned as a CNAME record)
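A Service manifest tying these ideas together might look like this sketch (names and ports are hypothetical; the commented annotation is the Azure-specific switch for an internal load balancer):

```yaml
# Expose pods labelled app=web via an Azure Load Balancer
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  # uncomment for an *internal* Azure Load Balancer instead of a public one:
  # annotations:
  #   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer      # other types: ClusterIP, NodePort, ExternalName
  selector:
    app: web              # targets pods carrying this label
  ports:
    - port: 80            # port on the service IP (from the Service CIDR)
      targetPort: 8080    # hypothetical container port (port translation)
```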
- Network Policies
-Azure NPM: supports Az-CNI only
-Calico: supports Az-CNI and Kubenet
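Both engines enforce the standard Kubernetes NetworkPolicy API, so a policy looks the same either way. A sketch restricting ingress to API pods (all names and the port are hypothetical):

```yaml
# Only pods labelled app=web may reach pods labelled app=api on TCP 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
spec:
  podSelector:
    matchLabels:
      app: api            # the pods this policy protects
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web    # allowed source pods
      ports:
        - protocol: TCP
          port: 8080
```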
AKS Storage Overview
Volumes: A pod definition can declare a volume to read/write data
-Managed Disk: Tied to the pod lifecycle
-You can also use Azure Files for shared access
Persistent Volumes: Storage that is centrally managed (dynamic or static) by the API
-Our volumes will still be accessible to our pods, but they will be created and maintained centrally through the API.
-Centrally managed resource
-Could be a Managed Disk, Azure File, Blob Storage
Settings we should be aware of:
-Storage Class: Used to define required tiers of storage (e.g. Premium Managed Disk)
-Persistent Volume Claim: Used to request a volume of a specific class from the control plane (e.g. a claim for 100GiB of a slow storage class)
–Can be created and dynamically assigned
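The storage-class/claim pair can be sketched as two manifests; this example requests a Premium Managed Disk via the Azure Disk CSI driver (class and claim names are hypothetical):

```yaml
# Define a tier of storage...
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disk                 # hypothetical class name
provisioner: disk.csi.azure.com   # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS            # Premium Managed Disk tier
---
# ...then claim a volume of that class from the control plane
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-disk     # request the class defined above
  resources:
    requests:
      storage: 100Gi              # e.g. 100 GiB
```

A pod then mounts the claim by name; the control plane dynamically provisions and attaches the underlying disk.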
AKS Autoscale Overview
There are two things we need to scale:
- We want our pods to be able to scale in response to demand (they operate on underlying nodes)
- We need those nodes to scale as well
We need to scale both the node pools and the applications themselves
Configuration
- Cluster Autoscaler: Use autoscale to increase the number of nodes based on demand
-We can even use additional ACI to support any burst requirements (Burst Node)
-We can define that we want autoscaling, and then as additional pods are deployed, it will understand the resource utilization and increase or decrease nodes accordingly
- Horizontal Pod Autoscaler (HPA): Use HPA to increase the number of replicas
-Set the minimum and maximum number of replicas and it can scale in or out according to demand
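The HPA side can be sketched as a manifest (deployment name and thresholds are hypothetical):

```yaml
# Scale the "web" Deployment between 2 and 10 replicas on CPU load
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                        # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # add replicas above ~70% average CPU
```

The cluster autoscaler then adds or removes nodes so the scheduled replicas actually have somewhere to run.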
Azure Container Apps
Is a fully managed Kubernetes-based application platform that helps you deploy apps from code or containers without orchestrating complex infrastructure.
Considerations
-Kubernetes-like features, with the cluster and components managed for you
-Built for microservices developers to more easily get advanced capabilities
-Includes scaling (KEDA), service discovery (Dapr), load balancing (Envoy), and more
-Billing based on resources (per second) and requests. Can scale to ZERO
-No Kubernetes API or control plane access!
Implementation Overview
- Containers: Prepare the containers for your solution. Supports any public/private container registry
-We don’t have to create and configure the underlying cluster like we would with Azure Kubernetes Service; this is managed for us
- Deploy: Deploy your solution easily and fast using the Portal or command-line tools
Architecture
- Environment: Logging and VNet connectivity is configured at the environment level
-This is where your app will be deployed and is powered by Kubernetes
-We have to connect it to a VNet (your own or a managed one)
-Logging is automatically integrated with a Log Analytics workspace
- Container App: One (or more) Linux (x86-64) container(s) that make up your app
- Revision: Container app versioning. Uses immutable container app snapshots
-Enabled by default
-Ensures that when you release new versions of your application, you get the capability to roll back or roll forward as needed
-Also allows you to split the traffic
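Traffic splitting between revisions can be sketched in the container app's configuration (a fragment only; app/revision names and weights are hypothetical, and the full schema should be checked against the Container Apps YAML spec):

```yaml
# Ingress/traffic fragment of a container app definition
properties:
  configuration:
    activeRevisionsMode: Multiple     # keep more than one revision active
    ingress:
      external: true
      targetPort: 80
      traffic:
        - revisionName: myapp--v1     # 80% of traffic stays on the old revision
          weight: 80
        - revisionName: myapp--v2     # 20% canaries onto the new revision
          weight: 20
```

Shifting the weights (and eventually deactivating the old revision) gives a gradual roll-forward, while restoring them rolls back.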