AWS Compute Services - Elastic Container Service Flashcards

1
Q

Elastic Container Service (ECS)
A container management service to run, stop, and manage Docker containers on a cluster. ECS can be used to create a consistent deployment and build experience, manage and scale batch and Extract-Transform-Load (ETL) workloads, and build sophisticated application architectures on a microservices model.
ECS is a regional service

Features
You can create ECS clusters within a new or existing VPC
After a cluster is up and running, you can define task definitions and services that specify which Docker container images to run across your clusters.
AWS Compute SLA guarantees a monthly uptime percentage of at least 99.99% for ECS
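
A minimal sketch, using boto3 (the AWS SDK for Python), of creating an ECS cluster; the region and cluster name are placeholder assumptions, not values from this card:

import boto3

# Assumed region and cluster name, for illustration only.
ecs = boto3.client("ecs", region_name="us-east-1")

# Create a cluster (a regional, logical grouping of resources).
response = ecs.create_cluster(clusterName="demo-cluster")
print(response["cluster"]["clusterArn"])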

A

ECS

2
Q

Components- containers and images
Your application components must be architected to run in containers, which contain everything your software application needs to run: code, runtime, system tools, system libraries, etc.
Containers are created from a read-only template called an image.
Images are typically built from a Dockerfile, a plain-text file that specifies all of the components that are included in the container. These images are then stored in a registry from which they can be downloaded and run on your cluster.
When you launch a container instance, you have the option of passing user data to the instance. The data can be used to perform common automated configuration tasks and even run scripts when the instance boots (see the sketch below).
Docker volumes can be a local instance store volume, an EBS volume, or an EFS volume. Connect your Docker containers to these volumes using Docker drivers and plugins.
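
As a rough illustration of the user-data point above, this boto3 sketch launches an EC2 container instance and passes a user-data script that registers it with a cluster; the AMI ID, cluster name, and instance profile name are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data runs at boot; here it points the ECS agent at a cluster.
# Cluster name, AMI ID, and instance profile below are placeholders.
user_data = """#!/bin/bash
echo ECS_CLUSTER=demo-cluster >> /etc/ecs/ecs.config
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # an ECS-optimized AMI (placeholder ID)
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "ecsInstanceRole"},
    UserData=user_data,
)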

A

Containers and images

3
Q

Components - task components
Task definitions specify various parameters for your application. A task definition is a text file, in JSON format, that describes one or more containers (up to a maximum of ten) that form your application.
Task definitions are split into separate parts (a minimal register_task_definition sketch follows this list):
Task family - the name of the task; each family can have multiple revisions
IAM task role - specifies the permissions that containers in the task should have
Network mode - determines how the networking is configured for your containers
Container definitions - specify which images to use, how much CPU and memory each container is allocated, and many more options
Volumes - allow you to share data between containers and even persist the data on the container instance when the containers are no longer running
Task placement constraints - let you customize how your tasks are placed within the infrastructure
Launch types - determine which infrastructure your tasks use
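
A sketch, using boto3, of how these parts map onto a task definition; the family name, role ARN, image, and volume path are illustrative assumptions only:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Family, role ARN, image, and volume values below are hypothetical.
ecs.register_task_definition(
    family="web-app",                                           # task family
    taskRoleArn="arn:aws:iam::123456789012:role/webAppTaskRole",  # IAM task role
    networkMode="bridge",                                       # network mode
    containerDefinitions=[                                      # container definitions
        {
            "name": "web",
            "image": "nginx:latest",
            "cpu": 256,
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 80}],
            "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}],
        }
    ],
    volumes=[{"name": "shared-data", "host": {"sourcePath": "/ecs/shared-data"}}],  # volumes
    placementConstraints=[                                      # task placement constraints
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"}
    ],
    requiresCompatibilities=["EC2"],                            # launch types
)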

A

Task components

4
Q

Components - tasks and scheduling
A task is the instantiation of a task definition within a cluster. After you have created a task definition for your application, you can specify the number of tasks that will run on your cluster. Each task that uses the Fargate launch type has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
The task scheduler is responsible for placing tasks within the cluster. There are several different scheduling options available (see the sketch below):
REPLICA - places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across AZs. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON - deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. When using this strategy, there is no need to specify a desired number of tasks, a task placement strategy, or service auto scaling policies.
You can upload a new version of your application task definition, and the ECS scheduler automatically starts new containers using the updated image and stops containers running the previous version.
Amazon ECS tasks running on both Amazon EC2 and AWS Fargate can mount Amazon Elastic File System (EFS) file systems
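
A hedged boto3 illustration of the two scheduling strategies: one REPLICA service and one DAEMON service. The cluster, service, and task definition names are placeholder assumptions:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# REPLICA: maintain a desired count, spread across AZs by default.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="web-replica",
    taskDefinition="web-app:1",
    desiredCount=3,
    launchType="EC2",
    schedulingStrategy="REPLICA",
)

# DAEMON: exactly one task per active container instance; no desiredCount.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="log-agent-daemon",
    taskDefinition="log-agent:1",
    launchType="EC2",
    schedulingStrategy="DAEMON",
)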

A

Tasks and scheduling

5
Q

Clusters
When you run tasks using ECS, you place them in a cluster, which is a logical grouping of resources.
Clusters are region-specific
clusters can contain tasks using both the Fargate and EC2 launch types
When using the Fargate launch type with tasks within your cluster, ECS manages your cluster resources
When using the EC2 launch type, then your clusters are a group of container instances you manage. These clusters can contain multiple different container instance types, but each container instance may only be part of one cluster at a time. Before you can delete a cluster, you must delete the services and deregister the container instances inside that cluster.
Enabling managed ECS cluster auto scaling allows ECS to manage the scale-in and scale-out actions of the Auto Scaling group. On your behalf, ECS creates an Auto Scaling scaling plan with a target tracking scaling policy based on the target capacity value that you specify (see the sketch below).
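
A sketch of enabling managed cluster auto scaling by attaching a capacity provider with managed scaling to a cluster; the Auto Scaling group ARN and names are hypothetical:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Capacity provider wrapping an existing Auto Scaling group (ARN is a placeholder).
ecs.create_capacity_provider(
    name="demo-capacity-provider",
    autoScalingGroupProvider={
        "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:123456789012:"
                               "autoScalingGroup:abc-123:autoScalingGroupName/demo-asg",
        "managedScaling": {
            "status": "ENABLED",
            "targetCapacity": 100,  # ECS creates a target tracking policy for this value
        },
        "managedTerminationProtection": "DISABLED",
    },
)

# Associate the capacity provider with the cluster and make it the default.
ecs.put_cluster_capacity_providers(
    cluster="demo-cluster",
    capacityProviders=["demo-capacity-provider"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "demo-capacity-provider", "weight": 1}
    ],
)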

A

Clusters

6
Q

Services
ECS allows you to run and maintain a specified number of instances of a task definition simultaneously in a cluster
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind a load balancer.
There are two deployment strategies for ECS:
Rolling update - this involves the service scheduler replacing the current running version of the container with the latest version. The number of tasks ECS adds or removes from the service during a rolling update is controlled by the deployment configuration, which consists of the minimum and maximum number of tasks allowed during a service deployment.
Blue/Green deployment with AWS CodeDeploy - this deployment type allows you to verify a new deployment of a service before sending production traffic to it. The service must be configured to use either an application load balancer or network load balancer.
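
A boto3 sketch of the two deployment strategies; service names, counts, and the target group ARN are illustrative assumptions only:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Rolling update: the min/max percentages bound how many tasks ECS
# removes or adds during the deployment.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="web-rolling",
    taskDefinition="web-app:2",
    desiredCount=4,
    launchType="EC2",
    deploymentController={"type": "ECS"},
    deploymentConfiguration={"minimumHealthyPercent": 50, "maximumPercent": 200},
)

# Blue/green: deployments are handed to CodeDeploy, and the service must sit
# behind an ALB or NLB (the target group ARN is a placeholder).
ecs.create_service(
    cluster="demo-cluster",
    serviceName="web-bluegreen",
    taskDefinition="web-app:2",
    desiredCount=4,
    launchType="EC2",
    deploymentController={"type": "CODE_DEPLOY"},
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
        "containerName": "web",
        "containerPort": 80,
    }],
)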

A

Services

7
Q

Container Agent
The container agent runs on each infrastructure resource within an ECS cluster. It sends information about the resource’s current running tasks and resource utilization to ECS, and starts and stops tasks whenever it receives a request from ECS. The container agent is only supported on EC2 instances.
You can attach multiple target groups to your ECS services that are running on either EC2 or Fargate. This allows you to maintain a single ECS service that can serve traffic from both internal and external load balancers and support multiple paths based on routing rules and applications that need to expose more than one port.
The classic load balancer doesn’t allow you to run multiple copies of a task on the same instance. You must statically map port numbers on a container instance. However, an application load balancer uses dynamic port mapping, so you can run multiple tasks from a single service on the same container instance.
if a service’s task fails the load balancer health check criteria, the task is stopped and restarted. This process continues until your service reaches the number of desired running tasks.
Services with tasks that use the awsvpc network mode, such as those with the Fargate launch type, do not support classic load balancers. You must use a Network Load Balancer (NLB) instead for TCP.
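
A sketch of attaching two target groups (for example, one internal and one internet-facing load balancer) to a single service; both ARNs and all names are placeholders:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# One service registered with two target groups, so internal and external
# load balancers can route to the same containers. ARNs are hypothetical.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="web-multi-tg",
    taskDefinition="web-app:2",
    desiredCount=2,
    launchType="EC2",
    loadBalancers=[
        {"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/internal/aaa111",
         "containerName": "web", "containerPort": 80},
        {"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/external/bbb222",
         "containerName": "web", "containerPort": 80},
    ],
)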

A

Container agent

8
Q

Fargate
You can use Fargate with ECS to run containers without having to manage servers or clusters of EC2 instances. You no longer have to provision, configure, or scale clusters of virtual machines to run containers. Fargate only supports container images hosted on Elastic Container Registry or Docker Hub.
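
A minimal boto3 sketch of running a task on Fargate; the cluster, task definition, subnet, and security group values are assumptions:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Fargate launch type: no instances to manage. awsvpc networking is required,
# so the subnet and security group IDs below are placeholders.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="fargate-web:1",
    count=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)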

A

Fargate

9
Q

Task definitions for Fargate Launch Type
Fargate task definitions require that the network mode is set to awsvpc. The awsvpc network mode provides each task with its own elastic network interface.
Fargate task definitions require that you specify CPU and memory at the task level.
Fargate task definitions only support the awslogs log driver for the log configuration. This configures your Fargate tasks to send log information to CloudWatch logs.
Task storage is ephemeral. After a Fargate task stops, the storage is deleted. ECS tasks running on both EC2 and Fargate can mount Elastic File System file systems.
Put multiple containers in the same task definition if: containers share a common lifecycle; containers are required to run on the same underlying host; you want your containers to share resources; or your containers share data volumes.
Otherwise, define your containers in separate task definitions so that you can scale, provision, and deprovision them separately.
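
A sketch of a Fargate task definition pulling these requirements together (awsvpc mode, task-level CPU/memory, the awslogs driver, and an optional EFS volume); the role ARN, log group, and file system ID are placeholder assumptions:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="fargate-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",          # required for Fargate
    cpu="256",                     # task-level CPU (CPU units)
    memory="512",                  # task-level memory (MiB)
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "essential": True,
        "portMappings": [{"containerPort": 80}],
        "logConfiguration": {
            "logDriver": "awslogs",   # only supported log driver for Fargate per this card
            "options": {
                "awslogs-group": "/ecs/fargate-web",
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "web",
            },
        },
        "mountPoints": [{"sourceVolume": "shared", "containerPath": "/mnt/shared"}],
    }],
    volumes=[{"name": "shared",
              "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0",
                                         "rootDirectory": "/"}}],
)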

A

Task definitions for Fargate Launch Type

10
Q

Task Definitions for EC2 launch type
Create task definitions that group the containers that are used for a common purpose, and separate the different components into multiple task definitions. After you have your task definitions, you can create services from them to maintain the availability of your desired tasks. For EC2 tasks, the following are the types of data volumes that can be used: Docker volumes and bind mounts (see the sketch below).
Private repositories are only supported by the EC2 launch type
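
A sketch of the two EC2 data-volume types inside a task definition's volumes list; the family name, image, and paths are illustrative only:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="ec2-worker",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "worker",
        "image": "busybox:latest",
        "memory": 256,
        "essential": True,
        "command": ["sh", "-c", "while true; do sleep 30; done"],
        "mountPoints": [
            {"sourceVolume": "scratch", "containerPath": "/scratch"},
            {"sourceVolume": "host-data", "containerPath": "/data"},
        ],
    }],
    volumes=[
        # Docker volume managed by the Docker local driver.
        {"name": "scratch",
         "dockerVolumeConfiguration": {"scope": "task", "driver": "local"}},
        # Bind mount of a path on the container instance.
        {"name": "host-data", "host": {"sourcePath": "/ecs/data"}},
    ],
)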

A

Task Definitions for EC2 launch type

11
Q

Monitoring
You can configure your container instances to send log information to CloudWatch Logs. This enables you to view different logs from your container instances in one convenient location. With CloudWatch alarms, you can watch a single metric over a time period that you specify and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods (see the sketch below). You can also share log files between accounts and monitor CloudTrail log files in real time by sending them to CloudWatch Logs.
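
A sketch of a CloudWatch alarm watching a single ECS metric against a threshold; the cluster, service, and SNS topic ARN are placeholder assumptions:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm if average service CPU stays above 80% for three 60-second periods.
# The SNS topic ARN used as the alarm action is hypothetical.
cloudwatch.put_metric_alarm(
    AlarmName="demo-service-high-cpu",
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "demo-cluster"},
        {"Name": "ServiceName", "Value": "web-rolling"},
    ],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)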

A

Monitoring

12
Q

Tagging
ECS resources, including task definitions, clusters, tasks, services, and container instances, are assigned an Amazon Resource Name (ARN) and a unique resource identifier (ID). These resources can be tagged with values that you define, to help you organize and identify them.
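
A minimal sketch of tagging an ECS resource by its ARN; the service ARN and tag values are placeholders:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Tag a service by ARN (ARN and tag values are hypothetical).
ecs.tag_resource(
    resourceArn="arn:aws:ecs:us-east-1:123456789012:service/demo-cluster/web-rolling",
    tags=[
        {"key": "Environment", "value": "production"},
        {"key": "Team", "value": "platform"},
    ],
)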

A

Tagging

13
Q

Pricing
With Fargate, you pay for the amount of vCPU and memory resources that your containerized application requests. vCPU and memory resources are calculated from the time your container images are pulled until the ECS task terminates.
There is no additional charge for EC2 launch type. You pay for AWS resources (EC2 instances or EBS volumes) you create to store and run your applications.

A

Pricing

14
Q

Task placement strategy
A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. When a task that uses the EC2 launch type is launched, ECS must determine where to place the task based on the requirements specified in the task definition, such as CPU and memory. Similarly, when you scale down the task count, ECS must determine which tasks to terminate.
You can combine different strategy types to suit your application needs.
Task placement strategies are a best effort.
By default, Fargate tasks are spread across AZs
By default, ECS uses the following placement strategies: when you run tasks with the RunTask API action, tasks are placed randomly in a cluster. When you launch and terminate tasks with the CreateService API action, the service scheduler spreads the tasks across the AZs in a cluster.
You can update your placement strategies and constraints without having to recreate a service with the desired changes
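
A sketch of combining placement strategies on a RunTask call (spread across AZs, then binpack on memory within each AZ); the cluster and task definition names are assumptions:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Combine strategies: spread across AZs first, then binpack on memory.
# Cluster and task definition names are placeholders.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="ec2-worker:1",
    count=4,
    launchType="EC2",
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)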

A

Task placement strategy

15
Q

task placement constraint
A task placement constraint is a rule that is considered during task placement. You can use constraints to place tasks based on AZ or instance type. You can also associate attributes, which are name/value pairs, with your container instances and then use a constraint to place tasks based on those attributes (see the sketch below).
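
A sketch of constraining placement with the cluster query language; the instance type and the custom "stack" attribute name/value are illustrative assumptions:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Only place tasks on instances matching both expressions.
# "stack == prod" assumes a custom attribute set on the container instances.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="ec2-worker:1",
    count=2,
    launchType="EC2",
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:ecs.instance-type == t3.medium"},
        {"type": "memberOf", "expression": "attribute:stack == prod"},
    ],
)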

A

task placement constraint

16
Q

Binpack
Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use and allows you to be cost-efficient. For example, you have running tasks on c5.2xlarge instances that are known to be CPU intensive but not memory intensive. You can maximize your instances' memory utilization by launching tasks on them instead of spawning a new instance.

A

Binpack

17
Q

Random

Place tasks randomly. You use this strategy when task placement or termination does not matter.

A

Random

18
Q

Spread
Place tasks evenly based on the specified value. Accepted values are attribute key-value pairs, instanceId, or host. Spread is typically used to achieve high availability by making sure that multiple copies of a task are scheduled across multiple instances. Spread across AZs is the default placement strategy used for services (see the sketch below).
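
A sketch of a service that spreads tasks first across AZs and then across distinct instance IDs; all names are placeholder assumptions:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Spread copies of the task across AZs, then across distinct instances.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="web-spread",
    taskDefinition="web-app:2",
    desiredCount=6,
    launchType="EC2",
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "spread", "field": "instanceId"},
    ],
)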

A

Spread