AWS Flashcards
A best practice when it comes to designing architectures in the cloud; it allows common tasks to be repeated faster than doing them manually. Benefits include:
- Rapid testing and experimentation
- Reducing expenses
- Minimizing human error
- Rolling back changes safely and completely
Automation
Automation can be approached in two different ways: imperative and declarative.
This focuses on the specific step-by-step operations required to carry out a task.
Imperative Approach
This focuses on writing code that declares the desired result of the task, rather than how to carry it out. It requires some intelligent software to figure out the operations required to achieve the desired result.
Declarative Approach
Using code to define your infrastructure and configurations
Infrastructure as Code (IaC) Approach
Automatically creates and configures your AWS infrastructure from code that defines the resources you want it to create and how you want those resources configured.
CloudFormation
Automate the testing, building and deployment of applications to EC2 and on-premises instances
AWS Developer Tools: CodeCommit, CodeBuild, CodeDeploy, and CodePipeline
Automatically launches, configures and terminates EC2 instances as needed to meet fluctuating demand
EC2 Auto Scaling
Automates common operational tasks such as patching instances and backing up Elastic Block Store (EBS) volumes
Systems Manager
A collection of three different offerings that help automate instance configuration and application deployments using the popular Chef and Puppet configuration management platforms
OpsWorks
In CloudFormation, the code that defines your resources is stored in text files called ____. These use the proprietary CloudFormation language, which can be written in JavaScript Object Notation (JSON) or YAML format. They also function as infrastructure documentation.
Templates
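A minimal sketch of what such a template could look like in YAML; the logical name, AMI ID, and instance type below are placeholders, not values from the source.

```yaml
# Illustrative CloudFormation template (YAML). All values are placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Description: Launch a single EC2 instance

Parameters:
  InstanceTypeParam:
    Type: String
    Default: t3.micro

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678            # placeholder AMI ID
      InstanceType: !Ref InstanceTypeParam
      Tags:
        - Key: Name
          Value: web-server

Outputs:
  InstancePublicIp:
    Value: !GetAtt WebServer.PublicIp
```

Creating a stack from a template like this would produce the declared resources; deleting the stack deletes them with it.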
Templates can be stored in what
S3 buckets or a Git repository
A container that organizes the resources described in the template. The purpose of it is to collectively manage related resources. If you delete this, CloudFormation automatically deletes all of the resources in it.
A Stack
A JSON document, separate from a template, that specifies what resources may be updated. Create this to guard against accidental updates.
Stack Policy
A feature of CloudFormation that monitors your stacks for configuration changes made outside of CloudFormation and alerts you when they occur.
Drift Detection
Difference between CloudFormation and AWS CLI
CloudFormation: You simply adjust the template or parameters to change your resources, and CloudFormation figures out how to perform the changes. It is easier to update resources with CloudFormation than with the CLI.
CLI: It’s up to you to understand how to change each resource and to ensure that a change you make to one resource doesn’t break another.
Collection of tools designed to help application developers develop, build, test and deploy their application on EC2 and on-premises instances. These tools automate the tasks that must take place to get a new application revision released in production
AWS Developer Tools
AWS Developer Tools enable more than just application development, you can also use them as part of any IaC approach to automate the deployment and configuration of your AWS infrastructure. AWS Developer tools include:
- CodeCommit
- CodeBuild
- CodeDeploy
- CodePipeline
Lets you create private Git repositories that easily integrate with other AWS services.
CodeCommit
_____ is a version control system that you can use to store source code, CloudFormation templates, documents, or any arbitrary files, even binary files such as Amazon Machine Images (AMIs) and Docker images. These files are stored in a repository, colloquially known as ___.
Git
Repo
Git uses a process called ____ where all changes or commits to a repository are retained indefinitely, so you can always revert to an old version of a file if you need it.
Versioning
Useful for teams of people who need to collaborate on the same set of files, such as developers who collaborate on a shared source code base for an application. Allows users to check out code by copying or cloning it locally to their machine. Can make changes to it locally and then check it back in to the repository.
CodeCommit
Git uses a process called ____ to identify differences between different versions of a file.
Differencing
A set of actions performed on source code to get it ready for deployment. The specific actions depend on the application.
A Build
One of the primary purposes of this process is to run tests against the new code to ensure it works properly
CodeBuild
Automated testing. Developers check their new or modified code into a shared repository multiple times a day.
Continuous Integration (CI)
CodeBuild can get source code from
a CodeCommit, GitHub, or Bitbucket repository, or an S3 bucket
- Any outputs or artifacts that CodeBuild creates are stored in an S3 bucket, making them accessible to the rest of your AWS environment.
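A CodeBuild project is typically driven by a buildspec file stored with the source. As a sketch (the runtime, commands, and output directory below are hypothetical):

```yaml
# Hypothetical buildspec.yml; commands and paths are placeholders.
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm install
      - npm test              # run tests against the new code
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: dist        # build output uploaded to the project's S3 artifact location
```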
The CodeBuild environment always consists of
an operating system and a Docker image that can include a programming language runtime and tools.
AWS offers preconfigured build environments for Java, Ruby, Python, Go, Node.js, Android, .NET Core, PHP, and Docker.
You can choose from the following three compute types for your build environment:
- build.general1.small: 3 GB of memory, 2 vCPUs
- build.general1.medium: 7 GB of memory, 4 vCPUs
- build.general1.large: 15 GB of memory, 8 vCPUs
All of the compute types support Linux, while the medium and large types also support Windows.
Can automatically deploy applications to EC2 instances, the Elastic Container Service (ECS), Lambda, and even on-premises servers
Works by pulling source files from an S3 bucket, or a GitHub or Bitbucket repository
Does not offer the option to deploy from a CodeCommit repository, but does allow deploying from an S3 bucket
CodeDeploy
CodeDeploy Deploying to EC2 or On-Premises Instances
To deploy to an instance, you must install the CodeDeploy agent. The agent allows the CodeDeploy service to copy files to the instance and perform deployment tasks on it.
The agent has been tested with current versions of the following:
- Amazon Linux
- Ubuntu Server
- Microsoft Windows Server
- Red Hat Enterprise Linux (RHEL)
When it comes to upgrading deployments, CodeDeploy supports two different deployment types:
- In-place Deployment
- Blue/Green Deployment
Deploys the application to existing instances. If your application is running, CodeDeploy can stop it, perform any needed cleanup, deploy the new version, and then restart the application
In-place Deployment
Deploys your application to a new set of instances that you either create manually or have CodeDeploy create by replicating an existing Auto Scaling group.
Blue/Green Deployment
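For EC2 and on-premises deployments, the CodeDeploy agent follows an appspec.yml file packaged with the application revision. A minimal sketch of the stop/deploy/restart flow described above, with placeholder paths and script names:

```yaml
# Hypothetical appspec.yml for an EC2/on-premises deployment.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp        # placeholder install path
hooks:
  ApplicationStop:
    - location: scripts/stop_app.sh    # stop the running application
      timeout: 60
  AfterInstall:
    - location: scripts/configure.sh   # perform any needed cleanup/configuration
      timeout: 120
  ApplicationStart:
    - location: scripts/start_app.sh   # restart the application
      timeout: 60
```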
CodeDeploy Deploying to ECS
The process of deploying to ECS is similar to deploying to EC2 instances, except that instead of deploying application files to an instance, you deploy Docker images that run your application in containers
CodeDeploy Deploying to Lambda
Deploying a new Lambda application with CodeDeploy simply involves creating a new Lambda function. If you need to update an existing function, CodeDeploy just creates a new version of that function.
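For Lambda, the AppSpec file instead tells CodeDeploy which function version to shift an alias to. A sketch with hypothetical function, alias, and version values:

```yaml
# Hypothetical AppSpec file for a Lambda deployment.
version: 0.0
Resources:
  - myFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: my-function          # placeholder function name
        Alias: live                # alias whose traffic is shifted
        CurrentVersion: "1"
        TargetVersion: "2"         # new version CodeDeploy points the alias to
```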
Helps orchestrate and automate every task required to move software from development to production. Enables automation of certain tasks that the respective services don’t offer on their own.
CodePipeline
Can require manual approval before calling CodeDeploy to deploy the application to production. Deploying software this way is called
Continuous Delivery
A pipeline consists of stages, and each stage consists of one or more actions; actions can occur sequentially or in parallel. There are six types of actions that you can include in a pipeline:
- Source
- Build
- Test
- Approval
- Deploy
- Invoke
- All of these actions except the Approval action are performed by a provider, which, depending on the action, can be an AWS or third-party service (see the pipeline sketch below).
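A sketch of a three-stage pipeline (Source from CodeCommit, Build with CodeBuild, Deploy with CodeDeploy), expressed as a CloudFormation fragment; the repository, project, application, role, and bucket names are placeholders assumed to exist already.

```yaml
# Illustrative CodePipeline definition; all names and ARNs are placeholders.
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: arn:aws:iam::123456789012:role/pipeline-role   # placeholder role
    ArtifactStore:
      Type: S3
      Location: my-artifact-bucket                          # placeholder bucket
    Stages:
      - Name: Source
        Actions:
          - Name: FetchSource
            ActionTypeId: { Category: Source, Owner: AWS, Provider: CodeCommit, Version: "1" }
            Configuration: { RepositoryName: my-repo, BranchName: main }
            OutputArtifacts: [ { Name: SourceOutput } ]
      - Name: Build
        Actions:
          - Name: BuildAndTest
            ActionTypeId: { Category: Build, Owner: AWS, Provider: CodeBuild, Version: "1" }
            Configuration: { ProjectName: my-build-project }
            InputArtifacts: [ { Name: SourceOutput } ]
            OutputArtifacts: [ { Name: BuildOutput } ]
      - Name: Deploy
        Actions:
          - Name: DeployToEc2
            ActionTypeId: { Category: Deploy, Owner: AWS, Provider: CodeDeploy, Version: "1" }
            Configuration: { ApplicationName: my-app, DeploymentGroupName: my-deployment-group }
            InputArtifacts: [ { Name: BuildOutput } ]
```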
CodePipeline Providers
- Source Providers
- Build & Test Providers
- Deploy Providers
Providers that include S3, CodeCommit, GitHub, and the Elastic Container Registry (ECR), which stores Docker images for the Elastic Container Service.
Source Providers
CodeBuild and third party tools such as CloudBees, Jenkins, and TeamCity can provide building and testing services.
Build and Test Providers
Supports a number of deploy providers. The most common ones you’ll likely see are
Deploy Providers
- CloudFormation
- CodeDeploy
- ECS
- S3
Others: Elastic Beanstalk, OpsWorks Stacks, the AWS Service Catalog, and the Alexa Skills Kit.
CodePipeline can automatically deploy your AWS infrastructure using this. Developers can create their own templates that build complete test or development environments.
CloudFormation
This can only source application files from GitHub or S3. But you can configure CodePipeline to pull files from CodeCommit, package them up and put them in an S3 bucket for this to pick up and deploy
CodeDeploy
CodePipeline can deploy Docker containers directly here. By combining this with ECR for the source stage, you can use ECR as the source for images rather than having to keep your images in an S3 bucket
ECS
If you have a website hosted here. You can keep the HTML and other files for your website in a CodeCommit repo for versioning. If you want to update your website, you make your changes in the repo. CodePipeline detects the changes and copies the updates to your S3 bucket.
S3
This CodePipeline action invokes Lambda functions and it works only with AWS Lambda
Invoke Action
With this CodePipeline action, you can insert manual approvals anywhere in the pipeline after the source stage
Approval Action
Automatically launches preconfigured EC2 instances. The goal is to ensure you have just enough computing resources to meet user demand without overprovisioning
EC2 Auto Scaling
- Auto Scaling can save money by reducing your capacity when you don’t need it and improve performance by increasing it when you do.
Auto Scaling works by spawning instances from either a launch configuration or a launch template. Both achieve the same basic purpose of defining the instance’s characteristics, such as AMI, disk configuration, and instance type. The differences between launch configurations and launch templates are:
Launch Configuration: Can be used only with Auto Scaling and once you create a launch configuration, you can’t modify it.
Launch Templates: Newer and can be used to spawn EC2 instances manually, even without Auto Scaling. You can also modify them after you create them.
- Instances created by Auto Scaling are organized into:
- All instances in the group can be automatically registered with:
- This distributes traffic to the instances, spreading the demand out evenly among them:
- An Auto Scaling Group
- Application Load Balancer Target Group
- Application Load Balancer
When you configure an Auto Scaling group, you define a desired capacity: the number of instances that you want Auto Scaling to create. It then strives to maintain that number, automatically replacing failed instances (self-healing).
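A sketch in CloudFormation YAML of a launch template plus an Auto Scaling group that registers its instances with an ALB target group; all IDs, ARNs, subnets, and sizes below are placeholders.

```yaml
# Illustrative launch template and Auto Scaling group; values are placeholders.
WebLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      ImageId: ami-12345678          # placeholder AMI
      InstanceType: t3.micro

WebAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "1"
    MaxSize: "4"
    DesiredCapacity: "2"             # Auto Scaling strives to maintain this number
    LaunchTemplate:
      LaunchTemplateId: !Ref WebLaunchTemplate
      Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
    VPCZoneIdentifier:
      - subnet-11111111              # placeholder subnets
      - subnet-22222222
    TargetGroupARNs:
      - arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123  # placeholder
    HealthCheckType: ELB             # use the load balancer's application-level health check
    HealthCheckGracePeriod: 300
```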
Auto Scaling can use these two health checks to determine whether an instance is healthy:
- EC2 Health Checks
- Elastic Load Balancing (ELB) Health Checks
This health check considers the basic health of an instance, whether it’s running and whether it has network connectivity
EC2 Health Check
This health check looks at the health of the application running on an instance
Elastic Load Balancing (ELB) Health Check
Scaling actions control when Auto Scaling launches or terminates instances. You control how many instances are launched or terminated by specifying a min and max group size. These are the two types of scaling:
- Dynamic Scaling
- Scheduled Scaling
With this scaling type, Auto Scaling launches new instances in response to increased demand, using a process called scaling out. It can also scale in, terminating instances when demand drops. You scale in or out according to a metric such as:
Dynamic Scaling
- Average CPU utilization of your instances, or the number of concurrent application users.
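A sketch of a dynamic (target tracking) scaling policy that keeps average CPU utilization near 50 percent; it references the hypothetical group name from the sketch above.

```yaml
# Illustrative target tracking policy; the referenced group name is a placeholder.
CpuScalingPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref WebAutoScalingGroup
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ASGAverageCPUUtilization
      TargetValue: 50.0              # scale out above, scale in below this average CPU
```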
Auto Scaling can also scale in or out according to a schedule, which is useful if your demand has predictable peaks and valleys. A related feature looks at historical usage patterns, predicts future peaks, and automatically creates a scheduled scaling action to match; it needs at least one day’s worth of traffic data to create a scaling schedule.
Scheduled Scaling
- Predictive Scaling
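A scheduled scaling action might look like the sketch below; the recurrence (a cron expression, evaluated in UTC), group sizes, and group name are hypothetical.

```yaml
# Illustrative scheduled action: scale up at 08:00 UTC on weekdays.
MorningScaleUp:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref WebAutoScalingGroup   # placeholder group
    MinSize: 2
    MaxSize: 8
    DesiredCapacity: 4
    Recurrence: "0 8 * * MON-FRI"
```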
- An approach to ensure accurate and consistent configuration of your systems:
- While __ is concerned with carrying out a task, ___ is primarily concerned with enforcing and monitoring the internal configuration state of your instances to ensure they’re what you expect:
- Such configuration states primarily include:
- Configuration Management
- Automation, Configuration Management
- Operating System Configurations and what software is installed.
As with automation in general, configuration management tools use either imperative or declarative approaches. AWS offers both approaches using two tools to help you achieve configuration management of your EC2 and on-premises instances:
- Systems Manager: Uses Imperative Approach to get your instances and AWS environment into the state that you want.
- OpsWorks: Uses Declarative Approach
SYSTEMS MANAGER
- These are scripts that run once or periodically that get the system into the state you want:
- Using 1, you can:
- Systems Manager can run commands periodically or on a trigger, such as a new instance launch. Systems Manager requires this to be installed on the instances that you want it to manage:
- In addition to providing configuration management for instances, Systems Manager lets you perform many administrative AWS operations using these:
- Using this, you can deploy installable software packages to your instances. Create a zip archive, put the archive in an S3 bucket, and tell this to find it. It takes care of deploying and installing the software:
- Command Documents
- Install Software on an Instance, Install the latest security patches or take inventory of all software on an instance.
- An Agent
- Automation Documents
- EX: Automatically create a snapshot of an Elastic Block Store (EBS) volume, launch or terminate an instance, create a CloudFormation stack, or even create an AMI from an existing EBS volume.
- Systems Manager Distributor
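A sketch of a simple Command document of the kind described above, which runs a shell script on Linux instances; the description and patch command are illustrative only.

```yaml
# Hypothetical SSM Command document (schemaVersion 2.2).
schemaVersion: "2.2"
description: Install the latest security patches on a Linux instance
mainSteps:
  - action: aws:runShellScript
    name: applyPatches
    inputs:
      runCommand:
        - sudo yum update -y --security   # placeholder patch command
```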
- A set of three different services that let you take a declarative approach to configuration management:
- Uses two popular configuration management platforms. These can configure operating systems, deploy applications, create databases and perform just about any configuration task you can dream of, all using code:
- 1 comes in three different types to meet any configuration management need:
- OpsWorks
- Chef and Puppet Enterprise
- AWS OpsWorks for Puppet Enterprise, AWS OpsWorks for Chef Automate, and AWS OpsWorks Stacks
These are robust and scalable options that let you run managed servers on AWS. These would be a good fit if you want to use configuration management across all your instances
AWS OpsWorks for Puppet Enterprise and AWS OpsWorks for Chef Automate
Both are “all in” configuration management options. Their high-level architectures are similar: each consists of at least one Puppet master server or Chef server that communicates with your managed nodes (EC2 or on-premises instances) using an installed agent.
You define the configuration state of your instances (such as operating system configurations and applications) using Puppet modules or Chef recipes. OpsWorks manages the servers, but you’re responsible for understanding and operating the Puppet or Chef software.
Provides a simple and flexible approach to using configuration management just for deploying applications. Instead of going all in on configuration management, you can just use it for deploying and configuring applications. It then takes care of setting up the supporting infrastructure
AWS OpsWorks Stacks
– This lets you build your application infrastructure in stacks.
A collection of all the resources your application needs:
EC2 instances, databases, application load balancers, etc.
Each stack contains at least one layer, which is a container for some component of your application.
There are two basic types of layers that OpsWorks uses:
- OpsWorks Layers
- Service Layers
OpsWorks Stacks
A template for a set of instances. It specifies instance-level settings such as security groups and whether to use public IP addresses. It also includes an auto-healing option that automatically re-creates your instances if they fail. OpsWorks can also perform load-based or time-based auto scaling, adding more EC2 instances as needed to meet demand. It can provision Linux or Windows EC2 instances, or you can add existing Linux EC2 or on-premises instances to a stack.
- OpsWorks Layers
- Supports Amazon Linux, Ubuntu Server, CentOS, and Red Hat Enterprise Linux.
To configure your instances and deploy applications, OpsWorks uses the same declarative Chef recipes as the Chef Automate platform, but it doesn’t provision a Chef server. Stacks perform configuration management tasks using this
Chef Solo Client
- A stack can also include these to extend the functionality of your stack to include other AWS Services:
- It includes the following layers:
- OpsWorks Service Layers
- Relational Database Service (RDS): Using an RDS service layer, you can integrate your application with an existing RDS instance.
- Elastic Load Balancer (ELB): If you have multiple instances in a stack, you can create an application load balancer to distribute traffic to them and provide high availability.
- Elastic Container Service (ECS) Cluster: If you prefer to deploy your application to containers instead of EC2 instances, you can create an ECS Cluster layer that connects your OpsWorks stack to an existing ECS cluster.
Code that uses imperative commands specifies the exact steps to perform the task. These automation services use an imperative approach/language:
- AWS Systems Manager, CodeBuild, CodeDeploy
Code that uses declarative commands is more abstract: you specify the end result of the task. It is a results-oriented, user-friendly paradigm. These automation services use a declarative approach/language:
- CloudFormation, OpsWorks for Puppet Enterprise, OpsWorks for Chef Automate, and OpsWorks Stacks.
A form of automation that emphasizes configuration consistency and compliance
Configuration Management
When you automate infrastructure builds using code, the code simultaneously serves as
de facto documentation
-Code can be placed in Version Control, making it easy to track changes and even roll back when necessary.
- This involves developers regularly checking in code as they create or change it:
- Performs build and test actions against it and offers an immediate feedback loop so developers can fix issues fast:
- Expands and includes deploying the application to production after a manual approval. This effectively enables push-button deployment of an application to production:
- Continuous Integration
- Automated Process
- Continuous Delivery
- What is an advantage of using CloudFormation:
- What format does CloudFormation Templates support:
- Advantage of using Parameters in a Template:
- Why would you use CloudFormation to Automatically create resources for a development environment instead of creating them using AWS CLI commands:
- It lets you create multiple separate AWS environments using a single template.
- YAML & JSON
- Allow customizing a stack without changing the template.
- Resources CloudFormation creates are organized into stacks and can be managed as a single unit. CloudFormation stack updates help ensure that changes to one resource won’t break another.
- What are two features of CodeCommit:
- Which feature of CodeCommit can understand what code change introduced a bug:
- What software development practice regularly tests new code for bugs but doesn’t do any thing else:
- Versioning and Differencing
- Differencing
- Continuous Integration
- What does a CodeBuild environment always contain:
- What can a CodeDeploy Do:
- What is the minimum number of actions in a CodePipeline
- An operating system & Docker image
- Deploy an application to an on-premises Windows instance. Deploy a Docker container to the Elastic Container Service. Upgrade an application on an EC2 instance running Red Hat Enterprise Linux.
- 2 (a pipeline must consist of at least a source stage and a deploy stage)
- You want to predefine the configuration of EC2 instances that you plan to launch manually and using Auto Scaling. What resource must you use:
- What auto scaling feature creates a scaling schedule based on past usage patterns:
- What type of AWS System Manager document can run Bash or PowerShell Scripts on an EC2 instance:
- What type of AWS System Manager document can take a snapshot of an EC2 instance
- Launch Template
- Predictive Scaling
- Command Document
- Automation Document
- Which of the following OpsWorks services uses Chef recipes:
- What configuration management platforms does OpsWorks support:
- Which OpsWorks Stacks layer contains at least one EC2 instance:
- AWS OpsWorks Stacks
- Puppet Enterprise and Chef
- Only an OpsWorks Layer contains at least one EC2 instance.