S.A.A. Flashcards

1
Q

CH 1
PG 5
Three models of cloud computing

A
IaaS
Customer manages:
Application
Runtime
Security
Database
AWS manages:
Servers
Virtualization
Server Hardware
Storage
Networking

PaaS
Customer manages:
Application
AWS manages everything else

SaaS
AWS manages everything

2
Q

CH1
PG5
What are the three cloud computing Deployment Models?

What are the numbers of:
Regions?
AZs?
Edge locations?

A

The three deployment models are: All-in cloud, Hybrid, and On-premises (private cloud)

Regions: 18
AZs: 53
Edge locations: 18

3
Q

CH1
Pg 9

What are the important industry certifications AWS has earned?

A
  • SOC 1/SSAE 16/ISAE 3402/ (formerly SAS 70)
  • SOC 2
  • SOC 3
  • FISMA, DIACAP, and FedRAMP
  • DOD CSM Levels 1-5
  • PCI DSS Level 1
  • ISO 9001/ ISO 27001
  • ITAR
  • FIPS 140-2
  • MTCS Level 3
  • Cloud Security Alliance (CSA)
  • Family Educational Rights and Privacy Act (FERPA)
  • Criminal Justice Information Services (CJIS)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • Motion Picture Association of America (MPAA)
4
Q

CH1
PG11

Compute:

Amazon Elastic Compute Cloud (EC2)

Amazon EC2 Auto Scaling

AWS Lambda

EC2 container service

A

EC2 = virtual server instances; around 30 different instance types are available, including compute optimized, memory optimized, and GPU optimized.

EC2 Auto Scaling = automatically scales EC2 instances up or down, helping you create a highly available architecture. It also ensures you are always running the desired number of instances.

AWS Lambda = enables you to run code without provisioning or managing any servers or infrastructure. It scales automatically, and you pay only when the code is running.

EC2 Container Service = allows you to run Docker containers on Amazon EC2 instances. It is managed with API calls. With ECS, you don't have to install, scale, or operate your own cluster management infrastructure.
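To make the Lambda model concrete, here is a minimal handler sketch in Python; the function name follows the Lambda convention, but the event shape and greeting logic are illustrative, not from the book:

```python
# Minimal AWS Lambda handler sketch: Lambda invokes this function with an
# event payload and a context object; there are no servers to manage.
import json

def lambda_handler(event, context):
    # "name" is a hypothetical field in the incoming event payload
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally for illustration (Lambda normally supplies event/context):
print(lambda_handler({"name": "AWS"}, None))
```

Because billing stops when the function returns, this pay-per-invocation model is what "only pay when the code is running" refers to.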

5
Q

CH1
PG12

Compute:

Elastic Beanstalk?

LightSail?

Batch?

A

Elastic Beanstalk = lets you run and manage web applications without worrying about the underlying infrastructure. You simply upload your application, and Elastic Beanstalk automatically handles deployment, load balancing, auto scaling, and application health monitoring.

Lightsail = great for SMBs, developers, students, and anyone who needs a simple virtual private server (VPS) solution. Lightsail provides storage, networking capacity, and compute capabilities to manage and deploy web sites and web applications in the cloud. It is a one-stop shop to launch your project instantly.

Batch = allows you to run thousands of batch computing jobs on AWS. Batch dynamically provisions the optimal type and quantity of compute resources, such as memory-optimized, CPU-intensive, or storage-optimized instances.

6
Q

CH1
PG 12

Networking:

Virtual Private Cloud?

Route 53?

Elastic Load Balancing?

Direct Connect?

A

Virtual Private Cloud = allows you to isolate cloud resources within your own private virtual network. A VPC is like your own data center in the cloud.

Route 53 = is a Domain Name System (DNS) web service with a 100 percent uptime SLA. It supports both IPv4 and IPv6.

Elastic Load Balancing = allows you to automatically distribute the load across multiple Amazon EC2 instances. It supports load balancing of HTTP, HTTPS, and TCP traffic to EC2 instances and can be integrated with Auto Scaling.

Direct Connect = establishes private, dedicated network connectivity from your data center to AWS.

7
Q

CH1
PG13

Security and Compliance:

Identity and Access Management?

Inspector

Certificate Manger

Directory Service

A

Identity and Access Management = (IAM) is used to create users, groups, and roles. It is also used to manage and control access to AWS services and resources. It can be federated with other systems, allowing existing identities (users, groups, and roles) of your enterprise to access AWS resources.

Inspector = is an automated security assessment service that helps you identify security vulnerabilities in your application when it is being deployed as well as when it is running in a production system. It also assesses whether an application is deviating from best practices.

Certificate Manager = is used to manage Secure Sockets Layer (SSL) certificates for use with AWS services. With ACM you can provision, manage, and deploy SSL/Transport Layer Security (TLS) certificates. It is also used to obtain, renew, and import certificates.

Directory Service = is a managed directory service built on Microsoft Active Directory that can be used to manage AD in the cloud. It enables single sign-on and policy management.

8
Q

CH1
PG13

Security and Compliance:

Web Application Firewall?

Shield?

A

Web Application Firewall = (WAF) detects malicious traffic targeted at web applications. WAF can be used to create rules that protect against SQL injection and cross-site scripting.

Shield = is a managed service that protects against distributed denial of service (DDoS) attacks targeted at web applications.
Standard – free; protects against the most commonly occurring DDoS attacks
Advanced – includes additional protection for Elastic Load Balancing, Amazon CloudFront, and Amazon Route 53

9
Q

CH 1
PG 14

Storage and Content Delivery:

Simple Storage Service (S3)?

Glacier?

Elastic Block Storage?

Elastic File System?

A

S3 = is storage for the Internet, also used as an object store. It lets you store and retrieve any amount of data, at any time, from anywhere on the Web. It is highly scalable, reliable, and secure. A single object can't exceed 5 TB.

Glacier = is low-cost cloud storage mainly used for data archiving and long-term backup. There is no limit to the amount stored; it is cheaper than S3, and you pay only for what you use.

Elastic Block Store = choose from either magnetic or SSD volumes. EBS volumes are automatically replicated within their AZ to provide fault tolerance and high availability. You can also create snapshots of EBS volumes.

Elastic File System = is a fully managed service that provides easy, scalable, shared file storage for use with Amazon EC2 instances.

10
Q

CH1
PG 15

Storage and Content Delivery:

Storage Gateway?

Import/Export Options?

CloudFront?

A

Storage Gateway = helps integrate on-premises storage with AWS cloud storage. It is delivered as a virtual machine installed in an on-premises data center. It can be connected as a file server or as a local disk, and it can be integrated with Amazon S3, Amazon EBS, and Amazon Glacier.

Import/Export Options = can be done with Snowball (80 TB or 50 TB versions). Another option is Direct Connect.

CloudFront = is the global content delivery network (CDN). It helps accelerate the delivery of the static content of your web sites, including photos, videos, and other web assets. It can also be used to deliver dynamic content.

11
Q

CH1
PG 16

Database:

Relational Database Service?

DynamoDB?

Redshift?

ElastiCache?

Aurora?

A

Relational Database Service = is a fully managed relational database service. RDS supports MySQL, Oracle, SQL Server, PostgreSQL, and MariaDB. It also supports Amazon's own database, Aurora. It can scale up or down.

DynamoDB = is the fully managed NoSQL database service of AWS. It is highly scalable, durable, and highly available and is capable of handling any data volume. It delivers single-digit-millisecond latency at any scale, with no need for database administration. It is a great fit for mobile, web, gaming, and Internet of Things (IoT) applications.

Redshift = is a fully managed petabyte-scale data warehouse service. It stores data in columnar format, providing better I/O efficiency, and is continuously backed up to S3.

ElastiCache = is a service that helps in deploying in-memory cache engines: Redis and Memcached. Since it is managed, AWS takes care of patching, monitoring, failure recovery, and backup. It can also be integrated with CloudWatch and SNS.

Aurora = is Amazon's relational database built for the cloud. It supports two open source RDBMS engines, MySQL and PostgreSQL, and supports databases up to 64 TB. By default it is mirrored across three AZs, and six copies of the data are kept. You can create up to 15 read replicas.

12
Q

CH1
PG 17

Analytics:

Athena?

EMR?

ElasticSearch Service?

CloudSearch?

Data Pipeline?

A

Athena = is a serverless interactive query service that enables users to easily analyze data in S3 using standard SQL. No infrastructure setup or management is required for end users. It uses Presto with full standard SQL support and works with a variety of standard formats: JSON, ORC, CSV, Avro, and Apache Parquet.

EMR = is a web service that enables users, businesses, enterprises, data analysts, researchers, and developers to process enormous amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon S3 and Amazon EC2.

Elasticsearch Service = is a fully managed web service that makes it easy to create, deploy, operate, and scale Elasticsearch clusters.

CloudSearch = is a fully managed web service that allows you to set up, manage, and scale a search solution for your applications or web site. It supports 34 languages.

Data Pipeline = is a web service that helps you reliably process and move data between different AWS compute and storage services at specified intervals.

13
Q

CH1
PG 18

Analytics:

Kinesis?

QuickSight?

A

Kinesis = is a fully managed service that collects, analyzes, and processes real-time streaming data. This enables users to get timely insights and react quickly to new information.

QuickSight = is a cloud-powered, fully managed business analytics service that makes it easy to build visualizations, perform ad hoc analysis, and quickly get insight from your data.

14
Q

CH1
PG 18

Application Services:

Amazon API Gateway?

Step Functions?

Simple Workflow Service?

Elastic Transcoder?

A

API Gateway = is a fully managed service that provides developers with a scalable, flexible, pay-as-you-go service that handles all aspects of building, deploying, and operating robust APIs for application back-end services.

Step Functions = is a fully managed service that enables users to efficiently and securely coordinate the components of distributed applications and microservices using visual workflows. The service provides a graphical interface for users to visualize and arrange the components of their applications, making it easy to build and run multistep applications.

Simple Workflow Service = SWF is a web-based cloud service that coordinates work across distributed application components. It enables applications for a range of use cases, including web application back ends, media processing, business process workflows, and data analytics pipelines, to be designed as a coordination of jobs and tasks.

Elastic Transcoder = converts (or transcodes) video and audio files from their source format into output formats that can be played back on various devices such as smartphones, tablets, desktops, and TVs.

15
Q

CH1
PG 19

Developer Tools:

CodeCommit?

CodePipeline?

CodeBuild?

CodeDeploy?

A

CodeCommit = is a fully managed source control service that hosts highly scalable private Git repositories.

CodePipeline = is a fully managed continuous integration and continuous delivery service for quick, reliable application and infrastructure updates. CodePipeline builds, tests, and deploys code every time the code is modified, updated, and checked in, based on the release process models you define.

CodeBuild = is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy, eliminating the need to provision, manage, and scale build servers.

CodeDeploy = is a fully managed service that automates code deployments to any instance or server, including Amazon EC2 instances and servers running on-premises.

16
Q

CH1
PG20

Management Tools:

CloudFormation?

Service Catalog?

OpsWorks?

CloudWatch?

A

CloudFormation = helps automate resource provisioning using declarative templates and deploying resource stacks. It gives developers and systems administrators an easy way to create and manage collections of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Service Catalog = allows IT administrators to create, manage, and distribute catalogs of approved products to end users, who can then access the products they need in a personalized portal.

OpsWorks = OpsWorks for Chef Automate provides a fully managed Chef server and a suite of automation tools that give you workflow automation for continuous deployment, automated testing for compliance and security, and a user interface that gives you visibility into your nodes and their status. The Chef server gives you full-stack automation by handling operational tasks such as software and operating system configurations, package installations, database setups, and more.

CloudWatch = is a monitoring service for AWS cloud resources and the applications you run on AWS. It is used to collect and track metrics, collect and monitor log files, and set alarms. It gives you system-wide visibility into resource utilization, application performance, and operational health.

17
Q

CH1
PG 21

Management Tools:

AWS Config?

AWS CloudTrail?

A

AWS Config = is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. It enables compliance auditing, security analysis, resource change tracking, and troubleshooting.

AWS CloudTrail = is a managed web service that records AWS API calls and user activity in your account and delivers log files to you via Amazon S3. It provides visibility into user activity by recording API calls made on your account.

18
Q

CH1
PG 21

Messaging:

Simple Notification Service?

Simple Email Service?

Simple Queue Service?

A

Simple Notification Service = is a scalable, flexible, and cost-effective web service that makes it easy to configure, operate, and send notifications from the cloud. SNS lets you publish messages from an application and immediately deliver them to subscribers or other applications.

Simple Email Service = SES is a cost-effective email service for sending and receiving email using your own email addresses and domains.

Simple Queue Service = SQS is a managed web service that gives you access to message queues to store messages while they wait to be processed.

19
Q

CH1
PG 22

Migration:

Application Discovery Service?

Database Migration Service?

Snowball?

Server Migration Service?

A

Application Discovery Service = enables you to quickly and reliably plan application migration projects by automatically identifying the applications running in on-premises data centers and mapping their associated dependencies and performance profiles.

Database Migration Service = helps you migrate databases to AWS reliably and securely. The source database remains fully operational during the migration, minimizing downtime. Data can be migrated homogeneously or heterogeneously.

Snowball = helps transfer petabyte-scale amounts of data into and out of the AWS cloud.

Server Migration Service = SMS is an agentless service that helps coordinate, automate, schedule, and track large-scale server migrations.

20
Q

CH1
PG 22

Artificial Intelligence:

Lex?

Polly?

Rekognition?

Machine Learning?

A

Lex = is a fully managed service for building conversational chatbot interfaces using voice and text. It provides high-quality language-understanding capabilities and speech recognition.

Polly = converts text into lifelike speech. It enables existing applications to speak, creating the opportunity for entirely new categories of speech-enabled products, including chatbots, cars, mobile apps, devices, and web applications.

Rekognition = is a fully managed, easy-to-use, reliable, and efficient image recognition service powered by deep learning. Its APIs detect thousands of scenes and objects, analyze faces, compare faces to measure similarity, and identify faces in a collection of faces.

Machine Learning = is a fully managed machine learning service that allows you to efficiently build predictive applications, including demand forecasting, fraud detection, and click prediction.

21
Q

CH1
PG 23

Internet of Things:

IoT Platform?

IoT Greengrass?

IoT Button?

A

IoT Platform = is a fully managed cloud platform that lets connected devices interact with cloud applications and other devices securely and efficiently.

IoT Greengrass = is a software solution that lets you run local compute, messaging and data caching for connected IoT devices in an efficient and secure way. It enables you to run Lambda functions, keep data in sync and communicate with other devices securely, even when Internet connectivity is not possible.

IoT Button = is a programmable button based on the Amazon Dash Button hardware. This simple Wi-Fi device is easy to configure and designed for developers to get started with AWS IoT, AWS Lambda, and Amazon DynamoDB.

22
Q

CH 1
PG 24

Mobile services:

Cognito?

Mobile Hub?

Device Farm?

Mobile Analytics?

A

Cognito = is a web service that lets users sign up for and sign in to your mobile and web apps quickly and reliably. It lets you authenticate users through social identity providers such as Twitter, Facebook, or Amazon, and through other identity solutions, without writing device-specific code.

Mobile Hub = lets you select and configure features to add to your mobile app. AWS Mobile Hub helps integrate the various AWS services, client SDKs, and client integration code needed to quickly and easily add new features and capabilities to your mobile app.

Device Farm = lets you test mobile apps on real mobile devices and tablets.

Mobile Analytics = enables you to measure app usage and revenue. It helps you track key trends and patterns such as new users versus returning users, user retention, and app revenue.

23
Q

CH2
PG 29

Storage:

Advantages of Simple Storage Service (S3)?

A

Simple – intuitive graphical web-based console; there is also a mobile app for managing S3. For easy third-party integration, S3 provides REST APIs and SDKs.
Scalable – can store unlimited data.
Durable – designed for 99.999999999 percent (11 9's) durability.
Secured – supports encryption, and data is automatically encrypted once uploaded. Supports SSL and IAM.
High performance – lets you choose the AWS region closest to your end users to reduce latency. Also integrated with CloudFront.
Available – has 99.99 percent availability annually, giving the following potential unavailability:
Daily: 8.6 seconds
Weekly: 1 minute and 0.5 seconds
Monthly: 4 minutes and 23 seconds
Yearly: 52 minutes and 35.7 seconds
Easy integration – can be easily integrated with third-party tools, so it is easy to build applications on top of S3.
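The downtime figures above can be checked with a few lines of arithmetic; the monthly and yearly values assume average month and year lengths, which is an assumption on my part to match the card's numbers:

```python
# Worked check of the downtime implied by 99.99 percent availability.
unavailable = 1 - 0.9999   # fraction of time the service may be down

day = 24 * 3600            # 86,400 seconds
week = 7 * day
month = 30.44 * day        # average month length (assumption)
year = 365.25 * day        # average year length (assumption)

print(f"Daily:   {unavailable * day:.1f} s")   # ~8.6 s
print(f"Weekly:  {unavailable * week:.1f} s")  # ~60.5 s = 1 min 0.5 s
print(f"Monthly: {unavailable * month:.0f} s") # ~263 s  = ~4 min 23 s
print(f"Yearly:  {unavailable * year:.0f} s")  # ~3156 s = ~52 min 36 s
```

The daily, weekly, and monthly values match the card; the yearly value lands within a second of the card's 52 minutes 35.7 seconds, the difference coming from how the year length is rounded.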

24
Q

CH2
PG30

Usage of Amazon S3 in Real Life?

A

Backup – popular for backing up files since its durability is 99.999999999 percent. It also provides versioning capability.
Tape replacement – S3 has replaced magnetic tapes.
Static web site hosting – S3 is scalable, can handle any amount of traffic, and can store unlimited data.
Application hosting – used for hosting mobile and Internet-based apps. You can access and deploy your application from anywhere in the world.
Disaster recovery – S3 supports cross-region replication; you can automatically replicate each S3 object to a bucket in a different region.
Content distribution – S3 is often used to distribute content over the Internet. The content can be anything, such as files, photos, media, and so on. S3 can also be used as a software delivery platform, distributing either directly or via CloudFront.
Data lake – a central place for storing massive amounts of data that can be processed, analyzed, and consumed by different business units in an organization. S3 is often used with EMR, Redshift, Redshift Spectrum, Athena, Glue, and QuickSight for running big data analytics.
Private repository – using Amazon S3, you can create your own private repository, like Git, YUM, or Maven.

25
Q

CH2
PG32

S3 basic concepts?

A

Bucket – is actually a container for storing objects. Cannot have two buckets with the same name even across multiple regions. Buckets serve the following purposes
• Organizes S3 namespace at the highest level
• Identifies the accounts responsible for charges
• Plays a role in access control
• Serves as the unit of aggregation for usage reporting
By default the data in a bucket is not replicated to any other region unless you do it manually or by using cross-region replication. An object stored in a region never leaves that region unless you explicitly transfer it to a different region.

S3 is accessible through APIs, which allow developers to write applications on top of S3. The fundamental interface is the Representational State Transfer (REST) API. S3 does support SOAP over HTTP, but it is deprecated; use the REST API instead of SOAP.
Using the REST API you can create, read, update, delete, and list.
HTTPS is preferred over HTTP since it is secure.
Verbs
• GET = Read
• PUT = Create
• DELETE = Delete
• POST = Create
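As a sketch of how those verbs map onto object operations: the bucket and key below are hypothetical, and real requests also need Signature Version 4 authentication headers, which SDKs such as boto3 add for you.

```python
# How S3's REST interface maps CRUD operations onto HTTP verbs.
S3_VERBS = {
    "create": "PUT",     # PUT /key uploads an object (POST also creates, via forms)
    "read":   "GET",     # GET /key downloads an object; GET / lists the bucket
    "update": "PUT",     # S3 objects are immutable, so an update is a fresh PUT
    "delete": "DELETE",  # DELETE /key removes an object
}

def s3_request_line(operation, bucket, key):
    """Return the HTTP verb and URL for a CRUD operation on an object."""
    verb = S3_VERBS[operation]
    return f"{verb} https://{bucket}.s3.amazonaws.com/{key}"

print(s3_request_line("read", "example-bucket", "photos/cat.jpg"))
# GET https://example-bucket.s3.amazonaws.com/photos/cat.jpg
```

Note that "update" reuses PUT: since objects are write-once, updating means uploading a new object under the same key (which is also why versioning matters).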

26
Q

CH2
PG 35

Steps for installing AWS Command Line Interface?

S3 Data Consistency Model?

A

Steps for installing the AWS Command Line Interface = the AWS CLI is distributed for Linux, Windows, and macOS primarily via pip, a package manager for Python that provides an easy way to install Python packages and their dependencies.
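A typical install sequence under that pip-based distribution looks like this; these are standard pip and aws commands, though exact flags can vary by platform and Python setup:

```shell
# Install (or upgrade) the AWS CLI into the current user's environment
pip install awscli --upgrade --user

# Verify the installation
aws --version

# Configure credentials: prompts for access key ID, secret access key,
# default region, and default output format
aws configure
```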

S3 Data Consistency Model = S3 is intended to be a "write once, read many times" storage system; therefore, its infrastructure is different from traditional SAN architecture. The entire architecture is redundant. S3 Standard uses a minimum of three AZs to store the data. S3 does not support object locking, which means that if there are requests to update the same file concurrently (PUT requests), the request with the latest timestamp wins.

The name of an S3 bucket is unique, and by combining the bucket name and object name (key), every object can be identified uniquely across the globe.

27
Q

CH2
PG 40

Encryption in Amazon S3:

SSE with Amazon S3 Key Management (SSE-S3)?

SSE with customer-provided keys (SSE-C)?

SSE with AWS Key Management Service KMS (SSE-KMS)?

A

Side note: if you upload data using HTTPS and SSL-encrypted endpoints, the data is automatically secure for all uploads and downloads, and the data remains encrypted in transit.

SSE with Amazon S3 Key Management (SSE-S3) = in this case, Amazon S3 encrypts your data at rest and manages the encryption keys for you.
• Each object is encrypted using a per-object key
• The per-object key is encrypted using a master key
• The master key is managed by S3
• Can be turned on through the S3 console, the command-line interface, or an SDK

SSE with customer-provided keys (SSE-C) = Amazon encrypts your data at rest using the custom encryption keys that you provide. To use SSE-C, simply include your custom encryption key in your upload request, and Amazon S3 encrypts the object using that key and securely stores the encrypted data at rest.

SSE with AWS Key Management Service KMS (SSE-KMS) = with this option there are separate permissions for the use of the master key, providing an additional layer of control as well as protection against unauthorized access to your objects. KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission.
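As a sketch, the three modes differ only in the parameters sent with an upload request. The parameter names below follow the S3 API / boto3 `put_object` conventions; the bucket, key, and KMS key ARN are hypothetical placeholders.

```python
# Request parameters for each S3 server-side encryption mode.
import base64
import os

common = {"Bucket": "example-bucket", "Key": "report.pdf", "Body": b"data"}

# SSE-S3: S3 encrypts with keys it manages (AES-256)
sse_s3 = dict(common, ServerSideEncryption="AES256")

# SSE-KMS: S3 encrypts with a master key managed in AWS KMS (audit trail included)
sse_kms = dict(common, ServerSideEncryption="aws:kms",
               SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/example")

# SSE-C: you supply the 256-bit key with every request; S3 uses it, then discards it
customer_key = os.urandom(32)
sse_c = dict(common, SSECustomerAlgorithm="AES256",
             SSECustomerKey=base64.b64encode(customer_key).decode())

# Each dict would be passed as keyword arguments, e.g.:
#   boto3.client("s3").put_object(**sse_s3)
print(sorted(k for k in sse_kms if k.startswith("SSE") or k == "ServerSideEncryption"))
```

With SSE-C, note that losing the customer key means losing the data: S3 stores only a salted HMAC of the key for validation, not the key itself.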

28
Q

CH2
PG 44

Amazon S3 Storage Class:

S3 standard?

S3 standard Infrequent Access (IA)?

S3 Reduced Redundancy Storage (RRS)?

S3 One Zone-Infrequent Access (S3 One Zone IA)?

Glacier?

A

S3 Standard = is the default for frequently accessed data. The most common usages are web sites, content storage, big data analytics, and mobile applications. It is designed for 99.999999999 percent (11 9's) durability and supports SSL encryption of data.

S3 Standard-Infrequent Access (IA) = for data accessed less frequently. It has the same durability, but its availability is 99.9 percent over a given year. Cost is much cheaper than S3 Standard, which makes it economical for long-term storage, backup, and disaster recovery.

S3 Reduced Redundancy Storage (RRS) = is used to store noncritical, nonproduction data. It is often used for storing data that can be easily reproduced. RRS has 99.99 percent durability and availability. It is designed to sustain the loss of data in a single facility.

S3 One Zone-Infrequent Access (S3 One Zone-IA) = is for data accessed less frequently but requiring rapid access when needed. It has the same high durability, throughput, and low latency as S3 Standard but costs 20 percent less.

Glacier = is the storage class mainly used for data archiving. It provides 99.999999999 percent durability of objects. Retrieval options:
• Expedited retrieval: 1-5 minutes
• Standard retrieval: 3-5 hours
• Bulk retrieval: 5-12 hours
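Retrieving an archived object means issuing a restore request that names one of those tiers. Here is a sketch of the request body, shaped after the S3 RestoreObject API (boto3's `restore_object`); the day count and tier are illustrative:

```python
# Glacier retrieval tiers and their typical latencies (from the card above).
RETRIEVAL_TIERS = {
    "Expedited": "1-5 minutes",
    "Standard":  "3-5 hours",
    "Bulk":      "5-12 hours",
}

def restore_request(days, tier):
    """Build a restore-request body: keep the restored copy for `days` days."""
    if tier not in RETRIEVAL_TIERS:
        raise ValueError(f"unknown retrieval tier: {tier}")
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}

# e.g. boto3.client("s3").restore_object(
#          Bucket="example-bucket", Key="archive.tar",
#          RestoreRequest=restore_request(7, "Standard"))
print(restore_request(7, "Standard"))
```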

29
Q

CH2
PG46

Versioning of Objects in Amazon S3?

Amazon S3 Object Lifecycle Management?

Amazon S3 Cross Region Replication?

A

Versioning of Objects in Amazon S3 = is like an insurance policy: you know that regardless of what happens, your file is safe. Once you enable versioning you can't disable it; however, you can suspend versioning to stop the versioning of objects.

Amazon S3 Object Lifecycle Management =
• Transition action – means you can define when objects are transitioned to another storage class. For example, you may want to move log files older than seven days to S3 Standard-IA.
• Expiration action – in this case you define what happens when objects expire. For example, if you delete a file from S3, what should happen to that file?

Amazon S3 Cross-Region Replication = if you want to automatically copy files from one region to another, you need to enable cross-region replication. If you don't enable versioning, you won't be able to do cross-region replication and will get an error. CRR copies only new objects; if you have preexisting files in the bucket, you must copy them manually.
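A lifecycle configuration combining both action types might be sketched like this. The shape follows S3's PutBucketLifecycleConfiguration API (boto3: `put_bucket_lifecycle_configuration`); the prefix and day counts are illustrative:

```python
# One lifecycle rule with transition actions and an expiration action.
lifecycle = {
    "Rules": [{
        "ID": "archive-old-logs",          # illustrative rule name
        "Filter": {"Prefix": "logs/"},     # applies only to objects under logs/
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},  # transition action
            {"Days": 90, "StorageClass": "GLACIER"},      # second transition
        ],
        "Expiration": {"Days": 365},       # expiration action: delete after a year
    }],
}

# e.g. boto3.client("s3").put_bucket_lifecycle_configuration(
#          Bucket="example-bucket", LifecycleConfiguration=lifecycle)
print(len(lifecycle["Rules"][0]["Transitions"]))
```

The rule reads as a policy: after 30 days an object moves to Standard-IA, after 90 to Glacier, and after 365 it expires.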

30
Q

CH2
PG51

Static Website Hosting in Amazon S3?

Amazon Glacier:

Magnetic tape replacement?

HealthCare/life sciences data storage?

Media assets archiving/digital preservation?

Compliance archiving/long-term backup?

A

Static Website Hosting in Amazon S3 = covered as a lab exercise.

Magnetic tape replacement = there is no maintenance overhead as with magnetic tape, and you get the same durability as S3.

HealthCare/life sciences data storage = with advancements in life sciences such as genomics, a single sequence of genomes can take up to a terabyte of data.

Media asset archiving/digital preservation = media assets such as video of news coverage and game coverage can grow to several petabytes quickly.

Compliance archiving/long-term backup = many organizations have compliance requirements to archive all data that is x years old. Amazon Glacier Vault Lock helps you set compliance controls to meet your compliance objectives. You will learn more about Amazon Glacier Vault Lock in the next section.

31
Q

CH2
PG 53

Amazon Glacier Key Terminology?

Accessing Amazon Glacier?

Uploading Files to Amazon Glacier?

Amazon Elastic Block Store?

A

Amazon Glacier Key Terminology = items stored in Glacier are called archives; you can aggregate your files using ZIP or TAR. There is no limit on how many archives you can store, but each archive can't be larger than 40 TB. Archives are write once: you won't be able to modify them (WORM, write once read many). You can use IAM to create vault-level access policies, and you can create up to 1,000 vaults per account per region.

Accessing Amazon Glacier =
• directly via the Amazon Glacier API or SDK
• through S3 lifecycle integration
• via third-party tools and gateways

Uploading Files to Amazon Glacier = you can upload directly, via Direct Connect, or via Snowball. You need to create a vault and an access policy; the next step is to create the archives and upload them.

Amazon Elastic Block Store = provides highly available, highly reliable volumes that can be leveraged as an Amazon EC2 instance's boot partition or attached to a running Amazon EC2 instance as a standard block device. An EBS volume can be thought of as the instance's hard drive: multiple EBS volumes can be attached to one EC2 instance, but an EBS volume can be attached to only one EC2 instance at a time. EBS provides the ability to create point-in-time consistent snapshots of your volumes, which are then stored in Amazon S3 and automatically replicated across multiple availability zones.

32
Q

CH2
PG56

Features of Amazon EBS:

Persistent storage?

General Purpose?

High Availability and reliability?

Encryption?

Variable Size?

Easy to use?

Designed for resilience?

A

Persistent storage = as discussed before, the volume's lifetime is independent of any particular Amazon EC2 instance.

General Purpose = Amazon EBS volumes are raw unformatted block devices that can be used from any operating system

High availability and reliability = EBS volumes provide 99.999 percent availability and automatically replicate within their Availability Zone to protect your application from component failure. It is important to note that EBS volumes are not replicated across multiple AZs; rather, they are replicated across different facilities within the same AZ.

Encryption = Amazon EBS encryption provides support for the encryption of data at rest and data in transit within the same AZ

Variable Size = Volumes sizes range from 1GB to 16TB and are allocated in 1GB increments

Easy to use = Amazon EBS volumes can be easily created, attached, backed up, restored and deleted.

Designed for resilience = the annual failure rate (AFR) of Amazon EBS is between 0.1 and 0.2 percent.

33
Q

CH2
PG57

AWS Block storage offerings:

Amazon EC2 instance store?

Amazon EBS backed Volume?

Amazon EBS SSD-backed volume?

IOPS?

A

Amazon EC2 instance store = the instance store is ephemeral, which means all the data stored in the instance store is gone the moment the EC2 instance shuts down. The data neither persist nor is it replicated in the instance store.

Amazon EBS volumes = have multiple options that allow you to optimize storage performance and cost for any workload you would like to run. Options are divided into two major categories: SSD-backed storage which is mainly used for transactional workloads such as database and boot volumes, and HDD-backed storage, which is for throughput intensive workloads such as log processing and mapreduce.

Amazon EBS-backed volume = elastic volumes are a feature of Amazon EBS that allows you to dynamically increase capacity, tune performance, and change the type of live volumes with no downtime or performance impact. You can simply use a metric from CloudWatch and write a Lambda function to automate it.

Amazon EBS SSD-Backed Volume = there are two types: General Purpose SSD (gp2) and Provisioned IOPS SSD (io1). SSD-backed volumes include the highest-performance io1 for latency-sensitive transactional workloads and gp2, which balances price and performance for a wide variety of transactional data.

IOPS = short for input/output operations per second. A drive spinning at 7,200 RPM can perform at 75 to 100 IOPS, whereas a drive spinning at 15,000 RPM will deliver 175 to 210. The exact number will depend on a number of factors, including the access pattern (random or sequential) and the amount of data transferred per read or write operation
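The relationship between IOPS and throughput can be made concrete with a little arithmetic; the 16 KiB I/O size below is an illustrative assumption, since the amount of data per operation matters as much as the operation count:

```python
def throughput_mib_per_s(iops, io_size_kib):
    """Throughput implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size_kib / 1024

# A 7,200 RPM drive doing about 100 IOPS at 16 KiB per operation:
print(throughput_mib_per_s(100, 16))  # 1.5625 MiB/s
```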

34
Q

CH2
PG58

General Purpose SSD?

Provisioned IOPS SSD?

Amazon EBS HDD Backed Volume?

Throughput Optimized HDD?

A

General Purpose SSD = delivers single-digit-millisecond latency, which is a good fit for the majority of workloads. Gp2 volumes can deliver between 100 and 10,000 IOPS. Gp2 provides great performance for a broad set of workloads, all at low cost. They reliably deliver three sustained IOPS for every gigabyte of configured storage

Provisioned IOPS SSD = lets you provision IOPS at a ratio of up to 50 IOPS per gigabyte. For example, if you have a volume of 100GB, the IOPS you can provision with it is 100 × 50 = 5,000
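The gp2 and io1 sizing rules on this card reduce to simple formulas, sketched here using the figures from the text (3 IOPS per GB with a 100 to 10,000 range for gp2, and a 50:1 ratio for io1):

```python
def gp2_iops(size_gib):
    """gp2 baseline: 3 IOPS per GiB, floored at 100 and capped at 10,000."""
    return max(100, min(3 * size_gib, 10_000))

def io1_max_iops(size_gib):
    """io1: provision up to 50 IOPS per GiB of configured storage."""
    return 50 * size_gib

print(gp2_iops(1000))     # 3000
print(io1_max_iops(100))  # 5000 -- the 100GB example from the card
```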

Amazon EBS HDD Backed Volume = includes Throughput Optimized HDD (st1), which can be used for frequently accessed, throughput-intensive workloads, and Cold HDD (sc1), which is for less frequently accessed data and has the lowest cost

Throughput Optimized HDD = (st1) is a good choice when the workload you are going to run defines its performance metrics in terms of throughput instead of IOPS. The hard drives are based on magnetic media. There are lots of workloads that can leverage this EBS volume, such as data warehouses, ETL, log processing, MapReduce jobs, and so on. This volume is ideal for any workload that involves sequential I/O. Any workload that has a requirement for random I/O should be run either on General Purpose SSD or on Provisioned IOPS SSD, depending on the price/performance need.

35
Q

CH2
PG59

Cold HDD?

Amazon Elastic File System:

Fully managed?

File system access semantic?

File system interface?

A

Cold HDD = just like st1, Cold HDD (sc1) also defines performance in terms of throughput instead of IOPS. It is a great fit for noncritical, cold-data workloads and is designed to support infrequently accessed data. Similar to st1, sc1 uses a burst-bucket model, but in this case the burst capacity is smaller since the overall throughput is lower. To ensure a consistent snapshot, it is recommended that you detach the EBS volume from the EC2 instance, issue the snapshot command, and then reattach the EBS volume to the instance

Amazon Elastic File System:

Fully managed = EFS is a fully managed file system, and you don’t have to maintain any hardware or software. There is no overhead of managing the file system since it is a managed service

File system access semantic = You get what you would expect from a regular file system, including read-after-write consistency, locking, the ability to have a hierarchical directory structure, file operations like appends, atomic renames, the ability to write to a particular block in the middle of a file and so on.

File system interface = it exposes a file system interface that works with standard operating system APIs. EFS appears like any other file system to your operating system. Applications that leverage standard OS APIs to work with files will work with EFS.

36
Q

CH2
PG60

Shared Storage?

Elastic and scalable?

Performance?

Highly available and durable?

A

Shared Storage = it is a shared file system. It can be shared across thousands of instances. When EFS is shared across multiple EC2 instances, all the EC2 instances have access to the same data set.

Elastic and scalable = EFS elastically grows to petabyte scale. You don’t have to specify a provisioned size up front. You just create a file system, and it grows and shrinks automatically as you add and remove data.

Performance = it is built for performance across a wide variety of workloads. It provides consistent, low latency, high throughput, and high IOPS

Highly available and durable = the data in EFS is automatically replicated across AZs and is well protected from data loss.

37
Q

CH2
PG61

Using Amazon Elastic File System?

Performance Mode of Amazon EFS?

A

Using Amazon Elastic File System = the first step is to create a file system. The file system is the primary resource in EFS, where you store files and directories. You can create ten file systems per account; as with any other AWS service limit, you can increase this by raising a support ticket. To access your file system from instances in a VPC, you create mount targets in the VPC. A mount target has an IP address and a DNS name that you use in your mount command.
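A mount target's DNS name follows a predictable pattern, which the mount command then uses. A small sketch of building that command (the file-system ID and region are hypothetical):

```python
def efs_mount_command(fs_id, region, mount_point="/mnt/efs"):
    """Build the NFSv4.1 mount command for an EFS file system; run it on
    an EC2 instance that has a mount target in its VPC."""
    dns_name = f"{fs_id}.efs.{region}.amazonaws.com"
    return f"sudo mount -t nfs4 -o nfsvers=4.1 {dns_name}:/ {mount_point}"

print(efs_mount_command("fs-12345678", "us-west-2"))
```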

Performance Mode of Amazon EFS = Max I/O mode is optimized for large-scale and data-heavy applications where tens, hundreds, or thousands of EC2 instances are accessing the file system.

38
Q

CH2
PG62

AWS Storage Gateway:

File gateway?

Volume gateway?

Tape gateway?

A

AWS Storage Gateway: the AWS Storage Gateway service is deployed as a virtual machine in your existing environment. This VM is called a storage gateway, and you connect your existing applications to it.

File gateway = enables you to store and retrieve objects in Amazon S3 using industry-standard file protocols. Files are stored as objects in your S3 buckets and accessed through a Network File System (NFS) mount point. Ownership, permissions, and timestamps are durably stored in S3 in the user metadata of the object associated with the file.

Volume gateway = presents your applications with disk volumes using the iSCSI block protocol. Data written to these volumes can be asynchronously backed up as point-in-time snapshots. You can set the schedule for when snapshots occur or create them via the AWS Management Console or service API

Tape gateway = presents the storage gateway to your existing backup application as an industry-standard iSCSI-based virtual tape library (VTL) consisting of a virtual media changer and virtual tape drives. You can continue to use your existing backup applications and workflows while writing to a nearly limitless collection of virtual tapes.

39
Q

CH3
PG84

VPC?

Amazon VPC?

Subnet - 1?

A

VPC = Amazon VPC can have the app and database tiers running on a private subnet in the same VPC
• You can have some of your applications running in the cloud within a VPC and some running on premises
• You can create a public subnet by providing it with Internet access and can keep resources isolated from the Internet by creating a private subnet
• You can have dedicated connectivity between your corporate data center and your VPC by using Direct Connect. You can also connect your data center using a hardware virtual private network via an encrypted IPsec connection
• If you need more than one VPC, you can create multiple VPCs and connect them with VPC peering. This way you can share resources across multiple VPCs and accounts
• You can connect to resources such as S3 using a VPC endpoint

Amazon VPC = the first step in creating a VPC is deciding the IP range by providing a Classless Inter-Domain Routing (CIDR) block.

Subnet = short for subnetwork, a logical subdivision of an IP network. With VPC you can create various subnets as per your needs. The most common ones are public subnets, private subnets, and VPN-only subnets. Public subnets are for resources that need to be connected to the Internet. Private subnets are for resources that do not. A VPN-only subnet is for when you want to connect your virtual private cloud with your corporate data center.

41
Q

CH3
PG84

Subnet - 2?

A

Subnet = in a VPC you define a subnet using a CIDR block. The smallest subnet you can create within a VPC is /28, which corresponds to 16 IP addresses. If you use IPv6 and create a subnet using /64 as the CIDR block, you get a very large number of IP addresses
• It must be noted that a subnet is tied to only one Availability Zone. You cannot have a subnet span multiple AZs; however, a VPC can span multiple AZs in a region
• If you have three AZs in a VPC, for example, you need to create a separate subnet in each AZ, such as Subnet 1 for AZ1, Subnet 2 for AZ2, and Subnet 3 for AZ3. Of course, within an AZ you can have multiple subnets
• Subnets are AZ specific. For multiple AZs, create multiple subnets
• VPCs are region specific. For multiple regions, create different VPCs
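The one-subnet-per-AZ pattern above can be sketched with Python's `ipaddress` module; the VPC CIDR, subnet size, and AZ names below are illustrative assumptions:

```python
import ipaddress

# Carve a /16 VPC range into /20 subnets and assign one per AZ.
vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-west-2a", "us-west-2b", "us-west-2c"]
per_az = dict(zip(azs, vpc.subnets(new_prefix=20)))

for az, subnet in per_az.items():
    print(az, subnet, subnet.num_addresses)
```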

42
Q

CH3
PG84

Subnet - 3?

A

Subnet - When creating a VPC, you need to provide a CIDR block for the IP address range of the VPC. It can be as big as /16, which has 65,536 IP addresses. When creating multiple subnets, you must take into account the CIDR block of the VPC. Say you create the VPC with /16, and within the VPC you create three subnets with /18, which have 16,384 IP addresses each. By doing this you have used 49,152 IP addresses, leaving 65,536 − 49,152 = 16,384 IP addresses for new subnets. At this point you won’t be able to create a new subnet with /17, which has 32,768 IP addresses; however, you should be able to create new subnets between /19 and /28. If you create more than one subnet in a VPC, the CIDR blocks of the subnets cannot overlap. There are lots of tools available to help with these subnet calculations.
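The arithmetic in this card can be checked with the `ipaddress` module; the 10.0.0.0/16 block is just an example range:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)                  # 65536

three_18s = list(vpc.subnets(new_prefix=18))[:3]
used = sum(s.num_addresses for s in three_18s)
print(used)                               # 49152 -- three /18 subnets
remaining = vpc.num_addresses - used
print(remaining)                          # 16384

# A /17 (32,768 addresses) no longer fits; a /19 (8,192) still does.
print(2 ** (32 - 17) > remaining)         # True
print(2 ** (32 - 19) <= remaining)        # True
```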

43
Q

CH3
PG88

Route table?

Internet Gateway?

A

Route table = Every subnet should have a route table. For example, if the route table of a subnet contains a route to an Internet gateway, that subnet has access to the Internet. You can associate multiple subnets with the same route table. Whenever you create a subnet, it is automatically associated with the main route table of the VPC, so a route with a destination of, say, 0.0.0.0/0 (all IPv4 addresses) won’t exist until you add it. If you later add a virtual private gateway, Internet gateway, NAT device, or anything like that to your VPC, you must update the route table accordingly so that any subnet that wants to use these gateways has a route defined for them. If you look at the routing table, you will notice there are only two columns: Destination and Target. The target is where the traffic is directed, and the destination specifies the IP range that can be directed to the target. As shown in Table 3-2, the first two entries are local, which indicates internal routing within the VPC for the IPv4 and IPv6 CIDR blocks.
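Route evaluation in a VPC follows longest-prefix matching: traffic goes to the target of the most specific destination it falls inside. A toy model of that lookup (routes and the gateway ID are hypothetical):

```python
import ipaddress

routes = [
    ("10.0.0.0/16", "local"),       # the VPC's own CIDR, routed internally
    ("0.0.0.0/0", "igw-1a2b3c4d"),  # default route to an internet gateway
]

def resolve(ip):
    """Return the target of the most specific route covering `ip`."""
    matches = [(ipaddress.ip_network(dest), target)
               for dest, target in routes
               if ipaddress.ip_address(ip) in ipaddress.ip_network(dest)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(resolve("10.0.1.5"))  # local -- inside the VPC CIDR
print(resolve("54.1.2.3"))  # igw-1a2b3c4d -- falls to the default route
```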

Internet Gateway = it must be noted that an IG is a horizontally scaled, redundant, and highly available component in a VPC. An IG supports both IPv4 and IPv6 traffic.

44
Q

CH3
PG90

Network Address Translation?

NAT instances?

NAT Gateways?

Egress-Only Internet Gateway?

A

Network Address Translation = (NAT) tries to solve that problem. Using a NAT device, you can enable any instance in a private subnet to connect to the Internet, but the reverse is not true: the Internet cannot initiate a connection to the instance. A NAT device forwards traffic from the instances in the private subnet to the Internet and then sends the response back to the instances. When traffic goes to the Internet, the source IPv4 address is replaced with the NAT device’s address; similarly, when the response traffic comes back to those instances, the NAT device translates the address back to the instances’ private IPv4 addresses. This is why it is called address translation. Please note that NAT devices can be used only for IPv4 traffic; they can’t be used for IPv6. There are two types of NAT devices available within AWS
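The source-address rewriting described here can be illustrated with a toy translation table; all addresses and ports are made up. Note how only traffic matching an existing entry gets forwarded back in:

```python
NAT_PUBLIC_IP = "203.0.113.10"  # the NAT device's address
translations = {}               # public port -> (private ip, private port)

def outbound(private_ip, private_port, public_port):
    """Outbound packet: record the mapping, rewrite the source address."""
    translations[public_port] = (private_ip, private_port)
    return (NAT_PUBLIC_IP, public_port)

def inbound(public_port):
    """Return traffic: translate back, or drop if no mapping exists --
    the Internet cannot initiate a connection to the private instance."""
    return translations.get(public_port)

print(outbound("10.0.1.5", 40001, 50001))  # ('203.0.113.10', 50001)
print(inbound(50001))                      # ('10.0.1.5', 40001)
print(inbound(60000))                      # None -- unsolicited, dropped
```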

NAT instances = you run a NAT instance in the public subnet and route the database server’s Internet traffic via the NAT instance. By doing that, the database server will be able to initiate connections to the Internet, but the reverse is not allowed (meaning no one will be able to connect to the database server from the Internet through NAT)

45
Q

CH3
PG90

NAT Gateways?

Egress-Only Internet Gateway?

A

NAT Gateways = performs the same function as a NAT instance but does not have the same limitations. Moreover, it is a managed service and therefore does not require administration overhead. If you want to use the same Elastic IP address for a NAT gateway, you need to disassociate it first from the NAT instance and then re-associate it with the NAT gateway.

Egress-Only Internet Gateway = works like a NAT gateway; the only difference is that a NAT gateway handles IPv4 traffic and an egress-only gateway handles IPv6 traffic. When you use an egress-only Internet gateway, you put an entry for it in the routing table

46
Q

CH3
PG93

Elastic Network Interface?

Elastic IP address?

Security Group?

A

Elastic Network Interface = an ENI is a virtual network interface that you can attach to an instance in Amazon VPC. An ENI can have the following attributes:
• A MAC address
• One public IPv4 address
• One or more IPv6 addresses
• A primary private IPv4 address
• One or more secondary private IPv4 addresses
• One elastic IP address (IPv4) per private IPv4 address
• One or more security groups
• A source/destination check flag and description

ENI attributes follow its attachment to the instance.

Elastic IP address = an EIP address is designed for applications running in the cloud. Every time you launch a new EC2 instance in AWS, you get a new IP address. Instead of changing the IP address for all applications every time, what you need to do is obtain an EIP, associate it with the EC2 instance, and map the EIP to the application. Now whenever the IP address of the EC2 instance changes, you just need to repoint the EIP at the new EC2 instance, and applications can connect using the same EIP.
• Please note that at this moment an EIP supports only IPv4 and does not support IPv6
• When you disassociate an EIP and don’t re-associate it with any other resource, it continues to remain in your account until you explicitly release it

47
Q

CH3
PG95

Security Group =

A

Security Group = is like a virtual firewall that can be assigned to any instance running in a virtual private cloud. A security group defines what traffic can flow in and out of a particular instance. Since it is instance specific, you can have different security groups for different instances. The security group is applied at the instance level and not at the subnet level; therefore, even within a subnet, you can have different security groups for different instances. You can attach up to five security groups to each instance, and each rule consists of a protocol, port range, and source or destination IP addresses.
• Security groups are stateful. This means that if you send a request from your instance, the response traffic is allowed to flow back in, and vice versa
• The only exception is network ACLs, which are stateless
• Amazon VPC always comes with a default security group
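Statefulness is the key behavior here: an allowed inbound request creates connection state, so its response flows out without needing an outbound rule. A toy model (the rules and peer address are illustrative):

```python
inbound_rules = {("tcp", 22)}  # allow SSH in
connections = set()            # tracked (stateful) connections

def allow_in(proto, port, peer):
    """Admit traffic matching an inbound rule and track the connection."""
    if (proto, port) in inbound_rules:
        connections.add((proto, port, peer))
        return True
    return False

def allow_out_response(proto, port, peer):
    """Responses to tracked connections are allowed regardless of any
    outbound rules -- this is what 'stateful' means."""
    return (proto, port, peer) in connections

print(allow_in("tcp", 22, "203.0.113.7"))            # True
print(allow_out_response("tcp", 22, "203.0.113.7"))  # True
print(allow_in("tcp", 80, "203.0.113.7"))            # False -- no rule
```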

48
Q

CH4
PG128

Introduction to Amazon Elastic Compute Cloud

Operating systems supported by EC2?

Benefits of Amazon EC2?

A
Operating systems supported by EC2 = 
•	Windows 2003R2, 2008/2008R2, 2012/2012R2, 2016
•	Amazon Linux
•	Debian
•	SUSE
•	CentOS
•	Red Hat Enterprise Linux
•	Ubuntu
Benefits of Amazon EC2 = 
•	Time to market
•	Scalability
•	Control
•	Reliable
•	Secure
•	Multiple Instance Types
•	Integration
•	Cost effective
49
Q

CH4
PG129

Instance types?

A
Instance types = 
•	General Purpose
•	Compute Optimized
•	Memory Optimized
•	Storage Optimized
•	Advanced Computing
50
Q

CH4
PG130

General Purpose?

Compute Optimized?

Memory Optimized?

Storage Optimized?

A

General Purpose = provides a balance of compute, memory, and network resources and is a good choice for many applications. Some instance types provide burstable performance: T2 instances are burstable, whereas M5, M4, and M3 do not provide burstable performance

Compute Optimized = has high-performance processors, and as a result any application that needs a lot of processing power benefits from these instances, such as media transcoding, applications supporting a large number of concurrent users, long-running batch jobs, high-performance computing, gaming servers, and so on

Memory Optimized = is for workloads that have large memory requirements. Good use cases are in-memory databases such as SAP HANA or Oracle Database In-Memory, NoSQL databases like MongoDB and Cassandra, big data processing engines like Presto or Apache Spark, high-performance computing (HPC) and electronic design automation (EDA) applications, genome assembly and analysis, and so on

Storage Optimized = can be used for workloads that require high sequential read and write access to very large data sets on local storage. They deliver many thousands of low-latency, random I/O operations per second (IOPS). Use cases include running an I/O-bound relational database or application, NoSQL databases, data warehouse applications, MapReduce and Hadoop, distributed caches for in-memory databases like Redis, and so on


51
Q

CH4
PG131

Advanced Computing?

A

Advanced Computing = for high processing requirements like running machine learning algorithms, molecular modeling, genomics, computational fluid dynamics, computational finance, and so on. These instances provide access to hardware-based accelerators such as GPUs or field-programmable gate arrays (FPGAs), which enable parallelism and give high throughput.

52
Q

CH4
PG131

Processor Features

Intel AES New Instruction (AES-NI)?

Intel Advanced Vector Extensions?

Intel Turbo Boost technology?

A

Intel AES New Instructions (AES-NI) = provides faster and better encryption than the original AES implementation. All new EC2 instances have this feature

Intel Advanced Vector Extensions = improves performance for applications such as image and audio/video processing. Available for instances launched with HVM AMIs

Intel Turbo Boost technology = provides more performance when needed

53
Q

CH4
PG132

Network features?

A

Network features = EC2-Classic instances run in a single, flat network that is shared with other customers. New instances use a VPC only. A VPC has a lot of advantages, such as assigning multiple private IPv4 addresses to instances, assigning IPv6 addresses to an instance, changing the security group, adding NACL rules, and so on. You can also launch an instance in a placement group, which is a logical grouping of instances within an AZ. For example, if you have an application or workload that needs low latency or high network throughput, the placement group is going to provide you with that benefit. This is known as cluster networking. When launched in a placement group, instances can utilize up to 10 Gbps for single-flow and 25 Gbps for multiflow traffic. There is no charge for using placement groups. A placement group can’t span multiple AZs, and its name must be unique within the AWS account. To get the most out of a placement group, you should choose an instance type that supports enhanced networking. Enhanced networking provides higher bandwidth, higher packet-per-second (PPS) performance, and lower inter-instance latencies. It uses single-root I/O virtualization (SR-IOV)

54
Q

CH4
PG133

Storage features

General Purpose?

Provisioned IOPS (PIOPS)?

Magnetic?

A

General Purpose = this is the general-purpose EBS volume, backed by SSD, and can be used for any purpose. It is often used as the default volume for all EC2 instances

Provisioned IOPS (PIOPS) = if you have a computing need for a lot of I/O, for example running a database workload, then you can use a Provisioned IOPS-based EBS volume to maximize the I/O throughput and get the IOPS that your application needs

Magnetic = has the lowest cost per gigabyte of all the volume types. Magnetic volumes are good for running a development workload, a non-mission-critical workload, or any other workload where data is accessed infrequently.

55
Q

CH4
PG133

Steps for using Amazon EC2?

A

Steps for using Amazon EC2
• Select an AMI
• Configure the networking and security (virtual private cloud, public subnet, private subnet, and so on)
• Choose the instance type
• Choose the AZ, attach an EBS volume, and optionally choose a static EIP
• Start the instance

56
Q

CH4
PG136

Shared Tenancy, Dedicated Hosts, and Dedicated Instances

Shared Tenancy?

Dedicated Host?

Dedicated Instance?

Instance and AMI’s?

A

Shared Tenancy = default behavior when launching an EC2 instance

Dedicated Host = a physical server exclusively assigned to you. It might save you money by allowing you to use your existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server. You can also carve out many VMs on a single dedicated host

Dedicated Instance = you run the EC2 instances on single-tenant hardware. Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud on hardware that’s dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts

Instances and AMIs = you can launch as many instances as you need from an AMI
Launch Permissions =
• Public – The owner grants launch permissions to all AWS accounts
• Explicit – The owner grants launch permissions to specific AWS accounts
• Implicit – The owner has implicit launch permissions for an AMI
There is a soft limit on the number of instances you can run per instance type.

57
Q

CH4
PG137

Instance Root Volume?

A

Instance Root Volume = if the root volume is backed by Amazon S3, the instance is backed by an instance store; the stop action is not supported for these instances, and data on the instance store should be constantly backed up or replicated across multiple AZs. If the root volume is backed by EBS, it is called an EBS-backed AMI; these instances can be stopped and restarted

58
Q

CH4
PG140

Virtualization in AMI

Hardware Virtual Machine (HVM)?

Paravirtual (PV)?

A

Hardware Virtual Machine (HVM) = the OS runs directly on top of the VM as is, without any modification, similar to the way it runs on bare-metal hardware. EC2 simulates some, if not all, of the underlying hardware that is presented to the guest. HVM is supported by all current-generation instance types and by CC2, CR1, HI1, and HS1 of the previous generations.

Paravirtual (PV) = boots with PV-GRUB, which starts the boot cycle and loads the kernel specified in the menu. Paravirtual guests can run on host hardware that does not have explicit support for virtualization, but they cannot take advantage of special hardware extensions that HVM guests can, such as enhanced networking or GPU processing.

59
Q

CH4
PG141

Instance Life Cycle

Launch?

Start and Stop?

Reboot?

Terminate?

Retirement?

A

Launch = when the instance is launched, it enters the pending state. The AMI you choose is used to boot the instance. Before the instance starts, health checks are performed; once it is up and running, it enters the running state. Once in the running state, billing begins.

Start and Stop = if the instance passes the health checks, it starts. If it is backed by EBS, you can stop it and the data is maintained

Reboot = you can reboot an instance backed either by an instance store or backed by EBS. Everything is saved in a reboot.

Termination = as soon as you terminate the instance, you will see the status change to shutting-down or terminated. Once this happens, billing stops. If the instance has termination protection enabled, additional steps are required. You can choose to either delete the EBS volume or keep it.

Retirement = when the underlying hardware has irreparable damage, the instance is retired or scheduled for retirement.

60
Q

CH4
PG 144

Connecting to an instance?

A

Connecting to an instance = if you launch the instance in a public subnet, it will be assigned a public IP address and public DNS name via which you can reach the instance from the Internet. Example of the addresses assigned to an instance:
Public DNS (IPv4): ec2-34-210-110-189.us-west-2.compute.amazonaws.com
IPv4 public IP: 34.210.110.189
Private DNS: ip-10-0-0-111.us-west-2.compute.internal
Private IP: 10.0.0.111
The public DNS name is automatically created when the instance is created, and you can’t change it. If you terminate the instance, the public IP address is automatically disassociated. If you want to keep one IP address and associate it with another server, use an Elastic IP address.

If you create the instance in a private subnet, you will get only a private IP address and a private DNS name. When launching an instance, you are prompted to download a private key; save it on your local machine and then change its permissions. Amazon EC2 uses the public-private key concept from cryptography to encrypt and decrypt login information.

61
Q

CH4
PG146

Characteristics of security groups?

A

Characteristics of security groups =
• By default, security groups allow all outbound traffic
• You can’t change the outbound rules for an EC2-Classic security group
• Security group rules are always permissive; you can’t create rules that deny access
• Security groups are stateful: if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of the inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules
• You can add and remove rules at any time. Your changes are automatically applied to the instances associated with the security group after a short period
• When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules, which is used to determine whether to allow access

62
Q

CH4
PG146

For each rule you specify the following?

A

For each rule you specify the following =
Protocol – the most common protocols are 6 (TCP), 17 (UDP), and 1 (ICMP)
Port range – for TCP, UDP, or a custom protocol, this is the range of ports to allow. You can specify a single port number (for example, 22) or a range of port numbers (for example, 7000 to 8000)
ICMP type and code – for ICMP, this is the ICMP type and code
Source or destination –
• An individual IPv4 address. You must use the /32 prefix after the IPv4 address, for example, 203.0.113.1/32
• (VPC only) a range of IPv4 addresses, in CIDR block notation, for example, 203.0.113.0/24
• (VPC only) a range of IPv6 addresses, in CIDR block notation, for example, 2001:db8:1234:1a00::/64
• Another security group. This allows instances associated with the specified security group to access instances associated with this security group. This does not add rules from the source security group to this security group. You can specify one of the following as the source security group:
• The current security group
• EC2-Classic: a different security group for EC2-Classic in the same region
• EC2-Classic: a security group for another AWS account in the same region (add the AWS account ID as a prefix, for example, 111122223333/sg-edcd9784)
• EC2-VPC: a different security group for the same VPC or a peer VPC in a VPC peering connection
• When multiple rules apply, the most permissive rule is used
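In boto3, one of these rules is expressed as an `IpPermissions` entry. A sketch allowing SSH from a single address (the group ID and address are hypothetical):

```python
# One rule: protocol, port range, and source, in the shape that
# authorize_security_group_ingress expects. The /32 prefix marks an
# individual IPv4 address, per the card above.
ssh_rule = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.1/32"}],
}

# With AWS credentials configured, this would apply it:
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=[ssh_rule])

print(ssh_rule["IpRanges"][0]["CidrIp"])  # 203.0.113.1/32
```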

63
Q

CH4
PG147

Amazon Elastic Container Service?

A

Amazon Elastic Container Service = using ECS, you can easily launch any container-based application with simple API calls. Containers are similar to hardware virtualization (like EC2), but instead of partitioning a machine, containers isolate the processes running on a single operating system. This is a useful concept that lets you use the OS kernel to create multiple isolated user-space processes that can have constraints on them, like CPU and memory. Containers are very efficient: you can allocate exactly the amount of resources (CPU, memory) you want, and at any point in time you can increase or decrease these resources depending on your need. Containers enable the concept of microservices. Microservices encourage the decomposition of an app into smaller chunks, reducing complexity and letting teams move faster while still running the processes on the same host.
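The CPU and memory constraints mentioned above are set per container in an ECS task definition. A minimal sketch in the shape boto3's `register_task_definition` accepts (the family name, image, and limits are all hypothetical):

```python
task_def = {
    "family": "web-app",
    "containerDefinitions": [{
        "name": "web",
        "image": "nginx:latest",
        "cpu": 256,      # CPU units reserved for this container
        "memory": 512,   # hard memory limit in MiB
        "portMappings": [{"containerPort": 80}],
    }],
}

# With AWS credentials configured:
# import boto3
# boto3.client("ecs").register_task_definition(**task_def)

print(task_def["containerDefinitions"][0]["memory"])  # 512
```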

64
Q

CH4
PG148

Benefits of running containers on ECS?

A

Benefits of running containers on ECS =
• Eliminates cluster management software. There is no need to install any cluster management software
• You can easily manage clusters at any scale
• Using ECS, you can design fault-tolerant cluster architectures
• You can manage cluster state using Amazon ECS
• You can easily control and monitor the containers seamlessly
• You can scale from one to tens of thousands of containers almost instantly
• ECS lets you make good placement decisions about where to place your containers
• ECS gives you information about the availability of resources (CPU, memory)
• At any time you can add new resources to the cluster with EC2 Auto Scaling
• It is integrated with other services such as Amazon Elastic Container Registry, Elastic Load Balancing, Elastic Block Store, elastic network interfaces, Virtual Private Cloud, IAM, and CloudTrail

65
Q

CH5
PG165

Authorization?

Auditing

A

Authorization = best practice is the concept of least privilege and segregation of duties. An IAM policy is a piece of code written in JSON (JavaScript Object Notation) where you can define one or more permissions. All users by default have no access.
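A least-privilege policy from that description might look like the following sketch, built as a Python dict and serialized to the JSON form IAM expects (the bucket name is hypothetical):

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                            # grant, never deny
        "Action": ["s3:GetObject", "s3:ListBucket"],  # read-only S3 actions
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

print(json.dumps(policy, indent=2))
```

Because users start with no access, this policy grants exactly one bucket's worth of read permissions and nothing more.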

Auditing = CloudTrail logs every API request made through the console, such as the following
• Who made the request?
• When was the request made?
• What was the request about?
• Which resources were acted upon in the response to the request?
• Where was the request made from and made to?

66
Q

CH5
Pg168

Types of security credentials

IAM username and password?

E-mail address and password?

Access Keys?

Key Pair?

Multifactor authentication?

A

IAM username and password = this will be used mainly for accessing the AWS management console

E-mail address and password = this is associated with your root account

Access Keys = these are often used with the CLI, APIs, and SDKs

Key Pair = This is used with Amazon EC2 for logging in to the servers

Multifactor authentication = This is an additional layer of security that can be used with the root account as well

67
Q

CH5
PG168

Temporary Security Credentials?

Users?

A

Temporary Security Credentials = temporary security credentials are short-lived and expire automatically; therefore, you can provide access to your AWS resources to users without having to define an AWS identity for them. By using IAM you can create users and groups and then assign permissions to them.

Users = steps for creating users via IAM
• Create a user via IAM
• Provide security credentials
• Attach permissions, roles, and responsibilities
• Add the user to one or more groups (covered in the next section)

68
Q

CH5
PG170

Groups?

A

Groups = These are the characteristics of IAM groups
• A group consists of multiple roles and privileges, and you grant these permissions using IAM
• Any user who is added to a group inherits the group's roles and privileges
• You can add multiple users to a group
• A user can be part of multiple groups
• You can't add one group to another group. A group can contain only users, not other groups
• There is no default group that automatically includes all users in the AWS account. However, you can create one and assign it to each and every user

69
Q

CH5
PG171

Roles?

A

Roles = are for the following use cases
• Delegate access to users, applications, or services that don’t normally have access to your AWS resources
• When you don’t want to embed AWS keys within the app
• When you want to grant AWS access to users who already have identities defined outside of AWS (for example, corporate directories)
• To grant access to your account to third parties (such as external auditors)
• So that applications can use other AWS services
• To request temporary credentials for performing certain tasks
When you create a role, you need to specify two policies. One policy governs who can assume the role (in other words, the principal); this policy is also called the trust policy. The second policy is the permission or access policy, which defines what resources and actions the principal (whoever assumes the role) is allowed to access.
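A minimal trust policy of the kind described above might look like this (a sketch allowing the EC2 service to assume the role; the service principal is just one common choice):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The permission policy attached to the same role then defines what whoever assumes it is allowed to do.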

70
Q

CH5
PG172

IAM Hierarchy of Privileges

AWS Root user or the account owner?

AWS IAM user?

Temporary security credentials?

IAM Best Practices?

A

AWS Root user or the account owner = this user has unrestricted access to all enabled services and resources

AWS IAM user = in this case the user has limited permissions. The access is restricted by group and user policies

Temporary security credentials = Access is restricted by the generating identity and further by the policies used to generate the tokens

IAM Best Practices =
• Use IAM users: immediately after creating the account, create an IAM user and lock down the root user
• Create a strong password policy: minimum 8-10 characters that expire in 90 days, with at least one uppercase letter, lowercase letter, symbol, and number
• Rotate security credentials regularly: use the credential report to audit credential rotation; rotate every 90 days
• Enable MFA:
• Manage permissions with groups:
• Grant the least Privileges:
• Use IAM Roles: to delegate cross-account access, to delegate access within an account, and to provide access for federated users. If you use roles, then there is no need to share security credentials or store long-term credentials and you have a clear idea and have control over who has what access.

71
Q

CH5
PG172

IAM Hierarchy of Privileges

Use IAM roles for Amazon EC2 instances?

Enable AWS Cloud Trail?

AWS Compliance Program?

A

Use IAM roles for Amazon EC2 instances = best way to provide credentials to an application running on an EC2 instance is by using IAM roles.

Enable AWS CloudTrail = You must ensure that AWS CloudTrail is enabled in all regions and that AWS CloudTrail log validation is enabled. It is also important to make sure that the Amazon S3 bucket holding the CloudTrail logs is not publicly accessible.

AWS Compliance Program = helps customers understand controls in place at AWS to maintain security and data protection in the cloud. Compliance responsibilities are shared.

72
Q

CH5
PG186

IAM Roles for Amazon EC2?

Caution?

A

IAM Roles for Amazon EC2 =
• AWS access keys for signing request to other services in AWS are automatically made available on running instances
• AWS access keys on an instance are rotated automatically multiple times a day. New access keys will be made available at least five minutes prior to the expiration of the old access keys
• You can assign granular services permissions for applications running on an instance that make request to other services in AWS
• You can include an IAM role when you launch on-demand, spot, or reserved instances
• IAM roles can be used with all Windows and Linux AMIs

Caution = Types of services that could expose your credentials include the following
• HTTP proxies
• HTML/CSS validation services
• XML processors that support XML inclusion

73
Q

CH6
PG196

Benefits of Auto Scaling

Dynamic Scaling?

Best user experience?

Health check and fleet management?

Load Balancing?

Target Tracking?

A

Dynamic Scaling = helps you provision anywhere from two instances to hundreds of thousands of instances in real time

Best user experience = it also helps provide the best possible experience for users because it never runs out of resources. You can create rules such as: if CPU utilization increases to more than 70 percent, a new instance is started

Health check and fleet management = can be done by autoscaling. Helps you maintain the fleet and replace failed instances

Load Balancing = Auto Scaling can be used to balance the workload across multiple EC2 instances. It automatically balances EC2 instances across multiple AZs when multiple AZs are configured, making sure there is a uniform balance of EC2 instances across them.

Target Tracking = Auto Scaling adjusts the number of EC2 instances for you in order to meet a target. The target can be any scaling metric that Auto Scaling supports. For example, if you always want the CPU utilization of your application servers to remain at 65 percent, Auto Scaling will increase and decrease the number of EC2 instances automatically to meet that metric.
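The 65 percent CPU example above can be expressed as a target-tracking configuration like the following (a sketch of the JSON shape used with the `put-scaling-policy` API; `ASGAverageCPUUtilization` is the predefined metric for average group CPU):

```json
{
  "TargetValue": 65.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  }
}
```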

74
Q

CH5
PG199

Launch Configuration?

Auto Scaling Groups

Maintain the instance level?

Manual Scaling?

Scaling as per the demand?

Scaling as per schedule?

A

Launch Configuration = a template used by Auto Scaling that stores all the information about the instance, such as the AMI details, instance type, key pair, security groups, IAM instance profile, user data, attached storage, and so on.
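The template fields listed above map to parameters like the following (a hypothetical sketch in the shape accepted by `create-launch-configuration --cli-input-json`; every ID and name is a placeholder, and `UserData` is base64-encoded, here just "#!/bin/bash"):

```json
{
  "LaunchConfigurationName": "web-tier-v1",
  "ImageId": "ami-12345678",
  "InstanceType": "t2.micro",
  "KeyName": "my-key-pair",
  "SecurityGroups": ["sg-12345678"],
  "IamInstanceProfile": "web-tier-role",
  "UserData": "IyEvYmluL2Jhc2g="
}
```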

Auto Scaling Groups

Maintain the instance level = also known as the default scaling plan. In this scaling policy you define the number of instances you will always operate with. You define the minimum or specified number of servers that will be running all the time, so that you are always running with the number of instances you designate.

Manual Scaling = Can be done manually either via the console or the API or CLI call.

Scaling as per the demand = another usage of auto scaling is to scale to meet the demand. You can scale according to various Cloud Watch metrics such as an increase in CPU, disk reads, disk writes, network in, network out, and so on.

Scaling as per schedule = If your traffic is predictable and you know that you are going to have an increase in traffic during certain hours, you can have a scaling policy as per the schedule.

75
Q

CH5
PG199

Auto Scaling Groups?

Simple Scaling?

A

Auto Scaling Groups = To create an Auto Scaling group, you need to provide the minimum number of instances running at any time. You also need to set the maximum number of instances to which the group can scale
• If the desired capacity is greater than the current capacity, instances are launched
• If the desired capacity is less than the current capacity, instances are terminated
• Auto Scaling groups can't span regions

Simple Scaling = means you can scale up or down based on ONE scaling adjustment. In this mechanism you select an alarm, which can be based on CPU utilization, disk reads, disk writes, network in, or network out. You can also define how long to wait before starting or stopping a new instance. This is called the cooldown period.

76
Q

CH6
PG203

Simple scaling with steps

Exact capacity?

Change in Capacity?

Percentage change in capacity?

A

Exact capacity = You can provide the exact capacity to increase or decrease to. Example: I want the total to be 5

Change in Capacity = You can provide the number you want the capacity changed by. Example: I want to increase it by 5

Percentage change in capacity = You can provide a percentage by which to change the current capacity. Example: I want to increase it by 10 percent of its current size
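The three adjustment types above can be sketched as a small function (the type names follow the AWS API values, but the rounding of percentage results is simplified here; AWS applies its own rounding rules):

```python
def adjust_capacity(current: int, adjustment: int, adjustment_type: str) -> int:
    """Compute a new desired capacity for a simple scaling adjustment."""
    if adjustment_type == "ExactCapacity":
        # Set the desired capacity to exactly this value.
        return adjustment
    if adjustment_type == "ChangeInCapacity":
        # Add (or, if negative, remove) a fixed number of instances.
        return current + adjustment
    if adjustment_type == "PercentChangeInCapacity":
        # Change by a percentage of the current size (rounding simplified).
        return current + int(current * adjustment / 100)
    raise ValueError(f"unknown adjustment type: {adjustment_type}")

print(adjust_capacity(4, 5, "ExactCapacity"))              # 5
print(adjust_capacity(4, 2, "ChangeInCapacity"))            # 6
print(adjust_capacity(10, 50, "PercentChangeInCapacity"))   # 15
```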

77
Q

CH6
PG204

Target tracking scaling policies?

Termination Policy?

A

Target tracking scaling policies = For example, if you set the target utilization at 50 percent, Auto Scaling will automatically scale up or down to stay at 50 percent

Termination Policy = When you scale down, instances are terminated. If you have to terminate two instances, it is important to shut down one instance from each AZ so that you keep a balanced configuration. When an instance terminates, it deregisters itself from the load balancer (if there is one) and then has a grace period to close any open connections.

78
Q

CH6
PG204

Elastic Load Balancing

Elastic?

Integrated?

Secured?

Highly Available?

A

Elastic = no manual intervention at all. On premises you would have to physically hard-wire the load balancer to the server; none of that here.

Integrated = ELB is integrated with various AWS services. It's also integrated with CloudWatch, and with Route 53 for DNS failover

Secured = ELB provides many security features such as integrated certificate management, SSL decryption, port forwarding, and so on. ELB is capable of terminating HTTP/SSL traffic at the load balancer to avoid having to run the CPU-intensive decryption process on your EC2 instances.

Highly Available = with ELB you can distribute traffic across Amazon EC2 instances, containers, and IP addresses.

Cheap = ELB is cheap and cost effective; for example, it saves network administrators a lot of time.

79
Q

CH6

How ELB works?

Type of Load Balancers

Network Load balancer?

Application Load balancer?

Classic Load balancer?

A

How ELB works = Even if you do not deploy your application or workload across multiple AZs, the load balancer you use will always be deployed across multiple AZs

Type of Load Balancers

Network Load balancer [TCP] = The NLB, or TCP load balancer, acts at layer 4 of the OSI model. It uses a connection-based model that can handle connections across EC2 instances, containers, and IP addresses based on IP data. It supports both TCP and SSL.

Application Load balancer [HTTP, HTTPS] = The ALB works at layer 7 of the OSI model. It supports HTTP and HTTPS. When a packet comes from an application, the ALB looks at its header and then decides the course of action. It can also do content-based routing depending on the service needed: host-based routing, where you route the request based on the Host field of the HTTP header, and path-based routing, where you route a client request based on the URL path of the HTTP header.
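The two routing styles above correspond to listener-rule conditions like the following (a sketch in the shape used by the ALB `create-rule` API; the hostname and path pattern are placeholders):

```json
[
  { "Field": "host-header",  "Values": ["images.example.com"] },
  { "Field": "path-pattern", "Values": ["/images/*"] }
]
```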

Classic Load balancer [TCP, SSL, HTTP, HTTPS] = supports both network and application load balancing; in other words, it operates at layer 4 and layer 7 of the OSI model

80
Q

CH6
PG210

Load Balancer Key Concepts and Terminology?

Listeners?

Target Groups and Target?

A

Load Balancer Key Concepts and Terminology = The Application Load Balancer allows multiple applications to be hosted behind a single load balancer; for example, one application that handles images and another application. You can also have up to 10 different sets of rules, which means you can host up to ten applications. The Application Load Balancer also has native support for microservices and container-based architectures.

Listeners = define the protocol and port on which the load balancer listens for incoming traffic connections. Each load balancer needs at least one listener. For both

Target Groups and Target = Target groups are logical groupings of targets behind a load balancer. Target groups can exist independently of the load balancer. Target groups are regional constructs.