Terms Flashcards
What is AWS?
Amazon Web Services (Cloud Supplier).
A cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for application services, while you provision and use what you need via a web application.
What is Amazon Athena (Analytics)?
Amazon Athena is a query service that allows for easy data analysis in Amazon S3 by using standard SQL.
Query services like Amazon Athena, data warehouses like Amazon Redshift, and sophisticated data processing frameworks like Amazon EMR all address different needs and use cases.
Amazon Athena provides the easiest way to run ad-hoc queries on data in S3 without the need to set up or manage any servers.
Primary use case: Query
When to use: Run interactive queries against data directly in Amazon S3 without worrying about formatting data or managing infrastructure. Can use with other services such as Amazon Redshift.
Amazon Athena is an interactive query service that makes it easy to analyse data in Amazon S3 using standard SQL.
Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Athena is easy to use – simply point to your data in Amazon S3, define the schema, and start querying using standard SQL.
Amazon Athena uses Presto with full standard SQL support and works with a variety of standard data formats, including CSV, JSON, ORC, Apache Parquet and Avro.
While Amazon Athena is ideal for quick, ad-hoc querying and integrates with Amazon QuickSight for easy visualization, it can also handle complex analysis, including large joins, window functions, and arrays.
Amazon Athena uses a managed Data Catalogue to store information and schemas about the databases and tables that you create for your data stored in Amazon S3.
Amazon Athena is an analytics service that makes it easy to query data in Amazon S3 using standard SQL commands. AWS customers can also use an Amazon S3 feature called S3 Select to query data on S3 using SQL commands; however, S3 Select can only be used to perform simple SQL queries on a single S3 Object.
Query data in S3 using SQL (Analytics).
Amazon Athena allows you to query data in S3 using SQL (Analytics). Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
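The "point to your data in S3 and query with SQL" workflow above can be sketched with boto3, the AWS SDK for Python. The database, table, and bucket names below are hypothetical placeholders; the parameter names match Athena's StartQueryExecution API.

```python
# Sketch of running an ad-hoc Athena query via boto3. Athena is serverless:
# you only supply the SQL, the database context, and an S3 location for
# results. Names here are hypothetical.

def build_athena_query_request(database, output_bucket, sql):
    """Build the parameters for athena.start_query_execution()."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        # Athena writes query results back to S3.
        "ResultConfiguration": {
            "OutputLocation": f"s3://{output_bucket}/results/"
        },
    }

params = build_athena_query_request(
    database="weblogs",                 # hypothetical Athena/Glue database
    output_bucket="my-athena-results",  # hypothetical S3 bucket
    sql="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
)
# With AWS credentials configured, you would then run:
#   import boto3
#   athena = boto3.client("athena")
#   response = athena.start_query_execution(**params)
print(params["ResultConfiguration"]["OutputLocation"])
```

Because Athena is pay-per-query, this is the entire "infrastructure": no cluster to size or servers to patch.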
What is Amazon DynamoDB?
Amazon DynamoDB is a fully managed NoSQL database service.
Amazon DynamoDB is not a storage service.
Amazon DynamoDB is a key-value and document database service.
DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they do not have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.
DynamoDB is a fully managed NoSQL offering provided by AWS and is now available in most AWS Regions.
For more information on AWS DynamoDB, see: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
Part of abstracted services for which AWS is responsible for the security & infrastructure layer. Customers are responsible for data that is saved on these resources.
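The "key-value and document" model above is visible in DynamoDB's low-level wire format, where every attribute value carries a type tag. This sketch encodes a plain Python item into that format; with boto3 you would pass the result to `dynamodb.put_item(TableName=..., Item=item)` (the table name would be hypothetical here).

```python
# Encode Python values as DynamoDB typed attribute values ("S" string,
# "N" number, "BOOL", "L" list, "M" map) -- the shape of DynamoDB's
# low-level API. Illustration only; boto3's Table resource can also do
# this conversion for you.

def to_dynamodb(value):
    """Encode a Python value as a DynamoDB typed attribute value."""
    if isinstance(value, bool):           # check bool before int!
        return {"BOOL": value}
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}          # numbers travel as strings
    if isinstance(value, list):
        return {"L": [to_dynamodb(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_dynamodb(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value).__name__}")

item = {k: to_dynamodb(v) for k, v in {
    "user_id": "u-123",       # partition key
    "age": 31,
    "active": True,
    "tags": ["admin", "beta"],
}.items()}
print(item["age"])
```

The type tags are why DynamoDB can index and compare attributes without a fixed schema per table.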
What is Amazon Elastic Compute Cloud (EC2) (Compute)?
Resize compute capacity: Amazon Elastic Compute Cloud (EC2) is a web service that provides secure, resizable compute capacity in the cloud.
Use secure, resizable compute capacity:
• Boot server instances in minutes
• Pay only for what you use
You can install and run any database software you want on Amazon EC2. In this case, you are responsible for managing everything related to this database.
Amazon EC2 can be used to run any number of batch processing jobs, but you are responsible for installing and managing a batch computing software and creating the server clusters.
EC2 is a core AWS service and runs VMs. Resize compute capacity. You cannot have an EC2 instance without a security group.
Pay-as-you-go (PAYG). Broad selection of hardware/software and choice of where to host.
- Log into AWS console.
- Choose Region.
- Launch EC2 wizard.
- Select Amazon Machine Image (AMI) - software platform - Windows/Linux etc.
- Select Instance Type (number of cores, RAM, etc.)
- Configure network
- Configure storage
- Configure key pairs/tags (key pairs for connecting to the instance after launch; tags e.g. a Name).
- Configure firewall security groups.
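The launch-wizard steps above map almost one-to-one onto the parameters of the EC2 RunInstances API. A sketch using boto3 parameter names; the AMI ID, key pair, and security group ID are hypothetical placeholders.

```python
# Each wizard step becomes a RunInstances parameter. Values here are
# hypothetical; in a real account you would use your own AMI, key pair,
# and security group IDs.

launch_params = {
    "ImageId": "ami-0123456789abcdef0",  # step: select AMI (hypothetical ID)
    "InstanceType": "t3.micro",          # step: select instance type (cores/RAM)
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "my-key-pair",            # step: key pair for SSH access
    "SecurityGroupIds": ["sg-0abc123"],  # step: firewall / security groups
    "TagSpecifications": [{              # step: tags, e.g. a Name
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-server"}],
    }],
}
# With AWS credentials configured:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="eu-west-1")  # step: choose Region
#   ec2.run_instances(**launch_params)
print(launch_params["InstanceType"])
```

Note that a security group is always attached, matching the card's point that you cannot have an EC2 instance without one.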
What is Amazon ElastiCache?
In-memory storage for fast, managed information retrieval.
Amazon ElastiCache is used to improve the performance of your existing apps by retrieving data from high-throughput, low-latency in-memory data stores.
Amazon ElastiCache is a memory cache system service on the cloud and supports Redis and Memcached.
ElastiCache improves performance by caching the results of CPU-intensive and I/O-heavy queries in memory for quick retrieval.
Redis is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices.
Memcached is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source must be read. Memcached is free and open-source software, licensed under the Revised BSD license.
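The usual way applications use ElastiCache (Redis or Memcached alike) is the cache-aside pattern: check the cache first and only hit the database on a miss. In this sketch a plain dict stands in for the cache client so it runs without any cache server.

```python
# Cache-aside pattern: read from the cache; on a miss, read from the
# database and populate the cache. A dict simulates Redis/Memcached and a
# counter simulates database load.

cache = {}        # stand-in for a Redis/Memcached client
db_reads = 0      # counts how often the (simulated) database is hit

def query_database(user_id):
    global db_reads
    db_reads += 1
    return {"id": user_id, "name": f"user-{user_id}"}  # simulated slow query

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                 # cache hit: no database round-trip
        return cache[key]
    value = query_database(user_id)  # cache miss: read through to the DB
    cache[key] = value               # populate the cache for next time
    return value

get_user(7)
get_user(7)
get_user(7)
print(db_reads)  # only the first call reached the database
```

This is exactly how ElastiCache "reduces the number of times an external data source must be read", as the Memcached description above puts it.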
What is Amazon Elastic MapReduce (EMR)? (Analytics)
Amazon EMR makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments.
Query services like Amazon Athena, data warehouses like Amazon Redshift, and sophisticated data processing frameworks like Amazon EMR all address different needs and use cases.
Primary use case: Data Processing
When to use: Highly distributed processing frameworks such as Hadoop, Spark, and Presto. Run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data.
Amazon EMR is flexible – you can run custom applications and code, and define specific compute, memory, storage, and application parameters to optimize your analytic requirements.
Amazon Elastic MapReduce (EMR) is a web service that enables you to process vast amounts of data across dynamically scalable Amazon EC2 instances.
Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data.
EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3.
Managed Hadoop framework for processing huge amounts of data.
Also supports Apache Spark, HBase, Presto and Flink.
Most commonly used for log analysis, financial analysis, or extract, transform, and load (ETL) activities.
A Step is a programmatic task for performing some process on the data (e.g. count words).
A cluster is a collection of EC2 instances provisioned by EMR to run your Steps.
EMR uses Apache Hadoop as its distributed data processing engine, which is an open source, Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware.
EMR is a good place to deploy Apache Spark, an open-source distributed processing used for big data workloads which utilizes in-memory caching and optimized query execution.
You can also launch Presto clusters. Presto is an open-source distributed SQL query engine designed for fast analytic queries against large datasets.
EMR launches all nodes for a given cluster in the same Amazon EC2 Availability Zone.
You can access Amazon EMR by using the AWS Management Console, Command Line Tools, SDKs, or the EMR API.
With EMR you have access to the underlying operating system (you can SSH in).
A tool for big data processing and analysis. Amazon EMR processes big data across a Hadoop cluster of virtual servers on Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3) (Analytics).
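The "count words" Step mentioned above is the classic MapReduce job. This local sketch mimics the map and reduce phases that EMR's Hadoop engine would distribute across the cluster's EC2 instances.

```python
# Local word-count in MapReduce style: the map phase emits (word, 1)
# pairs, the reduce phase sums counts per word. On EMR, these phases run
# in parallel across cluster nodes; here they run in-process.

from collections import Counter

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce: sum the counts for each word."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(map_phase(lines))
print(counts["the"])  # 3
```

On a real cluster the input lines would come from S3 or HDFS and each node would reduce only its partition of the key space, but the map/reduce contract is the same.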
What is Amazon Inspector?
Amazon Inspector is a security assessment service that automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
Amazon Inspector is a vulnerability management service that continuously scans your AWS workloads for vulnerabilities. Amazon Inspector automatically discovers and scans Amazon EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR) for software vulnerabilities and unintended network exposure.
- Amazon Inspector allows you to analyse Application Security.
- An automated security assessment service.
- Assesses applications for security vulnerabilities or deviations from best practices
- Produces a report with security findings and prioritised next steps
- AWS doesn’t guarantee that the findings are exhaustive, but does present useful information.
- Can build into the DevOps process to proactively spot security issues as part of the build and deployment process.
- Can access Inspector through the console, SDKs, API and CLI.
Amazon Inspector can be used to analyse potential security threats for an Amazon EC2 instance against an assessment template with predefined rules. It does not provide historical data for configuration changes done to AWS resources.
What is Amazon Kinesis (Analytics)?
Amazon Kinesis is an analytics service that allows you to easily collect, process, and analyse video and data streams in real time.
Amazon Kinesis makes it easy to collect, process, and analyse real-time, streaming data so you can get timely insights and react quickly to new information.
Collection of services for processing streams of various data.
Data is processed in “shards”.
There are four types of Kinesis service: Kinesis Data Streams, Kinesis Data Firehose, Kinesis Data Analytics, and Kinesis Video Streams.
Amazon Kinesis makes it easy to collect, process, and analyse real-time streaming data so you can get timely insights and react quickly to new information (Analytics). Reliably load real-time streams into data lakes, warehouses, and analytics services. A real-time data streaming service.
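The "shards" mentioned above are how Kinesis parallelises a stream: each record's partition key is MD5-hashed and the hash decides which shard receives it. A simplified sketch (real shards own contiguous ranges of the 128-bit hash space rather than a simple modulo):

```python
# Simplified Kinesis shard routing: hash the partition key with MD5 and
# map it to one of num_shards shards. Records with the same partition key
# always land on the same shard, preserving per-key ordering.

import hashlib

def shard_for(partition_key, num_shards):
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards  # simplified mapping

# Same key -> same shard, every time.
assert shard_for("device-42", 4) == shard_for("device-42", 4)
print(shard_for("device-42", 4))
```

This is why choosing a high-cardinality partition key matters: it spreads records evenly across shards, and each shard caps the throughput for its slice of the stream.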
What is Amazon Macie (Security)?
Amazon Macie is a data security and data privacy service.
Amazon Macie is a machine learning powered security service to discover, classify and protect sensitive data.
Amazon Macie is a data security and data privacy service that uses machine learning (ML) and pattern matching to discover and protect your sensitive data.
Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data stored in Amazon S3. Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with other AWS accounts. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data. Amazon Macie can also be used in combination with other AWS services, such as AWS Step Functions to take automated remediation actions. This can help you meet regulations, such as the General Data Protection Regulation (GDPR).
Amazon Macie primarily matches and discovers sensitive data such as personally identifiable information (PII).
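A toy illustration of the pattern-matching half of what Macie does: two simple regexes (far cruder than Macie's managed data identifiers) flag PII-looking strings in text. The patterns and categories are illustrative assumptions, not Macie's actual detectors.

```python
# Toy PII detector in the spirit of Macie's pattern matching. Real Macie
# combines ML with many managed data identifiers; these two regexes only
# sketch the idea.

import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text):
    """Return the PII categories detected in a piece of text."""
    return sorted(name for name, rx in PII_PATTERNS.items() if rx.search(text))

print(find_pii("Contact jane.doe@example.com, SSN 123-45-6789"))
```

In Macie's case, hits like these become findings surfaced in dashboards and alerts, optionally triggering automated remediation via services such as Step Functions.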
What is AWS Artifact (Security)?
AWS Artifact provides on-demand access to AWS’ security and compliance reports. Used to download AWS’ security & compliance documents.
Examples of these reports include Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports.
AWS Artifact enables you to download AWS security and compliance documents.
What are AWS Availability Zones?
One or more discrete data centres with redundant power, networking, and connectivity in an AWS Region.
Availability Zones (AZs) may consist of multiple data centres. For deployment of highly available applications.
Deploying your resources across multiple Availability Zones helps you maintain high availability of your infrastructure.
What is AWS Billing Console?
The AWS Billing console allows you to easily:
- Understand your AWS spending;
- View and pay invoices;
- Manage billing preferences and tax settings; and
- Access additional Cloud Financial Management services.
Quickly evaluate whether your monthly spend is in line with prior periods, forecast, or budget, and investigate and take corrective actions in a timely manner.
The Billing Console offers you a number of different ways to view and monitor your AWS usage.
What is AWS Budgets?
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.
Set custom budgets that alert you when you have exceeded your budgeted thresholds.
What is AWS CloudTrail?
AWS Monitoring and Logging Services.
AWS CloudTrail is a web service that records activity made on your account and delivers log files to an Amazon S3 bucket.
CloudTrail is for auditing (CloudWatch is for performance monitoring).
CloudTrail is about logging and saves a history of API calls for your AWS account.
Provides visibility into user activity by recording actions taken on your account.
Logs API calls made via:
- AWS Management Console.
- AWS SDKs.
- Command line tools.
- Higher-level AWS services (such as CloudFormation).
CloudTrail records account activity and service events from most AWS services and logs the following records:
- The identity of the API caller.
- The time of the API call.
- The source IP address of the API caller.
- The request parameters.
- The response elements returned by the AWS service.
CloudTrail is enabled by default.
CloudTrail is per AWS account.
You can consolidate logs from multiple accounts using an S3 bucket:
- Turn on CloudTrail in the paying account.
- Create a bucket policy that allows cross-account access.
- Turn on CloudTrail in the other accounts and use the bucket in the paying account.
You can integrate CloudTrail with CloudWatch Logs to deliver data events captured by CloudTrail to a CloudWatch Logs log stream.
CloudTrail log file integrity validation feature allows you to determine whether a CloudTrail log file was unchanged, deleted, or modified since CloudTrail delivered it to the specified Amazon S3 bucket.
API history enables security analysis, resource change tracking, and compliance auditing.
CloudTrail logs all API calls made to AWS services with credentials linked to your accounts.
Track user activity and API usage:
- security analysis
- resource tracking
- troubleshooting
CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure.
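The fields CloudTrail logs (caller identity, time of call, source IP, request parameters, response elements) appear directly in each JSON log record. The record below is a hand-made sample in CloudTrail's shape, trimmed to the fields listed on this card; all values are hypothetical.

```python
# Parse a CloudTrail-style record into a one-line audit summary. The
# sample record is fabricated for illustration but uses CloudTrail's real
# field names (eventTime, eventName, sourceIPAddress, userIdentity, ...).

sample_record = {
    "eventTime": "2023-05-01T12:00:00Z",
    "eventName": "RunInstances",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
    "requestParameters": {"instanceType": "t3.micro"},
    "responseElements": {"instancesSet": {"items": [{"instanceId": "i-0abc"}]}},
}

def summarise(record):
    """Reduce a CloudTrail record to a one-line audit summary."""
    who = record["userIdentity"].get("userName", "unknown")
    return (f'{record["eventTime"]} {who} called '
            f'{record["eventName"]} from {record["sourceIPAddress"]}')

print(summarise(sample_record))
```

Pipelines like this, run over the log files CloudTrail delivers to S3 (or streamed to CloudWatch Logs), are the basis of the security analysis and compliance auditing the card describes.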
What is AWS Consolidated Billing?
Track the combined costs of all of the AWS accounts in your organisation.
What is AWS Cost Explorer?
Visualise, understand, and manage your AWS costs and usage over time.
Additional information:
AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months and forecast how much you are likely to spend for the next twelve months. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. AWS Cost Explorer allows you to explore your AWS costs and usage at both a high level and at a detailed level of analysis, and empowers you to dive deeper using a number of filtering dimensions (e.g., AWS Service, Region, Linked Account, etc.)
What are AWS Edge Locations?
AWS edge locations are used by the CloudFront service to cache and serve content to end-users from a nearby geographical location to reduce latency. Edge locations are used by the CloudFront service to distribute content globally.
A datacentre owned by a trusted partner of AWS which has a direct connection to the AWS network. Allows low latency no matter where the end user is geographically.
Edge locations outnumber Availability Zones.
An edge location is where end users access services located at AWS. Edge locations are found in most of the major cities around the world and are specifically used by CloudFront (a CDN) to distribute content to end users and reduce latency. They act as a frontend for the services we access in the AWS Cloud. Edge Locations = local (e.g. in most cities) points of presence for performant delivery of content (Amazon CloudFront). Cache = Edge Location.
Benefits of using Edge Locations include:
- Edge locations are used by CloudFront to improve your end users’ experience when uploading files
- Edge locations are used by CloudFront to distribute content to global users with low latency
- Edge locations are used by CloudFront to cache the most recent responses
What is Amazon Elastic Beanstalk (Compute)?
Elastic Beanstalk is a PaaS for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
Elastic Beanstalk provides an answer to the question “how can I quickly get my app to the Cloud?”.
You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling, to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. Choose instance type, choose database, adjust autoscaling.
A developer centric view of deploying an application on AWS. Beanstalk = Platform as a Service (PaaS).
Developers can easily deploy the services and web applications developed with .NET, Java, PHP, Python and more without providing any infrastructure (Compute).
What is AWS Glue? (Analytics)
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics (Analytics).
Primary use case: ETL Service
When to use: Transform and move data to various destinations. Used to prepare and load data for analytics. Data sources can be S3, Redshift or another database. The Glue Data Catalog can be queried by Athena, EMR and Redshift Spectrum.
AWS Glue is a fully managed, pay-as-you-go, extract, transform, and load (ETL) service that automates the time-consuming steps of data preparation for analytics.
AWS Glue automatically discovers and profiles data via the Glue Data Catalogue, and recommends and generates ETL code to transform your source data into target schemas.
AWS Glue runs the ETL jobs on a fully managed, scale-out Apache Spark environment to load your data into its destination.
AWS Glue also allows you to set up, orchestrate, and monitor complex data flows.
You can create and run an ETL job with a few clicks in the AWS Management Console.
Use AWS Glue to discover properties of data, transform it, and prepare it for analytics.
Glue can automatically discover both structured and semi-structured data stored in data lakes on Amazon S3, data warehouses in Amazon Redshift, and various databases running on AWS.
It provides a unified view of data via the Glue Data Catalogue that is available for ETL, querying and reporting using services like Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum.
Glue automatically generates Scala or Python code for ETL jobs that you can further customize using tools you are already familiar with.
AWS Glue is serverless, so there are no compute resources to configure and manage.
What is AWS Identity and Access Management (AWS IAM)?
Tools to control access and authentication to your network-facing applications and resources.
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
Securely manage access to services and resources. IAM is free to use on top of other services.
IAM Permissions let you specify the desired access to AWS resources. Permissions are granted to IAM entities (users, user groups, and roles) and by default these entities start with no permissions. In other words, IAM entities can do nothing in AWS until you grant them your desired permissions.
AWS IAM is a global service.
AWS IAM is used to control access to AWS services or resources. It is not suited for authenticating large numbers of users to mobile applications.
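IAM permissions are expressed as JSON policy documents. The sketch below builds a minimal identity-based policy granting read-only access to one hypothetical S3 bucket; attached to a user, group, or role, it adds these permissions to an entity that, as the card notes, otherwise starts with none.

```python
# Build a minimal IAM policy document. The bucket name is a hypothetical
# placeholder; the field names (Version, Statement, Effect, Action,
# Resource) are IAM's actual policy grammar.

import json

def read_only_bucket_policy(bucket):
    return {
        "Version": "2012-10-17",   # current IAM policy language version
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",    # the bucket itself (ListBucket)
                f"arn:aws:s3:::{bucket}/*",  # objects in it (GetObject)
            ],
        }],
    }

policy = read_only_bucket_policy("my-example-bucket")
print(json.dumps(policy, indent=2))
```

Because IAM denies by default, everything not explicitly allowed here (writes, deletes, other buckets) remains forbidden to the entity the policy is attached to.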
What is an AWS Local Region?
An AWS Local Region is a single data centre designed to complement an existing AWS Region. Like all AWS Regions, AWS Local Regions are completely isolated from other AWS Regions.
What is the AWS Management Console?
AWS Management Console is a web application for managing Amazon Web Services.
You can interact with AWS services via the management console web interface. Can use a command line, SDK or code interface (web, terminal, code).
AWS Management Console lets you access and manage individual AWS resources through a web-based user interface.
What is AWS Marketplace?
AWS Marketplace is a digital catalogue with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.
What is the AWS Pricing Calculator?
The AWS Pricing Calculator is a tool to help predict monthly bills. It is used to create estimates.
AWS Pricing Calculator does not record any information about your AWS cost and usage.
AWS Pricing Calculator is just a tool for estimating your monthly AWS bill based on your expected usage.
For example, to estimate your monthly AWS CloudFront bill, you just enter your expected CloudFront usage (Data Transfer Out, Number of requests, etc.) and AWS Pricing Calculator provides an estimate of your monthly bill for CloudFront.
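What the Pricing Calculator does for the CloudFront example above is, at heart, arithmetic over your expected usage. The per-GB and per-request rates below are made-up placeholders, not real AWS prices.

```python
# Estimate a hypothetical monthly CloudFront bill from expected usage.
# Rates are illustrative placeholders only; real pricing varies by Region
# and tier and should be taken from the actual AWS Pricing Calculator.

RATE_PER_GB_OUT = 0.085        # hypothetical $/GB data transfer out
RATE_PER_10K_REQUESTS = 0.01   # hypothetical $ per 10,000 HTTPS requests

def estimate_cloudfront_monthly(gb_out, requests):
    transfer_cost = gb_out * RATE_PER_GB_OUT
    request_cost = (requests / 10_000) * RATE_PER_10K_REQUESTS
    return round(transfer_cost + request_cost, 2)

print(estimate_cloudfront_monthly(gb_out=500, requests=2_000_000))
```

Note the estimate depends only on the inputs you supply, matching the card's point that the calculator records nothing about your actual cost and usage.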
What is the AWS Health Dashboard?
Provides alerts and remediation guidance when AWS is experiencing events that may impact you.
AWS Health Dashboard (previously AWS Personal Health Dashboard) is the service that notifies AWS customers about abuse events once they are reported. AWS addresses many different types of potentially abusive activity such as phishing, malware, spam, and denial of service (DoS)/distributed denial of service (DDoS) incidents. When abuse is reported, AWS alerts customers so they can take the necessary remediation action. AWS Health Dashboard can also help customers build automation for handling abuse events and the actions to remediate them.
When customers receive abuse notifications via e-mail only, it is challenging to manage the alerts because e-mails could be lost or could be sent to incorrect contacts on the account, or they might not be reviewed in a timely manner. AWS addressed those challenges by surfacing abuse alerts in the AWS Health Dashboard where customers are already monitoring the health of their AWS environments.
The AWS Health Dashboard (previously AWS Personal Health Dashboard) is the single place to learn about the availability and operations of AWS services. You can view the overall status of all AWS services, and you can sign in to access a personalized view of the health of the specific services that are powering your workloads and applications. AWS Health Dashboard proactively notifies you when AWS experiences any events that may affect you, helping provide quick visibility and guidance to minimize the impact of events in progress, and plan for any scheduled changes, such as AWS hardware maintenance.
The AWS Health Dashboard is the single place to learn about the availability and operations of AWS services. You can view the overall status of all AWS services, and you can sign in to access a personalized view of the health of the specific services that are powering your workloads and applications. AWS Health Dashboard proactively notifies you when AWS experiences any events that may affect you, helping provide quick visibility and guidance to minimize the impact of events in progress and plan for any scheduled changes, such as AWS hardware maintenance. With AWS Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.
What is Amazon RDS?
Cost-efficient and resizable capacity.
Amazon Relational Database Service (Amazon RDS) is used to set up and operate a relational database in the cloud.
AWS RDS - Relational Database Services. Considered fault tolerant.
Set up, scale and operate a number of different types of DBs.
Can automatically mirror data to a different AZ for redundancy (Multi-AZ deployments). All day-to-day DB tasks are done by AWS; the user organisation only needs to manage the data.
Part of abstracted services for which AWS is responsible for the security & infrastructure layer. Customers are responsible for data that is saved on these resources.
Amazon RDS is not a storage service. Amazon RDS provides AWS-managed databases.
Amazon RDS provides six database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and Microsoft SQL Server. These engines are already installed and ready to be used. The customer does not install the actual database software on RDS, nor has access to the underlying host as it is a managed service.
Amazon RDS for Oracle does not automatically replicate data. Amazon RDS supports six database engines (Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server). Amazon Aurora is the only database engine that replicates data automatically across three Availability Zones. For other database engines, you must enable the “Multi-AZ” feature manually. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a standby copy of your data in a different Availability Zone. If a storage volume on your primary instance fails, Amazon RDS automatically initiates a failover to the up-to-date standby.
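Enabling the Multi-AZ standby described above is a single flag when creating an RDS instance. A sketch of boto3 `create_db_instance` parameters; the identifier, credentials, and sizes are hypothetical placeholders.

```python
# RDS instance creation parameters with Multi-AZ enabled. All values are
# hypothetical; the parameter names match boto3's create_db_instance API.

db_params = {
    "DBInstanceIdentifier": "app-db",      # hypothetical instance name
    "Engine": "mysql",                     # one of the six supported engines
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": 20,                # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me-now", # placeholder only
    "MultiAZ": True,  # provision a synchronous standby in another AZ
}
# With AWS credentials configured:
#   import boto3
#   rds = boto3.client("rds")
#   rds.create_db_instance(**db_params)
print(db_params["MultiAZ"])
```

With `MultiAZ` set, RDS maintains the standby and performs the automatic failover described above; with it unset (the default for non-Aurora engines), there is no cross-AZ replica.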
What is Amazon Redshift?
A managed data warehouse that lets you take large amounts of structured data from other relational databases and perform complex queries and analysis against that data.
Query services like Amazon Athena, data warehouses like Amazon Redshift, and sophisticated data processing frameworks like Amazon EMR, all address different needs and use cases.
Primary use case: Data Warehouse
When to use: Pull data from many sources, format and organize it, store it, and support complex, high speed queries that produce business reports.
Amazon Redshift provides the fastest query performance for enterprise reporting and business intelligence workloads, particularly those involving extremely complex SQL with multiple joins and sub-queries.
Amazon Redshift is a data warehouse service. Amazon Redshift provides a fully managed data warehouse in the AWS Cloud.
Amazon Redshift is a fully managed data warehouse service in the cloud. Redshift gives you access to structured data from the existing SQL, ODBC and JDBC. Amazon Redshift service is a data warehouse.
Currently, Amazon Redshift only supports Single-AZ deployments.
What are AWS Regions?
AWS Regions are separate geographic areas around the world that AWS uses to provide its Cloud Services, including Regions in North America, South America, Europe, Asia Pacific, and the Middle East. Choosing a specific AWS Region depends on its proximity to end-users, data sovereignty, and costs.
A physical location around the world where we cluster data centres.
One Region consists of three or more Availability Zones.
What is AWS Shield?
Shield provides DDoS protection for your resources.
AWS Shield is a Distributed Denial of Service (DDoS) protection service that applies to applications running in the AWS environment.
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield Standard is automatically enabled to all AWS customers and provides always-on detection and automatic inline mitigations that minimize application downtime and latency.
A Managed Distributed Denial of Service (DDoS) protection service.
A Managed DDoS protection service.
Standard - automatic protections for all customers at no charge.
- Automatic protection from most frequently occurring attacks
- Always on
- Inline attack mitigation - built-in automated techniques and avoids latency
AWS Shield provides always-on DDoS detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications.
What is an AWS Spot Instance?
EC2 instances that can be purchased at a significant discount with the knowledge that they may be shut down at any time.
Spare compute capacity in the AWS Cloud available to you at steep discounts compared to On-Demand prices.
Spot instances may be more cost effective than On-Demand instances, but AWS does not guarantee the availability of the instances. Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
Spot, Savings Plans, and Reserved instances are all cheaper than On-Demand instances.
What is AWS Trusted Advisor?
AWS Trusted Advisor can help optimize resources within the AWS cloud with respect to cost, security, performance, fault tolerance, and service limits. It does not provide historical data for configuration changes done to AWS resources.
AWS Trusted Advisor will provide notification on AWS resources created within the account for cost optimization, security, fault tolerance, performance, and service limits. It will not provide notification for scheduled maintenance activities performed by AWS on its resources.
An online tool that helps you follow AWS best practice. Not a human but an automated advisory service that checks your account against AWS best practices.
Security Groups Check is one of the core security checks provided by AWS Trusted Advisor. AWS Trusted Advisor continuously checks security groups for rules that allow unrestricted access to AWS resources. Unrestricted access increases opportunities for malicious activity (hacking, denial-of-service attacks, loss of data).
What is AWS WAF?
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
Filter malicious web traffic. AWS WAF is a web application firewall that lets you monitor the HTTP(S) requests that are forwarded to your protected web application resources. You can protect the following resource types:
Amazon CloudFront distribution
Amazon API Gateway REST API
Application Load Balancer
AWS AppSync GraphQL API
Amazon Cognito user pool
AWS WAF also lets you control access to your content.
AWS WAF allows you to control the inbound traffic only (the traffic that can reach your applications), but not the outbound traffic. Security Groups and Network Access Control Lists (Network ACLs) are the features you can use to control the inbound and outbound traffic.
AWS WAF is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that block malicious traffic.
You use WAF rules in a web ACL to block web requests based on criteria like the following:
- Scripts that are likely to be malicious. Attackers embed scripts that can exploit vulnerabilities in web applications. This is known as cross-site scripting (XSS).
- Malicious requests from a set of IP addresses or address ranges.
- SQL code that is likely to be malicious. Attackers try to extract data from your database by embedding malicious SQL code in a web request. This is known as SQL injection.
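A toy sketch of the matching logic those criteria express. This is NOT real AWS WAF rule syntax; the IP addresses and signature strings below are made up for illustration:

```python
# Hypothetical blocklists; real WAF rules use managed rule groups and
# far more sophisticated matching than these naive substring checks.
BLOCKED_IPS = {"203.0.113.7"}                  # example malicious source IP
SQLI_PATTERNS = ("' or 1=1", "union select")   # naive SQL-injection signatures
XSS_PATTERNS = ("<script>",)                   # naive cross-site-scripting signature

def evaluate(request):
    """Return 'BLOCK' if any rule matches the request, else 'ALLOW'."""
    body = request.get("body", "").lower()
    if request.get("source_ip") in BLOCKED_IPS:
        return "BLOCK"
    if any(p in body for p in SQLI_PATTERNS + XSS_PATTERNS):
        return "BLOCK"
    return "ALLOW"

print(evaluate({"source_ip": "198.51.100.1", "body": "hello"}))        # ALLOW
print(evaluate({"source_ip": "198.51.100.1", "body": "' OR 1=1 --"}))  # BLOCK
```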
What is the AWS Well-Architected Tool?
A free tool to review your architectures against the pillars of the AWS Well-Architected Framework.
What is Hybrid Cloud?
“Hybrid Cloud Architecture” can be defined as having three environments in play (public cloud, dedicated cloud, and on-premises infrastructure); the “hybrid” refers to the ability to interface between these different environments as necessary.
A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud.
The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure to extend, and grow, an organization’s infrastructure into the cloud while connecting cloud resources to the internal system.
What is Elastic Load Balancing or ELB?
Distribute incoming traffic: Elastic Load Balancing (ELB) is used to distribute traffic automatically across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
Elastic Load Balancing does not scale resources. Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
A service to automatically distribute traffic across multiple resources.
Elastic Load Balancers help with high availability: they distribute traffic (load), recognise unhealthy EC2 instances, and can send metrics to CloudWatch to drive triggers and notifications.
Elastic Load Balancing is a service that can be used to distribute requests to multiple instances.
What is the basic definition of Cloud Computing?
Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the Internet with pay-as-you-go pricing.
Cloud computing provides a simple way to access servers, storage, databases, and a broad set of application services over the Internet.
A cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application.
- Access services (IT resources) on demand
- Avoid large upfront investments
What is AWS Lambda?
Run code in response to events.
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
Lambda is not a storage service. It is a compute service to run your applications.
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services, or you can call it directly from any web or mobile app.
AWS Lambda allows you to run applications without managing or provisioning servers.
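A minimal Lambda-style handler, sketched locally. The event shape here is hypothetical; real event payloads depend on the triggering service (S3, API Gateway, and so on):

```python
# Lambda invokes a handler function with the triggering event as a dict.
def handler(event, context=None):
    # A made-up event field for illustration; real events carry
    # service-specific structures (e.g. S3 object records).
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally you can invoke it directly, just as Lambda would on an event:
print(handler({"name": "AWS"}))  # {'statusCode': 200, 'body': 'Hello, AWS!'}
```

The key idea: you supply only the function; provisioning, scaling, and the runtime environment are Lambda's job.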
What does Amazon SNS do?
Amazon SNS is a publish/subscribe messaging service that enables you to decouple microservices, distributed systems, and serverless applications.
Both Amazon SNS and Amazon EventBridge can be used to implement the publish-subscribe pattern. Amazon EventBridge includes direct integrations with software as a service (SaaS) applications and other AWS services. It’s ideal for publish-subscribe use cases involving these types of integrations.
Alerting. Messages are published to topics.
What does AWS Fargate do?
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).
AWS Fargate allows customers to run containers without having to manage servers or clusters.
A “serverless” container compute engine where you only pay for the resources required to run your containers. Suited for customers who do not want to worry about managing servers, handling capacity planning, or figuring out how to isolate container workloads for security.
AWS customers who use AWS Fargate to run their containers do not have control over the underlying infrastructure. AWS Fargate is a serverless compute engine for Amazon ECS that allows customers to run containers without having to manage servers or clusters. AWS Fargate launch type is more suitable for customers who want to run containers without managing the underlying infrastructure.
Fargate runs serverless containers and is well suited to spiky workloads. https://aws.amazon.com/fargate/
Part of abstracted services for which AWS is responsible for the security & infrastructure layer. Customers are responsible for data that is saved on these resources.
What is Amazon ECS?
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.
Run applications on a managed cluster. Amazon Elastic Container Service is used to run containerized applications in AWS.
Amazon Elastic Container Service (ECS) is the service that can be used to run and manage Docker containers in AWS.
Amazon ECS has two main launch types: the Fargate launch type (serverless) and the EC2 launch type (server-based).
Elastic Container Service, A Container Orchestrator.
On both Amazon EKS and Amazon ECS, you have the option of running your containers on the following compute options:
AWS Fargate — a “serverless” container compute engine where you only pay for the resources required to run your containers. Suited for customers who do not want to worry about managing servers, handling capacity planning, or figuring out how to isolate container workloads for security.
EC2 instances — offers the widest choice of instance types, including processor, storage, and networking. Ideal for customers who want to manage or customize the underlying compute environment and host operating system.
AWS Outposts — run your containers using AWS infrastructure on premises for a consistent hybrid experience. Suited for customers who require local data processing, data residency, and hybrid use cases.
AWS Local Zones — an extension of an AWS Region. Suited for customers who need the ability to place resources in multiple locations closer to end users.
AWS Wavelength — ultra-low-latency mobile edge computing. Suited for 5G applications, interactive and immersive experiences, and connected vehicles.
What is Amazon EKS? (Compute)
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that allows you to use Kubernetes to run and scale containerized applications in the cloud or on-premises.
Kubernetes is an open-source container orchestration system that allows you to deploy and manage containerized applications at scale.
AWS handles provisioning, scaling, and managing the Kubernetes instances in a highly available and secure configuration. This removes a significant operational burden and allows you to focus on building applications instead of managing AWS infrastructure.
Elastic Kubernetes Service. Also a Container Orchestrator.
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane.
EKS is certified Kubernetes conformant, so existing applications running on upstream Kubernetes are compatible with Amazon EKS.
EKS automatically manages the availability and scalability of the Kubernetes control plane nodes that are responsible for starting and stopping containers, scheduling containers on virtual machines, storing cluster data, and other tasks.
EKS automatically detects and replaces unhealthy control plane nodes for each cluster.
Generally available; check the AWS Region table for current availability.
On both Amazon EKS and Amazon ECS, you have the option of running your containers on the following compute options:
AWS Fargate — a “serverless” container compute engine where you only pay for the resources required to run your containers. Suited for customers who do not want to worry about managing servers, handling capacity planning, or figuring out how to isolate container workloads for security.
EC2 instances — offers the widest choice of instance types, including processor, storage, and networking. Ideal for customers who want to manage or customize the underlying compute environment and host operating system.
AWS Outposts — run your containers using AWS infrastructure on premises for a consistent hybrid experience. Suited for customers who require local data processing, data residency, and hybrid use cases.
AWS Local Zones — an extension of an AWS Region. Suited for customers who need the ability to place resources in multiple locations closer to end users.
AWS Wavelength — ultra-low-latency mobile edge computing. Suited for 5G applications, interactive and immersive experiences, and connected vehicles.
https://aws.amazon.com/eks/features/
https://aws.amazon.com/kubernetes/
What is the difference between SNS and SQS?
SNS is a service to notify
SQS is a service to hold information
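The contrast can be sketched in a few lines of Python (a toy model, not the real SNS/SQS APIs): SNS pushes each published message to every subscriber immediately, while SQS holds messages in a queue until a consumer polls for them.

```python
from collections import deque

# SNS-style topic: fan-out to all subscribers on publish (push model).
subscribers = []

def subscribe(fn):
    subscribers.append(fn)

def publish(msg):
    for fn in subscribers:
        fn(msg)  # every subscriber is notified immediately

# SQS-style queue: messages wait until received (pull model).
queue = deque()

def send_message(msg):
    queue.append(msg)

def receive_message():
    return queue.popleft() if queue else None

received = []
subscribe(received.append)
publish("order-created")       # delivered to the subscriber right away
send_message("order-created")  # sits in the queue...
print(received)                # ['order-created']
print(receive_message())      # ...until a consumer polls: order-created
```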
How to go about determining the best Region for a customer?
Understand if there are any Compliance requirements and the proximity to end customers.
What is Amazon CloudFront?
CloudFront is a content distribution system.
AWS Content Delivery and DNS Services
This category of AWS services includes services for caching content around the world and providing intelligent Domain Name System (DNS) services for your applications.
Amazon CloudFront is a content delivery network (CDN) that allows you to store (cache) your content at “edge locations” located around the world.
This allows customers to access content more quickly and provides security against DDoS attacks.
CloudFront can be used for data, videos, applications, and APIs.
CloudFront benefits:
Cache content at Edge Location for fast distribution to customers.
Built-in Distributed Denial of Service (DDoS) attack protection.
Integrates with many AWS services (S3, EC2, ELB, Route 53, Lambda).
Origins and Distributions:
An origin is the origin of the files that the CDN will distribute.
Origins can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53 – they can also be external (non-AWS).
To distribute content with CloudFront you need to create a distribution.
CloudFront uses Edge Locations and Regional Edge Caches:
An edge location is the location where content is cached (separate from AWS Regions/AZs).
Requests are automatically routed to the nearest edge location.
Regional Edge Caches are located between origin web servers and global edge locations and have a larger cache.
Regional Edge caches aim to get content closer to users.
The diagram below shows where Regional Edge Caches and Edge Locations are placed in relation to end users
Amazon CloudFront lets you securely deliver data, videos, applications, and APIs to your global customers with low latency and high transfer speed.
CloudFront is a Caching service that is used to deliver content to end users with low latency.
It caches content close to the end customers. CloudFront = Cache. Caching data that is mostly used or viewed close to the end users.
An edge-location content delivery mechanism that enables low-latency delivery to customers (because content is cached closer to them). Data transfer out through CloudFront has lower rates.
CloudFront is therefore essentially a caching and Content Delivery Network (CDN) service, not a storage service. It does not have the concept of volumes or storage classes.
It speeds up the sharing of your dynamic and static web content such as .css, .html, and image files to your users (Network). Amazon CloudFront is a global service. CloudFront is not for storage.
What is CDN?
Content Delivery Network
What is Amazon DynamoDB?
A serverless key-value database providing fast and predictable performance.
What is Amazon Aurora?
A relational database engine. Available in RDS. Compatible with MySQL and PostgreSQL.
Amazon Aurora is a relational database service, not a cost management service. The name of the service that performs this function is AWS Cost Explorer.
Amazon Aurora is a database service.
What is a Standard Reserved Instance?
Provides you with a significant discount (up to 75%) compared to On-Demand Instance pricing, and can be purchased for a 1-year or 3-year term. You can pay upfront for bigger discounts, or monthly.
Using Reserved instances requires a contract of at least one year. Amazon EC2 Reserved Instances provide a significant discount (up to 75%) compared to On-Demand pricing. Reserved instances can be purchased for a one or three-year term so you are committing to pay for them throughout this time period even if you don’t use them.
Spot, Savings Plans, and Reserved EC2 instances are all cheaper than On-Demand instances.
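A rough break-even sketch with hypothetical prices (real Reserved Instance pricing varies by instance type, term, and payment option; the figures below are made up):

```python
# Made-up figures for one example instance type.
on_demand_hourly = 0.10
ri_effective_hourly = 0.025          # ~75% discount, committed for the term
hours_per_year = 24 * 365

on_demand_year = on_demand_hourly * hours_per_year
ri_year = ri_effective_hourly * hours_per_year

# You pay for the Reserved term even when idle, so the RI only wins if the
# instance would otherwise run more than this fraction of the year On-Demand:
break_even = ri_year / on_demand_year
print(f"On-Demand: ${on_demand_year:.0f}/yr, Reserved: ${ri_year:.0f}/yr, "
      f"break-even utilisation: {break_even:.0%}")
```

This is why Reserved Instances suit steady-state workloads, while intermittent workloads are often cheaper On-Demand or on Spot.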
What is a Convertible Reserved Instance?
If you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term. Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term.
Spot, Savings Plans, and Reserved instances are all cheaper than On-Demand instances.
What is an On-Demand Instance?
With On-Demand instances you only pay for the EC2 instances you use.
You can configure and launch your EC2 instances in minutes. There is no free capacity for application testing. You can only have specific types of instances for free during the free tier period (12 months).
With On-Demand instances, you pay for compute capacity by the hour or the second depending on which instances you run.
No longer-term commitments or upfront payments are needed.
You can increase or decrease your compute capacity depending on the demands of your application and only pay for what you use.
The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.
On-Demand instances also remove the need to buy “safety net” capacity to handle periodic traffic spikes.
What is ECE?
Elastic Cloud Enterprise – a product from Elastic for deploying and managing Elasticsearch clusters; it is not an AWS service.
What is Dedicated Cloud?
Closed to the internet, hosted on provider hardware.
What is AWS Lake Formation?
AWS’ big data lake platform.
AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralised, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake lets you break down data silos and combine different types of analytics to gain insights and guide better business decisions.
What is LocalStack?
A fully functional local AWS cloud stack. Develop and test your cloud and serverless apps offline.
What is SageMaker?
A Cloud Machine Learning platform to create, train, and deploy machine learning models in the cloud.
What are the AWS Leadership Principles?
Customer obsession
Learn and be curious
Earn trust
Dive deep
Invent and simplify
Think big
Bias for action
Deliver results
What does AWS service breadth and depth involve?
Analytics
Application Integration
AR and VR
AWS Cost Management
AWS Marketplace
Blockchain
Business Applications
Compute
Customer Engagement
Database
Desktop and App Streaming
Developer Tools
Game Tech
Internet of Things
Machine Learning
Management and Governance
Media Services
Migration and Transfer
Mobile
Network and Content Delivery
Robotics
Satellite
Security, Identity, and Compliance
Storage
What is the AWS Shared Responsibility Model?
Customer - Responsible for security “IN” the Cloud
AWS - Responsible for security “OF” the Cloud. AWS responsible for anything physical.
AWS is responsible for the security and compliance of its physical infrastructure, including the PCI DSS requirements.
What is a Region?
A geographic region containing multiple AZs. Every Region has one or more AZs, and the AZs within a Region are located within 100 km of each other (e.g. around a big city).
What are AWS Compute services?
Develop, deploy, run, and scale workloads in the AWS Cloud.
What is Amazon VPC?
A Virtual Private Cloud (VPC) is a virtual network on AWS dedicated to your AWS account. A VPC spans all the Availability Zones in the Region. It can be divided into public and private subnetworks (subnets).
Therefore, Amazon VPC is a logically isolated network of the AWS Cloud.
Amazon VPC:
- Logically isolated network
- Created per Account per Region
- Spans a single Region
- Can use all AZs within one Region
- Can peer with other VPCs
- Internet and VPN Gateways
- Numerous security mechanisms
What is a subnet in AWS?
A subnet is a range of IP addresses within a VPC.
It allows you to partition your network inside your VPC (Availability Zone resource).
A subnet is a section in a VPC in which you can place groups of isolated resources. A subnet can be public (accessible from outside the VPC) or private.
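Since a subnet is just a range of IP addresses, carving a VPC's CIDR block into subnets can be demonstrated with Python's standard `ipaddress` module. The 10.0.0.0/16 range below is a common example VPC CIDR, and the public/private labels are illustrative:

```python
import ipaddress

# An example VPC address range (CIDR), partitioned into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))   # 256 possible /24 subnets in a /16 VPC
print(subnets[0])     # 10.0.0.0/24 -- e.g. a public subnet in one AZ
print(subnets[1])     # 10.0.1.0/24 -- e.g. a private subnet in the same AZ
```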
What is AWS DirectConnect?
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS.
Using AWS Direct Connect, you can establish private connectivity between AWS and your datacentre, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
Establish a physical connection between on-premises and AWS. The connection is private, secure, and fast. Goes over the private network and takes at least a month to establish.
Dedicated fibre. It does not use the internet. A direct, dedicated connection between on-premises infrastructure and an AWS Region. More secure.
What is a Security Group in AWS?
Security groups act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.
Security groups cannot be used to protect resources outside of AWS.
The fundamental layer of network security in AWS. Security groups control how traffic is allowed into or out of EC2 instances.
They contain allow rules only, and rules can reference traffic by IP address or by security group.
A Security Group is a virtual firewall for an EC2 instance. It protects the EC2 instance and filters traffic. Security Groups perform stateful packet filtering.
Changes to a Security Group take effect immediately on all instances associated with it.
What is a Network ACL in AWS?
Network access control lists (Network ACLs) act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level. Network ACLs cannot be used to protect resources outside of AWS.
A virtual firewall for a subnet. Network ACLs perform stateless packet filtering.
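The stateful-versus-stateless distinction can be sketched as a toy model (not real AWS configuration): a stateful filter remembers connections the instance initiated and allows the return traffic automatically, while a stateless filter evaluates every packet against its rules in isolation.

```python
# Toy contrast; port numbers stand in for full connection tuples.

class StatefulFirewall:
    """Security-Group-like: tracks outbound connections, so matching
    inbound responses are allowed without an explicit rule."""
    def __init__(self, allow_inbound):
        self.allow_inbound = set(allow_inbound)
        self.tracked = set()       # connections we initiated

    def outbound(self, port):
        self.tracked.add(port)     # remember the connection
        return True                # SGs allow all outbound by default

    def inbound(self, port):
        return port in self.allow_inbound or port in self.tracked

class StatelessFirewall:
    """NACL-like: every packet is checked against the rules; response
    traffic needs its own explicit inbound rule."""
    def __init__(self, allow_inbound):
        self.allow_inbound = set(allow_inbound)

    def inbound(self, port):
        return port in self.allow_inbound

sg = StatefulFirewall(allow_inbound={443})
sg.outbound(5432)            # instance opens a database connection
print(sg.inbound(5432))      # True: response allowed automatically (stateful)

nacl = StatelessFirewall(allow_inbound={443})
print(nacl.inbound(5432))    # False: would need an explicit rule (stateless)
```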
What does a Cache translate to?
Cache = Edge Location
What is AWS Batch?
Fully managed batch processing at any scale (Compute).
What is the AWS SDK?
AWS Software Development Kit. Use it to call AWS services such as S3 from your application code.
What is AWS Lambda?
AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you.
These events may include changes in state or an update, such as a user placing an item in a shopping cart on an ecommerce website.
Part of serverless computing. Holds code that can be triggered in response to an event. For example a change to a file in an S3 bucket, by an API call etc.
What is the AWS Well Architected Framework Pillar: Performance Efficiency?
The Performance Efficiency pillar includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
What is the AWS Well Architected Framework Pillar: Cost Optimisation
The Cost Optimisation pillar includes the ability to run systems to deliver business value at the lowest price point.
What is the AWS Well Architected Framework Pillar: Reliability
The Reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. A resilient workload quickly recovers from failures to meet business and customer demand. Key topics include distributed system design, recovery planning, and how to handle change.
What is the AWS Well Architected Framework Pillar: Security?
The Security pillar encompasses the ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security.
The Security Pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
The security pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. Key topics include confidentiality and integrity of data, identifying and managing who can do what with privilege management, protecting systems, and establishing controls to detect security events.
What is the AWS Well Architected Framework Pillar: Operational Excellence?
The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. Key topics include automating changes, responding to events, and defining standards to manage daily operations.
What is Amazon CloudSearch?
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost effective to set up, manage, and scale a search solution for your website or application (Analytics).
What is the AWS Storage Gateway (Storage)?
Seamless and secure integration.
Hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases.
AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration.
What is Amazon S3 Glacier?
Data archiving and backup.
A low-cost, high-latency storage service (Storage). S3 Glacier is more expensive than S3 Glacier Deep Archive. You can use the Amazon S3 Glacier storage classes to back up large amounts of data at very low cost.
You can store virtually any kind of data in any format (using Amazon S3 Glacier), but your costs will be lower if you aggregate and compress your data.
Glacier cannot be attached to EC2 instances. Glacier is a storage class of S3.
Glacier is not for frequently accessed data.
The storage service that AWS customers can use to attach storage volumes to an Amazon EC2 instance is Amazon EBS. An Amazon EBS volume is a durable, block-level storage device that you can attach to your EC2 instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. AWS recommends Amazon EBS for data that must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for operating systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage.
What is Amazon Simple Storage Service (S3) (Storage)?
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere – web sites and mobile apps, corporate applications, and data from IoT sensors or devices.
Durable, scalable object storage.
99.99% availability
99.999999999% durability
You can store any type of file in S3.
S3 is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.
S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements.
S3 gives customers flexibility in the way they manage data for cost optimization, access control, and compliance.
Typical use cases include:
Backup and Storage – Provide data backup and storage services for others.
Application Hosting – Provide services that deploy, install, and manage web applications.
Media Hosting – Build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.
Software Delivery – Host your software applications that customers can download.
Static Website – you can configure a static website to run from an S3 bucket.
S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3. And Amazon S3 is the most supported cloud storage service available, with integration from the largest community of third-party solutions, systems integrator partners, and other AWS services.
Files can be anywhere from 0 bytes to 5 TB.
There is unlimited storage available.
Files are stored in buckets.
Buckets are root level folders.
Any subfolder within a bucket is known as a “folder” (under the hood, S3 stores a flat structure of keys, and folders are simulated by key prefixes).
S3 uses a universal namespace, so bucket names must be globally unique.
There are seven S3 storage classes.
S3 Standard (durable, immediately available, frequently accessed).
S3 Intelligent-Tiering (automatically moves data to the most cost-effective tier).
S3 Standard-IA (durable, immediately available, infrequently accessed).
S3 One Zone-IA (lower cost for infrequently accessed data with less resilience).
S3 Glacier Instant Retrieval (data that is rarely accessed and requires retrieval in milliseconds).
S3 Glacier Flexible Retrieval (archived data, retrieval times in minutes or hours).
S3 Glacier Deep Archive (lowest cost storage class for long term retention).
When you successfully upload a file to S3 you receive an HTTP 200 code.
S3 is a persistent, highly durable data store.
Persistent data stores are non-volatile storage systems that retain data when powered off.
This contrasts with transient data stores and ephemeral data stores which lose the data when powered off.
Amazon S3 is a serverless data store service that stores customer data without requiring management of underlying storage infrastructure. Amazon S3 enables customers to offload the administrative burdens of operating and scaling storage to AWS so that they do not have to worry about hardware provisioning, operating system patching, or maintenance of the platform.
AWS is responsible for most of the configuration and management tasks, but customers are still responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.
A serverless service is a service that does not require the customer to manage the infrastructure layer, the operating system layer, or the platform layer. A serverless service can be a compute service such as AWS Lambda, an integration service such as Amazon SQS, or a data store service such as Amazon S3.
Infinitely scaling storage. Allows people to store objects (files) in buckets (directories).
Simple Storage Service (Storage).
Managed cloud service
Storage not associated with any particular server/EC2 instance
Store unlimited number of objects
Fine grained security control (S3 bucket, object level too)
Objects have a key - common approach is to use keys that look like a folder+file structure. Must be suitable for use in URLs.
Considered fault tolerant.
https://awsexamplebucket.s3-us-west-2.amazonaws.com/docs/hello.txt
awsexamplebucket - bucket name
s3-us-west-2.amazonaws.com - Region-specific endpoint
docs/hello.txt - object key
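The same URL anatomy can be pulled apart with Python's standard `urllib.parse`:

```python
from urllib.parse import urlparse

# The example S3 URL, split into bucket name, Region endpoint, and object key.
url = "https://awsexamplebucket.s3-us-west-2.amazonaws.com/docs/hello.txt"
parsed = urlparse(url)
bucket, _, endpoint = parsed.netloc.partition(".")  # split on the first dot
key = parsed.path.lstrip("/")

print(bucket)    # awsexamplebucket
print(endpoint)  # s3-us-west-2.amazonaws.com
print(key)       # docs/hello.txt
```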
USE OF BUCKETS IN S3
Buckets are used in S3 storage. Amazon S3 Bucket ACLs enable you to manage access to buckets. Each bucket has an ACL attached to it as a sub-resource. You can use Bucket ACLs to grant basic read/write permissions to other AWS accounts.
Note: You have three options to control access to an Amazon S3 Bucket:
1- IAM Policies
2- Bucket Policies
3- Bucket ACLs
Data is secured using ACLs and bucket policies.
What is Amazon Elastic File System (EFS)?
File storage for Amazon EC2 instances.
Amazon Elastic File System (Amazon EFS) provides a fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
A fully managed elastic NFS file system (Storage).
Amazon EFS is a storage service.
What is Amazon EBS (Elastic Block Store)?
A virtualized partition of a physical storage drive that’s not directly connected to the EC2 instance it’s associated with.
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud.
Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.
Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while paying a low price for only what you provision.
Raw block-level storage attached to an Amazon EC2 instance (Storage). Amazon EBS volumes are not suitable for data archiving.
Amazon EBS is not a cost-effective solution for storing backups. Amazon EBS is a block level storage that can be used as a disk drive for Amazon EC2 or Amazon RDS instances. Amazon EBS is designed for application workloads that benefit from fine tuning for performance and capacity. Typical use cases of Amazon EBS include Big Data analytics engines (like the Hadoop/HDFS ecosystem and Amazon EMR clusters), relational and NoSQL databases (like Microsoft SQL Server and MySQL or Cassandra and MongoDB), stream and log processing applications (like Kafka and Splunk), and data warehousing applications (like Vertica and Teradata).
Amazon EBS does not use buckets.
Amazon EBS is a storage service, not a compute service.
There are no reservations in Amazon EBS independent of Amazon EC2.
What is Amazon VPC?
Virtual Private Cloud - VPC (Networking).
Networking AWS service - lives within a region
A private virtual network in the AWS cloud that uses the same concepts as on-premises networking.
Subnets to divide up the VPC. Subnets are where EC2 instances reside, but they do not actually control ingress and egress traffic themselves.
Allows VPC to span multiple AZs
Routing tables
Internet gateway (IGW) + NAT gateway
Network ACLs
Allows complete control of network configuration (isolate and expose resources inside VPC)
Offers several layers of security controls (allow/deny specific internet and internal traffic)
Other AWS services deploy into the VPC (inherent security built in)
What is Amazon Route 53?
Amazon Route 53 helps AWS Customers improve their application’s performance for a global audience. Amazon Route 53 latency-based policy routes user requests to the closest AWS Region, which reduces latency and improves application performance.
A DNS Web Service (Networking). Map a name to a destination on AWS. Helps with High Availability through DNS, geolocation routing, health checks, latency-based routing, and round robin. Route 53 is a global service.
What is Amazon Direct Connect?
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS (Network).
What is Amazon CloudWatch?
Amazon CloudWatch is mainly used to monitor the utilization of your AWS resources. A service that collects and monitors log, metric, and event data from AWS and non-AWS services and applications. You can search logs, visualize metric data, create alarms, and trigger actions based on specific events. Has been described as “Standard Out on Steroids”.
AWS Monitoring and Logging Services
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS.
CloudWatch is for performance monitoring (CloudTrail is for auditing).
Used to collect and track metrics, collect, and monitor log files, and set alarms.
Automatically react to changes in your AWS resources.
Monitor resources such as:
EC2 instances.
DynamoDB tables.
RDS DB instances.
Custom metrics generated by applications and services.
Any log files generated by your applications.
Amazon CloudWatch is a monitoring service for resource utilization. Gain system-wide visibility into resource utilization.
CloudWatch monitoring includes application performance.
Monitor operational health.
CloudWatch is accessed via API, command-line interface, AWS SDKs, and the AWS Management Console.
CloudWatch integrates with IAM.
Amazon CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing system, application, and custom log files.
CloudWatch Logs can be used for real time application and system monitoring as well as long term log retention.
CloudWatch Logs keeps logs indefinitely by default.
CloudTrail logs can be sent to CloudWatch Logs for real-time monitoring.
CloudWatch Logs metric filters can evaluate CloudTrail logs for specific terms, phrases, or values.
CloudWatch retains metric data as follows:
Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
Data points with a period of 60 seconds (1 minute) are available for 15 days.
Data points with a period of 300 seconds (5 minutes) are available for 63 days.
Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).
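The retention schedule above can be sketched as a small lookup in Python (a simplification: CloudWatch actually rolls data up into coarser periods over time rather than simply discarding it):

```python
def retention(period_seconds: int) -> str:
    """Return how long CloudWatch keeps data points of the given period,
    following the retention rules listed above."""
    if period_seconds < 60:
        return "3 hours"        # high-resolution custom metrics
    if period_seconds < 300:
        return "15 days"        # 1-minute data points
    if period_seconds < 3600:
        return "63 days"        # 5-minute data points
    return "455 days"           # 1-hour data points (15 months)

print(retention(1))     # → 3 hours
print(retention(60))    # → 15 days
print(retention(3600))  # → 455 days
```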
Dashboards allow you to create, customize, interact with, and save graphs of AWS resources and custom metrics.
Alarms can be used to monitor any Amazon CloudWatch metric in your account.
Events are a stream of system events describing changes in your AWS resources.
Logs help you to aggregate, monitor and store logs.
Basic monitoring = 5 mins (free for EC2 Instances, EBS volumes, ELBs and RDS DBs).
Detailed monitoring = 1 min (chargeable).
Metrics are provided automatically for several AWS products and services.
There is no standard metric for memory usage on EC2 instances.
A custom metric is any metric you provide to Amazon CloudWatch (e.g. time to load a web page or application performance).
Options for storing logs:
CloudWatch Logs.
Centralized logging system (e.g. Splunk).
Custom script and store on S3.
Do not store logs on non-persistent disks:
Best practice is to store logs in CloudWatch Logs or S3.
CloudWatch Logs subscription can be used across multiple AWS accounts (using cross account access).
Amazon CloudWatch uses Amazon SNS to send e-mail.
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
Monitoring service.
Distributed statistics-gathering system.
Tracks metrics of your infrastructure. Can create and use custom metrics.
Observability of your AWS resources and applications on AWS and on-premises (Monitoring).
Near real-time stream of system events that describe changes in AWS resources. Collect and track metrics (e.g., standard ones like CPU utilisation or custom ones from your application), collect and monitor log files, set alarms and automatically react to changes (e.g., through SNS or trigger an autoscaling event). Example use cases include responding to state changes in AWS resources.
Amazon CloudWatch dashboards are used to monitor AWS system resources and infrastructure services, and are customizable and present information graphically.
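A minimal sketch of the alarm behaviour described above – a metric breaches a threshold for a number of consecutive periods and the alarm moves to the ALARM state. Real CloudWatch alarms support more comparison operators and configuration; the CPU samples here are hypothetical:

```python
def alarm_state(datapoints, threshold, periods):
    """Simplified CloudWatch-style alarm with a 'greater than' comparison:
    ALARM only if the last `periods` datapoints all breach the threshold."""
    recent = datapoints[-periods:]
    if len(recent) < periods:
        return "INSUFFICIENT_DATA"
    return "ALARM" if all(d > threshold for d in recent) else "OK"

cpu = [40, 55, 82, 91, 88]          # hypothetical CPU-utilisation samples (%)
print(alarm_state(cpu, 80, 3))      # → ALARM
```

In AWS the ALARM transition would then trigger an action, such as publishing to an SNS topic to send the e-mail notification mentioned above.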
What is AWS Snowball?
With AWS Snowball (Snowball), you can transfer hundreds of terabytes or petabytes of data between your on-premises data centres and Amazon Simple Storage Service (Amazon S3).
Snowball can import to S3 or export from S3.
Import/export is when you send your own disks into AWS – this is being deprecated in favour of Snowball.
Snowball must be ordered from and returned to the same region.
To speed up data transfer it is recommended to run simultaneous instances of the AWS Snowball Client in multiple terminals and transfer small files as batches.
Bulk data transfer, edge storage, and edge compute.
Uses a secure storage device for physical transportation.
AWS Snowball Client is software that is installed on a local computer and is used to identify, compress, encrypt, and transfer data.
Uses 256-bit encryption (managed with the AWS KMS) and tamper-resistant enclosures with TPM.
AWS uses storage transportation devices, like AWS Snowball and Snowmobile to allow companies to transfer data to the cloud.
Petabyte-scale data transport with on-board storage and compute capabilities. It is well suited for local storage and large-scale data transfer (Migration & Transfer).
What is AWS Database Migration Service (AWS DMS)?
AWS Database Migration Service helps you migrate databases to AWS quickly and securely.
The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
AWS Database Migration Service supports homogenous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora.
With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
https://aws.amazon.com/dms/
AWS Database Migration Service is used to migrate your data to and from most of the widely used commercial and open source databases.
Simplify migration of a database to AWS. Simple to use, minimal downtime, supports widely used databases. Low cost. Fast and easy to setup. Reliable. Replication Instance, Endpoint and Task are the three main components of DMS. Logging is not enabled by default.
What is Amazon SQS?
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
It can be considered fault tolerant as it is a distributed messaging system that can ensure the queue is always available.
Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
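The decoupling idea can be illustrated with Python's in-process queue module standing in for an SQS queue (a conceptual sketch only – real SQS is a distributed, network-accessible service used via its API):

```python
import queue

# Neither component calls the other directly; both only talk to the queue,
# so producer and consumer can scale and fail independently.
orders = queue.Queue()

def producer():
    for order_id in range(3):
        orders.put({"order_id": order_id})   # like sending a message to SQS

def consumer():
    processed = []
    while not orders.empty():
        msg = orders.get()                   # like receiving a message
        processed.append(msg["order_id"])
        orders.task_done()                   # like deleting it after processing
    return processed

producer()
print(consumer())    # → [0, 1, 2]
```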
What is Amazon SNS?
Amazon SNS is a web service provided by AWS. SNS stands for Simple Notification Service, and it manages and delivers messages or notifications to users and clients from any cloud platform (Messaging).
Simple Notification Service. Flexible pub/sub messaging and mobile communications service. Coordinates delivery of messages to endpoints/clients. Decouple and scale microservices, distributed systems and serverless applications.
SNS is not used for monitoring. The service can be used in conjunction with CloudWatch to monitor and send notifications to your Email address. Using Amazon CloudWatch alarms, you can set up metric thresholds and send alerts to Amazon Simple Notification Service (SNS). SNS can send notifications using e-mail, HTTP(S) endpoints, and Short Message Service (SMS) messages to mobile phones.
What is AWS OpsWorks?
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
Lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.
Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. The AWS service that uses Chef and Puppet is AWS OpsWorks.
What is AWS Config?
AWS Config is a management service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
Config continually monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
AWS config is used for evaluating configuration on the resources deployed in AWS cloud. It will not help for creating portfolios of resources for quick deployment.
AWS Config cannot be used to monitor or set thresholds for your CPU usage. AWS Config enables you to review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.
What is AWS CloudFormation?
AWS CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
AWS CloudFormation provides templates to provision and configure resources in AWS. AWS CloudFormation is a service for provisioning AWS resources using templates.
It provides a declarative way of outlining your AWS Infrastructure, for any resources (most of them are supported).
Then CloudFormation creates those for you in the right order with the exact configuration that you specify.
AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment.
Simplifies the task of repeatedly and predictably creating groups of related resources for your applications.
Automates resource provisioning
Create, update and delete resources (provisioned resources known as stacks)
CloudFormation reads template files (JSON/YAML) and creates resources accordingly
Templates can have conditions (variables) - useful for using templates for different environments (test vs production)
This is an example of infrastructure as code. Can use CloudFormation Designer to help create the template.
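A minimal template can be sketched as a JSON document, here built and checked with Python's json module. The bucket name is hypothetical; real templates may be JSON or YAML and support many more resource types and features such as parameters and conditions:

```python
import json

# A minimal CloudFormation template: one stack resource, an S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Provision a single S3 bucket (infrastructure as code).",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-logs-bucket"},  # hypothetical name
        }
    },
}

body = json.dumps(template, indent=2)   # the text file you would upload as a stack
print(json.loads(body)["Resources"]["MyBucket"]["Type"])  # → AWS::S3::Bucket
```

CloudFormation reads this declarative description and creates, updates, or deletes the resources in the right order.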
What is Amazon AppStream 2.0?
Stream desktop applications securely to a browser (end user computing). AppStream 2.0 helps you move your existing desktop applications to AWS so that users can access them from anywhere.
Amazon AppStream 2.0 doesn’t provide any cost information.
Interactively streaming your application from the cloud provides several benefits:
Instant-on: Streaming your application with Amazon AppStream 2.0 lets your users start using your application immediately, without the delays associated with large file downloads and time-consuming installations.
Remove device constraints: You can leverage the compute power of AWS to deliver experiences that wouldn’t normally be possible due to the GPU, CPU, memory, or physical storage constraints of local devices.
Multi-platform support: You can take your existing applications and start streaming them to a computer without any modifications.
Easy updates: Because your application is centrally managed by Amazon AppStream 2.0, updating your application is as simple as providing a new version of your application to Amazon AppStream 2.0.
What is AWS X-Ray?
AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization.
A developer tool. Analyse and debug production, distributed applications.
What is AWS SDK?
AWS Software Development Kit? Call other services such as S3.
What is Amazon Aurora?
MySQL and PostgreSQL compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.
What is Public Cloud?
A cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing.
Cloud-based applications can be built on low-level infrastructure pieces or can use higher level services that provide abstraction from the management, architecting, and scaling requirements of core infrastructure
Public Cloud is connected to and accessible via the internet. Public Cloud offers rapidly available, flexible-use, and secure technology capability. The ‘Public’ in Public Cloud relates to the services being available to the public without lengthy procurement processes – not that systems or data are publicly accessible. Whilst you can choose to expose your workload or data to the internet, many organisations do not – using Public Cloud capacity only as a flexible pay-as-you-go extension to their own private networks, for example via AWS Direct Connect or Azure ExpressRoute.
What is CDEL?
Capital DEL (CDEL) - spending on items deemed capital in nature.
What is RDEL?
Resource DEL (RDEL) excluding depreciation - effectively current spending
What is Enhanced Technical Support?
Support from AWS Support Team
What are Dedicated Instances?
Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that’s dedicated to a single customer.
What is “On-premise” Cloud?
Cloud provider deploying platform on customer hardware or some sort of custom built cloud environment.
The deployment of resources on-premises, using virtualization and resource management tools, is sometimes called the “private cloud.”
On-premises deployment doesn’t provide many of the benefits of cloud computing but is sometimes sought for its ability to provide dedicated resources.
In most cases this deployment model is the same as legacy IT infrastructure while using application management and virtualization technologies to try and increase resource utilization.
What are On-Demand Instances?
With On-Demand instances you only pay for EC2 instances you use.
What is Amazon Managed Blockchain?
Amazon Managed Blockchain is a fully managed service that makes it easy to join public networks or create and manage scalable private networks using the popular open-source frameworks Hyperledger Fabric and Ethereum.
What is Amazon Kinesis Data Analytics?
Amazon Kinesis Data Analytics is the easiest way to transform and analyse streaming data in real time using Apache Flink. Gain actionable insights from streaming data with serverless, fully managed Apache Flink.
Amazon Kinesis Data Analytics is the easiest way to process and analyse real-time, streaming data.
Can use standard SQL queries to process Kinesis data streams.
Provides real-time analysis.
Use cases:
Generate time-series analytics.
Feed real-time dashboards.
Create real-time alerts and notifications.
Quickly author and run powerful SQL code against streaming sources.
Can ingest data from Kinesis Streams and Kinesis Firehose.
Output to S3, RedShift, Elasticsearch and Kinesis Data Streams.
Sits over Kinesis Data Streams and Kinesis Data Firehose.
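The kind of real-time, time-series analytics described above can be sketched as a tumbling-window aggregation in plain Python (a conceptual stand-in for a streaming SQL query; the event timestamps and payloads are hypothetical):

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=10):
    """Count events per fixed (tumbling) window, keyed by window start time –
    the shape of a typical streaming analytics query over a Kinesis stream."""
    counts = Counter()
    for ts, _payload in events:
        window_start = ts // window_seconds * window_seconds
        counts[window_start] += 1
    return dict(counts)

events = [(1, "click"), (4, "click"), (12, "click"), (15, "view"), (27, "click")]
print(tumbling_window_counts(events))   # → {0: 2, 10: 2, 20: 1}
```

The per-window results would then feed a real-time dashboard or an alert, as in the use cases above.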
What is AWS Outposts?
AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools, to customer premises. It supports a hybrid architecture by giving companies the possibility to extend AWS infrastructure and AWS services to their own data centres.
AWS Outposts is an AWS service that delivers the same AWS infrastructure, native AWS services, APIs, and tools to virtually any customer on premises facility. With AWS Outposts, customers can run AWS services locally on their Outpost, including EC2, EBS, ECS, EKS, and RDS, and also have full access to services available in the Region. Customers can use AWS Outposts to securely store and process data that needs to remain on premises or in countries where there is no AWS region. AWS Outposts is ideal for applications that have low latency or local data processing requirements, such as financial services, healthcare, etc.
Run your containers using AWS infrastructure on premises for a consistent hybrid experience. Suited for customers who require local data processing, data residency, and hybrid use cases.
What is CloudHSM?
CloudHSM is a cryptographic service for creating and maintaining hardware security modules (HSMs) in an AWS environment. Not multi tenant like KMS.
How to use the core service Elastic Compute Cloud - EC2?
PAYG. Broad selection of HW/SW, where to host.
- Log into AWS console
- Choose region
- Launch EC2 Wizard
- Select Amazon Machine Image - AMI (software platform - windows/Linux etc)
- Select instance type (number of cores, RAM etc)
- Configure network
- Configure storage
- Configure key pairs/tags (for connecting to instance after we launch it e.g. name)
- Configure firewall security groups
How to use Elastic Block Store EBS - used for EC2 - see in EC2 console?
- Choose between HDD or SSD (SSD for performance e.g. recall, HDD for OS, log storage etc)
- Persistent and customisable block storage for EC2 instances
- Automatically replicated in same AZ
- Backup using snapshots (and share these)
- Easy/transparent encryption even within AWS centres
- Elastic volume
What is Amazon Route 53?
Amazon Route 53 can be used for:
• Registering domain names
• DNS configuration and management
• Configuring health checks to route traffic only to healthy endpoints
• Managing global application traffic (cross-regions) through a variety of routing types.
AWS Content Delivery and DNS Services
This category of AWS services includes services for caching content around the world and providing intelligent Domain Name System (DNS) services for your applications.
Amazon Route 53 is the AWS Domain Name Service.
Route 53 performs three main functions:
Domain registration – Route 53 allows you to register domain names.
Domain Name Service (DNS) – Route 53 translates name to IP addresses using a global network of authoritative DNS servers.
Health checking – Route 53 sends automated requests to your application to verify that it’s reachable, available, and functional.
You can use any combination of these functions.
Route 53 benefits:
Domain registration.
DNS service.
Traffic Flow (send users to the best endpoint).
Health checking.
DNS failover (automatically change domain endpoint if system fails).
Integrates with ELB, S3, and CloudFront as endpoints.
Routing policies determine how Route 53 DNS responds to queries.
Key functions of each type of routing policy
Simple – a simple DNS response providing the IP address associated with a name.
Failover – if the primary is down (based on health checks), routes to a secondary destination.
Geolocation – routes based on the geographic location the user is in (e.g. Europe).
Geoproximity – routes you to the closest resource within a geographic area.
Latency – directs you based on the lowest-latency route to resources.
Multivalue answer – returns several IP addresses and functions as a basic load balancer.
Weighted – uses the relative weights assigned to resources to determine which to route to.
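The weighted policy can be sketched in Python (endpoint names and weights are hypothetical; real Route 53 applies the weights when choosing which record set to return for a DNS query):

```python
import random

def pick_endpoint(records, rng=random):
    """Pick an endpoint with probability proportional to its weight,
    like a weighted routing policy choosing which record to answer with."""
    total = sum(weight for _, weight in records)
    roll = rng.uniform(0, total)
    for endpoint, weight in records:
        roll -= weight
        if roll <= 0:
            return endpoint
    return records[-1][0]   # guard against floating-point edge cases

records = [("blue.example.com", 90), ("green.example.com", 10)]
hits = sum(pick_endpoint(records) == "blue.example.com" for _ in range(10_000))
print(hits)   # roughly 9,000 of 10,000 answers go to the heavier record
```

This is how weighted routing supports use cases like blue/green deployments: shift the weights gradually to move traffic between versions.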
Concept of AWS Mechanisms
A mechanism is a complete process… a “virtuous cycle” that reinforces and improves itself as it operates. It takes controllable inputs and transforms them into ongoing outputs to address a recurring business challenge.
What is a “point of presence” in AWS?
The combination of a “Regional Edge Cache” and “Edge Location”
How are On-Demand Instances priced?
On-Demand instances are offered at a set price by AWS Region.
How are Reserved Instances priced?
Reserved Instances reserve capacity at a discounted rate. The customer commits to purchase a certain amount of compute.
How are Spot Instances priced?
Spot Instances are discounted more heavily when there is more capacity available in the Availability Zones.
Spot, Savings Plans, and Reserved instances are all cheaper than On-Demand instances.
How are Convertible Reserved Instances priced?
Reserved Instances reserve capacity at a discounted rate. The customer commits to purchase a certain amount of compute. With Convertible Reserved Instances, you can change the instance family, operating system and tenancies.
Which Amazon EC2 pricing model adjusts based on supply and demand of EC2 instances?
Spot Instances. Spot Instances are discounted more heavily when there is more capacity available in the Availability Zones.
Which AWS service provides a simple and scalable shared file storage solution for use with Linux-based Amazon EC2 instances and on-premises servers?
Amazon Elastic File System (Amazon EFS)
What is the AWS Well Architected Framework for?
It is a guide to help with the design of cloud architecture. It helps you to assess and improve architectures and understand how design decisions impact the business.
What are the 5 Pillars of the AWS Well Architected Framework?
Security, Reliability, Performance efficiency, Cost optimisation and Operational excellence.
What do we mean by “Fault tolerance” in the context of AWS?
The ability of a system to remain in operation. Related to the built-in redundancy of an application’s components.
What do we mean by High Availability in AWS?
A configuration that ensures application availability 100 percent or near-100 percent of the time.
Refers to the entire system. Ensures that systems are generally functioning, and that downtime is minimised, with minimal human intervention. Minimal upfront investment for customers of AWS.
How do we maintain Confidentiality, Availability and Integrity in AWS Security?
Tools from AWS and partners
Encryption in transit with Transport Layer Security (TLS)
Built in firewalls
Private/dedicated connections
Distributed Denial of Service (DDOS) protection
List some key fault tolerant tools in AWS
Amazon Simple Queue Service. A distributed messaging system. Can ensure queue is always available.
Amazon Simple Storage Service (S3).
Amazon Relational Database Service.
What are Elastic IP addresses?
Static IPs designed for dynamic cloud computing, can mask failures if they occur. Helps with High Availability.
What is Cognito in AWS?
Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party such as Facebook, Amazon, Google or Apple.
A way to provide identity to your web and mobile applications users. Amazon Cognito can be used to control access to AWS resources from an application.
Mobile-based auth and identity, where you can have the user management like, create, modify, delete and reset password done for you. You can also have external web-based identity providers integrated.
Web applications usually allow a valid username and password combination for successful sign into the application. Modern authentication flows incorporate more approaches to ensure user authentication. When using AWS, this is no exception, thanks to the abilities and features offered by AWS Cognito.
Amazon Cognito service is designed to provide APIs and infrastructure for key features in the user management space such as authentication, authorization, and managing the user repository with different operations for your web and mobile apps.
What is AWS Inventory?
Inventory and config management tools for managing settings over time. Deployment tools. Templates definition/management tools.
What does AWS offer for Data encryption?
Encryption capabilities.
Key management options (AWS Key Management).
Hardware based cryptographic key storage options (CloudHSM)
Within AWS Security, what does AWS offer for Access Control Management?
IAM
Multi-factor authentication (2FA etc)
Integration and federation with corporate directories (to reduce admin overhead)
Amazon Cognito - a simple user identity and data synchronisation service that helps you securely manage and synchronise app data for your users across their mobile devices.
AWS SSO
List the key business objects within the AWS Identity Access Model (IAM)
User - named operator, could be human or machine
Group - collection of users. Groups have multiple users and users can be in many groups
Role - NOT your permissions; a role is an authentication method. Like a user, a role represents an operator (could be human, could be machine), but the credentials, and the permissions granted through them, are temporary.
What are the key characteristics of the AWS Shared Security Model?
Physical - AWS do this.
Network - AWS do this. Whilst they don’t tell clients the things they do to make it secure, they do tell the people that certify.
Hypervisor - uses a Xen-based hypervisor with changes to make it secure and scalable.
Guest OS - If you are running EC2 then there is a magic dividing line between that and the hypervisor. AWS don’t have access to the OS. YOU are responsible for this and all things above it. Therefore, patching OS is your responsibility (Using Systems Manager Patch Manager).
Application and User Data - AWS doesn’t have access to this as it requires security keys.
Where is AWS Largest Region?
US East (N. Virginia) – us-east-1.
What is the latency between AZs?
< 10 ms
What do edge locations serve requests for?
CloudFront and Route53. Requests going to either of these services will be routed to the nearest edge location automatically
S3 Transfer Acceleration traffic and API Gateway traffic also use Edge Network.
What are GovCloud Regions?
Allow customers to host sensitive Controlled Unclassified Information and other types of regulated workloads.
Only operated by employees who are US citizens on US soil.
A hybrid company would like to provision desktops to their employees so they can access securely both the AWS Cloud and their data centres. Which AWS service can help?
Amazon WorkSpaces – a managed Desktop as a Service (DaaS) solution to easily provision Windows or Linux desktops. Helps eliminate management of on-premises Virtual Desktop Infrastructure. Pay-as-you-go service with monthly or hourly rates.
How must S3 buckets be named?
Globally unique name (across all regions and all accounts)
1. No uppercase
2. No underscores
3. 3–63 characters long
4. Not an IP
5. Must start with a lowercase letter or number
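These rules (apart from global uniqueness, which can only be verified against AWS itself) can be expressed as a small validator – a sketch assuming exactly the rules listed above:

```python
import re

def valid_bucket_name(name: str) -> bool:
    """Check an S3 bucket name against the rules above: 3-63 characters,
    no uppercase, no underscores, not an IP address, and starting with a
    lowercase letter or number."""
    if not 3 <= len(name) <= 63:
        return False
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):   # formatted like an IP
        return False
    # lowercase letters, digits, hyphens and dots only; underscores and
    # uppercase are excluded by the character class itself
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*", name) is not None

print(valid_bucket_name("my-logs-2024"))   # → True
print(valid_bucket_name("My_Bucket"))      # → False (uppercase and underscore)
print(valid_bucket_name("192.168.1.1"))    # → False (IP address)
```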
What are six advantages of cloud computing?
- Trade capital expense for variable expense
Instead of having to invest heavily in data centres and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.
- Benefit from massive economies of scale
By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.
- Stop guessing capacity
Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity.
With cloud computing, these problems go away. You can access as much or as little capacity as you need and scale up and down as required with only a few minutes’ notice.
- Increase speed and agility
In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes.
This results in a dramatic increase in agility for the organization since the cost and time it takes to experiment and develop is significantly lower.
- Stop spending money on running and maintaining data centres
Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.
- Go global in minutes
Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.
What is cost explorer forecasting?
Gives an idea of future costs.
What are Volume Discounts?
The more you use, the more you save.
What allows you to take advantage of Volume Discounts?
Consolidated Billing because it combines usage across all organization accounts.
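The effect of combining usage can be sketched with tiered pricing. The tier sizes and rates below are made up for illustration only, not real AWS prices:

```python
def tiered_cost(gb: float, tiers) -> float:
    """Cost under tiered (volume-discount) pricing.
    tiers: list of (tier_size_gb, price_per_gb); the last tier may be infinite."""
    cost, remaining = 0.0, gb
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Hypothetical tier sizes and rates, for illustration only (not AWS prices):
TIERS = [(50_000, 0.023), (450_000, 0.022), (float("inf"), 0.021)]

# Two accounts billed separately vs. usage combined under consolidated billing:
separate = tiered_cost(40_000, TIERS) + tiered_cost(40_000, TIERS)
combined = tiered_cost(80_000, TIERS)
```

Because the combined usage reaches a cheaper tier sooner, the consolidated bill is lower than the two separate bills.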
What is cost explorer?
A tool to visualize, understand, and manage your AWS costs and usage over time. Default reports and custom reports. Can filter and group.
What is consolidated billing?
One bill for all your accounts. For billing, AWS treats all accounts in an organization as if they were one account. You can designate one master account. No extra charge.
What are the AWS Trusted Advisor categories?
- Cost Optimization
- Performance
- Security
- Fault Tolerance
- Service Limits
What are the two types of patterns for application communication?
- Synchronous communications (application to application)
- Asynchronous / Event based (application to queue to application)
EC2 is an example of ___ as a Service?
Infrastructure
What is port 22 for?
SSH (Secure Shell) - log into a Linux instance
What 4 main things make up EC2?
- Renting virtual machines (EC2)
- Storing data on virtual drives (EBS)
- Distributing load across machines (ELB)
- Scaling the services using an auto-scaling group (ASG)
What is port 443 for?
HTTPS - access secured websites
What is EC2 On Demand pricing?
Pay for what you use. The trick then is to use what you really need.
Linux: billing per second, after the first minute
All other operating systems: billing per hour
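A minimal sketch of the difference. The hourly rate is a hypothetical parameter, and per-hour billing is modelled simply as rounding partial hours up:

```python
import math

def linux_on_demand_cost(seconds: int, hourly_rate: float) -> float:
    """Linux: per-second billing, with a minimum charge of 60 seconds."""
    return max(seconds, 60) * hourly_rate / 3600

def hourly_on_demand_cost(seconds: int, hourly_rate: float) -> float:
    """Other operating systems: partial hours are billed as full hours."""
    return math.ceil(seconds / 3600) * hourly_rate
```

Running for 30 seconds at a hypothetical $3.60/hour rate would cost $0.06 on Linux, but a full $3.60 under per-hour billing.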
What is a CloudWatch dashboard of metrics for?
So you can see the metrics of many services at once. This refers to the AWS CloudWatch dashboard.
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping the indefinite retention or choosing a retention period of between one day and 10 years.
What is AWS’s responsibility with databases?
AWS offers managed databases. Meaning they handle operations, upgrades, patches, monitoring, alerts, and backups.
Is S3 a global or regional service?
Global, but buckets are created in a region.
What is S3 Object metadata?
List of text key / value pairs - system or user metadata
What is AWS OpsHub?
A software you install on your computer to manage your Snow Family Device
What are Object Access Control Lists (ACL)?
Finer grain at object level
Are S3 buckets global or regional?
Region level
Can S3 host websites?
Yes it can host static websites.
What is multi-tenancy in relation to cloud computing?
Multiple customers can share the same infrastructure and applications with security and privacy.
What do policies in IAM do?
Define the permissions of the users.
What are the 2 kinds of scalability?
- Vertical Scalability
- Horizontal Scalability (= elasticity)
Is scalability the same as high availability?
No, but they are linked.
What is vertical scalability?
Means increasing/decreasing the size of the instance. There is usually a limit.
What is high availability?
Means running your application / system in at least 2 AZs.
What is horizontal scalability?
Means increasing/decreasing the number of instances/systems for your application. Implies distributed systems and is very common for web applications.
What is the goal of high availability?
To survive a data centre loss (disaster).
High availability goes hand in hand with what type of scaling?
Horizontal
Scalability vs Elasticity?
Scalability: ability to accommodate a larger load by making the hardware stronger (scale up), or by adding nodes (scale out).
Elasticity: once a system is scalable, elasticity means that there will be some ‘auto-scaling’ so that the system can scale based on the load. This is the ‘cloud-friendly’ option: pay-per-use, match demand, optimize costs. Note that elasticity is about matching compute capacity to demand; it does not by itself improve storage design or agility.
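The ‘match demand’ idea can be sketched as a scaling rule. The request rates and per-instance capacity are hypothetical numbers, and real auto-scaling adds cooldowns and health checks:

```python
import math

def desired_instances(load_rps: float, rps_per_instance: float, minimum: int = 2) -> int:
    """Elasticity: scale the fleet out/in to match the current load,
    keeping at least `minimum` instances (e.g. one per AZ for availability)."""
    return max(minimum, math.ceil(load_rps / rps_per_instance))
```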
What is Amazon Transcribe?
Automatically convert speech into text. Uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately.
What is Amazon Polly?
Turning text into lifelike speech using deep learning.
Is Aurora in the AWS free tier?
No.
Who is GovCloud accessible by?
Only US entities and root account holders who pass a screening process
How to add billing preferences?
Hover over account, my billing dashboard, billing preferences
What is SaaS?
There are 3 common types of cloud computing model:
Infrastructure as a service (IaaS).
Platform as a service (PaaS).
Software as a service (SaaS).
Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications.
With a SaaS offering you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that piece of software.
A common example of a SaaS application is web-based email which you can use to send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems that the email program is running on.
SaaS provides high availability, fault tolerance, scalability and elasticity.
Software as a Service: a completed product that is run and managed by the service provider, like Gmail.
IaaS, PaaS, and SaaS are not deployment models. They represent the different use cases of Cloud Computing, and the different levels of control customers need over their IT resources.
What is a Cloud Provider?
Someone else owns the servers, hires the IT people, pays for the real estate. You are responsible for configuring cloud services and code, someone else takes care of the rest.
What is “On-Premise”?
You own the servers, hire the IT people, pay or rent the real estate, and take all the risk
What is PaaS?
There are 3 common types of cloud computing:
- Infrastructure as a service (IaaS).
- Platform as a service (PaaS).
- Software as a service (SaaS).
Platform as a Service (PaaS) removes the need for your organization to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications.
This helps you be more efficient as you don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.
Platform as a service removes the need for your organization to manage the underlying infrastructure. Focus on deployment and management of your application. Like heroku.
IaaS, PaaS, and SaaS are not deployment models. They represent the different use cases of Cloud Computing, and the different levels of control customers need over their IT resources.
What is IaaS?
There are 3 common types of cloud computing:
- Infrastructure as a service (IaaS).
- Platform as a service (PaaS).
- Software as a service (SaaS).
Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provide access to networking features, computers (virtual or on dedicated hardware), and data storage space.
IaaS provides you with the highest level of flexibility and management control over your IT resources and is very similar to the existing IT resources that many IT departments and developers are familiar with today.
Infrastructure as a service. The basic building blocks for cloud IT. Provides access to networking features, computers, and data storage space. Like AWS.
IaaS, PaaS, and SaaS are not deployment models. They represent the different use cases of Cloud Computing, and the different levels of control customers need over their IT resources.
Which Global Infrastructure identity is composed of one or more discrete data centres with redundant power, networking, and connectivity, and are used to deploy infrastructure?
- Edge locations
- Availability Zones
- Regions
Availability zones
How many availability zones in each region?
- Usually 3
- Minimum 2
- Maximum 6
Can data leave a region without your explicit permission?
No.
What is an AWS region?
A cluster of data centres.
What is an AWS availability zone?
A discrete data centre with redundant power, networking, and connectivity.
How does the cloud solve problems related to high-availability and fault-tolerance?
Build across data centres.
What does it mean to say cloud computing has rapid elasticity?
Automatically and quickly acquire and dispose of resources when needed.
Can you have multiple docker apps running on a single EC2 instance?
Yes.
What is Amazon Lightsail?
Virtual servers, storage, databases, and networking with low and predictable pricing.
Amazon Lightsail provides a low-cost Virtual Private Server (VPS) in the cloud. Lightsail plans include everything you need to jumpstart your project – virtual machines, containers, databases, CDN, load balancers, SSD-based storage, DNS management, etc. – for a low, predictable monthly price.
Who is Amazon Lightsail for?
For people with little cloud experience. No auto-scaling, but has high availability.
Which AWS server-less service can be used by developers to create APIs?
1. ECR
2. Lambda
3. API Gateway
API Gateway
Which of the following statements is INCORRECT regarding the definition of the term ‘server-less’?
- Server-less allows you to deploy functions as a service
- There are no servers
- You don’t need to manage servers
- Lambda is the server-less pioneer
There are no servers.
What is serverless?
Server-less is a new paradigm in which developers don’t have to manage servers anymore. The term initially meant Functions as a Service (FaaS), but now includes anything that is fully managed.
Does server-less mean there are no servers?
No, you just don’t manage/provision/see them.
Which of these are server-less?
- Amazon S3
- DynamoDB
- Fargate
- Lambda
- EC2
- RDS
- Amazon S3
- DynamoDB
- Fargate
- Lambda
What is Amazon MQ?
Managed Apache ActiveMQ
Which principle is mainly applied when using Amazon SQS or Amazon SNS?
- Scalability
- Automation
- Decouple your applications
Decouple your applications.
When are messages in Amazon SQS Standard Queue deleted?
Messages are deleted after they are read by consumers.
What is an EBS Volume?
An Elastic Block Store Volume is a network drive you can attach to your instances while they run. Think of it as a network USB stick.
Is EFS multi-AZ?
Yes.
How many instances can an EBS Volume be mounted to at a time?
One (at the Certified Cloud Practitioner level).
Technically, EBS Multi-Attach lets you attach a volume to multiple instances, but this is out of scope for the Cloud Practitioner exam.
What do EBS Volumes let you do?
Allow your instances to persist data, even after their termination.
What is Chef & Puppet?
Third party services that help you perform server configuration automatically, or repetitive actions.
What is CloudFormation Stack Designer?
A graphic tool for creating, viewing, and modifying AWS CloudFormation templates.
Is Amazon MQ serverless?
No, it runs on a dedicated machine.
What is Amazon Lex?
Same technology that powers Alexa. Automatic Speech Recognition to convert speech to text. Natural Language Understanding to recognise the intent of text and callers. Helps build chatbots and call centre bots.
What is Layer 4?
TCP
Is there such thing as an IAM user for an EC2 Instance?
No
What is the only disadvantage of using RDS?
You can’t SSH into your Instance.
What is Sumerian?
Create and run virtual reality, augmented reality, and 3D apps. Easy to use and accessible via a web-browser
What is AWS CodeCommit?
AWS CodeCommit is mainly used for software version control, not for managing encryption keys.
Additional information:
AWS CodeCommit is designed for software developers who need a secure, reliable, and scalable source control system to store and version their code. In addition, AWS CodeCommit can be used by anyone looking for an easy to use, fully managed data store that is version controlled. For example, IT administrators can use AWS CodeCommit to store their scripts and configurations. Web designers can use AWS CodeCommit to store HTML pages and images.
AWS version of Github. Git based repositories.
What is an EBS Volume tied to?
- A region
- A data centre
- An edge location
- An availability zone
An availability zone.
How would you best describe ‘event-driven’ in AWS Lambda?
- Happens on a certain day
- Happens at a certain time
- Happens on a regular basis
- Happens when needed
Happens when needed.
What does AWS CloudFront use to improve read performance?
- DDoS Protection
- S3 Bucket Fast-Read
- Caching Content in Edge Locations
- Caching Content in Edge Regions
Caching Content in Edge Locations
What is Amazon AppStream 2.0?
Desktop Application Streaming Service. Deliver to any computer without acquiring, provisioning infrastructure. App delivered from within the web browser. Amazon AppStream 2.0 can be used to provide access to applications or a non-persistent desktop from any location.
Where is US-EAST-1?
North Virginia
What is AWS Device Farm?
A fully managed service that tests your web and mobile apps against desktop browsers, real mobile devices, and tablets.
Run tests concurrently on multiple devices.
Ability to configure device settings.
What is Elastic Transcoder?
Used to convert media files stored in S3 into media files in the formats required by consumer playback devices.
What is CloudEndure Disaster Recovery?
CloudEndure Disaster Recovery is an agent-based solution that replicates entire virtual machines, including the operating system, all installed applications, and all databases, into a staging area located in your target AWS Region.
The staging area contains low-cost resources that are automatically provisioned and managed by CloudEndure Disaster Recovery.
This allows you to quickly and easily recover your physical, virtual, and cloud-based servers into AWS. Continuous block-level replication for your servers. Protect your data from ransomware attacks.
You would like to convert an S3 file so it can be played on users’ devices. Which AWS service can help?
- Transcribe
- Elastic Transcoder
- AppStream 2.0
- Sumerian
Elastic Transcoder.
You would like to access desktop applications through a browser. Which AWS service would you use?
- Outposts
- WorkSpaces
- AppStream2.0
- EC2 Instance Connect
AppStream2.0.
A company would like to create 3D applications for its customers. Which AWS service can it use?
- Sumerian
- SageMaker
- Polly
- Elastic Transcoder
Sumerian
Which AWS service is server-less and lets you connect billions of devices to the AWS Cloud?
- Transit Gateway
- Connect
- Elastic Transcoder
- IoT Core
IoT Core
A hybrid company would like to provision desktops to their employees so they can access securely both the AWS Cloud and their data centres. Which AWS service can help?
- WorkSpaces
- AppStream 2.0
- Site-To-Site VPN
- Sumerian
WorkSpaces
What is AWS IoT Core?
IoT stands for Internet of Things. The network of internet connected devices that are able to collect and transfer data. Core allows you to easily connect IoT devices to the AWS Cloud. Server-less, secure, and scalable.
Amazon AppStream 2.0 vs Workspaces?
AppStream2.0: stream a desktop app to web browsers. Works with any device (that has a browser). Allow to configure an instance type per application type.
Workspaces: fully managed Virtual Desktop Infrastructure (VDI) and desktop available. Users connect to the VDI and open native or WorkSpaces Application Manager (WAM) apps. Are “on-demand” and “always-on”
For more information on Amazon WorkSpaces, refer to the following URL: https://aws.amazon.com/workspaces/features/
What is max size of an S3 object?
5TB (5000GB)
In S3, what do you do if you need to upload an object that is larger than 5GB (the single-PUT limit)?
Multi-part upload.
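A sketch of the part-size arithmetic, using S3's documented limits of 10,000 parts per upload and a 5 MB minimum part size (sizes here are in binary units as a simplification):

```python
import math

MIB = 1024**2
MAX_PARTS = 10_000              # S3 limit: parts per multipart upload
MIN_PART_SIZE = 5 * MIB         # S3 minimum part size (except the last part)
MAX_OBJECT_SIZE = 5 * 1024**4   # 5 TB object size limit

def choose_part_size(object_size: int) -> int:
    """Pick a part size that keeps the upload within the 10,000-part limit."""
    if object_size > MAX_OBJECT_SIZE:
        raise ValueError("object exceeds S3's 5 TB object size limit")
    return max(MIN_PART_SIZE, math.ceil(object_size / MAX_PARTS))
```

For a full 5 TB object this yields parts of roughly 524 MiB, well within the 5 GiB per-part maximum.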
What is S3 Object metadata?
List of text key / value pairs - system or user metadata.
Do all S3 Objects have a Version ID?
Only if versioning is enabled.
What are the types of S3 security?
- User based
- Resource based
- Encryption
Is there such thing as an IAM user for an EC2 Instance?
No.
How would you allow an IAM user from another AWS account access to an S3 bucket?
S3 bucket policy that allows cross-account access.
What is an S3 bucket policy made of?
It is a JSON based policy with:
1. Resources: buckets and objects
2. Actions: Set of API to Allow or Deny
3. Effect: Allow / Deny
4. Principal: The account or user to apply the policy to
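Put together, a cross-account read policy looks like this. The bucket name and account ID are made-up examples; note that the actual JSON key is spelled “Principal”:

```python
import json

# Hypothetical bucket and account ID, for illustration only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                                       # Allow / Deny
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},  # who
        "Action": ["s3:GetObject", "s3:ListBucket"],             # which APIs
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],           # which resources
    }],
}
print(json.dumps(policy, indent=2))
```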
In what situations would you use an S3 bucket policy?
- Grant public access to the bucket
- Force objects to be encrypted at upload
- Grant access to another account (Cross Account)
By default are S3 buckets accessible by the public?
No, there are settings created to prevent company data leaks.
What are the advantages of versioning your buckets?
- Protect against unintended deletes (ability to restore a version)
- Easy roll back to previous version
In S3, does suspending versioning delete the previous versions?
No.
What is S3 Replication (CRR)?
S3 Cross-Region Replication (CRR) is an Amazon S3 feature that enables customers to replicate data across different AWS Regions, to minimize latency for global users and/or meet compliance requirements. It is also used for replication across accounts. Note that disabling CRR does not help protect data from accidental deletion.
What is S3 Replication (SRR)?
Same Region Replication - log aggregation, live replication between production and test accounts.
What is VPC Peering?
Connect two VPCs privately using AWS’ network, making them behave as if they are on the same network. A VPC Peering connection is not transitive (it must be established for each pair of VPCs that need to communicate with one another).
What are VPC Endpoints?
A VPC endpoint enables customers to privately connect to supported AWS services and VPC endpoint services powered by AWS PrivateLink.
Amazon VPC instances do not require public IP addresses to communicate with resources of the service. Traffic between an Amazon VPC and a service does not leave the Amazon network.
VPC endpoints are virtual devices. They are horizontally scaled, redundant, and highly available Amazon VPC components that allow communication between instances in an Amazon VPC and services without imposing availability risks or bandwidth constraints on network traffic. There are two types of VPC endpoints:
interface endpoints
gateway endpoints
Endpoints therefore allow you to connect to AWS services using a private network instead of the public internet. This gives you better security and lower latency.
What 2 types of VPC Endpoints can you have?
- VPC Endpoint Gateway: S3 & DynamoDB
- VPC Endpoint Interface: the rest of the services
What is AWS Site-to-Site VPN?
AWS Site-to-Site VPN provides an internet-based connection that enables customers to connect their on-premises network or branch office site to AWS. Internet-based connectivity can have unpredictable performance and despite being encrypted, can present security concerns.
Connect an on-premises VPN to AWS. The connection is automatically encrypted and goes over the public internet. Only a few minutes to make.
AWS Direct Connect bypasses the public Internet and uses a standard Ethernet fiber-optic cable to establish a secure, dedicated, and more consistent connectivity from on-premises data centres into AWS.
Transferring large data sets over the Internet can be time consuming and expensive. AWS VPN is an internet-based connection and does not meet the requirement of consistent connectivity.
Additional information:
Unlike AWS Direct Connect, VPN Connections can be configured in minutes and are a good solution if customers have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
Your private subnets need to connect to the Internet while still remaining private. Which AWS managed VPC component allows you to do this?
- NAT Instances
- Internet Gateway
- Security Groups
- NAT Gateways
NAT Gateway
NAT devices (NAT Gateway, NAT Instance) allow instances in private subnets to connect to the internet, other VPCs, or on-premises networks. It is deployed in a public subnet.
Your VPC needs to connect with the Internet. Which VPC component can help?
- NAT Gateways
- NAT Instances
- Network ACL
- Internet Gateway
Internet Gateway.
An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
Internet Gateways provide access for a VPC and subnet to reach the internet. They are not directly attached to EC2 instances.
A company needs to have a private, secure, and fast connection between its on-premises data centres and AWS Cloud. Which connection should they use?
- AWS Connect
- Site-to-Site VPN
- VPC Peering
- AWS Direct Connect
AWS Direct Connect
You need a logically isolated section of AWS, where you can launch AWS resources in a private network that you define. What should you use?
- Subnets
- Availability Zones
- A VPC
- NAT Instances
A VPC
A company needs two VPCs to communicate with each other. What can they use?
- VPC Endpoints
- AWS Direct Connect
- Internet Gateway
- VPC Peering
VPC Peering
You would like to connect hundreds of VPCs and your on-premises data centres together. Which AWS service allows you to link all these together efficiently?
- Site-to-Site VPN
- Transit Gateway
- Internet Gateway
- Direct Connect
Transit Gateway
Which type of firewall has both allow and deny rules and operates at the subnet level?
- Network Access Control List (NACL)
- Web Application Firewall (WAF)
- Security Groups
- GuardDuty
Network Access Control List (NACL)
A public subnet is accessible from the Internet while a private subnet is not accessible from the Internet?
- Yes
- No, all subnets are accessible from the Internet
- No, all subnets are not accessible from the Internet
Yes
AWS Cloud Best Practices - Design Principles
Scalability
Disposable Resources
Automation
Loose Coupling
Think in services, not servers
Implementing Security Groups, NACLs, KMS, or CloudTrail reflects which Well architected framework pillar?
- Reliability
- Performance Efficiency
- Security
- Cost Optimization
Security
AWS Cost Explorer and AWS Trusted Advisor are services examples of which Well Architected framework pillar?
- Security
- Operational Excellence
- Cost Optimization
- Performance Efficiency
Cost Optimization
Which of the following is NOT a vertical scaling limit?
- Downtime
- Higher cost
- Capacity limitation
- Better fault tolerance
Better fault tolerance
Which AWS service is the key to Operational Excellence?
- CloudFormation
- EC2
- OpsWork
- CodeDeploy
CloudFormation
Testing recovery procedures, stop guessing capacity, and managing changes in automation are design principles of Performance Efficiency?
- True
- False
False
Which of the following is NOT an AWS Partner Network Type?
- APN Technology Partner
- APN Services Partner
- APN Consulting Partner
- AWS Training Partner
APN Services Partner
Which of the following are design principles of Performance Efficiency?
- Go global in minutes and experiment more often
- Analyze and attribute expenditure & stop spending money on data centre operations
- Make frequent, small, reversible changes & anticipate failure
- Automate security best practices & keep away people from data
Go global in minutes and experiment more often
AWS Trusted Advisor can provide guidance against the 5 well architected pillars and architectural best practices?
- True
- False
False
Auto Scaling in EC2 and DynamoDB are examples of?
- Horizontal scaling
- Vertical Scaling
Horizontal
What is the AWS Navigate Program?
AWS Navigate - Partner enablement arm of the partner network. Provides partners with guidance on how to specialise with AWS.
Help Partners become better Partners.
What is AWS Competency Program?
AWS Competencies are granted to APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas.
What is APN Training Partners?
Can help you learn AWS
What are APN Consulting Partners?
Consulting Partner – an organisation that helps other organisations migrate to and work within the cloud.
Professional services firm to help build on AWS
What is APN Technology Partners?
Technology Partner – an organisation that builds software to be made available to multiple organisations via AWS.
Providing hardware, connectivity, and software.
List the EC2 instance categories?
Spot Instance, On-Demand Instances, Reserved Instances.
EC2 offers the widest choice of instance types, covering processor, storage, and networking. Ideal for customers who want to manage or customize the underlying compute environment and host operating system.
What is an EC2 spot instance?
You can bid on unused EC2 capacity by using a spot instance, but a spot instance can be stopped and unallocated by AWS at any point in time.
What is a reserved instance?
You pay for EC2 capacity and you are guaranteed to be able to use this capacity when you need it, even if the AWS region is at 100% capacity.
EC2 or RDS instances that can be purchased over long periods of time at significant savings. Payment can be up-front, not up-front, or partial up-front. See also on-demand instances and spot instances.
What is an on-demand instance?
The ability to rent cloud resources to meet a specific need, exactly when the need arises. You use what you need and pay as you go. See also reserved instances and spot instances.
When using a reserved instance, are you guaranteed you will be able to provision the EC2 instance when needed, even if the AWS region is at 100% capacity?
Yes
When using spot instances are you guaranteed resources?
No
When using a spot instance can the instance be stopped at any time?
Yes
When using a reserved instance can the instance be stopped at any time?
No
When using an on-demand instance can the instance be stopped at any time?
No
With an on demand instance are you guaranteed resources?
No
Is a spot instance the best choice for a situation where the load is changing all the time and the workload cannot be interrupted?
No, a better choice here would be an on-demand instance.
What is the default number of instances you can create?
20
I have data stored in S3 and I need to transform it and push the transformed data to DynamoDB, what AWS service would I use?
AWS Glue
Is S3 a global or regional service?
It is a global service with regional storage. Data is stored across multiple AZs (3 or more) within a single region.
Is Route53 a global or regional service or something else?
Route53 operates from AWS edge locations.
Are ELBs regional or global?
Regional, ELBs are deployed to one or more AZ’s in a region.
In AWS Redshift, when you create a cluster, what do you get as a base configuration?
You get two nodes, a leader node and a data node, giving 160GB.
Do you get to select the disk size for RedShift?
No, you do not get to select the disk size. You do get to select the overall size of the Redshift cluster, through a slider in the console or a parameter in the CLI & API. AWS will then figure out the number of disks in each data node.
I need to add capacity to my redshift cluster, how can I do this?
You have two options: you can scale up or scale out. Scaling up means changing the size of the instance; scaling out means adding more nodes.
What interfaces does RedShift support?
- ODBC
- JDBC
- Postgres
What is RedShift built on?
PostgreSQL. AWS separated the storage from the query engine and then replaced the storage engine with a columnar database.
What is RedShift used for?
Data Warehouse
Analytics
I have data in S3, is it possible to query this data from RedShift?
Yes, RedShift has a feature called RedShift Spectrum; the data in S3 must be in CSV format.
What is the Max data the RedShift can manage?
2PB
What encryption protocol is used for AWS transport today?
TLS 1.2. Older protocols, such as TLS 1.1, TLS 1.0, SSL 3.0 and SSL 2.0, are considered weak.
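In application code you can enforce this floor explicitly; for example, Python’s ssl module lets a client refuse anything older than TLS 1.2:

```python
import ssl

# Create a client-side context and refuse protocols below TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```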
What is a public subnet?
A subnet that is accessible from the internet
What is a private subnet?
A subnet that is not accessible from the internet
What are Route Tables?
Used to define access to the internet and between subnets
What is an Internet Gateway?
A VPC resource that allows EC2 instances to obtain a public IP address and access the internet.
Helps our VPC instances connect with the internet
What does a NAT Gateway (AWS Managed) & NAT Instances (self-managed) allow you do to?
Allow your instances in your private subnets to access the internet while remaining private
What is NACL (Network ACL)?
A logical firewall that operates at the subnet level.
A firewall which controls traffic from and to a subnet. Can have allow and deny rules. Are attached to a subnet and rules only include IP addresses
What are security groups?
A firewall that controls traffic to and from an Elastic Network Interface (ENI) / an EC2 Instance. Can only have allow rules, rules include IP address and other security groups
What level is NACL at?
Subnet level
What level is security group at?
Instance level
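The behavioural difference can be sketched like this. It is a toy model keyed on port only; real rules also match protocol and CIDR ranges, and security groups are additionally stateful:

```python
def nacl_allows(rules, port):
    """NACL: numbered rules, evaluated in order; first match wins.
    Both allow and deny rules exist; the implicit final rule denies."""
    for _number, action, (lo, hi) in sorted(rules):
        if lo <= port <= hi:
            return action == "allow"
    return False

def security_group_allows(allow_rules, port):
    """Security group: allow rules only; anything not allowed is denied."""
    return any(lo <= port <= hi for lo, hi in allow_rules)

# Hypothetical inbound rules: allow HTTPS, deny everything else.
nacl = [(100, "allow", (443, 443)), (200, "deny", (0, 65535))]
sg = [(22, 22), (443, 443)]
```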
What are VPC Flow Logs?
Capture information about IP traffic going to your instances. Helps to monitor and troubleshoot connectivity issues.
Where does VPC Flow Logs data go?
S3 or CloudWatch Logs
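A flow log record is a space-separated line; the default (version 2) format can be parsed like this (the sample record below is synthetic):

```python
# Field order of the default VPC Flow Log format (version 2).
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(line: str) -> dict:
    """Split one record into named fields."""
    return dict(zip(FIELDS, line.split()))

record = parse_flow_log(
    "2 123456789012 eni-0abc123 10.0.1.5 10.0.2.9 49761 443 6 10 840 "
    "1620000000 1620000060 ACCEPT OK"
)
```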
What is Transit Gateway?
For having transitive peering between thousands of VPCs and on-premises networks; a hub-and-spoke (star) connection. One single gateway provides this functionality. Works with Direct Connect Gateway and VPN connections
What 4 main things make up EC2?
- Renting virtual machines (EC2)
- Storing data on virtual drives (EBS)
- Distributing load across machines (ELB)
- Scaling the services using an auto-scaling group (ASG)
What EC2 sizing and configuration options are there?
- Operating System: Linux, Windows, Mac
- How much compute power and cores (CPU)
- How much random-access memory (RAM)
- How much storage space: Network-attached (EBS & EFS) / hardware (EC2 Instance Store)
- Network card: speed of the card, Public IP address
- Firewall rules: security group
- Bootstrap script (configure at first launch): EC2 User Data
What is an EC2 User Data script?
Used to bootstrap our instances. Runs only once, at the instance's first start
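When you pass a User Data script to the EC2 API directly, it must be base64-encoded (the console and most SDKs do this encoding for you). A minimal sketch, using a hypothetical Apache-install bootstrap script:

```python
import base64

# A hypothetical bootstrap script: User Data starts with a shebang and
# runs once, as root, on first boot.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# The EC2 RunInstances API expects User Data base64-encoded; the console
# and most SDKs perform this encoding for you.
encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
```

Only the encoded string is sent with the launch request; on first boot the instance decodes and runs it as root.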
What user does the EC2 User Data Script run with?
Root user
What does AMI stand for?
Amazon Machine Image
Are EC2 instances bound to an AZ?
Yes
How long can you reserve an EC2 Reserved Instance?
- 1 or 3 years
- 2 or 4 years
- 6 months or 1 year
- Anytime between 1 and 3 years
1 or 3 years
Under the Shared Responsibility Model, who is responsible for operating-system patches and updates on EC2 Instances?
- The customer
- AWS
- Both AWS and the customer
The customer
Which network security tool can you use to control traffic in and out of EC2 Instances?
- Network Access Control List (NACL)
- Identity and Access Management (IAM)
- GuardDuty
- Security Groups
Security Groups
Which EC2 Purchasing Option can provide the biggest discount, but is not suitable for critical jobs or databases?
- Scheduled Instances
- Convertible Instances
- Dedicated Hosts
- Spot Instances
Spot Instances
What is an EC2 Instance made of?
AMI (OS) + Instance Size (CPU + RAM) + Storage + security groups + EC2 User Data
What are you responsible for with regards to EC2?
- Security Groups rules
- Operating-system patches and updates
- Software and utilities installed on the EC2 instance
- IAM Roles assigned to EC2 & IAM user access management
- Data security on your instance
What is AWS responsible for with regards to EC2?
- Infrastructure (global network security)
- Isolation on physical hosts
- Replacing faulty hardware
- Compliance validation
What are EC2 Dedicated Instances?
Instances running on hardware that's dedicated to you. May share hardware with other instances in the same account. No control over instance placement (hardware can move after Stop / Start). A softer version of a Dedicated Host
What situations are EC2 Dedicated Hosts recommended for?
Useful for software that has a complicated licensing model (BYOL - Bring Your Own License) or for companies that have strong regulatory or compliance needs
What is the reservation period for EC2 Dedicated Hosts?
1 or 3 years
What is an EC2 Dedicated Host?
A physical server with EC2 instance capacity fully dedicated to your use. Can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses.
What is the most cost-effective instance type in AWS?
EC2 Spot Instances
How can you lose your EC2 Spot Instance?
Any time your max price is less than the current spot price
How does EC2 Spot Instance compare to On Demand for pricing?
Up to 90% off compared to On Demand
What situations is EC2 Scheduled Reserved Instances recommended for?
When you require a fraction of a day / week / month
What are EC2 Scheduled Reserved Instances?
Launch within a time window you reserved.
How does EC2 Convertible Reserved Instance compare to On Demand for pricing?
Up to 54% off compared to On Demand
What is a Convertible Reserved Instance?
Allows you to change instance type
What situations are EC2 Reserved Instances recommended for?
Steady-state usage applications (think databases)
Can you change instance type on a regular EC2 Reserved Instance?
No
What are EC2 Reserved Instances purchasing options?
- No upfront
- Partial upfront = + discount
- All upfront = ++ discount
How does regular EC2 Reserved Instance compare to On Demand for pricing?
Up to 75% off compared to On Demand
What are the EC2 Reserved Instance reservation periods?
- 1 year = + discount
- 3 years = +++ discount
Which EC2 Instance has the highest cost?
On Demand
Does EC2 On Demand have any upfront cost?
No
Does EC2 On Demand have a long-term commitment?
No
What are EC2 On Demand instances recommended for?
Short-term and uninterrupted workloads where you can't predict how the application will behave
What is EC2 On Demand pricing?
Pay for what you use.
Linux: billing per second, after the first minute
All other operating systems: billing per hour
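The per-second rule above can be sketched as a tiny cost function; the hourly rate used here is invented for illustration:

```python
# A sketch of Linux On-Demand billing: per-second billing with a
# 60-second minimum. The hourly rate is a made-up example value.
def linux_on_demand_cost(run_seconds: int, hourly_rate: float) -> float:
    billable_seconds = max(run_seconds, 60)  # 60-second minimum charge
    return billable_seconds / 3600 * hourly_rate

# A 30-second run is billed for 60 seconds; a 90-second run for 90.
short_run = linux_on_demand_cost(30, 3.6)   # hypothetical $3.60/hour rate
longer_run = linux_on_demand_cost(90, 3.6)
```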
What are the 4 main types of EC2 Instance purchasing options?
- On-Demand: short workloads, predictable pricing
- Reserved: minimum 1 year
- Spot Instances: short workloads, cheap, can lose instance (less reliable)
- Dedicated Hosts: book an entire physical server, control instance placement
Will EC2 have the same public IP address when you restart it?
No
What does EC2 Instance Connect work for? (can be multiple)
- Mac
- Linux
- Windows < 10
- Windows >= 10
Mac, Linux, Windows <10, Windows >= 10
What does Putty work for? (can be multiple)
- Mac
- Linux
- Windows < 10
- Windows >= 10
Windows <10 and Windows >=10
What does SSH work for? (can be multiple)
- Mac
- Linux
- Windows < 10
- Windows >= 10
Mac, Linux, Windows >= 10
What is port 3389 for?
RDP (Remote Desktop Protocol) - log into a Windows instance
What is port 80 for?
HTTP - access unsecured websites
What is port 22 for?
SSH (Secure Shell) port - 22 is used to get CLI access to Linux instances. Allowing inbound traffic from all external IP addresses to the SSH port leaves you vulnerable to banner grabbing and brute-force attacks. It is a best practice to restrict access to port 22 to specific IP addresses.
SFTP (Secure File Transport Protocol) - upload files using SSH
SSH (Secure Shell) - log into a Linux instance
What is port 21 for?
FTP (File Transport Protocol) - upload files into a file share
All outbound traffic is ___ by default
authorized
All inbound traffic is ___ by default
blocked
Is it a security group issue if your application gives a ‘connection refused’ error?
No, it's an application error, or the application isn't launched
Is it a security group issue if your application is not accessible (time out)?
Yes
Is it good to maintain one separate security group for SSH access?
Yes
If traffic is blocked by a security group, will the EC2 instance see it?
No
Are security groups locked down to a region / VPC combination?
Yes
Can a security group be attached to multiple instances?
Yes
Can an instance have multiple security groups attached?
Yes
What do security groups regulate?
- Access to Ports
- Authorized IP ranges - IPv4 and IPv6
- Control of inbound network (from other to the instance)
- Control of outbound network (from the instance to other)
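The allow-only, implicit-deny behaviour of security group inbound rules can be sketched in a toy model (the rule shape here is a simplification for illustration, not the real API):

```python
import ipaddress

# Toy model of security-group inbound evaluation. Rules are allow-only,
# so any traffic not matched by some rule is implicitly denied.
rules = [
    {"from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},     # HTTPS from anywhere
    {"from_port": 22, "to_port": 22, "cidr": "203.0.113.0/24"},  # SSH from one office range
]

def inbound_allowed(port: int, source_ip: str) -> bool:
    ip = ipaddress.ip_address(source_ip)
    return any(
        r["from_port"] <= port <= r["to_port"]
        and ip in ipaddress.ip_network(r["cidr"])
        for r in rules
    )
```

This mirrors why a blocked connection times out rather than being refused: unmatched traffic is simply dropped, and the instance never sees it.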
How do Kinesis Data Streams work?
Kinesis Data Streams enable you to ingest data from thousands of sources. They scale based on the number of shards you create, buffer the data for 24 hours by default, and enable one or more consumers to read from the stream.
When you create a Kinesis stream, in what region are you creating the stream or is it a global service?
You are creating the stream in the region you have selected; Kinesis Data Streams is not a global service.
What is Kinesis?
It is a family of products for data stream processing; this means ingestion, analysis/processing and storage.
What is the Kinesis producer?
It is the entity that puts data into the stream:
- IoT device
- Mobile device
- Application
- EC2 instance
- On-premises server
What is a Kinesis consumer?
This is the entity that takes data out of the stream.
Can I have multiple Kinesis consumers?
Yes
What types of Kinesis consumers can I have?
You can have:
- EC2 using the Kinesis Client Library (KCL)
- Lambda
- Kinesis Firehose
What types of streams can I have in Kinesis?
- Data streams
- Video streams
How long is data stored in a Kinesis stream?
24 hours (you can increase this to 7 days for an extra charge)
How does a Kinesis stream relate to shards?
A Kinesis stream is a collection of shards.
What are the units associated with a single (Kinesis) shard?
Read at 2 MB per second
Write at 1 MB per second
What is the max number of shards (in Kinesis)?
500
I require a Kinesis stream capable of 10 MB/s write, how many shards do I need?
You need 10
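The shard arithmetic behind answers like this can be sketched using the per-shard limits given above (1 MB/s write, 2 MB/s read):

```python
import math

WRITE_MB_PER_SHARD = 1  # each shard accepts up to 1 MB/s of writes
READ_MB_PER_SHARD = 2   # and serves up to 2 MB/s of reads

def shards_needed(write_mb_per_s: float, read_mb_per_s: float = 0.0) -> int:
    """The shard count must cover both the write and the read throughput."""
    return max(
        math.ceil(write_mb_per_s / WRITE_MB_PER_SHARD),
        math.ceil(read_mb_per_s / READ_MB_PER_SHARD),
        1,
    )
```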
How many data records per second can a single shard in a Kinesis accept?
1,000 records per second per shard (for writes)
How big can a single Kinesis data record be?
1 MB
What is the partition key used for in Kinesis?
It is used to select the shard to use when writing the data.
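A sketch of how a partition key maps to a shard, assuming Kinesis's documented MD5 hashing and (as a simplification) equal hash-key ranges per shard:

```python
import hashlib

def shard_for_key(partition_key: str, shard_count: int) -> int:
    """Kinesis hashes the partition key with MD5 into a 128-bit hash key;
    each shard owns a contiguous range of that hash-key space. This sketch
    assumes the space is split into equal ranges."""
    hash_key = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2**128 // shard_count
    return min(hash_key // range_size, shard_count - 1)
```

Because the mapping is deterministic, records sharing a partition key always land on the same shard, which preserves their ordering.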
We are using Kinesis and our org has a policy where all data in transit and at rest is encrypted, is it possible to have Kinesis encrypt the data at rest?
Yes, you can use server-side encryption (SSE-KMS) with AWS-managed keys or with customer-managed keys.
How can I monitor the Kinesis stream metrics?
You can use CloudWatch to monitor shard-level metrics such as incoming bytes, outgoing bytes, etc.
I want to access Kinesis from my VPC without going on the internet, how can I do this?
VPC endpoints.
I have Lambda configured to operate in the VPC, how best can I have lambda access Kinesis?
Through VPC endpoints
What is an “enhanced fan out” in relation to Kinesis?
It means that each registered consumer gets its own dedicated 2 MB per second of read throughput per shard, rather than sharing the shard's bandwidth with other consumers.
What is Kinesis Firehose?
It enables you to take data from a Kinesis stream and push it to a datastore like:
- Elasticsearch
- S3
- Redshift
- Splunk
For Kinesis Firehose, what are the two input sources you can have?
- Kinesis streams
- Direct: send records directly to Kinesis Firehose
I want to push CloudWatch events into a Kinesis Firehose, how can I do this?
Kinesis Firehose can take Direct input from these sources.
I want to push CloudWatch Logs into a Kinesis Firehose, how can I do this?
Kinesis Firehose can take Direct input from these sources.
I want to transform data in Kinesis Firehose before delivering it to S3, how can I do this?
Kinesis Firehose has the option to transform data using Lambda.
What are the products in Kinesis?
- Kinesis Data Streams
- Kinesis Firehose
- Kinesis Analytics
For Kinesis can I compress and encrypt data?
Yes you can take an input stream and when delivering it you can encrypt and compress it.
What are the main functions of Kinesis?
Take an input stream, transform it, and store it, optionally compressing and encrypting it, to a number of destinations like S3, Redshift, Elasticsearch and Splunk.
What types of inputs can you have with Kinesis?
Direct PUTs from sources like IoT devices, etc.
Kinesis
If Kinesis is used as an input for Kinesis firehose and the Kinesis stream is already encrypted, will the firehose be automatically encrypted?
Yes.
For Kinesis Analytics, what are the inputs we can have?
You can have Kinesis Streams and Kinesis Firehose
For Kinesis Analytics how can I perform preprocessing of the stream data?
Kinesis Analytics has the ability to use Lambda as a pre-processor.
What can I output Kinesis Analytics to?
Kinesis Streams and Kinesis Firehose
I want to add some reference data to the stream data in my Kinesis Analytics, how can I do this?
You can use the reference table to supply the reference data.
What is the flow of data through Kinesis Analytics?
Input stream from Kinesis Streams or Firehose into the input table; a SELECT query outputs to the application output stream, and the data is then passed on to Kinesis Streams or Firehose.
What are you NOT authorized to do on AWS according to the AWS Acceptable Use Policy?
- Building a gaming application
- Deploying a website
- Run analytics on stolen content
- Backup your data
Run analytics on stolen content
A company would like to benefit from the advantages of the Public Cloud but would like to keep sensitive assets in its own infrastructure. Which deployment model should the company use?
- Private Cloud
- Public Cloud
- Hybrid Cloud
Hybrid Cloud
What defines the distribution of responsibilities for security in the AWS Cloud?
- AWS Pricing Fundamentals
- The Shared Responsibility Model
- AWS Acceptable Use Policy
- The AWS Management Console
The Shared Responsibility Model
Which of the following is the definition of Cloud Computing?
- Rapidly develop, test, and launch software applications
- Automatic and quick ability to acquire resources as you need and release resources when you no longer need them
- On-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user
- Change resource types when needed
On-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user
Which of the following services has a global scope?
- EC2
- IAM
- Lambda
- Rekognition
IAM
AWS Regions are composed of?
- Two or more Edge Locations
- One or more discrete data centres
- Two or more Availability Zones
Two or more Availability Zones
Which of the following is NOT an advantage of Cloud Computing?
- Trade capital expense (CAPEX) for operational expense (OPEX)
- Train your employees less
- Go global in minutes
- Stop spending money running and maintaining data centres
Train your employees less
Which of the following options is NOT a point of consideration when choosing an AWS Region?
- Compliance and data governance
- Latency
- Capacity availability
- Pricing
Capacity availability
Which are the 3 pricing fundamentals of the AWS Cloud?
- Compute, Storage, and Data transfer in the AWS Cloud
- Compute, Networking, and Data transfer out of the AWS Cloud
- Compute, Storage, and Data transfer out of the AWS Cloud
- Storage, Functions, and Data transfer in the AWS Cloud
Compute, Storage, and Data transfer out of the AWS Cloud
Which of the following is NOT one of the Five Characteristics of Cloud Computing?
- Rapid elasticity and scalability
- Multi-tenancy and resource pooling
- Dedicated Support Agent to help you destroy applications
- On-demand self service
Dedicated Support Agent to help you destroy applications
Which Global Infrastructure identity is composed of one or more discrete data centres with redundant power, networking, and connectivity, and are used to deploy infrastructure?
- Edge locations
- Availability Zones
- Regions
Availability Zones
What is Amazon Quicksight?
Amazon QuickSight is a machine learning-powered business intelligence (BI) service built for the cloud. QuickSight lets you easily create and publish interactive BI dashboards that include Machine Learning-powered insights. QuickSight dashboards can be accessed from any device, and seamlessly embedded into your applications, portals, and websites.
Unlike traditional BI or data discovery solutions, getting started with Amazon QuickSight is simple and fast. When you log in, Amazon QuickSight seamlessly discovers your data sources in AWS services such as Amazon Redshift, Amazon RDS, Amazon Athena, and Amazon Simple Storage Service (Amazon S3). You can connect to any of the data sources discovered by Amazon QuickSight and get insights from this data in minutes. Amazon QuickSight supports rich data discovery and business analytics capabilities to help customers derive valuable insights from their data without worrying about provisioning or managing infrastructure.
Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights to the people who you work with, wherever they are. Amazon QuickSight connects to your data in the cloud and combines data from many different sources. In a single data dashboard, QuickSight can include AWS data, third-party data, big data, spreadsheet data, SaaS data, B2B data, and more. As a fully managed cloud-based service, Amazon QuickSight provides enterprise-grade security, global availability, and built-in redundancy. It also provides the user-management tools that you need to scale from 10 users to 10,000, all with no infrastructure to deploy or manage.
QuickSight gives decision-makers the opportunity to explore and interpret information in an interactive visual environment. They have secure access to dashboards from any device on your network and from mobile devices.
What Is Amazon EventBridge?
Amazon EventBridge is a serverless event bus service that you can use to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your applications, software as a service (SaaS) applications, and AWS services to targets such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
Amazon EventBridge (also called Amazon CloudWatch Events): Amazon EventBridge is a serverless event bus service that makes it easy for you to build event-driven application architectures. Amazon EventBridge helps you accelerate modernizing and re-orchestrating your architecture with decoupled services and applications. With EventBridge, you can speed up your organization’s development process by allowing teams to iterate on features without explicit dependencies between systems.
What is Amazon Managed Blockchain (AMB)?
Amazon Managed Blockchain allows you to easily create and manage scalable blockchain networks.
It is a fully managed service that makes it easy to join public networks or create and manage scalable private networks using the popular open-source frameworks Hyperledger Fabric and Ethereum.
What is AWS Snowball?
AWS Snowball is a service that uses physical storage devices to transfer large amounts of data between Amazon's Simple Storage Service (popularly known as an S3 bucket) and your on-premises data storage location at faster speeds than the Internet.
Amazon claims that it can save you time and money. Snowball offers a powerful interface that you can use to create jobs, track data, and track your jobs’ status through to completion.
Snowball is a physically rugged device that can be protected by the AWS Key Management Service (AWS KMS). Snowballs secure and protect your data in transit; regional shipping carriers transport them between Amazon S3 and your on-premises data storage location.
Generally, Snowball is used for data migration projects: when there is a vast amount of data stored locally and a need to move that data to the cloud. When there are petabytes of information, the Internet is not a viable option because of speed issues, security concerns, and networking complexities.
What is Amazon Detective?
SUMMARY
Amazon Detective is the service that helps AWS customers analyse, investigate, and quickly identify the root cause of potential security issues or suspicious activities.
DETAIL
Amazon Detective makes it easy to analyse, investigate, and quickly identify the root cause of security findings or suspicious activities. Detective automatically collects log data from your AWS resources. It then uses machine learning, statistical analysis, and graph theory to generate visualizations that help you to conduct faster and more efficient security investigations.
The Detective prebuilt data aggregations, summaries, and context help you to quickly analyse and determine the nature and extent of possible security issues. Detective maintains up to a year of historical event data. This data is easily available through a set of visualizations that show changes in the type and volume of activity over a selected time window. Detective links those changes to GuardDuty findings.
How does Amazon Detective differ from Amazon GuardDuty?
Amazon GuardDuty is helpful in alerting you when something is wrong and pointing out where to go to fix it. But sometimes, there might be a security finding where you need to dig a lot deeper and analyse more information to isolate the root cause and take action.
Amazon Detective simplifies this process by enabling you to easily investigate and quickly get to the root cause of a security finding. Amazon Detective analyzes trillions of events from multiple data sources such as Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail logs, and automatically creates a unified view of user and resource interactions over time, with all the context and details in one place to help you quickly analyse and get to the root cause of a security finding.
For example, an Amazon GuardDuty finding, like an unusual Console Login API call, can be quickly investigated in Amazon Detective with details about the API call trends over time, and user login attempts on a geolocation map. These details enable you to quickly identify if you think it is legitimate or an indication of a compromised AWS resource.
What is AWS Snowcone?
AWS Snowcone is the smallest member of the AWS Snow Family of devices—a collection of physical devices designed for environments outside of traditional data centres that lack consistent network connectivity, space, power, cooling, and/or require portability.
From the suitcase-sized 50 pound AWS Snowball to the 45-foot long shipping container AWS Snowmobile, the Snow services collect and process data, run local computing applications, and move large volumes of data, such as digital media, genomic data, and sensor data to AWS.
Weighing less than 5 pounds and able to fit in a standard mailbox or a small backpack, Amazon Web Services (AWS) has launched a new small, ultra-portable, rugged, and secure edge computing and data transfer device called AWS Snowcone.
The smallest device in the range that is best suited for outside the data centre.
What is AWS Snowmobile?
AWS uses storage transportation devices, like AWS Snowball and Snowmobile to allow companies to transfer data to the cloud.
A literal shipping container full of storage (up to 100PB) and a truck to transport it.
AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS, including video libraries, image repositories, or even a complete data center migration. Customers can transfer up to 100 PetaBytes per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck.
AWS Snowmobile is the service that can be used to transfer Exabyte-scale data from on-premises data centres into AWS.
What is Amazon Quantum Ledger Database (QLDB)?
Amazon Quantum Ledger Database (QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log.
What is the AWS Command Line Interface (CLI)?
The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
What is AWS CodeStar?
AWS CodeStar is a cloud-based service for creating, managing, and working with software development projects on AWS. You can quickly develop, build, and deploy applications on AWS with an AWS CodeStar project. An AWS CodeStar project creates and integrates AWS services for your project development toolchain. Depending on your choice of AWS CodeStar project template, that toolchain might include source control, build, deployment, virtual servers or serverless resources, and more. AWS CodeStar also manages the permissions required for project users (called team members). By adding users as team members to an AWS CodeStar project, project owners can quickly and simply grant each team member role-appropriate access to a project and its resources.
What is Amazon Neptune? (Database)
Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.
With Amazon Neptune, you can create sophisticated, interactive graph applications that can query billions of relationships in milliseconds.
SQL queries for highly connected data are complex and hard to tune for performance. Instead, Amazon Neptune allows you to use the popular graph query languages Apache TinkerPop Gremlin and W3C’s SPARQL to execute powerful queries that are easy to write and perform well on connected data.
The core of Neptune is a purpose-built, high-performance graph database engine. This engine is optimized for storing billions of relationships and querying the graph with milliseconds latency.
Neptune supports the popular graph query languages Apache TinkerPop Gremlin, the W3C’s SPARQL, and Neo4j’s openCypher, enabling you to build queries that efficiently navigate highly connected datasets.
Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.
Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones. Neptune provides data security features, with support for encryption at rest and in transit. Neptune is fully managed, so you no longer need to worry about database management tasks like hardware provisioning, software patching, setup, configuration, or backups.
https://aws.amazon.com/neptune/features/
What is the DMS AWS Schema Conversion Tool (SCT)?
The AWS Schema Conversion Tool (AWS SCT) makes heterogeneous database migrations predictable. It automatically converts the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. Any objects that cannot be automatically converted are clearly marked so that they can be manually converted to complete the migration. SCT can also scan your application source code for embedded SQL statements and convert them as part of a database-schema-conversion project. During this process, SCT performs cloud-native code optimization by converting legacy Oracle and SQL Server functions to their equivalent AWS service, helping you modernize the applications at the same time of database migration. Once schema conversion is complete, SCT can help migrate data from a range of data warehouses to Amazon Redshift using built-in data migration agents.
What is the AWS Systems Manager (SSM) Parameter Store?
Parameter Store, a capability of AWS Systems Manager, provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can reference Systems Manager parameters in your scripts, commands, SSM documents, and configuration and automation workflows by using the unique name that you specified when you created the parameter. To get started with Parameter Store, open the Systems Manager console. In the navigation pane, choose Parameter Store.
Parameter Store is also integrated with Secrets Manager. You can retrieve Secrets Manager secrets when using other AWS services that already support references to Parameter Store parameters. For more information, see Referencing AWS Secrets Manager secrets from Parameter Store parameters.
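The hierarchical naming described above can be sketched with a toy in-memory store (all parameter names and values here are invented; the real service is called through the SSM API):

```python
# A toy model of Parameter Store's hierarchical naming: parameters live
# under /path/like/names, and a whole subtree can be fetched by path,
# mirroring the GetParametersByPath behaviour.
parameters = {
    "/myapp/dev/db-url": "dev.db.example.com",
    "/myapp/dev/db-password": "s3cret",  # stored encrypted (SecureString) in the real service
    "/myapp/prod/db-url": "prod.db.example.com",
}

def get_parameters_by_path(path: str) -> dict:
    prefix = path.rstrip("/") + "/"
    return {name: value for name, value in parameters.items() if name.startswith(prefix)}

dev_params = get_parameters_by_path("/myapp/dev")
```

Organising parameters by environment like this lets a deployment pull exactly the subtree it needs with one call.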
Which AWS service or feature can be used to call AWS Services from different programming languages?
AWS Software Development Kit (SDK)
You are working on two projects that require completely different network configurations. Which AWS service or feature will allow you to isolate resources and network configurations?
Virtual Private Cloud
Your company is developing a critical web application in AWS, and the security of the application is a top priority. Which of the following AWS services will provide infrastructure security optimization recommendations?
AWS Trusted Advisor
In order to implement best practices when dealing with a “Single Point of Failure,” you should attempt to build as much automation as possible in both detecting and reacting to failure. Which AWS services would help?
Auto Scaling and ELB
You should attempt to build as much automation as possible in both detecting and reacting to failure.
You can use services like ELB and Amazon Route53 to configure health checks and mask failure by only routing traffic to healthy endpoints.
In addition, Auto Scaling can be configured to automatically replace unhealthy nodes.
You can also replace unhealthy nodes using the Amazon EC2 auto-recovery feature or services such as AWS OpsWorks and AWS Elastic Beanstalk.
It won’t be possible to predict every possible failure scenario on day one.
Make sure you collect enough logs and metrics to understand normal system behaviour.
After you understand that, you will be able to set up alarms that trigger automated response or manual intervention.
AWS allows users to manage their resources using a web based user interface. What is the name of this interface?
AWS Management Console
The AWS Management Console allows you to access and manage Amazon Web Services through a simple and intuitive web-based user interface. You can also use the AWS Console mobile app to quickly view resources on the go.
Two options related to the reliability of AWS are?
- Ability to recover quickly from failures.
- Automatically provisioning new resources to meet demand.
The reliability term encompasses the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. The automatic provisioning of resources and the ability to recover from failures meet these criteria.
What are the benefits of having infrastructure hosted in AWS?
All of the physical security and most of the data/network security are taken care of for you
Increasing speed and agility
In the AWS Shared Responsibility Model do responsibilities vary depending on the services used?
Yes
One of the most important AWS best-practices to follow is the cloud architecture principle of elasticity. How does this principle improve your architecture’s design?
By automatically provisioning the required AWS resources based on changes in demand.
Before cloud computing, you had to overprovision infrastructure to ensure you had enough capacity to handle your business operations at the peak level of activity. Now, you can provision the amount of resources that you actually need, knowing you can instantly scale up or down with the needs of your business. This reduces costs and improves your ability to meet your users’ demands.
The concept of Elasticity involves the ability of a service to scale its resources out or in (up or down) based on changes in demand. For example, Amazon EC2 Autoscaling can help automate the process of adding or removing Amazon EC2 instances as demand increases or decreases.
You are working on a project that involves creating thumbnails of millions of images. Consistent uptime is not an issue, and continuous processing is not required. Which EC2 buying option would be the most cost-effective?
Spot Instances
Spot instances provide a discount (up to 90%) off the On-Demand price. The Spot price is determined by long-term trends in supply and demand for EC2 spare capacity. If the Spot price exceeds the maximum price you specify for a given instance or if capacity is no longer available, your instance will automatically be interrupted.
Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if you don’t mind if your applications get interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
Adjusting compute capacity dynamically to reduce cost is an implementation of which AWS cloud best practice?
Implement Elasticity.
The concept of Elasticity is the means of an Application having the ability to scale up and scale down based on demand. An example of such a service is the Autoscaling service. The benefit of Elasticity is therefore creating systems that scale to the required capacity based on changes in demand.
A Japanese company hosts their applications on Amazon EC2 instances in the Tokyo Region. The company has opened new branches in the United States, and the US users are complaining of high latency. What can the company do to reduce latency for the users in the US while minimizing costs?
Deploying new Amazon EC2 Instances in a Region located in the US.
The only way to reduce latency for the US users is to provision new Amazon EC2 instances in a Region closer to or in the US, OR by using Amazon CloudFront to cache copies of the content in edge locations close to the US users. In both cases, user requests will travel a shorter distance over the network, and the performance will improve.
The principle “design for failure and nothing will fail” is very important when designing your AWS Cloud architecture. Which elements of AWS would help adhere to this principle?
Availability Zones and Elastic Load Balancing
Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones. When designing your AWS Cloud architecture, you should make sure that your system will continue to run even if failures happen. You can achieve this by deploying your AWS resources in multiple Availability zones. Availability zones are isolated from each other; therefore, if one availability zone goes down, the other Availability Zones will still be up and running, and hence your application will be more fault-tolerant. In addition to availability zones, you can build a disaster recovery solution by deploying your AWS resources in other regions. If an entire region goes down, you will still have resources in another region able to continue to provide a solution. Finally, you can use the Elastic Load Balancing service to regularly perform health checks and distribute traffic only to healthy instances.
A company has an AWS Enterprise Support plan. They want quick and efficient guidance with their billing and account inquiries. What service should they use?
AWS Support Concierge
The AWS Support Concierge Service assists customers with account and billing enquiries.
A company is introducing a new product to their customers, and is expecting a surge in traffic to their web application. As part of their Enterprise Support plan, what would they use for architectural and scaling guidance?
Infrastructure Event Management
AWS Infrastructure Event Management is a short-term engagement with AWS Support, included in the Enterprise-level Support product offering, and available for additional purchase for Business-level Support subscribers. AWS Infrastructure Event Management partners with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use-case examples for AWS Event Management include advertising launches, new product launches, and infrastructure migrations to AWS.
What should you do in order to keep the data on EBS volumes safe?
- Ensure that EBS data is encrypted at rest.
- Create EBS snapshots.
Which service allows customers to manage their agreements (contracts) with AWS?
AWS Artifact
What is the AWS database service that allows you to upload data structured in key-value format?
Amazon DynamoDB
How can you view the distribution of AWS spending in one of your AWS accounts?
By using AWS Cost Explorer.
A company is concerned that they are spending money on underutilized compute resources in AWS. Which AWS feature will help ensure that their applications are automatically adding/removing EC2 compute capacity to closely match the required demand?
AWS Auto Scaling
An organization has a large number of technical employees who operate their AWS Cloud infrastructure. What does AWS provide to help organize them into teams and then assign the appropriate permissions for each team?
IAM Groups
What do you gain from setting up consolidated billing for five different AWS accounts under another master account?
Each AWS account gets volume discounts
What does AWS provide to deploy popular technologies - such as IBM MQ - on AWS with the least amount of effort and time?
AWS Partner Solutions (formerly AWS Quick Start reference deployments).
AWS Partner Solutions (formerly AWS Quick Starts) outline the architectures for popular enterprise solutions on AWS and provide AWS CloudFormation templates to automate their deployment. Each Partner Solution launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.
AWS Partner Solutions are automated reference deployments built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices. These accelerators reduce hundreds of manual installation and configuration procedures into just a few steps, so you can build your production environment quickly and start using it immediately.
TWO examples of the AWS shared controls are?
Configuration Management
Patch Management
A company has moved to AWS recently. Which AWS Services will help ensure that they have the proper security settings?
Amazon Inspector and AWS Trusted Advisor
You have AWS Basic support, and you have discovered that some AWS resources are being used maliciously, and those resources could potentially compromise your data. What should you do?
Contact the AWS Abuse team
According to the AWS Acceptable Use Policy, is the following statement true regarding penetration testing of EC2 instances?
Penetration testing can be performed by the customer on their own instances without prior authorisation from AWS.
Yes
For Amazon EC2 On-Demand Instances, is it true that you have to pay a start-up fee when launching a new instance for the first time?
No
A global company with a large number of AWS accounts is seeking a way in which they can centrally manage billing and security policies across all accounts. Which AWS Service will assist them in meeting these goals?
AWS Organizations
What is the advantage of the AWS-recommended practice of “decoupling” applications?
Reduces inter-dependencies so that failures do not impact other components of the application
Adding more EC2 instances of the same size to handle an increase in traffic is an example of horizontal or vertical scaling?
Horizontal
A company has developed an eCommerce web application in AWS. What should they do to ensure that the application has the highest level of availability?
Deploy the application across multiple Regions and Availability Zones
What must an IAM user provide to interact with AWS services using the AWS Command Line Interface (AWS CLI)?
Access keys
Two examples of AWS-Managed Services, where AWS is responsible for the operational and maintenance burdens of running the service are?
Amazon Elastic MapReduce
Amazon DynamoDB
Which service helps a customer view the Amazon EC2 billing activity for the past month?
AWS Cost & Usage Reports
A developer is planning to build a two-tier web application that has a MySQL database layer. Which AWS database service would provide automated backups for the application?
Amazon Aurora.
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud.
Amazon Aurora combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It delivers up to five times the throughput of standard MySQL and up to three times the throughput of standard PostgreSQL. Amazon Aurora is designed to be compatible with MySQL and with PostgreSQL, so that existing applications and tools can run without requiring modification. It is available through Amazon Relational Database Service (RDS), freeing you from time-consuming administrative tasks such as provisioning, patching, backup, recovery, failure detection, and repair.
Your company has a data store application that requires access to a NoSQL database. Which AWS database offering would meet this requirement?
Amazon DynamoDB
You work as an on-premises MySQL DBA. The work of database configuration, backups, patching, and DR can be time-consuming and repetitive. Your company has decided to migrate to the AWS Cloud. Which service can help save time on database maintenance so you can focus on data architecture and performance?
Amazon RDS
What is the AWS service that provides a virtual network dedicated to your AWS account?
Amazon VPC
AWS Snowball provides?
- Secure transfer of large amounts of data into and out of the AWS Cloud.
- Built-in computing capabilities that allow customers to process data locally.
Which S3 storage class is best for data with unpredictable access patterns?
Amazon S3 Intelligent-Tiering
The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. For a small monthly monitoring and automation fee per object, Amazon S3 monitors access patterns of the objects in S3 Intelligent-Tiering, and moves the ones that have not been accessed for 30 consecutive days to the infrequent access tier. If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier. There are no retrieval fees when using the S3 Intelligent-Tiering storage class, and no additional tiering fees when objects are moved between access tiers. It is the ideal storage class for long-lived data with access patterns that are unknown or unpredictable.
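The 30-day rule described above can be sketched as a tiny decision function — a simplification of what S3 Intelligent-Tiering does internally, for illustration only:

```python
def tier_for(days_since_last_access: int) -> str:
    """Sketch of the Intelligent-Tiering rule: objects not accessed for
    30 consecutive days move to the lower-cost infrequent access tier."""
    return "infrequent" if days_since_last_access >= 30 else "frequent"

print(tier_for(5), tier_for(45))  # frequent infrequent
```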
Types of AWS Identity and Access Management (IAM) identities?
IAM Users
IAM Roles
An IAM user is uniquely associated with only one person, however a role is intended to be assumable by anyone who is authorized to use it.
An IAM user has permanent credentials associated with it, however a role has temporary credentials associated with it.
AWS IAM and its features are offered at no additional charge.
Which AWS Service allows customers to create a template that programmatically defines policies and configurations of all AWS resources as code and so that the same template can be reused among multiple projects?
AWS CloudFormation
An AWS customer has used one Amazon Linux instance for 2 hours, 5 minutes and 9 seconds, and one CentOS instance for 4 hours, 23 minutes and 7 seconds. How much time will the customer be billed for?
2 hours, 5 minutes and 9 seconds for the Amazon Linux instance, and 5 hours for the CentOS instance. Amazon Linux instances are billed per second (with a 60-second minimum), while instances launched from certain AMIs, such as CentOS, are billed per full hour, rounded up.
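The arithmetic behind this answer can be sketched in Python — assuming per-second billing with a 60-second minimum for Amazon Linux, and per-hour billing rounded up for this CentOS example:

```python
import math

def per_second_bill(h: int, m: int, s: int, minimum: int = 60) -> int:
    """Amazon Linux: billed per second of usage, with a 60-second minimum."""
    total = h * 3600 + m * 60 + s
    return max(total, minimum)

def per_hour_bill(h: int, m: int, s: int) -> int:
    """AMIs billed hourly (e.g. this CentOS example): rounded up to full hours."""
    total = h * 3600 + m * 60 + s
    return math.ceil(total / 3600)

print(per_second_bill(2, 5, 9))  # 7509 (seconds billed)
print(per_hour_bill(4, 23, 7))   # 5 (hours billed)
```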
A company is planning to use Amazon S3 and Amazon CloudFront to distribute its video courses globally. What tool can the company use to estimate the costs of these services?
AWS Pricing Calculator
A customer is planning to migrate their Microsoft SQL Server databases to AWS. Which AWS Services can the customer use to run their Microsoft SQL Server database on AWS?
Amazon RDS
Amazon Elastic Compute Cloud
Amazon Web Services offers the flexibility to run Microsoft SQL Server as either a self-managed component inside of EC2, or as a managed service via Amazon RDS. Using SQL Server on Amazon EC2 gives customers complete control over the database, just like when it’s installed on-premises. Amazon RDS is a fully managed service where AWS manages the maintenance, backups, and patching.
Which AWS service or feature can be used to call AWS Services from different programming languages?
AWS Software Development Kit
The AWS Software Development Kit (AWS SDK) can simplify using AWS services in your applications with an API tailored to your programming language or platform. Programming languages supported include Java, .NET, Node.js, PHP, Python, Ruby, Go, and C++.
What are AWS shared controls?
Controls that apply to both the infrastructure layer and customer layers
Shared Controls are controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Examples include:
- Patch Management – AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
- Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
- Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.
What is AWS Textract?
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.
It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
What is AWS Comprehend?
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text. Customers can use Amazon Comprehend to identify the language of the text, extract key phrases, places, people, brands, or events, understand sentiment about products or services, and identify the main topics from a library of documents. The source of this text could be web pages, social media feeds, e-mails, or articles. Amazon Comprehend is fully managed, so there are no servers to provision, and no machine learning models to build, train, or deploy.
Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text.
Note: Natural language processing (NLP) is an artificial intelligence technology that helps computers identify, understand, and manipulate human language.
What is QuickSight?
It is a Business Intelligence Tool.
What services can you query data from with QuickSight?
Athena
Aurora
Redshift
S3
Apache Spark 2.0+
MariaDB
MS SQL 2012+
MySQL 5.1+
Is QuickSight a “pay as you use” model?
No, you sign up for a subscription.
What am I paying for in DynamoDB?
- Paying for storage
- Read/Write capacity
What is a DynamoDB trigger?
Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers, that is, pieces of code that automatically respond to events in DynamoDB Streams.
With triggers, you can build applications that react to data modifications in DynamoDB tables.
This is where an item changes in the DynamoDB and a trigger fires and lambda is called.
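A minimal sketch of such a Lambda trigger handler — the event below is a hand-built simplification of the real DynamoDB Streams record format, for illustration only:

```python
def handler(event, context=None):
    """Sketch of a Lambda handler reacting to DynamoDB Streams records.
    Collects (event name, item keys) for each change record."""
    changed = []
    for record in event.get("Records", []):
        keys = record.get("dynamodb", {}).get("Keys", {})
        changed.append((record.get("eventName"), keys))
    return changed

# Hand-built sample event mimicking the DynamoDB Streams shape:
sample = {"Records": [{"eventName": "MODIFY",
                       "dynamodb": {"Keys": {"Id": {"N": "101"}}}}]}
print(handler(sample))  # [('MODIFY', {'Id': {'N': '101'}})]
```

In a real deployment, the stream is configured as the Lambda function's event source, and AWS invokes the handler with batches of records as items change.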
Can I have reserved capacity on DynamoDB?
You can purchase reserved capacity
What is Amazon WorkDocs?
It is a Dropbox-like file storage and collaboration service.
Can WorkDocs integrate with AD and SSO?
Yes
What clients are available for WorkDocs?
Web
Mobile
Native
There is no Linux client.
Is WorkDocs HIPAA compliant?
Yes
What is OIDC?
OpenID Connect
What is JWT?
JSON Web Token
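A JWT consists of three base64url-encoded segments (header.payload.signature). A sketch of decoding one segment with the standard library — note this illustrates the encoding only and performs no signature verification:

```python
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one JWT segment (header or payload).
    JWTs strip the '=' padding, so it must be restored before decoding."""
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a sample payload segment to decode (an illustration, not a signed token):
payload = (base64.urlsafe_b64encode(json.dumps({"sub": "user-1"}).encode())
           .rstrip(b"=").decode())
print(decode_jwt_part(payload))  # {'sub': 'user-1'}
```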
I need to analyze clickstream data, what is my best architecture?
Amazon Kinesis with a Kinesis worker application to process the stream.
When talking about an RPO of 15min, if a disaster occurred at 5 pm, what is the acceptable data loss window?
4:45 pm to 5 pm
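The window arithmetic can be sketched with the standard library — the disaster date below is an arbitrary example:

```python
from datetime import datetime, timedelta

def rpo_window(disaster_time: datetime, rpo: timedelta):
    """The acceptable data loss window runs from (disaster - RPO) to the disaster."""
    return disaster_time - rpo, disaster_time

# A 15-minute RPO with a disaster at 5:00 pm (date chosen arbitrarily):
start, end = rpo_window(datetime(2024, 1, 1, 17, 0), timedelta(minutes=15))
print(start.strftime("%H:%M"), "-", end.strftime("%H:%M"))  # 16:45 - 17:00
```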
What is IKE?
IKE (Internet Key Exchange) is the protocol used to set up security associations, for example for IPsec VPN tunnels. There are two IKE versions: IKEv1 and IKEv2.
Your company is designing a new application that will store and retrieve photos and videos. Which service should you recommend as the underlying storage mechanism?
Amazon S3.
Which EC2 instance purchasing option supports the “Bring Your Own License” (BYOL) model for almost every BYOL scenario?
Dedicated Hosts.
What is a Dedicated Host in AWS?
Dedicated Hosts are physically isolated Amazon EC2 servers that support bring-your-own-license and compliance use cases.
An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, SUSE, and Linux Enterprise Server.
What is the AWS service or feature that takes advantage of Amazon CloudFront’s globally distributed edge locations to transfer files to S3 with higher upload speeds?
S3 Transfer Acceleration
According to the AWS Shared Responsibility Model two examples of customer responsibilities are?
Patching applications installed on Amazon EC2.
Protecting the confidentiality of data in transit in Amazon S3.
Which service provides object-level storage in AWS?
Amazon S3
Amazon S3 is an object level storage built to store and retrieve any amount of data from anywhere – web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.
You have noticed that several critical Amazon EC2 instances have been terminated. Which of the following AWS services would help you determine who took this action?
AWS CloudTrail
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
What does the “Principle of Least Privilege” refer to?
You should grant your users only the permissions they need when they need them and nothing more
The principle of least privilege is one of the most important security practices and it means granting users the required permissions to perform the tasks entrusted to them and nothing more. The security administrator determines what tasks users need to perform and then attaches the policies that allow them to perform only those tasks. You should start with a minimum set of permissions and grant additional permissions when necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them down.
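A minimal illustration of least privilege expressed as an IAM policy document, built in Python — the bucket name is a placeholder, and the policy grants only `s3:GetObject` on that bucket's objects and nothing more:

```python
import json

# Hypothetical least-privilege policy: read-only access to one bucket's objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
print(json.dumps(policy, indent=2))
```

If the user later needs to upload objects, an administrator would add `s3:PutObject` at that point, rather than granting broad permissions up front.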
An organization has decided to purchase an Amazon EC2 Reserved Instance (RI) for three years in order to reduce costs. It is possible that the application workloads could change during the reservation period.
What is the EC2 Reserved Instance (RI) type that will allow the company to exchange the purchased reserved instance for another reserved instance with higher computing power if they need to?
Convertible RI
When your needs change, you can exchange your Convertible Reserved Instances and continue to benefit from the reservation’s pricing discount. With Convertible RIs, you can exchange one or more Reserved Instances for another Reserved Instance with a different configuration, including instance family, operating system, and tenancy. There are no limits to how many times you perform an exchange, as long as the new Convertible Reserved Instance is of an equal or higher value than the original Convertible Reserved Instances that you are exchanging.
A company is planning to host an educational website on AWS. Their video courses will be streamed all around the world. Which AWS service will help achieve high transfer speeds?
Amazon CloudFront
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
The use cases of Amazon CloudFront include:
1- Accelerate static website content delivery.
CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website.
2- Live & on-demand video streaming.
The Amazon CloudFront CDN offers multiple options for streaming your media – both pre-recorded files and live events – at sustained, high throughput required for 4K delivery to global viewers.
3- Security.
CloudFront integrates seamlessly with AWS Shield for Layer 3/4 DDoS mitigation and AWS WAF for Layer 7 protection.
4- Customizable content delivery with Lambda@Edge.
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency.
A company has decided to migrate its Oracle database to AWS. Which AWS service can help achieve this without negatively impacting the functionality of the source database?
AWS Database Migration Service
AWS Database Migration Service (DMS) helps you migrate databases to AWS easily and securely.
The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL.
It also allows you to stream data to Amazon Redshift from any of the supported sources including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse.
AWS Database Migration Service can also be used for continuous data replication with high availability.
Your company has a data store application that requires access to a NoSQL database. Which AWS database offering would meet this requirement?
Amazon DynamoDB
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.
It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity, makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
Which service provides DNS in the AWS cloud?
Route 53
Amazon Route 53 is a global service that provides highly available and scalable Domain Name System (DNS) services, domain name registration, and health-checking web services. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like example.com into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other.
Route 53 also simplifies the hybrid cloud by providing recursive DNS for your Amazon VPC and on-premises networks over AWS Direct Connect or AWS VPN.
Under the shared responsibility model, a key responsibility of AWS is?
Configuring infrastructure devices
Under the shared responsibility model, AWS is responsible for the hardware and software that run AWS services. This includes patching the infrastructure software and configuring infrastructure devices. As a customer, you are responsible for implementing best practices for data encryption, patching guest operating system and applications, identity and access management, and network & firewall configurations.
What is the advantage of the AWS-recommended practice of “decoupling” applications?
Reduces inter-dependencies so that failures do not impact other components of the application
As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies—a change or a failure in one component should not cascade to other components. On the other hand if the components of an application are tightly coupled and one component fails, the entire application will also fail. Therefore when designing your application, you should always decouple its components.
Which service is used to ensure that messages between software components are not lost if one or more components fail?
Amazon SQS
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed.
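The decoupling pattern SQS enables can be illustrated with a local in-memory queue — this is a stand-in for illustration, not the SQS API; SQS provides the same producer/consumer contract as a durable, managed service:

```python
import queue

# Local stand-in for a message queue between two decoupled components.
q = queue.Queue()

def producer(messages):
    """The producer enqueues work without caring whether a consumer is running."""
    for m in messages:
        q.put(m)

def consumer():
    """The consumer drains whatever messages are waiting, independently."""
    received = []
    while not q.empty():
        received.append(q.get())
    return received

producer(["order-1", "order-2"])
print(consumer())  # ['order-1', 'order-2']
```

Because the queue sits between the components, the producer keeps working even if the consumer is down; with SQS, the messages would also survive component failures.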
As part of the Enterprise support plan, who is the primary point of contact for ongoing support needs?
Technical Account Manager (TAM)
TAM refers to the AWS technical account manager.
For Enterprise-level customers, a TAM (Technical Account Manager) provides technical expertise for the full range of AWS services and obtains a detailed understanding of your use case and technology architecture. TAMs work with AWS Solution Architects to help you launch new projects and give best practices recommendations throughout the implementation life cycle. Your TAM is the primary point of contact for ongoing support needs, and you have a direct telephone line to your TAM.
Proactive Technical Account Management is only available for AWS customers who have an Enterprise On-Ramp or Enterprise support plan. A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.
You have been tasked with auditing the security of your VPC. As part of this process, you need to start by analysing what inbound and outbound traffic is allowed on your EC2 instances. What two parts of the VPC do you need to check to accomplish this task?
Security Groups and Network ACLs. Security groups control traffic at the instance level, while network ACLs control traffic at the subnet level; together they define what inbound and outbound traffic is allowed.
What is the AWS database service that allows you to upload data structured in key-value format?
Amazon DynamoDB
Amazon DynamoDB is a NoSQL database service. NoSQL databases are used for non-structured data that are typically stored in JSON-like, key-value documents.
Your company is developing a critical web application in AWS, and the security of the application is a top priority. Name the AWS service which will provide infrastructure security optimization recommendations?
AWS Trusted Advisor
AWS Trusted Advisor is an online tool that provides you real time guidance to help you provision your resources following AWS best practices. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits (also referred to as service quotas).
AWS Trusted Advisor improves the security of your application by closing gaps, enabling various AWS security features, and examining your permissions.
What are the benefits of having infrastructure hosted in AWS?
Increasing speed and agility
All of the physical security and most of the data/network security are taken care of for you.
All of the physical security are taken care of for you. Amazon data centres are surrounded by three physical layers of security. “Nothing can go in or out without setting off an alarm”. It’s important to keep bad guys out, but equally important to keep the data in which is why Amazon monitors incoming gear, tracking every disk that enters the facility. And “if it breaks we don’t return the disk for warranty. The only way a disk leaves our data centre is when it’s confetti.”
Most (not all) data and network security are taken care of for you. When we talk about the data/network security, AWS has a “shared responsibility model” where AWS and the customer share the responsibility of securing them. For example, the customer is responsible for creating rules to secure their network traffic using the security groups and is also responsible for protecting data with encryption.
“Increasing speed and agility” is also a correct answer because in a cloud computing environment, new IT resources are only a click away, which means it requires less time to make those resources available to developers - from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.
In the AWS Shared Responsibility Model, are responsibilities static?
No. Responsibilities vary depending on the services used.
Customers should be aware that their responsibilities may vary depending on the AWS services chosen. For example, when using Amazon EC2, you are responsible for applying operating system and application security patches regularly. However, such patches are applied automatically when using Amazon RDS.
Adjusting compute capacity dynamically to reduce cost is an implementation of which AWS cloud best practice?
Implement elasticity
In the traditional data centre-based model of IT, once infrastructure is deployed, it typically runs whether it is needed or not, and all the capacity is paid for, regardless of how much it gets used. In the cloud, resources are elastic, meaning they can instantly grow (to maintain performance) or shrink (to reduce costs).
Which of the below is an example of an architectural benefit of moving to the cloud?
- Monolithic services
- Elasticity
- Proprietary hardware
- Vertical scalability
- Elasticity
How can an organisation assess applications for vulnerabilities and deviations from best practice?
- Use AWS WAF
- Use AWS Shield
- Use AWS Inspector
- Use AWS Artifact
- Use AWS Inspector
Amazon Inspector is a vulnerability management service that continuously scans your AWS workloads for vulnerabilities. Amazon Inspector automatically discovers and scans Amazon EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR) for software vulnerabilities and unintended network exposure.
Which items can be configured from within the VPC management console? (Select TWO)
- Regions
- Load Balancing
- Security Groups
- Subnets
- Auto Scaling
- Security Groups and Subnets can be configured from within the VPC console.
Which benefit of the AWS Cloud eliminates the need for users to try estimating future infrastructure usage?
- Economies of scale
- Easy global deployments
- Security of the AWS Cloud
- Elasticity of the AWS Cloud
- Elasticity of the AWS Cloud. Elasticity means that your infrastructure scales based on actual usage.
Which AWS support plan should you use if you need a response time of < 15 minutes for a business-critical system failure?
- Basic
- Developer
- Business
- Enterprise
- Enterprise.
Which of the following is a principle of good AWS Cloud architecture design?
- Implement loose coupling
- Implement vertical scaling
- Implement single points of failure
- Implement monolithic design
- Implement loose coupling. As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies - a change or a failure in one component should not cascade to other components. Where possible horizontal scaling should be used with loose coupling.
Loose Coupling does not eliminate the need for Change Management. Change Management is the process responsible for controlling the Lifecycle of all Changes made in an AWS account. The primary objective of Change Management is to enable beneficial changes to be made, with minimum disruption to IT Services. An erroneous configuration or misstep in a process can frequently lead to infrastructure or service disruptions. Creating and implementing a change management strategy will help reduce the risk of failure by monitoring all changes and rolling back failed changes.
What is vertical scaling?
Vertical scaling means adding resources such as CPU and memory to an existing application or instance.
Which service can be used to cost effectively move exabytes of data into AWS?
- S3 Cross-Region Replication (CRR)
- AWS Snowmobile
- AWS Snowball
- S3 Transfer Acceleration
- AWS Snowmobile. You can move up to 100 PB per Snowmobile.
What is the scope of an Amazon Virtual Private Cloud (VPC)?
- It spans all the Availability Zones within a region
- It spans multiple subnets
- It spans all Availability Zones in all regions
- It spans a single CIDR block
- It spans all the Availability Zones within a region
What are the fundamental charges for an Amazon EC2 instance? (choose 2)
- Your own AMIs
- Private IP address
- Basic monitoring
- Server uptime
- Data storage
- Server uptime and Data storage are the fundamental charges.
Which AWS service uses a highly secure hardware storage device to store encryption keys?
- Amazon Cloud Directory
- AWS IAM
- AWS WAF
- AWS CloudHSM
- AWS CloudHSM
AWS CloudHSM is a cloud-based hardware security module (HSM) that allows you to easily add secure key storage and high-performance crypto operations to your AWS applications.
What is the term for describing the action of automatically running scripts on Amazon EC2 instances when launched to install software?
- Workflow Automation
- Bootstrapping
- Golden Images
- Containerisation
- Bootstrapping.
Bootstrapping is the execution of automated actions on services such as EC2 and RDS, typically in the form of scripts that run when the instances are launched.
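As an illustration, user data passed to a new EC2 instance must be base64-encoded when supplied directly to the API; a minimal Python sketch (the packages installed are illustrative):

```python
import base64

# A minimal sketch of EC2 "user data": a shell script that runs once at
# first boot to install software (the package installed is illustrative).
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# The EC2 API expects user data to be base64-encoded when passed directly.
encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")

# Decoding round-trips to the original script.
assert base64.b64decode(encoded).decode("utf-8") == user_data
```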
What is AWS Security Hub?
AWS Security Hub is a cloud security posture management service that performs automated, continuous security best practice checks against your AWS resources.
AWS Security Hub aggregates, organizes, and prioritizes security alerts and findings from multiple AWS security services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, and supported third-party partners to help you analyse your security trends and identify the highest priority security issues.
What is AWS Guard Duty?
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behaviour to protect your AWS accounts, EC2 workloads, container applications, and data stored in Amazon Simple Storage Service (S3).
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts and workloads. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
For further information see:
https://aws.amazon.com/products/security/detection-and-response/
https://aws.amazon.com/guardduty/
What is an ENI in AWS?
Elastic Network Interface. An elastic network interface is a logical networking component in a VPC that represents a virtual network card.
What is Amazon ECS Anywhere?
Run containers on your on-premises infrastructure. Amazon Elastic Container Service (ECS) Anywhere is a feature of Amazon ECS that lets you run and manage container workloads on your infrastructure. This feature helps you meet compliance requirements and scale your business without sacrificing your on-premises investments. Run a familiar, in-region ECS control plane so you can reduce operational overhead and focus on innovation. Ensure a simple and consistent experience no matter where your container-based applications are running. Streamline software management on premises and on AWS with a standardized container orchestrator.
What is AWS ROSA?
Red Hat OpenShift Service on AWS. Managed OpenShift integration in the cloud. Red Hat OpenShift Service on AWS (ROSA) provides an integrated experience with OpenShift. You can use the wide range of AWS compute, database, analytics, machine learning (ML), networking, mobile, and other services to build secure and scalable applications faster. Use the production-ready OpenShift integration to adjust workloads on AWS as business needs change. Build applications faster with self-service provisioning, automatic security enforcement, and streamlined deployment. Pay as you go with flexible pricing and an on-demand hourly or annual billing model.
According to AWS, what is the benefit of Elasticity?
A. Minimize storage requirements by reducing logging and auditing activities
B. Create systems that scale to the required capacity based on changes in demand
C. Enable AWS to automatically select the most cost-effective services.
D. Accelerate the design process because recovery from failure is automated, reducing the need for testing
B. Create systems that scale to the required capacity based on changes in demand
Elasticity means that an application has the ability to scale up and scale down based on demand. An example of such a service is AWS Auto Scaling.
For more information on AWS Autoscaling service, please refer to the below URL: https://aws.amazon.com/autoscaling/
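A toy Python sketch of the elasticity idea, loosely modelled on target-tracking scaling (the formula here is illustrative, not Auto Scaling's actual implementation):

```python
import math

def desired_capacity(current, metric, target, min_size=1, max_size=10):
    """Toy target-tracking scaling: size the fleet so the per-instance
    metric (e.g. average CPU %) lands near the target value."""
    desired = math.ceil(current * metric / target)
    return max(min_size, min(max_size, desired))

# Load doubles: capacity scales out.
assert desired_capacity(current=4, metric=80, target=40) == 8
# Load halves: capacity scales in, reducing cost.
assert desired_capacity(current=4, metric=20, target=40) == 2
```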
Which tool can you use to forecast your AWS spending?
A. AWS Organizations
B. Amazon Dev Pay
C. AWS Trusted Advisor
D. AWS Cost Explorer
D. AWS Cost Explorer
The AWS Documentation mentions the following.
Cost Explorer is a free tool that you can use to view your costs. You can view data up to the last 12 months. You can forecast how much you are likely to spend for the next 12 months and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You also can specify time ranges for the data and view time data by day or by month.
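As an illustration of forecasting spend from history, here is a deliberately naive linear extrapolation in Python (Cost Explorer's own forecasting is more sophisticated, and the dollar figures are hypothetical):

```python
def naive_forecast(monthly_costs, months_ahead):
    """Extrapolate future spend from the average month-over-month change.
    Purely illustrative of the idea of forecasting from past usage."""
    deltas = [b - a for a, b in zip(monthly_costs, monthly_costs[1:])]
    trend = sum(deltas) / len(deltas)
    return [monthly_costs[-1] + trend * i for i in range(1, months_ahead + 1)]

history = [100.0, 110.0, 120.0]   # hypothetical monthly spend in USD
assert naive_forecast(history, 2) == [130.0, 140.0]
```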
A business analyst would like to move away from creating complex database queries and static spreadsheets when generating regular reports for high-level management. They would like to publish insightful, graphically appealing reports with interactive dashboards. Which service can they use to accomplish this?
A. Amazon QuickSight
B. Business intelligence on Amazon Redshift
C. Amazon CloudWatch dashboards
D. Amazon Athena integrated with Amazon Glue
A. Amazon QuickSight
Amazon QuickSight is the most appropriate service in the scenario. It is a fully-managed service that allows for insightful business intelligence reporting with creative data delivery methods, including graphical and interactive dashboards. QuickSight includes machine learning that allows users to discover inconspicuous trends and patterns on their datasets.
What is the AWS feature that enables fast, easy and secure transfers of files over long distances between your client and your Amazon S3 bucket?
A. File Transfer
B. HTTP Transfer
C. Amazon S3 Transfer Acceleration
D. S3 Acceleration
C. Amazon S3 Transfer Acceleration
The AWS Documentation mentions the following.
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
For more information on S3 transfer acceleration, please visit the Link: http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
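A small Python sketch of the bucket-specific accelerate endpoint; the `<bucket>.s3-accelerate.amazonaws.com` hostname is the documented Transfer Acceleration endpoint (with boto3 the equivalent is enabling `use_accelerate_endpoint` in the client config):

```python
def accelerate_endpoint(bucket):
    """Transfer Acceleration routes requests through a distinct endpoint
    per bucket rather than the regular regional S3 endpoint."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

assert accelerate_endpoint("my-bucket") == "https://my-bucket.s3-accelerate.amazonaws.com"
```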
What best describes the “Principle of Least Privilege”? Choose the correct answer from the options given below.
A. All users should have the same baseline permissions granted to them to use basic AWS services.
B. Users should be granted permission to access only resources they need to do their assigned job.
C. Users should submit all access requests in written form so that there is a paper trail of who needs access to different AWS resources.
D. Users should always have a little more permission than they need.
B. Users should be granted permission to access only resources they need to do their assigned job.
The principle means giving a user account only those privileges which are essential to perform its intended function. For example, a user account created for the sole purpose of taking backups does not need the ability to install software; it needs rights only to run backup and backup-related applications.
For more information on the principle of least privilege, please refer to the following link: https://en.wikipedia.org/wiki/Principle_of_least_privilege
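A hypothetical IAM policy illustrating the principle: a backup-only account is granted just the snapshot actions it needs (shown as a Python dict; a real least-privilege policy would also scope `Resource` rather than use `*`):

```python
import json

# Hypothetical policy for a backup-only user: only snapshot-related
# actions are allowed, nothing else.
backup_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:CreateSnapshot",
            "ec2:DescribeSnapshots",
        ],
        "Resource": "*",   # shown for brevity; should be scoped in practice
    }],
}

policy_json = json.dumps(backup_only_policy)
granted = backup_only_policy["Statement"][0]["Action"]
assert "ec2:RunInstances" not in granted  # no unrelated privileges granted
```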
A web administrator maintains several public and private web-based resources for an organisation. Which service can they use to keep track of the expiry dates of SSL/TLS certificates as well as updating and renewal?
A. AWS Data Lifecycle Manager
B. AWS License Manager
C. AWS Firewall Manager
D. AWS Certificate Manager
D. AWS Certificate Manager
The AWS Certificate Manager allows the web administrator to maintain one or several SSL/TLS certificates, both private and public, including their update and renewal, so that the administrator does not need to worry about the imminent expiry of certificates.
AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.
For example, AWS Certificate Manager can be used to import third-party SSL/TLS certificates that can be used to deploy on Amazon Elastic Load Balancer.
https://aws.amazon.com/certificate-manager/
Which of the following is the responsibility of the customer to ensure the availability and backup of the EBS volumes?
A. Delete the data and create a new EBS volume.
B. Create EBS snapshots.
C. Attach new volumes to EC2 Instances.
D. Create copies of EBS Volumes.
B. Create EBS snapshots.
Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved.
When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapshot. The replicated volume loads data in the background so that you can begin using it immediately.
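The incremental behaviour can be sketched in Python as a toy model where a snapshot stores only the blocks that changed since the previous one (block contents here are illustrative):

```python
def incremental_snapshot(volume_blocks, last_snapshot):
    """Toy model of incremental EBS snapshots: only blocks whose content
    differs from the previous snapshot need to be stored."""
    return {addr: data for addr, data in volume_blocks.items()
            if last_snapshot.get(addr) != data}

snap1 = {0: "aaa", 1: "bbb", 2: "ccc"}   # first (full) snapshot
volume = {0: "aaa", 1: "XXX", 2: "ccc"}  # block 1 modified since snap1
snap2_delta = incremental_snapshot(volume, snap1)
assert snap2_delta == {1: "XXX"}         # only the changed block is saved
```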
Which of the following services can be used as an application firewall in AWS?
A. AWS Snowball
B. AWS WAF
C. AWS Firewall
D. AWS Protection
B. AWS WAF
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
The AWS Documentation mentions the following:
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront or an Application Load Balancer. AWS WAF also lets you control access to your content.
AWS Snowball, a part of the AWS Snow Family, is an edge computing, data migration, and edge storage device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs.
For more information on AWS WAF, please refer to the below URL:https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html
https://aws.amazon.com/snowball/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc
Your design team is planning to design an application that will be hosted on the AWS Cloud. One of their main non-functional requirements is: Reduce inter-dependencies so failures do not impact other components. Which of the following concepts does this requirement relate to?
A. Integration
B. Decoupling
C. Aggregation
D. Segregation
B. Decoupling
The entire concept of decoupling components ensures that the different components of applications can be managed and maintained separately. If all components are tightly coupled, the entire application would go down when one component goes down. Hence it is always a better practice to decouple application components.
For more information on a decoupled architecture, please refer to the below URL: http://whatis.techtarget.com/definition/decoupled-architecture
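A minimal Python sketch of the idea: components that communicate only through a queue (analogous to placing a message queue such as Amazon SQS between them) have no direct interdependency:

```python
from queue import Queue

# Producer and consumer interact only through the queue, so neither
# needs the other to be available at the same moment.
buffer = Queue()

def producer(orders):
    for order in orders:
        buffer.put(order)        # fire and forget

def consumer():
    processed = []
    while not buffer.empty():
        processed.append(buffer.get())
    return processed

producer(["order-1", "order-2"])
assert consumer() == ["order-1", "order-2"]
```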
A manufacturing firm has recently migrated their application servers to the Amazon EC2 instance. The IT Manager is looking for the details of upcoming scheduled maintenance activities which AWS would be performing on AWS resources, that may impact the services on these EC2 instances. Which of the following services can alert you about the changes that can affect resources in your account?
A. AWS Organizations
B. AWS Personal Health Dashboard
C. AWS Trusted Advisor
D. AWS Service Health Dashboard
B. AWS Personal Health Dashboard
AWS Personal Health Dashboard provides alerts about AWS service availability & performance issues which may impact resources deployed in your account. Customers get e-mails & mobile notifications for scheduled maintenance activities which might impact services on these AWS resources.
For more information on the AWS Personal Health Dashboard, please refer to the below URL: https://aws.amazon.com/premiumsupport/technology/personal-health-dashboard/
Which of the following AWS services can be used to retrieve configuration changes made to AWS resources causing operational issues?
A. Amazon Inspector
B. AWS CloudFormation
C. AWS Trusted Advisor
D. AWS Config
D. AWS Config
AWS Config can be used to audit and evaluate configurations of AWS resources. If there are any operational issues, AWS config can be used to retrieve configurational changes made to AWS resources that may have caused these issues.
AWS Config and AWS CloudTrail are change management tools that help AWS customers audit and monitor all resource and configuration changes in their AWS environment. AWS Config provides information about the changes made to a resource, and AWS CloudTrail provides information about who made those changes. These capabilities enable customers to discover any misconfigurations, fix them, and protect their workloads from failures.
For more information on AWS Config, refer to the following URL:https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
An organization runs several EC2 instances inside a VPC using three subnets, one for Development, one for Test, and one for Production. The Security team has some concerns about the VPC configuration. It requires restricting communication across the EC2 instances using Security Groups.
Which of the following options is true for Security Groups related to the scenario?
A. You can change a Security Group associated with an instance if the instance is in the running state.
B. You can change a Security Group associated with an instance if the instance is in the hibernate state.
C. You can change a Security Group only if there are no instances associated to it.
D. The only Security Group you can change is the Default Security Group.
A. You can change a Security Group associated with an instance if the instance is in the running state.
AWS documentation mentions it in the section called “Changing an Instance’s Security Group” using the following sentence: “After you launch an instance into a VPC, you can change the security groups that are associated with the instance. You can change the security groups for an instance when the instance is in the running or stopped state.”
Reference: https://docs.aws.amazon.com/en_pv/vpc/latest/userguide/VPC_SecurityGroups.html
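A small Python guard encoding the quoted rule (with boto3 the actual change would be made via the EC2 `modify_instance_attribute` call; this sketch covers only the state check):

```python
# Per the quoted documentation, security groups associated with an
# instance can be changed while the instance is running or stopped.
CHANGEABLE_STATES = {"running", "stopped"}

def can_change_security_group(instance_state):
    return instance_state in CHANGEABLE_STATES

assert can_change_security_group("running")
assert can_change_security_group("stopped")
assert not can_change_security_group("terminated")
```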
Which of the following features of Amazon RDS allows for better availability of databases? Choose the answer from the options given below.
A. VPC Peering
B. Multi-AZ
C. Read Replicas
D. Data encryption
B. Multi-AZ
The AWS Documentation mentions the following.
If you are looking to use replication to increase database availability while protecting your latest database updates against unplanned outages, consider running your DB instance as a Multi-AZ deployment.
Note that deploying across multiple Availability Zones enhances application availability but does not reduce operational expenses.
For more information on AWS RDS, please visit the FAQ Link:https://aws.amazon.com/rds/faqs/
Your company wants to move an existing Oracle database to the AWS Cloud. Which of the following services can help facilitate this move?
A. AWS Database Migration Service
B. AWS VM Migration Service
C. AWS Inspector
D. AWS Trusted Advisor
A. AWS Database Migration Service
The AWS Documentation mentions the following.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
For more information on AWS Database migration, please refer to the below URL:https://aws.amazon.com/dms/
Which of the following services allows you to analyze EC2 Instances against pre-defined security templates to check for vulnerabilities?
A. AWS Trusted Advisor
B. AWS Inspector
C. AWS WAF
D. AWS Shield
B. AWS Inspector
The AWS Documentation mentions the following.
Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.
For more information on AWS Inspector, please refer to the below URL:https://docs.aws.amazon.com/inspector/latest/userguide/inspector_introduction.html
A website for an international sport governing body would like to serve its content to viewers from different parts of the world in their vernacular language. Which of the following services provide location-based web personalization using geolocation headers?
A. Amazon CloudFront
B. Amazon EC2 Instance
C. Amazon Lightsail
D. Amazon Route 53
A. Amazon CloudFront
Amazon CloudFront supports country-level location-based web content personalization with a feature called Geolocation Headers.
You can configure CloudFront to add geolocation headers that provide more granularity in your caching and origin request policies. These headers give you more granular control of cache behaviour and give your origin access to the viewer’s country name, region, city, postal code, latitude, and longitude, all based on the viewer’s IP address.
https://aws.amazon.com/about-aws/whats-new/2020/07/cloudfront-geolocation-headers/
https://aws.amazon.com/blogs/networking-and-content-delivery/leverage-amazon-cloudfront-geolocation-headers-for-state-level-geo-targeting/
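A sketch of an origin using the documented `CloudFront-Viewer-Country` header to pick a content language (the country-to-language table is illustrative):

```python
# CloudFront can forward geolocation headers such as
# CloudFront-Viewer-Country to the origin; the origin can use them to
# personalise content. The mapping below is illustrative only.
LANGUAGE_BY_COUNTRY = {"FR": "fr", "DE": "de", "JP": "ja"}

def pick_language(headers, default="en"):
    country = headers.get("CloudFront-Viewer-Country")
    return LANGUAGE_BY_COUNTRY.get(country, default)

assert pick_language({"CloudFront-Viewer-Country": "FR"}) == "fr"
assert pick_language({}) == "en"   # fall back when no header is present
```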
Which of the following can be used to protect against DDoS attacks? Choose 2 answers from the options given below.
A. AWS EC2
B. AWS RDS
C. AWS Shield
D. AWS Shield Advanced
C. AWS Shield
D. AWS Shield Advanced
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service.
The AWS Documentation mentions the following:
AWS Shield – All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications
AWS Shield Advanced – For higher levels of protection against attacks targeting your web applications running on Amazon EC2, Elastic Load Balancing (ELB), CloudFront, and Route 53 resources, you can subscribe to AWS Shield Advanced. AWS Shield Advanced provides expanded DDoS attack protection for these resources.
For more information on AWS Shield, please refer to the below URL:https://docs.aws.amazon.com/waf/latest/developerguide/ddos-overview.html
Which of the following are the recommended resources to be deployed in the Amazon VPC private subnet?
A. NAT Gateways
B. Bastion Hosts
C. Database Servers
D. Internet Gateways
C. Database Servers
As Database servers contain confidential information, from a security perspective, they should be deployed in a Private Subnet.
Amazon Virtual Private Cloud (Amazon VPC) enables the user to launch AWS resources into a virtual network that a user has defined.
For more information on AWS VPC, please refer to the below URL:https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Networking.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html
https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/
A company wants to utilize AWS storage. For them, low storage cost is paramount. The data is rarely retrieved and a data retrieval time of 13-14 hours is acceptable for them. What is the best storage option to use?
A. Amazon S3 Glacier
B. S3 Glacier Deep Archive
C. Amazon EBS volumes
D. AWS CloudFront
B. S3 Glacier Deep Archive (Storage)
S3 Glacier Deep Archive offers low-cost storage and is appropriate when retrieval time is not critical. When faster retrieval is needed, S3 Glacier is appropriate.
S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data offsite.
Amazon S3 Glacier Deep Archive does not provide immediate retrieval. With S3 Glacier Deep Archive, the minimum retrieval period is 12 hours. S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice a year.
It expands our data archiving offerings, enabling you to select the optimal storage class based on storage and retrieval costs, and retrieval times.
With S3 Glacier, customers can store their data cost-effectively for months, years, or even decades. S3 Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and recovery, or time-consuming hardware migrations.
Amazon S3 Glacier for archiving data that might infrequently need to be restored within a few hours
S3 Glacier Deep Archive for archiving long-term backup cycle data that might infrequently need to be restored within 12 hours
Storage class           | Expedited     | Standard        | Bulk
Amazon S3 Glacier       | 1–5 minutes   | 3–5 hours       | 5–12 hours
S3 Glacier Deep Archive | Not available | Within 12 hours | Within 48 hours
Reference:
https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html
https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html
https://aws.amazon.com/s3/storage-classes/
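The retrieval tiers can be encoded as a simple Python lookup (values mirror the retrieval times quoted above):

```python
# Retrieval tiers and times for the two Glacier classes.
RETRIEVAL_TIMES = {
    ("Amazon S3 Glacier", "Expedited"): "1-5 minutes",
    ("Amazon S3 Glacier", "Standard"): "3-5 hours",
    ("Amazon S3 Glacier", "Bulk"): "5-12 hours",
    ("S3 Glacier Deep Archive", "Standard"): "within 12 hours",
    ("S3 Glacier Deep Archive", "Bulk"): "within 48 hours",
}

# Deep Archive has no Expedited tier, so its fastest retrieval is the
# 12-hour Standard tier -- which fits the company's 13-14 hour window.
assert ("S3 Glacier Deep Archive", "Expedited") not in RETRIEVAL_TIMES
assert RETRIEVAL_TIMES[("S3 Glacier Deep Archive", "Standard")] == "within 12 hours"
```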
Which AWS service provides a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability?
A. AWS RDS
B. DynamoDB
C. Oracle RDS
D. Elastic Map Reduce
B. DynamoDB
DynamoDB is a fully managed NoSQL offering provided by AWS. It is now available in most regions for users to consume.
For more information on AWS DynamoDB, please refer to the below URL:http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
For which of the following AWS resources, the Customer is responsible for the infrastructure-related security configurations?
A. Amazon RDS
B. Amazon DynamoDB
C. Amazon EC2
D. AWS Fargate
C. Amazon EC2
Amazon EC2 is an Infrastructure as a Service (IaaS) for which customers are responsible for the security and the management of guest operating systems.
For more information on the Shared responsibility model, refer to the following URL:https://aws.amazon.com/compliance/shared-responsibility-model/
In the shared responsibility model for infrastructure services, such as Amazon Elastic Compute Cloud, which two of the below are the customer’s responsibility?
A. Network infrastructure
B. Amazon Machine Images (AMIs)
C. Virtualization infrastructure
D. Physical security of hardware
E. Policies and configuration
B. Amazon Machine Images (AMIs) and E. Policies and configuration
In the shared responsibility model, AWS is primarily responsible for “Security of the Cloud.” The customer is responsible for “Security in the Cloud.” In this scenario, the mentioned AWS product is IaaS (Amazon EC2), and AWS manages the security of the following assets:
– Facilities
– Physical security of hardware
– Network infrastructure
– Virtualization infrastructure
Customers are responsible for the security of the following assets:
– Amazon Machine Images (AMIs)
– Operating systems
– Applications
– Data in transit
– Data at rest
– Data stores
– Credentials
– Policies and configuration
https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html
https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&wa-lens-whitepapers.sort-order=desc
AWS offers two savings plans to enable more savings and flexibility for its customers, namely, Compute Savings Plans and EC2 Instance Savings Plans. Which of the below statements is FALSE regarding Savings Plans?
A. Capacity Reservations are not provided with Saving Plans.
B. Savings Plans are available for all the regions.
C. Savings plans will apply on ‘On-Demand Capacity Reservations’ that customers can allocate for their needs.
D. The prices for Savings Plans do not change based on the amount of hourly commitment.
B. Savings Plans are available for all the regions. This statement is FALSE because Savings Plans are not available for the China Regions.
Spot, Savings Plans, and Reserved instances are all cheaper than On-Demand instances.
Using Savings Plans requires a contract of at least one year. Savings Plans is a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of compute usage (measured in $/hour) for a one or three-year term.
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html#sp-ris
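A toy Python comparison of a Savings Plans hourly commitment against on-demand pricing (all rates and hours here are hypothetical, not AWS prices):

```python
def plan_cost(commitment_per_hour, on_demand_rate, usage_hours_equivalent):
    """Toy comparison: Savings Plans commit to a consistent $/hour of
    compute usage, versus paying a per-hour on-demand rate."""
    hours_in_month = 730
    committed = commitment_per_hour * hours_in_month
    on_demand = on_demand_rate * usage_hours_equivalent
    return committed, on_demand

# Hypothetical numbers: a $1/hour commitment versus 10,000 instance-hours
# billed on-demand at $0.10/hour.
committed, on_demand = plan_cost(commitment_per_hour=1.0,
                                 on_demand_rate=0.10,
                                 usage_hours_equivalent=10_000)
assert committed == 730.0
assert on_demand == 1000.0   # the commitment undercuts on-demand here
```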
Which of the below-listed services is a region-based AWS service?
A. AWS IAM
B. Amazon EFS
C. Amazon Route 53
D. Amazon CloudFront
B. Amazon EFS. EFS is a regional service.
https://aws.amazon.com/efs/
https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/
Which of the following LightSail wizards allows customers to “create a copy of the LightSail instance in EC2”?
A. LightSail Backup
B. LightSail Copy
C. Upgrade to EC2
D. LightSail-EC2 snapshot
C. Upgrade to EC2
“Upgrade to EC2” is the feature that allows customers to “create a copy of the LightSail instance in EC2”.
To get started, you need to export your Lightsail instance manual snapshot. You’ll then use the Upgrade to EC2 wizard to create an instance in EC2.
Customers who are comfortable with EC2 can then use the EC2 creation wizard or API to create a new EC2 instance as they would from an existing EC2 AMI.
https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-exporting-snapshots-to-amazon-ec2
https://aws.amazon.com/lightsail/features/upgrade-to-ec2/
Which of the following features of Amazon Connect helps better customer engagement on AWS Cloud?
A. Push Notification
B. High Quality Audio
C. Mailbox Simulator
D. Reputation Dashboard
B. High Quality Audio
Amazon Connect is an omnichannel cloud contact centre which can be set up easily & at low cost. It has the following features, which help to provide customers a superior service:
- Telephone as a service
- High quality Audio
- Omnichannel routing
- Web & Mobile Chat
- Task management
- Contact Centre automation
- Rules Engine.
For more information on Amazon Connect, refer to the following URL: https://aws.amazon.com/connect/features/
A large IT company is looking to enable its large user base to remotely access Linux desktops from any location. Which service can be used for this purpose?
A. Amazon Cognito
B. Amazon AppStream 2.0
C. Amazon WorkSpaces
D. Amazon WorkLink
C. Amazon WorkSpaces
Amazon WorkSpaces provides a secure managed service for virtual desktops for remote users. It supports both Windows & Linux based virtual desktops for a large number of users.
For more information on Amazon WorkSpaces, refer to the following URL: https://aws.amazon.com/workspaces/features/
Users in the Developer Team need to deploy a multi-tier web application. Which service can be used to create a customized portfolio that will help users for quick deployment?
A. AWS Config
B. AWS Code Deploy
C. AWS Service Catalog
D. AWS Cloud Formation
C. AWS Service Catalog
AWS Service Catalog is used to create and manage catalogs of IT services that are approved for use on AWS. This helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
AWS Service Catalog can be used to create & deploy a portfolio of products within AWS infrastructure. This helps to create consistent resources with quick deployment. These catalogues can be used to deploy a single resource or a multi-tier web application consisting of web, application, & database layer resources.
For more information on AWS Service Catalog, refer to the following URL: https://aws.amazon.com/servicecatalog/features/
A large Oil & gas company is planning to deploy a high-volume application on multiple Amazon EC2 instances. Which of the following can help to reduce operational expenses?
A. Deploy Amazon EC2 instance with Auto-scaling
B. Deploy Amazon EC2 instance in multiple AZ’s
C. Deploy Amazon EC2 instance with Amazon instance store-backed AMI
D. Deploy Amazon EC2 instance with Cluster placement group
A. Deploy Amazon EC2 instance with Auto-scaling
Using Amazon EC2 Auto Scaling helps to match the application workload with the optimum number of Amazon EC2 instances. During periods of low load, instances are terminated, which reduces operational cost.
For more information on reducing cost using AWS cloud , refer to the following URL: https://aws.amazon.com/economics/
Which of the following activities are within the scope of AWS Support?
A. Troubleshooting API issues
B. Code Development
C. Debugging custom software
D. Third-party application configuration on AWS resources
E. Database query tuning
A. Troubleshooting API issues and D. Third-party application configuration on AWS resources
As part of AWS Support, the following activities are performed:
Queries regarding all AWS Services & features.
Best Practices to integrate, deploy & manage applications in the AWS cloud.
Troubleshooting API & SDK issues.
Troubleshooting operational issues.
Issues related to any AWS Tools.
Problems detected by EC2 health checks
Third-Party application configuration on AWS resources & products.
AWS Support does not include:
Code development
Debugging custom software
Performing system administration tasks
Database query tuning
Cross-Account Support
Code development, debugging custom software, and database query tuning are not in the scope of AWS Support. These need to be taken care of by the customer.
For more information on AWS Support, refer to the following URL: https://aws.amazon.com/premiumsupport/
What is the AWS Data Lifecycle Manager?
AWS Data Lifecycle Manager creates lifecycle policies for specified resources to automate operations such as the creation, retention, and deletion of EBS snapshots. https://docs.aws.amazon.com/dlm/?id=docs_gateway
What is AWS License Manager?
AWS License Manager makes it easier to manage software licenses from third-party vendors. It also decreases the risk of license expirations and the associated penalties. https://docs.aws.amazon.com/license-manager/?id=docs_gateway
What is AWS Firewall Manager?
AWS Firewall Manager simplifies the administration of AWS WAF (Web Application Firewall) by providing a centralised point for setting firewall rules across different web resources. https://docs.aws.amazon.com/firewall-manager/?id=docs_gateway
What is the AWS Service Health Dashboard (Management)?
The AWS Service Health Dashboard displays the general status of all AWS services; it does not display scheduled maintenance activities.
What is a Bastion Host in AWS?
A bastion host is a server whose purpose is to provide access (typically SSH access) to a private network from an external network, such as the Internet. It is deployed in a public subnet. Because of its exposure to potential attack, a bastion host must minimize the chances of penetration.
What is Amazon Pinpoint?
Amazon Pinpoint is an AWS service that you can use to engage with your customers across multiple messaging channels. You can use Amazon Pinpoint to send push notifications, in-app notifications, e-mails, text messages, voice messages, and messages over custom channels.
What is Amazon Connect?
Amazon Connect is a cloud-based contact centre service that lets you build reliable and inexpensive automated calling services. Its pay-as-you-go pricing model allows you to build according to your needs, and it offers built-in intelligence while remaining safe, secure, flexible, and low cost. Push notification is not a feature of Amazon Connect.
What is Amazon SES?
Amazon Simple E-mail Service (SES) provides reliable, scalable e-mail to communicate with customers at low cost. Features include:
Mailbox Simulator
Reputation Dashboard
Deliver high-volume e-mail campaigns with the service that sends hundreds of billions of e-mails per year.
Reach customers’ inboxes as a trusted sender with secure e-mail authentication.
Improve your bottom line with transparent pricing designed for bulk e-mail.
Stay compliant from day one with HIPAA-eligible and FedRAMP-, GDPR-, and ISO-certified options.
Amazon Simple E-mail Service (SES) lets you reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP) system.
Why Amazon SES?
Amazon SES is a cloud e-mail service provider that can integrate into any application for bulk e-mail sending. Whether you send transactional or marketing e-mails, you pay only for what you use. Amazon SES also supports a variety of deployments including dedicated, shared, or owned IP addresses. Reports on sender statistics and a deliverability dashboard help businesses make every e-mail count.
What is Amazon WorkLink?
Amazon WorkLink can be used by internal employees to securely access internal websites & applications using mobile phones.
What is AWS CodeDeploy?
AWS CodeDeploy is a managed service for automating software deployment on AWS resources and on-premises systems. It is not suitable for creating portfolios of resources for quick deployment. AWS CodeDeploy automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises, and is not used for managing encryption keys.
What is a Cluster Placement Group in AWS?
A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered virtual private clouds (VPCs) in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
A cluster placement group will help to have low latency between instances but will not reduce operational expenses.
What is Amazon instance store-backed AMI?
The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.
What is an AMI in AWS?
An Amazon Machine Image (AMI) is a supported and maintained image provided by AWS that provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you require multiple instances with the same configuration.
Why is AWS more economical than traditional data centres for applications with varying compute
workloads?
A) Amazon EC2 costs are billed on a monthly basis.
B) Users retain full administrative access to their Amazon EC2 instances.
C) Amazon EC2 instances can be launched on demand when needed.
D) Users can permanently run enough instances to handle peak workloads.
C – The ability to launch instances on demand when needed allows users to launch and terminate instances in
response to a varying workload. This is a more economical practice than purchasing enough on-premises servers
to handle the peak load.
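The economics above can be sketched with a back-of-the-envelope calculation. All prices and workload figures below are hypothetical, purely for illustration:

```python
# Compare paying for peak capacity 24/7 (on-premises style) with launching
# on-demand instances only when needed. Prices and hours are hypothetical.
HOURLY_RATE = 0.10          # $ per instance-hour (hypothetical)
HOURS_PER_MONTH = 730

def always_on_cost(peak_instances):
    """Run enough instances for the peak load all month long."""
    return peak_instances * HOURS_PER_MONTH * HOURLY_RATE

def on_demand_cost(instance_hours_used):
    """Pay only for the instance-hours actually consumed."""
    return instance_hours_used * HOURLY_RATE

# Example: peak needs 10 instances, but average utilization is ~3 instances.
peak = always_on_cost(10)                     # provision for the peak
actual = on_demand_cost(3 * HOURS_PER_MONTH)  # launch/terminate with demand
print(f"always-on: ${peak:.2f}, on-demand: ${actual:.2f}")
```

With a varying workload, the gap between the two figures is exactly the cost of idle peak capacity.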
Which AWS service would simplify the migration of a database to AWS?
A) AWS Storage Gateway
B) AWS Database Migration Service (AWS DMS)
C) Amazon EC2
D) Amazon AppStream 2.0
B – AWS DMS helps users migrate databases to AWS quickly and securely. The source database remains
fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS
can migrate data to and from most widely used commercial and open-source databases.
Which AWS offering enables users to find, buy, and immediately start using software solutions in their
AWS environment?
A) AWS Config
B) AWS OpsWorks
C) AWS SDK
D) AWS Marketplace
D – AWS Marketplace is a digital catalog with thousands of software listings from independent software
vendors that makes it easy to find, test, buy, and deploy software that runs on AWS.
Which AWS networking service enables a company to create a virtual network within AWS?
A) AWS Config
B) Amazon Route 53
C) AWS Direct Connect
D) Amazon Virtual Private Cloud (Amazon VPC)
D – Amazon VPC lets users provision a logically isolated section of the AWS Cloud where users can launch
AWS resources in a virtual network that they define.
Which of the following is an AWS responsibility under the AWS shared responsibility model?
A) Configuring third-party applications
B) Maintaining physical hardware
C) Securing application access and data
D) Managing guest operating systems
B – Maintaining physical hardware is an AWS responsibility under the AWS shared responsibility model.
Which component of the AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery?
A) AWS Regions
B) Edge locations
C) Availability Zones
D) Virtual Private Cloud (VPC)
B – To deliver content to users with lower latency, Amazon CloudFront uses a global network of points of
presence (edge locations and regional edge caches) worldwide.
How would a system administrator add an additional layer of login security to a user’s AWS
Management Console?
A) Use Amazon Cloud Directory
B) Audit AWS Identity and Access Management (IAM) roles
C) Enable multi-factor authentication
D) Enable AWS CloudTrail
C – Multi-factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of a
username and password. With MFA enabled, when a user signs in to an AWS Management Console, they will be
prompted for their username and password (the first factor—what they know), as well as for an authentication
code from their MFA device (the second factor—what they have). Taken together, these multiple factors provide
increased security for AWS account settings and resources.
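The "authentication code from their MFA device" is typically a time-based one-time password (TOTP, RFC 6238), which virtual MFA apps generate from a shared secret and the current time. A minimal sketch of the derivation, using only the Python standard library (not an AWS API):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, timestamp=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (30-second window)."""
    now = time.time() if timestamp is None else timestamp
    return hotp(secret, int(now // step), digits)

# RFC 6238 test secret at T=59s falls in the second 30s window (counter 1).
print(totp(b"12345678901234567890", timestamp=59))  # -> 287082
```

Because the code changes every 30 seconds, a stolen password alone is not enough to sign in.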
Which service can identify the user that made the API call when an Amazon EC2 instance is
terminated?
A) AWS Trusted Advisor
B) AWS CloudTrail
C) AWS X-Ray
D) AWS Identity and Access Management (AWS IAM)
B – AWS CloudTrail helps users enable governance, compliance, and operational and risk auditing of their
AWS accounts. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events
include actions taken in the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs
and APIs.
AWS Config and AWS CloudTrail are change management tools that help AWS customers audit and monitor all resource and configuration changes in their AWS environment. AWS Config provides information about the changes made to a resource, and AWS CloudTrail provides information about who made those changes. These capabilities enable customers to discover any misconfigurations, fix them, and protect their workloads from failures.
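CloudTrail delivers events as JSON records, so answering "who terminated this instance?" is a matter of filtering on a few fields. The sample records below are fabricated, but the field names (`eventSource`, `eventName`, `userIdentity`) are standard CloudTrail fields:

```python
# Find who called TerminateInstances in a batch of CloudTrail records.
# Sample records are fabricated for illustration only.
records = [
    {
        "eventSource": "ec2.amazonaws.com",
        "eventName": "TerminateInstances",
        "eventTime": "2023-05-01T12:00:00Z",
        "userIdentity": {"type": "IAMUser", "userName": "alice"},
    },
    {
        "eventSource": "s3.amazonaws.com",
        "eventName": "PutObject",
        "eventTime": "2023-05-01T12:01:00Z",
        "userIdentity": {"type": "IAMUser", "userName": "bob"},
    },
]

def who_terminated(records):
    """Return the userName behind every TerminateInstances call."""
    return [
        r["userIdentity"].get("userName", "unknown")
        for r in records
        if r["eventSource"] == "ec2.amazonaws.com"
        and r["eventName"] == "TerminateInstances"
    ]

print(who_terminated(records))  # -> ['alice']
```

In practice you would run this kind of query against the CloudTrail event history in the console or against log files delivered to S3.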
Which service would be used to send alerts based on Amazon CloudWatch alarms?
A) Amazon Simple Notification Service (Amazon SNS)
B) AWS CloudTrail
C) AWS Trusted Advisor
D) Amazon Route 53
A – Amazon SNS and Amazon CloudWatch are integrated so users can collect, view, and analyze metrics for
every active SNS. Once users have configured CloudWatch for Amazon SNS, they can gain better insight into the
performance of their Amazon SNS topics, push notifications, and SMS deliveries.
Where can a user find information about prohibited actions on the AWS infrastructure?
A) AWS Trusted Advisor
B) AWS Identity and Access Management (IAM)
C) AWS Billing Console
D) AWS Acceptable Use Policy
D – The AWS Acceptable Use Policy provides information regarding prohibited actions on the AWS
infrastructure.
You have a real-time IoT application that requires sub-millisecond latency. Which of the following services should you use?
- AWS Cloud9
- Amazon Athena
- Amazon ElastiCache for Redis
- Amazon Redshift
- Amazon ElastiCache for Redis
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Built on open-source Redis and compatible with the Redis APIs, ElastiCache for Redis works with your Redis clients and uses the open Redis data format to store your data. Your self-managed Redis applications can work seamlessly with ElastiCache for Redis without any code changes. ElastiCache for Redis combines the speed, simplicity, and versatility of open-source Redis with manageability, security, and scalability from Amazon to power the most demanding real-time applications in Gaming, Ad-Tech, E-Commerce, Healthcare, Financial Services, and IoT.
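ElastiCache is commonly used in a cache-aside pattern: check the cache first, fall back to the slower data store on a miss, then populate the cache with a time-to-live. A plain-Python sketch of the pattern (no Redis client involved; `fetch_from_db` is a hypothetical stand-in for the slow data source):

```python
import time

class TTLCache:
    """Tiny in-memory cache with per-key expiry. Illustrates the caching
    idea only; it is not the ElastiCache/Redis API."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]      # lazily evict expired entries
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

cache = TTLCache()
db_reads = 0

def fetch_from_db(key):               # hypothetical slow data source
    global db_reads
    db_reads += 1
    return f"value-for-{key}"

def get_with_cache(key, ttl=60):
    """Cache-aside: try the cache, fall back to the DB, then populate."""
    value = cache.get(key)
    if value is None:
        value = fetch_from_db(key)
        cache.set(key, value, ttl)
    return value

get_with_cache("user:1")   # miss -> hits the DB
get_with_cache("user:1")   # hit  -> served from cache
print(db_reads)            # -> 1
```

With a real ElastiCache cluster, the in-memory store replaces `TTLCache` and serves the hits at sub-millisecond latency.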
What is AWS Cloud9?
AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. Cloud9 comes pre-packaged with essential tools for popular programming languages, including JavaScript, Python, PHP, and more, so you don’t need to install files or configure your development machine to start new projects.
AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal.
A company has infrastructure hosted in an on-premises data centre. They currently have an operations team that takes care of identity management. If they decide to migrate to the AWS cloud, which of the following services would help them perform the same role in AWS?
- AWS Federation
- AWS Outposts
- AWS IAM
- Amazon Redshift
- AWS IAM
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to access and use AWS resources.
What is AWS Federation?
Federation is an AWS feature that enables users to access and use AWS resources using their existing corporate credentials.
Which of the following is a feature of Amazon RDS that performs automatic failover when the primary database fails to respond?
- RDS Write Replica
- RDS Multi-AZ
- RDS Snapshots
- RDS Single-AZ
- RDS Multi-AZ
When you enable Multi-AZ, Amazon Relational Database Service (Amazon RDS) maintains a redundant and consistent standby copy of your data. If you encounter problems with the primary copy, Amazon RDS automatically switches to the standby copy (or to a read replica in the case of Amazon Aurora) to provide continued availability to the data. The two copies are maintained in different Availability Zones (AZs), hence the name “Multi-AZ.” Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Having separate Availability Zones greatly reduces the likelihood that both copies will concurrently be affected by most types of disturbances.
RDS Single-AZ is not an Amazon RDS Feature.
What are RDS Snapshots?
RDS snapshots are user-initiated backups of your instance.
What is RDS Read Replica?
Amazon RDS can be configured to use Read Replicas to scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
Which of the following are use cases for Amazon EMR? (Choose TWO)
- Enables you to move Exabyte-scale data from on-premises data centres into AWS
- Enables you to backup extremely large amounts of data at very low costs
- Enables you to easily run and scale Apache Spark, Hadoop, and other Big Data Frameworks
- Enables you to easily run and manage Docker containers
- Enables you to analyse and process extremely large amounts of data in a timely manner
- Enables you to easily run and scale Apache Spark, Hadoop, and other Big Data Frameworks and
- Enables you to analyse and process extremely large amounts of data in a timely manner
Amazon Elastic MapReduce (Amazon EMR) is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).
Amazon EMR is ideal for problems that necessitate the fast and efficient processing of large amounts of data. EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.
Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming set-up, management or tuning of Hadoop clusters or the compute capacity upon which they sit.
EMR is not a storage service.
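The Hadoop framework that EMR hosts is built around the map/shuffle/reduce paradigm. A toy word count in plain Python shows the shape of the computation that EMR distributes across a cluster (a single-process sketch, not EMR code):

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (word, 1) pairs from each input line."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["the"], counts["fox"])  # -> 3 2
```

On EMR, the map and reduce tasks run in parallel on many EC2 instances, with the framework handling the shuffle, retries, and data placement.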
Which of the following services allows you to install and run custom relational database software?
- Amazon EC2
- Amazon Cognito
- Amazon Inspector
- Amazon RDS
- Amazon EC2
If an AWS customer needs full control over a database, AWS provides a wide range of Amazon EC2 instances - with different hardware characteristics - on which they can install and run their custom relational database software.
If EC2 is used instead of RDS to run a relational database, the customer is responsible for managing everything related to this database.
Which of the following can help secure your sensitive data in Amazon S3? (Choose TWO)
- With AWS you do not need to worry about encryption
- Delete all IAM users that have access to S3
- Enable S3 Encryption
- Delete the encryption keys once your data is encrypted
- Encrypt the data prior to uploading it
- Enable S3 Encryption and Encrypt the data prior to uploading it
Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon data centres). You can protect data in transit by using SSL/TLS or by using client-side encryption.
Also, you have the following options of protecting data at rest in Amazon S3.
1- Use Server-Side Encryption – You configure Amazon S3 to encrypt your object before saving it on disks in its data centres and decrypt it when you download the objects.
2- Use Client-Side Encryption – You can encrypt your data on the client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
AWS does not encrypt the customer data automatically unless it is configured to do so. The customer is responsible for everything related to their data - access management, encryption, validation, lifecycle management, etc.
You should also restrict access to the S3 buckets using IAM policies.
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
You need to migrate a large number of on-premises workloads to AWS. Which AWS service is the most appropriate?
- AWS Database Migration Service.
- AWS File Transfer Acceleration.
- AWS Server Migration Service.
- AWS Application Discovery Service.
- AWS Server Migration Service.
AWS Server Migration Service (SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS.
AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.
AWS Server Migration Service currently supports virtual machine migrations from VMware vSphere, Windows Hyper-V, or Microsoft Azure to AWS. Each server volume migrated is saved as a new Amazon Machine Image (AMI), which can be launched as an EC2 instance (virtual machine) in the AWS cloud.
https://aws.amazon.com/server-migration-service/
What is the AWS Application Discovery Service?
AWS Application Discovery Service is used to discover on-premises server inventory and behaviour. This service is very useful when creating a migration plan to AWS.
What is AWS File Transfer Acceleration?
This feature, properly called Amazon S3 Transfer Acceleration, enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
To protect against data loss, you need to backup your database regularly. What is the most cost-effective storage option that provides immediate retrieval of your backups?
- Instance Store
- Amazon S3 Glacier Deep Archive
- Amazon EBS
- Amazon S3 Standard-Infrequent Access
- Amazon S3 Standard-Infrequent Access
Amazon S3 has a wide variety of storage classes to cover different workloads and use cases. The S3 storage class you choose primarily depends upon two factors: accessibility and cost. If you need immediate access to your data, then you want to use either S3 Standard, S3 Intelligent-Tiering, S3 Standard-Infrequent Access, S3 One Zone-IA, or Amazon S3 Glacier Instant Retrieval. S3 Standard-Infrequent Access (S3 Standard-IA) is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval charge. This combination of low cost and high performance make S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.
Database backup is an important operation to consider for any database system. Taking backups not only enables data restore on database failure but also enables recovery from data corruption. Amazon S3 Standard-Infrequent Access is the best choice because it provides immediate access to your database backups while reducing costs. S3 Standard-IA is ideal for data that is accessed less frequently (like database backups), but requires immediate access when needed.
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/s3/
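The trade-off between S3 Standard and S3 Standard-IA comes down to storage price versus a per-GB retrieval charge. A quick sketch with hypothetical placeholder prices (not current AWS pricing):

```python
# Compare the monthly cost of keeping backups in S3 Standard vs Standard-IA.
# The unit prices below are hypothetical placeholders for illustration only.
STANDARD_PER_GB = 0.023      # $/GB-month (hypothetical)
IA_PER_GB = 0.0125           # $/GB-month (hypothetical)
IA_RETRIEVAL_PER_GB = 0.01   # $/GB retrieved (hypothetical)

def standard_cost(stored_gb, retrieved_gb):
    return stored_gb * STANDARD_PER_GB          # no retrieval fee

def ia_cost(stored_gb, retrieved_gb):
    return stored_gb * IA_PER_GB + retrieved_gb * IA_RETRIEVAL_PER_GB

# 500 GB of backups held all month, 20 GB restored: IA wins because the
# data is stored for a long time but rarely read back.
print(round(standard_cost(500, 20), 2), round(ia_cost(500, 20), 2))
```

If the same data were retrieved heavily every month, the retrieval charges would erode the savings, which is why Standard-IA targets infrequently accessed data.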
What are Instance Stores in AWS?
An Instance Store is a storage volume that acts as a physical hard drive. It provides temporary storage for an Amazon EC2 instance. The data in an instance store persists during the lifetime of its instance. If an instance reboots, data in the instance store will persist. When the instance hibernates or terminates, you lose any data in the instance store.
Instance Store can only be used to store temporary data such as buffers, caches, scratch data, and other temporary content. You cannot rely on an instance store for valuable, long-term data because data in the instance store is lost if the instance stops, terminates or if the underlying disk drive fails.
An instance store provides temporary block-level storage for EC2 instances. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content.
Which AWS service collects metrics from running EC2 instances?
- Amazon Inspector
- AWS CloudFormation
- Amazon CloudWatch
- AWS CloudTrail
- Amazon CloudWatch
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.
https://aws.amazon.com/cloudwatch
Your application requirements for CPU and RAM are changing in an unpredictable way. Which service can be used to dynamically adjust these resources based on load?
- Auto Scaling
- Amazon Elastic Container Service
- Amazon Route53
- ELB
- Auto Scaling
AWS Auto Scaling is a service that can help you optimize your utilization and cost efficiencies when consuming AWS services so you only pay for the resources you actually need. When demand decreases, Auto Scaling shuts down unused resources automatically to reduce costs. When demand increases, Auto Scaling provisions new resources automatically to meet demand and maintain performance.
https://d1.awsstatic.com/whitepapers/aws-overview.pdf
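The scale-out/scale-in decision can be sketched with a simplified target-tracking rule: size the fleet so the per-instance metric moves back toward a target, clamped to the group's bounds. This mirrors the idea behind target tracking, not AWS's actual implementation:

```python
import math

def desired_capacity(current_capacity, current_metric, target_metric,
                     min_size=1, max_size=10):
    """Simplified target-tracking rule: scale the fleet proportionally to
    how far the observed metric is from the target, within group bounds."""
    wanted = math.ceil(current_capacity * current_metric / target_metric)
    return max(min_size, min(max_size, wanted))

# Fleet of 4 instances at 90% average CPU, targeting 60%: scale out to 6.
print(desired_capacity(4, 90, 60))   # -> 6
# Load drops to 20% average CPU: scale in to 2.
print(desired_capacity(4, 20, 60))   # -> 2
```

Rounding up on scale-out and clamping to minimum/maximum sizes keeps the adjustment conservative, which is also the behaviour you want from a real scaling policy.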
What is Amazon Route53?
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.
Amazon Route 53 is not used for storing data. It is a globally available, cloud-based Domain Name System (DNS) web service not tied to Availability Zones.
Amazon Route 53:
- Register Domains
- Use AWS nameservers
- Public and Private DNS zones
- Automated via API
- Health checks
- Different routing methods:
- Latency
- Geographic
- Failover
- Weighted Sets
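Weighted routing returns each record with probability proportional to its weight. A deterministic sketch of the selection logic (pure Python, not the Route 53 API; the hostnames are made up):

```python
def pick_weighted(records, r):
    """Select a record given r in [0, 1): each record is chosen with
    probability weight / total_weight."""
    total = sum(weight for _, weight in records)
    threshold = r * total
    cumulative = 0
    for name, weight in records:
        cumulative += weight
        if threshold < cumulative:
            return name
    return records[-1][0]  # guard against floating-point edge cases

# Two endpoints in a 3:1 split, e.g. shifting traffic during a blue/green
# deployment. Hostnames are hypothetical.
records = [("blue.example.com", 3), ("green.example.com", 1)]
print(pick_weighted(records, 0.50))  # -> blue.example.com
print(pick_weighted(records, 0.80))  # -> green.example.com
```

Adjusting the weights over time lets you gradually drain traffic from one endpoint to another without changing clients.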
Why are Serverless Architectures more economical than Server-based Architectures?
- With Serverless Architectures you have the ability to scale automatically up or down as demand changes.
- When you reserve serverless capacity, you will get large discounts compared to server reservation.
- With the Server-based Architectures, compute resources continue to run all the time but with serverless architecture, compute resources are only used when code is being executed.
- Serverless Architectures use new powerful computing devices.
- With the Server-based Architectures, compute resources continue to run all the time but with serverless architecture, compute resources are only used when code is being executed.
Serverless architectures can reduce costs because you do not have to manage or pay for underutilized servers, or provision redundant infrastructure to implement high availability. For example, you can upload your code to the AWS Lambda compute service, and the service can run the code on your behalf using AWS infrastructure. With AWS Lambda, you are charged based on the duration your code executes and the number of times your code is triggered.
AWS uses the same devices for both server-based and serverless architectures.
With Serverless Architecture, you do not have to worry about scaling compute capacity. AWS handles that for you.
There are no reservations when using Serverless Architectures.
https://aws.amazon.com/serverless/
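The Lambda billing model described above (pay per request plus pay per unit of compute-time consumed) can be sketched as a simple estimate. The unit prices here are hypothetical placeholders, not current AWS pricing:

```python
# Estimate a monthly AWS Lambda bill: a per-request charge plus a charge
# per GB-second of execution. Unit prices are hypothetical placeholders.
PRICE_PER_MILLION_REQUESTS = 0.20     # $ (hypothetical)
PRICE_PER_GB_SECOND = 0.0000167       # $ (hypothetical)

def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Compute is billed only while the code runs, scaled by memory size."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# 5M invocations/month, 120 ms average duration, 512 MB functions.
print(round(lambda_monthly_cost(5_000_000, 120, 512), 2))
```

The key contrast with a server-based design is the `avg_duration_ms` term: idle time between invocations contributes nothing to the bill.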
You have migrated your application to AWS recently. How can you view the AWS costs applied to your account?
- Using the AWS CloudWatch logs dashboard
- Using the AWS Cost & Usage Report
- Using the Amazon AppStream 2.0 dashboard
- Using the Amazon VPC dashboard
- Using the AWS Cost & Usage Report
The AWS Cost & Usage Report is your one-stop shop for accessing the most detailed information available about your AWS costs and usage. The AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes.
Amazon VPC dashboard doesn’t provide any cost information.
https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/
Which statement best describes the AWS Pay-As-You-Go pricing model?
- With AWS, you replace low upfront expenses with large variable payments.
- With AWS, you replace large upfront expenses with low fixed payments.
- With AWS, you replace large capital expenses with low variable payments.
- With AWS, you replace low upfront expenses with large fixed payments.
- With AWS, you replace large capital expenses with low variable payments.
AWS does not require minimum spend commitments or long-term contracts. You replace large fixed upfront expenses with low variable payments that only apply based on what you use. For example, when using On-Demand Instances you pay only for the hours or seconds they are running and nothing more.
https://aws.amazon.com/pricing/
You manage a blog on AWS that has different environments: development, testing, and production. What can you use to create a custom console for each environment to view and manage your resources easily?
- AWS Management Console
- AWS Placement Groups
- AWS Resource Groups
- AWS Tag Editor
- AWS Resource Groups
If you work with multiple resources in multiple environments, you might find it useful to manage all the resources in each environment as a group rather than move from one AWS service to another for each task. Resource Groups help you do just that. By default, the AWS Management Console is organized by AWS service. But with the Resource Groups tool, you can create a custom console that organizes and consolidates information based on your project and the resources that you use.
https://docs.aws.amazon.com/ARG/latest/APIReference/Welcome.html
What is the AWS Tag Editor?
AWS Tag Editor is used to add, edit, or delete tags from AWS resources.
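Both Resource Groups and the Tag Editor key off resource tags, which are plain key/value pairs attached to resources. A sketch of grouping resources by an `Environment` tag, the way a tag-based Resource Group query would (the resource list below is fabricated):

```python
# Group resources by their "Environment" tag, as a tag-based Resource
# Group query would. The ARNs and tags below are fabricated examples.
resources = [
    {"arn": "arn:aws:ec2:us-east-1:111122223333:instance/i-0aaa",
     "tags": {"Environment": "production", "Project": "blog"}},
    {"arn": "arn:aws:rds:us-east-1:111122223333:db:blog-db",
     "tags": {"Environment": "production", "Project": "blog"}},
    {"arn": "arn:aws:ec2:us-east-1:111122223333:instance/i-0bbb",
     "tags": {"Environment": "development", "Project": "blog"}},
]

def by_tag(resources, key):
    """Bucket resource ARNs by the value of one tag key."""
    groups = {}
    for r in resources:
        groups.setdefault(r["tags"].get(key, "untagged"), []).append(r["arn"])
    return groups

groups = by_tag(resources, "Environment")
print(len(groups["production"]), len(groups["development"]))  # -> 2 1
```

A consistent tagging scheme is what makes per-environment consoles (and per-environment cost allocation) possible.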
What are AWS Placement Groups?
Placement Groups are logical groupings or clusters of EC2 instances within a single Availability Zone.
Placement Groups are logical groupings or clusters of EC2 instances within a single Availability Zone. Placement groups are recommended for applications that require low network latency, high network throughput, or both.
An organization uses a hybrid cloud architecture to run their business. Which AWS service enables them to deploy their applications to any AWS or on-premises server?
- Amazon Kinesis
- AWS CodeDeploy
- Amazon Athena
- Amazon QuickSight
- AWS CodeDeploy
AWS CodeDeploy is a service that automates application deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.
You can also use AWS OpsWorks to automate application deployments to any instance, including Amazon EC2 instances and instances running on-premises. OpsWorks is a service that helps you automate operational tasks like code deployment, software configurations, package installations, database setups, and server scaling using Chef and Puppet.
https://aws.amazon.com/codedeploy/
https://aws.amazon.com/about-aws/whats-new/2015/04/aws-codedeploy-supports-on-premises-instances/
https://aws.amazon.com/about-aws/whats-new/2014/12/08/aws-opsworks-supports-existing-ec2-instances-and-on-premises-servers/
A company experiences fluctuations in traffic patterns to their e-commerce website when running flash sales. What service can help the company dynamically match the required compute capacity to handle spikes in traffic during flash sales?
- Amazon Elastic File System
- Amazon Elastic Compute Cloud
- AWS Auto Scaling
- Amazon ElastiCache
- AWS Auto Scaling
AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, you maintain optimal application performance and availability, even when workloads are periodic, unpredictable, or continuously changing. When demand spikes, AWS Auto Scaling automatically increases the compute capacity, so you maintain performance. When demand subsides, AWS Auto Scaling automatically decreases the compute capacity, so you pay only for the resources you actually need.
https://aws.amazon.com/autoscaling/
A media company has an application that requires the transfer of large data sets to and from AWS every day. This data is business critical and should be transferred over a consistent connection. Which AWS service should the company use?
- AWS Direct Connect
- Amazon Comprehend
- AWS VPN
- AWS Snowmobile
- AWS Direct Connect
AWS Direct Connect makes it easy for businesses to establish a dedicated network connection from their on-premises datacentres to AWS. Using AWS Direct Connect, customers can establish private connectivity between AWS and their datacentre, office, or co-location environment, which in many cases can reduce their network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
https://aws.amazon.com/directconnect/
Which of the following would you use to manage your encryption keys in the AWS Cloud? (Choose TWO)
- AWS CodeCommit
- AWS CodeDeploy
- AWS KMS
- CloudHSM
- AWS Certificate Manager
- AWS KMS and CloudHSM
AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses FIPS 140-2 validated hardware security modules to protect the security of your keys. AWS Key Management Service is integrated with most other AWS services to help you protect the data you store with these services. AWS Key Management Service is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries.
https://aws.amazon.com/kms/
https://aws.amazon.com/cloudhsm/
For Amazon RDS databases, what does AWS perform on your behalf? (Choose TWO)
- Access Management
- Management of the operating system
- Management of firewall rules
- Network traffic protection
- Database setup
- Management of the operating system and Database setup
In relation to Amazon RDS databases:
AWS is responsible for:
1- Managing the underlying infrastructure and foundation services.
2- Managing the operating system.
3- Database setup.
4- Patching and backups.
The customer is still responsible for:
1- Protecting the data stored in databases (through encryption and IAM access control).
2- Managing the database settings that are specific to the application.
3- Building the relational schema.
4- Network traffic protection.
The customer is responsible for managing access to all AWS services and resources.
The customer is responsible for managing firewall rules using security groups.
The customer is responsible for protecting network traffic using security groups, Network ACLs, and AWS WAF.
Amazon RDS for Oracle does not automatically replicate data. Amazon RDS supports six database engines (Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server). Amazon Aurora is the only database engine that replicates data automatically across three Availability Zones. For other database engines, you must enable the “Multi-AZ” feature manually. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a standby copy of your data in a different Availability Zone. If a storage volume on your primary instance fails, Amazon RDS automatically initiates a failover to the up-to-date standby.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.html
Which of the following has the greatest impact on cost? (Choose TWO)
- Data Transfer Out charges
- The number of IAM roles provisioned
- The number of services used
- Data Transfer In charges
- Compute charges
- Data Transfer Out charges and Compute charges
The factors that have the greatest impact on cost include: Compute, Storage and Data Transfer Out. Their pricing differs according to the service you use.
It does not matter how many AWS services you are using. Each AWS service has its own pricing details, and many of them are free to use.
There is no charge for inbound data transfer (also called Data Transfer IN) across all services in all Regions.
Data transfer from AWS to the internet (Data Transfer OUT) is charged per service, with rates specific to the originating Region.
IAM and all of its features are free to use.
https://aws.amazon.com/pricing/
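The pricing model above can be sketched as a simple estimator. This is illustrative only: the rates below are hypothetical placeholders, not real AWS prices; the point is that compute, storage, and Data Transfer OUT drive the bill, while Data Transfer IN never does.

```python
# Illustrative only: rates are hypothetical placeholders, not real AWS prices.
def estimate_monthly_cost(compute_hours, storage_gb, transfer_in_gb, transfer_out_gb,
                          compute_rate=0.10, storage_rate=0.023, transfer_out_rate=0.09):
    """Estimate a monthly bill from the three main cost drivers.

    Data Transfer IN is free across all services and Regions, so
    transfer_in_gb never contributes to the total.
    """
    compute = compute_hours * compute_rate
    storage = storage_gb * storage_rate
    transfer_out = transfer_out_gb * transfer_out_rate
    transfer_in = transfer_in_gb * 0.0   # inbound data transfer is not charged
    return round(compute + storage + transfer_out + transfer_in, 2)
```

Note that no matter how large `transfer_in_gb` is, the total does not change.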
You are facing a lot of problems with your current contact centre. Which service provides a cloud-based contact centre that can deliver a better service for your customers?
- AWS Direct Connect
- Amazon Lightsail
- Amazon Connect
- AWS PrivateLink
- Amazon Connect
Amazon Connect is a cloud-based contact centre solution. Amazon Connect makes it easy to set up and manage a customer contact centre and provide reliable customer engagement at any scale. You can set up a contact centre in just a few steps, add agents from anywhere, and start to engage with your customers right away. Amazon Connect provides rich metrics and real-time reporting that allow you to optimize contact routing. You can also resolve customer issues more efficiently by putting customers in touch with the right agents. Amazon Connect integrates with your existing systems and business applications to provide visibility and insight into all of your customer interactions.
https://aws.amazon.com/connect/
What is AWS PrivateLink?
AWS PrivateLink enables you to securely connect your VPCs to supported AWS services: to your own services on AWS, to services hosted by other AWS accounts, and to third-party services on AWS Marketplace. With AWS PrivateLink, traffic between AWS resources, VPCs, and third-party services stays on the global AWS backbone and never traverses the public internet, reducing exposure to brute force and distributed denial-of-service attacks, along with other threats.
For example, customers who want to use a SaaS application offered by an independent software vendor in the AWS Marketplace have to choose between allowing Internet access from their VPC, which puts the VPC resources at risk, and not using these applications at all. With AWS PrivateLink, customers can connect to AWS services and SaaS applications from their VPC in a private, secure, and scalable manner and without traversing the public internet.
Which of the below is a fully managed Amazon search service based on open source software?
1. Amazon CloudSearch
2. Amazon Elasticsearch Service
3. AWS Elastic Beanstalk
4. AWS OpsWorks
- Amazon Elasticsearch Service
Amazon Elasticsearch Service is a fully managed service that makes it easy for you to deploy, secure, operate, and scale Elasticsearch to search, analyse, and visualise data in real time. Elasticsearch is based on open-source software.
What is High Availability?
It is the ability of a system to keep operating, with minimal downtime, even when a component fails. An example is running a second server that you fail over to if the primary fails.
What are RTO and RPO?
RPO (Recovery Point Objective) is the maximum acceptable amount of data loss, measured as the time between the last backup (or replication point) and the outage. RTO (Recovery Time Objective) is the maximum acceptable time taken to recover from an outage and restore service.
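The two objectives can be computed from an incident timeline. A minimal sketch:

```python
from datetime import datetime, timedelta

def rpo_rto(last_backup: datetime, outage_start: datetime, service_restored: datetime):
    """Return (RPO, RTO) for a single incident as timedeltas.

    RPO = data-loss window: time between the last backup and the outage.
    RTO = downtime: time between the outage and restoration of service.
    """
    return outage_start - last_backup, service_restored - outage_start

# Example: backup at 02:00, outage at 03:30, service restored at 05:30
rpo, rto = rpo_rto(datetime(2024, 1, 1, 2, 0),
                   datetime(2024, 1, 1, 3, 30),
                   datetime(2024, 1, 1, 5, 30))
# rpo is 1 hour 30 minutes of lost data; rto is 2 hours of downtime
```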
How many support plans are there in AWS?
AWS provides multiple support plans to meet the different support requirements of its customers.
There are four main support plans in AWS:
- Basic
- Developer
- Business
- Enterprise
Under the shared responsibility model, which of the following is the customer responsible for?
A. Ensuring that disk drives are wiped after use.
B. Ensuring that firmware is updated on hardware devices.
C. Ensuring that data is encrypted at rest.
D. Ensuring that network cables are category six or higher.
Answer: C. Ensuring that data is encrypted at rest.
The use of what AWS feature or service allows companies to track and categorize spending on a detailed
level?
A. Cost allocation tags
B. Consolidated billing
C. AWS Budgets
D. AWS Marketplace
Answer: A. Cost allocation tags.
Cost allocation tags let you label AWS resources and then track and categorize spending at a detailed level in your cost reports.
Which service stores objects, provides real-time access to those objects, and offers versioning and lifecycle
capabilities?
A. Amazon Glacier
B. AWS Storage Gateway
C. Amazon S3
D. Amazon EBS
Answer: C. Amazon S3.
What AWS team assists customers with accelerating cloud adoption through paid engagements in any of several specialty practice areas?
A. AWS Enterprise Support
B. AWS Solutions Architects
C. AWS Professional Services
D. AWS Account Managers
Answer: C. AWS Professional Services.
AWS Professional Services is a global team of experts that helps organizations achieve their desired business outcomes with AWS and accelerate their path to successful cloud adoption.
A customer would like to design and build a new workload on AWS Cloud but does not have the AWS related software technical expertise in-house. Which of the following AWS programs can a customer take advantage of to achieve that outcome?
A. AWS Partner Network Technology Partners
B. AWS Marketplace
C. AWS Partner Network Consulting Partners
D. AWS Service Catalog
Answer: C. AWS Partner Network Consulting Partners.
Distributing workloads across multiple Availability Zones supports which cloud architecture design
principle?
A. Implement automation.
B. Design for agility.
C. Design for failure.
D. Implement elasticity.
Answer: C. Design for failure.
Which AWS services can host a Microsoft SQL Server database? (Choose two.)
A. Amazon EC2
B. Amazon Relational Database Service (Amazon RDS)
C. Amazon Aurora
D. Amazon Redshift
E. Amazon S3
A. Amazon EC2 and B. Amazon Relational Database Service (Amazon RDS).
Which of the following inspects AWS environments to find opportunities that can save money for users and
also improve system performance?
A. AWS Cost Explorer
B. AWS Trusted Advisor
C. Consolidated billing
D. Detailed billing
B. AWS Trusted Advisor
Which of the following Amazon EC2 pricing models allow customers to use existing server-bound software
licenses?
A. Spot Instances
B. Reserved Instances
C. Dedicated Hosts
D. On-Demand Instances
C. Dedicated Hosts
Which AWS characteristics make AWS cost effective for a workload with dynamic user demand? (Choose
two.)
A. High availability
B. Shared security model
C. Elasticity
D. Pay-as-you-go pricing
E. Reliability
C. Elasticity and D. Pay-as-you-go pricing
A company is planning to run a global marketing application in the AWS Cloud. The application will feature videos that can be viewed by users. The company must ensure that all users can view these videos with low latency. Which AWS service should the company use to meet this requirement?
A. AWS Auto Scaling
B. Amazon Kinesis Video Streams
C. Elastic Load Balancing
D. Amazon CloudFront
D. Amazon CloudFront
Which pillar of the AWS Well-Architected Framework refers to the ability of a system to recover from infrastructure or service disruptions and dynamically acquire computing resources to meet demand?
A. Security
B. Reliability
C. Performance efficiency
D. Cost optimization
B. Reliability
Which of the following are benefits of migrating to the AWS Cloud? (Choose two.)
A. Operational resilience
B. Discounts for products on Amazon.com
C. Business agility
D. Business excellence
E. Increased staff retention
A. Operational resilience and C. Business agility.
A company is planning to replace its physical on-premises compute servers with AWS serverless compute services. The company wants to be able to take advantage of advanced technologies quickly after the migration.
Which pillar of the AWS Well-Architected Framework does this plan represent?
A. Security
B. Performance efficiency
C. Operational excellence
D. Reliability
B. Performance efficiency.
A large company has multiple departments. Each department has its own AWS account. Each department has purchased Amazon EC2 Reserved Instances.
Some departments do not use all the Reserved Instances that they purchased, and other departments need more Reserved Instances than they purchased.
The company needs to manage the AWS accounts for all the departments so that the departments can share the Reserved Instances.
Which AWS service or tool should the company use to meet these requirements?
A. AWS Systems Manager
B. Cost Explorer
C. AWS Trusted Advisor
D. AWS Organizations
D. AWS Organizations.
With AWS Organizations and consolidated billing, all member accounts are treated as one account for billing purposes, so Reserved Instance discounts purchased by one department are automatically shared across the other accounts in the organization.
Which component of the AWS global infrastructure is made up of one or more discrete data centres that have redundant power, networking, and connectivity?
A. AWS Region
B. Availability Zone
C. Edge location
D. AWS Outposts
B. Availability Zone
Which duties are the responsibility of a company that is using AWS Lambda? (Choose two.)
A. Security inside of code
B. Selection of CPU resources
C. Patching of operating system
D. Writing and updating of code
E. Security of underlying infrastructure
A. Security inside of code and D. Writing and updating of code
Which AWS services or features provide disaster recovery solutions for Amazon EC2 instances? (Choose two.)
A. EC2 Reserved Instances
B. EC2 Amazon Machine Images (AMIs)
C. Amazon Elastic Block Store (Amazon EBS) snapshots
D. AWS Shield
E. Amazon GuardDuty
B. EC2 Amazon Machine Images (AMIs) and C. Amazon Elastic Block Store (Amazon EBS) snapshots
A company is migrating to the AWS Cloud instead of running its infrastructure on premises.
Which of the following are advantages of this migration? (Choose two.)
A. Elimination of the need to perform security auditing
B. Increased global reach and agility
C. Ability to deploy globally in minutes
D. Elimination of the cost of IT staff members
E. Redundancy by default for all compute services
B. Increased global reach and agility and C. Ability to deploy globally in minutes.
Migrating to AWS does not eliminate the cost of IT staff members, and redundancy is not provided by default for all compute services.
A user is comparing purchase options for an application that runs on Amazon EC2 and Amazon RDS. The application cannot sustain any interruption. The application experiences a predictable amount of usage, including some seasonal spikes that last only a few weeks at a time. It is not possible to modify the application.
Which purchase option meets these requirements MOST cost-effectively?
A. Review the AWS Marketplace and buy Partial Upfront Reserved Instances to cover the predicted and seasonal load.
B. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run on Spot Instances.
C. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run at an On-Demand rate.
D. Buy Reserved Instances to cover all potential usage that results from the seasonal usage.
C. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run at an On-Demand rate.
Because the application cannot sustain any interruption, Spot Instances (which AWS can reclaim with a two-minute warning) are unsuitable. Reserved Instances cover the predictable year-round usage at the lowest cost, and On-Demand capacity absorbs the short seasonal spikes without risk of interruption.
Which AWS services can be used to store files? Choose 2 answers from the options given below.
A. Amazon CloudWatch
B. Amazon Simple Storage Service (Amazon S3)
C. Amazon Elastic Block Store (Amazon EBS)
D. AWS Config
E. Amazon Athena
B. Amazon Simple Storage Service (Amazon S3) and C. Amazon Elastic Block Store (Amazon EBS)
Which of the following services uses AWS edge locations?
A. Amazon Virtual Private Cloud (Amazon VPC)
B. Amazon CloudFront
C. Amazon Elastic Compute Cloud (Amazon EC2)
D. AWS Storage Gateway
B. Amazon CloudFront
Which of the following is a benefit of Amazon Elastic Compute Cloud (Amazon EC2) over physical servers?
A. Automated backup
B. Paying only for what you use
C. The ability to choose hardware vendors
D. Root / administrator access
B. Paying only for what you use
Which AWS service provides infrastructure security optimization recommendations?
A. AWS Price List Application Programming Interface (API)
B. Reserved Instances
C. AWS Trusted Advisor
D. Amazon Elastic Compute Cloud (Amazon EC2) Spot Fleet
C. AWS Trusted Advisor
Which service allows for the collection and tracking of metrics for AWS services?
A. Amazon CloudFront
B. Amazon CloudSearch
C. Amazon CloudWatch
D. Amazon Machine Learning (Amazon ML)
C. Amazon CloudWatch
A company needs to know which user was responsible for terminating several Amazon Elastic Compute Cloud (Amazon EC2) instances. Where can the customer find this information?
A. AWS Trusted Advisor
B. Amazon EC2 instance usage report
C. Amazon CloudWatch
D. AWS CloudTrail logs
D. AWS CloudTrail logs
Which service should an administrator use to register a new domain name with AWS?
A. Amazon Route 53
B. Amazon CloudFront
C. Elastic Load Balancing
D. Amazon Virtual Private Cloud (Amazon VPC)
A. Amazon Route 53
What is the value of having AWS Cloud services accessible through an Application Programming Interface (API)?
A. Cloud resources can be managed programmatically
B. AWS infrastructure use will always be cost-optimized
C. All application testing is managed by AWS
D. Customer-owned, on-premises infrastructure becomes programmable
A. Cloud resources can be managed programmatically
Engineers are wasting a lot of time and effort managing batch computing software in traditional data centres. Which of the following AWS services allows them to easily run thousands of batch computing jobs?
A. Lambda@Edge
B. AWS Fargate
C. AWS Batch
D. Amazon EC2
C. AWS Batch
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.
What is Lambda@Edge?
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to your global end-users, which improves performance and reduces latency.
What factors determine how you are charged when using AWS Lambda? (Choose TWO)
A. Placement Groups
B. Number of volumes
C. Number of requests to your functions
D. Compute time consumed
E. Storage consumed
C. Number of requests to your functions and D. Compute time consumed
With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to execute.
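The two billing dimensions above can be combined into a simple estimator. This is a sketch only: the default rates below are illustrative placeholders, not current AWS Lambda prices.

```python
# Illustrative only: default rates are placeholders, not current Lambda prices.
def lambda_charge(requests, avg_duration_ms, memory_mb,
                  price_per_million_requests=0.20, price_per_gb_second=0.0000166667):
    """Estimate a Lambda bill from its two billing dimensions:
    the number of requests and the compute time (GB-seconds) consumed."""
    request_charge = requests / 1_000_000 * price_per_million_requests
    # compute time is billed in GB-seconds: duration x allocated memory
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_charge = gb_seconds * price_per_gb_second
    return round(request_charge + compute_charge, 2)
```

For example, one million invocations averaging 100 ms at 512 MB consume 50,000 GB-seconds of compute time in addition to the per-request charge.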
Which of the following is NOT a benefit of using AWS Lambda?
A. There is no charge when your AWS Lambda code is not running
B. AWS Lambda can be called directly from any mobile app
C. AWS Lambda runs code without provisioning or managing servers
D. AWS Lambda provides resizable compute capacity in the cloud
D. AWS Lambda provides resizable compute capacity in the cloud
“AWS Lambda provides resizable compute capacity in the cloud” is not a benefit of AWS Lambda, so it is the correct choice; it describes Amazon EC2. AWS Lambda automatically runs your code without requiring you to provision capacity or manage servers. AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.
Which of the following is a benefit of the “Loose Coupling” architecture principle?
A. It allows for Cross-Region Replication
B. It allows individual application components or services to be modified without affecting other components
C. It eliminates the need for change management
D. It helps AWS customers reduce Privileged Access to AWS resources
B. It allows individual application components or services to be modified without affecting other components
As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies: a change or a failure in one component should not cascade to other components.
The AWS services that can help you build loosely-coupled applications include:
1- Amazon Simple Queue Service (Amazon SQS): Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
2- Amazon EventBridge (also called Amazon CloudWatch Events): Amazon EventBridge is a serverless event bus service that makes it easy for you to build event-driven application architectures. Amazon EventBridge helps you accelerate modernizing and re-orchestrating your architecture with decoupled services and applications. With EventBridge, you can speed up your organization’s development process by allowing teams to iterate on features without explicit dependencies between systems.
3- Amazon SNS: Amazon SNS is a publish/subscribe messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Both Amazon SNS and Amazon EventBridge can be used to implement the publish-subscribe pattern. Amazon EventBridge includes direct integrations with software as a service (SaaS) applications and other AWS services. It’s ideal for publish-subscribe use cases involving these types of integrations.
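The decoupling these services provide can be sketched locally, using Python's standard `queue.Queue` as a stand-in for a hosted queue such as Amazon SQS. The producer and consumer never call each other; each only knows about the queue, so either side can be modified, scaled, or even be down without breaking the other.

```python
import queue

def producer(q, orders):
    """Publish work items; the producer only knows about the queue."""
    for order in orders:
        q.put(order)

def consumer(q):
    """Pull work items at the consumer's own pace, independently of the producer."""
    processed = []
    while not q.empty():
        processed.append(q.get())
    return processed

# The two components communicate only through the queue:
q = queue.Queue()
producer(q, ["order-1", "order-2"])
results = consumer(q)   # ["order-1", "order-2"]
```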
Each AWS Region is composed of multiple Availability Zones. Which of the following best describes what an Availability Zone is?
A. It is a distinct location within a region that is insulated from failures in other Availability Zones
B. It is a collection of data centres distributed in multiple countries
C. It is a logically isolated network of the AWS Cloud
D. It is a collection of Local Zones designed to be completely isolated from each other
A. It is a distinct location within a region that is insulated from failures in other Availability Zones
Availability Zones are distinct locations within a region that are insulated from failures in other Availability Zones.
Note:
Although Availability Zones are insulated from failures in other Availability Zones, they are connected through private, low-latency links to other Availability Zones in the same region.
An Availability Zone consists of one or more discrete data centres located in one AWS Region.
What is a Local Zone in AWS?
A Local Zone is an extension of an AWS Region in geographic proximity to your users.
With AWS Local Zones, you can easily run highly-demanding applications that require single-digit millisecond latencies to your end-users, such as real-time gaming, hybrid migrations, AR/VR, and machine learning. AWS Local Zones enable you to comply with state and local data residency requirements in sectors such as healthcare, financial services, iGaming, and government.
AWS Local Zones are connected to the parent region via Amazon’s redundant and very high bandwidth private network, giving applications running in AWS Local Zones fast, secure, and seamless access to the full range of in-region services through the same APIs and tool sets.
It is an extension of an AWS Region, suited for customers who need to place resources in multiple locations closer to their end users.
Which of the following is the responsibility of AWS according to the AWS Shared Responsibility Model?
A. Performing auditing tasks
B. Monitoring AWS resources usage
C. Securing access to AWS resources
D. Securing regions and edge locations
D. Securing regions and edge locations
All other options represent responsibilities of the customer.
According to the Shared Security Model, AWS’ responsibility is the Security of the Cloud. AWS is responsible for protecting the infrastructure that runs the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
You have multiple standalone AWS accounts and you want to decrease your AWS monthly charges. What should you do?
A. Try to remove unnecessary AWS accounts
B. Enable AWS-tiered pricing before provisioning resources
C. Add the accounts to an AWS Organisation and use Consolidated Billing
D. Track the AWS charges that are incurred by the member accounts
C. Add the accounts to an AWS Organisation and use Consolidated Billing
Consolidated billing has the following benefits:
1- One bill – You get one bill for multiple accounts.
2- Easy tracking – You can track each account’s charges, and download the cost data in .csv format.
3- Combined usage – If you have multiple standalone accounts, your charges might decrease if you add the accounts to an organization. AWS combines usage from all accounts in the organization to qualify you for volume pricing discounts.
4- No extra fee – Consolidated billing is offered at no additional cost.
Removing accounts or resources depends on your needs.
Tracking the AWS charges will not decrease your charges.
AWS tiered-pricing is applied for every AWS account regardless of whether it is part of an organization or not. With AWS, you can get volume-based discounts and realize important savings as your usage increases. For services such as S3 and data transfer OUT from EC2, pricing is tiered, meaning the more you use, the less you pay per GB. But if you have multiple AWS accounts, you can achieve even more discounts by adding them to an Organization and enable consolidated billing (because in that case, AWS will treat all the accounts as one account).
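The volume-discount effect of combined usage can be shown with a toy tiered price list. The tier boundaries and rates below are hypothetical, not real S3 prices; the point is that combining usage lets the organization reach cheaper tiers that standalone accounts would miss.

```python
# Hypothetical tiers for illustration; real S3 tier boundaries and rates differ.
TIERS = [(50_000, 0.023), (450_000, 0.022), (float("inf"), 0.021)]  # (GB in tier, $/GB)

def tiered_cost(total_gb):
    """Price usage through progressive tiers: the more you use, the less per GB."""
    cost, remaining = 0.0, total_gb
    for size, rate in TIERS:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

def savings_from_consolidation(account_usage_gb):
    """Consolidated billing combines usage from all accounts before tiering."""
    separate = sum(tiered_cost(gb) for gb in account_usage_gb)
    combined = tiered_cost(sum(account_usage_gb))
    return round(separate - combined, 2)
```

Two accounts each storing 40,000 GB stay in the most expensive tier on their own, but their combined 80,000 GB spills into the cheaper second tier, producing a saving.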
What are the main differences between an IAM user and an IAM role in AWS? (Choose TWO)
A. IAM users are more cost effective than IAM roles
B. A role is uniquely associated with only one person, however an IAM user is intended to be assumable by anyone who needs it
C. An IAM user has temporary credentials associated with it, however a role has permanent credentials associated with it
D. An IAM user has permanent credentials associated with it, however a role has temporary credentials associated with it
E. An IAM user is uniquely associated with only one person, however a role is intended to be assumable by anyone who needs it
D. An IAM user has permanent credentials associated with it, however a role has temporary credentials associated with it and E. An IAM user is uniquely associated with only one person, however a role is intended to be assumable by anyone who needs it.
An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it (as long as they are authorized to do so). Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user.
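The credential difference can be modelled in a few lines. This is a toy sketch, not the AWS STS API: an IAM user's access key has no expiry, while credentials obtained by assuming a role carry a session expiry.

```python
from datetime import datetime, timedelta, timezone

def issue_role_credentials(session_minutes=60, now=None):
    """Toy stand-in for assuming a role: credentials expire after the session."""
    now = now or datetime.now(timezone.utc)
    return {"expires": now + timedelta(minutes=session_minutes)}

def credentials_valid(creds, now=None):
    """Permanent credentials (expires=None) never lapse; temporary ones do."""
    now = now or datetime.now(timezone.utc)
    return creds.get("expires") is None or now < creds["expires"]
```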
For some services, AWS automatically replicates data across multiple Availability Zones to provide fault tolerance in the event of a server failure or Availability Zone outage. Select TWO services that automatically replicate data across Availability Zones.
A. Amazon RDS for Oracle
B. Amazon Route 53
C. S3
D. Instance Store
E. Amazon Aurora
C. S3 and E. Amazon Aurora
For S3 Standard, S3 Standard-IA, and S3 Glacier storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones, each on different power grids within an AWS Region. This means your data is available when needed and protected against AZ failures.
Amazon Aurora is an Amazon RDS database engine. All of your data in Amazon Aurora is automatically replicated across three Availability Zones within an AWS region, providing built-in high availability and data durability.
Other Amazon RDS database engines (PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server) do not replicate data automatically. To protect from data loss when using any of these engines, you need to manually enable the Multi-AZ feature. In a Multi-AZ Deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. If you encounter problems with the primary copy, Amazon RDS automatically switches to the standby copy to provide continued availability to the data.
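The Multi-AZ behaviour described above can be modelled as a toy class (this is an illustration of the failover concept, not anything resembling the RDS API): every write lands on both copies, so promoting the standby loses no data.

```python
class MultiAZDatabase:
    """Toy model of an RDS Multi-AZ deployment with a synchronous standby."""

    def __init__(self, primary_az, standby_az):
        self.primary_az, self.standby_az = primary_az, standby_az
        self.data = []   # one list stands in for synchronously replicated storage

    def write(self, record):
        self.data.append(record)   # committed to primary and standby together

    def failover(self):
        """Promote the standby; it already holds an up-to-date copy of the data."""
        self.primary_az, self.standby_az = self.standby_az, self.primary_az
        return self.primary_az
```

After a failover, the former standby AZ serves traffic and no committed write is lost.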
Which of the following Cloud Computing deployment models eliminates the need to run and maintain physical data centres?
A. IaaS
B. Cloud
C. On-premises
D. PaaS
B. Cloud
There are three Cloud Computing Deployment Models:
1- Cloud:
A cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud. This Cloud Computing deployment model eliminates the need to run and maintain physical data centres.
2- Hybrid:
A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud (On-premises data centres).
3- On-premises:
Deploying resources on-premises, using virtualization and resource management tools, is sometimes called “private cloud”. On-premises deployment does not provide many of the benefits of cloud computing, but is sometimes sought for its ability to provide dedicated resources.
The AWS account administrator of your company has been fired. With the permissions granted to him as an administrator, he was able to create multiple IAM user accounts and access keys. Additionally, you are not sure whether he has access to the AWS root account or not. What should you do immediately to protect your AWS infrastructure? (Choose TWO)
A. Change the e-mail address and password of the root user account and enable MFA.
B. Rotate all access keys.
C. Delete all IAM accounts and recreate them.
D. Download all the attached policies in a safe place.
E. Use the CloudWatch service to check all API calls that have been made in your account since the administrator was fired.
A. Change the e-mail address and password of the root user account and enable MFA, and B. Rotate all access keys.
To protect your AWS infrastructure in this situation you should lock down your root user account and all IAM user accounts that the administrator had access to.
To protect your AWS infrastructure you should:
1- Change the e-mail address and the password of the root user account
2- Enable MFA on the root user account
3- Change the user name and password of all IAM users
4- Rotate (change) all access keys for all accounts
5- Enable MFA on all IAM user accounts
Deleting all IAM accounts is not necessary, and it could cause disruption to your operations.
IAM policies are used to authorize users to perform actions on AWS resources. Downloading them saves you some time if they are deleted later, but it is not an immediate first step to take to protect your AWS infrastructure.
CloudTrail is the service that gives you a complete history of the API calls that have been made in your account from all users, not CloudWatch.
Which of the following AWS services integrates with AWS Shield and AWS Web Application Firewall (AWS WAF) to protect against network and application layer DDoS attacks?
A. AWS Secrets Manager
B. AWS Systems Manager
C. Amazon CloudFront
D. Amazon EFS
C. Amazon CloudFront
Amazon CloudFront, AWS Shield, and AWS Web Application Firewall (AWS WAF) work seamlessly together to create a flexible, layered security perimeter against multiple types of attacks including network and application layer DDoS attacks. These services are co-resident at the AWS edge location and provide a scalable, reliable, and high-performance security perimeter for your applications and content.
All CloudFront distributions are defended by default against the most frequently occurring DDoS attacks that target your websites or applications with AWS Shield Standard. To defend against more complex attacks, you can add a flexible, layered security perimeter by integrating CloudFront with AWS Shield Advanced and AWS Web Application Firewall (AWS WAF).
What is AWS Systems Manager?
AWS Systems Manager gives you visibility and control of your infrastructure on AWS.
Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.
With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and execute actions on your groups of resources.
Systems Manager simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure at scale.
AWS Systems Manager helps you select and deploy operating system and software patches automatically across large groups of Amazon EC2 or on-premises instances. Through patch baselines, you can set rules to auto-approve select categories of patches to be installed, such as operating system or high severity patches. Systems Manager helps ensure that your software is up-to-date and meets your compliance policies.
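The patch-baseline idea (auto-approve select categories of patches) can be sketched as a filter. The field names and rule set below are illustrative assumptions, not the actual Systems Manager API.

```python
# Illustrative rule set: auto-approve operating system and high-severity patches.
AUTO_APPROVE = {
    "categories": {"OperatingSystem", "SecurityUpdates"},
    "severities": {"Critical", "High"},
}

def approved(patch):
    """A patch is auto-approved if its category or severity matches the rules."""
    return (patch.get("category") in AUTO_APPROVE["categories"]
            or patch.get("severity") in AUTO_APPROVE["severities"])

def select_patches(patches):
    """Return the ids of patches the baseline would auto-approve for install."""
    return [p["id"] for p in patches if approved(p)]
```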
What is AWS Secrets Manager?
AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily store, rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
What should you consider when storing data in Amazon Glacier?
A. Attach Glacier to an EC2 instance to be able to store data
B. Pick the right Glacier class based on your retrieval needs
C. Amazon Glacier only accepts data in a compressed format
D. Glacier can only be used to store frequently accessed data and data archives
B. Pick the right Glacier class based on your retrieval needs
AWS customers use Amazon Glacier to back up large amounts of data at very low cost. There are three Amazon S3 Glacier storage classes: S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive.
Choosing between S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive depends on how quickly you must retrieve your data. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes. With S3 Glacier Flexible Retrieval, you can retrieve your data within a few minutes to several hours (1-5 minutes to 12 hours), whereas with S3 Glacier Deep Archive, the minimum retrieval period is 12 hours.
For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5 - 12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval from 12 - 48 hours.
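The retrieval-time rules above can be sketched as a small selection helper. This is an illustrative sketch only: the storage-class strings are the real S3 storage class identifiers, but the `choose_glacier_class` function and its thresholds are invented for this example.

```python
# Sketch: picking the cheapest Glacier storage class that still meets a
# retrieval deadline. The function is hypothetical; the class identifiers
# ("GLACIER_IR", "GLACIER", "DEEP_ARCHIVE") are real S3 storage class names.

def choose_glacier_class(max_wait_hours: float) -> str:
    """Return the cheapest Glacier class that meets the retrieval deadline."""
    if max_wait_hours < 1 / 60:          # effectively immediate (milliseconds) access
        return "GLACIER_IR"              # S3 Glacier Instant Retrieval
    if max_wait_hours <= 12:             # minutes up to 12 hours is acceptable
        return "GLACIER"                 # S3 Glacier Flexible Retrieval
    return "DEEP_ARCHIVE"                # S3 Glacier Deep Archive (12-48 hours)

print(choose_glacier_class(0.0001))  # medical images: need instant access
print(choose_glacier_class(6))       # backups: a few hours is fine
print(choose_glacier_class(24))      # compliance archive: lowest cost wins
```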
What is the Amazon ElastiCache service used for? (Choose TWO)
A. Distribute requests to multiple instances
B. Provide a Chef-compatible cache to speed up application response
C. Provide an in-memory data storage service
D. Stream desktop applications from the cloud to user devices
E. Improve web application performance
C. Provide an in-memory data storage service and E. Improve web application performance
Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory data store, instead of relying entirely on slower disk-based databases. Querying a database is always slower and more expensive than locating a copy of that data in a cache. By caching (storing) common database query results, you can quickly retrieve the data multiple times without having to re-execute the query.
ElastiCache is not “Chef-compatible”. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. The AWS service that uses Chef and Puppet is AWS OpsWorks.
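The caching behaviour described above is the classic cache-aside pattern. The sketch below uses a plain dict to stand in for an ElastiCache Redis/Memcached node and a counter-instrumented function to stand in for a slow disk-based database; both stand-ins are hypothetical and for illustration only.

```python
# Cache-aside pattern sketch. A dict stands in for an ElastiCache node,
# and slow_query() simulates an expensive disk-based database round trip.

cache = {}
db_hits = 0

def slow_query(key):
    global db_hits
    db_hits += 1                    # count every expensive database round trip
    return f"row-for-{key}"

def get(key):
    if key in cache:                # cache hit: no database work at all
        return cache[key]
    value = slow_query(key)         # cache miss: query the database...
    cache[key] = value              # ...and store the result for next time
    return value

get("user:42")
get("user:42")
get("user:42")
print(db_hits)  # the database was queried only once for three reads
```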
Which of the following can be used to protect websites not hosted on AWS?
A. AWS Network ACLs
B. AWS Ground Station
C. AWS WAF
D. AWS Security Groups
C. AWS WAF
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
AWS WAF is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that block traffic based on conditions that you define. These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection, and cross-site scripting. AWS WAF is integrated with Amazon CloudFront, which supports custom origins outside of AWS. Therefore, AWS WAF can help you protect websites not hosted on AWS.
What is AWS Ground Station?
AWS Ground Station is a fully managed service that lets you control satellite communications, process satellite data, and scale your satellite operations.
With AWS Ground Station, you no longer have to build or manage your own ground station infrastructure.
What are some of the benefits of using On-Demand EC2 instances? (Choose TWO)
A. They are cheaper than all other EC2 options
B. They only require 1-2 days for setup and configuration
C. They provide free capacity when testing your new applications
D. You can increase or decrease your compute capacity depending on the demands of your application
E. They remove the need to buy “safety net” capacity to handle periodic traffic spikes
D. You can increase or decrease your compute capacity depending on the demands of your application and E. They remove the need to buy “safety net” capacity to handle periodic traffic spikes.
With On-Demand instances, you pay for compute capacity by the hour or the second depending on which instances you run. No longer-term commitments or upfront payments are needed. You can increase or decrease your compute capacity depending on the demands of your application and only pay for what you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. On-Demand instances also remove the need to buy “safety net” capacity to handle periodic traffic spikes.
Spot, Savings Plans, and Reserved instances are all cheaper than On-Demand instances.
Which statement is true in relation to security in AWS?
A. AWS is responsible for the security of your application
B. Server-side encryption is the responsibility of AWS
C. For serverless data stores such as Amazon S3, the customer is responsible for patching the operating system
D. AWS customers are responsible for patching any database software running on Amazon EC2
D. AWS customers are responsible for patching any database software running on Amazon EC2
AWS customers have two options to host their databases on AWS:
1- Using a managed database:
AWS Customers can use managed databases such as Amazon RDS to host their databases. In this case, AWS is responsible for performing all database management tasks such as hardware provisioning, patching, setup, configuration, backups, or recovery.
2- Installing a database software on Amazon EC2:
Instead of using a managed database, AWS customers can install any database software they want on Amazon EC2 and host their databases. In this case, Customers are responsible for performing all of the necessary configuration and management tasks.
Note: For Amazon RDS, all security patches and updates are applied automatically to the database software once they are released. But for databases installed on Amazon EC2, customers are required to apply the security patches and the updates manually or use the AWS Systems Manager service to apply them on a scheduled basis (every week, for example).
It is the responsibility of the customer to build secure applications.
It is the responsibility of the customer to encrypt data either on the client side or on the server side.
Which of the following factors affect Amazon CloudFront cost? (Choose TWO)
A. Storage Class.
B. Traffic Distribution.
C. Number of Requests.
D. Instance type.
E. Number of Volumes.
B. Traffic Distribution. and C. Number of Requests.
Amazon CloudFront charges are based on the data transfer out of AWS and requests used to deliver content to your customers. There are no upfront payments or fixed platform fees, no long-term commitments, no premiums for dynamic content, and no requirements for professional services to get started.
To estimate the costs of an Amazon CloudFront distribution consider the following:
- Traffic Distribution: Data transfer and request pricing varies across geographic regions, and pricing is based on the edge location through which your content is served.
- Requests: The number and type of requests (HTTP or HTTPS) made and the geographic region in which the requests are made.
- Data Transfer OUT: The amount of data transferred out of your Amazon CloudFront edge locations.
Note: Data Transfer IN is free. There is no charge for inbound data transferred from AWS services such as Amazon S3 or Elastic Load Balancing.
Instance type is a factor that affects Amazon EC2 costs, not Amazon CloudFront costs.
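Putting the cost factors above together, a distribution estimate is simple arithmetic over data transfer out and request counts. The per-GB and per-request rates below are made-up placeholders, not real AWS prices; real pricing varies by edge-location region and by request type (HTTP vs HTTPS).

```python
# Illustrative CloudFront cost estimate. RATE_* values are hypothetical
# placeholders, not actual AWS pricing.

data_out_gb = 500               # data transferred out of edge locations
https_requests = 2_000_000      # HTTPS requests served

RATE_PER_GB = 0.085             # hypothetical $/GB for one geographic region
RATE_PER_10K_REQUESTS = 0.0100  # hypothetical $ per 10,000 HTTPS requests

transfer_cost = data_out_gb * RATE_PER_GB
request_cost = https_requests / 10_000 * RATE_PER_10K_REQUESTS
# Data Transfer IN is free, so it contributes nothing to the estimate.

print(round(transfer_cost + request_cost, 2))
```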
You are planning to launch an advertising campaign over the coming weekend to promote a new digital product. It is expected that there will be heavy spikes in load during the campaign period, and you can’t afford any downtime. You need additional compute resources to handle the additional load. What is the most cost-effective EC2 instance purchasing option for this job?
A. Reserved Instances.
B. Savings Plans.
C. On-Demand Instances.
D. Spot Instances.
C. On-Demand Instances.
On-Demand Instances help provision any extra capacity the application may need without any interruptions.
Spot Instances may be more cost-effective, but AWS does not guarantee their availability. Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
Using Savings Plans requires a contract of at least one year. Savings Plans is a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of compute usage (measured in $/hour) for a one or three-year term.
A company needs to host a big data application on AWS using EC2 instances. Which of the following AWS Storage services would they choose to automatically get high throughput to multiple compute nodes?
A. Amazon Elastic Block Store
B. Amazon Elastic File System
C. AWS Storage Gateway
D. S3
B. Amazon Elastic File System
Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.
Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS that scale as a file system grows, with consistent low latencies. As a regional service, Amazon EFS is designed for high availability and durability storing data redundantly across multiple Availability Zones. With these capabilities, Amazon EFS is well suited to support a broad spectrum of use cases, including web serving and content management, enterprise applications, media and entertainment processing workflows, home directories, database backups, developer tools, container storage, and big data analytics workloads.
Amazon S3 is object-level storage; it cannot be attached to compute nodes as a shared file system.
What is Amazon EBS Multi-Attach?
Big data applications require shared access from hundreds or thousands of EC2 instances across multiple Availability Zones, which is beyond what Multi-Attach supports.
Amazon EBS Multi-Attach lets you share access to an EBS data volume between up to 16 Nitro-based EC2 instances within the same Availability Zone (AZ).
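The two Multi-Attach constraints stated above (at most 16 Nitro-based instances, all in the volume's Availability Zone) can be expressed as a simple validity check. The instance records and the `can_multi_attach` helper are invented for illustration.

```python
# Sketch of the EBS Multi-Attach constraints: up to 16 Nitro-based EC2
# instances, all in the same AZ as the volume. Records are hypothetical.

def can_multi_attach(instances, volume_az):
    return (
        len(instances) <= 16                            # hard limit of 16 attachments
        and all(i["nitro"] for i in instances)          # Nitro-based instances only
        and all(i["az"] == volume_az for i in instances)  # must share the volume's AZ
    )

fleet = [{"id": f"i-{n}", "az": "us-east-1a", "nitro": True} for n in range(8)]
print(can_multi_attach(fleet, "us-east-1a"))   # valid: 8 Nitro instances, same AZ
print(can_multi_attach(fleet, "us-east-1b"))   # volume in a different AZ: invalid
```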
Which of the following AWS Support Plans gives you 24/7 access to Cloud Support Engineers via e-mail & phone? (Choose TWO)
A. Standard
B. Developer
C. Enterprise
D. Business
E. Premium
C. Enterprise and D. Business
For Technical Support, each of the Business, Enterprise On-Ramp, and Enterprise support plans provides 24x7 phone, email, and chat access to Support Engineers.
Premium and Standard are not valid support plans on AWS.
The Developer plan does not include phone support 24/7.
AWS provides disaster recovery capability by allowing customers to deploy infrastructure into multiple ___________ .
A. Regions.
B. Support plans.
C. Transportation devices.
D. Edge locations.
A. Regions.
Businesses are using the AWS cloud to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. The AWS cloud supports many popular disaster recovery architectures from “pilot light” environments that may be suitable for small customer workload data centre failures to “hot standby” environments that enable rapid failover at scale. With data centres in Regions all around the world, AWS provides a set of cloud-based disaster recovery services that enable rapid recovery of your IT infrastructure and data.
Which of the following resources can an AWS customer use to learn more about prohibited uses of the services offered by AWS?
A. AWS Service Control Policies (SCPs)
B. AWS Artifact
C. AWS Acceptable Use Policy
D. AWS Budgets
C. AWS Acceptable Use Policy
The AWS Acceptable Use Policy describes prohibited uses of the web services offered by AWS. For example, any activities that are illegal, that violate the rights of others, or that may be harmful to others are prohibited. If a customer violates the policy or authorizes or helps others to do so, AWS may suspend or terminate their use of the services.
What are AWS Service Control Policies (SCPs)?
AWS Service Control Policies (SCPs) or AWS Organizations Policies are a type of organization policy that you can use to manage permissions for all accounts in your organization. SCPs offer central control over the maximum available permissions for all member accounts in your organization. SCPs help you to ensure member accounts stay within your organization’s access control guidelines. In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access.
Which of the following services is an AWS repository management system that allows for storing, versioning, and managing your application code?
A. AWS CodePipeline
B. AWS CodeCommit
C. Amazon CodeGuru
D. AWS X-Ray
B. AWS CodeCommit
AWS CodeCommit is designed for software developers who need a secure, reliable, and scalable source control system to store and version their code. In addition, AWS CodeCommit can be used by anyone looking for an easy to use, fully managed data store that is version controlled. For example, IT administrators can use AWS CodeCommit to store their scripts and configurations. Web designers can use AWS CodeCommit to store HTML pages and images.
AWS CodeCommit makes it easy for companies to host secure and highly available private Git repositories. Customers can use AWS CodeCommit to securely store anything from source code to binaries.
What is AWS CodePipeline?
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
What is Amazon CodeGuru?
Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code.
What is Amazon Kinesis Data Firehose (Analytics)?
Amazon Kinesis Data Firehose provides the simplest approach for capturing, transforming, and loading data streams into AWS data stores.
Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools.
Captures, transforms, and loads streaming data.
Enables near real-time analytics with existing business intelligence tools and dashboards.
Kinesis Data Streams can be used as the source(s) to Kinesis Data Firehose.
You can configure Kinesis Data Firehose to transform your data before delivering it.
With Kinesis Data Firehose you don’t need to write an application or manage resources.
Firehose can batch, compress, and encrypt data before loading it.
Firehose synchronously replicates data across three AZs as it is transported to destinations.
Each delivery stream stores data records for up to 24 hours.
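The batch-and-compress behaviour listed above can be illustrated conceptually: gather records into one payload, then gzip it before delivery. This is a sketch of the idea only, not the Firehose implementation; the record shape is invented.

```python
# Conceptual illustration of Firehose-style batching + compression:
# join many small records into one batch, then gzip the payload.

import gzip
import json

records = [{"event": "click", "n": i} for i in range(1000)]  # hypothetical stream records

batch = "\n".join(json.dumps(r) for r in records).encode()  # batch records together
compressed = gzip.compress(batch)                           # compress before loading

print(len(compressed) < len(batch))  # compression shrinks the delivered payload
```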
What is Amazon Kinesis Data Streams (Analytics)?
Amazon Kinesis Data Streams is the real-time data streaming service in Amazon Kinesis, offering high scalability and durability. It can continuously capture multiple gigabytes of data every second from multiple sources, and it offers a higher degree of customisation than the other Kinesis services.
Kinesis Data Streams enables you to build custom applications that process or analyse streaming data for specialised needs.
Kinesis Data Streams enables real-time processing of streaming big data.
Kinesis Data Streams is useful for rapidly moving data off data producers and then continuously processing the data.
Kinesis Data Streams stores data for later processing by applications (key difference with Firehose which delivers data directly to AWS services).
Common use cases include:
Accelerated log and data feed intake.
Real-time metrics and reporting.
Real-time data analytics.
Complex stream processing.
What is Amazon EC2 Auto Scaling?
Increase or decrease number of instances: Amazon EC2 Auto Scaling helps you maintain application availability and lets you automatically add or remove EC2 instances using scaling policies that you define.
Dynamic or predictive scaling policies let you add or remove EC2 instance capacity to service established or real-time demand patterns.
The fleet management features of Amazon EC2 Auto Scaling help maintain the health and availability of your fleet.
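As a toy illustration of the kind of decision a dynamic scaling policy automates, the sketch below sizes a fleet so average CPU moves toward a target utilisation. The decision rule is a deliberate simplification for this card, not the real service's algorithm.

```python
# Toy target-tracking-style scaling decision. The rule is a simplified
# illustration, not Amazon EC2 Auto Scaling's actual algorithm.

def desired_capacity(current_instances, avg_cpu, target_cpu=50.0):
    """Scale the fleet so average CPU moves toward the target utilisation."""
    # capacity needed = current total load / target per-instance load
    needed = current_instances * avg_cpu / target_cpu
    return max(1, round(needed))          # never scale below one instance

print(desired_capacity(4, 90))   # overloaded fleet: add instances
print(desired_capacity(4, 20))   # idle fleet: remove instances
```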
You have been tasked with auditing the security of your VPC. As part of this process, you need to start by analysing what inbound and outbound traffic is allowed on your EC2 instances. What two parts of the VPC do you need to check to accomplish this task?
A. Security Groups and Network ACLs
B. Network ACLs and Subnets
C. Security Groups and Internet Gateways
D. AWS WAF and Traffic Manager
A. Security Groups and Network ACLs
Security Groups and Network Access Control Lists (Network ACLs) are the two parts of the VPC Security Layer. Security Groups are a firewall at the instance layer, and Network ACLs are a firewall at the subnet layer.
Traffic Manager is an Azure service, not an AWS service.
Internet Gateways provide access for a VPC and subnet to reach the internet. They are not directly attached to EC2 instances.
Subnets are where EC2 instances reside, but they do not actually control ingress and egress traffic themselves.
Which of the following services is used when encrypting EBS volumes?
A. AWS WAF
B. AWS KMS
C. Amazon GuardDuty
D. Amazon Macie
B. AWS KMS
Amazon EBS encryption offers a straightforward encryption solution for your EBS volumes that does not require you to build, maintain, and secure your own key management infrastructure. You can configure Amazon EBS to use the AWS Key Management Service (AWS KMS) to create and control the encryption keys used to encrypt your data. AWS KMS is also integrated with other AWS services, including Amazon S3 and Amazon Redshift, to make it simple to encrypt and decrypt your data.
What is the maximum amount of data that can be stored in S3 in a single AWS account?
A. 10 Exabytes
B. Virtually unlimited storage
C. 5 TeraBytes
D. 100 PetaBytes
B. Virtually unlimited storage
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.
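The per-object limits quoted above (0 bytes minimum, 5 terabytes maximum) are easy to encode as a validation check. The `valid_object_size` helper is invented for illustration; the limits themselves come from the card.

```python
# Sketch: validating an upload against S3's per-object size limits.

MAX_OBJECT_BYTES = 5 * 1024**4   # 5 terabytes, the per-object maximum

def valid_object_size(size_bytes: int) -> bool:
    return 0 <= size_bytes <= MAX_OBJECT_BYTES

print(valid_object_size(0))              # zero-byte objects are allowed
print(valid_object_size(6 * 1024**4))    # larger than 5 TB: rejected
```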
Which feature enables users to sign into their AWS accounts with their existing corporate credentials?
A. Access keys
B. IAM Permissions
C. WAF rules
D. Federation
D. Federation
With Federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your corporate directory. Federation uses open standards, such as Security Assertion Markup Language 2.0 (SAML), to exchange identity and security information between an identity provider (IdP) and an application.
AWS offers multiple options for federating your identities in AWS:
1- AWS Identity and Access Management (IAM): You can use AWS Identity and Access Management (IAM) to enable users to sign in to their AWS accounts with their existing corporate credentials.
2- AWS IAM Identity Centre (Successor to AWS Single Sign-On): AWS IAM Identity Centre makes it easy to centrally manage federated access to multiple AWS accounts and business applications and provide users with single sign-on access to all their assigned accounts and applications from one place.
3- AWS Directory Service: AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, uses secure Windows trusts to enable users to sign in to the AWS Management Console, AWS Command Line Interface (CLI), and Windows applications running on AWS using their existing corporate Microsoft Active Directory credentials.
IAM Permissions let you specify the desired access to AWS resources. Permissions are granted to IAM entities (users, user groups, and roles) and by default these entities start with no permissions. In other words, IAM entities can do nothing in AWS until you grant them your desired permissions.
What are access keys in the context of AWS IAM?
Access keys are long-term credentials for an AWS IAM user or the AWS account root user.
Access keys are not used for signing in to your account.
You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).
Which of the following services enables you to easily generate and use your own encryption keys in the AWS Cloud?
A. AWS Shield
B. AWS Certificate Manager
C. AWS CloudHSM
D. AWS WAF
C. AWS CloudHSM
AWS CloudHSM is a cloud-based Hardware Security Module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.
Which of the following makes it easier for you to categorize, manage and filter your resources?
A. AWS Service Catalog
B. AWS Directory Service
C. AWS Tagging
D. Amazon CloudWatch
C. AWS Tagging
Amazon Web Services (AWS) allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, they enable customers to categorize resources by purpose, owner, environment, or other criteria.
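Filtering resources by tag, as described above, amounts to matching key/value pairs against each resource's tag set. The resource records below are invented for illustration; with real AWS you would get similar tag metadata from the Resource Groups Tagging API.

```python
# Sketch of tag-based filtering. Resource records are hypothetical.

resources = [
    {"arn": "arn:aws:ec2:...:instance/i-1", "tags": {"Environment": "prod", "Owner": "web"}},
    {"arn": "arn:aws:ec2:...:instance/i-2", "tags": {"Environment": "dev", "Owner": "web"}},
    {"arn": "arn:aws:s3:::logs-bucket",     "tags": {"Environment": "prod"}},
]

def filter_by_tag(resources, key, value):
    """Return resources whose tags contain the given key/value pair."""
    return [r for r in resources if r["tags"].get(key) == value]

prod = filter_by_tag(resources, "Environment", "prod")
print(len(prod))  # two resources carry the Environment=prod tag
```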
What is AWS Managed Microsoft AD?
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud.
Which of the following AWS support plans provides access to only the core AWS Trusted Advisor checks?
A. Developer & Business Support
B. Developer & Enterprise Support
C. Basic & Developer Support
D. Business & Enterprise Support
C. Basic & Developer Support
AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization, security, fault tolerance, performance, and service limits. AWS Basic Support and AWS Developer Support customers get access to 6 core security checks (S3 Bucket Permissions, Security Groups - Specific Ports Unrestricted, IAM Use, MFA on Root Account, EBS Public Snapshots, RDS Public Snapshots) and 50 service limit checks.
AWS Business, Enterprise On-Ramp, and Enterprise Support customers get access to ALL 115 Trusted Advisor checks (14 cost optimization, 17 security, 24 fault tolerance, 10 performance, and 50 service limits).
Which of the following are part of the seven design principles for security in the cloud? (Choose TWO)
A. Use manual monitoring techniques to protect your AWS resources
B. Never store sensitive data in the Cloud
C. Use IAM roles to grant temporary access instead of long-term credentials
D. Enable real-time traceability
E. Scale horizontally to protect from failures.
C. Use IAM roles to grant temporary access instead of long-term credentials and D. Enable real-time traceability
There are seven design principles for security in the cloud:
1- Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize privilege management and reduce or even eliminate reliance on long-term credentials.
2- Enable traceability: Monitor, alert, and audit actions and changes to your environment in real time. Integrate logs and metrics with systems to automatically respond and take action.
3- Apply security at all layers: Rather than just focusing on protection of a single outer layer, apply a defence-in-depth approach with other security controls. Apply to all layers (e.g., edge network, VPC, subnet, load balancer, every instance, operating system, and application).
4- Automate security best practices: Automated software-based security mechanisms improve your ability to securely scale more rapidly and cost effectively. Create secure architectures, including the implementation of controls that are defined and managed as code in version-controlled templates.
5- Protect data in transit and at rest: Classify your data into sensitivity levels and use mechanisms, such as encryption, tokenization, and access control where appropriate.
6- Keep people away from data: Create mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data. This reduces the risk of loss or modification and human error when handling sensitive data.
7- Prepare for security events: Prepare for an incident by having an incident management process that aligns to your organizational requirements. Run incident response simulations and use tools with automation to increase your speed for detection, investigation, and recovery.
Protecting from networking failures due to hardware issues or mis-configuration is not related to security. Protecting from failures and scaling horizontally are much more related to the reliability of your system.
AWS provides encryption and access control tools that allow you to easily encrypt your data in transit and at rest and help ensure that only authorized users can access it.
Automating security tasks on AWS enables you to be more secure. For example, you can automate infrastructure and application security checks to continually enforce your security and compliance controls and help ensure confidentiality, integrity, and availability at all times.
The elasticity of the AWS Cloud enables customers to save costs when compared to traditional hosting providers. What can AWS customers do to benefit from the elasticity of the AWS Cloud? (Choose TWO)
A. Deploy your resources across multiple Availability Zones
B. Deploy your resources in another region
C. Use Elastic Load Balancing
D. Use Serverless Computing whenever possible
E. Use Amazon EC2 Auto Scaling
D. Use Serverless Computing whenever possible and E. Use Amazon EC2 Auto Scaling
Another way you can save money with AWS is by taking advantage of the platform’s elasticity. Elasticity means the ability to scale up or down when needed. This concept is most closely associated with AWS Auto Scaling, which monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost.
Serverless Computing provides the highest level of elasticity. Serverless enables you to build modern applications with increased agility and lower total cost of ownership. Serverless allows you to run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. With serverless computing, everything required to run and scale your application with high availability is handled for you.
You may want to deploy your resources in another Region to enable faster disaster recovery, or deploy resources in multiple Regions worldwide to reduce latency for global users.
Deploying your resources across multiple Availability Zones helps you maintain high availability of your infrastructure.
Which of the following security resources are available to any user for free? (Choose TWO)
A. AWS Security Blog
B. AWS Bulletins
C. AWS Support API
D. AWS TAM
E. AWS Classroom Training
A. AWS Security Blog and B. AWS Bulletins
The AWS free security resources include the AWS Security Blog, Whitepapers, AWS Developer Forums, Articles and Tutorials, Training, Security Bulletins, Compliance Resources and Testimonials.
AWS provides live classes (Classroom Training) with accredited AWS instructors who teach you in-demand cloud skills and best practices using a mix of presentations, discussion, and hands-on labs. AWS Classroom Training is not free.
AWS Support API is available for AWS customers who have a Business, Enterprise On-Ramp, or Enterprise support plan. The AWS Support API provides programmatic access to AWS Support Centre features to create, manage, and close support cases.
A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices and proactively keep your AWS environment operationally healthy and secure. TAM is available only for AWS customers who have an Enterprise On-Ramp or Enterprise support plan.
What does Amazon GuardDuty do to protect AWS accounts and workloads? (Choose TWO)
A. Continuously monitors AWS infrastructure and helps detect threats such as attacker reconnaissance or account compromise
B. Initiates automated remediation actions against discovered security issues
C. Notifies AWS customers about abuse events once they are reported
D. Helps AWS customers identify the root cause of potential security issues
E. Checks security groups for rules that allow unrestricted access to AWS resources
A. Continuously monitors AWS infrastructure and helps detect threats such as attacker reconnaissance or account compromise and B. Initiates automated remediation actions against discovered security issues
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behaviour to protect your AWS accounts and workloads. Amazon GuardDuty integrates with Amazon CloudWatch Events and AWS Lambda to allow you to set up automated remediation actions against discovered security issues.
With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyse event log data for potential threats. GuardDuty analyses tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon VPC Flow Logs, and DNS logs. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in the AWS Cloud. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
Amazon GuardDuty provides broad protection of your AWS accounts, workloads, and data by helping to identify threats such as attacker reconnaissance, instance compromise, and account compromise.
What is the benefit of Amazon EBS volumes being automatically replicated within the same availability zone?
A. Durability
B. Accessibility
C. Elasticity
D. Traceability
A. Durability
Durability refers to a system’s ability to ensure that stored data remains intact and consistent as long as it is not changed by legitimate access. Data should not become corrupted or disappear because of a system malfunction.
Durability measures the likelihood of data loss. For example, suppose you have confidential data stored on your laptop. If you make a copy of it and store it in a secure place, you have improved the durability of that data: it is much less likely that all copies will be destroyed simultaneously.
Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. The replication of data makes EBS volumes 20 times more durable than typical commodity disk drives, which fail with an AFR (annual failure rate) of around 4%. For example, if you have 1,000 EBS volumes running for 1 year, you should expect 1 to 2 will have a failure.
Additional information:
Amazon S3 is also considered a durable storage service. Amazon S3 is designed for 99.999999999% (11 9’s) durability. At that level, if you store 100 billion objects in S3, you can expect to lose roughly one object per year, on average.
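The durability figures on this card can be checked with quick arithmetic. This is a sketch working directly from the numbers quoted above (4% commodity AFR, 20x more durable, 11 9's of S3 durability).

```python
# Worked durability arithmetic from the figures quoted on this card.

# EBS: commodity drives fail at ~4% AFR; EBS volumes are ~20x more durable.
ebs_afr = 0.04 / 20                      # => 0.2% annual failure rate
expected_failures = 1000 * ebs_afr       # for 1,000 volumes over one year
print(expected_failures)                 # ~2 volume failures expected

# S3: 11 nines of durability implies an annual expected loss rate of 1e-11.
annual_loss_rate = 1 - 0.99999999999
expected_lost_objects = 100_000_000_000 * annual_loss_rate
print(round(expected_lost_objects))      # ~1 object per year, on average
```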
Which of the following actions may reduce Amazon EBS costs? (Choose TWO)
A. Changing the type of the volume
B. Distributing requests to multiple volumes
C. Using reservations
D. Deleting unnecessary snapshots
E. Deleting unused Bucket ACLS
A. Changing the type of the volume and D. Deleting unnecessary snapshots
With Amazon EBS, it is important to keep in mind that you are paying for provisioned capacity and performance, even if the volume is unattached or has very low write activity. To optimize storage performance and costs for Amazon EBS, monitor volumes periodically to identify unattached, underutilized or overutilized volumes, and adjust provisioning to match actual usage.
When you want to reduce the costs of Amazon EBS consider the following:
1- Delete Unattached Amazon EBS Volumes:
An easy way to reduce wasted spend is to find and delete unattached volumes. When EC2 instances are stopped or terminated, attached EBS volumes are not automatically deleted (unless the Delete on Termination attribute is set, as it is by default for root volumes) and continue to accrue charges, because you pay for the provisioned capacity whether or not the volume is in use.
2- Resize or Change the EBS Volume Type:
Another way to optimize storage costs is to identify volumes that are underutilized and downsize them or change the volume type.
3- Delete Stale Amazon EBS Snapshots:
If you have a backup policy that takes EBS volume snapshots daily or weekly, you will quickly accumulate snapshots. Check for stale snapshots that are over 30 days old and delete them to reduce storage costs.
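As a sketch of step 3, the 30-day cutoff can be applied to a list of snapshots like this. The snapshot records below are made up for illustration; a real cleanup script would fetch them with the AWS CLI or an SDK such as boto3 (they are shaped like the EC2 DescribeSnapshots response):

```python
from datetime import datetime, timedelta, timezone

def stale_snapshots(snapshots, max_age_days=30, now=None):
    """Return snapshots older than max_age_days (candidates for deletion)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [s for s in snapshots if s["StartTime"] < cutoff]

# Hypothetical snapshot records for illustration only.
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
snapshots = [
    {"SnapshotId": "snap-001", "StartTime": datetime(2024, 3, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-002", "StartTime": datetime(2024, 5, 25, tzinfo=timezone.utc)},
]

old = stale_snapshots(snapshots, max_age_days=30, now=now)
print([s["SnapshotId"] for s in old])  # ['snap-001']
```

Before deleting, remember that EBS snapshots are incremental, so verify that a stale snapshot is not needed to restore a volume you still care about.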
What is the main benefit of attaching security groups to an Amazon RDS instance?
A. Controls what IP address ranges can connect to your database instance
B. Manages user access and encryption keys
C. Deploys SSL/TLS certificates for use with your database instance
D. Distributes incoming traffic across multiple targets
A. Controls what IP address ranges can connect to your database instance
In Amazon RDS, security groups are used to control which IP address ranges can connect to your databases on a DB instance. When you initially create a DB instance, its firewall prevents any database access except through rules specified by an associated security group.
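For a DB instance in a VPC, the allow rule looks like the sketch below. The CIDR range and the MySQL port here are hypothetical examples; the dictionary mirrors the IpPermissions shape used by the EC2 authorize-security-group-ingress API:

```python
import ipaddress

# Hypothetical ingress rule: allow MySQL (port 3306) only from one CIDR range.
# 203.0.113.0/24 is a documentation-only example network.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,   # default MySQL/Aurora port
    "ToPort": 3306,
    "IpRanges": [
        {"CidrIp": "203.0.113.0/24",
         "Description": "App servers allowed to reach the DB instance"}
    ],
}

# Security groups have no explicit deny rules: traffic that matches no
# allow rule is simply dropped, which is why the rule below is enough.
allowed = ipaddress.ip_network(ingress_rule["IpRanges"][0]["CidrIp"])
print(ipaddress.ip_address("203.0.113.10") in allowed)  # True  (can connect)
print(ipaddress.ip_address("198.51.100.7") in allowed)  # False (blocked)
```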
What are the benefits of the AWS Organizations service? (Choose TWO)
A. Manage your organisation’s payment methods
B. Help organisations achieve their desired business outcomes with AWS
C. Consolidate billing across multiple AWS accounts
D. Control access to AWS services
E. Help organisations design and maintain an accelerated path to successful cloud adoption
C. Consolidate billing across multiple AWS accounts and D. Control access to AWS services
AWS Organizations has five main benefits:
1) Centrally manage access policies across multiple AWS accounts.
2) Automate AWS account creation and management.
3) Control access to AWS services.
4) Consolidate billing across multiple AWS accounts.
5) Configure AWS services across multiple accounts.
** Control access to AWS services: AWS Organizations allows you to restrict what services and actions are allowed in your accounts. You can use Service Control Policies (SCPs) to apply permission guardrails on AWS Identity and Access Management (IAM) users and roles. For example, you can apply an SCP that restricts users in accounts in your organization from launching any resources in regions that you do not explicitly allow.
** Consolidate billing across multiple AWS accounts: You can use AWS Organizations to set up a single payment method for all the AWS accounts in your organization through consolidated billing. With consolidated billing, you can see a combined view of charges incurred by all your accounts, as well as take advantage of pricing benefits from aggregated usage, such as volume discounts for Amazon EC2 and Amazon S3.
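The region-guardrail SCP described above looks roughly like the sketch below. The allowed regions are hypothetical, and real SCPs usually also exempt global services (such as IAM) that operate out of a single region:

```python
import json

# Sketch of an SCP that denies all actions requested outside two
# allowed regions. The region list is a made-up example.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-west-2"]
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Note that SCPs set permission guardrails only: they never grant permissions by themselves, so an IAM user still needs an IAM policy that allows the action.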
What is AWS Billing and Cost Management?
AWS Billing and Cost Management is the service that allows you to manage your organization’s payment methods.
A company is migrating production workloads to AWS, and they are concerned about cost management across different departments. Which option should the company implement to categorize and track AWS spending?
A. Configure AWS Price List API to receive billing updates for each department automatically
B. Use Amazon Aurora to forecast AWS spending based on usage
C. Apply cost allocation tags to segment AWS costs by different projects and departments
D. Use the AWS Pricing Calculator service to monitor the costs incurred by each department
C. Apply cost allocation tags to segment AWS costs by different projects and departments
A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value; a given key can take different values on different resources. You can use tags to organize your resources, and cost allocation tags to track your AWS costs at a detailed level. After you activate cost allocation tags, AWS uses them to organize your resource costs on your cost allocation report, making it easier for you to categorize and track AWS costs across different departments.
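Conceptually, an activated cost allocation tag appears as an extra column in the cost report, so spend can be grouped per department. A minimal sketch with made-up line items and a hypothetical "Department" tag key:

```python
from collections import defaultdict

# Hypothetical cost-report line items for illustration only.
line_items = [
    {"service": "AmazonEC2", "cost": 120.0, "tags": {"Department": "Marketing"}},
    {"service": "AmazonS3",  "cost": 35.5,  "tags": {"Department": "Marketing"}},
    {"service": "AmazonRDS", "cost": 210.0, "tags": {"Department": "Finance"}},
    {"service": "AmazonEC2", "cost": 15.0,  "tags": {}},  # untagged spend
]

# Group spend by the cost allocation tag, as the report does.
spend = defaultdict(float)
for item in line_items:
    dept = item["tags"].get("Department", "(untagged)")
    spend[dept] += item["cost"]

print(dict(spend))  # {'Marketing': 155.5, 'Finance': 210.0, '(untagged)': 15.0}
```

The "(untagged)" bucket is worth watching in practice: resources created without the tag otherwise make departmental totals look smaller than the real spend.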
Amazon Aurora is a relational database service, not a cost management service. The AWS service that visualizes and forecasts spending based on usage is AWS Cost Explorer.
The AWS Price List API is used to query the prices of AWS services programmatically. It does not send billing updates to AWS customers.
AWS Pricing Calculator does not record any information about your AWS cost and usage. AWS Pricing Calculator is just a tool for estimating your monthly AWS bill based on your expected usage. For example, to estimate your monthly AWS CloudFront bill, you just enter your expected CloudFront usage (Data Transfer Out, Number of requests, etc.) and AWS Pricing Calculator provides an estimate of your monthly bill for CloudFront.