AWS Associates Flashcards
Under a single AWS account, you have set up an Auto Scaling group with a maximum capacity of 50
Amazon Elastic Compute Cloud (Amazon EC2) instances in us-west-2. When you scale out, however,
it only increases to 20 Amazon EC2 instances. What is the likely cause?
A. Auto Scaling has a hard limit of 20 Amazon EC2 instances.
B. If not specified, the Auto Scaling group maximum capacity defaults to 20 Amazon EC2 instances.
C. The Auto Scaling group desired capacity is set to 20, so Auto Scaling stopped at 20 Amazon EC2
instances.
D. You have exceeded the default Amazon EC2 instance limit of 20 per region.
D. Auto Scaling may cause you to reach limits of other services, such as the default number of
Amazon EC2 instances you can currently launch within a region, which is 20.
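The interaction between the group's maximum size and the regional instance limit can be modeled as a minimal sketch (plain Python, not an AWS API; the function name and parameters are illustrative only):

```python
def effective_capacity(desired: int, group_max: int, account_limit: int,
                       running_elsewhere: int = 0) -> int:
    """Sketch: the fleet size Auto Scaling can actually reach is capped by
    both the group's maximum size and the regional EC2 instance limit."""
    headroom = account_limit - running_elsewhere  # instances still launchable
    return min(desired, group_max, headroom)

# A group with max 50 stalls at 20 when the default regional limit (20) binds:
print(effective_capacity(desired=50, group_max=50, account_limit=20))  # 20
```

Whichever limit is lowest wins, which is why raising the group maximum alone does not help until the regional limit is increased.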
Elastic Load Balancing allows you to distribute traffic across which of the following?
A. Only within a single Availability Zone
B. Multiple Availability Zones within a region
C. Multiple Availability Zones within and between regions
D. Multiple Availability Zones within and between regions and on-premises virtualized instances
running OpenStack
B. The Elastic Load Balancing service allows you to distribute traffic across a group of Amazon Elastic Compute Cloud (Amazon EC2) instances in one or more Availability Zones within a region.
Amazon CloudWatch offers which types of monitoring plans? (Choose 2 answers)
A. Basic
B. Detailed
C. Diagnostic
D. Precognitive
E. Retroactive
A and B. Amazon CloudWatch has two plans: basic and detailed. There are no diagnostic,
precognitive, or retroactive monitoring plans for Amazon CloudWatch.
An Amazon Elastic Compute Cloud (Amazon EC2) instance in an Amazon Virtual Private Cloud
(Amazon VPC) subnet can send and receive traffic from the Internet when which of the following
conditions are met? (Choose 3 answers)
A. Network Access Control Lists (ACLs) and security group rules disallow all traffic except relevant
Internet traffic.
B. Network ACLs and security group rules allow relevant Internet traffic.
C. Attach an Internet Gateway (IGW) to the Amazon VPC and create a subnet route table to send all
non-local traffic to that IGW.
D. Attach a Virtual Private Gateway (VPG) to the Amazon VPC and create subnet routes to send all
non-local traffic to that VPG.
E. The Amazon EC2 instance has a public IP address or Elastic IP (EIP) address.
F. The Amazon EC2 instance does not need a public IP or Elastic IP when using Amazon VPC.
B, C, and E. You must do the following to create a public subnet with Internet access:
Attach an IGW to your Amazon VPC.
Create a subnet route table rule to send all non-local traffic (for example, 0.0.0.0/0) to the IGW.
Configure your network ACLs and security group rules to allow relevant traffic to flow to and from
your instance.
You must do the following to enable an Amazon EC2 instance to send and receive traffic from the
Internet:
Assign a public IP address or EIP address.
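The routing half of the checklist above can be sketched as a small model. The dictionary keys mirror the names used in VPC route descriptions, but the gateway ID is hypothetical and the checker function is illustrative only:

```python
# Hypothetical IDs for illustration; a real VPC supplies its own.
public_subnet_route = {
    "DestinationCidrBlock": "0.0.0.0/0",  # all non-local traffic...
    "GatewayId": "igw-12345678",          # ...is routed to the Internet Gateway
}

def is_public_route(route: dict) -> bool:
    """A subnet is 'public' when its route table sends 0.0.0.0/0 to an IGW."""
    return (route.get("DestinationCidrBlock") == "0.0.0.0/0"
            and route.get("GatewayId", "").startswith("igw-"))

print(is_public_route(public_subnet_route))  # True
```

A route sending 0.0.0.0/0 to a Virtual Private Gateway (`vgw-...`) instead would describe a VPN-connected subnet, not a public one, which is why option D is wrong.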
If you launch five Amazon Elastic Compute Cloud (Amazon EC2) instances in an Amazon Virtual
Private Cloud (Amazon VPC) without specifying a security group, the instances will be launched into
a default security group that provides which of the following? (Choose 3 answers)
A. The five Amazon EC2 instances can communicate with each other.
B. The five Amazon EC2 instances cannot communicate with each other.
C. All inbound traffic will be allowed to the five Amazon EC2 instances.
D. No inbound traffic will be allowed to the five Amazon EC2 instances.
E. All outbound traffic will be allowed from the five Amazon EC2 instances.
F. No outbound traffic will be allowed from the five Amazon EC2 instances.
A, D, and E. If a security group is not specified at launch, then an Amazon EC2 instance will be
launched into the default security group for the Amazon VPC. The default security group allows
communication between all resources within the security group, allows all outbound traffic, and
denies all other traffic.
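The default security group's behavior can be modeled as a minimal sketch (plain Python, not an AWS API; the function is illustrative only):

```python
def default_sg_allows(direction: str, peer_in_same_group: bool) -> bool:
    """Model of the VPC default security group:
    - all outbound traffic is allowed
    - inbound traffic is allowed only from members of the same group
    - every other inbound flow is denied
    """
    if direction == "outbound":
        return True
    return peer_in_same_group  # inbound

# The five instances share the group, so they can talk to each other...
print(default_sg_allows("inbound", peer_in_same_group=True))   # True
# ...but inbound traffic from outside the group is dropped.
print(default_sg_allows("inbound", peer_in_same_group=False))  # False
```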
Your company wants to host its secure web application in AWS. The internal security policies
consider any connections to or from the web server as insecure and require application data
protection. What approaches should you use to protect data in transit for the application? (Choose 2
answers)
A. Use BitLocker to encrypt data.
B. Use HTTPS with server certificate authentication.
C. Use an AWS Identity and Access Management (IAM) role.
D. Use Secure Sockets Layer (SSL)/Transport Layer Security (TLS) for database connection.
E. Use XML for data transfer from client to server.
B and D. To protect data in transit from the clients to the web application, HTTPS with server
certificate authentication should be used. To protect data in transit from the web application to the
database, SSL/TLS for database connection should be used.
You have an application that will run on an Amazon Elastic Compute Cloud (Amazon EC2) instance.
The application will make requests to Amazon Simple Storage Service (Amazon S3) and Amazon
DynamoDB. Using best practices, what type of AWS Identity and Access Management (IAM) identity
should you create for your application to access the identified services?
A. IAM role
B. IAM user
C. IAM group
D. IAM directory
A. Don’t create an IAM user (or an IAM group) and pass the user’s credentials to the application or
embed the credentials in the application. Instead, create an IAM role that you attach to the Amazon
EC2 instance to give applications running on the instance temporary security credentials. The
credentials have the permissions specified in the policies attached to the role. A directory is not an
identity object in IAM.
When a request is made to an AWS Cloud service, the request is evaluated to decide whether it should
be allowed or denied. The evaluation logic follows which of the following rules? (Choose 3 answers)
A. An explicit allow overrides any denies.
B. By default, all requests are denied.
C. An explicit allow overrides the default.
D. An explicit deny overrides any allows.
E. By default, all requests are allowed.
B, C, and D. When a request is made, the AWS service decides whether a given request should be
allowed or denied. The evaluation logic follows these rules:
1) By default, all requests are denied (in general, requests made using the account credentials for
resources in the account are always allowed).
2) An explicit allow overrides this default.
3) An explicit deny overrides any allows.
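The three rules above can be sketched as a tiny evaluation function (a simplified model of the real IAM evaluator, not the actual implementation):

```python
def evaluate(statement_effects):
    """IAM-style evaluation logic: default deny, an explicit allow
    overrides the default, an explicit deny overrides any allow."""
    decision = "deny"            # 1) by default, all requests are denied
    for effect in statement_effects:
        if effect == "Deny":
            return "deny"        # 3) an explicit deny always wins
        if effect == "Allow":
            decision = "allow"   # 2) an explicit allow overrides the default
    return decision

print(evaluate([]))                  # deny  (nothing matched: default)
print(evaluate(["Allow"]))           # allow
print(evaluate(["Allow", "Deny"]))   # deny  (explicit deny wins)
```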
What is the data processing engine behind Amazon Elastic MapReduce (Amazon EMR)?
A. Apache Hadoop
B. Apache Hive
C. Apache Pig
D. Apache HBase
A. Amazon EMR uses Apache Hadoop as its distributed data processing engine. Hadoop is an open
source, Java software framework that supports data-intensive distributed applications running on
large clusters of commodity hardware. Hive, Pig, and HBase are packages that run on top of Hadoop.
What type of AWS Elastic Beanstalk environment tier provisions resources to support a web
application that handles background processing tasks?
A. Web server environment tier
B. Worker environment tier
C. Database environment tier
D. Batch environment tier
B. An environment tier whose web application runs background jobs is known as a worker tier. An
environment tier whose web application processes web requests is known as a web server tier.
Database and batch are not valid environment tiers.
What Amazon Relational Database Service (Amazon RDS) feature provides high availability for your database?
A. Regular maintenance windows
B. Security groups
C. Automated backups
D. Multi-AZ deployment
D. Multi-AZ deployment uses synchronous replication to a different Availability Zone so that
operations can continue on the replica if the master database stops responding for any reason.
Automated backups provide disaster recovery, not high availability. Security groups, while important,
have no effect on availability. Maintenance windows are actually times when the database may not be
available.
What administrative tasks are handled by AWS for Amazon Relational Database Service (Amazon
RDS) databases? (Choose 3 answers)
A. Regular backups of the database
B. Deploying virtual infrastructure
C. Deploying the schema (for example, tables and stored procedures)
D. Patching the operating system and database software
E. Setting up non-admin database accounts and privileges
A, B, and D. Amazon RDS will launch Amazon Elastic Compute Cloud (Amazon EC2) instances,
install the database software, handle all patching, and perform regular backups. Anything within the
database software (schema, user accounts, and so on) is the responsibility of the customer.
Which of the following use cases is well suited for Amazon Redshift?
A. A 500TB data warehouse used for market analytics
B. A NoSQL, unstructured database workload
C. A high traffic, e-commerce web application
D. An in-memory cache
A. Amazon Redshift is a petabyte-scale data warehouse. It is not well suited for unstructured NoSQL
data or highly dynamic transactional data. It is in no way a cache.
Which of the following statements about Amazon DynamoDB secondary indexes is true?
A. There can be many per table, and they can be created at any time.
B. There can be many per table, but they must be created when the table is created.
C. There can be only one per table, and it can be created at any time.
D. There can be only one per table, and it must be created when the table is created.
D. There can be only one secondary index per table, and it must be created when the table is created.
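Because the index must exist from the start, it appears inside the table-creation request itself. The sketch below is a fragment of such a request as a plain dictionary; the key names mirror DynamoDB's `CreateTable` parameters, while the table, attribute, and index names are hypothetical:

```python
# Hypothetical table definition; the local secondary index must be declared
# here because it cannot be added after the table exists.
create_table_params = {
    "TableName": "Orders",
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    "LocalSecondaryIndexes": [  # declared at creation time only
        {
            "IndexName": "OrderDateIndex",
            "KeySchema": [
                {"AttributeName": "CustomerId", "KeyType": "HASH"},
                {"AttributeName": "OrderDate", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
}
print(create_table_params["LocalSecondaryIndexes"][0]["IndexName"])
```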
What is the primary use case of Amazon Kinesis Firehose?
A. Ingest huge streams of data and allow custom processing of data in flight.
B. Ingest huge streams of data and store it to Amazon Simple Storage Service (Amazon S3), Amazon
Redshift, or Amazon Elasticsearch Service.
C. Generate a huge stream of data from an Amazon S3 bucket.
D. Generate a huge stream of data from Amazon DynamoDB.
B. The Amazon Kinesis family of services provides functionality to ingest large streams of data.
Amazon Kinesis Firehose is specifically designed to ingest a stream and save it to any of the three
storage services listed in Response B.
Your company has 17TB of financial trading records that need to be stored for seven years by law.
Experience has shown that any record more than a year old is unlikely to be accessed. Which of the
following storage plans meets these needs in the most cost-efficient manner?
A. Store the data on Amazon Elastic Block Store (Amazon EBS) volume attached to t2.large
instances.
B. Store the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change
the storage class to Amazon Glacier after one year, and delete the object after seven years.
C. Store the data in Amazon DynamoDB, and delete data older than seven years.
D. Store the data in an Amazon Glacier Vault Lock.
B. Amazon S3 and Amazon Glacier are the most cost-effective storage services. After a year, when the
objects are unlikely to be accessed, you can save costs by transferring the objects to Amazon Glacier
where the retrieval time is three to five hours.
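The lifecycle policy from option B can be sketched as a single rule. The key names mirror the S3 lifecycle configuration API; the rule ID and prefix are hypothetical:

```python
# Sketch of the lifecycle rule: archive to Glacier after one year,
# delete after seven years (365 * 7 = 2555 days).
lifecycle_rule = {
    "ID": "archive-trading-records",        # hypothetical rule name
    "Filter": {"Prefix": "trading-records/"},
    "Status": "Enabled",
    "Transitions": [
        {"Days": 365, "StorageClass": "GLACIER"}  # archive after one year
    ],
    "Expiration": {"Days": 7 * 365},              # delete after seven years
}
print(lifecycle_rule["Expiration"]["Days"])  # 2555
```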
What must you do to create a record of who accessed your Amazon Simple Storage Service (Amazon
S3) data and from where?
A. Enable Amazon CloudWatch logs.
B. Enable versioning on the bucket.
C. Enable website hosting on the bucket.
D. Enable server access logs on the bucket.
E. Create an AWS Identity and Access Management (IAM) bucket policy.
D. Server access logs provide a record of any access to an object in Amazon S3.
Amazon Simple Storage Service (Amazon S3) is an eventually consistent storage system. For what
kinds of operations is it possible to get stale data as a result of eventual consistency?
A. GET after PUT of a new object
B. GET or LIST after a DELETE
C. GET after overwrite PUT (PUT to an existing key)
D. DELETE after GET of new object
C. Amazon S3 provides read-after-write consistency for PUTs to new objects (new key), but eventual
consistency for overwrite PUTs and DELETEs of existing objects (existing key). Response C changes an
existing object, so a subsequent GET may return the previous, stale version of the object.
How is data stored in Amazon Simple Storage Service (Amazon S3) for high durability?
A. Data is automatically replicated to other regions.
B. Data is automatically replicated to different Availability Zones within a region.
C. Data is replicated only if versioning is enabled on the bucket.
D. Data is automatically backed up on tape and restored if needed.
B. AWS will never transfer data between regions unless directed to by you. Durability in Amazon S3 is
achieved by replicating your data across multiple facilities in different Availability Zones within the
same region, regardless of the versioning configuration. AWS doesn't use tapes.
Your company needs to provide streaming access to videos to authenticated users around the world.
What is a good way to accomplish this?
A. Use Amazon Simple Storage Service (Amazon S3) buckets in each region with website hosting
enabled.
B. Store the videos on Amazon Elastic Block Store (Amazon EBS) volumes.
C. Enable Amazon CloudFront with geolocation and signed URLs.
D. Run a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances to host the videos.
C. Amazon CloudFront provides the best user experience by delivering the data from a geographically
advantageous edge location. Signed URLs allow you to control access to authenticated users.
Which of the following are true about the AWS shared responsibility model? (Choose 3 answers)
A. AWS is responsible for all infrastructure components (that is, AWS Cloud services) that support
customer deployments.
B. The customer is responsible for the components from the guest operating system upward
(including updates, security patches, and antivirus software).
C. The customer may rely on AWS to manage the security of their workloads deployed on AWS.
D. While AWS manages security of the cloud, security in the cloud is the responsibility of the
customer.
E. The customer must audit the AWS data centers personally to confirm the compliance of AWS
systems and services.
A, B, and D. In the AWS shared responsibility model, customers retain control of what security they
choose to implement to protect their own content, platform, applications, systems, and networks, no
differently than they would for applications in an on-site data center.
Which process in an Amazon Simple Workflow Service (Amazon SWF) workflow implements a task?
A. Decider
B. Activity worker
C. Workflow starter
D. Business rule
B. An activity worker is a process or thread that performs the activity tasks that are part of your
workflow. Each activity worker polls Amazon SWF for new tasks that are appropriate for that activity
worker to perform; certain tasks can be performed only by certain activity workers. After receiving a
task, the activity worker processes the task to completion and then reports to Amazon SWF that the
task was completed and provides the result. The activity task represents one of the tasks that you
identified in your application.
Which of the following is true if you stop an Amazon Elastic Compute Cloud (Amazon EC2) instance
with an Elastic IP address in an Amazon Virtual Private Cloud (Amazon VPC)?
A. The instance is disassociated from its Elastic IP address and must be re-attached when the
instance is restarted.
B. The instance remains associated with its Elastic IP address.
C. The Elastic IP address is released from your account.
D. The instance is disassociated from the Elastic IP address temporarily while you restart the
instance.
B. In an Amazon VPC, an instance’s Elastic IP address remains associated with an instance when the
instance is stopped.
Which Amazon Elastic Compute Cloud (Amazon EC2) pricing model allows you to pay a set hourly
price for compute, giving you full control over when the instance launches and terminates?
A. Spot instances
B. Reserved instance
C. On-Demand instances
D. Dedicated instances
C. You pay a set hourly price for an On-Demand instance from the time you launch it until you explicitly
stop or terminate it. Spot instances can be terminated when the Spot price rises above your bid price.
Reserved instances involve paying for an instance over a one- or three-year term. Dedicated instances
run on hardware dedicated to your account and are not a pricing model.
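The cost trade-off between On-Demand and Reserved can be sketched with simple arithmetic. All prices below are made up for illustration and are not real AWS rates:

```python
# Illustrative rates only (NOT real AWS pricing): On-Demand is a fixed
# hourly price with no commitment; a Reserved instance trades an upfront
# payment for a lower hourly rate over the term.
ON_DEMAND_RATE = 0.10  # hypothetical $/hour

def on_demand_cost(hours: float) -> float:
    return hours * ON_DEMAND_RATE

def reserved_cost(hours: float, upfront: float = 300.0,
                  hourly: float = 0.04) -> float:
    return upfront + hours * hourly

# Light usage favors On-Demand; steady 24x7 usage favors Reserved.
print(on_demand_cost(100) < reserved_cost(100))    # True  (10.0 vs 304.0)
print(on_demand_cost(8760) < reserved_cost(8760))  # False (876.0 vs 650.4)
```

Under these assumed rates, On-Demand wins for occasional workloads while the Reserved term pays for itself once the instance runs most of the year.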