Big Data BDS-C00 Flashcards

1
Q

An organization is developing a mobile social application and needs to collect logs from all devices on which it is installed. The organization is evaluating Amazon Kinesis Data Streams to push logs and Amazon EMR to process data. They want to store data on HDFS using the default replication factor to replicate data among the cluster, but they are concerned about the durability of the data. Currently, they are producing 300 GB of raw data daily, with additional spikes during special events. They will need to scale out the Amazon EMR cluster to match the increase in streamed data.

Which solution prevents data loss and matches compute demand?

A. Use multiple Amazon EBS volumes on Amazon EMR to store processed data and scale out the Amazon EMR cluster as needed.
B. Use the EMR File System and Amazon S3 to store processed data and scale out the Amazon EMR cluster as needed.
C. Use Amazon DynamoDB to store processed data and scale out the Amazon EMR cluster as needed.
D. Use Amazon Kinesis Data Firehose and, instead of using Amazon EMR, stream logs directly into Amazon Elasticsearch Service.

A

D. Use Amazon Kinesis Data Firehose and, instead of using Amazon EMR, stream logs directly into Amazon Elasticsearch Service.

2
Q

A user is running a web server on EC2. The user wants to receive an SMS when the EC2 instance utilization is above a threshold limit.

Which AWS services should the user configure in this case?

A. AWS CloudWatch + AWS SES
B. AWS CloudWatch + AWS SNS
C. AWS CloudWatch + AWS SQS
D. AWS EC2 + AWS CloudWatch

A

B. AWS CloudWatch + AWS SNS
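The CloudWatch + SNS pairing boils down to a metric alarm whose action is an SNS topic with an SMS subscription. A minimal sketch of the alarm parameters and the evaluation logic; the alarm name, topic ARN, and 80% threshold are hypothetical, and in practice the dict would be passed to boto3's `put_metric_alarm`:

```python
# Sketch of a CloudWatch alarm that would page the user via SNS.
# Names, ARN, and threshold are hypothetical placeholders.
alarm_kwargs = {
    "AlarmName": "webserver-cpu-high",           # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,                               # evaluate 5-minute averages
    "EvaluationPeriods": 2,
    "Threshold": 80.0,                           # alert above 80% CPU
    "ComparisonOperator": "GreaterThanThreshold",
    # SNS topic with an SMS subscription (hypothetical ARN)
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-sms"],
}

def alarm_fires(datapoints, kwargs):
    """Mimic CloudWatch's evaluation: the alarm fires when the last
    EvaluationPeriods datapoints all breach the threshold."""
    recent = datapoints[-kwargs["EvaluationPeriods"]:]
    return all(d > kwargs["Threshold"] for d in recent)

print(alarm_fires([50, 85, 92], alarm_kwargs))  # two consecutive breaches -> True
print(alarm_fires([90, 40, 95], alarm_kwargs))  # only the latest breaches -> False
```

The key design point is that CloudWatch only detects the condition; the SMS delivery itself is entirely SNS's job, which is why options involving SES or SQS do not fit.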

3
Q

It is advised that you watch the Amazon CloudWatch “_____” metric (available via the AWS Management Console or Amazon CloudWatch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors.

A. Write Lag
B. Read Replica
C. Replica Lag
D. Single Replica

A

C. Replica Lag

4
Q

You have been asked to use your department’s existing continuous integration (CI) tool to test a three-tier web architecture defined in an AWS CloudFormation template. The tool already supports AWS APIs and can launch new AWS CloudFormation stacks after polling version control. The CI tool reports on the success of the AWS CloudFormation stack creation by using the DescribeStacks API to look for the CREATE_COMPLETE status.
The architecture tiers defined in the template consist of:
One load balancer
Five Amazon EC2 instances running the web application
One multi-AZ Amazon RDS instance

How would you implement this?

Choose 2 answers

A. Define a WaitCondition and a WaitConditionHandle for the output of a UserData command that does sanity checking of the application’s post-install state
B. Define a CustomResource and write a script that runs architecture-level integration tests through the load balancer to the application and database for the state of multiple tiers
C. Define a WaitCondition and use a WaitConditionHandle that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned
D. Define a CustomResource that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned
E. Define a UserDataHandle for the output of a UserData command that does sanity checking of the application’s post-install state and runs integration tests on the state of multiple tiers through load balancer to the application
F. Define a UserDataHandle for the output of a CustomResource that does sanity checking of the application’s post-install state

A

A. Define a WaitCondition and a WaitConditionHandle for the output of a UserData command that does sanity checking of the application’s post-install state

F. Define a UserDataHandle for the output of a CustomResource that does sanity checking of the application’s post-install state

5
Q

By default, what are ENIs that are automatically created and attached to instances using the EC2 console set to do when the attached instance terminates?

A. Remain as is
B. Terminate
C. Hibernate
D. Pause

A

B. Terminate

6
Q

Without _____, you must either create multiple AWS accounts, each with its own billing and subscriptions to AWS products, or your employees must share the security credentials of a single AWS account.

A. Amazon RDS
B. Amazon Glacier
C. Amazon EMR
D. Amazon IAM

A

D. Amazon IAM

7
Q

The project you are working on currently uses a single AWS CloudFormation template to deploy its AWS infrastructure, which supports a multi-tier web application. You have been tasked with organizing the AWS CloudFormation resources so that they can be maintained in the future, and so that different departments such as Networking and Security can review the architecture before it goes to Production.
How should you do this in a way that accommodates each department, using their existing workflows?

A. Organize the AWS CloudFormation template so that related resources are next to each other in the template, such as VPC subnets and routing rules for Networking and Security groups and IAM information for Security
B. Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control
C. Organize the AWS CloudFormation template so that related resources are next to each other in the template for each department’s use, leverage your existing continuous integration tool to constantly deploy changes from all parties to the Production environment, and then run tests for validation
D. Use a custom application and the AWS SDK to replicate the resources defined in the current AWS CloudFormation template, and use the existing code review system to allow other departments to approve changes before altering the application for future deployments

A

B. Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control

8
Q

An administrator is processing events in near real-time using Kinesis streams and Lambda. Lambda intermittently fails to process batches from one of the shards due to the 5-minute time limit.

What is a possible solution for this problem?

A. Add more Lambda functions to improve concurrent batch processing
B. Reduce the batch size that Lambda is reading from the stream
C. Ignore and skip events that are older than 5 minutes and put them to Dead Letter Queue (DLQ)
D. Configure Lambda to read from fewer shards in parallel

A

D. Configure Lambda to read from fewer shards in parallel

9
Q
Fill in the blanks: A _____ is a storage device that moves data in sequences of bytes or bits (blocks). Hint: These devices support random access and generally use buffered I/O.

A. block map
B. storage block
C. mapping device
D. block device
A

D. block device

10
Q
What does Amazon EBS stand for?
A. Elastic Block Storage
B. Elastic Business Server
C. Elastic Blade Server
D. Elastic Block Store
A

D. Elastic Block Store

11
Q

You have an ASP.NET web application running in AWS Elastic Beanstalk. Your next version of the application requires a third-party Windows installer package to be installed on the instance on first boot and before the application launches.

Which options are possible? Choose 2 answers

A. In the application’s Global.asax file, run msiexec.exe to install the package using Process.Start() in the Application_Start event handler
B. In the source bundle’s .ebextensions folder, create a file with a .config extension. In the file, under the “packages” section and “msi” package manager, include the package’s URL
C. Launch a new Amazon EC2 instance from the AMI used by the environment. Log into the instance, install the package and run sysprep. Create a new AMI. Configure the environment to use the new AMI
D. In the environment’s configuration, edit the instances configuration and add the package’s URL to the “Packages” section
E. In the source bundle’s .ebextensions folder, create a “Packages” folder. Place the package in the folder

A

B. In the source bundle’s .ebextensions folder, create a file with a .config extension. In the file, under the “packages” section and “msi” package manager, include the package’s URL
C. Launch a new Amazon EC2 instance from the AMI used by the environment. Log into the instance, install the package and run sysprep. Create a new AMI. Configure the environment to use the new AMI

12
Q

A gas company needs to monitor gas pressure in their pipelines. Pressure data is streamed from sensors placed throughout the pipelines to monitor the data in real time. When an anomaly is detected, the system must send a notification to open a valve. An Amazon Kinesis stream collects the data from the sensors, and an anomaly Kinesis stream triggers an AWS Lambda function to open the appropriate valve.

Which solution is the MOST cost-effective for responding to anomalies in real time?

A. Attach a Kinesis Firehose to the stream and persist the sensor data in an Amazon S3 bucket. Schedule an AWS Lambda function to run a query in Amazon Athena against the data in Amazon S3 to identify anomalies. When a change is detected, the Lambda function sends a message to the anomaly stream to open the valve.
B. Launch an Amazon EMR cluster that uses Spark Streaming to connect to the Kinesis stream and Spark machine learning to detect anomalies. When a change is detected, the Spark application sends a message to the anomaly stream to open the valve.
C. Launch a fleet of Amazon EC2 instances with a Kinesis Client Library application that consumes the stream and aggregates sensor data over time to identify anomalies. When an anomaly is detected, the application sends a message to the anomaly stream to open the valve.
D. Create a Kinesis Analytics application by using the RANDOM_CUT_FOREST function to detect an anomaly. When the anomaly score that is returned from the function is outside of an acceptable range, a message is sent to the anomaly stream to open the valve.

A

A. Attach a Kinesis Firehose to the stream and persist the sensor data in an Amazon S3 bucket. Schedule an AWS Lambda function to run a query in Amazon Athena against the data in Amazon S3 to identify anomalies. When a change is detected, the Lambda function sends a message to the anomaly stream to open the valve.
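Whichever detector produces the signal (an Athena query over the data in Amazon S3, or a RANDOM_CUT_FOREST score from Kinesis Analytics), the last step is a threshold check deciding which readings go to the anomaly stream. A minimal sketch with a hypothetical acceptable pressure band and sample readings:

```python
# Minimal sketch: decide which pressure readings should trigger a message
# on the anomaly stream. The band and readings are hypothetical.
ACCEPTABLE_RANGE = (40.0, 80.0)  # PSI, assumed operating band

def is_anomaly(pressure_psi, low=ACCEPTABLE_RANGE[0], high=ACCEPTABLE_RANGE[1]):
    """A reading outside the acceptable band counts as an anomaly."""
    return not (low <= pressure_psi <= high)

readings = [55.2, 61.0, 97.4, 58.8, 12.1]
anomalies = [r for r in readings if is_anomaly(r)]
print(anomalies)  # the readings that would open a valve
```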

13
Q

An Amazon Redshift Database is encrypted using KMS. A data engineer needs to use the AWS CLI to create a KMS encrypted snapshot of the database in another AWS region.

Which three steps should the data engineer take to accomplish this task? (Select Three.)

A. Create a new KMS key in the destination region
B. Copy the existing KMS key to the destination region
C. Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key created in the destination region
D. Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key from the source region
E. In the source, enable cross-region replication and specify the name of the copy grant created
F. In the destination region, enable cross-region replication and specify the name of the copy grant created

A

A. Create a new KMS key in the destination region

D. Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key from the source region

F. In the destination region, enable cross-region replication and specify the name of the copy grant created
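The selected steps map onto two Redshift API calls. A sketch of their parameters, with hypothetical cluster name, grant name, key ARN, and regions; in practice these dicts would be passed to boto3's `create_snapshot_copy_grant` and `enable_snapshot_copy`:

```python
# Sketch of the API parameters for a cross-region, KMS-encrypted snapshot
# copy. All names, the key ARN, and the regions are hypothetical.
copy_grant = {
    "SnapshotCopyGrantName": "prod-copy-grant",  # hypothetical grant name
    # KMS key Redshift may use for the copied snapshots (hypothetical ARN)
    "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/EXAMPLE",
}

enable_copy = {
    "ClusterIdentifier": "prod-redshift",        # hypothetical cluster
    "DestinationRegion": "us-west-2",
    "RetentionPeriod": 7,                        # days to keep copied snapshots
    "SnapshotCopyGrantName": copy_grant["SnapshotCopyGrantName"],
}

# The grant name wired into the copy configuration must match the grant
# created above, or the encrypted copy will fail.
print(enable_copy["SnapshotCopyGrantName"])
```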

14
Q

You have been tasked with implementing an automated data backup solution for your application servers that run on Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points of failure and to increase the durability of the data. Daily backups should be retained for 30 days so that you can restore data within an hour.

How can you implement this through a script that a scheduling daemon runs daily on the application servers?

A. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and copy backup data to a second Amazon EBS volume. Use the ec2-describe-volumes API to enumerate existing backup volumes. Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days
B. Write the script to call the Amazon Glacier upload archive API, and tag the backup archive with the current date-time group. Use the list vaults API to enumerate existing backup archives. Call the delete vault API to prune backup archives that are tagged with a date-time group older than 30 days
C. Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group. Use the ec2-describe-snapshot API to enumerate existing Amazon EBS snapshots. Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days
D. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to back up data to the new Amazon EBS volume. Use the ec2-describe-snapshot API to enumerate existing backup volumes. Call the ec2-delete-snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days

A

C. Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group. Use the ec2-describe-snapshot API to enumerate existing Amazon EBS snapshots. Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days
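The pruning half of option C can be sketched as a pure function over tagged snapshots. The snapshot list here is hypothetical; a real script would build it from the ec2-describe-snapshots API:

```python
from datetime import datetime, timedelta, timezone

# Sketch of option C's pruning logic: given snapshots tagged with a
# date-time group, return the IDs of those older than the 30-day
# retention window. Snapshot records below are hypothetical.
RETENTION = timedelta(days=30)

def snapshots_to_prune(snapshots, now):
    """Return IDs of snapshots whose date-time-group tag is past retention."""
    return [s["SnapshotId"] for s in snapshots
            if now - datetime.fromisoformat(s["DateTimeGroup"]) > RETENTION]

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
snapshots = [
    {"SnapshotId": "snap-old", "DateTimeGroup": "2024-05-01T02:00:00+00:00"},
    {"SnapshotId": "snap-new", "DateTimeGroup": "2024-06-29T02:00:00+00:00"},
]
print(snapshots_to_prune(snapshots, now))  # only the 60-day-old snapshot
```

Because snapshots live in Amazon S3 behind the scenes, this approach also satisfies the distributed-storage durability requirement in the question.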

15
Q

An enterprise customer is migrating to Redshift and is considering using dense storage nodes in its Redshift cluster. The customer wants to migrate 50 TB of data. The customer’s query patterns involve performing many joins with thousands of rows. The customer needs to know how many nodes are needed in its target Redshift cluster. The customer has a limited budget and needs to avoid performing tests unless absolutely needed.

Which approach should this customer use?

A. Start with many small nodes
B. Start with fewer large nodes
C. Have two separate clusters with a mix of small and large nodes
D. Insist on performing multiple tests to determine the optimal configuration

A

A. Start with many small nodes

16
Q

An organization is using Amazon Kinesis Data Streams to collect data generated from thousands of temperature devices and is using AWS Lambda to process the data. Devices generate 10 to 12 million records every day, but Lambda is processing only around 450 thousand records. Amazon CloudWatch indicates that throttling on Lambda is not occurring.

What should be done to ensure that all data is processed? (Choose two.)

A. Increase the BatchSize value on the EventSource, and increase the memory allocated to the Lambda function.
B. Decrease the BatchSize value on the EventSource, and increase the memory allocated to the Lambda function.
C. Create multiple Lambda functions that will consume the same Amazon Kinesis stream.
D. Increase the number of vCores allocated for the Lambda function.
E. Increase the number of shards on the Amazon Kinesis stream.

A

A. Increase the BatchSize value on the EventSource, and increase the memory allocated to the Lambda function.

E. Increase the number of shards on the Amazon Kinesis stream.
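Adding shards helps because each shard caps ingest at roughly 1,000 records/s and 1 MB/s. A back-of-the-envelope sizing sketch; the 10x peak factor and 2 KB record size are assumptions, not figures from the question:

```python
import math

# Rough shard sizing for the ingest side of a Kinesis stream, using the
# per-shard limits of 1,000 records/s and 1 MB/s. Peak rate and record
# size below are hypothetical.
def shards_needed(peak_records_per_sec, avg_record_kb):
    by_count = peak_records_per_sec / 1000.0                 # record-rate limit
    by_bytes = (peak_records_per_sec * avg_record_kb) / 1024.0  # MB/s limit
    return max(1, math.ceil(max(by_count, by_bytes)))

# 12 million records/day is only ~139 records/s on average, but sizing
# should use the peak; assume a 10x burst (~1,400 records/s) at 2 KB each.
print(shards_needed(1400, 2))
```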

17
Q

A company needs to monitor the read and write IOPS metrics for their Amazon RDS MySQL instances and send real-time alerts to their operations team. Which AWS services can accomplish this?
Choose 2 answers

A. Amazon Simple Email Service
B. Amazon CloudWatch
C. Amazon Simple Queue Service
D. Amazon Route 53
E. Amazon Simple Notification Service
A

B. Amazon CloudWatch

E. Amazon Simple Notification Service

18
Q

When should I choose Provisioned IOPS over Standard RDS storage?
A. If you use production online transaction processing (OLTP) workloads.
B. If you have batch-oriented workloads
C. If you have workloads that are not sensitive to consistent performance

A

A. If you use production online transaction processing (OLTP) workloads.

19
Q

A customer needs to determine the optimal distribution strategy for the ORDERS fact table in its Redshift schema. The ORDERS table has foreign key relationships with multiple dimension tables in this schema.

How should the company determine the most appropriate distribution key for the ORDERS table?

A. Identify the largest and most frequently joined dimension table and ensure that it and the ORDERS table both have EVEN distribution
B. Identify the target dimension table and designate the key of this dimension table as the distribution key of the ORDERS table
C. Identify the smallest dimension table and designate the key of this dimension table as the distribution key of the ORDERS table
D. Identify the largest and most frequently joined dimension table and designate the key of this dimension table as the distribution key for the ORDERS table

A

D. Identify the largest and most frequently joined dimension table and designate the key of this dimension table as the distribution key for the ORDERS table
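The heuristic in the answer can be sketched by ranking the dimension tables on join frequency and size; the table names and statistics below are hypothetical:

```python
# Sketch of the selection heuristic: among the dimension tables joined to
# ORDERS, pick the key of the largest, most frequently joined one as the
# distribution key. Statistics are hypothetical.
dimensions = [
    {"table": "DIM_DATE",     "rows": 10_000,    "joins_per_day": 500},
    {"table": "DIM_CUSTOMER", "rows": 5_000_000, "joins_per_day": 900},
    {"table": "DIM_REGION",   "rows": 50,        "joins_per_day": 20},
]

# Rank by join frequency first, then by size, and take the winner.
best = max(dimensions, key=lambda d: (d["joins_per_day"], d["rows"]))
print(best["table"])  # its key column becomes the DISTKEY on ORDERS
```

Distributing ORDERS on that key co-locates joining rows on the same Redshift slice, avoiding data redistribution during the most expensive joins.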

20
Q
In the 'Detailed' monitoring data available for your Amazon EBS volumes, Provisioned IOPS volumes automatically send _____ minute metrics to Amazon CloudWatch.

A. 5
B. 2
C. 1
D. 3
A

C. 1

21
Q

A medical record filing system for a government medical fund is using an Amazon S3 bucket to archive documents related to patients. Every patient visit to a physician creates a new file, which can add up to millions of files each month. Collection of these files from each physician is handled via a batch process that runs every night using AWS Data Pipeline. This is sensitive data, so the data and any associated metadata must be encrypted at rest.
Auditors review some files on a quarterly basis to see whether the records are maintained according to regulations. Auditors must be able to locate any physical file in the S3 bucket for a given date, patient, or physician. Auditors spend a significant amount of time locating such files.

What is the most cost-and time-efficient collection methodology in this situation?

A. Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
B. Use Amazon API Gateway to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
C. Use Amazon S3 event notifications to populate an Amazon DynamoDB table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
D. Use Amazon S3 event notifications to populate an Amazon Redshift table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.

A

A. Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.

22
Q

You have written a server-side Node.js application and a web application with an HTML/JavaScript front end that uses the Angular.js Framework. The server-side application connects to an Amazon Redshift cluster, issues queries, and then returns the results to the front end for display. Your user base is very large and distributed, but it is important to keep the cost of running this application low.

Which deployment strategy is both technically valid and the most cost-effective?

A. Deploy an AWS Elastic Beanstalk application with two environments: one for the Node.js application and another for the web front end. Launch an Amazon Redshift cluster, and point your application to its Java Database connectivity (JDBC) endpoint
B. Deploy an AWS OpsWorks stack with three layers: a static web server layer for your front end, a Node.js app server layer for your server-side application, and a DB layer with an Amazon Redshift cluster
C. Upload the HTML, CSS, images, and JavaScript for the front end to an Amazon Simple Storage Service (S3) bucket. Create an Amazon CloudFront distribution with this bucket as its origin. Use AWS Elastic Beanstalk to deploy the Node.js application. Launch an Amazon Redshift cluster, and point your application to its JDBC endpoint
D. Upload the HTML, CSS, images, and JavaScript for the front end, plus the Node.js code for the server-side application, to an Amazon S3 bucket. Create a CloudFront distribution with this bucket as its origin. Launch an Amazon Redshift cluster, and point your application to its JDBC endpoint
E. Upload the HTML, CSS, images, and JavaScript for the front end to an Amazon S3 bucket. Use AWS Elastic Beanstalk to deploy the Node.js application. Launch an Amazon Redshift cluster, and point your application to its JDBC endpoint

A

C. Upload the HTML, CSS, images, and JavaScript for the front end to an Amazon Simple Storage Service (S3) bucket. Create an Amazon CloudFront distribution with this bucket as its origin. Use AWS Elastic Beanstalk to deploy the Node.js application. Launch an Amazon Redshift cluster, and point your application to its JDBC endpoint

23
Q

What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

A. Amazon EBS-backed instances can be stopped and restarted
B. Instance-store backed instances can be stopped and restarted
C. Auto scaling requires using Amazon EBS-backed instances
D. Virtual Private Cloud requires EBS backed instances

A

A. Amazon EBS-backed instances can be stopped and restarted

24
Q

You are configuring your company’s application to use Auto Scaling and need to move user state information.

Which of the following AWS services provides a shared data store with durability and low latency?

A. Amazon Simple Storage Service
B. Amazon DynamoDB
C. Amazon EC2 instance storage
D. Amazon ElastiCache for Memcached

A

A. Amazon Simple Storage Service

25
Q

Is it possible to access your EBS snapshots?

A. Yes, through the Amazon S3 APIs.
B. Yes, through the Amazon EC2 APIs.
C. No, EBS snapshots cannot be accessed; they can only be used to create a new EBS volume.
D. EBS doesn’t provide snapshots.

A

B. Yes, through the Amazon EC2 APIs.

26
Q

A user has provisioned 2000 IOPS for an EBS volume. The application hosted on that volume is experiencing fewer IOPS than provisioned. Which of the options below does not affect the IOPS of the volume?

A. The application does not have enough IO for the volume
B. The instance is EBS optimized
C. The EC2 instance has 10 Gigabit Network connectivity
D. The volume size is too large

A

D. The volume size is too large
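Some quick arithmetic behind the other options: Provisioned IOPS are measured in 16 KB I/O units, so sustaining the full 2000 IOPS requires dedicated storage bandwidth, which is why EBS optimization and network capacity matter while the volume's size does not:

```python
# Bandwidth needed to realize 2000 provisioned IOPS at the 16 KB I/O
# unit size. A non-EBS-optimized instance without enough network
# headroom cannot sustain this.
IO_SIZE_KB = 16
PROVISIONED_IOPS = 2000

required_mb_per_s = PROVISIONED_IOPS * IO_SIZE_KB / 1024
print(round(required_mb_per_s, 2))  # MB/s of dedicated bandwidth required
```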

27
Q
Using only AWS services, you intend to automatically scale a fleet of stateless web servers based on CPU and network utilization metrics. Which of the following services are needed? Choose 2 answers

A. Auto Scaling
B. Amazon Simple Notification Service
C. AWS CloudFormation
D. CloudWatch
E. Amazon Simple Workflow Service
A

A. Auto Scaling

D. CloudWatch

28
Q

How many relational database engines does RDS currently support?

A. MySQL, Postgres, MariaDB, Oracle and Microsoft SQL Server
B. Just two: MySQL and Oracle.
C. Five: MySQL, PostgreSQL, MongoDB, Cassandra and SQLite.
D. Just one: MySQL.

A

A. MySQL, Postgres, MariaDB, Oracle and Microsoft SQL Server

29
Q

A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory.

What combination of the following will give developers access to the AWS console? Choose 2 answers

A. AWS Directory Service AD connector
B. AWS Directory Service Simple AD
C. AWS identity and Access Management groups
D. AWS identity and Access Management roles
E. AWS identity and Access Management users

A

A. AWS Directory Service AD connector

D. AWS identity and Access Management roles

30
Q

An organization currently runs a large Hadoop environment in their data center and is in the process of creating an alternative Hadoop environment on AWS, using Amazon EMR.
They generate around 20 TB of data on a monthly basis. Also on a monthly basis, files need to be grouped and copied to Amazon S3 to be used for the Amazon EMR environment. They have multiple S3 buckets across AWS accounts to which data needs to be copied. There is a 10G AWS Direct Connect setup between their data center and AWS, and the network team has agreed to allocate

A. Use an offline copy method, such as an AWS Snowball device, to copy and transfer data to Amazon S3.
B. Configure a multipart upload for Amazon S3 on AWS Java SDK to transfer data over AWS Direct Connect.
C. Use Amazon S3 transfer acceleration capability to transfer data to Amazon S3 over AWS Direct Connect.
D. Set up the S3DistCp tool on the on-premises Hadoop environment to transfer data to Amazon S3 over AWS Direct Connect.

A

B. Configure a multipart upload for Amazon S3 on AWS Java SDK to transfer data over AWS Direct Connect.
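Multipart upload splits each grouped file into parts that can be sent in parallel over the Direct Connect link. S3 allows up to 10,000 parts per object (each 5 MB to 5 GB), which bounds the part size; a quick check for a hypothetical 1 TB grouped file:

```python
import math

# Multipart upload arithmetic: S3 permits at most 10,000 parts per object,
# so the part size for a large object has a floor. The 1 TB object size
# here is hypothetical.
MAX_PARTS = 10_000
object_gb = 1024  # a hypothetical 1 TB grouped file

min_part_mb = math.ceil(object_gb * 1024 / MAX_PARTS)
print(min_part_mb)  # minimum MB per part at the 10,000-part limit
```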

31
Q

An organization needs to store sensitive information on Amazon S3 and process it through Amazon EMR.
Data must be encrypted on Amazon S3 and Amazon EMR at rest and in transit. Using Thrift Server, the Data Analysis team uses Hive to interact with this data. The organization would like to grant access to only specific databases and tables, giving permission only to the SELECT statement.

Which solution will protect the data and limit user access to the SELECT statement on a specific portion of data?

A. Configure Transparent Data Encryption on Amazon EMR. Create an Amazon EC2 instance and install Apache Ranger. Configure the authorization on the cluster to use Apache Ranger.
B. Configure data encryption at rest for EMR File System (EMRFS) on Amazon S3. Configure data encryption in transit for traffic between Amazon S3 and EMRFS. Configure storage and SQL-based authorization on HiveServer2.
C. Use AWS KMS for encryption of data. Configure and attach multiple roles with different permissions based on the different user needs.
D. Configure Security Group on Amazon EMR. Create an Amazon VPC endpoint for Amazon S3. Configure HiveServer2 to use Kerberos authentication on the cluster.

A

C. Use AWS KMS for encryption of data. Configure and attach multiple roles with different permissions based on the different user needs.

32
Q
A __________ is the concept of allowing (or disallowing) an entity such as a user, group, or role some type of access to one or more resources.

A. user
B. AWS Account
C. resource
D. permission
A

D. permission

33
Q

What does Amazon CloudFormation provide?

A. None of these.
B. The ability to setup Autoscaling for Amazon EC2 instances.
C. A template to map network resources for Amazon Web Services.
D. A templated resource creation for Amazon Web Services.

A

D. A templated resource creation for Amazon Web Services.

34
Q

What is the maximum response time for a Business level Premium Support case?
A. 30 minutes
B. You always get instant responses (within a few seconds).
C. 10 minutes
D. 1 hour

A

D. 1 hour

35
Q

An organization is designing an Amazon DynamoDB table for an application that must meet the following requirements:
Item size is 40 KB

Sustained read/write rates of 2000/500 per second, respectively

Heavily read-oriented and requires low latencies in the order of milliseconds

The application runs on an Amazon EC2 instance

Access to the DynamoDB table must be secure within the VPC

Minimal changes to application code to improve performance using write-through cache

Which design options will BEST meet these requirements?
A. Size the DynamoDB table with 10000 RCUs/20000 WCUs, implement the DynamoDB Accelerator (DAX) for read performance, use VPC endpoints for DynamoDB, and implement an IAM role on the EC2 instance to secure DynamoDB access.
B. Size the DynamoDB table with 20000 RCUs/20000 WCUs, implement the DynamoDB Accelerator (DAX) for read performance, leverage VPC endpoints for DynamoDB, and implement an IAM user on the EC2 instance to secure DynamoDB access.
C. Size the DynamoDB table with 10000 RCUs/20000 WCUs, implement Amazon ElastiCache for read performance, set up a NAT gateway on VPC for the EC2 instance to access DynamoDB, and implement an IAM role on the EC2 instance to secure DynamoDB access.
D. Size the DynamoDB table with 20000 RCUs/20000 WCUs, implement Amazon ElastiCache for read performance, leverage VPC endpoints for DynamoDB, and implement an IAM user on the EC2 instance to secure DynamoDB access.

A

A. Size the DynamoDB table with 10000 RCUs/20000 WCUs, implement the DynamoDB Accelerator (DAX) for read performance, use VPC endpoints for DynamoDB, and implement an IAM role on the EC2 instance to secure DynamoDB access.
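Option A's capacity numbers can be checked with DynamoDB's unit arithmetic: one RCU covers a 4 KB strongly consistent read per second (or two eventually consistent reads), and one WCU covers a 1 KB write per second:

```python
import math

# Verify the sizing in option A using DynamoDB capacity-unit arithmetic.
# A 40 KB item rounds to 10 read units (4 KB each) and 40 write units
# (1 KB each).
ITEM_KB = 40
READS_PER_SEC = 2000
WRITES_PER_SEC = 500

rcu_strong = READS_PER_SEC * math.ceil(ITEM_KB / 4)   # strongly consistent
rcu_eventual = rcu_strong // 2                         # eventually consistent
wcu = WRITES_PER_SEC * math.ceil(ITEM_KB / 1)

print(rcu_eventual, wcu)  # matches the 10000 RCUs / 20000 WCUs in option A
```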

36
Q

A user has deployed an application on his private cloud. The user is using his own monitoring tool. He wants to configure the tool so that, whenever there is an error, it notifies him via SMS. Which of the AWS services below will help in this scenario?
A. None, because the user's infrastructure is in the private cloud.
B. AWS SNS
C. AWS SES
D. AWS SMS

A

B. AWS SNS

37
Q

An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance.

Which of the following approaches would protect the sensitive data on an Amazon EBS volume?

A. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume
B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume.
Delete the old Amazon EBS volume
C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBs volume
D. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Re-mount the Amazon EBS volume

A

A. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume.

38
Q

If I modify a DB Instance or the DB parameter group associated with the instance, should I reboot the instance for the changes to take effect?

A. No
B. Yes

A

B. Yes

39
Q

A company uses Amazon Redshift for its enterprise data warehouse. A new on-premises PostgreSQL OLTP DB must be integrated into the data warehouse. Each table in the PostgreSQL DB has an indexed last_modified timestamp column. The data warehouse has a staging layer to load source data into the data warehouse environment for further processing.
The data lag between the source PostgreSQL DB and the Amazon Redshift staging layer should NOT exceed four hours.

What is the most efficient technique to meet these requirements?

A. Create a DBLINK on the source DB to connect to Amazon Redshift. Use a PostgreSQL trigger on the source table to capture the new insert/update/delete event and execute the event on the Amazon Redshift staging table.
B. Use a PostgreSQL trigger on the source table to capture the new insert/update/delete event and write it to Amazon Kinesis Streams. Use a KCL application to execute the event on the Amazon Redshift staging table.
C. Extract the incremental changes periodically using a SQL query. Upload the changes to multiple Amazon Simple Storage Service (S3) objects and run the COPY command to load the Amazon Redshift staging table.
D. Extract the incremental changes periodically using a SQL query. Upload the changes to a single Amazon Simple Storage Service (S3) object and run the COPY command to load the Amazon Redshift staging layer.

A

C. Extract the incremental changes periodically using a SQL query. Upload the changes to multiple Amazon Simple Storage Service (S3) objects and run the COPY command to load the Amazon Redshift staging table.
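Option C's change-data-capture loop amounts to generating two SQL statements per cycle. A minimal sketch, assuming a hypothetical orders table, bucket, and IAM role (the real schema and COPY options would differ):

```python
from datetime import datetime, timezone

def incremental_extract_sql(table: str, since: datetime) -> str:
    """SELECT only rows changed since the last extract, using the
    indexed last_modified column (table name is hypothetical)."""
    ts = since.strftime("%Y-%m-%d %H:%M:%S")
    return (
        f"SELECT * FROM {table} "
        f"WHERE last_modified > TIMESTAMP '{ts}' "
        f"ORDER BY last_modified"
    )

def redshift_copy_sql(staging_table: str, s3_prefix: str, iam_role: str) -> str:
    """COPY from an S3 prefix so Redshift loads the multiple
    objects in parallel across slices."""
    return (
        f"COPY {staging_table} FROM '{s3_prefix}' "
        f"IAM_ROLE '{iam_role}' FORMAT AS CSV GZIP"
    )

since = datetime(2024, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
extract = incremental_extract_sql("orders", since)
copy = redshift_copy_sql("staging.orders",
                         "s3://example-bucket/incr/orders/",
                         "arn:aws:iam::123456789012:role/RedshiftCopyRole")
```

Splitting the extract across multiple S3 objects (rather than one large file) is what lets the COPY parallelize, which is why C beats D.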

40
Q

Is there any way to own a direct connection to Amazon Web Services?

A. You can create an encrypted tunnel to VPC, but you don’t own the connection.
B. Yes, it’s called Amazon Dedicated Connection.
C. No, AWS only allows access from the public Internet.
D. Yes, it’s called Direct Connect.

A

D. Yes, it’s called Direct Connect.

41
Q

Within the IAM service, a GROUP is regarded as:
A. A collection of AWS accounts
B. It’s the group of EC2 machines that gain the permissions specified in the GROUP.
C. There’s no GROUP in IAM, but only USERS and RESOURCES.
D. A collection of users.

A

D. A collection of users.

42
Q

The Amazon EC2 web service can be accessed using the _____ web services messaging protocol. This interface is described by a Web Services Description Language (WSDL) document.

A. SOAP
B. DCOM
C. CORBA
D. XML-RPC

A

A. SOAP

43
Q
What is an isolated database environment running in the cloud (Amazon RDS) called?
A.	DB Instance
B.	DB Unit
C.	DB Server
D.	DB Volume
A

A. DB Instance

44
Q

You are currently hosting multiple applications in a VPC and have logged numerous port scans coming in from a specific IP address block. Your security team has requested that all access from the offending IP address block be denied for the next 24 hours.

Which of the following is the best method to quickly and temporarily deny access from the specified IP address block?

A. Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block
B. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block
C. Add a rule to all of the VPC's Security Groups to deny access from the IP address block
D. Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) that your organization uses in that VPC to deny access from the IP address block

A

B. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block
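Network ACLs work here because, unlike security groups, they support explicit deny rules and evaluate rules in ascending rule-number order. A sketch of the parameters such an entry might take (the ACL ID is hypothetical; 203.0.113.0/24 is a documentation address range):

```python
def deny_cidr_entry(acl_id: str, cidr: str, rule_number: int = 10) -> dict:
    """Build parameters for an EC2 create_network_acl_entry call that
    blocks all inbound traffic from a CIDR block. A low rule number is
    evaluated before the existing, higher-numbered allow rules."""
    return {
        "NetworkAclId": acl_id,
        "RuleNumber": rule_number,   # wins over higher-numbered allows
        "Protocol": "-1",            # all protocols
        "RuleAction": "deny",
        "Egress": False,             # inbound rule
        "CidrBlock": cidr,
    }

entry = deny_cidr_entry("acl-0abc1234", "203.0.113.0/24")
```

Reverting after 24 hours is just deleting this one entry, which is what makes the approach quick and temporary.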

45
Q

It is advised that you watch the Amazon CloudWatch “_____” metric (available via the AWS Management Console or Amazon CloudWatch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors.

A. Write Lag
B. Read Replica
C. Replica Lag
D. Single Replica

A

C. Replica Lag

46
Q

Does DynamoDB support in-place atomic updates?

A. It is not defined
B. No
C. Yes
D. It does support in-place non-atomic updates

A

C. Yes
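DynamoDB's in-place atomic updates are exposed through the UpdateItem ADD action. A minimal sketch that only builds the request parameters, without calling AWS (table, key, and attribute names are hypothetical):

```python
def atomic_increment_params(table: str, key: dict, attr: str, by: int) -> dict:
    """Build UpdateItem parameters for an in-place atomic counter.
    DynamoDB applies the ADD action atomically on the server side,
    so concurrent writers cannot lose increments."""
    return {
        "TableName": table,
        "Key": key,
        "UpdateExpression": f"ADD {attr} :inc",
        "ExpressionAttributeValues": {":inc": {"N": str(by)}},
        "ReturnValues": "UPDATED_NEW",
    }

params = atomic_increment_params(
    "GameScores",                  # hypothetical table
    {"PlayerId": {"S": "p-123"}},  # hypothetical key
    "Score",
    5,
)
```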

47
Q

A company is building a new application in AWS. The architect needs to design a system to collect application log events. The design should be a repeatable pattern that minimizes data loss if an application instance fails, and keeps a durable copy of all log data for at least 30 days.

What is the simplest architecture that will allow the architect to analyze the logs?

A. Write them directly to a Kinesis Firehose. Configure Kinesis Firehose to load the events into an Amazon Redshift cluster for analysis.
B. Write them to a file on Amazon Simple Storage Service (S3). Write an AWS Lambda function that runs in response to the S3 events to load the events into Amazon Elasticsearch Service for analysis.
C. Write them to the local disk and configure the Amazon CloudWatch Logs agent to load the data into CloudWatch Logs and subsequently into Amazon Elasticsearch Service.
D. Write them to CloudWatch Logs and use an AWS Lambda function to load them into HDFS on an Amazon Elastic MapReduce (EMR) cluster for analysis.

A

A. Write them directly to a Kinesis Firehose. Configure Kinesis Firehose to load the events into an Amazon Redshift cluster for analysis.

48
Q

You have a load balancer configured for VPC, and all backend Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer’s DNS name.

Which options are probable causes of this behavior? Choose 2 answers

A. The load balancer was not configured to use a public subnet with an Internet gateway configured
B. The Amazon EC2 instances do not have a dynamically allocated private IP address
C. The security groups or network ACLs are not properly configured for web traffic
D. The load balancer is not configured in a private subnet with a NAT instance

A

A. The load balancer was not configured to use a public subnet with an Internet gateway configured

C. The security groups or network ACLs are not properly configured for web traffic

49
Q

If your DB instance runs out of storage space or file system resources, its status will change to _____ and your DB Instance will no longer be available.

A. storage-overflow
B. storage-full
C. storage-exceed
D. storage-overage

A

B. storage-full

50
Q

You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process.

If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instance should you use to reduce the backlog in the most cost-effective way?

A. Dedicated instances
B. Spot instances
C. On-demand instances
D. Reserved instances

A

B. Spot instances
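Spot instances fit because the queue makes interruptions harmless: an interrupted job simply becomes visible to another worker. A toy, in-memory simulation of that property (file names are made up):

```python
from collections import deque

def run_workers(videos, interrupted_once):
    """Simulate Spot-friendly workers: each worker polls the queue,
    and if it is interrupted mid-job the video goes back on the
    queue for another instance, so no work is lost."""
    queue = deque(videos)
    failed_once = set()
    transcoded = []
    while queue:
        video = queue.popleft()                  # poll the queue
        if video in interrupted_once and video not in failed_once:
            failed_once.add(video)               # Spot capacity reclaimed
            queue.append(video)                  # job becomes visible again
            continue
        transcoded.append(video)                 # transcode succeeded
    return transcoded

done = run_workers(["a.mp4", "b.mp4", "c.mp4"], interrupted_once={"b.mp4"})
```

Every video is eventually transcoded even though one worker was "interrupted", which is why the steep Spot discount carries no correctness risk here.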

51
Q

An advertising organization uses an application to process a stream of events that are received from clients in multiple unstructured formats.
The application does the following:
Transforms the events into a single structured format and streams them to Amazon Kinesis for real-time analysis.

Stores the unstructured raw events from the log files on local hard drives that are rotated and uploaded to Amazon S3.

The organization wants to extract campaign performance reporting using an existing Amazon Redshift cluster.

Which solution will provide the performance data with the LEAST number of operations?

A. Install the Amazon Kinesis Data Firehose agent on the application servers and use it to stream the log files directly to Amazon Redshift.
B. Create an external table in Amazon Redshift and point it to the S3 bucket where the unstructured raw events are stored.
C. Write an AWS Lambda function that triggers every hour to load the new log files already in S3 to Amazon Redshift.
D. Connect Amazon Kinesis Data Firehose to the existing Amazon Kinesis stream and use it to stream the event directly to Amazon Redshift.

A

B. Create an external table in Amazon Redshift and point it to the S3 bucket where the unstructured raw events are stored.

52
Q

A user has set up an RDS DB with Oracle. The user wants to get notifications when someone modifies the security group of that DB. How can the user configure that?
A. It is not possible to get the notifications on a change in the security group
B. Configure SNS to monitor security group changes
C. Configure event notification on the DB security group
D. Configure the CloudWatch alarm on the DB for a change in the security group

A

C. Configure event notification on the DB security group

53
Q

A system admin is planning to set up event notifications on RDS. Which of the below mentioned services will help the admin set up notifications?

A. AWS SES
B. AWS CloudTrail
C. AWS CloudWatch
D. AWS SNS

A

D. AWS SNS

54
Q

A media advertising company handles a large number of real-time messages sourced from over 200 websites.

The company’s data engineer needs to collect and process records in real time for analysis using Spark Streaming on Amazon Elastic MapReduce (EMR). The data engineer needs to fulfill a corporate mandate to keep ALL raw messages as they are received as a top priority.

Which Amazon Kinesis configuration meets these requirements?

A. Publish messages to Amazon Kinesis Firehose backed by Amazon Simple Storage Service (S3). Pull messages off Firehose with Spark Streaming in parallel to persistence to Amazon S3

B. Publish messages to Amazon Kinesis Streams. Pull messages off the stream with Spark Streaming, and in parallel forward messages from Streams to Firehose backed by Amazon Simple Storage Service (S3)

C. Publish messages to Amazon Kinesis Firehose backed by Amazon Simple Storage Service (S3). Use AWS Lambda to pull messages from Firehose to Streams for processing with Spark Streaming

D. Publish messages to Amazon Kinesis Streams, pull messages off with Spark Streaming, and write new data to Amazon Simple Storage Service (S3) before and after processing

A

C. Publish messages to Amazon Kinesis Firehose backed by Amazon Simple Storage Service (S3). Use AWS Lambda to pull messages from Firehose to Streams for processing with Spark Streaming

55
Q

What is the charge for the data transfer incurred in replicating data between your primary and standby?

A. No charge. It is free.
B. Double the standard data transfer charge
C. Same as the standard data transfer charge
D. Half of the standard data transfer charge

A

A. No charge. It is free.

56
Q

Which Amazon storage do you think is the best for my database-style applications that frequently encounter many random reads and writes across the dataset?
A. None of these.
B. Amazon Instance Storage
C. Any of these
D. Amazon EBS

A

D. Amazon EBS

57
Q

Your DevOps team is responsible for a multi-tier, Windows-based web application consisting of web servers, Amazon RDS database instances, and a load balancer behind Amazon Route 53. You have been asked by your manager to build a cost-effective rolling deployment solution for this web application.

What method should you use?

A. Re-deploy your application on an AWS OpsWorks stack. Use the AWS OpsWorks clone stack feature to allow updates between duplicate stacks
B. Re-deploy your application on Elastic Beanstalk and take advantage of Elastic Beanstalk rolling updates
C. Re-deploy your application using an AWS CloudFormation template, launch a new AWS
CloudFormation stack during each deployment, and then tear down the old stack
D. Re-deploy your application using an AWS CloudFormation template. Use AWS CloudFormation rolling deployment policies, create a new policy for your AWS CloudFormation stack, and initiate an update stack operation to deploy new code

A

D. Re-deploy your application using an AWS CloudFormation template. Use AWS CloudFormation rolling deployment policies, create a new policy for your AWS CloudFormation stack, and initiate an update stack operation to deploy new code

58
Q

Your company operates a website for promoters to sell tickets for entertainment events. You are using a load balancer in front of an Auto Scaling group of web servers. Promotion of popular events can cause surges of website visitors. During scale-out at these times, newly launched instances are unable to complete configuration quickly enough, leading to user disappointment.

What option should you choose to improve scaling yet minimize costs? Choose 2 answers

A. Create an AMI with the application pre-configured. Create a new Auto Scaling launch configuration using this new AMI, and configure the Auto Scaling group to launch with this AMI
B. Use Auto Scaling pre-warming to launch instances before they are required. Configure prewarming to use the CPU trend CloudWatch metric for the group
C. Publish a custom CloudWatch metric from your application on the number of tickets sold, and create an Auto Scaling policy based on this
D. Use the history of past scaling events for similar event sales to predict future scaling requirements. Use the Auto Scaling scheduled scaling feature to vary the size of the fleet
E. Configure an Amazon S3 bucket for website hosting. Upload into the bucket an HTML holding page with its ‘x-amz-website-redirect-location’ metadata property set to the load balancer endpoint.
Configure Elastic Load Balancing to redirect to the holding page when the load on web servers is above a certain level

A

D. Use the history of past scaling events for similar event sales to predict future scaling requirements. Use the Auto Scaling scheduled scaling feature to vary the size of the fleet

E. Configure an Amazon S3 bucket for website hosting. Upload into the bucket an HTML holding page with its ‘x-amz-website-redirect-location’ metadata property set to the load balancer endpoint.
Configure Elastic Load Balancing to redirect to the holding page when the load on web servers is above a certain level

59
Q
To help you manage your Amazon EC2 instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of ____________
A.	special filters
B.	functions
C.	tags
D.	wildcards
A

C. tags

60
Q

Can I detach the primary (eth0) network interface when the instance is running or stopped?

A. Yes, you can.
B. No. You cannot
C. Depends on the state of the interface at the time

A

B. No. You cannot

61
Q

Do the Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance?

A. No
B. Only if instructed to when created
C. Yes

A

C. Yes

62
Q

A user is running one instance for only 3 hours every day. The user wants to save some cost with the instance. Which of the below mentioned Reserved Instance categories is advised in this case?

A. The user should not use RI; instead only go with the on-demand pricing
B. The user should use the AWS high utilized RI
C. The user should use the AWS medium utilized RI
D. The user should use the AWS low utilized RI

A

A. The user should not use RI; instead only go with the on-demand pricing

63
Q
Amazon S3 doesn't automatically give a user who creates _____ permission to perform other actions on that bucket or object.
A.	a file
B.	a bucket or object
C.	a bucket or file
D.	an object or file
A

B. a bucket or object

64
Q

My Read Replica appears “stuck” after a Multi-AZ failover and is unable to obtain or apply updates from the source DB Instance. What do I do?

A. You will need to delete the Read Replica and create a new one to replace it.
B. You will need to disassociate the DB Engine and re-associate it.
C. The instance should be deployed to Single AZ and then moved to Multi- AZ once again
D. You will need to delete the DB Instance and create a new one to replace it.

A

A. You will need to delete the Read Replica and create a new one to replace it.

65
Q

A user is planning to host a mobile game on EC2 which sends notifications to active users on either high score or the addition of new features. The user should get this notification when he is online on his mobile device. Which of the below mentioned AWS services can help achieve this functionality?

A. AWS Simple Notification Service
B. AWS Simple Queue Service
C. AWS Mobile Communication Service
D. AWS Simple Email Service

A

A. AWS Simple Notification Service

66
Q

Which of the following notification endpoints or clients does Amazon Simple Notification Service support? Choose 2 answers

A.	Email
B.	CloudFront distribution
C.	File Transfer Protocol
D.	Short Message Service
E.	Simple Network Management Protocol
A

A. Email

D. Short Message Service

67
Q

A company that provides economics data dashboards needs to develop software to display rich, interactive, data-driven graphics that run in web browsers and leverage the full stack of web standards (HTML, SVG, and CSS).

Which technology is the most appropriate for this requirement?

A. D3.js
B. Python/Jupyter
C. R Studio
D. Hue

A

A. D3.js

68
Q

Which of these configuration or deployment practices is a security risk for RDS?

A. Storing SQL function code in plaintext
B. Non-Multi-AZ RDS instance
C. Having RDS and EC2 instances exist in the same subnet
D. RDS in a public subnet

A

D. RDS in a public subnet

69
Q

A company needs to deploy services to an AWS region which they have not previously used. The company currently has an AWS Identity and Access Management (IAM) role for their Amazon EC2 instances, which permits the instances to have access to Amazon DynamoDB. The company wants their EC2 instances in the new region to have the same privileges.

How should the company achieve this?

A. Create a new IAM role and associated policies within the new region
B. Assign the existing IAM role to the Amazon EC2 instances in the new region
C. Copy the IAM role and associated policies to the new region and attach it to the instances
D. Create the Amazon Machine Image of the instance and copy it to the desired region using the AMI Copy feature

A

B. Assign the existing IAM role to the Amazon EC2 instances in the new region

70
Q
Will I be charged if the DB instance is idle?
A.  No 
B.  Yes
C.	Only if running in GovCloud
D.	Only if running in VPC
A

B. Yes

71
Q

A solutions architect works for a company that has a data lake based on a central Amazon S3 bucket. The data contains sensitive information. The architect must be able to specify exactly which files each user can access. Users access the platform through a SAML-federated single sign-on platform.

The architect needs to build a solution that allows fine-grained access control, traceability of access to the objects, and usage of the standard tools (AWS Console, AWS CLI) to access the data.

Which solution should the architect build?

A. Use Amazon S3 Server-Side Encryption with AWS KMS-Managed Keys for storing data. Use AWS KMS to allow access to specific elements of the platform. Use AWS CloudTrail for auditing

B. Use Amazon S3 Server-Side Encryption with Amazon S3 Managed Keys. Set Amazon S3 ACLs to allow access to specific elements of the platform. Use Amazon S3 access logs for auditing

C. Use Amazon S3 Client-Side Encryption with Client-Side Master Key. Set Amazon S3 ACLs to allow access to specific elements of the platform. Use Amazon S3 access logs for auditing

D. Use Amazon S3 Client-Side Encryption with AWS KMS-Managed Keys for storing data. Use AWS KMS to allow access to specific elements of the platform. Use AWS CloudTrail for auditing

A

D. Use Amazon S3 Client-Side Encryption with AWS KMS-Managed Keys for storing data. Use AWS KMS to allow access to specific elements of the platform. Use AWS CloudTrail for auditing

72
Q

An online photo album app has a key design feature to support multiple screens (e.g. desktop, mobile phone, and tablet) with high quality displays. Multiple versions of the image must be saved in different resolutions and layouts.
The image processing Java program takes an average of five seconds per upload, depending on the image size and format. Each image upload captures the following image metadata: user, album, photo label, upload timestamp.
The app should support the following requirements:
• Hundreds of user image uploads per second
• Maximum image size of 10 MB
• Maximum image metadata size of 1 KB
• Image displayed in optimized resolution in all supported screens no later than one minute after image upload

Which strategy should be used to meet these requirements?
A. Write images and metadata to Amazon Kinesis, Use a Kinesis Client Library (KCL) application to run the image processing and save the image output to Amazon S3 and metadata to the app repository DB
B. Write image and metadata RDS with BLOB data type. Use AWS Data Pipeline to run the image processing and save the image output to Amazon S3 and metadata to the app repository DB
C. Upload image with metadata to Amazon S3 use Lambda function to run the image processing and save the image output to Amazon S3 and metadata to the app repository DB
D. Write image and metadata to Amazon Kinesis. Use Amazon Elastic MapReduce (EMR) with Spark Streaming to run image processing and save the image output to Amazon S3 and metadata to the app repository DB

A

D. Write image and metadata to Amazon Kinesis. Use Amazon Elastic MapReduce (EMR) with Spark Streaming to run image processing and save the image output to Amazon S3 and metadata to the app repository DB

73
Q

A customer has an Amazon S3 bucket. Objects are uploaded simultaneously by a cluster of servers from multiple streams of data. The customer maintains a catalog of objects uploaded in Amazon S3 using an Amazon DynamoDB table. This catalog has the fields StreamName, TimeStamp, and ServerName, from which ObjectName can be obtained.

The customer needs to define the catalog to support querying for a given stream or server within a defined time range.

Which DynamoDB table scheme is most efficient to support these queries?

A. Define a Primary Key with ServerName as Partition Key and TimeStamp as Sort Key. Do NOT define a Secondary Index or Global Secondary Index.
B. Define a Primary Key with StreamName as Partition Key and TimeStamp followed by ServerName as Sort Key. Define a Global Secondary Index with ServerName as Partition Key and TimeStamp followed by StreamName.
C. Define a Primary Key with ServerName as Partition Key. Define a Local Secondary Index with StreamName as Partition Key. Define a Global Secondary Index with TimeStamp as Partition Key.
D. Define a Primary Key with ServerName as Partition Key. Define a Local Secondary Index with TimeStamp as Partition Key. Define a Global Secondary Index with StreamName as Partition key and TimeStamp as Sort Key.

A

B. Define a Primary Key with StreamName as Partition Key and TimeStamp followed by ServerName as Sort Key. Define a Global Secondary Index with ServerName as Partition Key and TimeStamp followed by StreamName.
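Option B's scheme can be written out as hypothetical create_table parameters. The "TimeStamp followed by ServerName" sort key is modeled here as a single concatenated attribute, the usual DynamoDB idiom for composite range keys; all names are made up:

```python
create_table_params = {
    # Base table: query a stream over a time range via the sort key.
    "TableName": "ObjectCatalog",
    "AttributeDefinitions": [
        {"AttributeName": "StreamName", "AttributeType": "S"},
        {"AttributeName": "TimeStampServer", "AttributeType": "S"},
        {"AttributeName": "ServerName", "AttributeType": "S"},
        {"AttributeName": "TimeStampStream", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "StreamName", "KeyType": "HASH"},
        {"AttributeName": "TimeStampServer", "KeyType": "RANGE"},
    ],
    # GSI: the mirrored key order answers the per-server time-range query.
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByServer",
        "KeySchema": [
            {"AttributeName": "ServerName", "KeyType": "HASH"},
            {"AttributeName": "TimeStampStream", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
}
```

Because TimeStamp leads each sort key, a Query with a begins_with or BETWEEN condition covers "within a defined time range" for either access pattern.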

74
Q

An organization uses a custom MapReduce application to build monthly reports based on many small data files in an Amazon S3 bucket. The data is submitted from various business units on a frequent but unpredictable schedule. As the dataset continues to grow, it becomes increasingly difficult to process all of the data in one day. The organization has scaled up its Amazon EMR cluster, but other optimizations could improve performance.

The organization needs to improve performance with minimal changes to existing processes and applications.
What action should the organization take?
A. Use Amazon S3 Event Notifications and AWS Lambda to create a quick search file index in DynamoDB.
B. Add Spark to the Amazon EMR cluster and utilize Resilient Distributed Datasets in-memory.
C. Use Amazon S3 Event Notifications and AWS Lambda to index each file into an Amazon Elasticsearch Service cluster.
D. Schedule a daily AWS Data Pipeline process that aggregates content into larger files using S3DistCp.
E. Have business units submit data via Amazon Kinesis Firehose to aggregate data hourly into Amazon S3.

A

D. Schedule a daily AWS Data Pipeline process that aggregates content into larger files using S3DistCp.

75
Q

A company is using Amazon Machine Learning as part of a medical software application. The application will predict the most likely blood type for a patient based on a variety of other clinical tests that are available when blood type knowledge is unavailable.

What is the appropriate model choice and target attribute combination for the problem?

A.	Multi-class classification model with a categorical target attribute
B.	Regression model with a numeric target attribute
C.	Binary Classification with a categorical target attribute
D.	K-Nearest Neighbors model with a multi-class target attribute
A

A. Multi-class classification model with a categorical target attribute

76
Q

An organization has added a clickstream to their website to analyze traffic. The website is sending each page request with the PutRecord API call to an Amazon Kinesis stream by using the page name as the partition key. During peak spikes in website traffic, a support engineer notices many ProvisionedThroughputExceededException events in the application logs.

What should be done to resolve the issue in the MOST cost-effective way?

A. Create multiple Amazon Kinesis streams for page requests to increase the concurrency of the clickstream.
B. Increase the number of shards on the Kinesis stream to allow for more throughput to meet the peak spikes in traffic.
C. Modify the application to use the Kinesis Producer Library to aggregate requests before sending them to the Kinesis stream.
D. Attach more consumers to the Kinesis stream to process records in parallel, improving the performance on the stream

A

B. Increase the number of shards on the Kinesis stream to allow for more throughput to meet the peak spikes in traffic.
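Sizing for option B follows from the per-shard ingest limits of 1,000 records/sec and 1 MB/sec; exceeding either raises ProvisionedThroughputExceededException. The traffic figures below are invented for illustration:

```python
import math

def shards_needed(records_per_sec: int, avg_record_kb: float) -> int:
    """Each Kinesis shard ingests up to 1,000 records/sec or 1 MB/sec,
    whichever limit is reached first, so size to the larger of the two."""
    by_records = math.ceil(records_per_sec / 1000)
    by_bytes = math.ceil(records_per_sec * avg_record_kb / 1024)
    return max(by_records, by_bytes, 1)

# 5,000 small page-view records/sec at peak -> record count dominates.
peak = shards_needed(5000, 0.5)
```

Note the record-count limit is usually the binding one for small clickstream events, which is also why option C's KPL aggregation can reduce the shard bill.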

77
Q

Is there a method in the IAM system to allow or deny access to a specific instance?

A. Only for VPC based instances
B. Yes
C. No

A

C. No

78
Q

Because of the extensibility limitations of striped storage attached to Windows Server, Amazon RDS does not currently support increasing storage on a _____ DB Instance.

A. SQL Server
B. MySQL
C. Oracle

A

A. SQL Server

79
Q

Which of the following requires a custom CloudWatch metric to monitor?

A. Memory utilization of an EC2 instance
B. CPU utilization of an EC2 instance
C. Disk usage activity of an EC2 instance
D. Data transfer of an EC2 instance

A

A. Memory utilization of an EC2 instance
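Memory is not visible to the hypervisor, so the instance itself must publish it. A sketch that only assembles a put_metric_data payload, without calling AWS (the namespace, instance ID, and values are made up):

```python
def memory_metric_payload(instance_id: str, used_mb: float, total_mb: float) -> dict:
    """Build a CloudWatch put_metric_data payload for a custom
    memory-utilization metric, dimensioned by instance ID."""
    utilization = 100.0 * used_mb / total_mb
    return {
        "Namespace": "Custom/System",   # custom metrics need a non-AWS namespace
        "MetricData": [{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Unit": "Percent",
            "Value": round(utilization, 1),
        }],
    }

payload = memory_metric_payload("i-0123456789abcdef0", used_mb=6144, total_mb=8192)
```

CPU, disk activity, and network transfer (options B–D) arrive as built-in EC2 metrics, so no such agent-published payload is needed for them.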

80
Q

What are the two types of licensing options available for using Amazon RDS for Oracle?

A. BYOL and Enterprise License
B. BYOL and License Included
C. Enterprise License and License Included
D. Role based License and License Included

A

B. BYOL and License Included

81
Q

Managers in a company need access to the human resources database that runs on Amazon Redshift, to run reports about their employees. Managers must only see information about their direct reports.

Which technique should be used to address this requirement with Amazon Redshift?

A. Define an IAM group for each employee as an IAM user in that group and use that to limit the access.
B. Use Amazon Redshift snapshot to create one cluster per manager. Allow the managers to access only their designated clusters.
C. Define a key for each manager in AWS KMS and encrypt the data for their employees with their private keys.
D. Define a view that uses the employee’s manager name to filter the records based on the current user name.

A

D. Define a view that uses the employee’s manager name to filter the records based on the current user name.
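The view-based filtering described in option D is a one-statement technique in Redshift: compare a manager column against current_user so each manager's session only returns their direct reports. A sketch with hypothetical view, table, and column names:

```python
def per_manager_view_sql(view: str, table: str) -> str:
    """Generate a Redshift view that compares each row's manager column
    with the logged-in database user, so every manager sees only their
    own direct reports through the same shared view."""
    return (
        f"CREATE VIEW {view} AS SELECT * FROM {table} "
        f"WHERE manager_name = current_user"
    )

sql = per_manager_view_sql("v_my_reports", "hr.employees")
```

Managers are then granted SELECT on the view only, not on the underlying table.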

82
Q

What is the charge for the data transfer incurred in replicating data between your primary and standby?

A. Same as the standard data transfer charge
B. Double the standard data transfer charge
C. No charge. It is free
D. Half of the standard data transfer charge

A

C. No charge. It is free

83
Q

What’s an ECU?

A. Extended Cluster User.
B. None of these.
C. Elastic Computer Usage.
D. Elastic Compute Unit.

A

D. Elastic Compute Unit.

84
Q

An administrator tries to use the Amazon Machine Learning service to classify social media posts that mention the administrator’s company into posts that require a response and posts that do not. The training dataset of 10,000 posts contains the details of each post including the timestamp, author, and full text of the post. The administrator is missing the target labels that are required for training.

Which Amazon Machine Learning model is the most appropriate for the task?

A.	Unary classification model, where the target class is the require-response post
B.	Binary classification model, where the two classes are require-response and does-not-require- response
C.	Multi-class prediction model, with two classes require-response and does-not-require response
D.	Regression model where the predicted value is the probability that the post requires a response
A

A. Unary classification model, where the target class is the require-response post

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

A large oil and gas company needs to provide near real-time alerts when peak thresholds are exceeded in its pipeline system. The company has developed a system to capture pipeline metrics such as flow rate, pressure and temperature using millions of sensors. The sensors deliver to AWS IoT.

What is a cost-effective way to provide near real-time alerts on the pipeline metrics?

A. Create an AWS IoT rule to generate an Amazon SNS notification
B. Store the data points in an Amazon DynamoDB table and poll peak metrics data from an Amazon EC2 application
C. Create an Amazon Machine Learning model and invoke with AWS Lambda
D. Use Amazon Kinesis Streams and a KCL-based application deployed on AWS Elastic Beanstalk

A

C. Create an Amazon Machine Learning model and invoke with AWS Lambda
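For reference, option A's mechanism — an AWS IoT rule that publishes threshold breaches straight to Amazon SNS — might look like the following parameter sketch for boto3's `iot.create_topic_rule`. The rule SQL, topic ARN, and role ARN are hypothetical placeholders, not values from the question.

```python
# Hypothetical parameters for iot.create_topic_rule (all names/ARNs are placeholders).
pipeline_alert_rule = {
    "ruleName": "PipelinePeakAlert",
    "topicRulePayload": {
        # Fire only when a sensor reading exceeds the assumed peak threshold.
        "sql": "SELECT * FROM 'pipeline/metrics' WHERE pressure > 1500",
        "actions": [{
            "sns": {
                "targetArn": "arn:aws:sns:us-east-1:123456789012:pipeline-alerts",
                "roleArn": "arn:aws:iam::123456789012:role/iot-sns-publish",
            }
        }],
    },
}
```

The filtering happens inside AWS IoT itself, so no intermediate compute tier is needed for the alerting path.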

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q
What does Amazon ELB stand for? 
A.    Elastic Linux Box.
B.	Encrypted Linux Box.
C.	Encrypted Load Balancing.
D.	Elastic Load Balancing.
A

D. Elastic Load Balancing.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch their transactional applications are very slow. When the batch completes they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team.

How would you optimize this scenario to solve performance issues and automate the process as much as possible?

A. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
B. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
D. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.

A

C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
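The notification half of this answer can be sketched as the parameters one might pass to boto3's `sns.publish` when the hourly batch finishes. The topic ARN is a hypothetical placeholder; the on-premises system would subscribe to the topic (e.g. via an HTTPS endpoint) instead of waiting for a manual email.

```python
# Hypothetical parameters for sns.publish, sent at the end of the batch job.
dashboard_update_notification = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:dashboard-updates",  # placeholder
    "Subject": "Batch analysis complete",
    "Message": "New data loaded into Redshift; refresh the management dashboard.",
}
```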

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

Multiple rows in an Amazon Redshift table were accidentally deleted. A System Administrator is restoring the table from the most recent snapshot. The snapshot contains all rows that were in the table before the deletion.

What is the SIMPLEST solution to restore the table without impacting users?

A. Restore the snapshot to a new Amazon Redshift cluster, then UNLOAD the table to Amazon S3. In the original cluster, TRUNCATE the table, then load the data from Amazon S3 by using a COPY command.
B. Use the Restore Table from a Snapshot command and specify a new table name. DROP the original table, then RENAME the new table to the original table name.
C. Restore the snapshot to a new Amazon Redshift cluster. Create a DBLINK between the two clusters in the original cluster, TRUNCATE the destination table, then use an INSERT command to copy the data from the new cluster.
D. Use the ALTER TABLE REVERT command and specify a time stamp of immediately before the data deletion. Specify the Amazon Resource Name of the snapshot as the SOURCE and use the OVERWRITE REPLACE option.

A

B. Use the Restore Table from a Snapshot command and specify a new table name. DROP the original table, then RENAME the new table to the original table name.
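As a sketch, the restore step maps to boto3's `redshift.restore_table_from_cluster_snapshot`, which brings the deleted rows back into a temporary table in the same cluster — no second cluster or UNLOAD/COPY round trip. The identifiers below are hypothetical placeholders.

```python
# Hypothetical parameters for redshift.restore_table_from_cluster_snapshot.
# The table is restored under a new name; the original is then dropped and
# the restored table renamed to the original name.
restore_request = {
    "ClusterIdentifier": "prod-cluster",          # placeholder
    "SnapshotIdentifier": "rs:prod-2023-01-01",   # most recent snapshot (placeholder)
    "SourceDatabaseName": "analytics",
    "SourceTableName": "orders",
    "NewTableName": "orders_restored",
}
```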

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

A user is planning to set up infrastructure on AWS for the Christmas sales. The user is planning to use Auto Scaling based on the schedule for proactive scaling.

What advice would you give to the user?

A. It is good to schedule now because if the user forgets later on it will not scale up
B. The scaling should be setup only one week before Christmas
C. Wait till end of November before scheduling the activity
D. It is not advisable to use scheduled based scaling

A

C. Wait till end of November before scheduling the activity

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

Your application uses CloudFormation to orchestrate your application’s resources. During your testing phase before application went live, your Amazon RDS instance type was changed and caused the instance to be re-created, resulting in the loss of test data.

How should you prevent this from occurring in the future?

A. Within the AWS CloudFormation parameter with which users can select the Amazon RDS instance type, set AllowedValues to only contain the current instance type
B. Use an AWS CloudFormation stack policy to deny updates to the instance. Only allow UpdateStack permission to IAM principals that are denied SetStackPolicy
C. In the AWS CloudFormation template, set the AWS::RDS::DBInstance’s DBInstanceClass property to be read-only
D. Subscribe to the AWS CloudFormation notification “BeforeResourceUpdate” and call
CancelStackUpdate if the resource identified is the Amazon RDS instance
E. In the AWS CloudFormation template, set the AWS::RDS::DBInstance’s DeletionPolicy property to “Retain”

A

E. In the AWS CloudFormation template, set the AWS::RDS::DBInstance’s DeletionPolicy property to “Retain”

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

Is there a limit to the number of groups you can have?

A. Yes for all users except root
B. No
C. Yes unless special permission granted
D. Yes for all users

A

D. Yes for all users

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q
Your company wants to start working with AWS, but has not yet opened an account. With which of the following services should you begin local development?
A.	Amazon DynamoDB
B.	Amazon Simple Queue Service
C.	Amazon Simple Email Service
D.	Amazon CloudSearch
A

A. Amazon DynamoDB

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q
HTTP Query-based requests are HTTP requests that use the HTTP verb GET or POST and a Query parameter named ____________.
A.	Action
B.	Value
C.	Reset
D.	Retrieve
A

A. Action

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets?

A. A one-time charge of $10 for all the datasets
B. $1 per dataset per month
C. $10 per month for all the datasets
D. There is no charge for using the public data sets

A

D. There is no charge for using the public data sets

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

Are you able to integrate a multi-factor token service with the AWS Platform?
A. Yes, you can integrate private multi-factor token devices to authenticate users to the AWS platform.
B. No, you cannot integrate multi-factor token devices with the AWS platform.
C. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform.

A

C. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance.

Which method will help improve performance of your application?

A. Enable enhanced networking
B. Use Amazon S3 multipart upload
C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency.
D. Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance

A

B. Use Amazon S3 multipart upload
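Back-of-the-envelope sizing shows why multipart upload helps for a 5 GB object: the object is split into parts that upload in parallel and retry individually. In boto3, `boto3.s3.transfer.TransferConfig` (`multipart_threshold`, `multipart_chunksize`) applies this automatically; the 64 MiB part size below is an assumed tuning choice, not a prescribed value.

```python
import math

# Assumed sizes: a 5 GiB video object split into 64 MiB parts.
object_size = 5 * 1024 ** 3   # 5 GiB
part_size = 64 * 1024 ** 2    # 64 MiB per part (assumption)

# Number of parts that can be uploaded in parallel and retried independently.
part_count = math.ceil(object_size / part_size)
print(part_count)  # 80
```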

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

In the Amazon RDS Oracle DB engine, the Database Diagnostic Pack and the Database Tuning Pack are only available with ______________

A. Oracle Standard Edition
B. Oracle Express Edition
C. Oracle Enterprise Edition
D. None of these

A

C. Oracle Enterprise Edition

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling.

What two approaches will meet these requirements?
Choose 2 answers

A. Install an Amazon CloudWatch Logs Agent on every web server during the bootstrap process.
Create a CloudWatch log group and define metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics
B. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use Amazon Data pipeline to process data in Amazon Glacier and run reports every hour
C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown process triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour
D. Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric filters to move processed log data directly from the web servers to Amazon Redshift and runs reports every hour

A

A. Install an Amazon CloudWatch Logs Agent on every web server during the bootstrap process.
Create a CloudWatch log group and define metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics

C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown process triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

An organization needs a data store to handle the following data types and access patterns:
• Faceting
• Search
• Flexible schema (JSON) and fixed schema
• Noise word elimination

Which data store should the organization choose?

A. Amazon Relational Database Service (RDS)
B. Amazon Redshift
C. Amazon DynamoDB
D. Amazon Elasticsearch Service

A

C. Amazon DynamoDB

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

A user has setup an RDS DB with Oracle. The user wants to get notifications when someone modifies the security group of that DB. How can the user configure that?
A. It is not possible to get the notifications on a change in the security group
B. Configure SNS to monitor security group changes
C. Configure event notification on the DB security group
D. Configure the CloudWatch alarm on the DB for a change in the security group

A

C. Configure event notification on the DB security group
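This answer maps to an RDS event subscription scoped to the DB security group; a sketch of the parameters one might pass to boto3's `rds.create_event_subscription` is below. The subscription name and topic ARN are hypothetical placeholders.

```python
# Hypothetical parameters for rds.create_event_subscription: configuration
# changes on the DB security group are pushed to an SNS topic.
event_subscription = {
    "SubscriptionName": "db-secgroup-changes",  # placeholder
    "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:db-alerts",
    "SourceType": "db-security-group",
    "EventCategories": ["configuration change"],
}
```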

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations?
A. Cross-Account Access
B. AWS identity and Access Management roles
C. SAML-based Identity Federation
D. Web identity Federation

A

C. SAML-based Identity Federation

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

A data engineer is running a DWH for a SaaS service on a 25-node Redshift cluster. The data engineer needs to build a dashboard that will be used by customers. Five big customers represent 80% of usage, and there is a long tail of dozens of smaller customers. The data engineer has selected the dashboarding tool.
How should the data engineer make sure that the larger customer workloads do NOT interfere with the smaller customer workloads?
A. Apply query filters based on customer-id that can NOT be changed by the user and apply distribution keys on customer id
B. Place the largest customers into a single user group with a dedicated query queue and place the rest of the customer into a different query queue
C. Push aggregations into an RDS for Aurora instance. Connect the dashboard application to Aurora rather than Redshift for faster queries
D. Route the largest customers to a dedicated Redshift cluster. Raise the concurrency of the multi-tenant Redshift cluster to accommodate the remaining customers

A

D. Route the largest customers to a dedicated Redshift cluster. Raise the concurrency of the multi-tenant Redshift cluster to accommodate the remaining customers

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

You have a web application that is currently running on a collection of micro instance types in a single AZ behind a single load balancer. You have an Auto Scaling group configured to scale from 2 to 64 instances. When reviewing your CloudWatch metrics, you see that sometimes your Scaling group is running 64 micro instances. The web application is reading and writing to a DynamoDB backend configured with 800 Write Capacity units and 800 Read Capacity units. Your customers are complaining that they are experiencing long load times when viewing your website. You have investigated the DynamoDB CloudWatch metrics; you are under the provisioned Read and Write Capacity units and there is no throttling.
How do you scale your service to improve the load times and ensure the principles of high availability?
A. Change your Auto Scaling group configuration to include multiple AZs
B. Change your Auto Scaling group configuration to include multiple AZs, and increase the number of Read Capacity units in your DynamoDB table by a factor of three, because you will need to be calling DynamoDB from three AZs
C. Add a second load balancer to your Auto Scaling group so that you can support more inbound connections per second
D. Change your Auto Scaling group configuration to use larger instances and include multiple AZs instead of one

A

D. Change your Auto Scaling group configuration to use larger instances and include multiple AZs instead of one

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

A user has launched an EC2 instance and deployed a production application in it. The user wants to prohibit any mistakes from the production team to avoid accidental termination. How can the user achieve this?
A. The user can set the DisableApiTermination attribute to avoid accidental termination
B. It is not possible to avoid accidental termination
C. The user can set the Deletion termination flag to avoid accidental termination
D. The user can set the InstanceInitiatedShutdownBehavior flag to avoid accidental termination

A

A. The user can set the DisableApiTermination attribute to avoid accidental termination
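As a sketch, termination protection is set through boto3's `ec2.modify_instance_attribute`; the instance ID below is a hypothetical placeholder. While the attribute is true, TerminateInstances calls from the console, CLI, or API are rejected.

```python
# Hypothetical parameters for ec2.modify_instance_attribute enabling
# termination protection on a production instance.
termination_protection = {
    "InstanceId": "i-0123456789abcdef0",        # placeholder
    "DisableApiTermination": {"Value": True},
}
```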

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q
Is creating a Read Replica of another Read Replica supported?
A.	Only in VPC
B.	Yes
C.	Only in certain regions
D.	No
A

D. No

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

The operations team and the development team want a single place to view both operating system and application logs.
How should you implement this using AWS services? Choose two answers
A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent
B. Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail
C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools
D. Using AWS CloudFormation, create a CloudWatch Logs LogGroup. Because the CloudWatch log agent automatically sends all operating system logs, you only have to configure the application logs for sending off-machine
E. Using AWS CloudFormation, merge the application logs with the operating system logs, and use
IAM Roles to allow both teams to have access to view console output from Amazon EC2

A

A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent

C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools

107
Q

A telecommunications company needs to predict customer churn (i.e. customers who decide to switch to a competitor). The company has historic records of each customer, including monthly consumption patterns, calls to customer service, and whether the customer ultimately quit the service. All of this data is stored in Amazon S3. The company needs to know which customers are likely going to churn soon so that they can win back their loyalty.
What is the optimal approach to meet these requirements?
A. Use the Amazon Machine Learning service to build the binary classification model based on the dataset stored in Amazon S3. The model will be used regularly to predict churn attribute for existing customers
B. Use AWS QuickSight to connect it to data stored in Amazon S3 to obtain the necessary business insight. Plot the churn trend graph to extrapolate churn likelihood for existing customer
C. Use EMR to run the Hive queries to build a profile of a churning customer. Apply the profile to existing customers to determine the likelihood of churn
D. Use a Redshift cluster to COPY the data from Amazon S3. Create a User-Defined Function in Redshift that computes the likelihood of churn

A

B. Use AWS QuickSight to connect it to data stored in Amazon S3 to obtain the necessary business insight. Plot the churn trend graph to extrapolate churn likelihood for existing customer

108
Q
You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
A.	Amazon DynamoDB
B.	Amazon Redshift
C.	Amazon Kinesis
D.	Amazon Simple Queue Service
A

C. Amazon Kinesis
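A sketch of what ingesting one vote might look like as parameters to boto3's `kinesis.put_record`. The stream name and payload are hypothetical; partitioning on a voter ID spreads the write load across shards so downstream consumers can tabulate in real time.

```python
# Hypothetical parameters for kinesis.put_record: one mobile vote per record.
vote_record = {
    "StreamName": "tv-show-votes",                       # placeholder
    "Data": b'{"contestant": "A", "voter": "user-42"}',  # example payload
    "PartitionKey": "user-42",                           # distributes load across shards
}
```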

109
Q

The Marketing Director in your company asked you to create a mobile app that lets users post sightings of good deeds known as random acts of kindness in 80-character summaries. You decided to write the application in JavaScript so that it would run on the broadest range of phones, browsers, and tablets. Your application should provide access to Amazon DynamoDB to store the good deed summaries. Initial testing of a prototype shows that there aren’t large spikes in usage. Which option provides the most cost-effective and scalable architecture for this application?
A. Provide the JavaScript client with temporary credentials from the Security Token Service using a Token Vending Machine
B. Register the application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow S3 gets and DynamoDB puts. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates
DynamoDB.
C. Provide the JavaScript client with temporary credentials from the Security Token Service using a Token Vending Machine (TVM) to provide signed credentials mapped to an IAM user allowing DynamoDB puts. You serve your mobile application out of Apache EC2 instances that are load-balanced and autoscaled. Your EC2 instances are configured with an IAM role that allows DynamoDB puts. Your server updates DynamoDB.
D. Register the JavaScript application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow DynamoDB puts. You serve your mobile application out of Apache EC2 instances that are load-balanced and autoscaled. Your EC2 instances are configured with an IAM role that allows DynamoDB puts. Your server updates
DynamoDB.

A

B. Register the application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow S3 gets and DynamoDB puts. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates
DynamoDB.

110
Q

A US-based company is expanding their web presence into Europe. The company wants to extend their AWS infrastructure from Northern Virginia (us-east-1) into the Dublin (eu-west-1) region. Which of the following options would enable an equivalent experience for users on both continents?
A. Use a public-facing load balancer per region to load balance web traffic, and enable HTTP health checks
B. Use a public-facing load balancer per region to load balance web traffic, and enable sticky sessions
C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions
D. Use Amazon Route 53, and apply a weighted routing policy to distribute traffic across both regions

A

C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions

111
Q
In Amazon CloudWatch, which metric should you check to ensure that your DB Instance has enough free storage space?
A.	FreeStorage
B.	FreeStorageSpace
C.	FreeStorageVolume
D.	FreeDBStorageSpace
A

B. FreeStorageSpace
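A sketch of an alarm on this metric, expressed as parameters one might pass to boto3's `cloudwatch.put_metric_alarm`. The alarm name, instance identifier, and 5 GB threshold are assumptions for illustration.

```python
# Hypothetical parameters for cloudwatch.put_metric_alarm on FreeStorageSpace:
# fire when average free storage stays below ~5 GB for two 5-minute periods.
storage_alarm = {
    "AlarmName": "rds-low-free-storage",        # placeholder
    "Namespace": "AWS/RDS",
    "MetricName": "FreeStorageSpace",
    "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": "prod-db"}],
    "Statistic": "Average",
    "Period": 300,
    "EvaluationPeriods": 2,
    "Threshold": 5 * 1024 ** 3,                 # bytes
    "ComparisonOperator": "LessThanThreshold",
}
```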

112
Q

When will you incur costs with an Elastic IP address (EIP)?
A. When an EIP is allocated
B. When it is allocated and associated with a running instance
C. When it is allocated and associated with a stopped instance
D. Costs are incurred regardless of whether the EIP is associated with a running instance

A

C. When it is allocated and associated with a stopped instance

113
Q

You have launched an Amazon Elastic Compute Cloud (EC2) instance into a public subnet with a primary private IP address assigned, an internet gateway is attached to the VPC, and the public route table is configured to send all internet-bound traffic to the internet gateway. Why is the internet unreachable from this instance?
A. The Internet gateway security group must allow all outbound traffic
B. The instance does not have a public IP address
C. The instance “Source/Destination check” property must be enabled
D. The instance security group must allow all inbound traffic

A

B. The instance does not have a public IP address

114
Q

A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers

A.  Amazon Simple Email Service
B.	Amazon CloudWatch
C.	Amazon Simple Queue Service
D.	Amazon Route 53
E.	Amazon Simple Notification Service
A

B. Amazon CloudWatch

E. Amazon Simple Notification Service

115
Q

An organization is setting up a data catalog and metadata management environment for their numerous data stores currently running on AWS. The data catalog will be used to determine the structure and other attributes of data in the data stores. The data stores are composed of Amazon RDS databases, Amazon Redshift, and CSV files residing on Amazon S3. The catalog should be populated on a scheduled basis, and minimal administration is required to manage the catalog.
How can this be accomplished?
A. Set up Amazon DynamoDB as the data catalog and run a scheduled AWS Lambda function that connects to data sources to populate the database.
B. Use an Amazon database as the data catalog and run a scheduled AWS Lambda function that connects to data sources to populate the database.
C. Use AWS Glue Data Catalog as the data catalog and schedule crawlers that connect to data sources to populate the database.
D. Set up Apache Hive metastore on an Amazon EC2 instance and run a scheduled bash script that connects to data sources to populate the metastore.

A

C. Use AWS Glue Data Catalog as the data catalog and schedule crawlers that connect to data sources to populate the database.
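A sketch of the crawler setup, as parameters one might pass to boto3's `glue.create_crawler`. The crawler name, role, connection names, paths, and schedule are hypothetical placeholders covering the three data stores in the question.

```python
# Hypothetical parameters for glue.create_crawler: one scheduled crawler over
# the S3 CSV files plus JDBC connections for the RDS and Redshift stores, so
# the catalog stays current with no servers to administer.
catalog_crawler = {
    "Name": "org-data-catalog-crawler",          # placeholder
    "Role": "arn:aws:iam::123456789012:role/glue-crawler",
    "DatabaseName": "org_catalog",
    "Targets": {
        "S3Targets": [{"Path": "s3://org-data/csv/"}],
        "JdbcTargets": [
            {"ConnectionName": "rds-conn", "Path": "appdb/%"},
            {"ConnectionName": "redshift-conn", "Path": "dwh/%"},
        ],
    },
    "Schedule": "cron(0 2 * * ? *)",             # nightly crawl (assumption)
}
```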

116
Q

How can the domain’s zone apex, for example,”myzoneapexdomain.com”, be pointed towards an Elastic Load Balancer?

A. By using an Amazon Route 53 Alias record
B. By using an A record
C. By using an AAAA record
D. By using an Amazon Route 53 CNAME record

A

A. By using an Amazon Route 53 Alias record
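The change batch one might submit via boto3's `route53.change_resource_record_sets` can be sketched as below. An Alias A record is allowed at the zone apex (where a CNAME is not) and resolves to the ELB's DNS name; the hosted zone ID and DNS name shown are hypothetical placeholders.

```python
# Hypothetical change batch for route53.change_resource_record_sets, pointing
# the zone apex at an ELB via an Alias A record.
alias_change = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "myzoneapexdomain.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's hosted zone ID (placeholder)
                "DNSName": "my-elb-1234.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }],
}
```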

117
Q

An administrator is deploying Spark on Amazon EMR for two distinct use cases: machine learning algorithms and ad hoc querying. All data will be stored in Amazon S3. Two separate clusters for each use case will be deployed. The data volumes on Amazon S3 are less than 10 GB.
How should the administrator align instance types with the cluster’s purpose?

A. Machine Learning on C instance types and ad-hoc queries on R instance types
B. Machine Learning on R instance types and ad-hoc queries on G2 instance types
C. Machine Learning on T instance types and ad-hoc queries on M instance types
D. Machine Learning on D instance types and ad-hoc queries on I instance types

A

A. Machine Learning on C instance types and ad-hoc queries on R instance types

118
Q

A company has reproducible data that they want to store on Amazon Web Services. The company may want to retrieve the data on a frequent basis. Which Amazon web services storage option allows the customer to optimize storage costs and still achieve high availability for their data?

A. Amazon S3 Reduced Redundancy Storage
B. Amazon EBS Magnetic Volume
C. Amazon Glacier
D. Amazon S3 Standard Storage

A

A. Amazon S3 Reduced Redundancy Storage

119
Q

A data engineer wants to use Amazon Elastic MapReduce for an application. The data engineer needs to make sure it complies with regulatory requirements. The auditor must be able to confirm at any point which servers are running and which network access controls are deployed.
Which action should the data engineer take to meet this requirement?
A. Provide the auditor IAM accounts with the SecurityAudit policy attached to their group.
B. Provide the auditor with SSH keys for access to the Amazon EMR cluster.
C. Provide the auditor with CloudFormation templates.
D. Provide the auditor with access via AWS Direct Connect to use their existing tools.

A

C. Provide the auditor with CloudFormation templates.

120
Q

You want to securely distribute credentials for your Amazon RDS instance to your fleet of web server instances. The credentials are stored in a file that is controlled by a configuration management system.
How do you securely deploy the credentials in an automated manner across the fleet of web server instances, which can number in the hundreds, while retaining the ability to roll back if needed?
A. Store your credential files in an Amazon S3 bucket. Use Amazon S3 server-side encryption on the credential files. Have a scheduled job that pulls down the credential files into the instances every 10 minutes
B. Store the credential files in your version-controlled repository with the rest of your code. Have a post-commit action in version control that kicks off a job in your continuous integration system which securely copies the new credentials files to all web server instances
C. Insert credential files into user data and use an instance lifecycle policy to periodically refresh the files from the user data
D. Keep credential files as a binary blob in an Amazon RDS MySQL DB instance, and have a script on each Amazon EC2 instance that pulls the files down from the RDS instance
E. Store the credential files in your version-controlled repository with the rest of your code. Use a parallel file copy program to send the credential files from your local machine to the Amazon EC2 instances

A

D. Keep credential files as a binary blob in an Amazon RDS MySQL DB instance, and have a script on each Amazon EC2 instance that pulls the files down from the RDS instance

121
Q

A user has launched an EC2 instance from an instance store backed AMI. The user has attached an additional instance store volume to the instance. The user wants to create an AMI from the running instance. Will the AMI have the additional instance store volume data?
A. Yes, the block device mapping will have information about the additional instance store volume
B. No, since the instance store backed AMI can have only the root volume bundled
C. It is not possible to attach an additional instance store volume to the existing instance store backed AMI instance
D. No, since this is ephemeral storage it will not be a part of the AMI

A

A. Yes, the block device mapping will have information about the additional instance store volume

122
Q

An organization is designing a public web application and has a requirement that states all application users must be centrally authenticated before any operations are permitted. The organization will need to create a user table with fast data lookup for the application in which a user can read only his or her own data. All users already have an account with amazon.com.
How can these requirements be met?
A. Create an Amazon RDS Aurora table, with Amazon_ID as the primary key. The application uses amazon.com web identity federation to get a token that is used to assume an IAM role from AWS STS. Use IAM database authentication by using the rds:db-tag IAM authentication policy and GRANT Amazon RDS row-level read permission per user.
B. Create an Amazon RDS Aurora table, with Amazon_ID as the primary key for each user. The application uses amazon.com web identity federation to get a token that is used to assume an IAM role. Use IAM database authentication by using rds:db-tag IAM authentication policy and GRANT Amazon RDS row-level read permission per user.
C. Create an Amazon DynamoDB table, with Amazon_ID as the partition key. The application uses amazon.com web identity federation to get a token that is used to assume an IAM role from AWS STS in the Role, use IAM condition context key dynamodb:LeadingKeys with IAM substitution variables $ {www.amazon.com:user_id} and allow the required DynamoDB API operations in IAM JSON policy Action element for reading the records.
D. Create an Amazon DynamoDB table, with Amazon_ID as the partition key. The application uses amazon.com web identity federation to assume an IAM role from AWS STS in the Role, use IAM condition context key dynamodb:LeadingKeys with IAM substitution variables $
{www.amazon.com:user_id} and allow the required DynamoDB API operations in IAM JSON policy Action element for reading the records.

A

C. Create an Amazon DynamoDB table, with Amazon_ID as the partition key. The application uses amazon.com web identity federation to get a token that is used to assume an IAM role from AWS STS. In the role, use the IAM condition context key dynamodb:LeadingKeys with the IAM substitution variable ${www.amazon.com:user_id} and allow the required DynamoDB API operations in the IAM JSON policy Action element for reading the records.
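
The winning option hinges on the dynamodb:LeadingKeys condition key. A minimal sketch of such a policy, built as a Python dict; the account ID, table name, and action list are hypothetical examples:

```python
import json

# Hypothetical fine-grained access policy: each federated amazon.com user
# may read only the DynamoDB items whose partition key equals their own
# user ID. Account ID, table name, and the action list are examples.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        }
    }]
}

print(json.dumps(policy, indent=2))
```

At request time, IAM substitutes ${www.amazon.com:user_id} with the federated user's ID, so a query can only touch items whose partition key equals that ID.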

123
Q

A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled. The user wants to now enable detailed monitoring. How can the user achieve this?
A. Update the Launch config with CLI to set InstanceMonitoringDisabled = false
B. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
C. Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
D. Create a new Launch Config with detailed monitoring enabled and update the Auto Scaling group

A

D. Create a new Launch Config with detailed monitoring enabled and update the Auto Scaling group

124
Q

An online photo album app has a key design feature to support multiple screens (e.g., desktop, mobile phone, and tablet) with high-quality displays. Multiple versions of the image must be saved in different resolutions and layouts.
The image-processing Java program takes an average of five seconds per upload, depending on the image size and format. Each image upload captures the following image metadata: user, album, photo label, upload timestamp.
The app should support the following requirements:
Hundreds of user image uploads per second
Maximum image upload size of 10 MB
Maximum image metadata size of 1 KB
Image displayed in optimized resolution on all supported screens no later than one minute after image upload
Which strategy should be used to meet these requirements?
A. Write images and metadata to Amazon Kinesis. Use a Kinesis Client Library (KCL) application to run the image processing and save the image output to Amazon S3 and metadata to the app repository DB.
B. Write image and metadata RDS with BLOB data type. Use AWS Data Pipeline to run the image processing and save the image output to Amazon S3 and metadata to the app repository DB.
C. Upload the image with metadata to Amazon S3, use a Lambda function to run the image processing, and save the image output to Amazon S3 and the metadata to the app repository DB.
D. Write image and metadata to Amazon Kinesis. Use Amazon Elastic MapReduce (EMR) with Spark Streaming to run image processing and save the images output to Amazon S3 and metadata to app repository DB.

A

C. Upload the image with metadata to Amazon S3, use a Lambda function to run the image processing, and save the image output to Amazon S3 and the metadata to the app repository DB.
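
A minimal sketch of the Lambda side of answer C, assuming a standard S3 put-event payload; the resize/watermark step and the metadata write are stubbed out:

```python
# Sketch of the Lambda side of the S3-triggered flow; the resize/watermark
# work and the metadata write are stubbed, and the event shape is the
# standard S3 put-event payload that Lambda receives.

def extract_s3_object(event):
    """Return (bucket, key) from the first record of an S3 event."""
    record = event["Records"][0]
    return record["s3"]["bucket"]["name"], record["s3"]["object"]["key"]

def handler(event, context):
    bucket, key = extract_s3_object(event)
    # Real function: download the object with boto3, render each target
    # resolution, upload the renditions to S3, write metadata to the DB.
    return {"bucket": bucket, "key": key, "status": "processed"}
```

Because Lambda scales per-event and the five-second processing time fits comfortably in a function invocation, this design handles hundreds of uploads per second with no servers to manage.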

125
Q

A company generates a large number of files each month and needs to use AWS import/export to move these files into Amazon S3 storage. To satisfy the auditors, the company needs to keep a record of which files were imported into Amazon S3.
What is a low-cost way to create a unique log for each import job?
A. Use the same log file prefix in the import/export manifest files to create a versioned log file in
Amazon S3 for all imports
B. Use the log file prefix in the import/export manifest file to create a unique log file in Amazon S3 for each import
C. Use the log file checksum in the import/export manifest file to create a log file in Amazon S3 for each import
D. Use script to iterate over files in Amazon S3 to generate a log after each import/export job

A

B. Use the log file prefix in the import/export manifest file to create a unique log file in Amazon S3 for each import

126
Q

When will you incur costs with an Elastic IP address (EIP)?
A. When an EIP is allocated.
B. When it is allocated and associated with a running instance.
C. When it is allocated and associated with a stopped instance.
D. Costs are incurred regardless of whether the EIP is associated with a running instance.

A

C. When it is allocated and associated with a stopped instance.

127
Q

A company’s social media manager requests more staff on the weekends to handle an increase in customer contacts from a particular region. The company needs a report to visualize the trends on weekends over the past 6 months using QuickSight.
How should the data be represented?
A. A line graph plotting customer contacts vs. time, with a line for each region
B. A pie chart per region plotting customer contacts per day of week
C. A map of the regions with a heatmap overlay to show the volume of customer contacts
D. A bar graph plotting region vs. volume of social media contacts

A

C. A map of the regions with a heatmap overlay to show the volume of customer contacts
D. A bar graph plotting region vs. volume of social media contacts

128
Q

A user plans to use RDS as a managed DB platform. Which of the below mentioned features is not supported by RDS?
A. Automated backup
B. Automated scaling to manage a higher load
C. Automated failure detection and recovery
D. Automated software patching

A

B. Automated scaling to manage a higher load

129
Q

After an Amazon VPC instance is launched, can I change the VPC security groups it belongs to?

A. No. You cannot.
B. Yes. You can.
C. Only if you are the root user
D. Only if the tag “VPC_Change_Group” is true

A

B. Yes. You can.

130
Q

Can I encrypt connections between my application and my DB Instance using SSL?

A. No
B. Yes
C. Only in VPC
D. Only in certain regions

A

B. Yes

131
Q

Does AWS Direct Connect allow you access to all Availabilities Zones within a Region?
A. Depends on the type of connection
B. No
C. Yes
D. Only when there’s just one availability zone in a region. If there are more than one, only one availability zone can be accessed directly.

A

C. Yes

132
Q

An administrator needs to manage a large catalog of items from various external sellers. The administrator needs to determine whether the items should be identified as minimally dangerous, dangerous, or highly dangerous based on their textual description. The administrator already has some items with the danger attribute, but receives hundreds of new item descriptions every day without such classification.

The administrator has a system that captures dangerous goods reports from the customer support team or from user feedback. What is a cost-effective architecture to solve this issue?

A. Build a set of regular expression rules that are based on the existing examples, and run them on the DynamoDB streams as every new item description is added to the system.
B. Build a Kinesis Streams process that captures and marks the relevant items in the dangerous goods reports using a Lambda function once more than two reports have been filed.
C. Build a machine learning model to properly classify dangerous goods and run it on the DynamoDB streams as every new item description is added to the system.
D. Build a machine learning model with binary classification for dangerous goods and run it on the DynamoDB streams as every new item description is added to the system.

A

C. Build a machine learning model to properly classify dangerous goods and run it on the DynamoDB streams as every new item description is added to the system.

133
Q

An Operations team continuously monitors the number of visitors to a website to identify any potential system problems. The number of website visitors varies throughout the day. The site is more popular in the middle of the day and less popular at night.

Which type of dashboard display would be the MOST useful to allow staff to quickly and correctly identify system problems?

A. A vertical stacked bar chart showing today’s website visitors and the historical average number of website visitors.
B. An overlay line chart showing today’s website visitors at one-minute intervals and also the historical average number of website visitors.
C. A single KPI metric showing the statistical variance between the current number of website visitors and the historical number of website visitors for the current time of day.
D. A scatter plot showing today’s website visitors on the X-axis and the historical average number of website visitors on the Y-axis.

A

B. An overlay line chart showing today’s website visitors at one-minute intervals and also the historical average number of website visitors.

134
Q

Which of the following are true regarding AWS CloudTrail? Choose 3 answers

A. CloudTrail is enabled globally
B. CloudTrail is enabled by default
C. CloudTrail is enabled on a per-region basis
D. CloudTrail is enabled on a per-service basis
E. Logs can be delivered to a single Amazon S3 bucket for aggregation
F. Logs can only be processed and delivered to the region in which they are generated

A

A. CloudTrail is enabled globally

C. CloudTrail is enabled on a per-region basis

E. Logs can be delivered to a single Amazon S3 bucket for aggregation

135
Q
A company is deploying a two tier, highly available web application to AWS. Which Service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?
A.	Amazon EBS volume
B.	Amazon S3
C.	Amazon EC2 instance store
D.	Amazon RDS instance
A

B. Amazon S3

136
Q

You have been tasked with deploying a solution for your company that will store images, which the marketing department will use for its campaigns. Employees are able to upload images via a web interface, and once uploaded, each image must be resized and watermarked with the company logo. Image resizing and watermarking are not time-sensitive and can be completed days after upload if required.

How should you design this solution in the most highly available and cost-effective way?

A. Configure your web application to upload images to the Amazon Elastic Transcoder service. Use the Amazon Elastic Transcoder watermark feature to add the company logo as a watermark on your images and then upload the final image into an Amazon S3 bucket
B. Configure your web application to upload images to Amazon S3, and send the Amazon S3 bucket URI to an Amazon SQS queue. Create an Auto Scaling group and configure it to use Spot instances, specifying a price you are willing to pay. Configure the instances in this Auto Scaling group to poll the SQS queue for new images and then resize and watermark the image before uploading the final images into Amazon S3
C. Configure your web application to upload images to Amazon S3, and send the S3 object URI to an Amazon SQS queue. Create an Auto Scaling launch configuration that uses Spot instances, specifying a price you are willing to pay. Configure the instances in this Auto Scaling group to poll the Amazon SQS queue for new images and then resize and watermark the image before uploading the new images into Amazon S3 and deleting the message from the Amazon SQS queue
D. Configure your web application to upload images to the local storage of the web server. Create a cronjob to execute a script daily that scans this directory for new files and then uses the Amazon EC2 Service API to launch 10 new Amazon EC2 instances, which will resize and watermark the images daily

A

C. Configure your web application to upload images to Amazon S3, and send the S3 object URI to an Amazon SQS queue. Create an Auto Scaling launch configuration that uses Spot instances, specifying a price you are willing to pay. Configure the instances in this Auto Scaling group to poll the Amazon SQS queue for new images and then resize and watermark the image before uploading the new images into Amazon S3 and deleting the message from the Amazon SQS queue
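
A sketch of the Spot-instance worker from the chosen design, with the image work stubbed out; the s3:// URI format in the SQS message body is an assumption:

```python
# Sketch of the Spot-instance worker in the chosen design: each SQS message
# body is assumed to carry an s3:// URI; the resize/watermark step is
# stubbed. After uploading the result, the worker deletes the message.

def parse_s3_uri(uri):
    """Split 's3://bucket/key' into (bucket, key)."""
    if not uri.startswith("s3://"):
        raise ValueError("not an s3:// URI: %r" % uri)
    bucket, _, key = uri[len("s3://"):].partition("/")
    return bucket, key

def process_message(body):
    bucket, key = parse_s3_uri(body)
    # Real worker: download the image, resize and watermark it, upload
    # the result to S3, then delete the SQS message so it is not redone.
    return {"bucket": bucket, "key": key}
```

Because the work is not time-sensitive, Spot interruptions are harmless: an unprocessed message simply reappears on the queue after its visibility timeout.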

137
Q

REST or Query requests are HTTP or HTTPS requests that use an HTTP verb (such as GET or POST) and a parameter named Action or Operation that specifies the API you are calling.
A. FALSE
B. TRUE

A

B. TRUE

138
Q

In AWS, which security aspects are the customer’s responsibility? Choose 4 answers
A. Life-Cycle management of IAM credentials
B. Security Group and ACL settings
C. Controlling physical access to compute resources
D. Patch management on the EC2 instance’s operating system
E. Encryption of EBS volumes
F. Decommissioning storage devices

A

A. Life-Cycle management of IAM credentials

B. Security Group and ACL settings

D. Patch management on the EC2 instance’s operating system

E. Encryption of EBS volumes

139
Q

As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance.

Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

A. Ensure that the I/O block sizes for the test are randomly selected
B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test
C. Ensure that snapshots of the Amazon EBS volumes are created as a backup
D. Ensure that the Amazon EBS volume is encrypted
E. Ensure that the Amazon EBS volume has been pre-warmed by creating a snapshot of the volume before the test

A

B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test
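
Pre-warming means reading every block once so that first-touch latency does not skew the benchmark. A minimal sketch of a sequential chunked read; a real run would target a device path such as /dev/xvdf (hypothetical here) and would usually use dd or fio instead:

```python
# Sequentially read every block of a device or file in 1 MiB chunks so
# that all blocks have been touched before the I/O load test begins.

def prewarm(path, chunk_size=1024 * 1024):
    """Read every block of `path`; return the total bytes read."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total
```

On current-generation EBS, only volumes restored from snapshots need this initialization step; newly created empty volumes deliver full performance immediately.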

140
Q

You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply.

Which one of these options allows you to build a photo sharing application without needing to worry about scaling expensive upload processes, authentication/authorization, and so forth?

A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3. (Amazon Cognito is a superset of the functionality provided by web identity federation.)
B. Use JWT or SAML compliant systems to build authorization policies. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure.
C. Use AWS API Gateway with a constantly rotating API Key to allow access from the client-side. Construct a custom build of the SDK and include S3 access in it.
D. Create an AWS OAuth service domain and grant public signup and access to the domain. During setup, add at least one major social media site as a trusted Identity Provider for users.

A

A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3. (Amazon Cognito is a superset of the functionality provided by web identity federation.)

141
Q

Select the correct set of steps for exposing the snapshot only to specific AWS accounts
A. Select Public for all the accounts, check mark those accounts with whom you want to expose the snapshots, and click Save.
B. Select Private, enter the IDs of those AWS accounts, and click Save.
C. Select Public, enter the IDs of those AWS accounts, and click Save.
D. Select Public, mark the IDs of those AWS accounts as private, and click Save.

A

B. Select Private, enter the IDs of those AWS accounts, and click Save.
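
Programmatically, sharing a snapshot with specific accounts uses EC2's ModifySnapshotAttribute API: the snapshot stays private while the named accounts receive createVolumePermission. The snapshot and account IDs below are hypothetical:

```python
# Build the ModifySnapshotAttribute request that grants specific accounts
# permission to create volumes from a private snapshot.

def share_snapshot_params(snapshot_id, account_ids):
    """Build the request that grants the given accounts volume access."""
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "OperationType": "add",
        "UserIds": list(account_ids),
    }

params = share_snapshot_params("snap-0123456789abcdef0", ["111122223333"])
# With boto3: boto3.client("ec2").modify_snapshot_attribute(**params)
```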

142
Q

A photo sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application signin using an Open ID Connect compatible identity provider.

Which AWS Security Token approach to temporary access should you use for the Amazon S3 operations?

A. SAML-based identity Federation
B. Cross-Account Access
C. AWS identity and Access Management roles
D. Web identity Federation

A

D. Web identity Federation

143
Q

A company hosts a portfolio of e-commerce websites across the Oregon, N. Virginia, Ireland, and Sydney AWS regions. Each site keeps log files that capture user behavior. The company has built an application that generates batches of product recommendations with collaborative filtering in Oregon. Oregon was selected because the flagship site is hosted there and provides the largest collection of data to train machine learning models against. The other regions do NOT have enough historic data to train accurate machine learning models.

Which set of data processing steps improves recommendations for each region?

A. Use the e-commerce application in Oregon to write replica log files in each other region
B. Use Amazon S3 bucket replication to consolidate log entries and build a single model in Oregon
C. Use Kinesis as a buffer for web logs and replicate logs to the Kinesis streams of a neighboring region
D. Use the CloudWatch Logs agent to consolidate logs into a single CloudWatch Logs group

A

D. Use the CloudWatch Logs agent to consolidate logs into a single CloudWatch Logs group

144
Q

A customer has a machine learning workflow that consists of multiple quick cycles of reads-writes-reads on Amazon S3. The customer needs to run the workflow on EMR but is concerned that the reads in subsequent cycles will miss new data critical to the machine learning from the prior cycles.

How should the customer accomplish this?

A. Turn on EMRFS consistent view when configuring the EMR cluster
B. Use AWS Data Pipeline to orchestrate the data processing cycles
C. Set Hadoop.data.consistency = true in the core-site.xml file
D. Set Hadoop.s3.consistency = true in the core-site.xml file

A

A. Turn on EMRFS consistent view when configuring the EMR cluster
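
EMRFS consistent view is enabled through cluster configuration rather than application code. A sketch of the emrfs-site classification as it might be passed to run_job_flow; the retry count shown is an arbitrary example value:

```python
# Configuration block enabling EMRFS consistent view, which tracks S3
# object metadata in DynamoDB so subsequent reads see prior-cycle writes.

emrfs_config = [{
    "Classification": "emrfs-site",
    "Properties": {
        "fs.s3.consistent": "true",
        "fs.s3.consistent.retryCount": "5",
    },
}]

# With boto3: boto3.client("emr").run_job_flow(..., Configurations=emrfs_config)
print(emrfs_config[0]["Classification"])
```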

145
Q

An online retailer is using Amazon DynamoDB to store data related to customer transactions. The items in the table contain several string attributes describing the transaction as well as a JSON attribute containing the shopping cart and other details corresponding to the transaction. Average item size is ~250KB, most of which is associated with the JSON attribute. The average customer generates ~3GB of data per month.
Customers access the table to display their transaction history and review transaction details as needed. Ninety percent of queries against the table are executed when building the transaction history view, with the other 10% retrieving transaction details. The table is partitioned on CustomerID and sorted on transaction date.
The client has very high read capacity provisioned for the table and experiences very even utilization, but complains about the cost of Amazon DynamoDB compared to other NoSQL solutions.

Which strategy will reduce the cost associated with the client’s read queries while not degrading quality?

A. Modify all database calls to use eventually consistent reads and advise customers that transaction history may be one second out-of-date.
B. Change the primary table to partition on TransactionID, create a GSI partitioned on customer and sorted on date, project the small attributes into the GSI, and then query the GSI for summary data and the primary table for JSON details.
C. Vertically partition the table, store base attributes on the primary table and create a foreign key reference to a secondary table containing the JSON data. Query the primary table for summary data and the secondary table for JSON details.
D. Create an LSI sorted on date, project the JSON attribute into the index, and then query the primary table for summary data and the LSI for JSON details

A

C. Vertically partition the table, store base attributes on the primary table and create a foreign key reference to a secondary table containing the JSON data. Query the primary table for summary data and the secondary table for JSON details.

146
Q

An organization has configured a VPC with an Internet Gateway (IGW), pairs of public and private subnets (each with one subnet per Availability Zone), and an Elastic Load Balancer (ELB) configured to use the public subnets. The application’s web tier leverages the ELB, Auto Scaling, and a multi-AZ RDS database instance. The organization would like to eliminate any potential single points of failure in this design.
What step should you take to achieve this organization’s objective?
A. Nothing, there are no single points of failure in this architecture.
B. Create and attach a second IGW to provide redundant internet connectivity.
C. Create and configure a second Elastic Load Balancer to provide a redundant load balancer.
D. Create a second multi-AZ RDS instance in another Availability Zone and configure replication to provide a redundant database.

A

A. Nothing, there are no single points of failure in this architecture.

147
Q

A data engineer chooses Amazon DynamoDB as a data store for a regulated application. This application must be submitted to regulators for review. The data engineer needs to provide a control framework that lists the security controls from the process to follow to add new users down to the physical controls of the data center, including items like security guards and cameras.

How should this control mapping be achieved using AWS?

A. Request AWS third-party audit reports and/or the AWS quality addendum and map the AWS responsibilities to the controls that must be provided
B. Request temporary auditor access to an AWS data center to verify the control mapping
C. Request relevant SLAs and security guidelines for Amazon DynamoDB and define these guidelines within the application’s architecture to map to the control framework
D. Request Amazon DynamoDB system architecture designs to determine how to map the AWS responsibilities to the controls that must be provided

A

A. Request AWS third-party audit reports and/or the AWS quality addendum and map the AWS responsibilities to the controls that must be provided

148
Q

What is web identity federation?
A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
C. Use of AWS IAM User tokens to log in as a Google or Facebook user.
D. Use of AWS STS Tokens to log in as a Google or Facebook user.

A

B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
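
The token exchange behind web identity federation maps to STS's AssumeRoleWithWebIdentity call: the app presents the provider's token and receives temporary AWS credentials. The role ARN and token below are hypothetical placeholders:

```python
# Build the AssumeRoleWithWebIdentity request that exchanges an identity
# provider's token for temporary AWS credentials.

def assume_role_request(role_arn, provider_token):
    """Build the AssumeRoleWithWebIdentity request parameters."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": "mobile-user-session",
        "WebIdentityToken": provider_token,
        "DurationSeconds": 3600,
    }

req = assume_role_request("arn:aws:iam::123456789012:role/AppUser", "<id-token>")
# With boto3: boto3.client("sts").assume_role_with_web_identity(**req)
```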

149
Q

True or False: When you add a rule to a DB security group, you do not need to specify port number or protocol.
A. Depends on the RDMS used
B. TRUE
C. FALSE

A

B. TRUE

150
Q
You have identified network throughput as a bottleneck on your m1.small EC2 instance when uploading data into Amazon S3 in the same region.
How do you remedy this situation?
A.	Add an additional ENI
B.	Change to a larger Instance
C.	Use DirectConnect between EC2 and S3
D.	Use EBS PIOPS on the local volume
A

B. Change to a larger Instance

151
Q

What happens to the I/O operations while you take a database snapshot?
A. I/O operations to the database are suspended for an hour while the backup is in progress.
B. I/O operations to the database are sent to a Replica (if available) for a few minutes while the backup is in progress.
C. I/O operations will be functioning normally
D. I/O operations to the database are suspended for a few minutes while the backup is in progress.

A

D. I/O operations to the database are suspended for a few minutes while the backup is in progress.

152
Q

A Redshift data warehouse has different user teams that need to query the same table with very different query types. These user teams are experiencing poor performance. Which action improves performance for the user teams in this situation?
A. Create custom table views
B. Add interleaved sort keys per team
C. Maintain team-specific copies of the table
D. Add support for workload management queue hopping

A

D. Add support for workload management queue hopping

153
Q

A city has been collecting data on its public bicycle share program for the past three years. The 5PB dataset currently resides on Amazon S3. The data contains the following data points:
• Bicycle origination points
• Bicycle destination points
• Mileage between the points
• Number of bicycle slots available at the station (which is variable based on the station location)
• Number of slots available and taken at each station at a given time
The program has received additional funds to increase the number of bicycle stations available. All data is regularly archived to Amazon Glacier.
The new bicycle station must be located to provide the most riders access to bicycles. How should this task be performed?

A. Move the data from Amazon S3 into Amazon EBS-backed volumes and use EC2 Hadoop with Spot instances to run a Spark job that performs a stochastic gradient descent optimization.
B. Use the Amazon Redshift COPY command to move the data from Amazon S3 into Redshift and perform a SQL query that outputs the most popular bicycle stations.
C. Persist the data on Amazon S3 and use a transient EMR cluster with Spot instances to run a Spark streaming job that will move the data into Amazon Kinesis.
D. Keep the data on Amazon S3 and use an Amazon EMR-based Hadoop cluster with Spot instances to run a Spark job that performs a stochastic gradient descent optimization over EMRFS.

A

B. Use the Amazon Redshift COPY command to move the data from Amazon S3 into Redshift and perform a SQL query that outputs the most popular bicycle stations.

154
Q

A company has several teams of analysts. Each team of analysts has its own cluster. The teams need to run SQL queries using Hive, Spark-SQL, and Presto with Amazon EMR. The company needs to enable a centralized metadata layer to expose the Amazon S3 objects as tables to the analysts.
Which approach meets the requirement for a centralized metadata layer?
A. EMRFS consistent view with a common Amazon DynamoDB table
B. Bootstrap action to change the Hive Metastore to an Amazon RDS database
C. s3distcp with the outputManifest option to generate RDS DDL
D. Naming scheme support with automatic partition discovery from Amazon S3

A

B. Bootstrap action to change the Hive Metastore to an Amazon RDS database

155
Q

The majority of your infrastructure is on premises and you have a small footprint on AWS. Your company has decided to roll out a new application that is heavily dependent on low-latency connectivity to LDAP for authentication. Your security policy requires minimal changes to the company’s existing application user management processes.
What option would you implement to successfully launch this application?
A. Create a second, independent LDAP server in AWS for your application to use for authentication
B. Establish a VPN connection so your applications can authenticate against your existing on-premises LDAP servers
C. Establish a VPN connection between your data center and AWS, create an LDAP replica on AWS, and configure your application to use the LDAP replica for authentication
D. Create a second LDAP domain on AWS, establish a VPN connection to establish a trust relationship between your new and existing domains, and use the new domain for authentication

A

C. Establish a VPN connection between your data center and AWS, create an LDAP replica on AWS, and configure your application to use the LDAP replica for authentication

156
Q
You are deploying an application to track the GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
A.	Amazon Kinesis
B.	AWS Data Pipeline
C.	Amazon AppStream
D.	Amazon Simple Queue Service
A

A. Amazon Kinesis

157
Q

How should an Administrator BEST architect a large multi-layer Long Short-Term Memory (LSTM) recurrent neural network (RNN) running with MXNet on Amazon EC2? (Choose two.)
A. Use data parallelism to partition the workload over multiple devices and balance the workload within the GPUs.
B. Use compute-optimized EC2 instances with an attached elastic GPU.
C. Use general purpose GPU computing instances such as G3 and P3.
D. Use processing parallelism to partition the workload over multiple storage devices and balance the workload within the GPUs.

A

A. Use data parallelism to partition the workload over multiple devices and balance the workload within the GPUs.

C. Use general purpose GPU computing instances such as G3 and P3.

158
Q
A user has created an ELB with Auto Scaling. Which of the below mentioned offerings from ELB helps the user to stop sending new requests traffic from the load balancer to the EC2 instance when the instance is being deregistered while continuing in-flight requests?
A.	ELB sticky session
B.	ELB deregistration check
C.	ELB connection draining
D.	ELB auto registration Off
A

C. ELB connection draining
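
Connection draining is an attribute of a Classic ELB. A sketch of the setting as it might be passed to modify_load_balancer_attributes; the load balancer name and the 300-second timeout are example values:

```python
# Attribute payload enabling connection draining: deregistering instances
# stop receiving new requests but may finish in-flight ones for up to
# Timeout seconds.

draining_attributes = {
    "LoadBalancerName": "my-web-elb",
    "LoadBalancerAttributes": {
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
}

# With boto3:
# boto3.client("elb").modify_load_balancer_attributes(**draining_attributes)
```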

159
Q

A gaming organization is developing a new game and would like to offer real-time competition to their users. The data architecture has the following characteristics:
The game application is writing events directly to Amazon DynamoDB from the user’s mobile device.
Users from the website can access their statistics directly from DynamoDB.
The game servers are accessing DynamoDB to update the user’s information.
The data science team extracts data from DynamoDB for various applications.
The engineering team has already agreed to the IAM roles and policies to use for the data science team and the application.
Which actions will provide the MOST security, while maintaining the necessary access to the website and game application? (Choose two.)

A. Use Amazon Cognito user pool to authenticate to both the website and the game application.
B. Use IAM identity federation to authenticate to both the website and the game application.
C. Create an IAM policy with PUT permission for both the website and the game application.
D. Create an IAM policy with fine-grained permission for both the website and the game application.
E. Create an IAM policy with PUT permission for the game application and an IAM policy with GET permission for the website.

A

B. Use IAM identity federation to authenticate to both the website and the game application.

E. Create an IAM policy with PUT permission for the game application and an IAM policy with GET permission for the website.

160
Q

An administrator receives about 100 files per hour into Amazon S3 and will be loading the files into Amazon Redshift. Customers who analyze the data within Redshift gain significant value when they receive data as quickly as possible. The customers have agreed to a maximum loading interval of 5 minutes. Which loading approach should the administrator use to meet this objective?
A. Load each file as it arrives because getting data into the cluster as quickly as possible is the priority.
B. Load the cluster as soon as the administrator has the same number of files as nodes in the cluster.
C. Load the cluster when the administrator has an even multiple of files relative to the Cluster Slice Count, or 5 minutes, whichever comes first.
D. Load the cluster when the number of files is less than the Cluster Slice Count.

A

C. Load the cluster when the administrator has an even multiple of files relative to the Cluster Slice Count, or 5 minutes, whichever comes first.
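
The chosen loading rule can be expressed as a small scheduling check, sketched here with an assumed 5-minute cap:

```python
# Decide whether to issue the next Redshift COPY batch: load when the
# pending file count is an even multiple of the cluster slice count, or
# when the maximum interval has elapsed, whichever comes first.

def should_load(pending_files, slice_count, seconds_since_last_load,
                max_interval=300):
    """Return True when a COPY batch should be started."""
    if seconds_since_last_load >= max_interval:
        return True
    return pending_files > 0 and pending_files % slice_count == 0
```

Loading a multiple of the slice count lets COPY split the work evenly across all slices, which is why batching beats loading file-by-file.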

161
Q

Does Amazon RDS for SQL Server currently support importing data into the msdb database?
A. No
B. Yes

A

A. No

162
Q

Typically, you want your application to check whether a request generated an error before you spend any time processing results. The easiest way to find out if an error occurred is to look for an __________ node in the response from the Amazon RDS API.
A. Incorrect
B. Error
C. FALSE

A

B. Error

163
Q
Amazon RDS creates an SSL certificate and installs the certificate on the DB Instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The _____ is stored at https://rds.amazonaws.com/doc/rds-ssl-ca-cert.pem.
A.  private key 
B.  foreign key
C.	public key
D.	protected key
A

A. private key

164
Q

An Amazon Kinesis stream needs to be encrypted.

Which approach should be used to accomplish this task?

A. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the producer
B. Use a partition key to segment the data by MD5 hash functions, which makes it indecipherable while in transit
C. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the consumer
D. Use a shard to segment the data, which has built-in functionality to make it indecipherable while in transit

A

A. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the producer

165
Q

A game company needs to properly scale its game application, which is backed by DynamoDB.
Amazon Redshift has the past two years of historical data. Game traffic varies throughout the year based on various factors such as season, movie releases, and the holiday season. An administrator needs to calculate how much read and write throughput should be provisioned for the DynamoDB table for each week in advance.
How should the administrator accomplish this task?
A. Feed the data into Amazon Machine Learning and build a regression model
B. Feed the data into Spark MLlib and build a random forest model
C. Feed the data into Apache Mahout and build a multi-classification model
D. Feed the data into Amazon Machine Learning and build a binary classification model

A

B. Feed the data into Spark MLlib and build a random forest model

166
Q
What is the maximum response time for a Business level Premium Support case?
A.	30 minutes
B.	1 hour
C.	12 hours
D.	10 minutes
A

B. 1 hour

167
Q

A systems engineer for a company proposes digitization and backup of large archives for customers. The systems engineer needs to provide users with secure storage that ensures data can never be tampered with once it has been uploaded. How should this be accomplished?
A. Create an Amazon Glacier Vault. Specify a “Deny” Vault lock policy on this vault to block “glacier:DeleteArchive”.
B. Create an Amazon S3 bucket. Specify a “Deny” bucket policy on this bucket to block “s3:DeleteObject”.
C. Create an Amazon Glacier Vault. Specify a “Deny” vault access policy on this Vault to block “glacier:DeleteArchive”.
D. Create a secondary AWS account containing an Amazon S3 bucket. Grant “s3:PutObject” to the primary account.

A

C. Create an Amazon Glacier Vault. Specify a “Deny” vault access policy on this Vault to block “glacier:DeleteArchive”.

168
Q

A company needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company’s requirements?
A. Virtual private network connection, AWS Directory services, and ClassicLink
B. Virtual private network connection, AWS Directory services, and Amazon WorkSpaces
C. AWS Directory service, Amazon WorkSpaces, and AWS Identity and Access Management
D. Amazon Elastic Compute Cloud, and AWS identity and access management

A

B. Virtual private network connection, AWS Directory services, and Amazon WorkSpaces

169
Q

A social media customer has data from different data sources including RDS running MySQL, RedShift, and Hive on EMR. To support better analysis, the customer needs to be able to analyze data from different data sources and to combine the results.
What is the most cost-effective solution to meet these requirements?
A. Load all data from a different database/warehouse to S3. Use Redshift COPY command to copy data to Redshift for analysis.
B. Install Presto on the EMR cluster where Hive sits. Configure MySQL and PostgreSQL connector to select from different data sources in a single query.
C. Spin up an Elasticsearch cluster. Load data from all three data sources and use Kibana to analyze.
D. Write a program running on a separate EC2 instance to run queries to three different systems. Aggregate the results after getting the responses from all three systems.

A

B. Install Presto on the EMR cluster where Hive sits. Configure MySQL and PostgreSQL connector to select from different data sources in a single query.

170
Q
Fill in the blanks: _____ is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance.
A.	Amazon S3
B.	Amazon EBS
C.	None of these
D.	All of these
A

B. Amazon EBS

171
Q

What happens when you create a topic on Amazon SNS?
A. The topic is created, and it has the name you specified for it.
B. An ARN (Amazon Resource Name) is created
C. You can create a topic on Amazon SQS, not on Amazon SNS.
D. This question doesn’t make sense.

A

B. An ARN (Amazon Resource Name) is created

172
Q

An administrator needs to design a strategy for the schema in a Redshift cluster. The administrator needs to determine the optimal distribution style for the tables on the Redshift schema.
In which two circumstances would choosing EVEN distribution be most appropriate? (Select two)
A. When the tables are highly denormalized and do NOT participate in frequent joins
B. When data must be grouped based on a specific key on a defined slice
C. When data transfer between nodes must be eliminated
D. When a new table has been loaded and it is unclear how it will be joined to dimension tables

A

B. When data must be grouped based on a specific key on a defined slice

D. When a new table has been loaded and it is unclear how it will be joined to dimension tables
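For illustration, EVEN distribution is declared in the table DDL. The table name and columns below are made up; only the `DISTSTYLE EVEN` clause reflects the answer.

```python
# Illustrative Redshift DDL for EVEN distribution: rows are spread
# round-robin across slices, which suits a freshly loaded table whose
# join pattern is not yet known. Table/columns are hypothetical.
def staging_table_ddl(table: str) -> str:
    return (
        f"CREATE TABLE {table} (\n"
        "    event_id   BIGINT,\n"
        "    event_time TIMESTAMP,\n"
        "    payload    VARCHAR(4096)\n"
        ") DISTSTYLE EVEN;"  # no distribution key chosen yet
    )
```

Once the join pattern is known, the table can be recreated with `DISTSTYLE KEY` on the join column instead.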

173
Q
Is there a limit to the number of groups you can have?
A.	Yes for all users
B.	Yes for all users except root
C.	No
D.	Yes unless special permission granted
A

A. Yes for all users

174
Q

Which data store should the organization choose?
A. Amazon Relational Database Service (RDS)
B. Amazon Redshift
C. Amazon DynamoDB
D. Amazon Elasticsearch

A

C. Amazon DynamoDB

175
Q

A data engineer in a manufacturing company is designing a data processing platform that receives a large volume of unstructured data. The data engineer must populate a well-structured star schema in Amazon Redshift.
What is the most efficient architecture strategy for this purpose?
A. Transform the unstructured data using Amazon EMR and generate CSV data. COPY data into the analysis schema within Redshift.
B. Load the unstructured data into Redshift, and use string parsing functions to extract structured data for inserting into the analysis schema.
C. When the data is saved to Amazon S3, use S3 Event Notifications and AWS Lambda to transform the file content. Insert the data into the analysis schema on Redshift.
D. Normalize the data using an AWS Marketplace ETL tool, persist the result to Amazon S3, and use AWS Lambda to INSERT the data into Redshift.

A

A. Transform the unstructured data using Amazon EMR and generate CSV data. COPY data into the analysis schema within Redshift.

176
Q

What does the “Server Side Encryption” option on Amazon S3 provide?

A. It provides an encrypted virtual disk in the Cloud.
B. It doesn’t exist for Amazon S3, but only for Amazon EC2.
C. It encrypts the files that you send to Amazon S3, on the server side.
D. It allows you to upload files using an SSL endpoint, for a secure transfer.

A

A. It provides an encrypted virtual disk in the Cloud.

177
Q

An organization needs to design and deploy a large-scale data storage solution that will be highly durable and highly flexible with respect to the type and structure of data being stored. The data to be stored will be sent or generated from a variety of sources and must be persistently available for access and processing by multiple applications.
What is the most cost-effective technique to meet these requirements?
A. Use Amazon Simple Storage Service (S3) as the actual data storage system, coupled with appropriate tools for ingestion/acquisition of data and for subsequent processing and querying.
B. Deploy a long-running Amazon Elastic MapReduce (EMR) cluster with Amazon Elastic Block Store (EBS) volumes for persistent HDFS storage and appropriate Hadoop ecosystem tools for processing and querying.
C. Use Amazon Redshift with data replication to Amazon Simple Storage Service (S3) for comprehensive durable data storage, processing and querying.
D. Launch an Amazon Relational Database Service (RDS) instance, and use the enterprise grade and capacity of the Amazon Aurora engine for storage, processing, and querying.

A

C. Use Amazon Redshift with data replication to Amazon Simple Storage Service (S3) for comprehensive durable data storage, processing and querying.

178
Q
Can I initiate a "forced failover" for my Oracle Multi-AZ DB Instance deployment?
A.	Yes
B.	Only in certain regions
C.	Only in VPC
D.	No
A

A. Yes

179
Q

Are you able to integrate a multi-factor token service with the AWS Platform?

A. No, you cannot integrate multi-factor token devices with the AWS platform.
B. Yes, you can integrate private multi-factor token devices to authenticate users to the AWS platform.
C. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform.

A

C. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform.

180
Q
What does Amazon SES stand for?
A.  Simple Elastic Server 
B.  Simple Email Service
C.	Software Email Solution
D.	Software Enabled Server
A

B. Simple Email Service

181
Q

An AWS customer is deploying a web application that is composed of a front-end running on Amazon EC2 and of confidential data that is stored on Amazon S3. The customer's security policy requires that all access operations to this sensitive data be authenticated and authorized by a centralized access management system that is operated by a separate security team. In addition, the web application team that owns and administers the EC2 web front-end instances is prohibited from having any ability to access the data in a way that circumvents this centralized access management system. Which of the following configurations will support these requirements?
A. Encrypt the data on Amazon S3 using a CloudHSM that is operated by the separate security team. Configure the web application to integrate with the CloudHSM for decrypting approved data access operations for trusted end-users.
B. Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users STS tokens entitling the download of approved data directly from Amazon S3
C. Have the separate security team create an IAM role that is entitled to access the data on Amazon S3. Have the web application team provision their instances with this role while denying their IAM users access to the data on Amazon S3
D. Configure the web application to authenticate end-users against the centralized access management system using SAML. Have the end-users authenticate to IAM using their SAML token and download the approved data directly from S3.

A

B. Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users STS tokens entitling the download of approved data directly from Amazon S3

182
Q

A company is centralizing a large number of unencrypted small files from multiple Amazon S3 buckets. The company needs to verify that the files contain the same data after centralization.

Which method meets the requirements?

A. Compare the S3 ETags from the source and destination objects
B. Call the S3 CompareObjects API for the source and destination objects
C. Place a HEAD request against the source and destination objects, comparing the SIGv4 headers
D. Compare the size of the source and destination objects

A

A. Compare the S3 ETags from the source and destination objects
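Answer A works because, for an object uploaded in a single PUT (and not encrypted with SSE-KMS), the ETag is the hex MD5 digest of the body. A minimal sketch with hypothetical helper names:

```python
import hashlib

# For single-PUT, non-KMS objects, S3's ETag equals the hex MD5 of the
# body, so equal ETags imply equal content. Multipart-upload ETags carry
# a "-<parts>" suffix and are NOT a plain MD5 of the whole object.

def simple_put_etag(body: bytes) -> str:
    """ETag S3 assigns to a non-multipart, non-KMS object."""
    return hashlib.md5(body).hexdigest()

def same_content(etag_src: str, etag_dst: str) -> bool:
    # The API returns ETags wrapped in double quotes; normalize first.
    return etag_src.strip('"') == etag_dst.strip('"')
```

If either side was uploaded via multipart with different part sizes, the ETags differ even for identical content; re-uploading with matching part sizes (or a byte comparison) is then needed.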

183
Q

An organization would like to run analytics on their Elastic Load Balancing logs stored in Amazon S3 and join this data with other tables in Amazon S3. The users are currently using a BI tool connecting with JDBC and would like to keep using this BI tool.
Which solution would result in the LEAST operational overhead?
A. Trigger a Lambda function when a new log file is added to the bucket to transform and load it into Amazon Redshift. Run the VACUUM command on the Amazon Redshift cluster every night.
B. Launch a long-running Amazon EMR cluster that continuously downloads and transforms new files from Amazon S3 into its HDFS storage. Use Presto to expose the data through JDBC.
C. Trigger a Lambda function when a new log file is added to the bucket to transform and move it to another bucket with an optimized data structure. Use Amazon Athena to query the optimized bucket.
D. Launch a transient Amazon EMR cluster every night that transforms new log files and loads them into Amazon Redshift.

A

C. Trigger a Lambda function when a new log file is added to the bucket to transform and move it to another bucket with an optimized data structure. Use Amazon Athena to query the optimized bucket.

184
Q

A user is planning to use AWS RDS with MySQL. For which of the below mentioned services will the user not pay?

A. Data transfer
B. RDS CloudWatch metrics
C. Data storage
D. I/O requests per month

A

B. RDS CloudWatch metrics

185
Q

A clinical trial will rely on medical sensors to remotely assess patient health. Each physician who participates in the trial requires visual reports each morning. The reports are built from aggregations of all the sensor data taken each minute.

What is the most cost-effective solution for creating this visualization each day?

A. Use Kinesis Aggregators Library to generate reports for reviewing the patient sensor data and generate a QuickSight visualization on the new data each morning for the physician to review
B. Use a Transient EMR cluster that shuts down after use to aggregate the patient sensor data each night and generate a QuickSight visualization on the new data each morning for the physician to review
C. Use Spark streaming on EMR to aggregate the sensor data coming in every 15 minutes and generate a QuickSight visualization on the new data each morning for the physician to review
D. Use an EMR cluster to aggregate the patient sensor data each night and provide Zeppelin notebooks that look at the new data residing on the cluster each morning

A

A. Use Kinesis Aggregators Library to generate reports for reviewing the patient sensor data and generate a QuickSight visualization on the new data each morning for the physician to review

D. Use an EMR cluster to aggregate the patient sensor data each night and provide Zeppelin notebooks that look at the new data residing on the cluster each morning

186
Q
If I want my instance to run on a single-tenant hardware, which value do I have to set the instance's tenancy attribute to?
A.	dedicated
B.	isolated
C.	one
D.	reserved
A

A. dedicated

187
Q

The location of instances is ____________
A. Regional
B. based on Availability Zone
C. Global

A

B. based on Availability Zone

188
Q

The department of transportation for a major metropolitan area has placed sensors on roads at key locations around the city. The goal is to analyze the flow of traffic and notifications from emergency services to identify potential issues and to help planners correct trouble spots.

A data engineer needs a scalable and fault-tolerant solution that allows planners to respond to issues within 30 seconds of their occurrence.

Which solution should the data engineer choose?

A. Collect the sensor data with Amazon Kinesis Firehose and store it in Amazon Redshift for analysis.
Collect emergency services events with Amazon SQS and store in Amazon DynamoDB for analysis

B. Collect the sensor data with Amazon SQS and store in Amazon DynamoDB for analysis.
Collect emergency services events with Amazon Kinesis Firehose and store in Amazon Redshift for analysis

C. Collect both sensor data and emergency services events with Amazon Kinesis Streams and use Amazon DynamoDB for analysis

D. Collect both sensor data and emergency services events with Amazon Kinesis Firehose and use Amazon Redshift for analysis

A

A. Collect the sensor data with Amazon Kinesis Firehose and store it in Amazon Redshift for analysis.
Collect emergency services events with Amazon SQS and store in Amazon DynamoDB for analysis

189
Q

Customers have recently been complaining that your web application has randomly stopped responding. During a deep dive of your logs, the team has discovered a major bug in your Java web application. This bug is causing a memory leak that eventually causes the application to crash.
Your web application runs on Amazon EC2 and was built with AWS CloudFormation.
Which techniques should you use to help detect these problems faster, as well as help eliminate the server’s unresponsiveness?
Choose 2 answers
A. Update your AWS CloudFormation configuration and enable a CustomResource that uses cfn-signal to detect memory leaks
B. Update your CloudWatch metric granularity config for all Amazon EC2 memory metrics to support five-second granularity. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory becomes too large
C. Update your AWS CloudFormation configuration to take advantage of Auto Scaling groups. Configure an Auto Scaling group policy to trigger off your custom CloudWatch metrics
D. Create a custom CloudWatch metric that you push your JVM memory usage to. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory usage becomes too large
E. Update your AWS CloudFormation configuration to take advantage of CloudWatch metrics Agent. Configure the CloudWatch Metrics Agent to monitor memory usage and trigger an Amazon SNS alarm

A

C. Update your AWS CloudFormation configuration to take advantage of Auto Scaling groups. Configure an Auto Scaling group policy to trigger off your custom CloudWatch metrics

D. Create a custom CloudWatch metric that you push your JVM memory usage to. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory usage becomes too large
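The custom metric in answer D amounts to building a `MetricData` entry and publishing it. A sketch of the payload one might pass to boto3's `cloudwatch.put_metric_data`; the namespace, metric name, and dimension below are illustrative assumptions, and the network call itself is omitted so the sketch stays self-contained.

```python
# Build the MetricData entry for a custom JVM heap metric (answer D).
# In practice: boto3.client("cloudwatch").put_metric_data(
#     Namespace="MyApp/JVM", MetricData=[jvm_memory_metric(...)])
# Namespace/metric/dimension names here are hypothetical.

def jvm_memory_metric(instance_id: str, heap_used_mb: float) -> dict:
    return {
        "MetricName": "JVMHeapUsed",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Unit": "Megabytes",
        "Value": heap_used_mb,
    }

# A CloudWatch alarm on this metric (e.g. JVMHeapUsed > 3500 MB for
# several periods) would then notify an SNS topic that pages the team.
```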

190
Q

You run a small online consignment marketplace. Interested sellers complete an online application in order to allow them to sell their products on your website. Once approved, they can list their product using a custom interface. From that point, you manage the shopping cart process so that when a buyer decides to buy a product, you handle the billing and coordinate the shipping. Part of this process requires sending emails to the buyer and the seller at different stages. Your system has been running on AWS for a few months. Occasionally, products are shipped before payment has cleared and emails are sent out of order. Furthermore, sometimes credit cards are being charged twice.
How can you resolve these problems?
A. Use the Amazon Simple Queue Service (SQS), and use a different set of workers for each task
B. Use the Amazon Simple Workflow Service (SWF), and use a different set of workers for each task.
C. Use the Simple Email Service (SES) to control the correct order of email delivery
D. Use the AWS Data Pipeline service to control the process flow of the various tasks
E. Use the Amazon Simple Queue Service (SQS), and use a single set of workers for each task

A

E. Use the Amazon Simple Queue Service (SQS), and use a single set of workers for each task

191
Q

A company receives data sets coming from external providers on Amazon S3. Data sets from different providers are dependent on one another. Data sets will arrive at different times and in no particular order.
A data architect needs to design a solution that enables the company to do the following:
• Rapidly perform cross data set analysis as soon as the data becomes available
• Manage dependencies between data sets that arrives at different times
Which architecture strategy offers a scalable and cost-effective solution that meets these requirements?
A. Maintain data dependency information in Amazon RDS for MySQL. Use an AWS Data Pipeline job to load an Amazon EMR Hive table based on task dependencies and event notification triggers in Amazon S3
B. Maintain data dependency information in an Amazon DynamoDB table. Use Amazon SNS and event notification to publish data to a fleet of Amazon EC2 workers. Once the task dependencies have been resolved process the data with Amazon EMR
C. Maintain data dependency information in an Amazon ElastiCache Redis cluster. Use Amazon S3 event notifications to trigger an AWS Lambda function that maps the S3 object to Redis. Once the dependencies have been resolved, process the data with Amazon EMR
D. Maintain data dependency information in an Amazon DynamoDB table. Use Amazon S3 event notifications to trigger an AWS Lambda function that maps the S3 object to the task associated with it in DynamoDB. Once all task dependencies have been resolved process the data with Amazon EMR

A

C. Maintain data dependency information in an Amazon ElastiCache Redis cluster. Use Amazon S3 event notifications to trigger an AWS Lambda function that maps the S3 object to Redis. Once the dependencies have been resolved, process the data with Amazon EMR

192
Q
There are thousands of text files on Amazon S3. The total size of the files is 1 PB. The files contain retail order information for the past 2 years. A data engineer needs to run multiple interactive queries to manipulate the data. The data engineer has AWS access to spin up an Amazon EMR cluster. The data engineer needs to use an application on the cluster to process this data and return the results in an interactive time frame. Which application on the cluster should the data engineer use?
A.	Oozie
B.	Apache Pig with Tachyon
C.	Apache Hive
D.	Presto
A

C. Apache Hive

193
Q

An administrator needs to design a distribution strategy for a star schema in a Redshift cluster. The administrator needs to determine the optimal distribution style for the tables in the Redshift schema.
In which three circumstances would choosing Key-based distribution be most appropriate? (Select three)
A. When the administrator needs to optimize a large, slowly changing dimension table
B. When the administrator needs to reduce cross-node traffic
C. When the administrator needs to optimize the fact table for parity with the number of slices
D. When the administrator needs to balance data distribution and collocation of data
E. When the administrator needs to take advantage of data locality on a local node of joins and aggregates

A

A. When the administrator needs to optimize a large, slowly changing dimension table

D. When the administrator needs to balance data distribution and collocation of data

E. When the administrator needs to take advantage of data locality on a local node of joins and aggregates

194
Q

When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?
A. Data is unavailable until the instance is restarted
B. Data is automatically deleted
C. Data is automatically saved as an EBS snapshot
D. Data is automatically saved as an EBS volume

A

B. Data is automatically deleted

195
Q

A travel website needs to present a graphical quantitative summary of its daily bookings to website visitors for marketing purposes. The website has millions of visitors per day, but wants to control costs by implementing the least-expensive solution for this visualization. What is the most cost-effective solution?
A. Generate a static graph with a transient EMR cluster daily, and store it in Amazon S3
B. Generate a graph using MicroStrategy backed by a transient EMR cluster
C. Implement a Jupyter front-end provided by a continuously running EMR cluster leveraging spot instances for task nodes
D. Implement a Zeppelin application that runs on a long-running EMR cluster

A

A. Generate a static graph with a transient EMR cluster daily, and store it in Amazon S3

196
Q

A system needs to collect on-premises application spool files into a persistent storage layer in AWS. Each spool file is 2 KB. The application generates 1M files per hour. Each source file is automatically deleted from the local server after one hour. What is the most cost-efficient option to meet these requirements?
A. Write file contents to an Amazon DynamoDB table
B. Copy files to Amazon S3 standard storage
C. Write file content to Amazon ElastiCache
D. Copy files to Amazon S3 Infrequent Access storage

A

C. Write file content to Amazon ElastiCache
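The workload's scale is worth putting into numbers: with millions of tiny files, per-request charges, not storage volume, dominate the cost comparison. A back-of-the-envelope sketch; the per-PUT price used is an illustrative assumption, not a current AWS rate.

```python
# Back-of-the-envelope for the workload: 1M files/hour at 2 KB each.
# The S3 PUT price below is an assumed illustrative figure.
FILES_PER_HOUR = 1_000_000
FILE_KB = 2

# Data volume is tiny: under 2 GB per hour.
data_gb_per_hour = FILES_PER_HOUR * FILE_KB / (1024 * 1024)

# Request volume is huge: at an assumed $0.005 per 1,000 PUTs,
# request fees alone would run to dollars per hour.
S3_PUT_PER_1000 = 0.005
s3_put_cost_per_hour = FILES_PER_HOUR / 1000 * S3_PUT_PER_1000
```

This is why the candidate stores are compared on write-path pricing (S3 PUT requests vs. DynamoDB write capacity) rather than on storage capacity.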

197
Q

An Administrator needs to design the event log storage architecture for events from mobile devices. The event data will be processed by an Amazon EMR cluster daily for aggregated reporting and analytics before being archived.

How should the administrator recommend storing the log data?

A. Create an Amazon S3 bucket and write log data into folders by device. Execute the EMR job on the device folders
B. Create an Amazon DynamoDB table partitioned on the device and sorted on date, and write log data to the table. Execute the EMR job on the Amazon DynamoDB table
C. Create an Amazon S3 bucket and write data into folders by day. Execute the EMR job on the daily folder
D. Create an Amazon DynamoDB table partitioned on EventID, write log data to table. Execute the EMR job on the table

A

A. Create an Amazon S3 bucket and write log data into folders by device. Execute the EMR job on the device folders

198
Q

A large grocery distributor receives daily depletion reports from the field in the form of gzip archives of CSV files uploaded to Amazon S3. The files range from 500MB to 5GB. These files are processed daily by an EMR job.

Recently it has been observed that the file sizes vary, and the EMR jobs take too long. The distributor needs to tune and optimize the data processing workflow with this limited information to improve the performance of the EMR job.

Which recommendation should an administrator provide?

A. Reduce the HDFS block size to increase the number of task processors
B. Use bzip2 or Snappy rather than gzip for the archives
C. Decompress the gzip archives and store the data as CSV files
D. Use Avro rather than gzip for the archives

A

B. Use bzip2 or Snappy rather than gzip for the archives
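The reason answer B helps is that gzip is not a splittable codec: a 5 GB gzip archive is consumed by a single map task, while a splittable codec (e.g. bzip2) can be split on block boundaries. A simplified model, with an assumed 128 MB block size:

```python
# Simplified model of codec splittability on EMR/Hadoop. gzip streams
# cannot be split, so one archive = one map task regardless of size;
# splittable codecs yield roughly one task per block. Block size assumed.
BLOCK_MB = 128

def map_tasks(file_mb: int, splittable: bool) -> int:
    if not splittable:
        return 1                    # whole file read by one mapper
    return -(-file_mb // BLOCK_MB)  # ceil division: one task per block

# A 5 GB gzip archive -> 1 task; the same data in bzip2 -> 40 tasks.
```

With variable file sizes, the non-splittable case also makes runtime track the largest file rather than the cluster size, which matches the observed slow jobs.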

199
Q

You need to design a VPC for a web application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and an RDS database. The entire infrastructure must be distributed over 2 Availability Zones.
Which VPC configuration works while assuring the database is not available from the Internet?
A. One public subnet for the ELB, one public subnet for the web servers, and one private subnet for the database
B. One public subnet for the ELB, two private subnets for the web servers, and two private subnets for RDS
C. Two public subnets for the ELB, two private subnets for the web servers, and two private subnets for RDS
D. Two public subnets for the ELB, two public subnets for the web servers, and two public subnets for RDS

A

C. Two public subnets for the ELB, two private subnets for the web servers, and two private subnets for RDS

200
Q

A media advertising company handles a large number of real-time messages sourced from over 200 websites. Processing latency must be kept low. Based on calculations, a 60-shard Amazon Kinesis stream is more than sufficient to handle the maximum data throughput, even with traffic spikes. The company also uses an Amazon Kinesis Client Library (KCL) application running on Amazon Elastic Compute Cloud (EC2) managed by an Auto Scaling group. Amazon CloudWatch indicates an average of 25% CPU and a modest level of network traffic across all running servers.
The company reports a 150% to 200% increase in latency of processing messages from Amazon Kinesis during peak times. There are NO reports of delay from the sites publishing to Amazon Kinesis.

What is the appropriate solution to address the latency?

A. Increase the number of shards in the Amazon Kinesis stream to 80 for greater concurrency
B. Increase the size of the Amazon EC2 instances to increase network throughput
C. Increase the minimum number of instances in the Auto Scaling group
D. Increase Amazon DynamoDB throughput on the checkpointing table

A

D. Increase Amazon DynamoDB throughput on the checkpointing table
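Answer D matters because the KCL checkpoints each shard's progress to a DynamoDB lease table; if that table's write throughput is under-provisioned, checkpoint calls are throttled and processing latency rises even while CPU stays low. A rough sizing sketch; the checkpoint rate and headroom factor are assumptions.

```python
import math

# Rough sizing of the KCL lease/checkpoint table: each checkpoint is
# roughly one DynamoDB write, so required write capacity scales with
# shard count and checkpoint frequency. Headroom factor is assumed.
def required_wcu(shards: int, checkpoints_per_shard_per_sec: float,
                 headroom: float = 1.5) -> int:
    return math.ceil(shards * checkpoints_per_shard_per_sec * headroom)

# 60 shards checkpointing once per second with 50% headroom:
# required_wcu(60, 1.0) -> 90
```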

201
Q

You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance?

A. Use multi-part upload.
B. Add a random prefix to the key names.
C. Amazon S3 will automatically manage performance at this scale.
D. Use a predictable naming scheme, such as sequential numbers or date-time sequences, in the key names

A

B. Add a random prefix to the key names.
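Historically, S3 partitioned keys by prefix, so sequential key names concentrated PUTs on one partition; a short random-looking prefix spread the load. A minimal sketch of the scheme from answer B; the 4-character hash prefix length is an assumption.

```python
import hashlib

# Random-prefix key scheme (answer B): derive a short hash prefix from
# the key so uploads with sequential names land on different S3 key
# partitions. Prefix length (4 hex chars) is an illustrative choice.
def prefixed_key(key: str) -> str:
    prefix = hashlib.md5(key.encode()).hexdigest()[:4]
    return f"{prefix}/{key}"
```

A hash of the key (rather than a random value) keeps the prefix deterministic, so the object can be found again from the original name alone.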

202
Q
Can I attach more than one policy to a particular entity?
A.	Yes always
B.	Only if within GovCloud
C.	No
D.	Only if within VPC
A

A. Yes always

203
Q
When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers
A.	Amazon Simple Storage Service
B.	Amazon Elastic Load Balancing
C.	Amazon Elastic Compute Cloud
D.	Amazon Simple Notification Service
E.	Amazon DynamoDB
A

B. Amazon Elastic Load Balancing

C. Amazon Elastic Compute Cloud

204
Q

A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use the data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer's requirements?
A. Enable access logs on the load balancer
B. Enable AWS CloudTrail for the load balancer
C. Enable Amazon CloudWatch metrics on the load balancer
D. Install the Amazon CloudWatch Logs agent on the load balancer

A

B. Enable AWS CloudTrail for the load balancer

205
Q

A user is receiving a notification from the RDS DB whenever there is a change in the DB security group. The user wants to stop receiving these notifications for one month only, and thus does not want to delete the notification. How can the user configure this?

A. Change the Disable button for notification to “Yes” in the RDS console
B. Set the send mail flag to false in the DB event notification console
C. The only option is to delete the notification from the console
D. Change the Enable button for notification to “No” in the RDS console

A

D. Change the Enable button for notification to “No” in the RDS console

206
Q

Do the system resources on the Micro instance meet the recommended configuration for Oracle?

A. Yes completely
B. Yes but only for certain situations
C. Not in any circumstance

A

B. Yes but only for certain situations

207
Q

You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application.
Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers

A. Set permissions on the object to public read during upload
B. Configure the bucket ACL to set all objects to public read
C. Configure the bucket policy to set all objects to public read
D. Use AWS Identity and Access Management roles to set the bucket to public read
E. Amazon S3 objects default to public read, so no action is needed

A

B. Configure the bucket ACL to set all objects to public read

C. Configure the bucket policy to set all objects to public read

208
Q

A company with a support organization needs support engineers to be able to search historic cases to provide fast responses on new issues raised. The company has forwarded all support messages into an Amazon Kinesis stream. This meets a company objective of using only managed services to reduce operational overhead.

The company needs an appropriate architecture that allows support engineers to search historic cases so they can find similar issues and their associated responses.

Which AWS Lambda action is most appropriate?

A. Ingest and index the content into an Amazon Elasticsearch domain
B. Stem and tokenize the input and store the results into Amazon ElastiCache
C. Write data as JSON into Amazon DynamoDB with primary and secondary indexes
D. Aggregate feedback is Amazon S3 using a columnar format with partitioning

A

A. Ingest and index the content into an Amazon Elasticsearch domain

209
Q

A data engineer is about to perform a major upgrade to the DDL contained within an Amazon Redshift cluster to support a new data warehouse application. The upgrade scripts will include user permission updates, view and table structure changes as well as additional loading and data manipulation tasks. The data engineer must be able to restore the database to its existing state in the event of issues.

Which action should be taken prior to performing this upgrade task?

A. Run an UNLOAD command for all data in the warehouse and save it to S3
B. Create a manual snapshot of the Amazon Redshift cluster
C. Make a copy of the automated snapshot on the Amazon Redshift cluster
D. Call the waitForSnapshotAvailable command from either the AWS CLI or an AWS SDK

A

B. Create a manual snapshot of the Amazon Redshift cluster
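A hedged sketch of taking that manual snapshot with boto3's `create_cluster_snapshot`. The cluster name is a placeholder; only the pure name-building helper is executed here, while the AWS call appears in a comment:

```python
import datetime

def snapshot_identifier(cluster_id: str, now: datetime.datetime) -> str:
    """Build a unique, sortable name for the pre-upgrade manual snapshot."""
    return f"{cluster_id}-pre-upgrade-{now:%Y%m%d-%H%M%S}"

# Taking the snapshot with boto3 (not executed here):
#   redshift = boto3.client("redshift")
#   redshift.create_cluster_snapshot(
#       SnapshotIdentifier=snapshot_identifier("my-cluster",
#                                              datetime.datetime.utcnow()),
#       ClusterIdentifier="my-cluster",
#   )
```

Manual snapshots, unlike automated ones, are retained until explicitly deleted, which is why they suit a pre-upgrade restore point.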

210
Q

A company needs a churn prevention model to predict which customers will NOT renew their yearly subscription to the company’s service. The company plans to provide these customers with a promotional offer. A binary classification model that uses Amazon Machine Learning is required.

On which basis should this binary classification model be built?

A. User profiles (age, gender, income, occupation)
B. Last user session
C. Each user’s time series of events in the past 3 months
D. Quarterly results

A

C. Each user’s time series of events in the past 3 months

211
Q

You are managing the AWS account of a big organization. The organization has more than 1,000 employees and wants to provide access to various services for most of the employees. Which of the below mentioned options is the best possible solution in this case?

A. The user should create a separate IAM user for each employee and provide access to them as per the policy
B. The user should create an IAM role and attach STS with the role. The user should attach that role to the EC2 instance and setup AWS authentication on that server
C. The user should create IAM groups as per the organization’s departments and add each user to the group for better access control
D. Attach an IAM role with the organization’s authentication service to authorize each user for various AWS services

A

D. Attach an IAM role with the organization’s authentication service to authorize each user for various AWS services

212
Q

Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers

A. Each subnet maps to a single Availability Zone
B. A CIDR block mask of /25 is the smallest range supported
C. Instances in a private subnet can communicate with the internet only if they have an Elastic IP.
D. By default, all subnets can route between each other, whether they are private or public
E. Each subnet spans at least 2 Availability zones to provide a high-availability environment

A

A. Each subnet maps to a single Availability Zone

D. By default, all subnets can route between each other, whether they are private or public

213
Q

A company operates an international business served from a single AWS region. The company wants to expand into a new country. The regulator for that country requires the Data Architect to maintain a log of financial transactions in the country within 24 hours of a production transaction. The production application is latency-insensitive. The new country contains another AWS region.

What is the most cost-effective way to meet this requirement?

A. Use CloudFormation to replicate the production application to the new region
B. Use Amazon CloudFront to serve application content locally in the country; Amazon CloudFront logs will satisfy the requirement
C. Continue to serve customers from the existing region while using Amazon Kinesis to stream transaction data to the regulator
D. Use Amazon S3 cross-region replication to copy and persist production transaction logs to a bucket in the new country’s region

A

B. Use Amazon CloudFront to serve application content locally in the country; Amazon CloudFront logs will satisfy the requirement

214
Q
Will I be charged if the DB instance is idle?
A. No
B. Yes
C. Only if running in GovCloud
D. Only if running in VPC
A

B. Yes

215
Q
A user is planning to set up notifications on the RDS DB for a snapshot. Which of the below mentioned event categories is not supported by RDS for this snapshot source type?
A. Backup
B. Creation
C. Deletion
D. Restoration
A

A. Backup

216
Q
Which DNS name can only be resolved within Amazon EC2?
A. Internal DNS name
B. External DNS name
C. Global DNS name
D. Private DNS name
A

A. Internal DNS name

217
Q

The Trusted Advisor service provides insight regarding which four categories of an AWS account?
A. Security, fault tolerance, high availability, and connectivity
B. Security, access control, high availability, and performance
C. Performance, cost optimization, security, and fault tolerance
D. Performance, cost optimization, access control, and connectivity

A

C. Performance, cost optimization, security, and fault tolerance

218
Q

Company A operates in Country X. Company A maintains a large dataset of historical purchase orders that contains personal data of their customers in the form of full names and telephone numbers. The dataset consists of five text files, 1 TB each. Currently the dataset resides on-premises due to legal requirements for storing personal data in-country. The research and development department needs to run a clustering algorithm on the dataset and wants to use the Elastic MapReduce service in the closest AWS region. Due to geographic distance, the minimum latency between the on-premises system and the closest AWS region is 200 ms.
Which option allows Company A to do clustering in the AWS Cloud and meet the legal requirement of maintaining personal data in-country?
A. Anonymize the personal data portions of the dataset and transfer the data files into Amazon S3 in the AWS region. Have the EMR cluster read the dataset using EMRFS.
B. Establish a Direct Connect link between the on-premises system and the AWS region to reduce latency. Have the EMR cluster read the data directly from the on-premises storage system over Direct Connect.
C. Encrypt the data files according to encryption standards of Country X and store them in AWS region in Amazon S3. Have the EMR cluster read the dataset using EMRFS.
D. Use AWS Import/Export Snowball device to securely transfer the data to the AWS region and copy the files onto an EBS volume. Have the EMR cluster read the dataset using EMRFS.

A

B. Establish a Direct Connect link between the on-premises system and the AWS region to reduce latency. Have the EMR cluster read the data directly from the on-premises storage system over Direct Connect.

219
Q

You can use _____ and _____ to help secure the instances in your VPC.
A. security groups and multi-factor authentication
B. security groups and 2-Factor authentication
C. security groups and biometric authentication
D. security groups and network ACLs

A

D. security groups and network ACLs

220
Q

You have been asked to handle a large data migration from multiple Amazon RDS MySQL instances to a DynamoDB table. You have been given a short amount of time to complete the data migration. What will allow you to complete this complex data processing workflow?

A. Create an Amazon Kinesis data stream, pipe in all of the Amazon RDS data, and direct the data toward a DynamoDB table
B. Write a script in your language of choice, install the script on an Amazon EC2 instance, and then use Auto Scaling groups to ensure that the latency of the migration pipeline never exceeds four seconds in any 15-minute period.
C. Write a bash script to run on your Amazon RDS instance that will export data into DynamoDB
D. Create a data pipeline to export Amazon RDS data and import the data into DynamoDB

A

D. Create a data pipeline to export Amazon RDS data and import the data into DynamoDB

221
Q

You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs for your application are currently written to ephemeral storage. Recently your company experienced a major bug in your code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your server to better assist you in troubleshooting the bug.

Which technique should you use to make sure you are able to review your logs after your instances have shut down?

A. Configure the ephemeral policies on your Auto Scaling group to back up on terminate
B. Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate
C. Install the CloudWatch Logs Agent on your AMI, and configure the CloudWatch Logs Agent to stream your logs
D. Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to backup all logs on the ephemeral drive
E. Install the CloudWatch Logs Agent on your AMI. Update your scaling policy to enable automated CloudWatch Logs copy

A

C. Install the CloudWatch Logs Agent on your AMI, and configure the CloudWatch Logs Agent to stream your logs

222
Q
In the context of MySQL, version numbers are organized as MySQL version = X.Y.Z. What does X denote here?
A. release level
B. minor version
C. version number
D. major version
A

D. major version
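A tiny sketch extracting that major version from an X.Y.Z string (the helper name is illustrative):

```python
def mysql_major_version(version: str) -> int:
    """For a MySQL version string of the form X.Y.Z, return X (the major version)."""
    return int(version.split(".")[0])
```
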

223
Q

When you put objects in Amazon S3, what is the indication that an object was successfully stored?
A. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful
B. A success code is inserted into the S3 object metadata
C. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
D. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum

A

A. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful
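A sketch of that verification: compare the ETag returned with the HTTP 200 response against a locally computed MD5. This assumes a simple single-part PUT; multipart-upload ETags are not plain MD5 digests:

```python
import hashlib

def md5_etag(body: bytes) -> str:
    """MD5 hex digest as S3 reports it in the ETag for a single-part PUT."""
    return hashlib.md5(body).hexdigest()

def upload_verified(body: bytes, returned_etag: str) -> bool:
    """Treat the PUT as successful only when the response ETag matches
    our own checksum of the bytes we sent (ETags arrive quoted)."""
    return returned_etag.strip('"') == md5_etag(body)
```
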

224
Q

Is decreasing the storage size of a DB Instance permitted?
A. Depends on the RDBMS used
B. Yes
C. No

A

B. Yes

225
Q

A user is trying to set up a recurring Auto Scaling process. The user has set up one process to scale up every day at 8 AM and scale down at 7 PM. The user is trying to set up another recurring process which scales up on the 1st of every month at 8 AM and scales down the same day at 7 PM. What will Auto Scaling do in this scenario?
A. Auto Scaling will execute both processes but will add just one instance on the 1st
B. Auto Scaling will add two instances on the 1st of the month
C. Auto Scaling will schedule both the processes but execute only one process randomly
D. Auto Scaling will throw an error since there is a conflict in the schedule of two separate Auto Scaling Processes

A

D. Auto Scaling will throw an error since there is a conflict in the schedule of two separate Auto Scaling Processes

226
Q

A web-hosting company is building a web analytics tool to capture clickstream data from all of the websites hosted within its platform and to provide near-real-time business intelligence. This entire system is built on AWS services. The web-hosting company is interested in using Amazon Kinesis to collect this data and perform sliding-window analytics. What is the most reliable and fault-tolerant technique to get each website to send data to Amazon Kinesis with every click?

A. After receiving a request, each web server sends it to Amazon Kinesis using the Amazon Kinesis PutRecord API. Use the SessionID as a partition key and set up a loop to retry until a successful response is received
B. After receiving a request, each web server sends it to Amazon Kinesis using the Amazon Kinesis Producer Library addRecord method
C. Each web server buffers the requests until the count reaches 500 and sends them to Amazon Kinesis using the Amazon Kinesis PutRecord API call
D. After receiving a request, each web server sends it to Amazon Kinesis using the Amazon Kinesis PutRecord API. Use the exponential backoff algorithm for retries until a successful response is received

A

A. After receiving a request, each web server sends it to Amazon Kinesis using the Amazon Kinesis PutRecord API. Use the SessionID as a partition key and set up a loop to retry until a successful response is received
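The retry-until-success loop from option A can be sketched generically. The Kinesis call itself (stream name, partition key) is hypothetical and appears only in a comment; the executed part is a pure retry helper:

```python
import time

def retry_until_success(operation, max_attempts=5, base_delay=0.0):
    """Call `operation` until it returns without raising, retrying on
    any exception. `base_delay` of 0 keeps this sketch fast; a real
    producer would sleep between attempts (or use the exponential
    backoff described in option D)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay)

# With boto3, wrapping one click event might look like (not executed here):
#   kinesis = boto3.client("kinesis")
#   retry_until_success(lambda: kinesis.put_record(
#       StreamName="clickstream",     # hypothetical stream name
#       Data=b'{"page": "/home"}',
#       PartitionKey=session_id,      # SessionID as the partition key
#   ))
```

Using the SessionID as the partition key spreads traffic across shards while keeping one session's events ordered on a single shard.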

227
Q

When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system, where previously terminated Amazon EC2 resources are still showing as active.
What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources with your configuration management systems?
Choose 2 answers

A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system
B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system
C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation
D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system
E. Use Amazon Simple Workflow Service (SWF) to maintain an Amazon DynamoDB database that contains a whitelist of instances that have been previously launched, and allow the Amazon SWF worker to remove information from the configuration management system

A

A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system

D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system

228
Q

An organization’s data warehouse contains sales data for reporting purposes. Data governance policies prohibit staff from accessing the customers’ credit card numbers.
How can these policies be adhered to and still allow a Data Scientist to group transactions that use the same credit card number?
A. Store a cryptographic hash of the credit card number.
B. Encrypt the credit card number with a symmetric encryption key, and give the key only to the authorized Data Scientist.
C. Mask the credit card numbers to only show the last four digits of the credit card number.
D. Encrypt the credit card number with an asymmetric encryption key and give the decryption key only to the authorized Data Scientist.

A

C. Mask the credit card numbers to only show the last four digits of the credit card number.
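The masking approach in the answer can be sketched as follows; the separator handling is illustrative:

```python
def mask_card_number(pan: str) -> str:
    """Mask a card number so only the last four digits remain visible."""
    digits = pan.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]
```

Note that masking alone does not make last-four values unique across customers, which is why option A (a cryptographic hash) is often discussed as the grouping-friendly alternative.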

229
Q

A user wants to make it so that whenever the CPU utilization of his AWS EC2 instance is above 90%, the red light in his bedroom turns on. Which of the below mentioned AWS services is helpful for this purpose?

A. AWS CloudWatch + AWS SES
B. AWS CloudWatch + AWS SNS
C. It is not possible to configure the light with the AWS infrastructure services
D. AWS CloudWatch and a dedicated software turning on the light

A

B. AWS CloudWatch + AWS SNS
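The CloudWatch-to-SNS wiring can be sketched with boto3-style parameters. The instance ID and topic ARN are placeholders, and only a pure parameter-building helper runs here; the actual API call is in a comment:

```python
def cpu_alarm_params(instance_id: str, topic_arn: str, threshold: float = 90.0) -> dict:
    """Parameters for CloudWatch put_metric_alarm: fire when average
    CPUUtilization exceeds `threshold`, then notify the SNS topic."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # evaluate over 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # SNS delivers SMS/email/etc.
    }

# Applying it (not executed here):
#   boto3.client("cloudwatch").put_metric_alarm(
#       **cpu_alarm_params("i-0123456789abcdef0",
#                          "arn:aws:sns:us-east-1:123456789012:cpu-alerts"))
```
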

230
Q

A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement?
A. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
B. Enable server access logging for all required Amazon S3 buckets.
C. Enable the Requester Pays option to track access via AWS Billing.
D. Enable Amazon S3 event notifications for Put and Post.

A

B. Enable server access logging for all required Amazon S3 buckets.

231
Q
What does Amazon RDS stand for?
A. Regional Data Server.
B. Relational Database Service.
C. Nothing.
D. Regional Database Service.
A

B. Relational Database Service.

232
Q
A user is launching an AWS RDS with MySQL. Which of the below mentioned options allows the user to configure the InnoDB engine parameters?
A. Options group
B. Engine parameters
C. Parameter groups
D. DB parameters
A

C. Parameter groups

233
Q

A solutions architect for a logistics organization ships packages from thousands of suppliers to end customers.
The architect is building a platform where suppliers can view the status of one or more of their shipments.
Each supplier can have multiple roles that will only allow access to specific fields in the resulting information.

Which strategy allows the appropriate level of access control and requires the LEAST amount of management work?

A. Send the tracking data to Amazon Kinesis Streams. Use AWS Lambda to store the data in an Amazon DynamoDB Table. Generate temporary AWS credentials for the supplier’s users with AWS STS, specifying fine-grained security policies to limit access only to their application data.
B. Send the tracking data to Amazon Kinesis Firehose. Use Amazon S3 notifications and AWS Lambda to prepare files in Amazon S3 with appropriate data for each supplier’s roles. Generate temporary AWS credentials for the suppliers’ users with AWS STS. Limit access to the appropriate files through security policies.
C. Send the tracking data to Amazon Kinesis Streams. Use Amazon EMR with Spark Streaming to store the data in HBase. Create one table per supplier. Use HBase Kerberos integration with the suppliers’ users. Use HBase ACL-based security to limit access to the roles to their specific table and columns.
D. Send the tracking data to Amazon Kinesis Firehose. Store the data in an Amazon Redshift cluster.
Create views for the supplier’s users and roles. Allow suppliers access to the Amazon Redshift cluster using a user limited to the application view.

A

B. Send the tracking data to Amazon Kinesis Firehose. Use Amazon S3 notifications and AWS Lambda to prepare files in Amazon S3 with appropriate data for each supplier’s roles. Generate temporary AWS credentials for the suppliers’ users with AWS STS. Limit access to the appropriate files through security policies.
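A sketch of the STS-plus-security-policy idea from option B. The bucket name, prefix convention, and supplier ID are assumptions; only the pure policy-building helper executes, and the STS call appears in a comment:

```python
import json

def supplier_policy(bucket: str, supplier_id: str) -> str:
    """Inline session policy limiting a federated supplier user to
    reading only its own prefix (layout is hypothetical)."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/suppliers/{supplier_id}/*",
        }],
    })

# Issuing the temporary credentials with boto3 (not executed here):
#   boto3.client("sts").get_federation_token(
#       Name=f"supplier-{supplier_id}",
#       Policy=supplier_policy("shipments", supplier_id))
```
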

234
Q

An Amazon EMR cluster using EMRFS has access to megabytes of data on Amazon S3, originating from multiple unique data sources. The customer needs to query common fields across some of the data sets to be able to perform interactive joins and then display results quickly.

Which technology is most appropriate to enable this capability?
A. Presto
B. MicroStrategy
C. Pig
D. R Studio
A

A. Presto

235
Q

An organization is soliciting public feedback through a web portal that has been deployed to track the number of requests and other important data. As part of reporting and visualization, Amazon QuickSight connects to an Amazon RDS database to visualize the data. Management wants to understand some important metrics about feedback and how the feedback has changed over the last four weeks in a visual representation.

What would be the MOST effective way to represent multiple iterations of an analysis in Amazon QuickSight that would show how the data has changed over the last four weeks?

A. Use the analysis option for data captured in each week and view the data by a date range.
B. Use a pivot table as a visual option to display measured values and weekly aggregate data as a row dimension.
C. Use a dashboard option to create an analysis of the data for each week and apply filters to visualize the data change.
D. Use a story option to preserve multiple iterations of an analysis and play the iterations sequentially.

A

D. Use a story option to preserve multiple iterations of an analysis and play the iterations sequentially.

236
Q

You have an Auto Scaling group associated with an Elastic Load Balancer (ELB). You have noticed that instances launched via the Auto Scaling group are being marked unhealthy due to an ELB health check, but these unhealthy instances are not being terminated.
What do you need to do to ensure that instances marked unhealthy by the ELB will be terminated and replaced?
A. Change the thresholds set on the Auto Scaling group health check
B. Add an Elastic Load Balancing health check to your Auto Scaling group
C. Increase the value for the Health check interval set on the Elastic Load Balancer
D. Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks

A

B. Add an Elastic Load Balancing health check to your Auto Scaling group

237
Q

A customer is collecting clickstream data using Amazon Kinesis and is grouping the events by IP address into 5-minute chunks stored in Amazon S3.

Many analysts in the company use Hive on Amazon EMR to analyze this data. Their queries always reference a single IP address. Data must be optimized for querying based on IP address using Hive running on Amazon EMR. What is the most efficient method to query the data with Hive?

A. Store an index of the files by IP address in the Amazon DynamoDB metadata store for EMRFS
B. Store the Amazon S3 objects with the following naming scheme:
bucketname/source=ip_address/year=yy/month=mm/day=dd/hour=hh/filename
C. Store the data in an HBase table with the IP address as the row key
D. Store the events for an IP address as a single file in Amazon S3 and add metadata with key:Hive_Partitioned_IPAddress

A

A. Store an index of the files by IP address in the Amazon DynamoDB metadata store for EMRFS
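For reference, the Hive-style partitioned key layout described in option B can be built like this. A sketch only; the field order follows the naming scheme quoted in the option, so Hive queries filtered on one IP address would scan just that prefix:

```python
import datetime

def partitioned_key(ip: str, ts: datetime.datetime, filename: str) -> str:
    """S3 object key following option B's layout:
    source=ip/year=yy/month=mm/day=dd/hour=hh/filename"""
    return (f"source={ip}/year={ts:%y}/month={ts:%m}/"
            f"day={ts:%d}/hour={ts:%H}/{filename}")
```
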

238
Q

Does Route 53 support MX Records?
A. Yes.
B. It supports CNAME records, but not MX records.
C. No
D. Only Primary MX records. Secondary MX records are not supported.

A

A. Yes.

239
Q
A sys admin is planning to subscribe to the RDS event notifications. For which of the below mentioned source categories the subscription cannot be configured?
A. DB security group
B. DB snapshot
C. DB options group
D. DB parameter group
A

C. DB options group

240
Q

A real-time bidding company is rebuilding their monolithic application and is focusing on serving real-time data. A large number of reads and writes are generated from thousands of concurrent users who follow items and bid on the company’s sale offers.
The company is experiencing high latency during special event spikes, with millions of concurrent users.
The company needs to analyze and aggregate a part of the data in near real time to feed an internal dashboard.
What is the BEST approach for serving and analyzing data, considering the constraint of the low latency on the highly demanded data?
A. Use Amazon Aurora with Multi Availability Zone and read replicas. Use Amazon ElastiCache in front of the read replicas to serve read-only content quickly. Use the same database as datasource for the dashboard.
B. Use Amazon DynamoDB to store real-time data with Amazon DynamoDB Accelerator (DAX) to serve content quickly. Use Amazon DynamoDB Streams to replay all changes to the table, then process and stream them to Amazon Elasticsearch Service with AWS Lambda.
C. Use Amazon RDS with Multi Availability Zone. Provisioned IOPS EBS volume for storage. Enable up to five read replicas to serve read-only content quickly. Use Amazon EMR with Sqoop to import Amazon RDS data into HDFS for analysis.
D. Use Amazon Redshift with a DC2 node type and a multi-node cluster. Create an Amazon EC2 instance with pgpool installed. Create an Amazon ElastiCache cluster and route read requests through pgpool, and use Amazon Redshift for analysis.

A

D. Use Amazon Redshift with a DC2 node type and a multi-node cluster. Create an Amazon EC2 instance with pgpool installed. Create an Amazon ElastiCache cluster and route read requests through pgpool, and use Amazon Redshift for analysis.

241
Q
Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? Choose 2 answers
A. Email
B. CloudFront distribution
C. File Transfer Protocol
D. Short Message Service
E. Simple Network Management Protocol
A

B. CloudFront distribution

C. File Transfer Protocol

242
Q

An organization has 10,000 devices that generate 10 GB of telemetry data per day, with each record size around 10 KB. Each record has 100 fields, and one field consists of unstructured log data with a “String” data type in the English language. Some fields are required for the real-time dashboard, but all fields must be available for long-term trend generation.
The organization also has 10 PB of previously cleaned and structured data, partitioned by Date, in a SAN that must be migrated to AWS within one month. Currently, the organization does not have any real-time capabilities in their solution. Because of storage limitations in the on-premises data warehouse, selective data is loaded while generating the long-term trend with ANSI SQL queries through JDBC for visualization. In addition to the one-time data loading, the organization needs a cost-effective and real-time solution.
How can these requirements be met? (Choose two.)

A. Use AWS IoT to send data from devices to an Amazon SQS queue. Create a set of workers in an Auto Scaling group and read records in batches from the queue to process and save the data. Fan out to an Amazon SNS topic with an attached AWS Lambda function to filter the requested dataset and save it to Amazon Elasticsearch Service for real-time analytics.
B. Create a Direct Connect connection between AWS and the on-premises data center and copy the data to Amazon S3 using S3 Acceleration. Use Amazon Athena to query the data.
C. Use AWS IoT to send the data from devices to Amazon Kinesis Data Streams with the IoT rules engine. Use one Kinesis Data Firehose stream attached to a Kinesis stream to batch and stream the data partitioned by date. Use another Kinesis Firehose stream attached to the same Kinesis stream to filter out the required fields to ingest into Elasticsearch for real-time analytics.
D. Use AWS IoT to send the data from devices to Amazon Kinesis Data Streams with the IoT rules engine. Use one Kinesis Data Firehose stream attached to a Kinesis stream to stream the data into an Amazon S3 bucket partitioned by date. Attach an AWS Lambda function with the same Kinesis stream to filter out the required fields for ingestion into Amazon DynamoDB for real-time analytics.
E. Use multiple AWS Snowball Edge devices to transfer data to Amazon S3, and use Amazon Athena to query the data.

A

A. Use AWS IoT to send data from devices to an Amazon SQS queue. Create a set of workers in an Auto Scaling group and read records in batches from the queue to process and save the data. Fan out to an Amazon SNS topic with an attached AWS Lambda function to filter the requested dataset and save it to Amazon Elasticsearch Service for real-time analytics.

D. Use AWS IoT to send the data from devices to Amazon Kinesis Data Streams with the IoT rules engine. Use one Kinesis Data Firehose stream attached to a Kinesis stream to stream the data into an Amazon S3 bucket partitioned by date. Attach an AWS Lambda function with the same Kinesis stream to filter out the required fields for ingestion into Amazon DynamoDB for real-time analytics.

243
Q

An organization is currently using an Amazon EMR long-running cluster with the latest Amazon EMR release for analytic jobs and is storing data as external tables on Amazon S3.
The company needs to launch multiple transient EMR clusters to access the same tables concurrently, but the metadata about the Amazon S3 external tables are defined and stored on the long-running cluster.
Which solution will expose the Hive metastore with the LEAST operational effort?
A. Export Hive metastore information to Amazon DynamoDB hive-site classification to point to the Amazon DynamoDB table.
B. Export Hive metastore information to a MySQL table on Amazon RDS and configure the Amazon EMR hive-site classification to point to the Amazon RDS database.
C. Launch an Amazon EC2 instance, install and configure Apache Derby, and export the Hive metastore information to derby.
D. Create and configure an AWS Glue Data Catalog as a Hive metastore for Amazon EMR.

A

B. Export Hive metastore information to a MySQL table on Amazon RDS and configure the Amazon EMR hive-site classification to point to the Amazon RDS database.

244
Q

An online gaming company uses DynamoDB to store user activity logs and is experiencing throttled writes on the company’s DynamoDB tables. The company is NOT consuming close to the provisioned capacity. The table contains a large number of items and is partitioned on user and sorted by date. The table is 200GB and is currently provisioned at 10K WCU and 20K RCU.

Which two additional pieces of information are required to determine the cause of the throttling? (Select Two.)

A. The structure of any GSIs that have been defined on the table
B. CloudWatch data showing consumed and provisioned write capacity when writes are being throttled
C. Application-level metrics showing the average item size and peak update rates for each attribute
D. The structure of any LSIs that have been defined on the table
E. The maximum historical WCU and RCU for the table

A

A. The structure of any GSIs that have been defined on the table

D. The structure of any LSIs that have been defined on the table

245
Q

You run a web application with the following components: an Elastic Load Balancer (ELB), 3 web/application servers, 1 MySQL RDS database with read replicas, and Amazon Simple Storage Service (Amazon S3) for static content. Average response time for users is increasing slowly. Which three CloudWatch RDS metrics will allow you to identify if the database is the bottleneck? Choose 3 answers

A. The number of outstanding IOs waiting to access the disk
B. The amount of write latency
C. The amount of disk space occupied by binary logs on the master.
D. The amount of time a Read Replica DB Instance lags behind the source DB Instance
E. The average number of disk I/O operations per second.

A

A. The number of outstanding IOs waiting to access the disk

B. The amount of write latency

D. The amount of time a Read Replica DB Instance lags behind the source DB Instance

246
Q

An organization uses Amazon Elastic MapReduce (EMR) to process a series of extract-transform-load (ETL) steps that run in sequence. The output of each step must be fully processed in subsequent steps but will not be retained.

Which of the following techniques will meet this requirement most efficiently?

A. Use the EMR File System (EMRFS) to store the outputs from each step as objects in Amazon Simple Storage Service (S3).
B. Use the s3n URI to store the data to be processed as objects in Amazon S3.
C. Define the ETL steps as separate AWS Data Pipeline activities.
D. Load the data to be processed into HDFS and then write the final output to Amazon S3.

A

B. Use the s3n URI to store the data to be processed as objects in Amazon S3.

247
Q

A user is trying to understand AWS SNS. To which of the below mentioned endpoints is SNS unable to send a notification?

A. Email JSON
B. HTTP
C. AWS SQS
D. AWS SES

A

D. AWS SES
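For context, the SNS Subscribe API accepts a fixed set of protocol values, and `ses` is not one of them. A small sketch (the topic and queue ARNs are placeholders):

```python
# Subscription protocols accepted by the SNS Subscribe API; note there is
# no "ses" protocol -- SES is not a valid SNS notification endpoint.
SNS_PROTOCOLS = {"http", "https", "email", "email-json", "sms", "sqs",
                 "application", "lambda"}

def can_notify(protocol):
    """Return True if SNS can deliver to an endpoint of this protocol."""
    return protocol.lower() in SNS_PROTOCOLS

# Example subscribe parameters for an SQS endpoint (ARNs are placeholders):
subscribe_params = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:example-topic",
    "Protocol": "sqs",
    "Endpoint": "arn:aws:sqs:us-east-1:123456789012:example-queue",
}
# With boto3 configured: boto3.client("sns").subscribe(**subscribe_params)
```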

248
Q

An organization is designing an application architecture. The application will have over 100 TB of data and will support transactions that arrive at rates from hundreds per second to tens of thousands per second, depending on the day of the week and time of day. All transaction data must be durably and reliably stored.
Certain read operations must be performed with strong consistency.

Which solution meets these requirements?

A. Use Amazon DynamoDB as the data store and use strongly consistent reads when necessary
B. Use an Amazon Relational Database Service (RDS) instance sized to meet the maximum transaction rate and with the High Availability option enabled.
C. Deploy a NoSQL data store on top of an Amazon Elastic MapReduce (EMR) cluster, and select the HDFS High Durability option.
D. Use Amazon Redshift with synchronous replication to Amazon Simple Storage Service (S3) and row-level locking for strong consistency.

A

A. Use Amazon DynamoDB as the data store and use strongly consistent reads when necessary
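A minimal sketch of requesting a strongly consistent read with DynamoDB's GetItem, assuming a hypothetical `Transactions` table keyed on `TransactionId` (`ConsistentRead` defaults to false, i.e. eventually consistent):

```python
def get_item_params(table, pk_value, consistent=True):
    """Build keyword arguments for dynamodb.get_item().

    Table and key names are hypothetical. A strongly consistent read costs
    twice the read capacity but reflects all previously acknowledged writes.
    """
    return {
        "TableName": table,
        "Key": {"TransactionId": {"S": pk_value}},
        "ConsistentRead": consistent,  # default is False (eventually consistent)
    }

params = get_item_params("Transactions", "txn-0001")
# With boto3 configured: boto3.client("dynamodb").get_item(**params)
```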

249
Q
Amazon RDS supports SOAP only through _____.

A. HTTP or HTTPS
B. TCP/IP
C. HTTP
D. HTTPS

A

D. HTTPS

250
Q

A user has launched an EC2 instance from an instance store backed AMI. The infrastructure team wants to create an AMI from the running instance. Which of the below mentioned steps will not be performed while creating the AMI?

A. Define the AMI launch permissions
B. Upload the bundled volume
C. Register the AMI
D. Bundle the volume

A

A. Define the AMI launch permissions

251
Q

What does Amazon RDS stand for?
A. Regional Data Server.
B. Relational Database Service
C. Regional Database Service.

A

B. Relational Database Service

252
Q

What is the name of the licensing model in which you can use your existing Oracle Database licenses to run Oracle deployments on Amazon RDS?

A. Bring Your Own License
B. Role Bases License
C. Enterprise License
D. License Included

A

A. Bring Your Own License

253
Q

Which two AWS services provide out-of-the-box, user-configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers

A. Amazon S3
B. Amazon RDS
C. Amazon EBS
D. Amazon Redshift

A

B. Amazon RDS

D. Amazon Redshift

254
Q

You are working with a customer who has 10 TB of archival data that they want to migrate to Amazon Glacier. The customer has a 1 Mbps connection to the Internet. Which service or feature provides the fastest method of getting the data into Amazon Glacier?

A. Amazon Glacier multipart upload
B. AWS Storage Gateway
C. VM Import/Export
D. AWS Import/Export

A

D. AWS Import/Export
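The arithmetic behind this answer: at 1 Mbps, pushing 10 TB over the wire takes roughly two and a half years, which is why shipping physical media via AWS Import/Export is the fastest option:

```python
# Back-of-the-envelope transfer time for 10 TB over a 1 Mbps link.
data_bits = 10 * 10**12 * 8   # 10 TB (decimal) expressed in bits
link_bps = 1 * 10**6          # 1 Mbps

seconds = data_bits / link_bps
days = seconds / 86400        # ~926 days
years = days / 365            # ~2.5 years
```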

255
Q

Can the string value of ‘Key’ be prefixed with ‘aws:’?

A. No
B. Only for EC2 not S3
C. Yes
D. Only for S3 not EC2

A

A. No

256
Q

What is the minimum charge for the data transferred between Amazon RDS and Amazon EC2 Instances in the same Availability Zone?

A. USD 0.10 per GB
B. No charge. It is free.
C. USD 0.02 per GB
D. USD 0.01 per GB

A

B. No charge. It is free.

257
Q

Which of the following instance types are available as Amazon EBS-backed only? Choose 2 answers

A. General purpose T2
B. General purpose M3
C. Compute-optimized C4
D. Compute-optimized C3
E. Storage-optimized I2

A

A. General purpose T2

C. Compute-optimized C4

258
Q

Does Amazon RDS allow direct host access via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection?
A. Yes
B. No
C. Depends on if it is in VPC or not

A

B. No

259
Q

When attached to an Amazon VPC, which two components provide connectivity with external networks? Choose 2 answers

A. Elastic IPs (EIP)
B. NAT Gateway (NAT)
C. Internet Gateway (IGW)
D. Virtual Private Gateway (VGW)

A

C. Internet Gateway (IGW)

D. Virtual Private Gateway (VGW)

260
Q

A company that manufactures and sells smart air conditioning units also offers add-on services so that customers can see real-time dashboards in a mobile application or a web browser. Each unit sends its sensor information in JSON format every two seconds for processing and analysis. The company also needs to consume this data to predict possible equipment problems before they occur. A few thousand pre-purchased units will be delivered in the next couple of months. The company expects high market growth in the next year and needs to handle a massive amount of data and scale without service interruption.

Which ingestion solution should the company use?

A. Write sensor data records to Amazon Kinesis Streams. Process the data using KCL applications for the end-consumer dashboard and anomaly detection workflows.
B. Batch sensor data to Amazon Simple Storage Service (S3) every 15 minutes. Flow the data downstream to the end-consumer dashboard and to the anomaly detection application.
C. Write sensor data records to Amazon Kinesis Firehose with Amazon Simple Storage Service (S3) as the destination. Consume the data with a KCL application for the end-consumer dashboard and anomaly detection.
D. Write sensor data records to Amazon Relational Database Service (RDS). Build the end-consumer dashboard and anomaly detection applications on top of Amazon RDS.

A

C. Write sensor data records to Amazon Kinesis Firehose with Amazon Simple Storage Service (S3) as the destination. Consume the data with a KCL application for the end-consumer dashboard and anomaly detection.
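As a sketch of the ingestion side of this answer, each two-second JSON reading becomes one Firehose record. The stream name and field names below are invented for illustration:

```python
import json

# A sample two-second sensor reading; stream name and fields are hypothetical.
reading = {"unit_id": "ac-0042", "temp_c": 21.5,
           "compressor_rpm": 1800, "ts": "2017-01-01T00:00:00Z"}

put_record_params = {
    "DeliveryStreamName": "sensor-ingest",
    # Firehose records carry raw bytes; a trailing newline keeps the
    # delivered S3 objects splittable into one JSON document per line.
    "Record": {"Data": (json.dumps(reading) + "\n").encode("utf-8")},
}
# With boto3 configured: boto3.client("firehose").put_record(**put_record_params)
```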

261
Q

You have started a new job and are reviewing your company’s infrastructure on AWS. You notice one web application where they have an Elastic Load Balancer (ELB) in front of web instances in an Auto Scaling group. When you check the metrics for the ELB in CloudWatch, you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances.

What do you need to fix to balance the instances across AZs?

A. Set the ELB to only be attached to another AZ
B. Make sure Auto Scaling is configured to launch in both AZs
C. Make sure your AMI is available in both AZs
D. Make sure the maximum size of the Auto Scaling Group is greater than 4

A

B. Make sure Auto Scaling is configured to launch in both AZs

262
Q

Your social media marketing application has a component written in Ruby running on AWS Elastic Beanstalk. This application component posts messages to social media sites in support of various marketing campaigns. Your management now requires you to record replies to these social media messages to analyze the effectiveness of the marketing campaign in comparison to past and future efforts. You have already developed a new application component to interface with the social media site APIs in order to read the replies.

Which process should you use to record the social media replies in a durable data store that can be accessed at any time for analysis of historical data?

A. Deploy the new application component in an Auto Scaling group of Amazon Elastic Compute Cloud (EC2) instances, read the data from the social media sites, store it with Amazon Elastic Block Store, and use AWS Data Pipeline to publish it to Amazon Kinesis for analytics
B. Deploy the new application component as an Elastic Beanstalk application, read the data from the social media sites, store it in Amazon DynamoDB, and use Apache Hive with Amazon Elastic MapReduce for analytics
C. Deploy the new application component in an Auto Scaling group of Amazon EC2 instances, read the data from the social media sites, store it in Amazon Glacier, and use AWS Data Pipeline to publish it to Amazon Redshift for analytics
D. Deploy the new application component as an Amazon Elastic Beanstalk application, read the data from the social media sites, store it with Amazon Elastic Block Store, and use Amazon Kinesis to stream the data to Amazon CloudWatch for analytics

A

B. Deploy the new application component as an Elastic Beanstalk application, read the data from the social media sites, store it in Amazon DynamoDB, and use Apache Hive with Amazon Elastic MapReduce for analytics

263
Q

A new algorithm has been written in Python to identify SPAM e-mails. The algorithm analyzes the free text contained within a sample set of 1 million e-mails stored on Amazon S3. The algorithm must be scaled across a production dataset of 5 PB, which also resides in Amazon S3 storage.

Which AWS service strategy is best for this use case?

A. Copy the data into Amazon ElastiCache to perform text analysis on the in-memory data and export the results of the model into Amazon Machine Learning
B. Use Amazon EMR to parallelize the text analysis tasks across the cluster using a streaming program step
C. Use Amazon Elasticsearch Service to store the text and then use the Python Elasticsearch client to run analysis against the text index
D. Initiate a Python job from AWS Data Pipeline to run directly against the Amazon S3 text files

A

C. Use Amazon Elasticsearch Service to store the text and then use the Python Elasticsearch client to run analysis against the text index
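A sketch of the kind of full-text query this answer implies, assuming a hypothetical `emails` index with a `body_text` field. Running it would require the elasticsearch-py client and a live domain, so only the query body is built here:

```python
def spam_term_query(term, size=10):
    """Build a hypothetical full-text match query body for an e-mail index.

    With the elasticsearch-py client installed you would run something like:
        Elasticsearch(hosts=[...]).search(index="emails", body=spam_term_query(term))
    """
    return {
        "query": {"match": {"body_text": term}},  # analyzed full-text match
        "size": size,                             # number of hits to return
    }

query = spam_term_query("free prize")
```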