Incorrect From Test Flashcards
A Developer is creating a new web application that will be deployed using AWS Elastic Beanstalk from the AWS Management Console. The Developer is about to create a source bundle which will be uploaded using the console.
Which of the following are valid requirements for creating the source bundle? (Select TWO.)
Must consist of one or more ZIP files.
Must not exceed 512 MB.
Must not include a parent folder or top-level directory.
Must include the cron.yaml file.
Must include a parent folder or top-level directory.
Must not exceed 512 MB.
Must not include a parent folder or top-level directory.
- Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
- Not exceed 512 MB
- Not include a parent folder or top-level directory (subdirectories are fine)
An application uses AWS Lambda which makes remote calls to several downstream services. A developer wishes to add data to custom subsegments in AWS X-Ray that can be used with filter expressions. Which type of data should be used?
Annotations
Trace ID
Daemon
Metadata
Annotations
Annotations are key-value pairs with string, number, or Boolean values. Annotations are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.
INCORRECT: “Metadata” is incorrect. Metadata are key-value pairs that can have values of any type, including objects and lists, but are not indexed for use with filter expressions. Use metadata to record additional data that you want stored in the trace but don’t need to use with search.
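A minimal sketch of the difference using the AWS X-Ray SDK for Python (aws_xray_sdk), assuming the function is already instrumented; the subsegment name and keys are illustrative only:

```python
from aws_xray_sdk.core import xray_recorder

def call_downstream(order_id, region):
    # Open a custom subsegment around the downstream call
    with xray_recorder.in_subsegment('downstream-call') as subsegment:
        # Annotations are indexed, so they can be used in filter expressions,
        # e.g. annotation.region = "eu-west-1"
        subsegment.put_annotation('region', region)
        # Metadata is stored with the trace but is NOT indexed for search
        subsegment.put_metadata('order', {'id': order_id, 'source': 'web'})
        # ... make the remote call here ...
```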
A serverless application uses an AWS Lambda function to process Amazon S3 events. The Lambda function executes 20 times per second and takes 20 seconds to complete each execution.
How many concurrent executions will the Lambda function require?
5
40
20
400
400
To calculate the concurrency requirement for the Lambda function, multiply the number of executions per second (20) by the time in seconds it takes to complete each execution (20), giving 400 concurrent executions.
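The same calculation as a quick sanity check in Python, using the values from the question:

```python
requests_per_second = 20      # Lambda invocations per second
duration_seconds = 20         # time each invocation takes to complete

# Concurrency = request rate x average duration
concurrent_executions = requests_per_second * duration_seconds
print(concurrent_executions)  # 400
```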
A Development team uses a GitHub repository and would like to migrate their application code to AWS CodeCommit.
What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS?
A set of Git credentials generated with IAM
An Amazon EC2 IAM role with CodeCommit permissions
A public and private SSH key file
A GitHub secure authentication token
Git credentials are an IAM-generated user name and password pair that you can use to communicate with CodeCommit repositories over HTTPS.
A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?
Create an Auto Scaling Group (ASG) and register the Lambda function in the launch configuration
Setup an API in front of the ALB using API Gateway and use an integration request to map the request to the Lambda function
Configure an event-source mapping between the ALB and the Lambda function
Create a target group and register the Lambda function using the AWS CLI
Create a target group and register the Lambda function using the AWS CLI
You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
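A hedged boto3 sketch of registering a Lambda function as an ALB target; the function ARN, target group name, and statement ID are placeholders:

```python
import boto3

elbv2 = boto3.client('elbv2')
lambda_client = boto3.client('lambda')

function_arn = 'arn:aws:lambda:eu-west-1:111122223333:function:my-alb-handler'  # placeholder

# 1. Create a target group with the "lambda" target type (no protocol/port needed)
tg = elbv2.create_target_group(Name='lambda-targets', TargetType='lambda')
tg_arn = tg['TargetGroups'][0]['TargetGroupArn']

# 2. Allow Elastic Load Balancing to invoke the function
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='alb-invoke',
    Action='lambda:InvokeFunction',
    Principal='elasticloadbalancing.amazonaws.com',
    SourceArn=tg_arn,
)

# 3. Register the function as the target
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{'Id': function_arn}])
```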
A Development team wants to run their container workloads on Amazon ECS. Each application container needs to share data with another container to collect logs and metrics.
What should the Development team do to meet these requirements?
Create two task definitions. Make one to include the application container and the other to include the other container. Mount a shared volume between the two tasks
Create a single pod specification. Include both containers in the specification. Mount a persistent volume to both containers
Create one task definition. Specify both containers in the definition. Mount a shared volume between those two containers
Create two pod specifications. Make one to include the application container and the other to include the other container. Link the two pods together
Create one task definition. Specify both containers in the definition. Mount a shared volume between those two containers
To configure a Docker volume, in the task definition volumes section, define a data volume with name and DockerVolumeConfiguration values. In the containerDefinitions section, define multiple containers with mountPoints values that reference the name of the defined volume and the containerPath value to mount the volume at on the container.
The containers should both be specified in the same task definition. Therefore, the Development team should create one task definition, specify both containers in the definition and then mount a shared volume between those two containers
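A simplified boto3 sketch of such a task definition, assuming the EC2 launch type; the image URIs, mount paths, and volume name are illustrative:

```python
import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='web-with-log-collector',
    volumes=[{
        'name': 'shared-logs',
        'dockerVolumeConfiguration': {'scope': 'task', 'driver': 'local'},
    }],
    containerDefinitions=[
        {
            'name': 'app',
            'image': '111122223333.dkr.ecr.eu-west-1.amazonaws.com/app:latest',
            'memory': 512,
            'essential': True,
            # The application writes its logs to the shared volume
            'mountPoints': [{'sourceVolume': 'shared-logs', 'containerPath': '/var/log/app'}],
        },
        {
            'name': 'log-collector',
            'image': '111122223333.dkr.ecr.eu-west-1.amazonaws.com/collector:latest',
            'memory': 256,
            'essential': False,
            # The sidecar reads the same volume to collect logs and metrics
            'mountPoints': [{'sourceVolume': 'shared-logs', 'containerPath': '/logs'}],
        },
    ],
)
```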
A Developer is setting up a code update to Amazon ECS using AWS CodeDeploy. The Developer needs to complete the code update quickly. Which of the following deployment types should the Developer use?
Linear
Canary
In-place
Blue/green
Blue/green
INCORRECT: “In-place” is incorrect as AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.
An application serves customers in several different geographical regions. Information about the location users connect from is written to logs stored in Amazon CloudWatch Logs. The company needs to publish an Amazon CloudWatch custom metric that tracks connections for each location.
Which approach will meet these requirements?
Configure a CloudWatch Events rule that creates a custom metric from the CloudWatch Logs group.
Stream data to an Amazon Elasticsearch cluster in near-real time and export a custom metric.
Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension.
Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension.
Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension.
When you create a metric from a log filter, you can also choose to assign dimensions and a unit to the metric. In this case, the company can assign a dimension that uses the location information.
INCORRECT: “Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension” is incorrect. You cannot create a custom metric through CloudWatch Logs Insights.
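A boto3 sketch of creating such a metric filter, assuming space-delimited log events that contain a location field and a boto3 version that supports dimensions on metric filters; the log group, namespace, and pattern are assumptions:

```python
import boto3

logs = boto3.client('logs')

logs.put_metric_filter(
    logGroupName='/app/connections',                  # placeholder log group
    filterName='connections-by-location',
    # Space-delimited pattern that captures the location field from each event
    filterPattern='[timestamp, request_id, location]',
    metricTransformations=[{
        'metricName': 'Connections',
        'metricNamespace': 'MyApp',
        'metricValue': '1',                            # count one connection per matching event
        'dimensions': {'Location': '$location'},       # publish location as a dimension
    }],
)
```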
A developer is preparing to deploy a Docker container to Amazon ECS using CodeDeploy. The developer has defined the deployment actions in a file. What should the developer name the file?
appspec.yml
appspec.json
buildspec.yml
cron.yml
appspec.yml
The name of the AppSpec file for an EC2/On-Premises deployment must be appspec.yml. The name of the AppSpec file for an Amazon ECS or AWS Lambda deployment must be appspec.yaml.
INCORRECT: “buildspec.yml” is incorrect as this is the file name you should use for the file that defines the build instructions for AWS CodeBuild.
A company has created a set of APIs using Amazon API Gateway and exposed them to partner companies. The APIs have caching enabled for all stages. The partners require a method of invalidating the cache that they can build into their applications.
What can the partners use to invalidate the API cache?
They can use the query string parameter INVALIDATE_CACHE
They can pass the HTTP header Cache-Control: max-age=0
They must wait for the TTL to expire
They can invoke an AWS API endpoint which invalidates the cache
They can pass the HTTP header Cache-Control: max-age=0
A client of your API can invalidate an existing cache entry and reload it from the integration endpoint for individual requests. The client must send a request that contains the Cache-Control: max-age=0 header.
To grant a client permission to invalidate the cache, attach an IAM policy that allows the execute-api:InvalidateCache action to the IAM user or role making the request.
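A minimal client-side sketch using the third-party requests library; the endpoint URL is a placeholder, and if the method requires IAM authorization the request would also need SigV4 signing:

```python
import requests

# Sending Cache-Control: max-age=0 asks API Gateway to invalidate the cached
# entry for this request and fetch a fresh response from the integration.
response = requests.get(
    'https://abc123.execute-api.eu-west-1.amazonaws.com/prod/items',  # placeholder URL
    headers={'Cache-Control': 'max-age=0'},
)
print(response.status_code)
```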
A serverless application uses an Amazon API Gateway and AWS Lambda. The application processes data submitted in a form by users of the application and certain data must be stored and available to subsequent function calls.
What is the BEST solution for storing this data?
Store the data in the /tmp directory
Store the data in an Amazon SQS queue
Store the data in an Amazon Kinesis Data Stream
Store the data in an Amazon DynamoDB table
Store the data in an Amazon DynamoDB table
Amazon DynamoDB is a good solution for this scenario as it is a low-latency NoSQL database that is often used for storing session state data. Amazon S3 would also be a good fit for this scenario but is not offered as an option.
An application component writes thousands of item-level changes to a DynamoDB table per day. The developer requires that a record is maintained of the items before they were modified. What MUST the developer do to retain this information? (Select TWO.)
Create a CloudWatch alarm that sends a notification when an item is modified
Set the StreamViewType to NEW_AND_OLD_IMAGES
Use an AWS Lambda function to extract the item records from the notification and write to an S3 bucket
Set the StreamViewType to OLD_IMAGE
Enable DynamoDB Streams for the table
Enable DynamoDB Streams for the table
Set the StreamViewType to OLD_IMAGE
KEYS_ONLY — Only the key attributes of the modified item.
NEW_IMAGE — The entire item, as it appears after it was modified.
OLD_IMAGE — The entire item, as it appeared before it was modified.
NEW_AND_OLD_IMAGES — Both the new and the old images of the item.
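A boto3 sketch that enables a stream with the OLD_IMAGE view type on an existing table; the table name is a placeholder:

```python
import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.update_table(
    TableName='Transactions',  # placeholder
    StreamSpecification={
        'StreamEnabled': True,
        # OLD_IMAGE captures each item as it appeared before it was modified
        'StreamViewType': 'OLD_IMAGE',
    },
)
```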
A Developer is building an application that will store data relating to financial transactions in multiple DynamoDB tables. The Developer needs to ensure the transactions provide atomicity, isolation, and durability (ACID) and that changes are committed following an all-or nothing paradigm.
What write API should be used for the DynamoDB table?
Strongly consistent
Eventually consistent
Transactional
Standard
Transactional
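A hedged boto3 sketch of an all-or-nothing write across two tables using TransactWriteItems; table and attribute names are illustrative:

```python
import boto3

dynamodb = boto3.client('dynamodb')

# Either both writes succeed or neither is applied
dynamodb.transact_write_items(
    TransactItems=[
        {'Put': {
            'TableName': 'Transactions',
            'Item': {'TransactionId': {'S': 'tx-1001'}, 'Amount': {'N': '250'}},
        }},
        {'Update': {
            'TableName': 'Accounts',
            'Key': {'AccountId': {'S': 'acct-42'}},
            'UpdateExpression': 'SET Balance = Balance - :amt',
            'ExpressionAttributeValues': {':amt': {'N': '250'}},
        }},
    ]
)
```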
A company is deploying an on-premises application server that will connect to several AWS services. What is the BEST way to provide the application server with permissions to authenticate to AWS services?
Create an IAM role with the necessary permissions and assign it to the application server
Create an IAM group with the necessary permissions and add the on-premise application server to the group
Create an IAM user and generate access keys. Create a credentials file on the application server
Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services
Create an IAM user and generate access keys. Create a credentials file on the application server
Because the application server runs on-premises, an IAM role cannot be attached to it. Instead, an IAM user with access keys stored in a credentials file on the server is required.
A Developer requires a multi-threaded in-memory cache to place in front of an Amazon RDS database. Which caching solution should the Developer choose?
Amazon DynamoDB DAX
Amazon RedShift
Amazon ElastiCache Memcached
Amazon ElastiCache Redis
Amazon ElastiCache Memcached
CORRECT: “Amazon ElastiCache Memcached” is the correct answer, as Memcached supports multiple processing threads per node.
INCORRECT: “Amazon ElastiCache Redis” is incorrect as Redis is not multi-threaded.
A Developer is deploying an AWS Lambda update using AWS CodeDeploy. In the appspec.yaml file, which of the following is a valid structure for the order of hooks that should be specified?
BeforeInstall > AfterInstall > ApplicationStart > ValidateService
BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeAllowTraffic > AfterAllowTraffic
BeforeAllowTraffic > AfterAllowTraffic
A Developer is building a three-tier web application that must be able to handle a minimum of 10,000 requests per minute. The requirements state that the web tier should be completely stateless while the application maintains session state data for users.
How can the session state data be maintained externally, whilst keeping latency at the LOWEST possible value?
Implement a shared Amazon EFS file system solution across the underlying Amazon EC2 instances, then implement session handling at the application level to leverage the EFS file system for session data storage
Create an Amazon RedShift instance, then implement session handling at the application level to leverage a database inside the RedShift database instance for session data storage
Create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage
Create an Amazon DynamoDB table, then implement session handling at the application level to leverage the table for session data storage
CORRECT: “Create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage” is the correct answer.
INCORRECT: “Create an Amazon DynamoDB table, then implement session handling at the application level to leverage the table for session data storage” is incorrect as though this is a good solution for storing session state data, the latency will not be as low as with ElastiCache.
A company has a large Amazon DynamoDB table which they scan periodically so they can analyze several attributes. The scans are consuming a lot of provisioned throughput. What technique can a Developer use to minimize the impact of the scan on the table’s provisioned throughput?
Set a smaller page size for the scan
Use parallel scans
Define a range key on the table
Prewarm the table by updating all items
Set a smaller page size for the scan
Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request.
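A boto3 sketch of a paginated scan with a small page size; the table name, page size, and pause interval are assumptions:

```python
import time
import boto3

table = boto3.resource('dynamodb').Table('LargeTable')  # placeholder

scan_kwargs = {'Limit': 100}   # a small page size reduces RCUs consumed per request
while True:
    page = table.scan(**scan_kwargs)
    for item in page['Items']:
        pass  # analyze the item attributes here
    if 'LastEvaluatedKey' not in page:
        break
    scan_kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']
    time.sleep(0.5)  # optional pause between pages to smooth out throughput usage
```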
A company has implemented AWS CodePipeline to automate its release pipelines. The Development team is writing an AWS Lambda function that will send notifications for state changes of each of the actions in the stages.
Which steps must be taken to associate the Lambda function with the event source?
Create an event trigger and specify the Lambda function from the CodePipeline console
Create a trigger that invokes the Lambda function from the Lambda console by selecting CodePipeline as the event source
Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source
Create an Amazon CloudWatch alarm that monitors status changes in CodePipeline and triggers the Lambda function
Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source
Amazon CloudWatch Events help you to respond to state changes in your AWS resources. When your resources change state, they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to your AWS Lambda function to take action.
AWS CodePipeline can be configured as an event source in CloudWatch Events and can then send notifications using a service such as Amazon SNS.
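A boto3 sketch of wiring this up; the rule name, Lambda ARN, and statement ID are placeholders:

```python
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

function_arn = 'arn:aws:lambda:eu-west-1:111122223333:function:pipeline-notifier'  # placeholder

# Rule that matches action state changes in CodePipeline
rule = events.put_rule(
    Name='codepipeline-action-changes',
    EventPattern='{"source": ["aws.codepipeline"], '
                 '"detail-type": ["CodePipeline Action Execution State Change"]}',
)

# Allow CloudWatch Events to invoke the function, then add it as a target
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='events-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)
events.put_targets(Rule='codepipeline-action-changes',
                   Targets=[{'Id': 'notifier', 'Arn': function_arn}])
```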
A Developer is creating a DynamoDB table for storing transaction logs. The table has 10 write capacity units (WCUs). The Developer needs to configure the read capacity units (RCUs) for the table in order to MAXIMIZE the number of requests allowed per second. Which of the following configurations should the Developer use?
Strongly consistent reads of 5 RCUs reading items that are 4 KB in size
Eventually consistent reads of 15 RCUs reading items that are 1 KB in size
Strongly consistent reads of 15 RCUs reading items that are 1KB in size
Eventually consistent reads of 5 RCUs reading items that are 4 KB in size
Eventually consistent reads of 15 RCUs reading items that are 1 KB in size
· Eventually consistent, 15 RCUs, 1 KB item = 30 items read per second.
· Strongly consistent, 15 RCUs, 1 KB item = 15 items read per second.
· Eventually consistent, 5 RCUs, 4 KB item = 10 items read per second.
· Strongly consistent, 5 RCUs, 4 KB item = 5 items read per second.
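The same arithmetic as a small helper, assuming items are rounded up to the next 4 KB and that an eventually consistent read costs half a read unit:

```python
import math

def reads_per_second(rcus, item_size_kb, strongly_consistent):
    # Each read unit covers one 4 KB strongly consistent read,
    # or two 4 KB eventually consistent reads, per second.
    units_per_item = math.ceil(item_size_kb / 4)
    per_rcu = 1 if strongly_consistent else 2
    return (rcus * per_rcu) // units_per_item

print(reads_per_second(15, 1, False))  # 30
print(reads_per_second(15, 1, True))   # 15
print(reads_per_second(5, 4, False))   # 10
print(reads_per_second(5, 4, True))    # 5
```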
There are multiple AWS accounts across multiple regions managed by a company. The operations team require a single operational dashboard that displays some key performance metrics from these accounts and regions. What is the SIMPLEST solution?
Create an AWS Lambda function that collects metrics from each account and region and pushes the metrics to the account where the dashboard has been created
Create an Amazon CloudWatch dashboard in one account and region and import the data from the other accounts and regions
Create an Amazon CloudTrail trail that applies to all regions and deliver the logs to a single Amazon S3 bucket. Create a dashboard using the data in the bucket
Create an Amazon CloudWatch cross-account cross-region dashboard
Create an Amazon CloudWatch cross-account cross-region dashboard
A developer needs to use the attribute of an Amazon S3 object that uniquely identifies the object in a bucket. Which of the following represents an Object Key?
Development/Projects.xls
Project=Blue
s3://dctlabs/Development/Projects.xls
arn:aws:s3:::dctlabs
Development/Projects.xls
A company maintains a REST API service using Amazon API Gateway with native API key validation. The company recently launched a new registration page, which allows users to sign up for the service. The registration page creates a new API key using CreateApiKey and sends the new key to the user. When the user attempts to call the API using this key, the user receives a 403 Forbidden error. Existing users are unaffected and can still call the API.
What code updates will grant these new users access to the API?
The createDeployment method must be called so the API can be redeployed to include the newly created API key
The importApiKeys method must be called to import all newly created API keys into the current stage of the API
The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan
The updateAuthorizer method must be called to update the API’s authorizer to include the newly created API key
The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan
A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys.
CORRECT: “The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan” is the correct answer.
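A boto3 sketch of the missing step in the registration flow; the usage plan ID and key name are placeholders:

```python
import boto3

apigateway = boto3.client('apigateway')

# Existing behaviour: create the API key for the new user
key = apigateway.create_api_key(name='new-user-key', enabled=True)  # placeholder name

# Missing step: associate the key with the usage plan that covers the API stage
apigateway.create_usage_plan_key(
    usagePlanId='abc123',   # placeholder usage plan ID
    keyId=key['id'],
    keyType='API_KEY',
)
```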
A Developer is creating a serverless application that will process sensitive data. The AWS Lambda function must encrypt all data that is written to /tmp storage at rest.
How should the Developer encrypt this data?
Configure Lambda to use an AWS KMS customer managed customer master key (CMK). Use the CMK to generate a data key and encrypt all data prior to writing to /tmp storage.
Attach the Lambda function to a VPC and encrypt Amazon EBS volumes at rest using the AWS managed CMK. Mount the EBS volume to /tmp.
Enable default encryption on an Amazon S3 bucket using an AWS KMS customer managed customer master key (CMK). Mount the S3 bucket to /tmp.
Enable secure connections over HTTPS for the AWS Lambda API endpoints using Transport Layer Security (TLS).
CORRECT: “Configure Lambda to use an AWS KMS customer managed customer master key (CMK). Use the CMK to generate a data key and encrypt all data prior to writing to /tmp storage” is the correct answer.
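A hedged sketch of envelope encryption before writing to /tmp, using boto3 and the third-party cryptography package; the CMK alias and file name are placeholders:

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client('kms')

def write_encrypted(data: bytes, path: str = '/tmp/payload.enc'):
    # Generate a data key under the customer managed CMK
    dk = kms.generate_data_key(KeyId='alias/my-cmk', KeySpec='AES_256')  # placeholder alias
    nonce = os.urandom(12)
    ciphertext = AESGCM(dk['Plaintext']).encrypt(nonce, data, None)
    # Persist only the nonce, ciphertext and encrypted data key; never the plaintext key
    with open(path, 'wb') as f:
        f.write(nonce + ciphertext)
    with open(path + '.key', 'wb') as f:
        f.write(dk['CiphertextBlob'])  # encrypted data key, recoverable only via KMS
```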
A company runs an e-commerce website that uses Amazon DynamoDB where pricing for items is dynamically updated in real time. At any given time, multiple updates may occur simultaneously for pricing information on a particular product. This is causing the original editor’s changes to be overwritten without a proper review process.
Which DynamoDB write option should be selected to prevent this overwriting?
Conditional writes
Concurrent writes
Batch writes
Atomic writes
Conditional writes
A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error. Conditional writes are helpful in many situations. For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key. Or you could prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value.
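A boto3 sketch of an update that only succeeds if the price has not changed since it was read; the table and attribute names are illustrative:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('Products')  # placeholder table

def update_price(product_id, expected_price, new_price):
    try:
        table.update_item(
            Key={'ProductId': product_id},
            UpdateExpression='SET Price = :new',
            # The write fails if another editor changed the price first
            ConditionExpression='Price = :expected',
            ExpressionAttributeValues={':new': new_price, ':expected': expected_price},
        )
    except ClientError as err:
        if err.response['Error']['Code'] == 'ConditionalCheckFailedException':
            print('Item was modified by someone else; reload and review before retrying')
        else:
            raise
```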
A Developer is creating a REST service using Amazon API Gateway with AWS Lambda integration. The service adds data to a spreadsheet and the data is sent as query string parameters in the method request.
How should the Developer convert the query string parameters to arguments for the Lambda function?
Enable request validation
Include the Amazon Resource Name (ARN) of the Lambda function
Create a mapping template
Change the integration type
CORRECT: “Create a mapping template” is the correct answer.
A mapping template, written in the Velocity Template Language (VTL), can map the query string parameters from the method request into the JSON payload that is passed to the Lambda function as arguments. Mapping template overrides additionally provide the flexibility to perform many-to-one parameter mappings and to override parameters after the standard API Gateway mappings have been applied.
An organization has an account for each environment: Production, Testing, Development. A Developer with an IAM user in the Development account needs to launch resources in the Production and Testing accounts. What is the MOST efficient way to provide access?
Create an IAM permissions policy in the Production and Testing accounts and reference the IAM user in the Development account
Create an IAM group in the Production and Testing accounts and add the Developer’s user from the Development account to the groups
Create a role with the required permissions in the Production and Testing accounts and have the Developer assume that role
Create a separate IAM user in each account and have the Developer login separately to each account
CORRECT: “Create a role with the required permissions in the Production and Testing accounts and have the Developer assume that role” is the correct answer.
A Developer has created an Amazon S3 bucket and uploaded some objects that will be used for a publicly available static website. What steps MUST be performed to configure the bucket as a static website? (Select TWO.)
Create an object access control list (ACL) granting READ permissions to the AllUsers group
Enable public access and grant everyone the s3:GetObject permissions
Upload a certificate from AWS Certificate Manager
Upload an index and error document and enter the name of the index and error documents when enabling static website hosting
Upload an index document and enter the name of the index document when enabling static website hosting
Enable public access and grant everyone the s3:GetObject permissions
Upload an index document and enter the name of the index document when enabling static website hosting
INCORRECT: “Upload an index and error document and enter the name of the index and error documents when enabling static website hosting” is incorrect as the error document is optional and the question specifically asks for the steps that MUST be completed.
A company runs a popular website behind an Amazon CloudFront distribution that uses an Application Load Balancer as the origin. The Developer wants to set up custom HTTP responses to 404 errors for content that has been removed from the origin that redirects the users to another page.
The Developer wants to use an AWS Lambda@Edge function that is associated with the current CloudFront distribution to accomplish this goal. The solution must use a minimum amount of resources.
Which CloudFront event type should the Developer use to invoke the Lambda@Edge function that contains the redirect logic?
Viewer response
Origin response
Viewer request
Origin request
Origin response
The origin response event runs only when CloudFront forwards a request to the origin (a cache miss), so the function can inspect 404 responses from the ALB and replace them with a redirect before the response is cached, using a minimum amount of resources.
A Developer is developing a web application and will maintain separate sets of resources for the alpha, beta, and release stages. Each version runs on Amazon EC2 and uses an Elastic Load Balancer.
How can the Developer create a single page to view and manage all of the resources?
Deploy all resources using a single Amazon CloudFormation stack
Create a resource group
Create a single AWS CodeDeploy deployment
Create an AWS Elastic Beanstalk environment for each stage
Create a resource group
In AWS, a resource is an entity that you can work with. Examples include an Amazon EC2 instance, an AWS CloudFormation stack, or an Amazon S3 bucket. If you work with multiple resources, you might find it useful to manage them as a group rather than move from one AWS service to another for each task.
An application will be hosted on the AWS Cloud. Developers will be using an Agile software development methodology with regular updates deployed through a continuous integration and delivery (CI/CD) model. Which AWS service can assist the Developers with automating the build, test, and deploy phases of the release process every time there is a code change?
AWS CloudFormation
AWS Elastic Beanstalk
AWS CodeBuild
AWS CodePipeline
AWS CodePipeline
INCORRECT: “AWS CodeBuild” is incorrect as CodeBuild is used for compiling code, running unit tests and creating the deployment package. It does not manage the deployment of the code.
A Developer is creating a design for an application that will include Docker containers on Amazon ECS with the EC2 launch type. The Developer needs to control the placement of tasks onto groups of container instances organized by availability zone and instance type.
Which Amazon ECS feature provides expressions that can be used to group container instances by the relevant attributes?
Task Group
Task Placement Strategy
Cluster Query Language
Task Placement Constraints
Cluster Query Language
A company runs multiple microservices that each use their own Amazon DynamoDB table. The “customers” microservice needs data that originates in the “orders” microservice.
What approach represents the SIMPLEST method for the “customers” table to get near real-time updates from the “orders” table?
Enable Amazon DynamoDB streams on the “orders” table, configure the “customers” microservice to read records from the stream
Use Amazon Kinesis Firehose to deliver all changes in the “orders” table to the “customers” table
Use Amazon CloudWatch Events to send notifications every time an item is added or modified in the “orders” table
Enable DynamoDB streams for the “customers” table, trigger an AWS Lambda function to read records from the stream and write them to the “orders” table
Enable Amazon DynamoDB streams on the “orders” table, configure the “customers” microservice to read records from the stream
An application running on Amazon EC2 generates a large number of small files (1KB each) containing personally identifiable information that must be converted to ciphertext. The data will be stored on a proprietary network-attached file system. What is the SAFEST way to encrypt the data using AWS KMS?
Create a data encryption key from a customer master key and encrypt the data with the data encryption key
Encrypt the data directly with a customer managed customer master key
Encrypt the data directly with an AWS managed customer master key
Create a data encryption key from a customer master key and encrypt the data with the customer master key
Encrypt the data directly with a customer managed customer master key
A Developer is deploying an application using Docker containers running on the Amazon Elastic Container Service (ECS). The Developer is testing application latency and wants to capture trace information between the microservices.
Which solution will meet these requirements?
Create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to the Amazon ECS cluster.
Install the AWS X-Ray daemon on each of the Amazon ECS instances.
Install the Amazon CloudWatch agent on the container image. Use the CloudWatch SDK to publish custom metrics from each of the microservices.
Install the AWS X-Ray daemon locally on an Amazon EC2 instance and instrument the Amazon ECS microservices using the X-Ray SDK.
Create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to the Amazon ECS cluster.
A Developer needs to be notified by email for all new object creation events in a specific Amazon S3 bucket. Amazon SNS will be used for sending the messages. How can the Developer enable these notifications?
Create an event notification for all s3:ObjectCreated:Put API calls
Create an event notification for all s3:ObjectCreated:* API calls
Create an event notification for all s3:ObjectRestore:Post API calls
Create an event notification for all s3:ObjectRemoved:Delete API calls
Create an event notification for all s3:ObjectCreated:* API calls
INCORRECT: “Create an event notification for all s3:ObjectCreated:Put API calls” is incorrect as this will not capture all new object creation events (e.g. POST or COPY). The wildcard should be used instead.
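A boto3 sketch of the notification configuration, assuming an existing SNS topic that the bucket is allowed to publish to; the bucket name and topic ARN are placeholders:

```python
import boto3

s3 = boto3.client('s3')

s3.put_bucket_notification_configuration(
    Bucket='my-upload-bucket',  # placeholder
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': 'arn:aws:sns:eu-west-1:111122223333:new-objects',  # placeholder
            # The wildcard catches PUT, POST, COPY and multipart-upload completions
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)
```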
A Developer is launching an application on Amazon ECS. The application should scale tasks automatically based on load and incoming connections must be spread across the containers.
How should the Developer configure the ECS cluster?
Write statements using the Cluster Query Language to scale the Docker containers
Create an ECS Task Definition that uses Auto Scaling and Elastic Load Balancing
Create a capacity provider and configure cluster auto scaling
Create an ECS Service with Auto Scaling and attach an Elastic Load Balancer
Create an ECS Service with Auto Scaling and attach an Elastic Load Balancer
A Developer is creating an application that will process some data and generate an image file from it. The application will use an AWS Lambda function which will require 150 MB of temporary storage while executing. The temporary files will not be needed after the function execution is completed.
What is the best location for the Developer to store the files?
Store the files in Amazon S3 and use a lifecycle policy to delete the files automatically
Store the files in the /tmp directory and delete the files when the execution completes
Store the files in an Amazon EFS filesystem and delete the files when the execution completes
Store the files in an Amazon Instance Store and delete the files when the execution completes
CORRECT: “Store the files in the /tmp directory and delete the files when the execution completes” is the correct answer.
The /tmp directory can be used for storing temporary files within the execution context. This can be used for storing static assets that can be used by subsequent invocations of the function. If the assets must be deleted before the function is invoked again the function code should take care of deleting them.
A new application will be deployed using AWS CodeDeploy to Amazon Elastic Container Service (ECS). What must be supplied to CodeDeploy to specify the ECS service to deploy?
The AppSpec file
The Policy file
The BuildSpec file
The Template file
The AppSpec file
INCORRECT: “The BuildSpec file” is incorrect as this is a file type that is used with AWS CodeBuild.
A Developer implemented a static website hosted in Amazon S3 that makes web service requests hosted in Amazon API Gateway and AWS Lambda. The site is showing an error that reads:
“No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘null’ is therefore not allowed access.”
What should the Developer do to resolve this issue?
Enable cross-origin resource sharing (CORS) on the S3 bucket
Add the Access-Control-Request-Method header to the request
Enable cross-origin resource sharing (CORS) for the method in API Gateway
Add the Access-Control-Request-Headers header to the request
Enable cross-origin resource sharing (CORS) for the method in API Gateway
INCORRECT: “Enable cross-origin resource sharing (CORS) on the S3 bucket” is incorrect as CORS must be enabled on the requested endpoint which is API Gateway, not S3.
A company is developing a new online game that will run on top of Amazon ECS. Four distinct Amazon ECS services will be part of the architecture, each requiring specific permissions to various AWS services. The company wants to optimize the use of the underlying Amazon EC2 instances by bin packing the containers based on memory reservation.
Which configuration would allow the Development team to meet these requirements MOST securely?
Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then, create an IAM group and configure the ECS cluster to reference that group
Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS task definition to reference the associated IAM role
Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS service to reference the associated IAM role
Create a new Identity and Access Management (IAM) instance profile containing the required permissions for the various ECS services, then associate that instance role with the underlying EC2 instances
Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS task definition to reference the associated IAM role
INCORRECT: “Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS service to reference the associated IAM role” is incorrect as the reference should be made within the task definition.
A developer is troubleshooting problems with a Lambda function that is invoked by Amazon SNS and repeatedly fails. How can the developer save discarded events for further processing?
Enable Lambda streams
Configure a Dead Letter Queue (DLQ)
Enable SNS notifications for failed events
Enable CloudWatch Logs for the Lambda function
Configure a Dead Letter Queue (DLQ)
You can configure a dead letter queue (DLQ) on AWS Lambda to give you more control over message handling for all asynchronous invocations, including those delivered via AWS events (S3, SNS, IoT, etc.).
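A boto3 sketch of attaching an SQS dead letter queue to the function; the function name and queue ARN are placeholders:

```python
import boto3

lambda_client = boto3.client('lambda')

lambda_client.update_function_configuration(
    FunctionName='sns-consumer',  # placeholder
    DeadLetterConfig={
        # Failed asynchronous invocations are sent here after retries are exhausted
        'TargetArn': 'arn:aws:sqs:eu-west-1:111122223333:sns-consumer-dlq'  # placeholder
    },
)
```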
A company needs to store sensitive documents on Amazon S3. The documents should be encrypted in transit using SSL/TLS and then be encrypted for storage at the destination. The company do not want to manage any of the encryption infrastructure or customer master keys and require the most cost-effective solution.
What is the MOST suitable option to encrypt the data?
Client-side encryption with Amazon S3 managed keys
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS) using customer managed CMKs
Server-Side Encryption with Customer-Provided Keys (SSE-C)
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
A company has deployed a REST API using Amazon API Gateway with a Lambda authorizer. The company needs to log who has accessed the API and how the caller accessed the API. They also require logs that include errors and execution traces for the Lambda authorizer.
Which combination of actions should the Developer take to meet these requirements? (Select TWO.)
Enable API Gateway access logs.
Create an API Gateway usage plan.
Enable server access logging.
Enable API Gateway execution logging.
Enable detailed logging in Amazon CloudWatch.
CORRECT: “Enable API Gateway execution logging” is a correct answer.
CORRECT: “Enable API Gateway access logs” is also a correct answer.
There are two types of API logging in CloudWatch: execution logging and access logging. In execution logging, API Gateway manages the CloudWatch Logs. The process includes creating log groups and log streams, and reporting to the log streams any caller’s requests and responses.
A company uses Amazon DynamoDB to store sensitive data that must be encrypted. The company security policy mandates that data must be encrypted before it is submitted to DynamoDB
How can a Developer meet these requirements?
Use the UpdateTable operation to switch to a customer managed customer master key (CMK).
Use AWS Certificate Manager (ACM) to create one certificate for each DynamoDB table.
Use the UpdateTable operation to switch to an AWS managed customer master key (CMK).
Use the DynamoDB Encryption Client to enable end-to-end protection using client-side encryption.
“Use the DynamoDB Encryption Client to enable end-to-end protection using client-side encryption” is the correct answer.
In addition to encryption at rest, which is a server-side encryption feature, AWS provides the Amazon DynamoDB Encryption Client. This client-side encryption library enables you to protect your table data before submitting it to DynamoDB.
A Developer has deployed an application that runs on an Auto Scaling group of Amazon EC2 instances. The application data is stored in an Amazon DynamoDB table and records are constantly updated by all instances. An instance sometimes retrieves old data. The Developer wants to correct this by making sure the reads are strongly consistent.
How can the Developer accomplish this?
Create a new DynamoDB Accelerator (DAX) table
Use the GetShardIterator command
Set consistency to strong when calling UpdateTable
Set ConsistentRead to true when calling GetItem
“Set ConsistentRead to true when calling GetItem” is the correct answer.
When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful.
The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data and there will be no Item element in the response.
GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
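The fix in boto3 terms, assuming a table named 'AppData' and an illustrative key:

```python
import boto3

table = boto3.resource('dynamodb').Table('AppData')  # placeholder

item = table.get_item(
    Key={'RecordId': 'record-123'},
    ConsistentRead=True,  # return the most recent successfully written value
).get('Item')
```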
A Developer created a new AWS account and must create a scalable AWS Lambda function that meets the following requirements for concurrent execution:
· Average execution time of 100 seconds
· 50 requests per second
Which step must be taken prior to deployment to prevent errors?
Contact AWS Support to increase the concurrent execution limits
Implement error handling within the application code
Implement dead-letter queues to capture invocation errors
Add an event source from Amazon API Gateway to the Lambda function
“Contact AWS Support to increase the concurrent execution limits” is the correct answer.
The average execution time is 100 seconds and 50 requests are received per second, so the concurrency requirement is 100 x 50 = 5,000. This is well above the default concurrent execution limit of 1,000 per account, so the Developer must contact AWS Support to request a limit increase before deploying the function.
A developer is designing a web application that will run on Amazon EC2 Linux instances using an Auto Scaling Group. The application should scale based on a threshold for the number of users concurrently using the application.
How should the Auto Scaling Group be configured to scale out?
Use the Amazon CloudWatch metric “NetworkIn”
Use a target tracking scaling policy
Create a custom Amazon CloudWatch metric for memory usage
Create a custom Amazon CloudWatch metric for concurrent users
“Create a custom Amazon CloudWatch metric for concurrent users” is the correct answer.
You can create a custom CloudWatch metric for your EC2 Linux instance statistics by creating a script through the AWS Command Line Interface (AWS CLI). Then, you can monitor that metric by pushing it to CloudWatch. In this scenario you could then monitor the number of users currently logged in.
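A sketch of what the publishing script might do, assuming a helper that counts logged-in users; the namespace and metric name are placeholders:

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

def publish_concurrent_users(count: int):
    # Push the custom metric; a CloudWatch alarm on this metric can then
    # drive the Auto Scaling policy for the group.
    cloudwatch.put_metric_data(
        Namespace='MyWebApp',              # placeholder namespace
        MetricData=[{
            'MetricName': 'ConcurrentUsers',
            'Value': count,
            'Unit': 'Count',
        }],
    )
```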
An application searches a DynamoDB table to return items based on primary key attributes. A developer noticed some ProvisionedThroughputExceeded exceptions being generated by DynamoDB.
How can the application be optimized to reduce the load on DynamoDB and use the LEAST amount of RCU?
Modify the application to issue scan API calls with eventual consistency reads
Modify the application to issue scan API calls with strong consistency reads
Modify the application to issue query API calls with eventual consistency reads
Modify the application to issue query API calls with strong consistency reads
“Modify the application to issue query API calls with eventual consistency reads” is the correct answer.
In general, Scan operations are less efficient than other operations in DynamoDB. A Scan operation always scans the entire table or secondary index. It then filters out values to provide the result you want, essentially adding the extra step of removing data from the result set.
If possible, you should avoid using a Scan operation on a large table or index with a filter that removes many results. Also, as a table or index grows, the Scan operation slows. The Scan operation examines every item for the requested values and can use up the provisioned throughput for a large table or index in a single operation. For faster response times, design your tables and indexes so that your applications can use Query instead of Scan. (For tables, you can also consider using the GetItem and BatchGetItem APIs.)
Additionally, eventual consistency consumes fewer RCUs than strong consistency. Therefore, the application should be refactored to use query APIs with eventual consistency.
A Developer is building a WebSocket API using Amazon API Gateway. The payload sent to this API is JSON that includes an action key which can have multiple values. The Developer must integrate with different routes based on the value of the action key of the incoming JSON payload.
How can the Developer accomplish this task with the LEAST amount of configuration?
Set the value of the route selection expression to $default.
Create a mapping template to map the action key to an integration request.
Create a separate stage for each possible value of the action key.
Set the value of the route selection expression to $request.body.action.
“Set the value of the route selection expression to $request.body.action” is the correct answer.
In your WebSocket API, incoming JSON messages are directed to backend integrations based on routes that you configure. (Non-JSON messages are directed to a $default route that you configure.)
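A boto3 sketch of creating the WebSocket API with that route selection expression; the API name is a placeholder:

```python
import boto3

apigw = boto3.client('apigatewayv2')

api = apigw.create_api(
    Name='game-actions',                       # placeholder
    ProtocolType='WEBSOCKET',
    # Route incoming JSON messages based on the value of their "action" key
    RouteSelectionExpression='$request.body.action',
)
print(api['ApiId'])
```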
An application running on Amazon EC2 generates a large number of small files (1KB each) containing personally identifiable information that must be converted to ciphertext. The data will be stored on a proprietary network-attached file system. What is the SAFEST way to encrypt the data using AWS KMS?
Create a data encryption key from a customer master key and encrypt the data with the customer master key
Create a data encryption key from a customer master key and encrypt the data with the data encryption key
Encrypt the data directly with a customer managed customer master key
Encrypt the data directly with an AWS managed customer master key
Encrypt the data directly with a customer managed customer master key
INCORRECT: “Encrypt the data directly with an AWS managed customer master key” is incorrect. AWS managed CMKs can only be used by the AWS services they are created for, so they cannot be used to encrypt data destined for a proprietary network-attached file system; a customer managed CMK is required.
A company manages a web application that is deployed on AWS Elastic Beanstalk. A Developer has been instructed to update to a new version of the application code. There is no tolerance for downtime if the update fails and rollback should be fast.
What is the SAFEST deployment method to use?
All at once
Immutable
Rolling with Additional Batch
Rolling
CORRECT: “Immutable” is the correct answer.
INCORRECT: “Rolling with Additional Batch” is incorrect because it requires manual redeployment in the case of failure.
A utilities company needs to ensure that documents uploaded by customers through a web portal are securely stored in Amazon S3 with encryption at rest. The company does not want to manage the security infrastructure in-house. However, the company still needs maintain control over its encryption keys due to industry regulations.
Which encryption strategy should a Developer use to meet these requirements?
Server-side encryption with customer-provided encryption keys (SSE-C)
Server-side encryption with AWS KMS managed keys (SSE-KMS)
Server-side encryption with Amazon S3 managed keys (SSE-S3)
Client-side encryption
CORRECT: “Server-side encryption with customer-provided encryption keys (SSE-C)” is the correct answer.
Server-side encryption is about protecting data at rest. Server-side encryption encrypts only the object data, not object metadata. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys.
With the encryption key you provide as part of your request, Amazon S3 manages the encryption as it writes to disks and decryption when you access your objects. Therefore, you don’t need to maintain any code to perform data encryption and decryption. The only thing you do is manage the encryption keys you provide.
A developer is making updates to the code for a Lambda function. The developer is keen to test the code updates by directing a small amount of traffic to a new version. How can this BEST be achieved?
Create an alias that points to both the new and previous versions of the function code and assign a weighting for sending a portion of traffic to the new version
Create an API using API Gateway and use stage variables to point to different versions of the Lambda function
Create a new function using the new code and update the application to split requests between the new functions
Create two versions of the function code. Configure the application to direct a subset of requests to the new version
CORRECT: “Create an alias that points to both the new and previous versions of the function code and assign a weighting for sending a portion of traffic to the new version”
You can create one or more aliases for your AWS Lambda function. A Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN.
You can point an alias at two versions of your function code and assign a weighting to direct a portion of traffic to each version. This enables a blue/green or canary style of deployment and makes it easy to roll back to the older version by simply updating the weighting if issues occur.
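A boto3 sketch of shifting 10% of traffic to a new version via an alias; the function name, alias name, and version numbers are placeholders:

```python
import boto3

lambda_client = boto3.client('lambda')

lambda_client.update_alias(
    FunctionName='order-processor',   # placeholder
    Name='live',                      # alias invoked by the application
    FunctionVersion='1',              # current stable version receives 90% of traffic
    RoutingConfig={
        'AdditionalVersionWeights': {'2': 0.1}   # send 10% of requests to version 2
    },
)
```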
A Developer is designing a cloud native application. The application will use several AWS Lambda functions that will process items that the functions read from an event source. Which AWS services are supported for Lambda event source mappings? (Select THREE.)
Amazon Simple Notification Service (SNS)
Amazon Simple Queue Service (SQS)
Another Lambda function
Amazon Kinesis
Amazon DynamoDB
Amazon Simple Storage Service (S3)
Amazon Simple Queue Service (SQS)
Amazon Kinesis
Amazon DynamoDB
An event source mapping is an AWS Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don’t invoke Lambda functions directly. Lambda provides event source mappings for the following services.
A developer is creating an Auto Scaling group of Amazon EC2 instances. The developer needs to publish a custom metric to Amazon CloudWatch. Which method would be the MOST secure way to authenticate a CloudWatch PUT request?
Create an IAM role with the PutMetricData permission and create a new Auto Scaling launch configuration to launch instances using that role
Create an IAM role with the PutMetricData permission and modify the Amazon EC2 instances to use that role
Modify the CloudWatch metric policies to allow the PutMetricData permission to instances from the Auto Scaling group
Create an IAM user with the PutMetricData permission and modify the Auto Scaling launch configuration to inject the user credentials into the instance user data
CORRECT: “Create an IAM role with the PutMetricData permission and create a new Auto Scaling launch configuration to launch instances using that role” is the correct answer
INCORRECT: “Create an IAM role with the PutMetricData permission and modify the Amazon EC2 instances to use that role” is incorrect as you should create a new launch configuration for the Auto Scaling group rather than updating the instances manually.
A company is creating an application that will require users to access AWS services and allow them to reset their own passwords. Which of the following would allow the company to manage users and authorization while allowing users to reset their own passwords?
Amazon Cognito user pools and identity pools
Amazon Cognito identity pools and AWS IAM
Amazon Cognito identity pools and AWS STS
Amazon Cognito user pools and AWS KMS
CORRECT: “Amazon Cognito user pools and identity pools” is the correct answer.
INCORRECT: “Amazon Cognito identity pools and AWS IAM” is incorrect as a Cognito user pool should be used as the directory source for creating and managing users. IAM is used for accounts that are used to administer AWS services, not for application user access.
The first requirement is provided by an Amazon Cognito User Pool. With a Cognito user pool you can add sign-up and sign-in to mobile and web apps and it also offers a user directory so user accounts can be created directly within the user pool. Users also have the ability to reset their passwords.
To access AWS services you need a Cognito Identity Pool. An identity pool can be used with a user pool and enables a user to obtain temporary limited-privilege credentials to access AWS services.
An application has been instrumented to use the AWS X-Ray SDK to collect data about the requests the application serves. The Developer has set the user field on segments to a string that identifies the user who sent the request.
How can the Developer search for segments associated with specific users?
Use a filter expression to search for the user field in the segment metadata
Use a filter expression to search for the user field in the segment annotations
By using the GetTraceGraph API with a filter expression
By using the GetTraceSummaries API with a filter expression
CORRECT: “By using the GetTraceSummaries API with a filter expression” is the correct answer.
A subset of segment fields are indexed by X-Ray for use with filter expressions. For example, if you set the user field on a segment to a unique identifier, you can search for segments associated with specific users in the X-Ray console or by using the GetTraceSummaries API.
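A boto3 sketch of searching for a specific user's traces over the last hour; the user identifier is a placeholder:

```python
from datetime import datetime, timedelta
import boto3

xray = boto3.client('xray')

end = datetime.utcnow()
summaries = xray.get_trace_summaries(
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    # The user field is indexed, so it can be referenced directly in a filter expression
    FilterExpression='user = "user-12345"',   # placeholder user identifier
)
for s in summaries['TraceSummaries']:
    print(s['Id'])
```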
An application uses an Auto Scaling group of Amazon EC2 instances, an Application Load Balancer (ALB), and an Amazon Simple Queue Service (SQS) queue. An Amazon CloudFront distribution caches content for global users. A Developer needs to add in-transit encryption to the data by configuring end-to-end SSL between the CloudFront Origin and the end users.
How can the Developer meet this requirement? (Select TWO.)
Create an Origin Access Identity (OAI)
Configure the Origin Protocol Policy
Add a certificate to the Auto Scaling Group
Create an encrypted distribution
Configure the Viewer Protocol Policy
CORRECT: “Configure the Origin Protocol Policy” is a correct answer.
CORRECT: “Configure the Viewer Protocol Policy” is also a correct answer.
To enable SSL between the origin and the distribution the Developer can configure the Origin Protocol Policy. Depending on the domain name used (CloudFront default or custom), the steps are different. To enable SSL between the end-user and CloudFront the Viewer Protocol Policy should be configured.
A Development team has deployed several applications running on an Auto Scaling fleet of Amazon EC2 instances. The Operations team have asked for a display that shows a key performance metric for each application on a single screen for monitoring purposes.
What steps should a Developer take to deliver this capability using Amazon CloudWatch?
Create a custom dimension with a unique metric name for each application
Create a custom event with a unique metric name for each application
Create a custom alarm with a unique metric name for each application
Create a custom namespace with a unique metric name for each application
A namespace is a container for CloudWatch metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.
Therefore, the Developer should create a custom namespace with a unique metric name for each application. This namespace will then allow the metrics for each individual application to be shown in a single view through CloudWatch.
CORRECT: “Create a custom namespace with a unique metric name for each application” is the correct answer.
INCORRECT: “Create a custom dimension with a unique metric name for each application” is incorrect as a dimension further clarifies what a metric is and what data it stores.
A batch job runs every 24 hours and writes around 1 million items into a DynamoDB table each day. The batch job completes quickly, and the items are processed within 2 hours and are no longer needed.
What’s the MOST efficient way to provide an empty table each day?
Use the BatchWriteItem API with a DeleteRequest
Use the BatchUpdateItem API with expressions
Issue an AWS CLI aws dynamodb delete-item command with a wildcard
Delete the entire table and recreate it each day
Any delete operation will consume RCUs to scan/query the table and WCUs to delete the items. It will be much cheaper and simpler to just delete the table and recreate it again ahead of the next batch job. This can easily be automated through the API.
INCORRECT: “Issue an AWS CLI aws dynamodb delete-item command with a wildcard” is incorrect as this operation deletes data from a table one item at a time, which is highly inefficient. You also must specify the item’s primary key values; you cannot use a wildcard.
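A boto3 sketch of the daily reset, assuming a simple table with a single partition key and on-demand capacity:

```python
import boto3

dynamodb = boto3.client('dynamodb')
TABLE = 'BatchItems'  # placeholder

def reset_table():
    # Dropping the table avoids consuming RCUs/WCUs to delete items one by one
    dynamodb.delete_table(TableName=TABLE)
    dynamodb.get_waiter('table_not_exists').wait(TableName=TABLE)
    dynamodb.create_table(
        TableName=TABLE,
        AttributeDefinitions=[{'AttributeName': 'ItemId', 'AttributeType': 'S'}],
        KeySchema=[{'AttributeName': 'ItemId', 'KeyType': 'HASH'}],
        BillingMode='PAY_PER_REQUEST',
    )
    dynamodb.get_waiter('table_exists').wait(TableName=TABLE)
```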
The source code for an application is stored in a file named index.js that is in a folder along with a template file that includes the following code:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  LambdaFunctionWithAPI:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
What does a Developer need to do to prepare the template so it can be deployed using an AWS CLI command?
Run the aws serverless create-package command to embed the source file directly into the existing CloudFormation template
Run the aws lambda zip command to package the source file together with the CloudFormation template and deploy the resulting zip archive
Run the aws cloudformation package command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template
Run the aws cloudformation compile command to base64 encode and embed the source file into a modified CloudFormation template
CORRECT: “Run the aws cloudformation package command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template” is the correct answer.
INCORRECT: “Run the aws serverless create-package command to embed the source file directly into the existing CloudFormation template” is incorrect as the Developer has the choice to run either “aws cloudformation package” or “sam package”, but not “aws serverless create-package”.
A Developer needs to create an instance profile for an Amazon EC2 instance using the AWS CLI. How can this be achieved? (Select THREE.)
Run the AddRoleToInstanceProfile API
Run the AssignInstanceProfile API
Run the aws iam add-role-to-instance-profile command
Run the aws ec2 associate-instance-profile command
Run the CreateInstanceProfile API
Run the aws iam create-instance-profile command
Run the aws iam create-instance-profile command
Run the aws iam add-role-to-instance-profile command
Run the aws ec2 associate-instance-profile command
To add a role to an Amazon EC2 instance using the AWS CLI you must first create an instance profile. Then you need to add the role to the instance profile and finally assign the instance profile to the Amazon EC2 instance.
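The same three steps as a boto3 sketch; the profile name, role name, and instance ID are placeholders:

```python
import boto3

iam = boto3.client('iam')
ec2 = boto3.client('ec2')

# 1. Create the instance profile
iam.create_instance_profile(InstanceProfileName='app-profile')        # placeholder

# 2. Add an existing role to it
iam.add_role_to_instance_profile(InstanceProfileName='app-profile',
                                 RoleName='app-role')                  # placeholder role

# 3. Associate the profile with the instance
ec2.associate_iam_instance_profile(
    IamInstanceProfile={'Name': 'app-profile'},
    InstanceId='i-0123456789abcdef0',                                  # placeholder
)
```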
A development team is migrating data from various file shares to AWS from on-premises. The data will be migrated into a single Amazon S3 bucket. What is the SIMPLEST method to ensure the data is encrypted at rest in the S3 bucket?
Use SSL to transmit the data over the Internet
Ensure all requests use the x-amz-server-side-encryption-customer-key header
Ensure all requests use the x-amz-server-side-encryption header
Enable default encryption when creating the bucket
CORRECT: “Enable default encryption when creating the bucket” is the correct answer.
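A minimal sketch of enabling default encryption (SSE-S3) with boto3, assuming a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Every new object written to the bucket is now encrypted at rest by default
s3.put_bucket_encryption(
    Bucket="my-migration-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)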
An application uses Amazon API Gateway, an AWS Lambda function and a DynamoDB table. The developer requires that another Lambda function is triggered when an item lifecycle activity occurs in the DynamoDB table.
How can this be achieved?
Configure an Amazon CloudWatch alarm that sends an Amazon SNS notification. Trigger the Lambda function asynchronously from the SNS notification
Configure an Amazon CloudTrail API alarm that sends a message to an Amazon SQS queue. Configure the Lambda function to poll the queue and invoke the function synchronously
Enable a DynamoDB stream and trigger the Lambda function asynchronously from the stream
Enable a DynamoDB stream and trigger the Lambda function synchronously from the stream
CORRECT: “Enable a DynamoDB stream and trigger the Lambda function synchronously from the stream” is the correct answer.
INCORRECT: “Enable a DynamoDB stream and trigger the Lambda function asynchronously from the stream” is incorrect as the invocation should be synchronous.
Immediately after an item in the table is modified, a new record appears in the table’s stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
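A sketch of wiring the stream to the second function with boto3; the stream ARN and function name are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Lambda polls the stream and invokes the function synchronously with batches of records
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Items/stream/...",  # hypothetical stream ARN
    FunctionName="process-item-activity",  # hypothetical function name
    StartingPosition="LATEST",
    BatchSize=100,
)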
An application is using Amazon DynamoDB as its data store and needs to be able to read 100 items per second as strongly consistent reads. Each item is 5 KB in size.
What value should be set for the table’s provisioned throughput for reads?
250 Read Capacity Units
500 Read Capacity Units
200 Read Capacity Units
50 Read Capacity Units
CORRECT: “200 Read Capacity Units” is the correct answer.
To determine the number of RCUs required to handle 100 strongly consistent reads per second with an item size of 5 KB, perform the following steps:
- Round the item size up to the next multiple of 4 KB (5 KB rounds up to 8 KB).
- Determine the RCUs per item by dividing the rounded item size by 4 KB (8 KB / 4 KB = 2).
- Multiply the value from step 2 by the number of reads required per second (2 x 100 = 200).
A company wants to implement authentication for its new REST service using Amazon API Gateway. To authenticate the calls, each request must include HTTP headers with a client ID and user ID. These credentials must be compared to authentication data in an Amazon DynamoDB table.
What MUST the company do to implement this authentication in API Gateway?
Implement an Amazon Cognito authorizer that references the DynamoDB authentication table
Create a model that requires the credentials, then grant API Gateway access to the authentication table
Implement an AWS Lambda authorizer that references the DynamoDB authentication table
Modify the integration requests to require the credentials, then grant API Gateway access to the authentication table
CORRECT: “Implement an AWS Lambda authorizer that references the DynamoDB authentication table” is the correct answer.
There are two types of Lambda authorizers:
- A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller’s identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token.
- A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller’s identity in a combination of headers, query string parameters, stageVariables, and $context variables.
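A minimal sketch of a REQUEST authorizer handler for this scenario. It assumes the client ID and user ID arrive as headers and that a hypothetical DynamoDB table named "auth" is keyed on those two attributes; adapt the key schema to the real table.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("auth")  # hypothetical authentication table

def handler(event, context):
    headers = event.get("headers", {})
    client_id = headers.get("clientId")
    user_id = headers.get("userId")

    # Compare the credentials against the authentication data in DynamoDB
    item = table.get_item(Key={"client_id": client_id, "user_id": user_id}).get("Item")
    effect = "Allow" if item else "Deny"

    # Return an IAM policy document that API Gateway evaluates
    return {
        "principalId": user_id or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }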
An application will use AWS Lambda and an Amazon RDS database. The Developer needs to secure the database connection string and enable automatic rotation every 30 days. What is the SIMPLEST way to achieve this requirement?
Store a SecureString in Systems Manager Parameter Store and enable automatic rotation every 30 days
Store a secret in AWS Secrets Manager and enable automatic rotation every 30 days
Store the connection string in an encrypted Amazon S3 bucket and use a scheduled CloudWatch Event to update the connection string every 30 days
Store the connection string as an encrypted environment variable in Lambda and create a separate function that rotates the connection string every 30 days
CORRECT: “Store a secret in AWS Secrets Manager and enable automatic rotation every 30 days” is the correct answer.
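With rotation handled by Secrets Manager, the Lambda function only needs to read the current secret value. A hedged sketch, assuming a placeholder secret name that stores the connection string as JSON:

import json
import boto3

secrets = boto3.client("secretsmanager")

def get_connection_string():
    # Always fetch the current version; automatic rotation keeps it up to date
    response = secrets.get_secret_value(SecretId="prod/app/rds-connection")  # hypothetical secret name
    return json.loads(response["SecretString"])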
A company needs to provide additional security for their APIs deployed on Amazon API Gateway. They would like to be able to authenticate their customers with a token. What is the SAFEST way to do this?
Setup usage plans and distribute API keys to the customers
Use AWS Single Sign-on to authenticate the customers
Create an Amazon Cognito identity pool
Create an API Gateway Lambda authorizer
CORRECT: “Create an API Gateway Lambda authorizer” is the correct answer.
A developer is creating a serverless application that will use a DynamoDB table. The average item size is 7KB. The application will make 3 strongly consistent reads/sec, and 1 standard write/sec. How many RCUs/WCUs are required?
12 RCU and 14 WCU
6 RCU and 7 WCU
6 RCU and 14 WCU
3 RCU and 7 WCU
6 RCU and 7 WCU
Read capacity unit (RCU):
- Each API call to read data from your table is a read request.
- Read requests can be strongly consistent, eventually consistent, or transactional.
- For items up to 4 KB in size, one RCU can perform one strongly consistent read request per second.
- Items larger than 4 KB require additional RCUs.
- For items up to 4 KB in size, one RCU can perform two eventually consistent read requests per second.
- Transactional read requests require two RCUs to perform one read per second for items up to 4 KB.
- For example, a strongly consistent read of an 8 KB item would require two RCUs, an eventually consistent read of an 8 KB item would require one RCU, and a transactional read of an 8 KB item would require four RCUs.
Write capacity unit (WCU):
- Each API call to write data to your table is a write request.
- For items up to 1 KB in size, one WCU can perform one standard write request per second.
- Items larger than 1 KB require additional WCUs.
- Transactional write requests require two WCUs to perform one write per second for items up to 1 KB.
- For example, a standard write request of a 1 KB item would require one WCU, a standard write request of a 3 KB item would require three WCUs, and a transactional write request of a 3 KB item would require six WCUs.
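The arithmetic for this scenario, as a small illustrative Python calculation:

import math

item_kb = 7
reads_per_sec = 3    # strongly consistent
writes_per_sec = 1   # standard

rcu = math.ceil(item_kb / 4) * reads_per_sec   # ceil(7/4) = 2 RCUs per read -> 6 RCUs
wcu = math.ceil(item_kb / 1) * writes_per_sec  # 7 WCUs per write -> 7 WCUs
print(rcu, wcu)  # 6 7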
An AWS Lambda function requires several environment variables with secret values. The secret values should be obscured in the Lambda console and API output even for users who have permission to use the key.
What is the best way to achieve this outcome and MINIMIZE complexity and latency?
Encrypt the secret values client-side using encryption helpers
Store the encrypted values in an encrypted Amazon S3 bucket and reference them from within the code
Use an external encryption infrastructure to encrypt the values and add them as environment variables
Encrypt the secret values with a customer-managed CMK
Encrypt the secret values client-side using encryption helpers
• Encryption helpers – The Lambda console lets you encrypt environment variable values client side, before sending them to Lambda. This enhances security further by preventing secrets from being displayed unencrypted in the Lambda console, or in function configuration that’s returned by the Lambda API. The console also provides sample code that you can adapt to decrypt the values in your function handler.
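A sketch along the lines of the sample code the console provides: decrypting an encrypted environment variable with AWS KMS at cold start. The variable name is a placeholder, and the encryption context shown is an assumption based on the console helper's behavior.

import base64
import os
import boto3

kms = boto3.client("kms")

# DB_PASSWORD was encrypted client side in the console, so only ciphertext is
# visible in the console and API output; decrypt it once outside the handler
ENCRYPTED = os.environ["DB_PASSWORD"]  # placeholder variable name
DECRYPTED = kms.decrypt(
    CiphertextBlob=base64.b64decode(ENCRYPTED),
    EncryptionContext={"LambdaFunctionName": os.environ["AWS_LAMBDA_FUNCTION_NAME"]},
)["Plaintext"].decode("utf-8")

def handler(event, context):
    # Use DECRYPTED to connect to the downstream resource
    ...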
A development team are creating a mobile application that customers will use to receive notifications and special offers. Users will not be required to log in.
What is the MOST efficient method to grant users access to AWS resources?
Use Amazon Cognito to associate unauthenticated users with an IAM role that has limited access to resources
Use an IAM SAML 2.0 identity provider to establish trust
Embed access keys in the application that have limited access to resources
Use Amazon Cognito Federated Identities and setup authentication using a Cognito User Pool
Use Amazon Cognito to associate unauthenticated users with an IAM role that has limited access to resources
An organization has a new AWS account and is setting up IAM users and policies. According to AWS best practices, which of the following strategies should be followed? (Select TWO.)
Use user accounts to delegate permissions
Create standalone policies instead of using inline policies
Create user accounts that can be shared for efficiency
Always use customer managed policies instead of AWS managed policies
Use groups to assign permissions to users
Create standalone policies instead of using inline policies
Use groups to assign permissions to users
Explanation
AWS provides a number of best practices for AWS IAM that help you to secure your resources. The key best practices referenced in this scenario are as follows:
- Use groups to assign permissions to users – this is correct as you should create permissions policies and assign them to groups. Users can be added to the groups to get the permissions they need to perform their jobs.
- Create standalone policies instead of using inline policies (Use Customer Managed Policies Instead of Inline Policies in the AWS best practices) – this refers to creating your own policies that are standalone policies which can be reused multiple times (assigned to multiple entities such as groups, and users). This is better than using inline policies which are directly attached to a single entity.
INCORRECT: “Use user accounts to delegate permissions” is incorrect as you should use roles to delegate permissions.
A Developer is deploying an update to a serverless application that includes AWS Lambda using the AWS Serverless Application Model (SAM). The traffic needs to move from the old Lambda version to the new Lambda version gradually, within the shortest period of time.
Which deployment configuration is MOST suitable for these requirements?
CodeDeployDefault.HalfAtATime
CodeDeployDefault.LambdaCanary10Percent5Minutes
CodeDeployDefault.LambdaLinear10PercentEvery1Minute
CodeDeployDefault.LambdaLinear10PercentEvery2Minutes
Explanation
If you use AWS SAM to create your serverless application, it comes built-in with CodeDeploy to provide gradual Lambda deployments. With just a few lines of configuration, AWS SAM does the following for you:
- Deploys new versions of your Lambda function, and automatically creates aliases that point to the new version.
- Gradually shifts customer traffic to the new version until you’re satisfied that it’s working as expected, or you roll back the update.
- Defines pre-traffic and post-traffic test functions to verify that the newly deployed code is configured correctly and your application operates as expected.
- Rolls back the deployment if CloudWatch alarms are triggered.
There are several options for how CodeDeploy shifts traffic to the new Lambda version. You can choose from the following:
- Canary: Traffic is shifted in two increments. You can choose from predefined canary options. The options specify the percentage of traffic that’s shifted to your updated Lambda function version in the first increment, and the interval, in minutes, before the remaining traffic is shifted in the second increment.
- Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic that’s shifted in each increment and the number of minutes between each increment.
All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version at once.
Therefore CodeDeployDefault.LambdaCanary10Percent5Minutes is the best answer as this will shift 10 percent of the traffic and then after 5 minutes shift the remainder of the traffic. The entire deployment will take 5 minutes to cut over.
A Developer is deploying an application using Docker containers on Amazon ECS. One of the containers runs a database and should be placed on instances in the “databases” task group.
What should the Developer use to control the placement of the database task?
ECS Container Agent
IAM Group
Task Placement Constraint
Cluster Query Language
A task placement constraint is a rule that is considered during task placement. Task placement constraints can be specified when either running a task or creating a new service. The task placement constraints can be updated for existing services as well.
Amazon ECS supports the following types of task placement constraints:
distinctInstance
Place each task on a different container instance. This task placement constraint can be specified when either running a task or creating a new service.
memberOf
Place tasks on container instances that satisfy an expression. For more information about the expression syntax for constraints, see Cluster Query Language.
The memberOf task placement constraint can be specified with the following actions:
- Running a task
- Creating a new service
- Creating a new task definition
- Creating a new revision of an existing task definition
The example task placement constraint below uses the memberOf constraint to place tasks on instances in the databases task group. It can be specified with the following actions: CreateService, UpdateService, RegisterTaskDefinition, and RunTask.
"placementConstraints": [ { "expression": "task:group == databases", "type": "memberOf" } ] The Developer should therefore use task placement constraints as in the above example to control the placement of the database task.
INCORRECT: “Cluster Query Language” is incorrect. Cluster queries are expressions that enable you to group objects. For example, you can group container instances by attributes such as Availability Zone, instance type, or custom metadata.
A Developer must deploy a new AWS Lambda function using an AWS CloudFormation template.
Which procedures will deploy a Lambda function? (Select TWO.)
Upload a ZIP file containing the function code to Amazon S3, then add a reference to it in an AWS::Lambda::Function resource in the template
Create an AWS::Lambda::Function resource in the template, then write the code directly inside the CloudFormation template
Upload a ZIP file to AWS CloudFormation containing the function code, then add a reference to it in an AWS::Lambda::Function resource in the template
Upload the function code to a private Git repository, then add a reference to it in an AWS::Lambda::Function resource in the template
Upload the code to an AWS CodeCommit repository, then add a reference to it in an AWS::Lambda::Function resource in the template
Upload a ZIP file containing the function code to Amazon S3, then add a reference to it in an AWS::Lambda::Function resource in the template
Create an AWS::Lambda::Function resource in the template, then write the code directly inside the CloudFormation template
A developer is preparing the resources for creating a multicontainer Docker environment on AWS Elastic Beanstalk. How can the developer define the Docker containers?
Define the containers in the Dockerrun.aws.json file in JSON format and save at the root of the source directory
Define the containers in the Dockerrun.aws.json file in YAML format and save at the root of the source directory
Create a buildspec.yml file and save it at the root of the source directory
Create a Docker.config file and save it in the .ebextensions folder at the root of the source directory
Define the containers in the Dockerrun.aws.json file in JSON format and save at the root of the source directory
You can launch a cluster of multicontainer instances in a single-instance or autoscaling Elastic Beanstalk environment using the Elastic Beanstalk console. The single container and multicontainer Docker platforms for Elastic Beanstalk support the use of Docker images stored in a public or private online image repository.
You specify images by name in the Dockerrun.aws.json file and save it in the root of your source directory.
A mobile application is being developed that will use AWS Lambda, Amazon API Gateway and Amazon DynamoDB. A developer would like to securely authenticate the users of the mobile application and then grant them access to the API.
What is the BEST way to achieve this?
Create a COGNITO_USER_POOLS authorizer in API Gateway
Create an IAM authorizer in API Gateway
Create a Lambda authorizer in API Gateway
Create a COGNITO_IDENTITY_POOLS authorizer in API Gateway
CORRECT: “Create a COGNITO_USER_POOLS authorizer in API Gateway” is the correct answer.
To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user in to the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set to the request’s Authorization header. The API call succeeds only if the required token is supplied and the supplied token is valid, otherwise, the client isn’t authorized to make the call because the client did not have credentials that could be authorized.
A serverless application is used to process customer information and outputs a JSON file to an Amazon S3 bucket. AWS Lambda is used for processing the data. The data is sensitive and should be encrypted.
How can a Developer modify the Lambda function to ensure the data is encrypted before it is uploaded to the S3 bucket?
Use the GenerateDataKey API, then use the data key to encrypt the file using the Lambda code
Enable server-side encryption on the S3 bucket and create a policy to enforce encryption
Use the S3 managed key and call the GenerateDataKey API to encrypt the file
Use the default KMS key for S3 and encrypt the file using the Lambda code
The GenerateDataKey API is used with the AWS KMS services and generates a unique symmetric data key. This operation returns a plaintext copy of the data key and a copy that is encrypted under a customer master key (CMK) that you specify. You can use the plaintext key to encrypt your data outside of AWS KMS and store the encrypted data key with the encrypted data.
For this scenario we can use GenerateDataKey to obtain an encryption key from KMS that we can then use within the function code to encrypt the file. This ensures that the file is encrypted BEFORE it is uploaded to Amazon S3.
CORRECT: “Use the GenerateDataKey API, then use the data key to encrypt the file using the Lambda code” is the correct answer.
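A hedged sketch of this pattern in the Lambda code. The CMK alias and bucket name are placeholders, and the local symmetric encryption is shown with the third-party cryptography package, which is an assumption about how the plaintext data key might be used.

import base64
import boto3
from cryptography.fernet import Fernet  # assumption: packaged with the function

kms = boto3.client("kms")
s3 = boto3.client("s3")

def encrypt_and_upload(json_bytes, bucket, key):
    # Ask KMS for a data key: a plaintext copy plus a copy encrypted under the CMK
    data_key = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")  # hypothetical CMK alias

    # Encrypt the file locally with the plaintext data key, BEFORE the upload
    fernet = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
    ciphertext = fernet.encrypt(json_bytes)

    # Store the encrypted file and keep the encrypted data key alongside it
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=ciphertext,
        Metadata={"x-encrypted-data-key": base64.b64encode(data_key["CiphertextBlob"]).decode()},
    )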
A Developer is creating a service on Amazon ECS and needs to ensure that each task is placed on a different container instance.
How can this be achieved?
Create a service on Fargate
Use a task placement constraint
Create a cluster with multiple container instances
Use a task placement strategy
CORRECT: “Use a task placement constraint” is the correct answer.
INCORRECT: “Use a task placement strategy” is incorrect as this is used to select instances for task placement using the binpack, random and spread algorithms.
A Developer received the following error when attempting to launch an Amazon EC2 instance using the AWS CLI.
An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation. Encoded authorization failure message: VNVaHFdCohROkbyT_rIXoRyNTp7vXFJCqnGiwPuyKnsSVf-WSSGK_06….
What action should the Developer perform to make this error more human-readable?
Use the AWS IAM decode-authorization-message API to decode this message
Use an open source decoding library to decode the message
Make a call to AWS KMS to decode the message
Use the AWS STS decode-authorization-message API to decode the message
CORRECT: “Use the AWS STS decode-authorization-message API to decode the message” is the correct answer.
Explanation
The AWS STS decode-authorization-message API decodes additional information about the authorization status of a request from an encoded message returned in response to an AWS request. The output is then decoded into a more human-readable output that can be viewed in a JSON editor.
The following example is the decoded output from the error shown in the question:
{
  "DecodedMessage": "{\"allowed\":false,\"explicitDeny\":false,\"matchedStatements\":{\"items\":[]},\"failures\":{\"items\":[]},\"context\":{\"principal\":{\"id\":\"AIDAXP4J2EKU7YXXG3EJ4\",\"name\":\"Paul\",\"arn\":\"arn:aws:iam::515148227241:user/Paul\"},\"action\":\"ec2:RunInstances\",\"resource\":\"arn:aws:ec2:ap-southeast-2:51514822724..."
}
Therefore, the best answer is to use the AWS STS decode-authorization-message API to decode the message.
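The same decode can be performed programmatically. A minimal boto3 sketch, passing the truncated encoded message from the error:

import boto3

sts = boto3.client("sts")

# Decode the opaque authorization failure message into readable JSON
response = sts.decode_authorization_message(
    EncodedMessage="VNVaHFdCohROkbyT_rIXoRyNTp7vXFJCqnGiwPuyKnsSVf-WSSGK_06..."  # truncated message from the error
)
print(response["DecodedMessage"])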
A small team of Developers require access to an Amazon S3 bucket. An admin has created a resource-based policy. Which element of the policy should be used to specify the ARNs of the user accounts that will be granted access?
Condition
Id
Principal
Sid
Use the Principal element in a policy to specify the principal that is allowed or denied access to a resource. You cannot use the Principal element in an IAM identity-based policy. You can use it in the trust policies for IAM roles and in resource-based policies. Resource-based policies are policies that you embed directly in a resource, such as an Amazon S3 bucket.
CORRECT: “Principal” is the correct answer.
An Amazon DynamoDB table will store authentication credentials for a mobile app. The table must be secured so only a small group of Developers are able to access it.
How can table access be secured according to this requirement and following AWS best practice?
Attach a permissions policy to an IAM group containing the Developers’ IAM user accounts that grants access to the table
Create a shared user account and attach a permissions policy granting access to the table. Instruct the Developers to log in with the user account
Attach an AWS KMS resource-based policy to a CMK and grant the Developers’ user accounts the permissions to decrypt data in the table using the CMK
Attach a resource-based policy to the table and add an IAM group containing the Developers’ IAM user accounts as a Principal in the policy
Explanation
Amazon DynamoDB supports identity-based policies only. The best practice method to assign permissions to the table is to create a permissions policy that grants access to the table and assign that policy to an IAM group that contains the Developers’ user accounts.
This gives all users in the IAM group the access required to work with the DynamoDB table.
CORRECT: “Attach a permissions policy to an IAM group containing the Developers’ IAM user accounts that grants access to the table” is the correct answer.
A Developer is working on an AWS Lambda function that accesses Amazon DynamoDB. The Lambda function must retrieve an item and update some of its attributes or create the item if it does not exist. The Lambda function has access to the primary key.
Which IAM permission should the Developer request for the Lambda function to achieve this functionality?
“dynamodb:UpdateItem”, “dynamodb:GetItem”, and “dynamodb:DescribeTable”
“dynamodb:GetRecords”, “dynamodb:PutItem”, and “dynamodb:UpdateTable”
“dynamodb:DeleteItem”, “dynamodb:GetItem”, and “dynamodb:PutItem”
“dynamodb:UpdateItem”, “dynamodb:GetItem”, and “dynamodb:PutItem”
Explanation
The Developer needs the permissions to retrieve items, update/modify items, and create items. Therefore permissions for the following API actions are required:
- GetItem - The GetItem operation returns a set of attributes for the item with the given primary key.
- UpdateItem - Edits an existing item’s attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values.
- PutItem - Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item.
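A sketch of how UpdateItem covers both the update and the create-if-missing case; the table, key, and attribute names are placeholders:

import boto3

table = boto3.resource("dynamodb").Table("items")  # hypothetical table

# UpdateItem edits the item with this primary key, or creates it if it does
# not exist, so a single call handles the upsert
table.update_item(
    Key={"item_id": "abc-123"},
    UpdateExpression="SET #s = :status, updated_at = :ts",
    ExpressionAttributeNames={"#s": "status"},  # "status" is a reserved word in DynamoDB
    ExpressionAttributeValues={":status": "PROCESSED", ":ts": "2023-01-01T00:00:00Z"},
)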
An application is running on a cluster of Amazon EC2 instances. The application has received an error when trying to read objects stored within an Amazon S3 bucket. The bucket is encrypted with server-side encryption and AWS KMS managed keys (SSE-KMS). The error is as follows:
Service: AWSKMS; Status Code: 400, Error Code: ThrottlingException
Which combination of steps should be taken to prevent this failure? (Select TWO.)
Contact AWS support to request an S3 rate limit increase
Import a customer master key (CMK) with a larger key size
Contact AWS support to request an AWS KMS rate limit increase
Perform error retries with exponential backoff in the application code
Use more than one customer master key (CMK) to encrypt S3 data
CORRECT: “Contact AWS support to request an AWS KMS rate limit increase” is a correct answer.
CORRECT: “Perform error retries with exponential backoff in the application code” is a correct answer.
AWS KMS establishes quotas for the number of API operations requested in each second. When you exceed an API request quota, AWS KMS throttles the request; that is, it rejects an otherwise valid request and returns a ThrottlingException error like the one shown in the question.
As the error indicates, one of the recommendations is to reduce the frequency of calls which can be implemented by using exponential backoff logic in the application code. It is also possible to contact AWS and request an increase in the quota.
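One hedged way to implement retries with exponential backoff is through the boto3 retry configuration rather than a hand-rolled loop; the bucket and key names are placeholders:

import boto3
from botocore.config import Config

# Retry throttled calls automatically with exponential backoff and jitter
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
s3 = boto3.client("s3", config=retry_config)

# Reads of SSE-KMS objects trigger KMS Decrypt calls behind the scenes;
# throttled requests are now retried with increasing delays
obj = s3.get_object(Bucket="my-encrypted-bucket", Key="data/object.json")  # placeholder names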
An application needs to read up to 100 items at a time from an Amazon DynamoDB table. Each item is up to 100 KB in size, and all attributes must be retrieved.
What is the BEST way to minimize latency?
Use BatchGetItem
Use a Query operation with a FilterExpression
Use GetItem and use a projection expression
Use a Scan operation with pagination
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. In order to minimize response latency, BatchGetItem retrieves items in parallel.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
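A minimal boto3 sketch of a 100-item batch read; the table name and key values are placeholders:

import boto3

dynamodb = boto3.resource("dynamodb")

# Retrieve up to 100 items (up to 16 MB) in parallel in a single call
response = dynamodb.batch_get_item(
    RequestItems={
        "items": {  # hypothetical table name
            "Keys": [{"item_id": str(i)} for i in range(100)],
            "ConsistentRead": False,  # eventually consistent by default
        }
    }
)
items = response["Responses"]["items"]
# Any keys DynamoDB could not process this time appear in response["UnprocessedKeys"]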
An application is running on a fleet of EC2 instances behind an Elastic Load Balancer (ELB). The EC2 instances store session data in a shared Amazon S3 bucket. Security policy mandates that data must be encrypted in transit.
How can the Developer ensure that all data that is sent to the S3 bucket is encrypted in transit?
Create an S3 bucket policy that denies traffic where SecureTransport is true
Create an S3 bucket policy that denies traffic where SecureTransport is false
Create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption header
Configure HTTP to HTTPS redirection on the Elastic Load Balancer
At the Amazon S3 bucket level, you can configure permissions through a bucket policy. For example, you can limit access to the objects in a bucket by IP address range or specific IP addresses. Alternatively, you can make the objects accessible only through HTTPS.
CORRECT: “Create an S3 bucket policy that denies traffic where SecureTransport is false” is the correct answer.
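A hedged sketch of applying such a policy with boto3; the bucket name is a placeholder:

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-session-bucket",    # placeholder bucket
                "arn:aws:s3:::my-session-bucket/*",
            ],
            # Deny any request that is not sent over HTTPS
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="my-session-bucket", Policy=json.dumps(policy))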
A Developer has joined a team and needs to connect to the AWS CodeCommit repository using SSH. What should the Developer do to configure access using Git?
On the Developer’s IAM account, under security credentials, choose to create an access key and secret ID
Create an account on GitHub and use those login credentials to log in to AWS CodeCommit
Generate an SSH public and private key. Upload the public key to the Developer’s IAM account
On the Developer’s IAM account, under security credentials, choose to create HTTPS Git credentials for AWS CodeCommit
You need to configure your Git client to communicate with CodeCommit repositories. As part of this configuration, you provide IAM credentials that CodeCommit can use to authenticate you. IAM supports CodeCommit with three types of credentials:
- Git credentials, an IAM-generated user name and password pair you can use to communicate with CodeCommit repositories over HTTPS.
- SSH keys, a locally generated public-private key pair that you can associate with your IAM user to communicate with CodeCommit repositories over SSH.
- AWS access keys, which you can use with the credential helper included with the AWS CLI to communicate with CodeCommit repositories over HTTPS.
A Developer is writing an AWS Lambda function that processes records from an Amazon Kinesis Data Stream. The Developer must write the function so that it sends a notice to Administrators if it fails to process a batch of records.
How should the Developer write the function?
Configure an Amazon SNS topic as an on-failure destination
Separate the Lambda handler from the core logic
Use Amazon CloudWatch Events to send the processed data
Push the failed records to an Amazon SQS queue
With Destinations, you can route asynchronous function results as an execution record to a destination resource without writing additional code. An execution record contains details about the request and response in JSON format including version, timestamp, request context, request payload, response context, and response payload.
For each execution status such as Success or Failure you can choose one of four destinations: another Lambda function, SNS, SQS, or EventBridge. Lambda can also be configured to route different execution results to different destinations.
In this scenario the Developer can configure an Amazon SNS topic as an on-failure destination so that a notification is sent to the Administrators whenever the function fails to process a batch of records.
CORRECT: “Configure an Amazon SNS topic as an on-failure destination” is the correct answer.
INCORRECT: “Push the failed records to an Amazon SQS queue” is incorrect as SQS will not notify the administrators, SNS should be used.
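Following the Destinations feature described above, a hedged boto3 sketch of routing failure records to an SNS topic; the function name and topic ARN are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Route failed invocation records to an SNS topic that notifies the Administrators
lambda_client.put_function_event_invoke_config(
    FunctionName="process-kinesis-records",  # placeholder function name
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:sns:us-east-1:123456789012:admin-alerts"}  # placeholder topic
    },
)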
A Developer needs to restrict all users and roles from using a list of API actions within a member account in AWS Organizations. The Developer needs to deny access to a few specific API actions.
What is the MOST efficient way to do this?
Create a deny list and specify the API actions to deny
Create an IAM policy that allows only the unrestricted API actions
Create an IAM policy that denies the API actions for all users and roles
Create an allow list and specify the API actions to deny
Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines.
You can configure the SCPs in your organization to work as either of the following:
- A deny list – actions are allowed by default, and you specify what services and actions are prohibited
- An allow list – actions are prohibited by default, and you specify what services and actions are allowed
CORRECT: “Create a deny list and specify the API actions to deny” is the correct answer.
A Developer is deploying an Amazon ECS update using AWS CodeDeploy. In the appspec.yaml file, which of the following is a valid structure for the order of hooks that should be specified?
BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > ApplicationStart > ValidateService
BeforeAllowTraffic > AfterAllowTraffic
INCORRECT: “BeforeAllowTraffic > AfterAllowTraffic” is incorrect as this would be valid for AWS Lambda.
CORRECT: “BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic” is the correct answer.
An application is being instrumented to send trace data using AWS X-Ray. A Developer needs to upload segment documents using JSON-formatted strings to X-Ray using the API. Which API action should the developer use?
The GetTraceSummaries API action
The PutTraceSegments API action
The UpdateGroup API action
The PutTelemetryRecords API action
You can send trace data to X-Ray in the form of segment documents. A segment document is a JSON formatted string that contains information about the work that your application does in service of a request. Your application can record data about the work that it does itself in segments, or work that uses downstream services and resources in subsegments.
Segments record information about the work that your application does. A segment, at a minimum, records the time spent on a task, a name, and two IDs. The trace ID tracks the request as it travels between services. The segment ID tracks the work done for the request by a single service.
CORRECT: “The PutTraceSegments API action” is the correct answer.
INCORRECT: “The PutTelemetryRecords API action” is incorrect as this is used by the AWS X-Ray daemon to upload telemetry.
INCORRECT: “The UpdateGroup API action” is incorrect as this updates a group resource.
INCORRECT: “The GetTraceSummaries API action” is incorrect as this retrieves IDs and annotations for traces available for a specified time frame using an optional filter.
A Developer created an AWS Lambda function for a serverless application. The Lambda function has been executing for several minutes and the Developer cannot find any log data in CloudWatch Logs.
What is the MOST likely explanation for this issue?
The Lambda function is missing a target CloudWatch Logs group
The execution role for the Lambda function is missing permissions to write log data to the CloudWatch Logs
The Lambda function does not have any explicit log statements for the log data to send it to CloudWatch Logs
The Lambda function is missing CloudWatch Logs as a source trigger to send log data
An AWS Lambda function’s execution role grants it permission to access AWS services and resources. You provide this role when you create a function, and Lambda assumes the role when your function is invoked. You can create an execution role for development that has permission to send logs to Amazon CloudWatch and upload trace data to AWS X-Ray.
The most likely cause of this issue is that the execution role assigned to the Lambda function does not have the permissions required to write to CloudWatch Logs.
CORRECT: “The execution role for the Lambda function is missing permissions to write log data to the CloudWatch Logs” is the correct answer.
An application running on Amazon EC2 is experiencing intermittent technical difficulties. The developer needs to find a solution for tracking the errors that occur in the application logs and setting up a notification when the error rate exceeds a certain threshold.
How can this be achieved with the LEAST complexity?
Use CloudTrail to monitor the application log files and send an SNS notification
Configure Amazon CloudWatch Events to monitor the EC2 instances and configure an SNS topic as a target
Configure the application to send logs to Amazon S3. Use Amazon Kinesis Analytics to analyze the log files and send an SES notification
Use CloudWatch Logs to track the number of errors that occur in the application logs and send an SNS notification
You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify.
CloudWatch Logs uses your log data for monitoring; so, no code changes are required. For example, you can monitor application logs for specific literal terms (such as “NullReferenceException”) or count the number of occurrences of a literal term at a particular position in log data (such as “404” status codes in an Apache access log).
When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Log data is encrypted while in transit and while it is at rest.
CORRECT: “Use CloudWatch Logs to track the number of errors that occur in the application logs and send an SNS notification” is the correct answer.
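A hedged sketch of the two pieces involved: a metric filter that counts error terms in the log group, and an alarm that notifies an SNS topic when the threshold is exceeded. Log group, namespace, threshold, and topic ARN are placeholders.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count occurrences of the literal term "ERROR" in the application log group
logs.put_metric_filter(
    logGroupName="/app/production",  # placeholder log group
    filterName="application-errors",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "ApplicationErrors",
        "metricNamespace": "MyApp",
        "metricValue": "1",
    }],
)

# Alarm when more than 10 errors occur in 5 minutes and notify via SNS
cloudwatch.put_metric_alarm(
    AlarmName="application-error-rate",
    Namespace="MyApp",
    MetricName="ApplicationErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)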
An application collects data from sensors in a manufacturing facility. The data is stored in an Amazon SQS Standard queue by an AWS Lambda function and an Amazon EC2 instance processes the data and stores it in an Amazon RedShift data warehouse. A fault in the sensors’ software is causing occasional duplicate messages to be sent. Timestamps on the duplicate messages show they are generated within a few seconds of the primary message.
How can a Developer prevent duplicate data from being stored in the data warehouse?
Configure a redrive policy, specify a destination Dead-Letter queue, and set the maxReceiveCount to 1
Send a ChangeMessageVisibility call with VisibilityTimeout set to 30 seconds after the receipt of every message from the queue
Use a FIFO queue and configure the Lambda function to add a message group ID to the messages generated by each individual sensor
Use a FIFO queue and configure the Lambda function to add a message deduplication token to the message body
Use a FIFO queue and configure the Lambda function to add a message deduplication token to the message body
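A sketch of how the Lambda function might publish to a FIFO queue with a deduplication ID derived from the sensor reading; the queue URL and the choice of deduplication key are assumptions:

import boto3

sqs = boto3.client("sqs")

def send_reading(sensor_id, reading_body, reading_id):
    # Messages with the same deduplication ID within the 5-minute
    # deduplication interval are accepted but delivered only once
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/readings.fifo",  # placeholder queue
        MessageBody=reading_body,
        MessageGroupId=sensor_id,
        MessageDeduplicationId=reading_id,
    )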
A Developer is deploying an Amazon EC2 update using AWS CodeDeploy. In the appspec.yml file, which of the following is a valid structure for the order of hooks that should be specified?
BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > ApplicationStart > ValidateService
CORRECT: “BeforeInstall > AfterInstall > ApplicationStart > ValidateService” is the correct answer.
A Developer is creating a social networking app for games that uses a single Amazon DynamoDB table. All users’ saved game data is stored in the single table, but users should not be able to view each other’s data.
How can the Developer restrict user access so they can only view their own data?
Restrict access to specific items based on certain primary key values
Read records from DynamoDB and discard irrelevant data client-side
Use separate access keys for each user to call the API and restrict access to specific items based on access key ID
Use an identity-based policy that restricts read access to the table to specific principals
In DynamoDB, you have the option to specify conditions when granting permissions using an IAM policy. For example, you can:
- Grant permissions to allow users read-only access to certain items and attributes in a table or a secondary index.
- Grant permissions to allow users write-only access to certain attributes in a table, based upon the identity of that user.
To implement this kind of fine-grained access control, you write an IAM permissions policy that specifies conditions for accessing security credentials and the associated permissions. You then apply the policy to IAM users, groups, or roles that you create using the IAM console. Your IAM policy can restrict access to individual items in a table, access to the attributes in those items, or both at the same time.
You use the IAM Condition element to implement a fine-grained access control policy. By adding a Condition element to a permissions policy, you can allow or deny access to items and attributes in DynamoDB tables and indexes, based upon your particular business requirements. You can also grant permissions on a table, but restrict access to specific items in that table based on certain primary key values.
CORRECT: “Restrict access to specific items based on certain primary key values” is the correct answer.
A Developer is creating a serverless website with content that includes HTML files, images, videos, and JavaScript (client-side scripts).
Which combination of services should the Developer use to create the website?
Amazon ECS and Redis
Amazon EC2 and Amazon ElastiCache
AWS Lambda and Amazon API Gateway
Amazon S3 and Amazon CloudFront
Amazon S3 and Amazon CloudFront
An application that is being migrated to AWS and refactored requires a storage service. The storage service should provide a standards-based REST web service interface and store objects based on keys.
Which AWS service would be MOST suitable?
Amazon EFS
Amazon EBS
Amazon S3
Amazon DynamoDB
Explanation
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. Amazon S3 uses standards-based REST and SOAP interfaces designed to work with any internet-development toolkit.
Amazon S3 is a simple key-based object store. The key is the name of the object and the value is the actual data itself. Keys can be any string, and they can be constructed to mimic hierarchical attributes.
CORRECT: “Amazon S3” is the correct answer.
An Amazon ElastiCache cluster has been placed in front of a large Amazon RDS database. To reduce cost the ElastiCache cluster should only cache items that are actually requested. How should ElastiCache be optimized?
Use a lazy loading caching strategy
Use a write-through caching strategy
Only cache database writes
Enable a TTL on cached data
CORRECT: “Use a lazy loading caching strategy” is the correct answer.
There are two caching strategies available: Lazy Loading and Write-Through:
Lazy Loading
Loads the data into the cache only when necessary (if a cache miss occurs).
Lazy loading avoids filling up the cache with data that won’t be requested.
If requested data is in the cache, ElastiCache returns the data to the application.
If the data is not in the cache or has expired, ElastiCache returns a null.
The application then fetches the data from the database and writes the data received into the cache so that it is available for next time.
Data in the cache can become stale if Lazy Loading is implemented without other strategies (such as TTL).
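A minimal cache-aside (lazy loading) sketch, assuming a Redis-compatible ElastiCache endpoint accessed with the third-party redis package and a hypothetical load_from_database helper:

import json
import redis

cache = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)  # placeholder endpoint

def get_record(record_id):
    # 1. Try the cache first
    cached = cache.get(record_id)
    if cached is not None:
        return json.loads(cached)

    # 2. Cache miss: read from the database, then populate the cache
    record = load_from_database(record_id)  # hypothetical database read
    cache.set(record_id, json.dumps(record), ex=300)  # optional TTL guards against stale data
    return record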
An organization has encrypted a large quantity of data. To protect their data encryption keys they are planning to use envelope encryption. Which of the following processes is a correct implementation of envelope encryption?
Encrypt plaintext data with a master key and then encrypt the master key with a top-level encrypted data key
Encrypt plaintext data with a master key and then encrypt the master key with a top-level plaintext data key
Encrypt plaintext data with a data key and then encrypt the data key with a top-level plaintext master key.
Encrypt plaintext data with a data key and then encrypt the data key with a top-level encrypted master key
CORRECT: “Encrypt plaintext data with a data key and then encrypt the data key with a top-level plaintext master key” is the correct answer.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
You can even encrypt the data encryption key under another encryption key and encrypt that encryption key under another encryption key. But, eventually, one key must remain in plaintext so you can decrypt the keys and your data. This top-level plaintext key encryption key is known as the master key.
A company is building an application to track athlete performance using an Amazon DynamoDB table. Each item in the table is identified by a partition key (user_id) and a sort key (sport_name). The table design is shown below:
- Partition key: user_id
- Sort Key: sport_name
- Attributes: score, score_datetime
A Developer is asked to write a leaderboard application to display the top performers (user_id) based on the score for each sport_name.
What process will allow the Developer to extract results MOST efficiently from the DynamoDB table?
Use a DynamoDB query operation with the key attributes of user_id and sport_name and order the results based on the score attribute
Create a global secondary index with a partition key of sport_name and a sort key of score, and get the results
Use a DynamoDB scan operation to retrieve scores and user_id based on sport_name, and order the results based on the score attribute
Create a local secondary index with a primary key of sport_name and a sort key of score and get the results based on the score attribute
CORRECT: “Create a global secondary index with a partition key of sport_name and a sort key of score, and get the results” is the correct answer.
INCORRECT: “Use a DynamoDB query operation with the key attributes of user_id and sport_name and order the results based on the score attribute” is incorrect as this is less efficient compared to using a GSI.
The manager of a development team is setting up a shared S3 bucket for team members. The manager would like to use a single policy to allow each user to have access to their objects in the S3 bucket. Which feature can be used to generalize the policy?
Condition
Variable
Principal
Resource
When a policy that uses the ${aws:username} variable is evaluated, IAM replaces the variable with the friendly name of the actual current user. This means that a single policy applied to a group of users can control access to a bucket by using the username as part of the resource’s name.
CORRECT: “Variable” is the correct answer.
INCORRECT: “Condition” is incorrect. The Condition element (or Condition block) lets you specify conditions for when a policy is in effect.
A legacy application is being refactored into a microservices architecture running on AWS. The microservice will include several AWS Lambda functions. A Developer will use AWS Step Functions to coordinate function execution.
How should the Developer proceed?
Create a layer in AWS Lambda and add the functions to the layer
Create an AWS CloudFormation stack using a YAML-formatted template
Create a workflow using the StartExecution API action
Create a state machine using the Amazon States Language
CORRECT: “Create a state machine using the Amazon States Language” is the correct answer.
AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.
The following are key features of AWS Step Functions:
- Step Functions is based on the concepts of tasks and state machines.
- You define state machines using the JSON-based Amazon States Language.
A website delivers images stored in an Amazon S3 bucket. The site uses Amazon Cognito, and guest users without logins need to be able to view the images from the S3 bucket.
How can a Developer enable access for guest users to the AWS resources?
Create a new user pool, enable access to unauthenticated identities, and grant access to AWS resources
Create a new identity pool, enable access to unauthenticated identities, and grant access to AWS resources
Create a blank user ID in a user pool, add to the user group, and grant access to AWS resources
Create a new user pool, disable authentication access, and grant access to AWS resources
Amazon Cognito identity pools support both authenticated and unauthenticated identities. Authenticated identities belong to users who are authenticated by any supported identity provider. Unauthenticated identities typically belong to guest users.
CORRECT: “Create a new identity pool, enable access to unauthenticated identities, and grant access to AWS resources” is the correct answer.
INCORRECT: “Create a new user pool, enable access to unauthenticated identities, and grant access to AWS resources” is incorrect as you must use identity pools for unauthenticated users.
A Development team are creating a financial trading application. The application requires sub-millisecond latency for processing trading requests. Amazon DynamoDB is used to store the trading data. During load testing the Development team found that in periods of high utilization the latency is too high and read capacity must be significantly over-provisioned to avoid throttling.
How can the Developers meet the latency requirements of the application?
Use exponential backoff in the application code for DynamoDB queries
Store the trading data in Amazon S3 and use Transfer Acceleration
Create a Global Secondary Index (GSI) for the trading data
Use Amazon DynamoDB Accelerator (DAX) to cache the data
CORRECT: “Use Amazon DynamoDB Accelerator (DAX) to cache the data” is the correct answer.
INCORRECT: “Use exponential backoff in the application code for DynamoDB queries” is incorrect as this may reduce the need to over-provision reads, but it will not reduce latency. With this solution application performance would actually be worse; it trades performance for lower cost.
An e-commerce company has developed an API that is hosted on Amazon ECS. Variable traffic spikes on the application are causing order processing to take too long. The application processes orders using Amazon SQS queues. The ApproximateNumberOfMessagesVisible metric spikes at very high values throughout the day which triggers the CloudWatch alarm. Other ECS metrics for the API containers are well within limits.
As a Developer Associate, which of the following will you recommend for improving performance while keeping costs low?
Use ECS service scheduler
Use backlog per instance metric with target tracking scaling policy
Use ECS step scaling policy
Use Docker swarm
Use backlog per instance metric with target tracking scaling policy - If you use a target tracking scaling policy based on a custom Amazon SQS queue metric, dynamic scaling can adjust to the demand curve of your application more effectively.
Docker swarm - A Docker swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines. A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services).
ECS service scheduler - Amazon ECS provides a service scheduler (for long-running tasks and applications), the ability to run tasks manually (for batch jobs or single run tasks), with Amazon ECS placing tasks on your cluster for you. You can specify task placement strategies and constraints that allow you to run tasks in the configuration you choose, such as spread out across Availability Zones. It is also possible to integrate with custom or third-party schedulers.
ECS step scaling policy - Although Amazon ECS Service Auto Scaling supports using Application Auto Scaling step scaling policies, AWS recommends using target tracking scaling policies instead. For example, if you want to scale your service when CPU utilization falls below or rises above a certain level, create a target tracking scaling policy based on the CPU utilization metric provided by Amazon ECS.
An application is hosted by a 3rd party and exposed at yourapp.3rdparty.com. You would like to have your users access your application using www.mydomain.com, which you own and manage under Route 53.
What Route 53 record should you create?
Create a PTR record
Create an Alias Record
Create a CNAME record
Create an A record
Create a CNAME record
A CNAME record maps DNS queries for the name of the current record, such as acme.example.com, to another domain (example.com or example.net) or subdomain (acme.example.com or zenith.example.org).
CNAME records can be used to map one domain name to another. Keep in mind, however, that the DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You cannot create a CNAME record for example.com, but you can create CNAME records for www.example.com, newproduct.example.com, and so on.
Create an A record - Used to point a domain or subdomain to an IP address. ‘A record’ cannot be used to map one domain name to another.
As a developer, you are working on creating an application using AWS Cloud Development Kit (CDK).
Which of the following represents the correct order of steps to be followed for creating an app using AWS CDK?
Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account
Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app
Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app
Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account
- Create the app from a template provided by AWS CDK
- Add code to the app to create resources within stacks
- Build the app (optional)
- Synthesize one or more stacks in the app
- Deploy stack(s) to your AWS account
Your global organization has an IT infrastructure that is deployed using CloudFormation on AWS Cloud. One employee, in the us-east-1 Region, has created a stack ‘Application1’ and exported an output with the name ‘ELBDNSName’. Another employee has created a stack for a different application, ‘Application2’, in the us-east-2 Region and also exported an output with the name ‘ELBDNSName’. The first employee wanted to deploy the CloudFormation stack ‘Application1’ in us-east-2, but got an error. What is the cause of the error?
Exported Output Values in CloudFormation must have unique names within a single Region
Output Values in CloudFormation must have unique names within a single Region
Output Values in CloudFormation must have unique names across all Regions
Exported Output Values in CloudFormation must have unique names across all Regions
Exported Output Values in CloudFormation must have unique names within a single Region
A company has built its technology stack on AWS serverless architecture for managing all its business functions. To expedite development for a new business requirement, the company is looking at using pre-built serverless applications.
Which AWS service represents the easiest solution to address this use-case?
AWS Serverless Application Repository (SAR)
AWS Marketplace
AWS Service Catalog
AWS AppSync
AWS Serverless Application Repository (SAR)
A Developer has been entrusted with the job of securing certain S3 buckets that are shared by a large team of users. The last time a bucket policy was changed, the bucket was erroneously made available to everyone, including users outside the organization.
Which feature/service will help the developer identify similar security issues with minimum effort?
S3 Object Lock
Access Advisor feature on IAM console
IAM Access Analyzer
S3 Analytics
IAM Access Analyzer - AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data, which is a security risk.
Access Advisor feature on IAM console - To help identify the unused roles, IAM reports the last-used timestamp that represents when a role was last used to make an AWS request. Your security team can use this information to identify, analyze, and then confidently remove unused roles. This helps improve the security posture of your AWS environments.
S3 Object Lock - S3 Object Lock enables you to store objects using a “Write Once Read Many” (WORM) model. S3 Object Lock can help prevent accidental or inappropriate deletion of data
S3 Analytics - By using Amazon S3 analytics Storage Class Analysis you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. You cannot use S3 Analytics to identify unintended access to your S3 resources.
You have deployed a Java application to an EC2 instance where it uses the X-Ray SDK. When testing from your personal computer, the application sends data to X-Ray but when the application runs from within EC2, the application fails to send data to X-Ray.
Which of the following does NOT help with debugging the issue?
EC2 X-Ray Daemon
EC2 Instance Role
CloudTrail
X-Ray sampling
X-Ray sampling
By customizing sampling rules, you can control the amount of data that you record and modify sampling behavior on the fly without modifying or redeploying your code. Sampling rules tell the X-Ray SDK how many requests to record for a set of criteria. The X-Ray SDK applies a sampling algorithm to determine which requests get traced; however, because the application is failing to send any data to X-Ray, sampling does not help in determining the cause of the failure.
Incorrect options:
EC2 X-Ray Daemon - The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon logs could help with figuring out the problem.