Questions from Exam Pro & Topics Flashcards

1
Q

How to Prevent Uploads of Unencrypted Objects to Amazon S3

A

Use a bucket policy that denies upload requests missing the encryption header. This prevents users from uploading unencrypted objects unless they are using server-side encryption with S3-managed encryption keys (SSE-S3) or server-side encryption with AWS KMS-managed keys (SSE-KMS). The header to check is:

x-amz-server-side-encryption
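A bucket policy along these lines enforces the rule (a sketch: the bucket name my-bucket is a placeholder, and the allowed header values assume SSE-S3 and SSE-KMS):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": ["AES256", "aws:kms"]
        }
      }
    },
    {
      "Sid": "DenyUnencryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "Null": { "s3:x-amz-server-side-encryption": "true" }
      }
    }
  ]
}
```

The first statement rejects uploads that name any other encryption algorithm; the second rejects uploads that omit the header entirely.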

2
Q

The release process workflow of an application requires a manual approval before the code is deployed. What can you set up?

A

Use an approval action in a stage

3
Q

Amazon API Gateway now supports importing ___ ___ ___. This allows you to easily create and deploy new APIs as well as update existing APIs in Amazon API Gateway.

A

Swagger API definitions

openapi: 3.0.0
info:
  title: Sample API
  description: x
  version: 0.1.9
servers:
  - url: http://api.example.com/v1
    description: x
  - url: http://staging-api.example.com
    description: x
paths:
  /users:
    get:
      summary: x
      description: y
      responses:
        '200':    # status code
          description: x
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
4
Q

AWS Serverless Application Model (AWS SAM) Supports Inline ___

A
  • Swagger
  • Use intrinsic functions to specify URIs - CodeUri and DefinitionUri now accept Amazon S3 objects with a Bucket, Key and Version. This means you can now use intrinsic functions to dynamically specify your code or Swagger file’s location.
5
Q

What does the client have to do to invalidate the cache in API Gateway?

A

The client must send a request that contains the Cache-Control: max-age=0 header.

The client receives the response directly from the integration endpoint instead of the cache, provided that the client is authorized to do so. This replaces the existing cache entry with the new response, which is fetched from the integration endpoint.

6
Q

What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only?

A

By default, Amazon S3 allows both HTTP and HTTPS requests. To comply with the s3-bucket-ssl-requests-only rule, confirm that your bucket policies explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests might not comply with the rule.

To determine HTTP or HTTPS requests in a bucket policy, use a condition that checks for the key “aws:SecureTransport”. When this key is true, this means that the request is sent through HTTPS. To be sure to comply with the s3-bucket-ssl-requests-only rule, create a bucket policy that explicitly denies access when the request meets the condition “aws:SecureTransport”: “false”. This policy explicitly denies access to HTTP requests.
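A minimal sketch of such a policy (the bucket name my-bucket is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```

Note that the Resource list covers both the bucket itself and every object in it, so HTTP requests are denied for all operations.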

7
Q

If you have 4 ECS services and each one needs to have specific permissions, what do you have to do?

A

Create four distinct IAM roles, each containing the required permissions for the associated ECS service, then configure each ECS task definition to reference the associated IAM role.

8
Q

What code changes do I need to make to my application to use X-Ray?

A

For applications running on other AWS services, such as EC2 or ECS, you will need to install the X-Ray agent and instrument your application code.

9
Q

Developer must minimize the time between the message arrival in the queue and the dashboard update

A

OK: Retrieve the messages from the queue using long polling with a 20-second wait time.

NOK: Retrieve the messages from the queue using short polling every 10 seconds. (Polling on a fixed interval adds up to 10 seconds of latency, whereas long polling returns messages as soon as they arrive.)

The maximum long polling wait time is 20 seconds.

10
Q

A social media company is using Amazon Cognito in order to synchronize profiles across different mobile devices, to enable end users to have a seamless experience.
Which of the following configurations can be used to silently notify users whenever an update is available on all other devices?

A

Amazon Cognito Sync is an AWS service and client library that enables cross-device syncing of application-related user data. You can use it to synchronize user profile data across mobile devices and the web without requiring your own backend. The client libraries cache data locally so your app can read and write data regardless of device connectivity status. When the device is online, you can synchronize data, and if you set up push sync, notify other devices immediately that an update is available.

11
Q

What are the top caching strategies?

A

Cache Aside (lazy loading): the application checks the cache first; on a miss it reads from the database and then writes that data into the cache itself.

Read Through: the application always reads from the cache; on a miss the cache itself loads the data from the database, stores it, and returns it, so the app keeps reading only from the cache.
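A toy sketch of the cache-aside pattern (plain dicts stand in for the cache and the database; all names are illustrative):

```python
# Toy cache-aside: plain dicts stand in for the cache and the database.
db = {"user:1": {"name": "Ana"}}
cache = {}

def get_cache_aside(key):
    # 1. Try the cache first.
    if key in cache:
        return cache[key]
    # 2. On a miss, the application reads the DB and populates the cache itself.
    value = db[key]
    cache[key] = value
    return value
```

The second call for the same key is served entirely from the cache dict.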

12
Q

An application runs on multiple EC2 instances behind an ELB.

Where is the session data best written so that it can be served reliably across multiple requests?

A

Write data to Amazon ElastiCache

13
Q

A Developer has developed a web application and wants to deploy it quickly on a Tomcat server on AWS. The Developer wants to avoid having to manage the underlying infrastructure.
What is the easiest way to deploy the application, based on these requirements?

A

AWS Elastic Beanstalk

14
Q

To log in to an Amazon ECR registry

This command retrieves an authentication token using the GetAuthorizationToken API, and then it prints a docker login command with the authorization token and, if you specified a registry ID, the URI for an Amazon ECR registry. You can execute the printed command to authenticate to the registry with Docker. After you have authenticated to an Amazon ECR registry with this command, you can use the Docker CLI to push and pull images to and from that registry as long as your IAM principal has access to do so until the token expires. The authorization token is valid for 12 hours.

A

aws ecr get-login

15
Q

The upload of a 15 GB object to Amazon S3 fails. The error message reads: “Your proposed upload exceeds the maximum allowed object size.”
What technique will allow the Developer to upload this object?

A

The multipart upload API is designed to improve the upload experience for larger objects. You can upload an object in parts. These object parts can be uploaded independently, in any order, and in parallel. You can use a multipart upload for objects from 5 MB to 5 TB in size. For more information, see Uploading and copying objects using multipart upload.

16
Q

Where can PortMapping be defined when launching containers in Amazon ECS?

A

Task definition

17
Q

Typically, when you use the KCL, you should ensure that…

A

the number of instances does not exceed the number of shards (except for failure standby purposes). Each shard is processed by exactly one KCL worker and has exactly one corresponding record processor, so you never need multiple instances to process one shard. However, one worker can process any number of shards, so it’s fine if the number of shards exceeds the number of instances.
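The constraint can be sketched as a simple assignment: every shard maps to exactly one worker, while a worker may own several shards (a toy round-robin illustration, not KCL's actual lease algorithm):

```python
def assign_shards(shards, workers):
    # Round-robin: every shard gets exactly one worker;
    # a worker may process any number of shards.
    assignment = {w: [] for w in workers}
    for i, shard in enumerate(shards):
        assignment[workers[i % len(workers)]].append(shard)
    return assignment
```

With 3 shards and 2 workers, one worker ends up with two shards; with more workers than shards, the extras sit idle (standby).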

18
Q

KCL, Kinesis, consumers

To scale up processing in your application, you should test a combination of these approaches:

A
  • Increasing the instance size (because all record processors run in parallel within a process)
  • Increasing the number of instances up to the maximum number of open shards (because shards can be processed independently)
  • Increasing the number of shards (which increases the level of parallelism)
19
Q

What does an Amazon SQS delay queue accomplish?

A

Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. For information about configuring delay queues using the console see Configuring queue parameters (console).
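The behaviour can be modelled with a toy in-memory queue (an illustration only, using an injected clock instead of real time):

```python
class DelayQueue:
    """Toy model of an SQS delay queue with an injected clock."""

    def __init__(self, delay_seconds):
        self.delay = delay_seconds   # SQS default is 0s; maximum is 900s (15 min)
        self.messages = []           # list of (visible_at, body)

    def send(self, body, now):
        # The message stays invisible until the delay period elapses.
        self.messages.append((now + self.delay, body))

    def receive(self, now):
        # Only messages whose delay has elapsed are returned.
        return [body for visible_at, body in self.messages if visible_at <= now]
```

A message sent at t=0 on a 60-second delay queue is invisible at t=30 and visible from t=60 onward.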

20
Q

A Developer is writing a serverless application that requires that an AWS Lambda function be invoked every 10 minutes.
What is an automated and serverless way to trigger the function?

A

Create an Amazon CloudWatch Events rule that triggers on a regular schedule to invoke the Lambda function.

21
Q

What is a deployment package?

A

Your AWS Lambda function’s code consists of scripts or compiled programs and their dependencies. You use a deployment package to deploy your function code to Lambda. Lambda supports two types of deployment packages: container images and .zip files.

22
Q

Lambda supports two types of deployment packages:

A

container images

.zip files

23
Q

In CloudFormation you can create Lambda functions, and you can set the code in three ways

A
  • Code inline (Python and JS)
  • .zip archive (in S3: S3Bucket, S3Key, S3ObjectVersion)
  • Container image (in ECR)
24
Q

What can you do if you want to run the X-Ray daemon on Amazon ECS?

A

In Amazon ECS, create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster. Ensure that port mappings and network settings are correct and that IAM task roles are defined.

Extra: On ECS you don’t have direct control of the underlying EC2 hosts, so you can’t simply install the daemon on them.

25
Q

In CloudFormation if you want to set values based on a region, you can use the key…

A

The optional Mappings section matches a key to a corresponding set of named values. For example, if you want to set values based on a region, you can create a mapping that uses the region name as a key and contains the values you want to specify for each specific region. You use the Fn::FindInMap intrinsic function to retrieve values in a map.

The following example shows a Mappings section with a map RegionMap, which contains three keys that map to name-value pairs containing single string values. The keys are region names. Each name-value pair is the AMI ID for the HVM64 AMI in the region represented by the key.

"Mappings" : {
  "RegionMap" : {
    "us-east-1" : { "HVM64" : "ami-aaa" },
    "us-west-1" : { "HVM64" : "ami-bbb" },
    "eu-west-1" : { "HVM64" : "ami-ccc" }
  }
}
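The map is then read with the Fn::FindInMap intrinsic function, for example to pick the AMI for the stack's own region:

```json
{ "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "HVM64" ] }
```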

26
Q

To manage large Amazon Simple Queue Service (Amazon SQS) messages, you must use two services:

A

Amazon Simple Storage Service (Amazon S3) and the Amazon SQS Extended Client Library for Java. This is especially useful for storing and consuming messages up to 2 GB. Unless your application requires repeatedly creating queues and leaving them inactive or storing large amounts of data in your queues, consider using Amazon S3 for storing your data.

27
Q

The Developer makes changes to the code and uploads a new .zip file to Amazon S3. However, Lambda executes the earlier code.

This is not a versioning issue; fix it via the CLI…

A

aws lambda update-function-code \
    --function-name my-function \
    --zip-file fileb://my-function.zip

28
Q

How do I give internet access to a Lambda function that’s connected to an Amazon VPC?

A

If you’re using an existing Amazon VPC, start from Create your VPC components to create a public subnet with a NAT gateway and one or more private subnets. If your existing VPC already has a public subnet with a NAT gateway and one or more private subnets, skip ahead to Create a Lambda execution role for your VPC.

29
Q

A Developer is creating an AWS Lambda function to process a stream of data from an Amazon Kinesis Data Stream. When the Lambda function parses the data and encounters a missing field, it exits the function with an error. The function is generating duplicate records from the Kinesis stream. When the Developer looks at the stream output without the Lambda function, there are no duplicate records.
What is the reason for the duplicates?

A

The Lambda function did not handle the error, and the Lambda service attempted to reprocess the data.

30
Q

Read or write operations on my Amazon DynamoDB table are being throttled. Why is this happening, and how can I fix it?

One strategy for distributing loads more evenly across a partition key space is to add a random number to the end of the partition key values. Then you randomize the writes across the larger space.

A
  • Distribute read and write operations as evenly as possible across your table. One way to better distribute writes across a partition key space in Amazon DynamoDB is to expand the space. You can do this in several ways: add a random number to the partition key values to distribute the items among partitions, or use a number that is calculated based on something that you’re querying on.
    • Sharding using random suffixes
    • Sharding using calculated suffixes
  • Implement a caching solution
  • Implement error retries and exponential backoff
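Sharding with random suffixes can be sketched like this (the shard count and key format are illustrative):

```python
import random

SHARD_COUNT = 10  # illustrative; pick a count that matches your write volume

def sharded_partition_key(base_key):
    # Spread writes for a hot partition key across SHARD_COUNT values.
    return f"{base_key}.{random.randrange(SHARD_COUNT)}"

def all_shard_keys(base_key):
    # Reads must query every suffix and merge the results.
    return [f"{base_key}.{n}" for n in range(SHARD_COUNT)]
```

The trade-off: writes spread evenly across partitions, but reading all items for the base key now takes SHARD_COUNT queries.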
31
Q

What do you call a “container for CloudWatch metrics”?

A

“A namespace”

In Amazon CloudWatch you have some concepts:

  • Namespaces: Containers for CloudWatch metrics
  • Metrics: It represents a time-ordered set of data points that are published to CloudWatch
  • Dimensions: It is a name/value pair that is part of the identity of a metric. You can assign up to 10 dimensions to a metric.
  • Statistics: They are metric data aggregations over specified periods of time
  • Percentiles: The relative standing of a value in a dataset
  • Alarms: Automatically initiate actions on your behalf (An alarm watches a single metric over a specified time period)
32
Q

A Developer wants to make the log data of an application running on an EC2 instance available to systems administrators.
Which of the following enables monitoring of this metric in Amazon CloudWatch?

A

Install the Amazon CloudWatch Logs agent on the EC2 instance that the application is running on.

The unified CloudWatch agent enables you to do the following: Collect internal system-level metrics from Amazon EC2 instances across operating systems. The metrics can include in-guest metrics, in addition to the metrics for EC2 instances

33
Q

You can configure API Gateway to perform basic validation of an API request before proceeding with the integration request.

A

When the validation fails, API Gateway immediately fails the request, returns a 400 error response to the caller, and publishes the validation results in CloudWatch Logs. This reduces unnecessary calls to the backend. More importantly, it lets you focus on the validation efforts specific to your application.

34
Q

Where should the company move session data to MOST effectively reduce downtime and make users’ session data more fault tolerant?

A

An Amazon ElastiCache for Redis cluster

35
Q

What file format do you have to use for configuration files in .ebextensions?

A

.config

36
Q

A Developer wants to debug an application by searching and filtering log data.

The Developer creates a new metric filter to count exceptions in the application logs. However, no results are returned from the logs.

What is the reason that no filtered results are being returned?

A

Filters do not retroactively filter data. Filters only publish the metric data points for events that happen after the filter was created. Filtered results return the first 50 lines, which will not be displayed if the timestamp on the filtered results is earlier than the metric creation time.

37
Q

Publishing Custom Metrics

You can publish your own metrics to CloudWatch using the AWS CLI or an API. You can view statistical graphs of your published metrics with the AWS Management Console

A

High-Resolution Metrics

  • Standard resolution, with data having a one-minute granularity
  • High resolution, with data at a granularity of one second
38
Q

Which option would enable DynamoDB table updates to trigger the Lambda function?

A

An event source mapping is an AWS Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don’t invoke Lambda functions directly. Lambda provides event source mappings for the following services.

39
Q

By default, the DynamoDB write operations (PutItem, UpdateItem, DeleteItem) are unconditional: Each operation overwrites an existing item that has the specified primary key.

Which DynamoDB write option should be selected to prevent this overwriting when two people write data to the same element?

A

DynamoDB optionally supports conditional writes for these operations. A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error. Conditional writes are helpful in many situations. For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key. Or you could prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value.

Example: read the value 10, then update only if the value is still 10.
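A toy compare-and-set in plain Python mirrors the idea of a ConditionExpression (the table dict is a stand-in for DynamoDB; all names are illustrative):

```python
class ConditionalWriteError(Exception):
    pass

table = {"item-1": {"value": 10}}

def conditional_update(key, expected, new_value):
    # Succeeds only if the stored value still equals what we read earlier;
    # otherwise the write is rejected, like a failed ConditionExpression.
    if table[key]["value"] != expected:
        raise ConditionalWriteError("condition not met")
    table[key]["value"] = new_value
```

If two writers both read 10, only the first update succeeds; the second sees 11 and is rejected instead of silently overwriting.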

40
Q

Can you change the ELB type in an Elastic Beanstalk environment after it has been created?

A

No, you can’t.

By default, Elastic Beanstalk creates an Application Load Balancer for your environment when you enable load balancing with the Elastic Beanstalk console or the EB CLI. It configures the load balancer to listen for HTTP traffic on port 80 and forward this traffic to instances on the same port. You can choose the type of load balancer that your environment uses only during environment creation. Later, you can change settings to manage the behavior of your running environment’s load balancer, but you can’t change its type.

41
Q

A Developer must encrypt a 100-GB object using AWS KMS.

What is the BEST approach?

A

Make a GenerateDataKey API call, which returns a plaintext data key and an encrypted copy of that data key. Use the plaintext key to encrypt the data.

You need the appropriate permissions; the large encrypted file can then be uploaded with the CLI, e.g. aws s3 cp.

42
Q

How can you migrate a Git repository to AWS CodeCommit?

A

A set of Git credentials generated from IAM

43
Q

A Developer is writing a REST service that will add items to a shopping list. The service is built on Amazon API Gateway with AWS Lambda integrations. The shopping list items are sent as query string parameters in the method request.
How should the Developer convert the query string parameters to arguments for the Lambda function?

A

Change the integration type to a Lambda proxy integration (in API Gateway), so the query string parameters are passed to the function in the event object.

44
Q

ReceiveMessageWaitTimeSeconds

How can Company B reduce the number of empty responses?

A

The length of time, in seconds, for which a ReceiveMessage action waits for a message to arrive. Valid values: an integer from 0 to 20 (seconds). Default: 0. Setting this attribute above 0 enables long polling, which reduces the number of empty responses.

45
Q

DynamoDB uses a pessimistic locking model: FALSE.

DynamoDB uses optimistic concurrency control: TRUE.

A

Optimistic locking is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in Amazon DynamoDB. If you use this strategy, your database writes are protected from being overwritten by the writes of others, and vice versa.

46
Q

CreatePlatformEndpoint

A

Creates an endpoint for a device and mobile app on one of the supported push notification services, such as GCM (Firebase Cloud Messaging) and APNS. CreatePlatformEndpoint requires the PlatformApplicationArn that is returned from CreatePlatformApplication. You can use the returned EndpointArn to send a message to a mobile app or by the Subscribe action for subscription to a topic. The CreatePlatformEndpoint action is idempotent, so if the requester already owns an endpoint with the same device token and attributes, that endpoint’s ARN is returned without creating a new endpoint. For more information, see Using Amazon SNS Mobile Push Notifications.

47
Q

You are writing to a DynamoDB table and receive the following exception: “ProvisionedThroughputExceededException”

A

You’re exceeding your capacity on a particular Hash Key

The partition key of an item is also known as its hash attribute. The term hash attribute derives from the use of an internal hash function in DynamoDB that evenly distributes data items across partitions, based on their partition key values.

48
Q

What AWS products and features can be deployed by Elastic Beanstalk? (Choose three.)

    A. Auto scaling groups
    B. Route 53 hosted zones
    C. Elastic Load Balancers
    D. RDS Instances
    E. Elastic IP addresses
    F. SQS Queues
A

A. Auto scaling groups
C. Elastic Load Balancers
D. RDS Instances

49
Q

What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

A. Virtual Private Cloud requires EBS backed instances
B. Amazon EBS-backed instances can be stopped and restarted
C. Auto scaling requires using Amazon EBS-backed instances.
D. Instance-store backed instances can be stopped and restarted
A

B. Amazon EBS-backed instances can be stopped and restarted

50
Q

Request parameters of the SNS Publish action

A
Message
MessageAttributes
MessageDeduplicationId (FIFO topics)
MessageGroupId (FIFO topics)
MessageStructure
PhoneNumber
Subject
TargetArn
TopicArn
51
Q

EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI can:

A. be used to launch EC2 Instances in any AWS region.
B. only be used to launch EC2 instances in the same country as the AMI is stored.
C. only be used to launch EC2 instances in the same AWS region as the AMI is stored.
D. only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
A

C. only be used to launch EC2 instances in the same AWS region as the AMI is stored.

52
Q
How do you get the names of AMIs?
A. DescribeInstances
B. DescribeAMIs
C. DescribeImages
D. GetAMIs
E. You cannot retrieve a list of AMIs as there are over 10,000 AMIs
A

DescribeImages

53
Q

2 Best Practices for Querying and Scanning Data

A
  • Reduce page size
  • Isolate scan operations

54
Q

This example snippet returns a string containing the DNS name of the load balancer with the logical name myELB.

A

"Fn::GetAtt" : [ "myELB", "DNSName" ]

55
Q

After launching an instance that you intend to serve as a NAT (Network Address Translation) instance in a public subnet, you modify your route tables to have the NAT instance be the target of internet-bound traffic from your private subnet. When you try to make an outbound connection to the internet from an instance in the private subnet, you are not successful.
Which of the following steps could resolve the issue?

A. Attaching a second Elastic Network interface (ENI) to the NAT instance, and placing it in the private subnet
B. Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet
C. Disabling the Source/Destination Check attribute on the NAT instance
D. Attaching an Elastic IP address to the instance in the private subnet
A

C. Disabling the Source/Destination Check attribute on the NAT instance

Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, you must disable source/destination checks on the NAT instance.

56
Q

What is the maximum number of S3 Buckets available per AWS account?

    A. 100 per region
    B. there is no limit
    C. 100 per account
    D. 500 per account
    E. 100 per IAM user
A

C. 100 per account

57
Q

A Developer is trying to make API calls using an SDK. The IAM user credentials used by the application require multi-factor authentication for all API calls.
Which method should the Developer use to access the multi-factor authentication protected API?

A. GetFederationToken
B. GetCallerIdentity
C. GetSessionToken
D. DecodeAuthorizationMessage
A

C. GetSessionToken

The primary occasion for calling the GetSessionToken API operation or the get-session-token CLI command is when a user must be authenticated with multi-factor authentication (MFA).

Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token. Typically, you use GetSessionToken if you want to use MFA to protect programmatic calls to specific AWS API operations like Amazon EC2 StopInstances.

58
Q

A Developer has an e-commerce API hosted on Amazon ECS. Variable and spiking demand on the application is causing order processing to take too long. The application processes Amazon SQS queues. The ApproximateNumberOfMessagesVisible metric spikes at very high values throughout the day, which causes Amazon CloudWatch alarm breaches. Other ECS metrics for the API containers are well within limits.
What can the Developer implement to improve performance while keeping costs low?

A. Target tracking scaling policy
B. Docker Swarm
C. Service scheduler
D. Step scaling policy
A

A. Target tracking scaling policy

Scaling can be driven by the CloudWatch Amazon SQS queue metric ApproximateNumberOfMessagesVisible. The number of messages in a queue might not change proportionally to the size of the Auto Scaling group that processes messages from the queue. However, a customized metric that measures the number of messages in the queue per EC2 instance in the Auto Scaling group can work with a target tracking scaling policy.

59
Q

Which logs can the Developer use to verify whether the traffic is reaching subnet B?

A

C. VPC Flow Logs

60
Q

A Developer has built an application running on AWS Lambda using AWS Serverless Application Model (AWS SAM).
What is the correct order of execution to successfully deploy the application?

A
  1. Build the SAM template locally.
  2. Package the SAM template onto Amazon S3.
  3. Deploy the SAM template from Amazon S3.
61
Q

Start a pipeline execution in CodePipeline when changes are detected.

A

Source actions and change detection methods

Amazon S3
Bitbucket
AWS CodeCommit
Amazon ECR
GitHub
62
Q

Redeploy and roll back a deployment with CodeDeploy

What occurs if the deployment of the new version fails due to code regression?

A

CodeDeploy rolls back deployments by redeploying a previously deployed revision of an application as a new deployment. These rolled-back deployments are technically new deployments, with new deployment IDs, rather than restored versions of a previous deployment.

A new deployment of the last known version of the application is deployed with a new deployment ID.

63
Q

A Developer is publishing critical log data to a log group in Amazon CloudWatch Logs, which was created 2 months ago. The Developer must encrypt the log data using an AWS KMS customer master key (CMK) so future data can be encrypted to comply with the company’s security policy.
How can the Developer meet this requirement?

A. Use the CloudWatch Logs console and enable the encrypt feature on the log group
B. Use the AWS CLI create-log-group command and specify the key Amazon Resource Name (ARN)
C. Use the KMS console and associate the CMK with the log group
D. Use the AWS CLI associate-kms-key command and specify the key Amazon Resource Name (ARN)
A

D. Use the AWS CLI associate-kms-key command and specify the key Amazon Resource Name (ARN)

Associates the specified AWS Key Management Service (AWS KMS) customer master key (CMK) with the specified log group.

Associating an AWS KMS CMK with a log group overrides any existing associations between the log group and a CMK. After a CMK is associated with a log group, all newly ingested data for the log group is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.

64
Q

Lambda functions as targets

A

You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.

Limits

  • It must be in the same account and in the same Region.
  • The maximum size of the request body that you can send to a Lambda function is 1 MB
  • The maximum size of the response JSON that the Lambda function can send is 1 MB.
  • WebSockets are not supported. Upgrade requests are rejected with an HTTP 400 code.
  • Local Zones are not supported.
65
Q

Creating and using usage plans with API keys

A

After you create, test, and deploy your APIs, you can use API Gateway usage plans to make them available as product offerings for your customers. You can configure usage plans and API keys to allow customers to access selected APIs at agreed-upon request rates and quotas that meet their business requirements and budget constraints. If desired, you can set default method-level throttling limits for an API or set throttling limits for individual API methods.

66
Q

How can I reference a resource in another stack from an AWS CloudFormation template?

A

Note: To reference a resource in another AWS CloudFormation stack, you must create cross-stack references. To create a cross-stack reference, use the export field to flag the value of a resource output for export. Then, use the Fn::ImportValue intrinsic function to import the value in any stack within the same AWS Region and account. AWS CloudFormation identifies exported values by the names specified in the template. These names must be unique to your AWS Region and account.

67
Q

A Developer must allow guest users without logins to access an Amazon Cognito-enabled site to view files stored within an Amazon S3 bucket.

A

Create a new identity pool, enable access to unauthenticated identities, and grant access to AWS resources.

Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account.

68
Q

A Developer has discovered that an application responsible for processing messages in an Amazon SQS queue is routinely falling behind. The application is capable of processing multiple messages in one execution, but is only receiving one message at a time.

What should the Developer do to increase the number of messages the application receives?

A

Call the ReceiveMessage API to set MaxNumberOfMessages to a value greater than the default of 1.

69
Q

Controlling access to an API with API Gateway resource policies

A

Amazon API Gateway resource policies are JSON policy documents that you attach to an API to control whether a specified principal (typically an IAM user or role) can invoke the API. You can use API Gateway resource policies to allow your API to be securely invoked by:

Users from a specified AWS account.

Specified source IP address ranges or CIDR blocks.

Specified virtual private clouds (VPCs) or VPC endpoints (in any account).
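
For example, a resource policy restricting invocation to a source IP range might look like this (the account ID, API ID, and CIDR block are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:api-id/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
      }
    }
  ]
}
```
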
70
Q

Amazon DocumentDB

A

Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data.

71
Q

What should a developer do in the Elastic Beanstalk application version lifecycle settings to retain the source code in the S3 bucket?

A

Set Retention to Retain source bundle in S3.

Each time you upload a new version of your application with the Elastic Beanstalk console or the EB CLI, Elastic Beanstalk creates an application version. If you don’t delete versions that you no longer use, you will eventually reach the application version quota and be unable to create new versions of that application.

72
Q

Deployment Lifecycle Event

A

ApplicationStop
This is the first deployment lifecycle event that occurs even before the revision gets downloaded. The AppSpec file and scripts used for this deployment lifecycle event are from the last successfully deployed revision.
You can use the ApplicationStop deployment lifecycle event if you want to gracefully stop the application or remove currently installed packages in preparation of a deployment.

DownloadBundle
During this deployment lifecycle event, the agent copies the revision files to a temporary location on the instance. This deployment lifecycle event is reserved for the agent and cannot be used to run user scripts.

BeforeInstall
You can use the BeforeInstall deployment lifecycle event for preinstall tasks such as decrypting files and creating a backup of the current version.
Install
During this deployment lifecycle event, the agent copies the revision files from the temporary location to the final destination folder. This deployment lifecycle event is reserved for the agent and cannot be used to run user scripts.

AfterInstall
You can use the AfterInstall deployment lifecycle event for tasks such as configuring your application or changing file permissions.

ApplicationStart
You typically use the ApplicationStart deployment lifecycle event to restart services that were stopped during ApplicationStop.

ValidateService
ValidateService is the last deployment lifecycle event and is an opportunity to verify that the deployment completed successfully.
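
The user-runnable events above map to the hooks section of an EC2/on-premises appspec.yml; the script paths here are illustrative:

```yaml
version: 0.0
os: linux
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
  BeforeInstall:
    - location: scripts/backup_current.sh
  AfterInstall:
    - location: scripts/configure_app.sh
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/health_check.sh
      timeout: 300
```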

73
Q

ResultPath

A

AWS Step Functions:
Input and Output Processing:

  • Paths
  • InputPath, Parameters and ResultSelector
  • ItemsPath
  • ResultPath: The output of a state can be a copy of its input, the result it produces (for example, output from a Task state’s Lambda function), or a combination of its input and result. Use ResultPath to control which combination of these is passed to the state output.
  • OutputPath
  • InputPath, ResultPath and OutputPath Example
  • Context Object
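
A sketch of a Task state using ResultPath — the Lambda result lands at $.taskresult, so the state's output keeps the original input alongside the result (state and function names are illustrative):

```json
{
  "CheckInventory": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:check-inventory",
    "ResultPath": "$.taskresult",
    "Next": "NotifyUser"
  }
}
```
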
74
Q

Lambda functions as targets

Advanced settings: Multi-Value Headers and Failover Configurations

A

Lambda functions as targets

To enable multi-value headers, under Attributes, choose Edit attributes. When you enable multi-value headers, HTTP headers and query string parameters that are sent with multiple values are shown as arrays within the AWS Lambda event and response objects.

For example, suppose the client supplies a query string like "?name=foo&name=bar". If you've enabled multi-value headers, the ALB supplies these duplicate parameters as a 'name': ['foo', 'bar'] entry in the event object. The ALB applies the same processing to duplicate HTTP headers.
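
A local sketch of how the duplicate query parameters surface in the event object once multi-value mode is enabled (the event is abridged; field names follow the ALB/Lambda integration):

```python
# With multi-value headers enabled, ?name=foo&name=bar arrives as arrays
# in the Lambda event rather than a single collapsed value.
event = {
    "multiValueQueryStringParameters": {"name": ["foo", "bar"]},
    "multiValueHeaders": {"accept": ["text/html", "application/json"]},
}

names = event["multiValueQueryStringParameters"]["name"]
print(names)  # both values are preserved, in order
```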

75
Q

A developer is creating a role to access Amazon S3 buckets. To create the role, the developer uses the AWS CLI create-role command.
Which policy should be added to allow the Amazon EC2 service to assume the role?

A. Managed policy
B. Trust policy
C. Inline policy
D. Service control policy (SCP)
A

B. Trust policy

An IAM role has a trust policy that defines which conditions must be met to allow other principals to assume it. This trust policy reduces the risks associated with privilege escalation.
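
The trust policy that lets the EC2 service assume the role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
```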

76
Q

A developer is building a WebSocket API using Amazon API Gateway. The payload sent to this API is JSON that includes an action key. This key can have three different values: create, update, and remove. The developer must integrate with different routes based on the value of the action key of the incoming JSON payload.
How can the developer accomplish this task with the LEAST amount of configuration?

A. Deploy the WebSocket API to three stages for the respective routes: create, update, and remove
B. Create a new route key and set the name as action
C. Set the value of the route selection expression to action
D. Set the value of the route selection expression to $request.body.action
A

D. Set the value of the route selection expression to $request.body.action

Using routes to process messages

In API Gateway WebSocket APIs, messages can be sent from the client to your backend service and vice versa. Unlike HTTP’s request/response model, in WebSocket the backend can send messages to the client without the client taking any action.

Messages can be JSON or non-JSON. However, only JSON messages can be routed to specific integrations based on message content. Non-JSON messages are passed through to the backend by the $default route.
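
A local sketch of what the route selection expression $request.body.action does — API Gateway evaluates it against the incoming JSON payload and picks the matching route (the route-to-handler mapping here is illustrative):

```python
import json

ROUTES = {"create": "CreateHandler", "update": "UpdateHandler", "remove": "RemoveHandler"}

def select_route(message: str) -> str:
    """Mimic $request.body.action: route JSON messages by their action key,
    falling back to $default for non-JSON or unknown actions."""
    try:
        action = json.loads(message).get("action")
    except (ValueError, AttributeError):
        return "$default"
    return ROUTES.get(action, "$default")

print(select_route('{"action": "update", "id": 7}'))  # → UpdateHandler
```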

77
Q

Which of the following is in the correct execution order of lifecycle hooks in AWS CodeDeploy?

A
BeforeInstall
Install
AfterInstall
AllowTestTraffic
AfterAllowTestTraffic
BeforeAllowTraffic
AllowTraffic
AfterAllowTraffic
78
Q

A developer is managing their application deployments using AWS Elastic Beanstalk. When deploying new application changes they receive an error advising that they have reached the version limit, and they are forced to manually delete some older versions of their application before deploying again.

What is the most effective way that the developer can help to avoid the issue of maxing out their total number of versions again in the future?

  • Deploy across multiple worker environments
  • This issue cannot be avoided
  • Implement an application lifecycle policy
  • Regularly logon and delete older versions
A

Implement an application lifecycle policy

Each time you upload a new version of your application with the Elastic Beanstalk console or the EB CLI, Elastic Beanstalk creates an application version. If you don’t delete versions that you no longer use, you will eventually reach the application version quota and be unable to create new versions of that application.

You can avoid hitting the quota by applying an application version lifecycle policy to your applications. A lifecycle policy tells Elastic Beanstalk to delete application versions that are old, or to delete application versions when the total number of versions for an application exceeds a specified number.

Elastic Beanstalk applies an application’s lifecycle policy each time you create a new application version, and deletes up to 100 versions each time the lifecycle policy is applied. Elastic Beanstalk deletes old versions after creating the new version, and does not count the new version towards the maximum number of versions defined in the policy.

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-lifecycle.html
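
A sketch of a version lifecycle policy as a resource lifecycle config, applied with aws elasticbeanstalk update-application-resource-lifecycle (the role ARN and counts are placeholders):

```json
{
  "ServiceRole": "arn:aws:iam::123456789012:role/aws-elasticbeanstalk-service-role",
  "VersionLifecycleConfig": {
    "MaxCountRule": {
      "Enabled": true,
      "MaxCount": 200,
      "DeleteSourceFromS3": false
    },
    "MaxAgeRule": {
      "Enabled": false,
      "MaxAgeInDays": 180,
      "DeleteSourceFromS3": false
    }
  }
}
```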

79
Q

A developer has created a serverless application which they are now in the process of deploying on AWS. The application artifacts have already been uploaded to S3, and a SAM template file has already been created.

Which CLI command would be used to perform the next step in the automated deployment?

sam deploy
sam package
sam init
sam build

A

sam deploy

Deploys an AWS SAM application.

This command now comes with a guided interactive mode, which you can enable by specifying the --guided parameter. The interactive mode walks you through the parameters required for deployment, provides default options, and saves these options in a configuration file in your project folder. You can execute subsequent deployments of your application by simply executing sam deploy, and the needed parameters will be retrieved from the AWS SAM CLI configuration file.

Deploying Lambda functions through AWS CloudFormation requires an Amazon S3 bucket for the Lambda deployment package. AWS SAM CLI now creates and manages this Amazon S3 bucket for you.

80
Q

A company is planning their application deployment on Elastic Beanstalk. There is a requirement for their application environment to have access to multiple custom software libraries provided by a 3rd-party vendor.

What is the recommended way they can include these additional dependencies as part of their environment?

Create multiple worker environments for dependencies

Elastic Beanstalk is limited to AWS-provided software

Launch the environment using a custom AMI

SSH into the environment after launch and install

A

Launch the environment using a custom AMI

When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to use instead of the standard Elastic Beanstalk AMI included in your platform version. A custom AMI can improve provisioning times when instances are launched in your environment if you need to install a lot of software that isn’t included in the standard AMIs.

Using configuration files is great for configuring and customizing your environment quickly and consistently. Applying configurations, however, can start to take a long time during environment creation and updates. If you do a lot of server configuration in configuration files, you can reduce this time by making a custom AMI that already has the software and configuration that you need.

A custom AMI also allows you to make changes to low-level components, such as the Linux kernel, that are difficult to implement or take a long time to apply in configuration files. To create a custom AMI, launch an Elastic Beanstalk platform AMI in Amazon EC2, customize the software and configuration to your needs, and then stop the instance and save an AMI from it.

81
Q

A developer needs to pull data from a popular social media website via a public-facing GraphQL API and store that data in their RDS MySQL database. The developer has created a Lambda function in the same VPC as the RDS database. The Lambda function is able to query the RDS database but is not able to query the internet-facing GraphQL API. What change would resolve the issue?

Add NAT Gateway and open the Lambda’s Security Group outbound ports.

Ensure RDS and Lambda are in the same Security Group

Use AWS AppSync to connect to the internet-facing GraphQL API

Replace infrastructure with AWS Amplify

Use Lambda Amazon RDS Proxy to establish GraphQL connection

Set RDS Public Accessibility to true

A

Add NAT Gateway and open the Lambda’s Security Group outbound ports.

A Lambda function connected to a VPC has no internet access by default; outbound calls to a public API must route through a NAT gateway in a public subnet, and the function’s security group must allow the outbound traffic.

82
Q

A developer has created a contact form hosted on S3 Static Website Hosting. When the contact form is submitted it sends the request to an API Gateway endpoint which in turn calls a Lambda function. API Gateway is reporting 502 Bad Gateway error. What could be the cause of this error?

The returned data from the Lambda is not in the expected format

The HTTP request is using the wrong method, e.g. it should have been POST but used GET.

Should be using HTTP API instead of REST API

CORS has not been enabled on the API Gateway endpoint

A

The returned data from the Lambda is not in the expected format

Generally, when you put Lambda functions behind API Gateway, you will turn on Lambda proxy integration because you need to return an HTTP response.

API Gateway expects the Lambda proxy response to be in a specific JSON format. If the format is not correct, API Gateway will return a 502 Bad Gateway error.

502 Bad Gateway error is an HTTP status code that means that one server on the internet received an invalid response from another server. So 502 is telling you that the returned format is incorrect.
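
A minimal sketch of a proxy-integration response — roughly statusCode, headers, body (a string), and optionally isBase64Encoded:

```python
import json

def handler(event, context):
    # body must be a string; returning a raw dict/object here is a classic
    # cause of the 502 "Bad Gateway" (malformed Lambda proxy response).
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "isBase64Encoded": False,
        "body": json.dumps({"ok": True}),
    }

resp = handler({}, None)
print(isinstance(resp["body"], str))  # True — a raw dict here would trigger a 502
```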

83
Q

A company has a web-application composed of multiple docker containers running on ECS. Customers report they are experiencing poor performance relating to slow page loads. A company’s developer has decided to instrument X-Ray to determine the cause for these performance issues. What steps must be configured to instrument X-Ray? (Choose 2)

  • Launch a container on ECS running the X-Ray Daemon
  • Set XRayEnabled to true in your Task Definition file.
  • Use portMappings to open TCP 2000 in the Task Definition file.
  • Launch an EC2 instance running the X-Ray Daemon
  • Use portMappings to open UDP 2000 in the Task Definition file.
A
  • Launch a container on ECS running the X-Ray Daemon
    An X-Ray Daemon is needed to report segment and subsegment data to X-Ray. For all compute services other than Lambda you have to launch the X-Ray Daemon yourself.
    You could launch the X-Ray Daemon on ECS, Fargate, EC2 or Elastic Beanstalk (EB). Since we’re already using ECS, this is the easiest way to get started for this architecture.
  • Use portMappings to open UDP 2000 in the Task Definition file.
    In order to communicate with the X-Ray Daemon, we need to open port 2000 on UDP.

In Amazon ECS, create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster. You can use port mappings and network mode settings in your task definition file to allow your application to communicate with the daemon container.
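
A sketch of the daemon container definition inside a task definition (CPU and memory values are illustrative; the image name is the public X-Ray daemon image):

```json
{
  "name": "xray-daemon",
  "image": "amazon/aws-xray-daemon",
  "cpu": 32,
  "memoryReservation": 256,
  "portMappings": [
    {
      "containerPort": 2000,
      "protocol": "udp"
    }
  ]
}
```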

84
Q

An application is publishing custom metrics to CloudWatch. A Developer wants to create a CloudWatch alarm that only triggers if it breaches two evaluation periods or more.
What should be done to meet these requirements?

Publish using statistics sets

Publish with single data points

Publish with value zero

Publish using High-Resolution Metrics

A

Publish using statistics sets

You can aggregate your data before you publish to CloudWatch. When you have multiple data points per minute, aggregating data minimizes the number of calls to put-metric-data.

What’s important to remember is that a statistic set holds information for more than one data point. Since we want to trigger an alarm over two evaluation periods, the statistic set gives us the aggregated information to do so.
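
A statistic set pre-aggregates raw samples into SampleCount, Sum, Minimum, and Maximum before publishing. A local sketch of the aggregation (the actual put-metric-data call is omitted):

```python
def to_statistic_set(samples):
    """Aggregate raw data points into the statistic-set shape that
    put-metric-data accepts via --statistic-values."""
    return {
        "SampleCount": len(samples),
        "Sum": sum(samples),
        "Minimum": min(samples),
        "Maximum": max(samples),
    }

print(to_statistic_set([120, 80, 250, 95]))
```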

85
Q

A company has a web-application running on multiple EC2 instances backed by an Application Load Balancer managed by an Auto Scaling Group. The company must create a base-line of IP traffic to detect anomalies of suspicious traffic. How can this IP traffic be monitored?

Create a CloudTrail Log

Create a DNS Log

Create a VPC Flow Log

Install CloudWatch Agent on EC2 instances

A

Create a VPC Flow Log

A VPC Flow Log can be created to capture information about all incoming and outgoing IP traffic for network interfaces in your VPC.

86
Q

A company is running a web application backed by an RDS Postgres database. They are experiencing read contention on their database after a period of rapid growth in new paid users. To reduce the number of reads and improve the performance of their web application they decided to use ElastiCache. They only need to cache frequently accessed pages and it doesn’t matter if the content being cached is stale. What ElastiCache caching strategy should they implement?

  • Write-Through
  • Lazy Loading
  • Redis
  • Memcached
A

Lazy Loading is a caching strategy that loads data into the cache only when necessary. Lazy Loading is great for when you don’t want to cache everything to keep the amount of cache data small. With Lazy Loading it is possible for data to be stale, so, if up-to-date data is important to you, you’ll want to use Write-Through instead.

There are:

  • Lazy Loading (loads data only when necessary)
  • Write-Through (whenever data is written to the db)
  • Adding TTL (number of seconds until the key expires)
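
A minimal sketch of lazy loading with a TTL, using a plain dict to stand in for the cache (no real ElastiCache calls; names are illustrative):

```python
import time

cache = {}  # key -> (value, expires_at); stands in for Redis/Memcached
TTL_SECONDS = 300

def slow_db_query(key):
    # placeholder for the expensive RDS read
    return f"page-content-for-{key}"

def get(key):
    """Lazy loading: only fetch from the database on a cache miss."""
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                       # cache hit (possibly stale data)
    value = slow_db_query(key)                # cache miss: load from the DB
    cache[key] = (value, time.time() + TTL_SECONDS)
    return value

print(get("/home"))   # miss -> loads from the DB, then populates the cache
print(get("/home"))   # hit  -> served from the cache
```
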
87
Q

A company with a legacy application running on-premises is looking to migrate it to AWS. The application uses antiquated technology and has little or no development support. They do not have the time or resources to invest in re-building or modifying the application source code.

Which type of AWS deployment would be best recommended for this use case?

EC2 Instances with Instance Store Volumes

AWS Lambda

EC2 Instances with EBS Backed Volumes

Elastic Beanstalk

A

EC2 Instances with EBS Backed Volumes

Data that is stored on an Amazon EBS volume will persist independently of the life of the instance. If you are using an Amazon EBS volume as a root partition, you will need to set the Delete On Terminate flag to “N” if you want your Amazon EBS volume to persist outside the life of the instance. For data requiring a higher level of durability, it is recommended to use Amazon EBS volumes or back up the data to Amazon S3.

88
Q

A developer has created a Lambda function that will be used to retrieve data from an existing EC2 instance. The existing instance the function needs to access is located in a private subnet within their VPC.

Which 2 steps does the developer need to take to be able to provide the Lambda access to the instance?
(Choose 2)

  • Specify the public IPv4 address in the Lambda VPC options
  • Specify the private subnet in the Lambda VPC options
  • Specify the relevant security group in the Lambda VPC options
  • Specify the relevant NACL in the Lambda VPC options
A
  • Specify the private subnet in the Lambda VPC options
    Connect your function to private subnets to access private resources. If your function needs internet access, use NAT. Connecting a function to a public subnet does not give it internet access or a public IP address.
  • Specify the relevant security group in the Lambda VPC options
    When you connect a function to a VPC, Lambda creates an elastic network interface for each combination of security group and subnet in your function’s VPC configuration. This process can take about a minute. During this time, you cannot perform additional operations that target the function, such as creating versions or updating the function’s code. For new functions, you can’t invoke the function until its state transitions from Pending to Active. For existing functions, you can still invoke the old version while the update is in progress.
89
Q

A company is required to retrieve a list of all objects inside an S3 bucket used to store production data. They create a shell script which uses the AWS CLI to perform the task. However, due to the extremely large number of objects in the S3 bucket, they are finding the script continually fails due to timing out.

What step can they take to help mitigate this issue?

  • Use the --page-size option with their CLI commands
  • Run the script with AWS Lambda
  • Use the --paginate option with their CLI commands
  • Enable Amazon S3 Transfer Acceleration
A

Use the --page-size option with their CLI commands

If you see issues when running list commands on a large number of resources, the default page size of 1000 might be too high. This can cause calls to AWS services to exceed the maximum allowed time and generate a “timed out” error. You can use the --page-size option to specify that the AWS CLI request a smaller number of items from each call to the AWS service. The CLI still retrieves the full list, but performs a larger number of service API calls in the background and retrieves a smaller number of items with each call. This gives the individual calls a better chance of succeeding without a timeout. Changing the page size doesn’t affect the output; it affects only the number of API calls that need to be made to generate the output.

Available options:

  • --no-paginate
  • --page-size
  • --max-items
  • --starting-token
90
Q

A company is using AWS ElastiCache to manage their Redis cluster which has multiple nodes. There is a requirement to define default eviction policies, and memory usage quotas for all nodes inside of the cluster.

Which ElastiCache feature should they use to define these values?

ElastiCache Endpoints

ElastiCache Parameter Groups

ElastiCache Security Groups

ElastiCache Subnet Groups

A

ElastiCache Parameter Groups

Parameter groups are an easy way to manage runtime settings for supported engine software. Parameters are used to control memory usage, eviction policies, item sizes, and more. An ElastiCache parameter group is a named collection of engine-specific parameters that you can apply to a cluster. By doing this, you make sure that all of the nodes in that cluster are configured in exactly the same way.

91
Q

A developer has been tasked with creating new development environments for the junior members of his team. They already use Chef Automate to enable their cross-team collaboration, and they want to use their existing Chef recipes for their new server configurations.

Which of the following AWS services would offer the best solution for their use case?

AWS Workspaces

AWS OpsWorks

AWS Cloud9

AWS CodeStar

A

AWS OpsWorks

AWS OpsWorks for Chef Automate is a fully managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment. OpsWorks also maintains your Chef server by automatically patching, updating, and backing up your server. OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure. OpsWorks gives you access to all of the Chef Automate features, such as configuration and compliance management, which you manage through the Chef console or command line tools like Knife. It also works seamlessly with your existing Chef cookbooks.

92
Q

Which CloudFormation property for Lambda would they use in the template - to provide the Node.js code inline within the template?

A

ZipFile

If you include your function source inline with this parameter, AWS CloudFormation places it in a file named index and zips it to create a deployment package.
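
A sketch of inline Node.js code via the ZipFile property (resource names and the role reference are illustrative; inline code is only suitable for small functions):

```yaml
Resources:
  InlineFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs18.x
      Handler: index.handler
      Role: !GetAtt FunctionRole.Arn
      Code:
        ZipFile: |
          exports.handler = async (event) => {
            return { statusCode: 200, body: "hello from inline code" };
          };
```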

93
Q

A developer has multiple services backed by a single RDS Postgres database. Read contention is occurring, causing some of the services to hang. How can the developer determine which service is causing the issue?

Use CloudWatch Logs
Use RDS Enhanced Monitoring
Use RDS Performance Insights
Switch to DynamoDB

A

Use RDS Enhanced Monitoring

Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. Under Enhanced Monitoring we can see the processes running, which can help us determine which service is causing the issue.

94
Q

A company performs real-time analytics using Kinesis Data Streams running two shards. A custom consumer is running on EC2 managed by an Auto Scaling Group. The consumer has been written to process and send the data to Elasticsearch. Whenever an auto scaling policy is triggered, duplicate records appear in Elasticsearch. How can the implementation be refactored to resolve the duplicate issue?

Add a shard to Kinesis Data Streams

Use a combination of versioning and unique IDs

Merge all Kinesis Data Streams into a single Shard

Remove the Auto Scaling Group

A

Use a combination of versioning and unique IDs

Duplicate data being sent to the final destination is inevitable since producers or consumers may attempt retries. You must design your code in such a way to handle duplicates. Versioning and unique keys will mitigate this problem.

One of the reasons duplicates occur is when another worker is added; this happens when the auto scaling group spins up another EC2 instance running another consumer to handle the increase in data.

Duplicate records occur because the producer or consumer retries processing data.

A producer retries when it does not receive an acknowledgement from Amazon Kinesis Data Streams, or when the acknowledgement takes too long to arrive.

A consumer retries because of record processor restarts. Restarts happen in cases such as:

  • A worker terminates unexpectedly
  • Worker instances are added or removed
  • Shards are merged or split
  • The application is deployed

The final destination needs to assume that duplicate records can occur and take this into account to avoid inserting or processing duplicate records.
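
A sketch of de-duplicating at the destination: each record carries a unique ID and a version, and the write is made idempotent by checking them first (the record shape and in-memory store are illustrative stand-ins for an Elasticsearch upsert):

```python
processed = {}  # unique_id -> highest version already written to the destination

def write_if_new(record):
    """Idempotent write: skip records whose unique ID/version was already
    handled, so producer or consumer retries don't create duplicates."""
    uid, version = record["id"], record["version"]
    if processed.get(uid, -1) >= version:
        return False                 # duplicate delivery: drop it
    processed[uid] = version         # stand-in for an Elasticsearch upsert
    return True

records = [
    {"id": "evt-1", "version": 1, "payload": "a"},
    {"id": "evt-1", "version": 1, "payload": "a"},  # retry duplicate
    {"id": "evt-2", "version": 1, "payload": "b"},
]
print([write_if_new(r) for r in records])  # [True, False, True]
```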

95
Q

What is Amazon EC2 instance store?

Instance type -> Instance store volumes

c1.medium - 1 x 350 GB†
c1.xlarge - 4 x 420 GB (1.6 TB)

A

The data stored on a local instance store will persist only as long as that instance is alive. Therefore, it’s recommended that you use the local instance store only for temporary data.

If you launch an “instance store” instance, be prepared to leave it running until you’re completely done with it. Note that you will be charged from the moment the instance is started, until the time it is terminated.

96
Q

What are EC2 Instances with EBS Backed Volumes?

A

Data that is stored on an Amazon EBS volume will persist independently of the life of the instance. If you are using an Amazon EBS volume as a root partition, you will need to set the Delete On Terminate flag to “N” if you want your Amazon EBS volume to persist outside the life of the instance. For data requiring a higher level of durability, it is recommended to use Amazon EBS volumes or back up the data to Amazon S3.

An “EBS-backed” instance is an EC2 instance which uses an EBS volume as its root device. EBS volumes are redundant, “virtual” drives which are not tied to any particular hardware; however, they are restricted to a particular EC2 Availability Zone. This means that an EBS volume can move from one piece of hardware to another within the same Availability Zone. You can think of EBS volumes as a kind of network-attached storage.

97
Q

A developer is working with the AWS SDK inside their website, currently hosted via an S3 bucket enabled for static website hosting. There is a requirement for the website to call different Lambda functions in the account when users take specific actions on the website.

Which is the most secure way to grant users access to these specific functions without exposing sensitive access credentials?

Create a hidden file in the root of the website containing IAM access keys with only Lambda execution access

Create a service role for S3. Grant access to execute the specific functions and assign it to the bucket hosting your website.

Create a web identity federation role. Grant access to execute the specific functions and retrieve temporary credentials.

Create a service role for Lambda. Grant access to execute the specific functions and assign it to the bucket hosting your website.

A

Create a web identity federation role. Grant access to execute the specific functions and retrieve temporary credentials.

If you don’t use Amazon Cognito, the correct approach is to create a web identity federation role, grant it access to execute the specific functions, and retrieve temporary credentials.

With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known external identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP. They can receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application.

98
Q

A company with multiple teams working on projects is planning to move to AWS. Due to some recent budget over-expenditures there is a requirement that all billing and usage for AWS resources must be tracked at the individual project/team level.

What is the recommended way that the accounts/projects should be created on AWS to meet the requirements?

Create separate AWS accounts for each department and use the cost explorer service

Have users tag each created resource with a unique identifier for their team.

Create separate AWS accounts for each department and use consolidated billing.

Have users tag each created resource with their name and department name

A

Create separate AWS accounts for each department and use consolidated billing.

You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. Every organization in AWS Organizations has a master (payer) account that pays the charges of all the member (linked) accounts. For more information about organizations, see the AWS Organizations User Guide.

Consolidated billing has the following benefits:

One bill – You get one bill for multiple accounts.

Easy tracking – You can track the charges across multiple accounts and download the combined cost and usage data.

Combined usage – You can combine the usage across all accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts. For more information, see Volume Discounts.

No extra fee – Consolidated billing is offered at no additional cost.

99
Q

A company with a highly secured web application dealing with personal data is performing an audit of their system to try and further improve security. The auditors are looking at the encryption systems in place throughout the application and have decided it would be best to implement an envelope encryption strategy to help protect the keys being used to decrypt data.

Which of the following would be a correct implementation of an envelope encryption strategy?

  • Encrypt the application data using a master key, then encrypt the master key itself with another encrypted data key.
  • Encrypt the application data using a master key, then encrypt the master key itself with another plaintext data key.
  • Encrypt the application data using a data key, then encrypt the data key itself with another encrypted master key.
  • Encrypt the application data using a data key, then encrypt the data key itself with another plaintext master key.
A

Encrypt the application data using a data key, then encrypt the data key itself with another plaintext master key.

When you encrypt your data, your data is protected, but you have to protect your encryption key. One strategy is to encrypt it. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

You can even encrypt the data encryption key under another encryption key, and encrypt that encryption key under another encryption key. But, eventually, one key must remain in plaintext so you can decrypt the keys and your data. This top-level plaintext key encryption key is known as the master key.
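The envelope pattern above can be sketched in a few lines of Python. This is an illustration only: the XOR "cipher" below is a stand-in for a real cipher such as AES-GCM, and in AWS the master key would live inside KMS and never leave it.

```python
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Illustration only: XOR is NOT a real cipher. In practice, KMS
    # hands you a data key and you encrypt with AES-GCM.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# 1. A master key (in AWS this lives inside KMS and never leaves it).
master_key = os.urandom(32)

# 2. Generate a data key and encrypt the application data with it.
data_key = os.urandom(32)
ciphertext = toy_encrypt(data_key, b"personal data")

# 3. The envelope step: encrypt the data key under the master key and
#    store the encrypted data key alongside the ciphertext.
encrypted_data_key = toy_encrypt(master_key, data_key)
del data_key  # the plaintext data key is discarded

# Decryption: recover the data key with the master key, then the data.
recovered_key = toy_decrypt(master_key, encrypted_data_key)
plaintext = toy_decrypt(recovered_key, ciphertext)
print(plaintext)  # b'personal data'
```

Note how only the master key must stay protected; the encrypted data key can safely travel with the ciphertext.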

100
Q

A company uses CodeDeploy to handle their ECS deployment process. There is a requirement to run a new task at the end of each deployment, after the second target group serves traffic to the replacement task set.

Which lifecycle hook inside of their CodeDeploy appspec.yml file should be used to trigger the task?

BeforeInstall

AfterInstall

AfterAllowTraffic

BeforeAllowTraffic

A

AfterAllowTraffic

Use to run tasks after the second target group serves traffic to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.

101
Q

A company has a web application running on multiple EC2 instances in their AWS account. There is a requirement for the application to regularly access multiple other AWS resources out of EC2 during its normal operation.

Which is the most secure way they can provide all of the instances with access to the required AWS resources?

Attach a single instance role to all instances

Attach multiple instance roles to each instance

Provide access keys inside environment variables for each instance

Include a file in the application root of each instance with access keys

A

Attach a single instance role to all instances

IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use.

For example, you can use IAM roles to grant permissions to applications running on your instances that need to use a bucket in Amazon S3. You can specify permissions for IAM roles by creating a policy in JSON format. These are similar to the policies that you create for IAM users. If you change a role, the change is propagated to all instances.

When creating IAM roles, associate least privilege IAM policies that restrict access to the specific API calls the application requires.
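As a sketch of what "least privilege" means here, the policy below grants only `s3:GetObject` on a single bucket. The bucket name `my-app-assets` is hypothetical; substitute the resources your application actually calls.

```python
import json

# Hypothetical least-privilege policy for an instance role that only
# needs to read objects from one S3 bucket ("my-app-assets" is made up).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-app-assets/*",
    }],
}
print(json.dumps(policy, indent=2))
```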

102
Q

A company wants to access AWS services via the SDK for their on-premise application.

A

Generally, when you want to give the SDK access on AWS infrastructure, you never want to manually place the access key and secret onto your server; use an IAM role instead. When you are not running on AWS, however, you'll need to store the access key and secret in a configuration file, in environment variables, or in the ~/.aws/credentials file.

You never want to place your access key and secret directly into code that is managed by a repository; otherwise, your access key and secret could be leaked if someone were to compromise your repository.
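For reference, the shared credentials file is a simple INI file. The sketch below parses one with the standard library; the key values are placeholders, not real credentials.

```python
import configparser

# The shared credentials file the SDKs read from ~/.aws/credentials.
# Both key values below are placeholders, not real credentials.
credentials_file = """\
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = wJalrEXAMPLESECRET
"""

config = configparser.ConfigParser()
config.read_string(credentials_file)
print(config["default"]["aws_access_key_id"])  # AKIAEXAMPLEKEY
```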

103
Q

A company is utilizing AWS Lambda functions in key areas of their production environment. When deploying new function versions they want to slowly shift their incoming traffic over to the new versions, and have the ability to easily roll back to the old versions if any issues are detected.

How can they implement the Lambda functions in their environment to best achieve this?

  • Create weighted routing policy with multiple Lambda functions
  • Create multi-value answer routing policy with multiple Lambda functions.
  • Use an ELB for routing traffic to multiple Lambda functions.
  • Use Lambda alias routing configuration
A

Use Lambda alias routing configuration

You can create one or more aliases for your AWS Lambda function. A Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN. Use routing configuration on an alias to send a portion of traffic to a second function version. For example, you can reduce the risk of deploying a new version by configuring the alias to send most of the traffic to the existing version, and only a small percentage of traffic to the new version.
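To see what a 90/10 weighted alias does in practice, here is a toy simulation of the traffic split in pure Python. The weights mirror an alias routing configuration; the version labels and trial count are made up for illustration.

```python
import random

# Toy simulation of alias traffic shifting: 90% of invocations go to
# version "1", 10% to the new version "2".
random.seed(42)  # fixed seed so the run is repeatable
weights = {"1": 0.9, "2": 0.1}

counts = {"1": 0, "2": 0}
for _ in range(10_000):
    version = random.choices(list(weights), weights=list(weights.values()))[0]
    counts[version] += 1
print(counts)
```

Rolling back is then just a matter of setting the new version's weight back to 0.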

104
Q

List of lifecycle event hooks for an Amazon ECS deployment

A

(start)
BeforeInstall – Use to run tasks before the replacement task set is created. One target group is associated with the original task set. If an optional test listener is specified, it is associated with the original task set. A rollback is not possible at this point.

(install)
AfterInstall – Use to run tasks after the replacement task set is created and one of the target groups is associated with it. If an optional test listener is specified, it is associated with the original task set. The results of a hook function at this lifecycle event can trigger a rollback.

(allow test traffic)
AfterAllowTestTraffic – Use to run tasks after the test listener serves traffic to the replacement task set. The results of a hook function at this point can trigger a rollback.

BeforeAllowTraffic – Use to run tasks after the second target group is associated with the replacement task set, but before traffic is shifted to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.

(allow traffic)
AfterAllowTraffic – Use to run tasks after the second target group serves traffic to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.
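As a study aid (not an AWS API), the run order above can be captured in a small Python list:

```python
# ECS deployment lifecycle hooks in the order CodeDeploy runs them.
ECS_HOOK_ORDER = [
    "BeforeInstall",
    "AfterInstall",
    "AfterAllowTestTraffic",
    "BeforeAllowTraffic",
    "AfterAllowTraffic",
]

# Only BeforeInstall cannot trigger a rollback.
ROLLBACK_CAPABLE = set(ECS_HOOK_ORDER) - {"BeforeInstall"}
print(ECS_HOOK_ORDER.index("AfterAllowTraffic"))  # 4: the last hook
```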

105
Q

List of lifecycle event hooks for an AWS Lambda deployment

A

(start)
BeforeAllowTraffic – Use to run tasks before traffic is shifted to the deployed Lambda function version. The results of a hook function at this lifecycle event can trigger a rollback.

(allow traffic)
AfterAllowTraffic – Use to run tasks after all traffic is shifted to the deployed Lambda function version. The results of a hook function at this lifecycle event can trigger a rollback.

106
Q

AppSpec ‘hooks’ section for an EC2/On-Premises deployment

Note:

For in-place deployments, the six hooks related to blocking and allowing traffic apply only if you specify a Classic Load Balancer, Application Load Balancer, or Network Load Balancer from Elastic Load Balancing in the deployment group.

A

(start)
BeforeBlockTraffic – to run tasks on instances before they are deregistered from a load balancer.

(block traffic)
AfterBlockTraffic – to run tasks on instances after they are deregistered from a load balancer.

ApplicationStop – occurs even before the application revision is downloaded.

(DownloadBundle)
BeforeInstall – for preinstall tasks, such as decrypting files and creating a backup of the current version.

(install)

AfterInstall – for tasks such as configuring your application or changing file permissions.

ApplicationStart – to restart services that were stopped during ApplicationStop.

ValidateService – to verify the deployment was completed successfully.

BeforeAllowTraffic – to run tasks on instances before they are registered with a load balancer.

(allow traffic)

AfterAllowTraffic – You can use this deployment lifecycle event to run tasks on instances after they are registered with a load balancer.

107
Q

AppSpec ‘hooks’ section for an EC2/On-Premises deployment

In a blue/green deployment, event hooks are run in the following order:

A
  • Start

ApplicationStop

  • DownloadBundle

BeforeInstall

  • Install

AfterInstall

ApplicationStart

ValidateService

BeforeAllowTraffic

  • AllowTraffic

AfterAllowTraffic

108
Q

When you associate a Lambda function with a VPC, what error can you get related to permissions?

A

You can get: “The execution role does not have permissions to create a network interface”

To connect to resources in a VPC, Lambda needs to create an elastic network interface (ENI) in the subnet.

You need to modify the execution role first to grant the required EC2 permissions.
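The usual fix is attaching the AWSLambdaVPCAccessExecutionRole managed policy to the execution role. The statement below shows the EC2 actions that policy grants for ENI management:

```python
import json

# The ENI permissions a VPC-enabled Lambda's execution role needs
# (these actions are granted by the AWSLambdaVPCAccessExecutionRole
# managed policy; attaching that policy is the usual fix).
vpc_access_statement = {
    "Effect": "Allow",
    "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface",
    ],
    "Resource": "*",
}
print(json.dumps(vpc_access_statement, indent=2))
```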

109
Q

AWS ___ and ___ deployments cannot use an in-place deployment type

A

Lambda and Amazon ECS

The Blue/green deployment type on an Amazon ECS compute platform works like this:

  • Traffic is shifted from the task set with the original version of an application in an Amazon ECS service to a replacement task set in the same service.
  • You can set the traffic shifting to linear or canary through the deployment configuration.
  • The protocol and port of a specified load balancer listener are used to reroute production traffic.
  • During a deployment, a test listener can be used to serve traffic to the replacement task set while validation tests are run.
110
Q

A company is deploying an on-premise application server that will connect to several AWS services.

What is the BEST way to provide the application server with permissions to authenticate to AWS services?

  • Create an IAM user and generate access keys.
  • Create a credentials file on the application server
  • Create an IAM group with the necessary permissions and add the on-premise application server to the group
  • Create an IAM role with the necessary permissions and assign it to the application server
  • Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services
A

Create an IAM user and generate access keys.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).

A key pair is used for SSH access to EC2 instances; you cannot use it in API calls to AWS services.

111
Q

Every time an Amazon EC2 instance is launched, certain metadata about the instance should be recorded in an Amazon DynamoDB table. The data is gathered and written to the table by an AWS Lambda function.

A

CloudWatch Events:

  • Event Pattern
  • Schedule

CORRECT: “Create a CloudWatch Event with an event pattern looking for EC2 state changes and a target set to use the Lambda function” is the correct answer.

In this scenario the only workable solution is to create a CloudWatch Event with an event pattern looking for EC2 state changes and a target set to use the Lambda function.
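A CloudWatch Events (EventBridge) rule matches events against a JSON pattern like the one below. The `detail.state` filter is an assumption for this scenario; omit `detail` to match every state change. The rule's target would be the Lambda function that writes to DynamoDB.

```python
import json

# Event pattern for a rule matching EC2 instance state changes.
# The "running" filter is illustrative; omit "detail" to match all
# state transitions (pending, running, stopping, stopped, etc.).
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running"]},
}
print(json.dumps(event_pattern))
```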

112
Q

A Developer is deploying an application in a microservices architecture on Amazon ECS. The Developer needs to choose the best task placement strategy to MINIMIZE the number of instances that are used.

Which task placement strategy should be used?

binpack
spread
random
weighted

A

binpack
Tasks are placed on container instances so as to leave the least amount of unused CPU or memory. This strategy minimizes the number of container instances in use.

random
Tasks are placed randomly.

spread
Tasks are placed evenly based on the specified value, such as Availability Zone. This maintains a balance but does not minimize the number of instances.
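A placement strategy is passed to RunTask or CreateService as a list of type/field pairs. The sketch below shows the binpack shape; packing on `memory` is one common choice (`cpu` is the other valid field).

```python
# Placement strategy for RunTask/CreateService: binpack on memory
# packs tasks onto as few container instances as possible.
# (Binpacking on "cpu" instead is the other valid field.)
placement_strategy = [{"type": "binpack", "field": "memory"}]
print(placement_strategy)
```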

113
Q

A developer is making some updates to an AWS Lambda function that is part of a serverless application and will be saving a new version. The application is used by hundreds of users and the developer needs to be able to test the updates and be able to rollback if there any issues with user experience.

What is the SAFEST way to do this with minimal changes to the application code?

Create an alias and point it to the new version. Update the application code to point to the new alias

Create A records in Route 53 for each function version’s ARN. Use a weighted routing policy to direct 20% of traffic to the new version. Add the DNS records to the application code

Create an alias and point it to the new and previous versions. Assign a weight of 20% to the new version to direct less traffic. Update the application code to point to the new alias

Update the application code to point to the new version

A

Create an alias and point it to the new and previous versions. Assign a weight of 20% to the new version to direct less traffic. Update the application code to point to the new alias

You can create one or more aliases for your AWS Lambda function. A Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN.

You can point an alias at two versions of your function code and assign a weighting to direct a portion of traffic to each version. This enables a blue/green style of deployment and makes it easy to roll back to the older version by simply updating the weighting if issues occur with user experience.

114
Q

A company is setting up a Lambda function that will process events from a DynamoDB stream. The Lambda function has been created and a stream has been enabled. What else needs to be done for this solution to work?

An alarm should be created in CloudWatch that sends a notification to Lambda when a new entry is added to the DynamoDB stream

Update the CloudFormation template to map the DynamoDB stream to the Lambda function

An event-source mapping must be created on the DynamoDB side to associate the DynamoDB stream with the Lambda function

An event-source mapping must be created on the Lambda side to associate the DynamoDB stream with the Lambda function

A

An event-source mapping must be created on the Lambda side to associate the DynamoDB stream with the Lambda function

An event source mapping is an AWS Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don’t invoke Lambda functions directly. Lambda provides event source mappings for the following services.

Services That Lambda Reads Events From

Amazon Kinesis
Amazon DynamoDB
Amazon Simple Queue Service
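
A minimal sketch of the parameters for the CreateEventSourceMapping call. The stream ARN and function name are placeholders; you would pass this dict to boto3's `lambda_client.create_event_source_mapping(**params)`.

```python
# Parameters for Lambda's CreateEventSourceMapping API. The stream ARN
# and function name below are placeholders, not real resources.
params = {
    "EventSourceArn": (
        "arn:aws:dynamodb:us-east-1:123456789012:"
        "table/Orders/stream/2024-01-01T00:00:00.000"
    ),
    "FunctionName": "process-orders",
    "StartingPosition": "LATEST",  # or TRIM_HORIZON for stream sources
    "BatchSize": 100,
}
print(params["StartingPosition"])
```
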
115
Q

An application scans an Amazon DynamoDB table once per day to produce a report. The scan is performed in non-peak hours when production usage uses around 50% of the provisioned throughput.

How can you MINIMIZE the time it takes to produce the report without affecting production workloads? (Select TWO).

Use a Sequential Scan API operation
Increase read capacity units during the scan operation
Use a Parallel Scan API operation
Use pagination to divide results into 1 MB pages
Use the Limit parameter
A

Use a Parallel Scan API operation
Use the Limit parameter

By default, the Scan operation processes data sequentially. Amazon DynamoDB returns data to the application in 1 MB increments, and an application performs additional Scan operations to retrieve the next 1 MB of data.

To control the amount of data returned per request, use the Limit parameter. This can help prevent a situation where one worker consumes all of the provisioned throughput at the expense of all other workers.
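A minimal sketch of the parallel scan idea, with a stubbed scan function standing in for the DynamoDB API. The item values and segment math are illustrative; a real worker would call `table.scan(Segment=..., TotalSegments=..., Limit=...)` in a loop over `LastEvaluatedKey`.

```python
from concurrent.futures import ThreadPoolExecutor

TOTAL_SEGMENTS = 4
ITEMS = list(range(100))  # stand-in for the table's items

def scan_segment(segment: int, total_segments: int) -> list:
    """Stub for a DynamoDB Scan worker.

    A real worker would call table.scan(Segment=segment,
    TotalSegments=total_segments, Limit=...) repeatedly until
    LastEvaluatedKey is absent; here each worker just takes its slice.
    """
    return [i for i in ITEMS if i % total_segments == segment]

# Each thread scans a disjoint segment; together they cover the table.
with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = pool.map(scan_segment, range(TOTAL_SEGMENTS),
                       [TOTAL_SEGMENTS] * TOTAL_SEGMENTS)
    items = [item for chunk in results for item in chunk]
print(len(items))  # 100
```

Because the segments are disjoint, no item is scanned twice and no worker waits on another.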

116
Q

A developer has multiple micro-services running in production. They want to use AWS X-Ray to trace the path of the request to discover any point of failure. How can they configure X-Ray with their micro-services to collect granular trace information such as the performance of a section of their code?

  • Use PutTraceSegments to report trace segments to X-Ray API.
  • Use PutTraceSegments to report segment documents to X-Ray API.
  • X-Ray cannot gather performance issues on sections of code. Use CloudWatch Logs instead.
  • Using the X-Ray SDK to report segments and subsegments with BeginSegment and BeginSubsegment.
A

Use the X-Ray SDK to report segments and subsegments with BeginSegment and BeginSubsegment.
Subsegments allow you to collect granular, code-level detail about a service, such as performance.

X - PutTraceSegments uploads segment documents, not trace segments, to the X-Ray API.
X - X-Ray can gather performance detail on sections of code using subsegments.

117
Q

If the company could detect when memory usage is running rampant, they could force a restart of the server. How can this application’s memory be monitored to troubleshoot when rampant memory leaks are occurring?

A

Install CloudWatch Agent

CloudWatch does not collect memory and disk usage metrics by default; you need to install the CloudWatch Agent.

Before the CloudWatch Agent, you had to install CloudWatch Monitoring Scripts. The latter could show up on old exam questions.