Notes 4 Flashcards

1
Q

How can you have an AWS account per team/project and then consolidate the bill?

A

The consolidated billing feature in AWS Organizations allows you to consolidate payment for multiple AWS accounts or multiple AISPL accounts. Each organization in AWS Organizations has a master account that pays the charges for all the member accounts. If you have access to the master account, you can see a combined view of the AWS charges that are incurred by the member accounts. You also can get a cost report for each member account.

2
Q

How can you encrypt data locally using KMS?

A

When using KMS, it is recommended that you use the following pattern to encrypt data locally in your application:

  1. Use the GenerateDataKey operation to get a data encryption key.
  2. Use the plaintext data key (returned in the Plaintext field of the response) to encrypt data locally, then erase the plaintext data key from memory.
  3. Store the encrypted data key (returned in the CiphertextBlob field of the response) alongside the locally encrypted data.

Hence, the valid steps in this scenario are the following:

– Use the GenerateDataKey operation to get a data encryption key then use the plaintext data key in the response to encrypt data locally.

– Erase the plaintext data key from memory and store the encrypted data key alongside the locally encrypted data.
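
Below is a minimal boto3 sketch of this pattern. The key alias, and the use of AES-GCM via the third-party cryptography package for the local encryption step, are illustrative assumptions.

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package

kms = boto3.client("kms")

# 1. Get a 256-bit data key from KMS (the key alias is a placeholder).
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")

# 2. Encrypt locally with the plaintext key, then erase it from memory.
nonce = os.urandom(12)
ciphertext = AESGCM(resp["Plaintext"]).encrypt(nonce, b"secret data", None)
del resp["Plaintext"]

# 3. Store the encrypted data key alongside the locally encrypted data.
#    To decrypt later, call kms.decrypt(CiphertextBlob=...) to recover the data key.
record = {"key": resp["CiphertextBlob"], "nonce": nonce, "data": ciphertext}
```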

3
Q

Lambda function to process records with DynamoDB Streams

A

You can use an AWS Lambda function to process records in an Amazon DynamoDB stream. With DynamoDB Streams, you can trigger a Lambda function to perform additional work each time a DynamoDB table is updated.

You need to create an event source mapping to tell Lambda to send records from your stream to a Lambda function. You can create multiple event source mappings to process the same data with multiple Lambda functions, or process items from multiple streams with a single function. To configure your function to read from DynamoDB Streams in the Lambda console, create a DynamoDB trigger.

You also need to assign the following permissions to Lambda:

dynamodb:DescribeStream
dynamodb:GetRecords
dynamodb:GetShardIterator
dynamodb:ListStreams

The AWSLambdaDynamoDBExecutionRole managed policy already includes these permissions.

Hence, the correct answers are:

– Create an event source mapping in Lambda to send records from your stream to a Lambda function.

– Select AWSLambdaDynamoDBExecutionRole managed policy as the function’s execution role.
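
A minimal boto3 sketch of creating the event source mapping; the stream ARN and function name are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Tell Lambda to poll the table's stream and invoke the function with batches of records.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/2024-01-01T00:00:00.000",
    FunctionName="process-orders",
    StartingPosition="LATEST",  # or TRIM_HORIZON to start from the oldest record
    BatchSize=100,
)
```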

4
Q

Optimistic locking in DynamoDB

A

Optimistic locking is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in DynamoDB. If you use this strategy, then your database writes are protected from being overwritten by the writes of others — and vice-versa. Take note that:

– DynamoDB global tables use a “last writer wins” reconciliation between concurrent updates, so if you use Global Tables, the last write wins and the locking strategy does not work as expected.

– DynamoDBMapper transactional operations do not support optimistic locking.

With optimistic locking, each item has an attribute that acts as a version number. If you retrieve an item from a table, the application records the version number of that item. You can update the item, but only if the version number on the server side has not changed. If there is a version mismatch, it means that someone else has modified the item before you did; the update attempt fails, because you have a stale version of the item. If this happens, you simply try again by retrieving the item and then attempting to update it. Optimistic locking prevents you from accidentally overwriting changes that were made by others; it also prevents others from accidentally overwriting your changes.

Hence, implementing an optimistic locking strategy in your application source code by designating one property to store the version number in the mapping class for your table is the correct answer in this scenario.
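
In Java, DynamoDBMapper implements this via the @DynamoDBVersionAttribute annotation; the underlying conditional write looks roughly like this boto3 sketch (table, key, and attribute names are placeholders).

```python
import boto3

table = boto3.resource("dynamodb").Table("Products")

item = table.get_item(Key={"Id": "123"})["Item"]
seen_version = item["Version"]  # the attribute acting as a version number

try:
    # Succeeds only if nobody has changed the item since we read it.
    table.update_item(
        Key={"Id": "123"},
        UpdateExpression="SET Price = :p, Version = :new",
        ConditionExpression="Version = :seen",
        ExpressionAttributeValues={":p": 99, ":new": seen_version + 1, ":seen": seen_version},
    )
except table.meta.client.exceptions.ConditionalCheckFailedException:
    pass  # stale version: retrieve the item again and retry the update
```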

5
Q

Global secondary index in DynamoDB

A

A global secondary index is an index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered “global” because queries on the index can span all of the data in the base table, across all partitions.

To create a table with one or more global secondary indexes, use the CreateTable operation with the GlobalSecondaryIndexes parameter. For maximum query flexibility, you can create up to 20 global secondary indexes (default limit) per table. You must specify one attribute to act as the index partition key; you can optionally specify another attribute for the index sort key. It is not necessary for either of these key attributes to be the same as a key attribute in the table. Global secondary indexes inherit the read/write capacity mode from the base table.

The following are things the developer should consider when using a global secondary index:

– Queries or scans on this index consume capacity units from the index, not from the base table.

– Queries on this index support eventual consistency only.
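
A sketch of creating a table with a GSI via boto3; table, index, and attribute names are illustrative.

```python
import boto3

boto3.client("dynamodb").create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "GameTitleIndex",
        "KeySchema": [
            {"AttributeName": "GameTitle", "KeyType": "HASH"},  # different partition key
            {"AttributeName": "TopScore", "KeyType": "RANGE"},  # different sort key
        ],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```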

6
Q

If your identity store is not compatible with SAML 2.0, how can you federate access to AWS?

A

If your identity store is not compatible with SAML 2.0, then you can build a custom identity broker application to perform a similar function. The broker application authenticates users, requests temporary credentials for users from AWS, and then provides them to the user to access AWS resources.

The application verifies that employees are signed into the existing corporate network’s identity and authentication system, which might use LDAP, Active Directory, or another system. The identity broker application then obtains temporary security credentials for the employees.

To get temporary security credentials, the identity broker application calls either “AssumeRole” or “GetFederationToken”, depending on how you want to manage the policies for users and when the temporary credentials should expire. The call returns temporary security credentials consisting of an AWS access key ID, a secret access key, and a session token. The identity broker application makes these temporary security credentials available to the internal company application. The app can then use the temporary credentials to make calls to AWS directly. The app caches the credentials until they expire, and then requests a new set of temporary credentials.

Hence, the correct answer is “to create a custom identity broker application in your on-premises data center and use STS to issue short-lived AWS credentials”.
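
A minimal sketch of the broker's STS call, assuming the GetFederationToken path; the user name and scope-down policy are illustrative.

```python
import json

import boto3

sts = boto3.client("sts")  # the broker calls this with its own long-term IAM credentials

creds = sts.get_federation_token(
    Name="jane.doe",           # identifier from the corporate directory (placeholder)
    DurationSeconds=3600,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket",
        }],
    }),
)["Credentials"]

# creds now holds AccessKeyId, SecretAccessKey, SessionToken, and Expiration,
# which the broker hands to the internal application.
```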

7
Q

How do you increase the number of instances in an EC2 Auto Scaling group?

A

At any time, you can change the size of an existing EC2 Auto Scaling group manually. You can either update the “desired capacity” of the Auto Scaling group, or update the instances that are attached to the Auto Scaling group. Manually scaling your group can be useful when automatic scaling is not needed or when you need to hold capacity at a fixed number of instances.

With each Auto Scaling group, you can control when it adds instances (referred to as scaling out) or removes instances (referred to as scaling in) from your network architecture. You can scale the size of your group manually by adjusting your desired capacity, or you can automate the process through the use of scheduled scaling or a scaling policy.
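
A one-call boto3 sketch of manual scaling; the group name is a placeholder.

```python
import boto3

# Manually scale the group out to 5 instances.
boto3.client("autoscaling").set_desired_capacity(
    AutoScalingGroupName="my-asg",
    DesiredCapacity=5,
    HonorCooldown=False,  # apply the change immediately, ignoring the cooldown period
)
```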

8
Q

What is AWS AppSync?

A

AWS AppSync simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources. AppSync is a managed service that uses GraphQL to make it easy for applications to get exactly the data they need.

With AppSync, you can build scalable applications, including those requiring real-time updates, on a range of data sources such as NoSQL data stores, relational databases, HTTP APIs, and your custom data sources with AWS Lambda. For mobile and web apps, AppSync additionally provides local data access when devices go offline, and data synchronization with customizable conflict resolution, when they are back online.

AWS AppSync is quite similar to Amazon Cognito Sync, which is also a service for synchronizing application data across devices. Both let user data like app preferences or game state be synchronized; the key difference is that AppSync extends these capabilities by allowing multiple users to synchronize and collaborate in real time on shared data.

AWS AppSync is a serverless GraphQL and Pub/Sub API service that simplifies building modern web and mobile applications.

9
Q

What is AWS Amplify?

A

AWS Amplify is a set of purpose-built tools and features that lets frontend web and mobile developers quickly and easily build full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as your use cases evolve.

10
Q

Multi-container Docker images on EC2

A

You can create Docker environments that support multiple containers per Amazon EC2 instance with the Multicontainer Docker platform for Elastic Beanstalk.

Elastic Beanstalk uses Amazon Elastic Container Service (Amazon ECS) to coordinate container deployments to multicontainer Docker environments. Amazon ECS provides tools to manage a cluster of instances running Docker containers. Elastic Beanstalk takes care of Amazon ECS tasks including cluster creation, task definition and execution.

AWS Elastic Beanstalk is an application management platform that helps customers easily deploy and scale web applications and services. It keeps the provisioning of building blocks (e.g., EC2, RDS, Elastic Load Balancing, Auto Scaling, CloudWatch), deployment of applications, and health monitoring abstracted from the user so they can just focus on writing code. You simply specify which container images are to be deployed, the CPU and memory requirements, the port mappings, and the container links.

Elastic Beanstalk will automatically handle all the details such as provisioning an Amazon ECS cluster, balancing load, auto-scaling, monitoring, and placing your containers across your cluster. Elastic Beanstalk is ideal if you want to leverage the benefits of containers but just want the simplicity of deploying applications from development to production by uploading a container image. You can work with Amazon ECS directly if you want more fine-grained control for custom application architectures.

11
Q

CloudFormation inline code to deploy

A

To create a Lambda function you first create a Lambda function deployment package, a .zip or .jar file consisting of your code and any dependencies. When creating the zip, include only the code and its dependencies, not the containing folder. You will then need to set the appropriate security permissions for the zip package.

If you are using a CloudFormation template, you can configure the AWS::Lambda::Function resource which creates a Lambda function. To create a function, you need a deployment package and an execution role. The deployment package contains your function code. The execution role grants the function permission to use AWS services, such as Amazon CloudWatch Logs for log streaming and AWS X-Ray for request tracing.

Under the “AWS::Lambda::Function” resource, you can use the Code property which contains the deployment package for a Lambda function. For all runtimes, you can specify the location of an object in Amazon S3.

For Node.js and Python functions, you can specify the function code inline in the template. Changes to a deployment package in Amazon S3 are not detected automatically during stack updates. To update the function code, change the object key or version in the template.

Hence, the ZipFile parameter is the correct one to use in this scenario, as it allows the developer to place the Python code inline in the template. If you include your function source inline with this parameter, AWS CloudFormation places it in a file named index and zips it to create a deployment package. This is why the parameter is called “ZipFile”, and not because it accepts zip files.
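
A sketch of the idea, with a minimal template embedded as a string and deployed with boto3; the stack name, role ARN, and function body are placeholders.

```python
import boto3

# A minimal template whose Lambda code is inline under Code.ZipFile.
TEMPLATE = """
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler   # 'index' is the file name CloudFormation creates
      Role: arn:aws:iam::123456789012:role/lambda-exec-role
      Code:
        ZipFile: |
          def handler(event, context):
              return "hello"
"""

boto3.client("cloudformation").create_stack(
    StackName="inline-lambda-demo",
    TemplateBody=TEMPLATE,
)
```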

12
Q

Setting capacity units on DynamoDB global secondary index

A

A global secondary index (GSI) is an index with a partition key and a sort key that can be different from those on the base table. It is considered “global” because queries on the index can span all of the data in the base table, across all partitions.

Every global secondary index has its own provisioned throughput settings for read and write activity. Queries or scans on a global secondary index consume capacity units from the index, not from the base table. The same holds true for global secondary index updates due to table writes.

When you create a global secondary index on a provisioned mode table, you must specify read and write capacity units for the expected workload on that index. The provisioned throughput settings of a global secondary index are separate from those of its base table. A Query operation on a global secondary index consumes read capacity units from the index, not the base table. When you put, update or delete items in a table, the global secondary indexes on that table are also updated; these index updates consume write capacity units from the index, not from the base table.

For example, if you Query a global secondary index and exceed its provisioned read capacity, your request will be throttled. If you perform heavy write activity on the table but a global secondary index on that table has insufficient write capacity, then the write activity on the table will be throttled.

To avoid potential throttling, the provisioned write capacity for a global secondary index should be equal to or greater than the write capacity of the base table, since new updates write to both the base table and the global secondary index.

Hence, the correct answer in this scenario is to ensure that the global secondary index’s provisioned WCU is equal to or greater than the WCU of the base table.
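
A boto3 sketch of adjusting an existing index's throughput; table, index, and capacity values are illustrative.

```python
import boto3

# Raise the index's write capacity to at least match the base table.
boto3.client("dynamodb").update_table(
    TableName="GameScores",
    GlobalSecondaryIndexUpdates=[{
        "Update": {
            "IndexName": "GameTitleIndex",
            "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 50},
        },
    }],
)
```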

13
Q

Connecting Lambda function to VPC

A

You can configure a Lambda function to connect to a virtual private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your function to the VPC to access private resources during execution.

AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.

Don’t put your Lambda function in a VPC unless you have to. There is no benefit outside of using this to access resources you cannot expose publicly, like a private Amazon Relational Database Service (RDS) instance. Services like Amazon Elasticsearch Service can be secured over IAM with access policies, so exposing the endpoint publicly is safe and wouldn’t require you to run your function in the VPC to secure it.
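
A boto3 sketch of the VPC-specific configuration; the function name and IDs are placeholders.

```python
import boto3

# Attach an existing function to private subnets.
# The function's execution role also needs permission to manage ENIs,
# e.g. via the AWSLambdaVPCAccessExecutionRole managed policy.
boto3.client("lambda").update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```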

14
Q

A Developer has run “aws configure” on an Amazon EC2 Linux instance. Where would the access keys be stored on the filesystem?

A

~/.aws/credentials

15
Q

What is an “instance profile”?

A

An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.

16
Q

An organization has separate production and development accounts. What’s the most efficient way to give a Developer, who has a user account in the development account, permission to deploy AWS services into the production account?

A

Allow the user to assume a role in the production account that has the necessary permissions. This is known as cross-account access.

17
Q

Which element of an IAM policy document can be used to specify that a policy should take effect only if the caller is coming from a specific source IP address?

A

“Condition”

The Condition element (or Condition block) lets you specify conditions for when a policy is in effect.
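
An illustrative policy document (as a Python dict): the Allow statement takes effect only when the request originates from the given CIDR range, which is a placeholder value.

```python
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "*",
        # The statement applies only to callers from this source IP range.
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}
```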

18
Q

IAM best practices

A
  • You should create individual IAM users. You cannot delete the root account; instead, remove its access keys, set a complex password, enable MFA, and use IAM users for most operations.
  • Do not share access keys.
  • Use groups to assign permissions to IAM users.
19
Q

Which AWS Cognito feature allows an authenticated user to gain temporary security credentials for accessing AWS services?

A

A Cognito Identity Pool

Cognito Identity pools provide temporary security credentials to access your app’s backend resources in AWS or any service behind Amazon API Gateway.

20
Q

Cognito user vs identity pools

A

User pools are for authentication (identity verification). With a user pool, your app users can sign in through the user pool or federate through a third-party identity provider (IdP).

Identity pools are for authorization (access control). You can use identity pools to create unique identities for users and give them access to other AWS services.

User pool use cases

Use a user pool when you need to:

Design sign-up and sign-in webpages for your app.
Access and manage user data.
Track user device, location, and IP address, and adapt to sign-in requests of different risk levels.
Use a custom authentication flow for your app.

Identity pool use cases

Use an identity pool when you need to:

Give your users access to AWS resources, such as an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon DynamoDB table.
Generate temporary AWS credentials for unauthenticated users.

For more example use cases, see Common Amazon Cognito Scenarios.

21
Q

Which AWS service provides a dedicated single-tenant managed service for securely creating and managing encryption keys?

A

AWS CloudHSM. CloudHSM provides dedicated, single-tenant hardware security modules, whereas AWS KMS is multi-tenant.

22
Q

Explain envelope encryption.

A

Envelope encryption is the process of encrypting plaintext data with a data key and then encrypting the data key with a top-level plaintext master key.

23
Q

Which services should you use to import an SSL certificate from a 3rd party?

A

AWS Certificate Manager (ACM) and IAM certificate store.

If you got your SSL certificate from a third-party CA, import the certificate into ACM or upload it to the IAM certificate store.

24
Q

A company is currently in the process of integrating their on-premises data center to their cloud infrastructure in AWS. One of the requirements is to integrate the on-premises Lightweight Directory Access Protocol (LDAP) directory service to their AWS VPC using IAM.

Which of the following provides the MOST suitable solution to implement if the identity store that they are using is not compatible with SAML?

A

Create a custom identity broker application in your on-premises data center and use STS to issue short-lived AWS credentials.

If your identity store is not compatible with SAML 2.0, then you can build a custom identity broker application to perform a similar function. The broker application authenticates users, requests temporary credentials for users from AWS, and then provides them to the user to access AWS resources.

The application verifies that employees are signed into the existing corporate network’s identity and authentication system, which might use LDAP, Active Directory, or another system. The identity broker application then obtains temporary security credentials for the employees.

To get temporary security credentials, the identity broker application calls either AssumeRole or GetFederationToken to obtain temporary security credentials, depending on how you want to manage the policies for users and when the temporary credentials should expire. The call returns temporary security credentials consisting of an AWS access key ID, a secret access key, and a session token. The identity broker application makes these temporary security credentials available to the internal company application. The app can then use the temporary credentials to make calls to AWS directly. The app caches the credentials until they expire, and then requests a new set of temporary credentials.

25
Q

A developer needs to encrypt all objects being uploaded by their application to the S3 bucket to comply with the company’s security policy. The bucket will use server-side encryption with Amazon S3-Managed encryption keys (SSE-S3) to encrypt the data using 256-bit Advanced Encryption Standard (AES-256) block cipher.

Which of the following request headers should the developer use?

A

“x-amz-server-side-encryption” header is correct as this is the one being used for Amazon S3-Managed Encryption Keys (SSE-S3).

However, if you chose to use server-side encryption with customer-provided encryption keys (SSE-C), you must provide encryption key information using the following request headers:

x-amz-server-side-encryption-customer-algorithm

x-amz-server-side-encryption-customer-key

x-amz-server-side-encryption-customer-key-MD5
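
A minimal boto3 sketch of an SSE-S3 upload; bucket and key names are placeholders. boto3 sends the x-amz-server-side-encryption: AES256 header for you.

```python
import boto3

boto3.client("s3").put_object(
    Bucket="example-bucket",
    Key="report.pdf",
    Body=b"...",
    ServerSideEncryption="AES256",  # SSE-S3
)
```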

26
Q

A developer has instrumented an application using the X-Ray SDK to collect all data about the requests that an application serves. There is a new requirement to develop a custom debug tool which will enable them to view the full traces of their application without using the X-Ray console.

What should the developer do to accomplish this task?

A

X-Ray compiles and processes segment documents to generate queryable trace summaries and full traces that you can access by using the “GetTraceSummaries” and “BatchGetTraces” APIs, respectively. In addition to the segments and subsegments that you send to X-Ray, the service uses information in subsegments to generate inferred segments and adds them to the full trace. Inferred segments represent downstream services and resources in the service map.

In this scenario, the developer should “use the GetTraceSummaries API to get the list of trace IDs of the application and then retrieve the list of traces using the BatchGetTraces API” in order to develop the custom debug tool.
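
A boto3 sketch of that two-step flow; the time window is an assumption, and pagination is omitted for brevity.

```python
from datetime import datetime, timedelta

import boto3

xray = boto3.client("xray")

# 1. Get trace IDs for the last 10 minutes.
end = datetime.utcnow()
summaries = xray.get_trace_summaries(StartTime=end - timedelta(minutes=10), EndTime=end)
trace_ids = [s["Id"] for s in summaries["TraceSummaries"]]

# 2. Retrieve full traces; BatchGetTraces accepts up to 5 trace IDs per call.
for i in range(0, len(trace_ids), 5):
    for trace in xray.batch_get_traces(TraceIds=trace_ids[i:i + 5])["Traces"]:
        print(trace["Id"], len(trace["Segments"]))
```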

27
Q

AWS X-Ray Annotations and Metadata

A

AWS X-Ray

“Annotations” are simple key-value pairs that are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API. X-Ray indexes up to 50 annotations per trace.

“Metadata” are key-value pairs with values of any type, including objects and lists, but that are not indexed. Use metadata to record data you want to store in the trace but don’t need to use for searching traces. You can view annotations and metadata in the segment or subsegment details in the X-Ray console.
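
A short sketch using the X-Ray SDK for Python; it assumes the application is already instrumented so a segment is open, and the names and values are illustrative.

```python
from aws_xray_sdk.core import xray_recorder

subsegment = xray_recorder.begin_subsegment("checkout")
subsegment.put_annotation("order_status", "paid")             # indexed and filterable
subsegment.put_metadata("cart", {"items": 3, "total": 42.5})  # stored but not indexed
xray_recorder.end_subsegment()
```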

28
Q

CodeDeploy

A

CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy.

CodeDeploy provides two deployment type options:

In-place deployment: The application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete. Only deployments that use the EC2/On-Premises compute platform can use in-place deployments. AWS Lambda compute platform deployments cannot use an in-place deployment type.

Blue/green deployment: The behavior of your deployment depends on which compute platform you use:

– Blue/green on an EC2/On-Premises compute platform: The instances in a deployment group (the original environment) are replaced by a different set of instances (the replacement environment). If you use an EC2/On-Premises compute platform, be aware that blue/green deployments work with Amazon EC2 instances only.

– Blue/green on an AWS Lambda compute platform: Traffic is shifted from your current serverless environment to one with your updated Lambda function versions. You can specify Lambda functions that perform validation tests and choose the way in which the traffic shift occurs. All AWS Lambda compute platform deployments are blue/green deployments. For this reason, you do not need to specify a deployment type.

– Blue/green on an Amazon ECS compute platform: Traffic is shifted from the task set with the original version of a containerized application in an Amazon ECS service to a replacement task set in the same service. The protocol and port of a specified load balancer listener are used to reroute production traffic. During deployment, a test listener can be used to serve traffic to the replacement task set while validation tests are run.

The CodeDeploy agent is a software package that, when installed and configured on an instance, makes it possible for that instance to be used in CodeDeploy deployments. The CodeDeploy agent communicates outbound using HTTPS over port 443.

It is also important to note that the CodeDeploy agent is required only if you deploy to an EC2/On-Premises compute platform. The agent is not required for deployments that use the Amazon ECS or AWS Lambda compute platform.

Therefore, the valid considerations in CodeDeploy in this scenario are:

– AWS Lambda compute platform deployments cannot use an in-place deployment type.

– CodeDeploy can deploy applications to both your EC2 instances as well as your on-premises servers.
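
A minimal boto3 sketch of starting a deployment from an S3 revision; application, deployment group, and bucket names are placeholders.

```python
import boto3

boto3.client("codedeploy").create_deployment(
    applicationName="corp-web",
    deploymentGroupName="Production",
    revision={
        "revisionType": "S3",
        "s3Location": {"bucket": "my-artifacts", "key": "corp-web-v2.zip", "bundleType": "zip"},
    },
)
```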

29
Q

AWS SAM template sections

A

An AWS SAM template uses the standard CloudFormation sections (Description, Metadata, Parameters, Mappings, Conditions, Resources, Outputs), plus the Transform: AWS::Serverless-2016-10-31 declaration that identifies it as a SAM template, and an optional Globals section for defaults shared by serverless resources.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html

30
Q

CloudFormation StackSets

A

AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions.

A stack set lets you create stacks in AWS accounts across regions by using a single AWS CloudFormation template. All the resources included in each stack are defined by the stack set’s AWS CloudFormation template. As you create the stack set, you specify the template to use, as well as any parameters and capabilities that the template requires.

Hence, the correct solution in this scenario is to update the stacks on multiple AWS accounts using CloudFormation StackSets.

After you’ve defined a stack set, you can create, update, or delete stacks in the target accounts and regions you specify. When you create, update, or delete stacks, you can also specify operational preferences, such as the order of regions in which you want the operation to be performed, the failure tolerance beyond which stack operations stop, and the number of accounts in which operations are performed on stacks concurrently. Remember that a stack set is a regional resource so if you create a stack set in one region, you cannot see it or change it in other regions.
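
A boto3 sketch of the two administrator-account calls; the template file, account IDs, and regions are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation")  # run with the administrator account

with open("baseline.yaml") as f:
    cfn.create_stack_set(StackSetName="baseline", TemplateBody=f.read())

# Provision stacks into the selected target accounts and regions.
cfn.create_stack_instances(
    StackSetName="baseline",
    Accounts=["111111111111", "222222222222"],
    Regions=["us-east-1", "eu-west-1"],
)
```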

31
Q

Route Tables

A

A route table contains a set of rules, called routes, that are used to determine where network traffic is directed.

Each subnet in your VPC must be associated with a route table; the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table.

Thus, the correct answer is: A subnet can only be associated with one route table at a time.

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html

32
Q

You developed a shell script which uses AWS CLI to create a new Lambda function. However, you received an InvalidParameterValueException after running the script.

What is the MOST likely cause of this issue?

A

The “InvalidParameterValueException” is returned if one of the parameters in the request is invalid. For example, you may have provided an IAM role in the “CreateFunction” API call that AWS Lambda is unable to assume. Hence, this is the most likely cause of the issue in this scenario.
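
A boto3 sketch of catching this error; the function name, role ARN, and zip path are placeholders.

```python
import boto3

client = boto3.client("lambda")

try:
    with open("function.zip", "rb") as f:
        client.create_function(
            FunctionName="my-function",
            Runtime="python3.12",
            Handler="index.handler",
            Role="arn:aws:iam::123456789012:role/not-assumable-by-lambda",  # placeholder
            Code={"ZipFile": f.read()},
        )
except client.exceptions.InvalidParameterValueException as err:
    print("Invalid parameter, e.g. a role Lambda cannot assume:", err)
```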

33
Q

X-Ray subsegments

A

AWS X-Ray analyzes and debugs production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can identify performance bottlenecks, edge-case errors, and other hard-to-detect issues.

A segment can break down the data about the work done into subsegments. Subsegments provide more granular timing information and details about downstream calls that your application made to fulfill the original request. A subsegment can contain additional details about a call to an AWS service, an external HTTP API, or an SQL database. You can define arbitrary subsegments to instrument specific functions or lines of code in your application.

Subsegments extend a trace’s segment with details about work done in order to serve a request. Each time you make a call with an instrumented client, the X-Ray SDK records the information generated in a subsegment. You can create additional subsegments to group other subsegments, to measure the performance of a section of code, or to record annotations and metadata.

Hence, the correct answer is: Using AWS X-Ray, define an arbitrary subsegment inside the code to instrument the function.
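
One way to do this with the X-Ray SDK for Python is its capture decorator; the subsegment name and function here are illustrative.

```python
from aws_xray_sdk.core import xray_recorder

# Wraps the function in its own subsegment each time it is called.
@xray_recorder.capture("compute_price")
def compute_price(cart):
    return sum(item["price"] for item in cart)
```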

34
Q

AWS Cognito custom authentication

A

Amazon Cognito supports developer authenticated identities, in addition to web identity federation through Facebook (Identity Pools), Google (Identity Pools), Login with Amazon (Identity Pools), and Sign in with Apple (Identity Pools). With developer authenticated identities, you can register and authenticate users via your own existing authentication process, while still using Amazon Cognito to synchronize user data and access AWS resources. Using developer authenticated identities involves interaction between the end-user device, your backend for authentication, and Amazon Cognito.

Developers can use their own authentication system with Cognito. What this means is that your app can benefit from all of the features of Amazon Cognito while utilizing your own authentication system. This works by your app requesting a unique identity ID for your end-users based on the identifier you use in your own authentication system. You can use the Cognito identity ID to save and synchronize user data across devices with the Cognito sync service or retrieve temporary, limited-privilege AWS credentials to securely access your AWS resources.

The process is simple: you first request a token for your users by using the server-side Cognito API for developer authenticated identities. Cognito then creates a valid token for your users. You can then exchange this token with the AWS Security Token Service (STS) for AWS credentials.

With developer authenticated identities, a new API, GetOpenIdTokenForDeveloperIdentity, was introduced. This API call replaces the use of GetId and GetOpenIdToken (APIs needed in the basic authflow) from the device and should be called from your backend as part of your own authentication API. Because this API call is signed by your AWS credentials, Cognito can trust that the user identifier supplied in the API call is valid. This replaces the token validation Cognito performs with public providers.

Hence, the correct answer is: Use developer-authenticated identities in Amazon Cognito to generate unique identifiers for the users.
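
A backend-side boto3 sketch; the identity pool ID, developer provider name, and user identifier are placeholders.

```python
import boto3

cognito = boto3.client("cognito-identity")  # called from your backend, signed with AWS credentials

resp = cognito.get_open_id_token_for_developer_identity(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
    Logins={"login.mycompany.myapp": "user-8675309"},  # your provider name -> your user ID
    TokenDuration=900,
)

# resp["IdentityId"] and resp["Token"] go back to the device, which can
# exchange the token for temporary AWS credentials.
```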

35
Q

A team of developers needs permission to launch EC2 instances with an instance role that will allow them to update items in a DynamoDB table. Each developer has an IAM user that belongs to the same IAM group.

Which of the following steps must be done to implement the solution?

A

Create an IAM role with an IAM policy that will allow access to the DynamoDB table. Add the EC2 service to the trust policy of the role. Create a custom policy with iam:PassRole permission. Attach the policy to the IAM group.

Before you can access any AWS services via the CLI/API from an EC2 instance, you must first configure and specify your access credentials. A more secure approach is to allow the EC2 instance to assume an IAM role so it can access AWS services on your behalf. This way, your credentials are never stored or exposed.

According to the scenario, the EC2 instances (that will be launched by the developers) need access to a DynamoDB table. First, create an IAM role with a policy that allows access to the DynamoDB table. After creating the role, add the EC2 service as a trusted entity in the role’s trust policy so that EC2 instances can assume the IAM role. After that, attach a custom policy granting the iam:PassRole permission to the IAM group (a sketch of such a policy follows below).

If a developer doesn’t have the iam:PassRole permission, he or she can’t associate a role with the instance during launch.

The PassRole permission helps you make sure that a user doesn’t pass a role to an EC2 instance where the role has more permissions than you want the user to have. For example, Alice might be allowed to perform only EC2 and S3 actions. If Alice could pass a role to the EC2 instance that allows additional actions, she could log into the instance, get temporary security credentials via the role she passed, and make calls to AWS that you don’t intend.

Hence, the correct answer is: Create an IAM role with an IAM policy that will allow access to the DynamoDB table. Add the EC2 service to the trust policy of the role. Create a custom policy with iam:PassRole permission. Attach the policy to the IAM group.
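
An illustrative policy (as a Python dict) for the developers' IAM group; the role ARN is a placeholder. It allows passing only the DynamoDB-access role to the EC2 instances they launch.

```python
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:PassRole",
        # Only this specific role may be passed to EC2 at launch.
        "Resource": "arn:aws:iam::123456789012:role/dynamodb-writer-role",
    }],
}
```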

36
Q

Some static assets stored in an S3 bucket need to be accessed by a user on the development account. The S3 bucket is in the production account. According to the company policy, the sharing of full credentials between accounts is prohibited.

What steps should be done to delegate access across the two accounts? (Select THREE.)

A

Hence, the correct answers are the following (policy sketches follow the list):

– On the production account, create an IAM role and specify the development account as a trusted entity.

– Set the policy that will grant access to S3 for the IAM role created in the production account.

– Log in to the development account and create a policy that will use STS to assume the IAM role in the production account. Attach the policy to corresponding IAM users.
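
Illustrative policy documents (as Python dicts) for those steps; the account IDs and role ARN are placeholders.

```python
# Trust policy on the role in the production account; the development
# account is the trusted entity.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # development account
        "Action": "sts:AssumeRole",
    }],
}

# Policy attached to the developers' IAM users in the development account,
# allowing them to assume the production role via STS.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::222222222222:role/s3-static-assets-access",
    }],
}
```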

37
Q

AWS Step Function states

A

AWS Step Function states

States can perform a variety of functions in your state machine (a small example definition follows the list):

Task State – Do some work in your state machine.
Choice State – Make a choice between branches of execution.
Fail or Succeed State – Stop execution with failure or success.
Pass State – Simply pass its input to its output, or inject some fixed data, without performing work.
Wait State – Provide a delay for a certain amount of time or until a specified time/date.
Parallel State – Begin parallel branches of execution.
Map State – Dynamically iterate steps.
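
A minimal state machine in Amazon States Language, written as a Python dict; the Lambda ARN and state names are placeholders.

```python
state_machine = {
    "StartAt": "Validate",
    "States": {
        "Validate": {  # Task state: do some work
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "IsValid",
        },
        "IsValid": {  # Choice state: branch on the task's output
            "Type": "Choice",
            "Choices": [{"Variable": "$.valid", "BooleanEquals": True, "Next": "Done"}],
            "Default": "Rejected",
        },
        "Done": {"Type": "Succeed"},
        "Rejected": {"Type": "Fail", "Error": "ValidationError"},
    },
}
```
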
38
Q

A development team needs to deploy an application revision into three environments: Test, Staging, and Production. The application should be deployed into the Test environment first, then Staging, and then Production.

Which approach will conveniently allow the team to deploy the application into different environments?

A

In an EC2/On-Premises deployment, a deployment group is a set of individual instances targeted for deployment. A deployment group contains individually tagged instances, Amazon EC2 instances in Amazon EC2 Auto Scaling groups, or both.

You can associate more than one deployment group with an application in CodeDeploy. This makes it possible to deploy an application revision to different sets of instances at different times. For example, you might use one deployment group to deploy an application revision to a set of instances tagged Test where you ensure the code’s quality. Next, you deploy the same application revision to a deployment group with instances tagged Staging for additional verification. Finally, when you are ready to release the latest application to customers, you deploy to a deployment group that includes instances tagged Production.

Hence, the correct answer is: Create multiple deployment groups for each environment using AWS CodeDeploy.
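
A boto3 sketch of creating one deployment group per environment; the application name, tags, and service role ARN are placeholders.

```python
import boto3

codedeploy = boto3.client("codedeploy")

for env in ["Test", "Staging", "Production"]:
    codedeploy.create_deployment_group(
        applicationName="corp-web",
        deploymentGroupName=env,
        serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
        # Target only instances tagged for this environment.
        ec2TagFilters=[{"Key": "Environment", "Value": env, "Type": "KEY_AND_VALUE"}],
    )
```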

39
Q

AWS KMS

A

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control customer master keys (CMKs), the encryption keys used to encrypt your data. AWS KMS CMKs are protected by hardware security modules (HSMs) that are validated by the FIPS 140-2 Cryptographic Module Validation Program except in the China (Beijing) and China (Ningxia) Regions. AWS KMS is integrated with most other AWS services that encrypt your data. AWS KMS is also integrated with AWS CloudTrail to log use of your CMKs for auditing, regulatory, and compliance needs.

You can perform the following key management functions in AWS KMS:

– Create symmetric and asymmetric keys where the key material is only ever used within the service.

– Create symmetric keys where the key material is generated and used within a custom key store under your control.

– Import your own symmetric key for use within the service.

– Create both symmetric and asymmetric data key pairs for local use within your applications.

– Define which IAM users and roles can manage keys.

– Define which IAM users and roles can use keys to encrypt and decrypt data.

– Choose to have keys that were generated by the service to be automatically rotated on an annual basis.

– Temporarily disable keys so they cannot be used by anyone.

– Re-enable disabled keys.

– Schedule the deletion of keys that you no longer use.

– Audit the use of keys by inspecting logs in AWS CloudTrail.

By default, AWS KMS creates the key material for a CMK. You cannot extract, export, view, or manage this key material. Also, you cannot delete this key material; you must delete the CMK. However, you can import your own key material into a CMK or create the key material for a CMK in the AWS CloudHSM cluster associated with an AWS KMS custom key store. There are also types of CMKs that are not eligible for automatic key rotation such as asymmetric CMKs, CMKs in custom key stores, and CMKs with imported key material.
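
A boto3 sketch exercising a few of these management functions; the alias and description are placeholders.

```python
import boto3

kms = boto3.client("kms")

# Create a symmetric CMK and give it an alias.
key_id = kms.create_key(Description="app data key")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/app-data", TargetKeyId=key_id)

kms.enable_key_rotation(KeyId=key_id)   # automatic annual rotation
kms.disable_key(KeyId=key_id)           # temporarily block all use of the key
kms.enable_key(KeyId=key_id)            # re-enable it
kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=7)  # waiting period: 7-30 days
```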

40
Q

What does the AWS STS DecodeAuthorizationMessage API do?

A

The AWS STS DecodeAuthorizationMessage API decodes additional information about the authorization status of a request from an encoded message returned in response to an AWS request.

For example, if a user is not authorized to perform an operation that he or she has requested, the request returns a Client.UnauthorizedOperation response (an HTTP 403 response). Some AWS operations additionally return an encoded message that can provide details about this authorization failure.
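
A boto3 sketch of decoding such a message; the encoded message here is a placeholder taken from an error response, and the caller needs the sts:DecodeAuthorizationMessage permission.

```python
import boto3

# 'encoded_message' would come from a Client.UnauthorizedOperation error response.
encoded_message = "..."  # placeholder

decoded = boto3.client("sts").decode_authorization_message(EncodedMessage=encoded_message)
print(decoded["DecodedMessage"])  # JSON describing why the request was denied
```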