AWS Solutions Architect Similar Services Flashcards

1
Q

Cognito User Pool or Cognito Identity Pool?

A

Cognito provides

  • authentication - user sign-up and sign-in, with support for enterprise identities (Microsoft AD) and social identities (Amazon, Facebook, Google, etc.)
  • authorization - sets of permissions or operations allowed for a user. Fine-grained access control to resources.
  • user management - user lifecycles such as importing users, onboarding users, disabling users, and storing and managing profiles.

Cognito User Pools - Provide sign-up and sign-in functionality for mobile and web app users. No server infrastructure is needed to authenticate users; user pools provide profiles for managing users, issue standard OpenID Connect and OAuth tokens, and are priced per monthly active user.

Cognito Identity Pools - Provide temporary AWS credentials for accessing resources on behalf of users, support rules to map users to different IAM roles, and are free to use.
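
As a minimal sketch of how the two work together (Python with boto3; the pool IDs, region, and token below are hypothetical placeholders): a user pool authenticates the user and issues tokens, and an identity pool then exchanges the user pool token for temporary AWS credentials.

    import boto3

    USER_POOL_ID = "us-east-1_EXAMPLE"  # hypothetical user pool
    IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"

    identity = boto3.client("cognito-identity", region_name="us-east-1")

    # id_token would come from a prior user pool sign-in (e.g. InitiateAuth).
    id_token = "<ID token issued by the user pool>"
    provider = "cognito-idp.us-east-1.amazonaws.com/" + USER_POOL_ID

    # Step 1: resolve an identity in the identity pool for this user.
    identity_id = identity.get_id(
        IdentityPoolId=IDENTITY_POOL_ID,
        Logins={provider: id_token},
    )["IdentityId"]

    # Step 2: exchange the token for temporary AWS credentials (an IAM role).
    creds = identity.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins={provider: id_token},
    )["Credentials"]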

2
Q

RDS vs. DynamoDB key points

A

Pricing - RDS charges for each DB instance launched, with the option to reserve a DB instance for a one- or three-year term and receive a discount compared to on-demand instance pricing.

DynamoDB charges for reading, writing, and storing data in DynamoDB tables, along with any optional features chosen.

DynamoDB supports ACID transactions.
DynamoDB uses filter expressions because it does not support complex queries.
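
For example, a filter expression narrows Scan results after the items have been read; a sketch with boto3 (the table and attribute names are made up):

    import boto3
    from boto3.dynamodb.conditions import Attr

    table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

    # The filter runs AFTER the scan reads items, so you are billed for the
    # full read; filters reduce the returned payload, not the read cost.
    response = table.scan(FilterExpression=Attr("total").gt(100))
    items = response["Items"]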

Multi-AZ deployments for the MySQL, MariaDB, Oracle, and PostgreSQL engines use synchronous physical replication.

Multi-AZ deployments for SQL Server use synchronous logical replication.

3
Q

S3 vs. EBS vs. EFS

A

S3 is cheaper than EBS and EFS in pure storage costs

EBS and EFS have higher performance than S3

EBS is meant to be used as volumes for EC2 instances

S3 does not have a file hierarchy (it is a flat environment), unlike EFS

S3 offers eventual consistency for overwrite PUTs and DELETEs in all regions. (Note: since December 2020, S3 provides strong read-after-write consistency for all operations.)

4
Q

SWF vs. AWS Step Functions vs. SQS

A

SWF is a web service that makes it easy to coordinate work across distributed application components.

  • In SWF, tasks represent invocations of logical steps in applications
  • Tasks are processed by workers, which are programs that interact with SWF to get tasks, process them, and return results
  • Coordination involves managing execution dependencies, scheduling, and concurrency in accordance with the logical flow of the application

AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using VISUAL WORKFLOWS.

  • Define STATE MACHINES that describe your workflow as a series of steps, their relationships, and inputs and outputs.
  • States represent an individual step in a workflow diagram.
  • States can perform work, make choices, pass parameters, initiate parallel execution, manage timeouts, or terminate your workflow with a success or failure.

SQS is a message queue used by distributed applications to exchange messages through a polling model.
FIFO queues preserve the exact order in which messages are sent and received. Standard queues provide best-effort ordering, which attempts to preserve the order of messages but does not guarantee it.
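
As a sketch, a Step Functions state machine is just an Amazon States Language (JSON) document registered with the service; the function names and ARNs below are hypothetical:

    import json
    import boto3

    # Two sequential Task states, each invoking a (hypothetical) Lambda function.
    definition = {
        "StartAt": "ProcessOrder",
        "States": {
            "ProcessOrder": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
                "Next": "NotifyCustomer",
            },
            "NotifyCustomer": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:NotifyCustomer",
                "End": True,
            },
        },
    }

    boto3.client("stepfunctions").create_state_machine(
        name="OrderWorkflow",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",
    )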

5
Q

CloudWatch vs. CloudTrail

A

CloudWatch is a monitoring service for AWS resources and applications.

  • free basic monitoring for resources such as EC2 instances, EBS volumes, and RDS DB instances, enabled by default.
  • collect and track metrics, monitor log files, set alarms.
  • can enable detailed monitoring for AWS resources to send metric data more frequently, at additional cost.
  • reports on application logs.
  • provides a near real-time stream of system events describing changes to AWS resources (CloudWatch Events).
  • delivers metric data in 5-minute periods for basic monitoring and 1-minute periods for detailed monitoring.
  • the CloudWatch Logs agent sends log data every five seconds by default.
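
Beyond the default metrics, you can publish your own data points; a minimal sketch with boto3 (the namespace, metric, and dimension are made up):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish one custom metric data point (e.g. an application queue depth).
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{
            "MetricName": "QueueDepth",
            "Dimensions": [{"Name": "Environment", "Value": "prod"}],
            "Value": 42.0,
            "Unit": "Count",
        }],
    )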

CloudTrail is a web service that records API activity in your AWS account.

  • CloudTrail is enabled by default (90 days of management event history).
  • logs information on WHO made a request, the services used, actions performed, parameters for the actions, and response elements returned by the services.
  • CloudTrail logs are stored in S3 buckets OR a CloudWatch Logs log group.
  • helps ensure compliance with regulatory standards.
  • provides specific information on what occurred in your AWS account.
  • focuses on the AWS API calls made in your AWS account.
  • typically delivers events within 15 minutes of an API call.
6
Q

DataSync vs. Storage Gateway

A

Think DataSync when it comes to large, permanent (one-way) migrations such as the movement of historical data. Storage Gateway is better when data needs to be transferred back and forth on an ongoing basis.

DataSync simplifies copying of large amounts of data to and from AWS storage services over the internet or over Direct Connect

  • uses an agent, a VM owned by the user, to read or write data from storage systems
  • copies to S3, EFS, and FSx for Windows File Server
  • uses NFS, SMB, or the S3 API
  • the DataSync agent comes pre-installed on AWS Snowcone

Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage by linking it to S3.

  • Three types: File, Volume, and Tape
  • Uses a STORAGE GATEWAY APPLIANCE, a VM from Amazon which is installed and hosted in your data center
  • uses iSCSI, SMB, NFS
7
Q

CloudFront Vs. Global Accelerator

A

CloudFront

  • Uses multiple sets of dynamically changing IPs
  • Pricing is mainly based on data transfer out and HTTP requests
  • uses Edge Locations to cache content
  • designed for HTTP

Global Accelerator

  • Provides a set of static IP addresses as a fixed entry point for your applications
  • Charges a fixed hourly fee and an incremental charge over standard transfer rates (DT-Premium: Data Transfer-Premium fee)
  • Uses edge locations to find an optimal pathway to the nearest regional endpoint
  • Best for TCP and UDP
8
Q

EC2 HC vs. ELB HC vs. ASG HC

A

EC2 HCs are built in. If all checks pass, the overall status is OK; if one or more checks fail, the overall status is impaired.
Two Types:
-System status checks - these checks detect underlying problems with your instance that require AWS involvement to repair. Either wait for AWS to fix the issue or resolve it yourself (for example, by stopping and starting the instance to move it to new hardware).
-Instance Status Checks - monitor the software and network configuration of your individual instance. EC2 sends an ARP request to the ENI. These checks require the user to repair.

ELB HCs are configured using a specific protocol and port. An HTTP/HTTPS check succeeds if the target returns a 200 response code, a TCP HC succeeds on a successful connection, and an SSL HC succeeds if the handshake succeeds.
ELB health checks do not support WebSockets.

ASG HCs come from EC2, ELB, or a custom HC.
If an ASG instance is marked as unhealthy, it is scheduled for replacement. This can be manually interrupted.

9
Q

cross-zone load balancing

A

With cross-zone load balancing enabled, load balancer nodes distribute incoming requests evenly across the registered targets in all Availability Zones enabled for your load balancer. Otherwise, each load balancer node distributes requests only to targets in its own Availability Zone.

10
Q

Storage Optimized Instances

A

Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage.

They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

11
Q

Memory Optimized Instances

A

designed to deliver fast performance for workloads that process large data sets in memory.

12
Q

Compute Optimized Instances

A

ideal for compute-bound applications that benefit from high-performance processors, such as batch processing workloads and media transcoding.

13
Q

General Purpose Instances

A

They provide a balance of compute, memory, and networking resources, and can be used for a variety of workloads.

14
Q

Vertical Scaling

A

means running the same software on bigger machines, an approach that is limited by the capacity of the individual server.

15
Q

Horizontal scaling

A

means adding more servers to the existing pool, which does not run into the limitations of an individual server.

16
Q

Does Elastic Beanstalk support Docker? What about Auto-scaling?

A

Elastic Beanstalk supports the deployment of web applications from Docker containers.

With Docker containers, you can define your own runtime environment.

You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren’t supported by other platforms.

Docker containers are self-contained and include all the configuration information and software your web application requires to run.

By using Docker with Elastic Beanstalk, you have an infrastructure that automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

You can manage your web application in an environment that supports the range of services that are integrated with Elastic Beanstalk, including but not limited to VPC, RDS, and IAM.

17
Q

Does ECS AUTOMATICALLY provide Service Auto Scaling, Service Load Balancing, and Monitoring with CloudWatch?

A

No. ECS supports auto scaling, load balancing, and monitoring with CloudWatch, but these features must be enabled.

You will have to manually configure these things if you wish to use ECS.

18
Q

AWS Consolidated Billing

A

You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts.

With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts.

You can also get a cost report for each member account that is associated with your master account.

Consolidated billing is offered at no additional charge. AWS and AISPL accounts can’t be consolidated together.

19
Q

Network Load Balancer

A

best suited for load balancing of TCP traffic where extreme performance is required.

Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies.

Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.

20
Q

What port is RDP?

A

TCP 3389 and UDP 3389

21
Q

What are the valid case scenarios in using Enhanced Networking?

A

When you need higher packet per second performance.

When you need consistently lower inter-instance latencies.

Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.

22
Q

What is the only required section in CloudFormation?

A

The Resources section.

However, as you build your template, it might be helpful to use the logical ordering of the following list, as values in one section might refer to values from a previous section. Take note that all of the sections here are optional, except for Resources, which is the only one required.

  • Format Version
  • Description
  • Metadata
  • Parameters
  • Mappings
  • Conditions
  • Transform
  • Resources (required)
  • Outputs
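
A sketch of the smallest valid template, containing nothing but the required Resources section (the bucket resource is an arbitrary example; validated here with boto3):

    import json
    import boto3

    # The Resources section is the only required top-level section.
    template = {
        "Resources": {
            "MyBucket": {"Type": "AWS::S3::Bucket"}
        }
    }

    cfn = boto3.client("cloudformation")
    cfn.validate_template(TemplateBody=json.dumps(template))  # raises if invalid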
23
Q

What is Blue/Green deployment?

A

Blue/Green deployment sets up a new green environment which uses entirely new AWS resources.

Blue/green deployments provide a level of isolation between your blue and green application environments.

It ensures that spinning up a parallel green environment does not affect resources underpinning your blue environment. This isolation reduces your deployment risk.

This ability to simply roll traffic back to the still-operating blue environment is a key benefit of blue/green deployments. You can roll back to the blue environment at any time during the deployment process.

24
Q

Canary Deployment

A

A canary deployment routes a very small fraction of production traffic to the new (green) environment first, so the release is tested against real user traffic before a full cutover.

If you discover the green environment is not operating as expected, there is no impact on the blue environment. You can route traffic back to it, minimizing impaired operation or downtime, and limiting the blast radius of impact.

25
Q

Does Route 53 support DNSSEC?

A

Amazon Route 53’s DNS service does not support DNSSEC at this time. (Note: Route 53 has since added support for DNSSEC signing.)

However, their domain name registration service supports configuration of signed DNSSEC keys for domains when DNS service is configured at another provider.

26
Q

What are the route 53 supported DNS record types?

A
  • A (address record)
  • AAAA (IPv6 address record)
  • CNAME (canonical name record)
  • CAA (certification authority authorization)
  • MX (mail exchange record)
  • NAPTR (name authority pointer record)
  • NS (name server record)
  • PTR (pointer record)
  • SOA (start of authority record)
  • SPF (sender policy framework)
  • SRV (service locator)
  • TXT (text record)
27
Q

egress-only Internet gateway

A

horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

Take note that an egress-only Internet gateway is for use with IPv6 traffic only. To enable outbound-only Internet communication over IPv4, use a NAT gateway instead.

28
Q

AWS hosts a variety of public datasets such as satellite imagery, geospatial, or genomic data that you want to use for your web application hosted in Amazon EC2.

If you use these datasets, how much will it cost you?

A

AWS hosts a variety of public datasets that anyone can access for free.

Previously, large datasets such as satellite imagery or genomic data have required hours or days to locate, download, customize, and analyze. When data is made publicly available on AWS, anyone can analyze any volume of data without needing to download or store it themselves.

29
Q

Service Control Policies (SCPs)

A

AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts, automate account creation, and apply and manage policies for those groups.

Organizations enables you to centrally manage policies across multiple accounts, without requiring custom scripts and manual processes.

It allows you to create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts.

30
Q

Web Identity Federation

A

With web identity federation, you don’t need to create custom sign-in code or manage your own user identities.

Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account.

Using an IdP helps you keep your AWS account secure because you don’t have to embed and distribute long-term security credentials with your application.
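
A minimal sketch of the token exchange with boto3 (the role ARN and token are placeholders): the IdP token is traded for temporary credentials via STS.

    import boto3

    sts = boto3.client("sts")

    # The web identity token comes from the OIDC provider after user sign-in.
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/WebAppRole",  # hypothetical role
        RoleSessionName="app-user-session",
        WebIdentityToken="<token from Login with Amazon, Google, etc.>",
    )
    creds = resp["Credentials"]  # temporary AccessKeyId/SecretAccessKey/SessionToken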

31
Q

The low network latency and high network throughput in a dedicated placement group were working fine for a couple of weeks. However, when you try to add new instances to the placement group that already has running EC2 instances, you receive an ‘insufficient capacity error’.

How will you fix this issue?

A

It is recommended that you launch the number of instances that you need in the placement group in a single launch request and that you use the same instance type for all instances in the placement group.

If you try to add more instances to the placement group later, or if you try to launch more than one instance type in the placement group, you increase your chances of getting an insufficient capacity error.

If you stop an instance in a placement group and then start it again, it still runs in the placement group. However, the start fails if there isn’t enough capacity for the instance.

If you receive a capacity error when launching an instance in a placement group that already has running instances, stop and start all of the instances in the placement group, and try the launch again. Restarting the instances may migrate them to hardware that has capacity for all the requested instances.

32
Q

Lambda function default timeout? Maximum execution duration?

A

Default timeout is 3 seconds.

Maximum execution duration is 900 seconds, equivalent to 15 minutes.

33
Q

How do you run a Lambda function synchronously?

A

Use the Invoke API operation, or use an AWS SDK in your preferred runtime (synchronous invocation, InvocationType RequestResponse, is the default).

If you anticipate a long-running Lambda function, your client may time out before function execution completes. To avoid this, update the client timeout or your SDK configuration.
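
A sketch of a synchronous invocation with boto3, with the SDK read timeout raised to cover a long-running function (the function name and payload are made up):

    import json
    import boto3
    from botocore.config import Config

    # Raise the read timeout so the client does not give up before the
    # function finishes (Lambda functions can run for up to 900 seconds).
    lambda_client = boto3.client(
        "lambda",
        config=Config(read_timeout=910, retries={"max_attempts": 0}),
    )

    resp = lambda_client.invoke(
        FunctionName="my-function",            # hypothetical function
        InvocationType="RequestResponse",      # synchronous invocation
        Payload=json.dumps({"key": "value"}),
    )
    result = json.loads(resp["Payload"].read())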

34
Q

What does the UpdateShardCount command do?

A

To update the shard count, Kinesis Data Streams performs splits or merges on individual shards.

Updating the shard count is an asynchronous operation. Upon receiving the request, Kinesis Data Streams returns immediately and sets the status of the stream to UPDATING. After the update is complete, Kinesis Data Streams sets the status of the stream back to ACTIVE. Depending on the size of the stream, the scaling action could take a few minutes to complete. You can continue to read and write data to your stream while its status is UPDATING.
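
A sketch of the call with boto3 (the stream name and target count are arbitrary):

    import boto3

    kinesis = boto3.client("kinesis")

    # The stream status goes to UPDATING, then back to ACTIVE when done.
    kinesis.update_shard_count(
        StreamName="my-stream",            # hypothetical stream
        TargetShardCount=4,
        ScalingType="UNIFORM_SCALING",
    )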

35
Q

A global medical research company has a molecular imaging system which provides each client with frequently updated images of what is happening inside the human body at the molecular and cellular level. The system is hosted in AWS and the images are hosted in an S3 bucket behind a CloudFront web distribution. There was a new batch of updated images that were uploaded in S3, however, the users were reporting that they were still seeing the old content. You need to control which image will be returned by the system even when the user has another version cached either locally or behind a corporate caching proxy.

Which of the following is the most suitable solution to solve this issue?

A

Use Versioned objects

  • Versioning enables you to control which file a request returns even when the user has a version cached either locally or behind a corporate caching proxy. If you invalidate the file, the user might continue to see the old version until it expires from those caches.
  • CloudFront access logs include the names of your files, so versioning makes it easier to analyze the results of file changes.
  • Versioning provides a way to serve different versions of files to different users.
  • Versioning simplifies rolling forward and back between file revisions.
  • Versioning is less expensive. You still have to pay for CloudFront to transfer new versions of your files to edge locations, but you don’t have to pay for invalidating files.
36
Q

NAT Gateway or NAT Instance?

A

AWS offers two kinds of NAT devices, a NAT gateway and a NAT instance. It is recommended to use NAT gateways, as they provide better availability and bandwidth than NAT instances.

NAT Gateway is a managed service that does not require administration effort on your part. A NAT instance, by contrast, is launched from a NAT AMI and must be managed by you.

37
Q

CloudWatch Logs Agent

A

The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances.

The CloudWatch Logs agent consists of the following components:

  • A plug-in to the AWS CLI that pushes log data to CloudWatch Logs.
  • A script (daemon) that initiates the process to push data to CloudWatch Logs.
  • A cron job that ensures that the daemon is always running.
38
Q

What do you need to do to enable cross-region replication in S3?

A

The source and destination buckets must have versioning enabled.

The source and destination buckets must be in different AWS Regions.

Amazon S3 must have permissions to replicate objects from that source bucket to the destination bucket on your behalf.
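
A sketch of those steps with boto3 (the bucket names and role ARN are placeholders): versioning is enabled on the source bucket (repeat for the destination), then a replication configuration names the IAM role Amazon S3 assumes.

    import boto3

    s3 = boto3.client("s3")

    # 1. Versioning must be enabled on BOTH buckets (run the same call
    #    against the destination bucket in its own region).
    s3.put_bucket_versioning(
        Bucket="my-source-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # 2. The replication config names the role S3 assumes to copy objects.
    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
            "Rules": [{
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = replicate all objects
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }],
        },
    )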

39
Q

AWS Global Accelerator

A

a service that improves the availability and performance of your applications with local or global users.

It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances.

40
Q

ElastiCache (Redis and Memcached) vs. DynamoDB and Multi-AZ RDS for storing session state?

A

For sub-millisecond latency caching, ElastiCache is the best choice. To address scalability and to provide shared data storage for sessions that can be accessed from any individual web server, you can abstract the HTTP sessions from the web servers themselves.

A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached.

Multi-master DynamoDB and Multi-AZ RDS are incorrect because although you can use DynamoDB and RDS for storing session state, these two are not the best choices in terms of cost-effectiveness and performance when compared to ElastiCache. There is a significant difference in terms of latency if you used DynamoDB and RDS when you store the session data.

41
Q

How do you force SSL to an RDS database?

A

If you want to force SSL, use the rds.force_ssl parameter.

By default, the rds.force_ssl parameter is set to false. Set the rds.force_ssl parameter to true to force connections to use SSL. The rds.force_ssl parameter is static, so after you change the value, you must reboot your DB instance for the change to take effect.

Download the Amazon RDS Root CA certificate. Import the certificate to your servers and configure your application to use SSL to encrypt the connection to RDS.
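
A sketch of flipping the parameter with boto3 (the parameter group name is hypothetical); because rds.force_ssl is static, the change takes effect at the next reboot:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_parameter_group(
        DBParameterGroupName="my-db-params",   # hypothetical custom group
        Parameters=[{
            "ParameterName": "rds.force_ssl",
            "ParameterValue": "1",             # 1 = reject non-SSL connections
            "ApplyMethod": "pending-reboot",   # static parameter: needs a reboot
        }],
    )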

42
Q

A brand new IAM user is intended to be used to send API requests to S3, DynamoDB, Lambda, and other AWS resources of your cloud infrastructure.

What must be done to allow the user to make API calls?

A

You can choose the credentials that are right for your IAM user. When you use the AWS Management Console to create a user, you must choose to at least include a console password or access keys. By default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. You must create the type of credentials for an IAM user based on the needs of your user.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services.

43
Q

Cache-Control max-age directive

A

If it is set to 0, no caching occurs.

You can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means your users get better performance because your objects are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin.
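
A sketch of setting the directive on an object at upload time with boto3 (the bucket and key are made up); CloudFront honors the header when deciding how long to keep the object in its edge caches:

    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="my-origin-bucket",        # hypothetical origin bucket
        Key="css/site.css",
        Body=b"body { margin: 0; }",
        ContentType="text/css",
        CacheControl="max-age=86400",     # cache for one day; max-age=0 disables caching
    )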

44
Q

Using deployment groups in CodeDeploy to automate code deployments consistently

A

CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions. It allows you to rapidly release new features, update Lambda function versions, avoid downtime during application deployment, and handle the complexity of updating your applications, without many of the risks associated with error-prone manual deployments.

45
Q

Path Conditions - Path based routing

A

You can use path conditions to define rules that forward requests to different target groups based on the URL in the request (also known as path-based routing).

This type of routing is the most appropriate solution for this scenario; hence, using path conditions to define rules that forward requests to different target groups based on the URL in the request is the correct answer.

Each path condition has one path pattern. If the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule.

A path pattern is case-sensitive, can be up to 128 characters in length, and can include up to three wildcard characters (* matches zero or more characters; ? matches exactly one character).
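
A sketch of a path-based rule on an ALB listener with boto3 (all ARNs are hypothetical): requests whose URL path matches /images/* are forwarded to a dedicated target group.

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "listener/app/my-alb/aaa/bbb",
        Priority=10,
        Conditions=[{
            "Field": "path-pattern",
            "Values": ["/images/*"],   # case-sensitive, wildcards allowed
        }],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/images/ccc",
        }],
    )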

46
Q

Host Conditions - Host-based routing

A

Host-based routing defines rules that forward requests to different target groups based on the host name in the Host header instead of the URL, which is what is needed in this scenario.

47
Q

If asked a question about storage on S3 for a limited time (a few hours, a day, a week, less than a month)?

A

The scenario requires you to select a cost-effective service that does not have a minimum storage duration since the data will only last for 12 hours.

Among the options given, only Amazon S3 Standard has the feature of no minimum storage duration. It is also the most cost-effective storage service because you will only be charged for the last 12 hours, unlike in other storage classes where you will still be charged based on its respective storage duration (e.g. 30 days, 90 days, 180 days).

48
Q

What is the correct indication that an object was successfully stored when you put objects in Amazon S3?

A

HTTP 200 result code and MD5 checksum

If you triggered an S3 API call and got an HTTP 200 result code and MD5 checksum, then it is considered a successful upload. The S3 API will return an error code if the upload is unsuccessful.

49
Q

FSx for Lustre

A

Keywords: AMAZON S3

EFS is incorrect if S3 is mentioned.

Amazon FSx for Lustre provides a high-performance file system optimized for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA). These workloads commonly require data to be presented via a fast and scalable file system interface, and typically have data sets stored on long-term data stores like Amazon S3.

50
Q

Multicast

A

Multicast is a network capability that allows one-to-many distribution of data. With multicasting, one or more sources can transmit network packets to subscribers that typically reside within a multicast group. However, take note that Amazon VPC does not support multicast or broadcast networking.

51
Q

Creating a virtual overlay network running on the OS level of the instance

A

Overlay multicast is a method of building IP-level multicast across a network fabric supporting unicast IP routing, such as Amazon Virtual Private Cloud (Amazon VPC).

52
Q

What is the default Kinesis stream data retention period? The maximum?

A

24 hours by default.

The maximum is 7 days.

53
Q

Is inter-region VPC peering possible?

A

Yes.

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

54
Q

Where does Elastic Beanstalk store application files and server log files?

A

AWS Elastic Beanstalk stores your application files and, optionally, server log files in Amazon S3.

If you are using the AWS Management Console, the AWS Toolkit for Visual Studio, or AWS Toolkit for Eclipse, an Amazon S3 bucket will be created in your account and the files you upload will be automatically copied from your local client to Amazon S3.

Optionally, you may configure Elastic Beanstalk to copy your server log files every hour to Amazon S3. You do this by editing the environment configuration settings.

55
Q

When does a single Elastic IP not incur charges?

A

An Elastic IP address doesn’t incur charges as long as the following conditions are true:

  • The Elastic IP address is associated with an Amazon EC2 instance.
  • The instance associated with the Elastic IP address is running.
  • The instance has only one Elastic IP address attached to it.
56
Q

What is the recommended storage engine for MySQL RDS?

A

InnoDB

57
Q

What storage engine is not recommended for MySQL RDS?

A

MyISAM

58
Q

You are a Solutions Architect working for a software development company. You are planning to launch a fleet of EBS-backed EC2 instances and want to automatically assign each instance with a static private IP address which does not change even if the instances are restarted.

What should you do to accomplish this?

A

In EC2-Classic, your EC2 instance receives a private IPv4 address from the EC2-Classic range each time it’s started.

In EC2-VPC on the other hand, your EC2 instance receives a static private IPv4 address from the address range of your default VPC.

Hence, the correct answer is launching the instances in the Amazon Virtual Private Cloud (VPC) and not launching the instances in EC2-Classic.

59
Q

CloudHSM

A

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on dedicated hardware in the AWS Cloud.

If you want a managed service for creating and controlling your encryption keys, but you don’t want or need to operate your own HSM, consider using AWS Key Management Service instead.

60
Q

A requirement is to use VPC flow logs to monitor all the COPY and UNLOAD traffic of your Redshift cluster that moves in and out of your VPC.

Which of the following is the most suitable solution to implement in this scenario?

A

Enable Enhanced VPC routing on your Amazon Redshift cluster.

When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC.

By using Enhanced VPC Routing, you can use standard VPC features, such as VPC security groups, network access control lists (ACLs), VPC endpoints, VPC endpoint policies, internet gateways, and Domain Name System (DNS) servers. Hence, enabling Enhanced VPC routing on your Amazon Redshift cluster is the correct answer.

You use these features to tightly manage the flow of data between your Amazon Redshift cluster and other resources. When you use Enhanced VPC Routing to route traffic through your VPC, you can also use VPC flow logs to monitor COPY and UNLOAD traffic. If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.

61
Q

An application is hosted in an AWS Fargate cluster that runs a batch job whenever an object is loaded on an Amazon S3 bucket. The minimum number of ECS Tasks is initially set to 1 to save on costs, and it will only increase the task count based on the new objects uploaded on the S3 bucket. Once processing is done, the bucket becomes empty and the ECS Task count should be back to 1.

Which is the most suitable option to implement with the LEAST amount of effort?

A

You can use CloudWatch Events to run Amazon ECS tasks when certain AWS events occur. You can set up a CloudWatch Events rule that runs an Amazon ECS task whenever a file is uploaded to a certain Amazon S3 bucket using the Amazon S3 PUT operation. You can also declare a reduced number of ECS tasks whenever a file is deleted on the S3 bucket using the DELETE operation.

First, you must create a CloudWatch Events rule for the S3 service that will watch for object-level operations – PUT and DELETE objects. For object-level operations, it is required to create a CloudTrail trail first. In the Targets section, select “ECS task” and input the needed values such as the cluster name, task definition, and task count. You need two rules – one for the scale-up and another for the scale-down of the ECS task count.

62
Q

Are IAM policies like firewall rules?

A

NO. If a permission such as a put or get is explicitly allowed in one statement and explicitly denied in another, it is simply denied: an explicit deny always overrides an allow.

This is unlike firewall rules, which are evaluated in order and can match an allow rule and let traffic pass before ever reaching the deny rule.
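
A sketch of the evaluation logic in a (hypothetical) identity policy: the user can never call s3:PutObject on the bucket, even though the first statement allows it, because the explicit deny always wins.

    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Broad allow...
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::example-bucket/*",
            },
            {   # ...but an explicit deny overrides it, regardless of order.
                "Effect": "Deny",
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
            },
        ],
    }

    boto3.client("iam").put_user_policy(
        UserName="example-user",
        PolicyName="explicit-deny-wins",
        PolicyDocument=json.dumps(policy),
    )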

63
Q

Enable Enhanced Monitoring in RDS

A

Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring system of your choice. By default, Enhanced Monitoring metrics are stored in the CloudWatch Logs for 30 days. To modify the amount of time the metrics are stored in the CloudWatch Logs, change the retention for the RDSOSMetrics log group in the CloudWatch console.

monitors:

  • the processes or threads on a DB instance that are using the CPU
  • the percentage of CPU bandwidth used by each process
  • the total memory used by each process
64
Q

There is a requirement to optimize your database workloads in your cluster where you have to direct the write operations of the production traffic to your high-capacity instances and point the reporting queries sent by your internal staff to the low-capacity instances.

Which is the most suitable configuration for your application as well as your Aurora database cluster to achieve this requirement?

A

Amazon Aurora typically involves a cluster of DB instances instead of a single instance. Each connection is handled by a specific DB instance. When you connect to an Aurora cluster, the host name and port that you specify point to an intermediate handler called an endpoint. Aurora uses the endpoint mechanism to abstract these connections. Thus, you don’t have to hardcode all the hostnames or write your own logic for load-balancing and rerouting connections when some DB instances aren’t available.

For certain Aurora tasks, different instances or groups of instances perform different roles. For example, the primary instance handles all data definition language (DDL) and data manipulation language (DML) statements. Up to 15 Aurora Replicas handle read-only query traffic.

Using endpoints, you can map each connection to the appropriate instance or group of instances based on your use case. For example, to perform DDL statements you can connect to whichever instance is the primary instance. To perform queries, you can connect to the reader endpoint, with Aurora automatically performing load-balancing among all the Aurora Replicas. For clusters with DB instances of different capacities or configurations, you can connect to custom endpoints associated with different subsets of DB instances. For diagnosis or tuning, you can connect to a specific instance endpoint to examine details about a specific DB instance.

The custom endpoint provides load-balanced database connections based on criteria other than the read-only or read-write capability of the DB instances. For example, you might define a custom endpoint to connect to instances that use a particular AWS instance class or a particular DB parameter group. Then you might tell particular groups of users about this custom endpoint. For example, you might direct internal users to low-capacity instances for report generation or ad hoc (one-time) querying, and direct production traffic to high-capacity instances. Hence, creating a custom endpoint in Aurora based on the specified criteria for the production traffic and another custom endpoint to handle the reporting queries is the correct answer.
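
A sketch of creating such a custom endpoint with boto3 (the cluster and instance identifiers are made up): reporting users connect to this endpoint, and Aurora load-balances only across the named low-capacity replicas.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_cluster_endpoint(
        DBClusterIdentifier="my-aurora-cluster",
        DBClusterEndpointIdentifier="reporting",
        EndpointType="READER",
        StaticMembers=["replica-small-1", "replica-small-2"],  # low-capacity replicas
    )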

65
Q

AWS Nitro System

A

The AWS Nitro System is the underlying platform for the latest generation of EC2 instances. It enables AWS to innovate faster, further reduce costs for customers, and deliver added benefits like increased security and new instance types.

Amazon EBS is a persistent block storage volume. It can persist independently from the life of an instance. Since the scenario requires you to have an EBS volume with up to 64,000 IOPS, you have to launch a Nitro-based EC2 instance.

66
Q

Amazon Aurora Global Database

A

Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.

Aurora Global Database supports storage-based replication that has a latency of less than 1 second. If there is an unplanned outage, one of the secondary regions you assigned can be promoted to read and write capabilities in less than 1 minute. This feature is called Cross-Region Disaster Recovery. An RPO of 1 second and an RTO of less than 1 minute provides you a strong foundation for a global business continuity plan.

67
Q

ENI - Cold, warm, hot

A

Cold when attached at launch, warm when attached to a STOPPED instance, and hot when attached to a RUNNING instance.

68
Q

AWS Config

A

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.

Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines.

69
Q

AWS Secrets Manager

A

AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets.

Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text.

You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs.
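
A sketch of fetching a secret at runtime with boto3 (the secret name and its JSON shape are hypothetical), instead of hardcoding credentials in the application:

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    resp = secrets.get_secret_value(SecretId="prod/myapp/db")
    creds = json.loads(resp["SecretString"])  # e.g. {"username": ..., "password": ...}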

70
Q

An organization needs to control the access for several S3 buckets. They plan to use a gateway endpoint to allow access to trusted buckets.

Which of the following could help you achieve this requirement?

A

Generate an endpoint policy for trusted S3 buckets.

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

When you create a VPC endpoint, you can attach an endpoint policy that controls access to the service to which you are connecting. You can modify the endpoint policy attached to your endpoint and add or remove the route tables used by the endpoint. An endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It is a separate policy for controlling access from the endpoint to the specified service.
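
A sketch of such an endpoint policy applied with boto3 (the endpoint ID and bucket names are placeholders): only the trusted buckets are reachable through the gateway endpoint.

    import json
    import boto3

    # Allow S3 access through this endpoint only for the trusted buckets.
    policy = {
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::trusted-bucket",
                "arn:aws:s3:::trusted-bucket/*",
            ],
        }]
    }

    boto3.client("ec2").modify_vpc_endpoint(
        VpcEndpointId="vpce-0123456789abcdef0",   # hypothetical endpoint ID
        PolicyDocument=json.dumps(policy),
    )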

71
Q

Active-Active Failover

A

Active-Active Failover

Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Route 53 can detect that it’s unhealthy and stop including it when responding to queries.

In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the same routing policy (such as weighted or latency) are active unless Route 53 considers them unhealthy. Route 53 can respond to a DNS query using any healthy record.

72
Q

Your manager has asked you to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking.

Which of the following combination of services should you use to meet this requirement?

A

DynamoDB and AppSync

DynamoDB is a durable, scalable, and highly available data store which can be used for real-time tabulation.

You can also use AppSync with DynamoDB to make it easy for you to build collaborative apps that keep shared data updated in real time. You just specify the data for your app with simple code statements and AWS AppSync manages everything needed to keep the app data updated in real time.

This will allow your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries and combine data from these services to provide the exact data you need for your app.

73
Q

DynamoDB and CloudFront are incompatible

A

DynamoDB and CloudFront are incompatible

74
Q

You are employed by a large electronics company that uses Amazon Simple Storage Service. For reporting purposes, they want to track and log every request access to their S3 buckets including the requester, bucket name, request time, request action, referrer, turnaround time, and error code information. The solution should also provide more visibility into the object-level operations of the bucket.

Which is the best solution among the following options that can satisfy the requirement?

A

Server Access Logging

You can use AWS CloudTrail logs together with server access logs for Amazon S3. CloudTrail logs provide you with detailed API tracking for Amazon S3 bucket-level and object-level operations, while server access logs for Amazon S3 provide you visibility into object-level operations on your data in Amazon S3.

75
Q

You are a Big Data Engineer who is assigned to handle the online enrollment system database of a prestigious university, which is hosted in RDS. You are required to monitor the database metrics in Amazon CloudWatch to ensure the availability of the enrollment system.

What are the enhanced monitoring metrics that Amazon CloudWatch gathers from Amazon RDS DB instances which provide more accurate information? (Select TWO.)

A

RDS child processes – Shows a summary of the RDS processes that support the DB instance, for example aurora for Amazon Aurora DB clusters and mysqld for MySQL DB instances. Process threads appear nested beneath the parent process. Process threads show CPU utilization only as other metrics are the same for all threads for the process. The console displays a maximum of 100 processes and threads. The results are a combination of the top CPU consuming and memory consuming processes and threads. If there are more than 50 processes and more than 50 threads, the console displays the top 50 consumers in each category. This display helps you identify which processes are having the greatest impact on performance.

RDS processes – Shows a summary of the resources used by the RDS management agent, diagnostics monitoring processes, and other AWS processes that are required to support RDS DB instances.

OS processes – Shows a summary of the kernel and system processes, which generally have minimal impact on performance.

CPU Utilization, Database Connections, and Freeable Memory are incorrect because these are just the regular items provided by Amazon RDS Metrics in CloudWatch. Remember that the scenario is asking for the Enhanced Monitoring metrics.

76
Q

An application is using a RESTful API hosted in AWS which uses Amazon API Gateway and AWS Lambda. There is a requirement to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services.

Which of the following is the most suitable service to use to meet this requirement?

A

AWS X-Ray

You can use AWS X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. API Gateway supports AWS X-Ray tracing for all API Gateway endpoint types: regional, edge-optimized, and private. You can use AWS X-Ray with Amazon API Gateway in all regions where X-Ray is available.

X-Ray gives you an end-to-end view of an entire request, so you can analyze latencies in your APIs and their backend services. You can use an X-Ray service map to view the latency of an entire request and that of the downstream services that are integrated with X-Ray. And you can configure sampling rules to tell X-Ray which requests to record, at what sampling rates, according to criteria that you specify. If you call an API Gateway API from a service that’s already being traced, API Gateway passes the trace through, even if X-Ray tracing is not enabled on the API.

You can enable X-Ray for an API stage by using the API Gateway management console, or by using the API Gateway API or CLI.

77
Q

An auto-scaling group of Linux EC2 instances is created with basic monitoring enabled in CloudWatch. You noticed that your application is slow so you asked one of your engineers to check all of your EC2 instances. After checking your instances, you noticed that the auto scaling group is not launching more instances as it should be, even though the servers already have high memory usage.

Which of the following are possible solutions that an Architect can implement to solve this issue? (Select TWO.)

A

Install the CloudWatch monitoring scripts on the instances and send custom memory metrics to CloudWatch, which can trigger your ASG to scale up.

Install the CloudWatch agent on the EC2 instances to publish memory metrics to CloudWatch, which can trigger your ASG to scale up.

78
Q

A data analytics company keeps a massive volume of data that they store in their on-premises data center. To scale their storage systems, they are looking for cloud-backed storage volumes that they can mount using Internet Small Computer System Interface (iSCSI) devices from their on-premises application servers. They have an on-site data analytics application that frequently accesses the latest data subsets locally while the older data are rarely accessed. You are required to minimize the need to scale the on-premises storage infrastructure while still providing their web application with low-latency access to the data.

Which type of AWS Storage Gateway service will you use to meet the above requirements?

A

Volume Gateway in cached mode

In this scenario, the technology company is looking for a storage service that will enable their analytics application to frequently access the latest data subsets and not the entire data set (as it was mentioned that the old data are rarely being used). This requirement can be fulfilled by setting up a Cached Volume Gateway in AWS Storage Gateway.

By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to frequently accessed data. You can create storage volumes up to 32 TiB in size and afterward attach these volumes as iSCSI devices to your on-premises application servers. When you write to these volumes, your gateway stores the data in Amazon S3. It retains recently read data in your on-premises storage gateway’s cache and upload buffer storage.

Cached volumes can range from 1 GiB to 32 TiB in size and must be rounded to the nearest GiB. Each gateway configured for cached volumes can support up to 32 volumes for a total maximum storage volume of 1,024 TiB (1 PiB).

In the cached volumes solution, AWS Storage Gateway stores all your on-premises application data in a storage volume in Amazon S3. Hence, the correct answer is: Volume Gateway in cached mode.

79
Q

You are working for a multinational telecommunications company. Your IT Manager is willing to consolidate their log streams including the access, application, and security logs in one single system. Once consolidated, the company wants to analyze these logs in real-time based on heuristics. There will be some time in the future where the company will need to validate heuristics, which requires going back to data samples extracted from the last 12 hours.

What is the best approach to meet this requirement?

A

First, send all of the log events to Amazon Kinesis, then develop a client process to apply heuristics on the logs.

In this scenario, you need a service that can collect, process, and analyze data in real-time hence, the right service to use here is Amazon Kinesis.

Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application.

With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.

All other options are incorrect since these services do not have real-time processing capability, unlike Amazon Kinesis.

80
Q

A company needs secure access to its Amazon RDS for MySQL database that is used by multiple applications. Each IAM user must use a short-lived authentication token to connect to the database.

Which of the following is the most suitable solution in this scenario?

A

Use IAM DB Authentication and create database accounts using the AWS-provided “AWSAuthenticationPlugin” plugin in MySQL.

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance.

An authentication token is a string of characters that you use instead of a password. After you generate an authentication token, it’s valid for 15 minutes before it expires. If you try to connect using an expired token, the connection request is denied.

Since the scenario asks you to create a short-lived authentication token to access an Amazon RDS database, you can use an IAM database authentication when connecting to a database instance. Authentication is handled by AWSAuthenticationPlugin—an AWS-provided plugin that works seamlessly with IAM to authenticate your IAM users.

IAM database authentication provides the following benefits:

Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).

You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.

For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
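
A sketch of connecting with a token instead of a password (the hostname and user are made up; assumes the PyMySQL package is installed): generate_db_auth_token signs a short-lived token that a database account created with AWSAuthenticationPlugin accepts.

    import boto3
    import pymysql  # assumed to be installed

    HOST = "mydb.123456789012.us-east-1.rds.amazonaws.com"  # hypothetical endpoint

    # The token is valid for 15 minutes; no password is stored anywhere.
    token = boto3.client("rds").generate_db_auth_token(
        DBHostname=HOST,
        Port=3306,
        DBUsername="iam_user",   # account created with AWSAuthenticationPlugin
    )

    conn = pymysql.connect(
        host=HOST,
        port=3306,
        user="iam_user",
        password=token,
        ssl={"ca": "rds-combined-ca-bundle.pem"},  # SSL is required for IAM auth
    )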

81
Q

You are working as a Principal Solutions Architect for a leading digital news company which has both an on-premises data center as well as an AWS cloud infrastructure. They store their graphics, audios, videos, and other multimedia assets primarily in their on-premises storage server and use an S3 Standard storage class bucket as a backup. Their data are heavily used for only a week (7 days) but after that period, it will only be infrequently used by their customers. You are instructed to save storage costs in AWS yet maintain the ability to fetch a subset of their media assets in a matter of minutes for a surprise annual data audit, which will be conducted on their cloud storage.

Which of the following are valid options that you can implement to meet the above requirement? (Select TWO.)

A

Set a lifecycle policy on the bucket to transition the data to S3 Standard-IA after 30 days.

Set a lifecycle policy on the bucket to transition the data to Glacier after one week.

  • Objects must be stored at least 30 days in the current storage class before you can transition them to STANDARD_IA or ONEZONE_IA. For example, you cannot create a lifecycle rule to transition objects to the STANDARD_IA storage class one day after you create them. Amazon S3 doesn’t transition objects within the first 30 days because newer objects are often accessed more frequently or deleted sooner than is suitable for STANDARD_IA or ONEZONE_IA storage.

Since there is a time constraint in transitioning objects in S3, you can only change the storage class of your objects from S3 Standard storage class to STANDARD_IA or ONEZONE_IA storage after 30 days. This limitation does not apply on INTELLIGENT_TIERING, GLACIER, and DEEP_ARCHIVE storage class.

In addition, the requirement says that the media assets should be fetched in a matter of minutes for a surprise annual data audit. This means that the retrieval will only happen once a year. You can use expedited retrievals in Glacier which will allow you to quickly access your data (within 1–5 minutes) when occasional urgent requests for a subset of archives are required.
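
A sketch of the second option with boto3 (the bucket name is hypothetical): a lifecycle rule that transitions objects to Glacier seven days after creation.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="media-assets-backup",   # hypothetical backup bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "to-glacier-after-a-week",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to every object
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }],
        },
    )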

82
Q

You are working as a Solutions Architect for a leading technology company where you are instructed to troubleshoot the operational issues of your cloud architecture by logging the AWS API call history of your AWS resources. You need to quickly identify the most recent changes made to resources in your environment, including creation, modification, and deletion of AWS resources. One of the requirements is that the generated log files should be encrypted to avoid any security issues.

Which of the following is the most suitable approach to implement the encryption?

A

Use CloudTrail with its default settings

By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an AWS Key Management Service (AWS KMS) key. You can store your log files in your bucket for as long as you want. You can also define Amazon S3 lifecycle rules to archive or delete log files automatically. If you want notifications about log file delivery and validation, you can set up Amazon SNS notifications.