Random 3 Flashcards

1
Q

CloudWatch agent

A
  • To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch Logs, AWS offers the unified CloudWatch agent, which has the following advantages:
  • You can collect both logs and advanced metrics with the installation and configuration of just one agent.
  • The unified agent enables the collection of logs from servers running Windows Server.
  • If you are using the agent to collect CloudWatch metrics, the unified agent also enables the collection of additional system metrics, for in-guest visibility.
  • The unified agent provides better performance.
  • The agent enables you to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. It supports both Windows Server and Linux and allows you to select the metrics to be collected, including sub-resource metrics such as per-CPU core.
  • Install the CloudWatch agent in your EC2 instances to collect and monitor custom metrics such as memory usage.
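The points above can be illustrated with a minimal sketch of the agent's JSON configuration file, collecting the memory-usage metric alongside a log file. The field names follow the unified agent's configuration schema; the file path and log group name are hypothetical examples.

```python
import json

# Minimal sketch of a unified CloudWatch agent configuration that collects
# the custom memory-usage metric and one log file with a single agent.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]},
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/app/app.log",  # hypothetical path
                        "log_group_name": "my-app-logs",      # hypothetical name
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    },
}

print(json.dumps(agent_config, indent=2))
```

This single file replaces the separate configurations the older logs agent and custom-metric scripts required.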
2
Q

CloudWatch Logs Insights

A

CloudWatch Logs Insights enables you to interactively search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help you quickly and effectively respond to operational issues. If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and validate deployed fixes.

CloudWatch Logs Insights includes a purpose-built query language with a few simple but powerful commands. CloudWatch Logs Insights provides sample queries, command descriptions, query autocompletion, and log field discovery to help you get started quickly. Sample queries are included for several types of AWS service logs.
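As a sketch of that query language, the snippet below assembles a typical Logs Insights query in Python. The `fields`/`filter`/`sort`/`limit` commands are Logs Insights' own syntax; the error pattern is an illustrative example.

```python
# Build a CloudWatch Logs Insights query that surfaces recent errors.
query = " | ".join([
    "fields @timestamp, @message",   # project the columns to return
    "filter @message like /ERROR/",  # keep only error lines
    "sort @timestamp desc",          # newest first
    "limit 20",                      # cap the result set
])
print(query)
```

The resulting string would be passed as the query string to the StartQuery API (for example, boto3's `logs.start_query`) together with a log group name and a time range.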

3
Q

Amazon Inspector Agent

A

Amazon Inspector is a security assessment service that checks for unintended network accessibility of your EC2 instances and for vulnerabilities on those EC2 instances.

4
Q

EBS provisioning

A
  • Size Range: io1 volumes can range in size from 4 GiB to 16 TiB.
  • IOPS Provisioning: io1 allows you to specify the IOPS that you want. You can provision up to 64,000 IOPS per volume when attached to a Nitro-based EC2 instance and up to 32,000 IOPS per volume when attached to other instances.
  • Low Latency: It provides consistent low-latency performance, making it ideal for critical applications.
  • IOPS to Volume Size Ratio: For every 1 GiB of storage you provision, you can request up to 50 IOPS. This means for a 100 GiB volume, you can provision up to 5,000 IOPS.

io2 supports up to 500 IOPS per GiB. Size Range: io2 volumes can also be provisioned from 4 GiB to 16 TiB.
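The IOPS-to-size ratio above reduces to a simple calculation, sketched here as a small helper (the per-GiB ratios and hard caps are the ones quoted on this card):

```python
def max_provisionable_iops(size_gib: int, iops_per_gib: int, hard_cap: int) -> int:
    """Max IOPS you can request: size times the per-GiB ratio, bounded by the volume-type cap."""
    return min(size_gib * iops_per_gib, hard_cap)

# io1 at 50 IOPS/GiB, 64,000-IOPS cap on Nitro instances:
print(max_provisionable_iops(100, 50, 64000))   # -> 5000 (matches the 100 GiB example above)
# io2 at 500 IOPS/GiB:
print(max_provisionable_iops(100, 500, 64000))  # -> 50000
# A large io1 volume hits the per-volume cap before the ratio does:
print(max_provisionable_iops(2000, 50, 64000))  # -> 64000
```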

5
Q

Elastic Fabric Adapter

A

An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster with the scalability, flexibility, and elasticity provided by the AWS Cloud.

EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems. It enhances the performance of inter-instance communication which is critical for scaling HPC and machine learning applications. It is optimized to work on the existing AWS network infrastructure, and it can scale depending on application requirements.

EFA integrates with Libfabric 1.9.0, and it supports Open MPI 4.0.2 and Intel MPI 2019 Update 6 for HPC applications and Nvidia Collective Communications Library (NCCL) for machine learning applications.

Best for HPC applications (e.g., scientific simulations, computational fluid dynamics, finite element analysis), distributed ML training, and other applications requiring fast inter-node communication.
An Elastic Fabric Adapter (EFA) is simply an Elastic Network Adapter (ENA) with added capabilities. It provides all of the functionality of an ENA, with additional OS-bypass functionality. OS-bypass is an access model that allows HPC and machine learning applications to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality.

The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter without the added EFA capabilities.

6
Q

Elastic Network Adapters

A

Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to support VPC networking. EFAs provide all of the same traditional IP networking features as ENAs.
* Relies on standard TCP/IP networking, with enhancements like multi-queue for higher throughput.
* Best for general high-throughput applications, such as big data, analytics, media streaming, database clusters, and web servers.

7
Q

fault-tolerance

A

Basically, fault-tolerance is the ability of a system to remain in operation, without any service degradation, even when some of its components fail. In AWS, it can also refer to the minimum number of EC2 instances or resources that must be running at all times for the system to operate properly and serve its consumers.

9
Q

High Availability

A

High Availability is concerned with having at least one running instance or resource available in case of failure.

10
Q

S3 Cross-Region Replication

A

Enable Cross-Region Replication to ensure that your data remains available even if there is an outage in one of the Availability Zones or a regional service failure in a Region such as us-east-1. When you upload data to S3, your objects are redundantly stored on multiple devices across multiple facilities, but only within the Region where you created the bucket. Thus, if there is an outage affecting the entire Region, your S3 bucket will be unavailable unless you enable Cross-Region Replication, which makes your data available in another Region.
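A sketch of the replication configuration that S3's PutBucketReplication API (boto3: `s3.put_bucket_replication`) expects. The role ARN and bucket names are hypothetical, and note that versioning must already be enabled on both the source and destination buckets.

```python
# Replicate every object in the source bucket to a bucket in another Region.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical role
    "Rules": [
        {
            "ID": "ReplicateEverything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::my-dr-bucket-us-west-2"},
        }
    ],
}
# s3.put_bucket_replication(Bucket="my-primary-bucket",
#                           ReplicationConfiguration=replication_config)
```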

11
Q

AWS Storage Gateway

A

It acts as a bridge between your on-premises infrastructure and AWS cloud storage services like Amazon S3, Glacier, and EBS, enabling seamless data transfer and management.
AWS Storage Gateway offers three different types of gateways to meet different use cases:
* File Gateway: For file-based data transfer to S3. Only the file gateway can store and retrieve objects in Amazon S3 using the NFS and SMB protocols.
* Tape Gateway: For backup and archival solutions, using AWS cloud as a virtual tape library.
* Volume Gateway: Provides block storage volumes, either cached in the cloud or replicated on-premises and backed by AWS.

12
Q

Well-Architected Tool

A

You can use the AWS Well-Architected Tool to review the state of your workloads across your AWS account, conduct architectural reviews, and check your architecture against AWS best practices.

13
Q

DynamoDb global table

A

DynamoDB Global Tables is a fully managed, multi-region, multi-active database that allows you to replicate your Amazon DynamoDB tables across multiple AWS regions. It is designed to provide high availability and low-latency access to globally distributed applications, enabling real-time data access from different regions around the world.
* Multi-Region Replication
* Automatic Replication: DynamoDB Global Tables automatically replicate your data across multiple AWS regions, ensuring that updates made in one region are reflected across all other regions.
* Eventually Consistent Replication: Updates made in one region will eventually propagate to all other regions, usually within seconds, ensuring consistency across all regions.
* Conflict Resolution: concurrent updates to the same item are resolved using last writer wins (LWW).
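The last-writer-wins rule can be sketched as a tiny resolver: given the versions of one item seen by different replicas, keep the one with the latest write timestamp. Item shapes and timestamps below are illustrative, not DynamoDB's internal representation.

```python
def lww_merge(replica_items):
    """Pick the item version with the latest write timestamp (last writer wins)."""
    return max(replica_items, key=lambda item: item["last_updated"])

# Two regions updated the same item concurrently; the later write prevails.
us_east = {"pk": "user#1", "name": "Alice",  "last_updated": 1700000005}
eu_west = {"pk": "user#1", "name": "Alicia", "last_updated": 1700000009}
print(lww_merge([us_east, eu_west])["name"])  # -> Alicia
```

The practical consequence: the earlier concurrent write is silently discarded, which is why global tables suit workloads that rarely update the same item from multiple regions at once.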

14
Q

Amazon Managed Grafana

A

Amazon Managed Grafana is a fully managed service with rich, interactive data visualizations to help customers analyze, monitor, and alarm on metrics, logs, and traces across multiple data sources.

15
Q

RDS Multi-AZ

A

You can run an Amazon RDS DB instance in several AZs with a Multi-AZ deployment. Amazon RDS automatically provisions and maintains a secondary standby DB instance in a different AZ. Your primary DB instance is synchronously replicated across AZs to the standby instance to provide data redundancy and failover support, eliminate I/O freezes, and minimize latency spikes during system backups.
Note that read replicas do not provide automatic failover.

16
Q

AWS Transit Gateway

A

With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway to each Amazon VPC, on-premises data center, or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act like spokes. This hub and spoke model significantly simplifies management and reduces operational costs because each network only has to connect to the Transit Gateway and not to every other network. Any new VPC is simply connected to the Transit Gateway and is then automatically available to every other network that is connected to the Transit Gateway. This ease of connectivity makes it easy to scale your network as you grow.
A transit gateway attachment is both a source and a destination of packets. You can attach the following resources to your transit gateway:

  • One or more VPCs
  • One or more VPN connections
  • One or more AWS Direct Connect gateways
  • One or more transit gateway peering connections

If you attach a transit gateway peering connection, the transit gateway must be in a different Region.

For multi-Region networks: set up an AWS Transit Gateway in each Region to interconnect all networks within it, then route traffic between the transit gateways through a peering connection.

17
Q

AWS License Manager

A

AWS License Manager is a service that makes it easier for you to manage your software licenses from software vendors (for example, Microsoft, SAP, Oracle, and IBM) centrally across AWS and your on-premises environments. This provides control and visibility into the usage of your licenses, enabling you to limit licensing overages and reduce the risk of non-compliance and misreporting.

As you build out your cloud infrastructure on AWS, you can save costs by using Bring Your Own License model (BYOL) opportunities. That is, you can re-purpose your existing license inventory for use with your cloud resources.
If you are responsible for managing licenses in your organization, you can use License Manager to set up licensing rules, attach them to your launches, and keep track of usage. The users in your organization can then add and remove license-consuming resources without additional work.

License Manager reduces the risk of licensing overages and penalties with inventory tracking that is tied directly into AWS services. License Manager’s built-in dashboards provide ongoing visibility into license usage and assistance with vendor audits.

18
Q

How do you update an S3 bucket policy to serve content through CloudFront for dedicated users only?

A

You can update the Amazon S3 bucket policy using either the AWS Management Console or the Amazon S3 API:

  • Grant the CloudFront origin access identity the applicable permissions on the bucket.
  • Deny access to anyone that you don’t want to have access using Amazon S3 URLs.
19
Q

AWS Application Migration Service

A

AWS Application Migration Service (AWS MGN) is the primary migration service recommended for lift-and-shift migrations to AWS. AWS encourages customers who are currently using AWS Elastic Disaster Recovery to switch to AWS MGN for future migrations. AWS MGN enables organizations to move applications to AWS without having to make any changes to the applications, their architecture, or the migrated servers.
Implementation begins by installing the AWS Replication Agent on your source servers. When you launch Test or Cutover instances, AWS Application Migration Service automatically converts your source servers to boot and run natively on AWS.

20
Q

AWS Replication Agent

A

The AWS Replication Agent is a key component of AWS services designed to facilitate seamless data migration and disaster recovery from on-premises environments to the AWS cloud. It is used as part of two main AWS services: AWS Application Migration Service (MGN) and AWS Elastic Disaster Recovery (AWS DRS). The replication agent is installed on source servers (whether physical or virtual machines), where it continuously replicates the data to AWS without disrupting ongoing operations.

21
Q

AWS Application Discovery Service

A

The AWS Application Discovery Service helps you plan migrations by collecting configuration and usage data about your on-premises servers; you can then track the migration status of those applications from the Migration Hub console in your home Region. This service is not capable of doing the actual migration.

22
Q

AWS Step Functions

A

AWS Step Functions provides useful guarantees around task assignments. It ensures that a task is never duplicated and is assigned only once. Thus, even though you may have multiple workers for a particular activity type (or a number of instances of a decider), AWS Step Functions will give a specific task to only one worker (or one decider instance). Additionally, AWS Step Functions keeps at most one decision task outstanding at a time for workflow execution. Thus, you can run multiple decider instances without worrying about two instances operating on the same execution simultaneously. These facilities enable you to coordinate your workflow without worrying about duplicate, lost, or conflicting tasks.

23
Q

RDS, the Enhanced Monitoring metrics

A

RDS child processes – Shows a summary of the RDS processes that support the DB instance, for example aurora for Amazon Aurora DB clusters and mysqld for MySQL DB instances. Process threads appear nested beneath the parent process. Process threads show CPU utilization only as other metrics are the same for all threads for the process. The console displays a maximum of 100 processes and threads. The results are a combination of the top CPU-consuming and memory-consuming processes and threads. If there are more than 50 processes and more than 50 threads, the console displays the top 50 consumers in each category. This display helps you identify which processes are having the greatest impact on performance.

**RDS processes** – Shows a summary of the resources used by the RDS management agent, diagnostics monitoring processes, and other AWS processes that are required to support RDS DB instances.

**OS processes** – Shows a summary of the kernel and system processes, which generally have minimal impact on performance.

24
Q

s3 public access

A
  • You can also manage the public permissions of your objects during upload. Under Manage public permissions, you can grant read access to your objects to the general public (everyone in the world) for all of the files that you’re uploading. Granting public read access is applicable to a small subset of use cases, such as when buckets are used for websites.
  • Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies.
  • For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources.
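As an example of the resource-based policies described above, here is a sketch of a bucket policy granting public read access to all objects, suitable for the website use case mentioned. The bucket name is a hypothetical placeholder.

```python
import json

BUCKET = "my-website-bucket"  # hypothetical bucket name

# Bucket policy allowing anyone to GET (read) any object in the bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",             # everyone in the world
            "Action": "s3:GetObject",     # read-only: no list, write, or delete
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}
print(json.dumps(public_read_policy))
```

The serialized JSON would be attached with the console or an API such as boto3's `s3.put_bucket_policy`; the account's Block Public Access settings must also permit public policies for this to take effect.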
25
Q

EBS

A

Amazon EBS provides three volume types to best meet the needs of your workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic.
General Purpose (SSD) is the new, SSD-backed, general purpose EBS volume type that is recommended as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes.

**Provisioned IOPS (SSD)** volumes offer storage with consistent and low-latency performance and are designed for I/O-intensive applications such as large relational or NoSQL databases.

Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types. They are ideal for workloads where data is accessed infrequently and applications where the lowest storage cost is important. Take note that this is a Previous Generation volume type. The latest low-cost magnetic storage types are Cold HDD (sc1) and Throughput Optimized HDD (st1) volumes.

26
Q

Security group updates

A

You can specify another security group's ID as the source of an inbound rule, allowing traffic only from resources associated with that group.
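A sketch of the parameter shape for EC2's AuthorizeSecurityGroupIngress API (boto3: `ec2.authorize_security_group_ingress`) when the source is another security group rather than a CIDR block. The group IDs and the MySQL port are hypothetical examples.

```python
# Allow TCP 3306 only from instances associated with the source security group.
ingress_params = {
    "GroupId": "sg-0aaa1111bbbb22222",  # the group being updated (hypothetical)
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # e.g. MySQL traffic from an app tier
            "ToPort": 3306,
            "UserIdGroupPairs": [
                # Source is a security group ID instead of a CidrIp entry
                {"GroupId": "sg-0ccc3333dddd44444"}
            ],
        }
    ],
}
# ec2.authorize_security_group_ingress(**ingress_params)
```

Referencing a group ID instead of IP ranges means the rule keeps working as instances in the source group come and go.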

27
Q

RAID 0

A

RAID 0 configuration enables you to improve your storage volumes’ performance by distributing the I/O across the volumes in a stripe. Therefore, if you add a storage volume, you get the straight addition of throughput and IOPS. This configuration can be implemented on either EBS or instance store volumes. When the main requirement is storage performance, use an instance store volume. It uses NVMe or SATA-based SSD to deliver high random I/O performance. This type of storage is a good option when you need storage with very low latency and you don’t need the data to persist when the instance terminates.

28
Q

NAT pricing

A

NAT gateway hourly usage and data processing rates apply. Amazon EC2 charges for data transfer also apply. NAT gateways are not supported for IPv6 traffic—use an egress-only internet gateway instead.

29
Q

S3 Select

A

Amazon S3 Select is designed to help analyze and process data within an object in Amazon S3 buckets, faster and cheaper. It works by providing the ability to retrieve a subset of data from an object in Amazon S3 using simple SQL expressions. Your applications no longer have to use compute resources to scan and filter the data from an object, potentially increasing query performance by up to 400%, and reducing query costs as much as 80%. You simply change your application to use SELECT instead of GET to take advantage of S3 Select.
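A sketch of the parameters for the SelectObjectContent API (boto3: `s3.select_object_content`), which is how the SELECT-instead-of-GET change looks in practice. The bucket, key, and column names are hypothetical; the expression pulls only matching CSV rows instead of downloading the whole object.

```python
# Retrieve only failed requests from a CSV object, server-side, via S3 Select.
select_params = {
    "Bucket": "my-data-bucket",          # hypothetical bucket
    "Key": "logs/2024/requests.csv",     # hypothetical object key
    "ExpressionType": "SQL",
    "Expression": (
        "SELECT s.request_id, s.status "
        "FROM s3object s WHERE s.status = '500'"
    ),
    # Tell S3 how to parse the object and how to serialize the results.
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"JSON": {}},
}
# response = s3.select_object_content(**select_params)
```

The response arrives as an event stream of result records, so the application processes only the filtered subset.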

30
Q

Amazon Redshift Spectrum

A

Amazon Redshift also includes Redshift Spectrum, allowing you to directly run SQL queries against exabytes of unstructured data in Amazon S3. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, ORC, Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data being retrieved, so queries against Amazon S3 run fast, regardless of data set size.

31
Q

ports

A

HTTP 80
HTTPS 443

32
Q

EC2 S3 COSTS

A

To minimize the data transfer charges, you need to deploy the EC2 instance in the same Region as Amazon S3. Take note that there is no data transfer cost between S3 and EC2 in the same AWS Region. Install the conversion software on the instance to perform data transformation and re-upload the data to Amazon S3.

33
Q

Amazon EMR

A

Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

34
Q

AWS Data Pipeline

A

This is primarily used as a cloud-based data workflow service that helps you process and move data between different AWS services and on-premises data sources.

35
Q

Kinesis Data Streams

A

Kinesis can do the job just fine because of its architecture. A Kinesis data stream is a set of shards that has a sequence of data records, and each data record has a sequence number that is assigned by Kinesis Data Streams. Kinesis can also easily handle the high volume of messages being sent to the service.

Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).
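The per-partition-key routing described above can be sketched in Python: Kinesis hashes each partition key with MD5 to a 128-bit integer and routes the record to the shard whose hash-key range contains it. This is a simplified mimic assuming equal-sized shard ranges; real streams can have uneven ranges after resharding.

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Mimic Kinesis routing: MD5 the partition key to a 128-bit integer,
    then map it onto one of num_shards equal hash-key ranges."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards
    return min(h // range_size, num_shards - 1)

# Records with the same partition key always map to the same shard,
# which is what preserves per-key ordering for consumers like the KCL.
shard = shard_for_key("user-42", 4)
print(shard, shard_for_key("user-42", 4) == shard)
```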

36
Q

AWS DataSync

A

AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity.

AWS DataSync simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Connect. DataSync can copy data between Network File System (NFS), Server Message Block (SMB) file servers, self-managed object storage, or AWS Snowcone, and Amazon Simple Storage Service (Amazon S3) buckets, Amazon EFS file systems, and Amazon FSx for Windows File Server file systems.
You deploy an AWS DataSync agent to your on-premises hypervisor or in Amazon EC2. To copy data to or from an on-premises file server, you download the agent virtual machine image from the AWS Console and deploy to your on-premises VMware ESXi, Linux Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisor. To copy data to or from an in-cloud file server, you create an Amazon EC2 instance using a DataSync agent AMI. In both cases the agent must be deployed so that it can access your file server using the NFS, SMB protocol, or the Amazon S3 API. To set up transfers between your AWS Snowcone device and AWS storage, use the DataSync agent AMI that comes pre-installed on your device.

When you plan to use AWS Direct Connect for private connectivity between on-premises and AWS, you can use DataSync to automate and accelerate online data transfers to AWS storage services. The AWS DataSync agent is deployed in your on-premises network to accelerate data transfer to AWS. To connect programmatically to an AWS service, you use an AWS Direct Connect service endpoint.

37
Q

S3 ENCRYPTION, KEY ROTATION

A
  • Amazon S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data, 256-bit Advanced Encryption Standard (AES-256).
  • If you need server-side encryption for all of the objects that are stored in a bucket, use a bucket policy. You can create a bucket policy that denies permissions to upload an object unless the request includes the x-amz-server-side-encryption header to request server-side encryption.
  • Automatic key rotation is disabled by default on customer managed keys but authorized users can enable and disable it. When you enable (or re-enable) automatic key rotation, AWS KMS automatically rotates the KMS key one year (approximately 365 days) after the enable date and every year thereafter.

AWS KMS automatically rotates AWS managed keys every year (approximately 365 days). You cannot enable or disable key rotation for AWS managed keys.
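The bucket policy described in the second bullet can be sketched as follows: deny any PutObject request whose `x-amz-server-side-encryption` header is missing or does not match the required algorithm. The bucket name and the choice of `aws:kms` as the required value (rather than `AES256`) are assumptions for illustration.

```python
import json

BUCKET = "my-secure-bucket"  # hypothetical bucket name

# Deny uploads that do not request SSE-KMS server-side encryption.
deny_unencrypted_uploads = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                # Matches requests where the header is absent or has another value
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        }
    ],
}
print(json.dumps(deny_unencrypted_uploads))
```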

38
Q

Amazon S3 Glacier (Glacier) vault

A

An Amazon S3 Glacier (Glacier) vault can have one resource-based vault access policy and one Vault Lock policy attached to it. A Vault Lock policy is a vault access policy that you can lock. Using a Vault Lock policy can help you enforce regulatory and compliance requirements. Amazon S3 Glacier provides a set of API operations for you to manage the Vault Lock policies.

As an example of a Vault Lock policy, suppose that you are required to retain archives for one year before you can delete them. To implement this requirement, you can create a Vault Lock policy that denies users permission to delete an archive until the archive has existed for one year. You can test this policy before locking it down. After you lock the policy, the policy becomes immutable. For more information about the locking process, see Amazon S3 Glacier Vault Lock. If you want to manage other user permissions that can be changed, you can use the vault access policy.

Amazon S3 Glacier supports the following archive operations: Upload, Download, and Delete. Archives are immutable and cannot be modified. Hence, the correct answer is to store the audit logs in a Glacier vault and use the Vault Lock feature.

39
Q

AWS Control Tower

A

AWS Control Tower offers a straightforward way to set up and govern an AWS multi-account environment, following prescriptive best practices. AWS Control Tower orchestrates the capabilities of several other AWS services, including AWS Organizations, AWS Service Catalog, and AWS Single Sign-On, to build a landing zone in less than an hour. It offers a dashboard to see provisioned accounts across your enterprise, guardrails enabled for policy enforcement, guardrails enabled for continuous detection of policy non-conformance, and non-compliant resources organized by accounts and OUs.
A guardrail is a high-level rule that provides ongoing governance for your overall AWS environment. It’s expressed in plain language. Through guardrails, AWS Control Tower implements preventive or detective controls that help you govern your resources and monitor compliance across groups of AWS accounts.

A guardrail applies to an entire organizational unit (OU), and every AWS account within the OU is affected by the guardrail. Therefore, when users perform work in any AWS account in your landing zone, they’re always subject to the guardrails that are governing their account’s OU.

40
Q

which db has Read replica?

A

Amazon RDS for MySQL, MariaDB, Oracle, and PostgreSQL, as well as Amazon Aurora, support read replicas.

41
Q

A company launched an EC2 instance in a newly created VPC. They noticed that the instance does not have an associated DNS hostname.

Which of the following options could be a valid reason for this issue?

A

The DNS resolution and DNS hostnames attributes of the VPC configuration should be enabled.
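A sketch of the parameter shapes for EC2's ModifyVpcAttribute API (boto3: `ec2.modify_vpc_attribute`), which accepts only one attribute per call, so enabling both DNS attributes takes two calls. The VPC ID is a hypothetical placeholder.

```python
vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC ID

# Each call sets exactly one attribute; both must be True for instances
# in the VPC to receive DNS hostnames.
enable_dns_support = {"VpcId": vpc_id, "EnableDnsSupport": {"Value": True}}
enable_dns_hostnames = {"VpcId": vpc_id, "EnableDnsHostnames": {"Value": True}}
# ec2.modify_vpc_attribute(**enable_dns_support)
# ec2.modify_vpc_attribute(**enable_dns_hostnames)
```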

42
Q

"Statement": [
  {
    "Sid": "DirectoryTutorialsDojo1234",
    "Effect": "Allow",
    "Action": [
      "ds:*"
    ],
    "Resource": "arn:aws:ds:us-east-1:987654321012:directory/d-1234567890"
  },
  {

A

Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-1234567890

43
Q

Where can you safely import the SSL/TLS certificate of your application?

A
  • AWS Certificate Manager
  • IAM certificate store
44
Q

Predictive scaling

A

Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch. The machine learning algorithm consumes the available historical data and calculates capacity that best fits the historical load pattern, and then continuously learns based on new data to make future forecasts more accurate.
In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature. Predictive scaling can also potentially save you money on your EC2 bill by helping you avoid the need to overprovision capacity. You also don’t have to spend time reviewing your application’s load patterns and trying to schedule the right amount of capacity using scheduled scaling.
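A sketch of how such a policy might be expressed for the Auto Scaling PutScalingPolicy API (boto3: `autoscaling.put_scaling_policy`). The group name, policy name, and the 50% CPU target are hypothetical examples.

```python
# Predictive scaling policy targeting average CPU utilization for an ASG.
policy_params = {
    "AutoScalingGroupName": "web-asg",          # hypothetical ASG name
    "PolicyName": "cpu-predictive-scaling",     # hypothetical policy name
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,  # keep average CPU near 50%
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        # ForecastAndScale both forecasts load and launches capacity ahead of it;
        # ForecastOnly would produce forecasts without acting on them.
        "Mode": "ForecastAndScale",
    },
}
# autoscaling.put_scaling_policy(**policy_params)
```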

45
Q

FTP ports

A

The FTP protocol uses TCP via ports 20 and 21.

46
Q

AWS Health

A

AWS Health provides ongoing visibility into your resource performance and the availability of your AWS services and accounts. You can use AWS Health events to learn how service and resource changes might affect your applications running on AWS. AWS Health provides relevant and timely information to help you manage events in progress. AWS Health also helps you be aware of and to prepare for planned activities.

47
Q

EC2 Billing

A

Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running “shutdown -h”, or through instance failure. When you stop an instance, AWS shuts it down but doesn’t charge hourly usage for a stopped instance or data transfer fees. **However, AWS does charge for the storage of any Amazon EBS volumes.**

48
Q

Route53 does not have any computation capability

A
49
Q

Lambda edge

A

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don’t have to provision or manage infrastructure in multiple locations around the world. You pay only for the compute time you consume; there is no charge when your code is not running.

With Lambda@Edge, you can enrich your web applications by making them globally distributed and improving their performance — all with zero server administration. Lambda@Edge runs your code in response to events generated by the Amazon CloudFront content delivery network (CDN). Just upload your code to AWS Lambda, which takes care of everything required to run and scale your code with high availability at an AWS location closest to your end user.

By using Lambda@Edge and Kinesis together, you can process real-time streaming data so that you can track and analyze globally-distributed user activity on your website and mobile applications, including clickstream analysis.

50
Q

Amazon RDS storage volume snapshot

A

Amazon RDS creates a storage volume snapshot that backs up the entire DB instance, not just individual databases. It’s important to keep in mind that when creating a DB snapshot on a Single-AZ DB instance, a brief I/O suspension may occur. The duration of the suspension can vary from a few seconds to a few minutes, depending on the size and class of your DB instance.

52
Q

EC2 metadata

A

Instance metadata is data about your EC2 instance that you can use to configure or manage the running instance. Because your instance metadata is available from your running instance, you do not need to use the Amazon EC2 console or the AWS CLI. This can be helpful when you’re writing scripts to run from your instance. For example, you can access the local IP address of your instance from instance metadata to manage a connection to an external application. To view the private IPv4 address, public IPv4 address, and all other categories of instance metadata from within a running instance, use the following URL:

http://169.254.169.254/latest/meta-data/
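As a sketch of how a script might build these category URLs (the `metadata_url` helper below is illustrative, not an AWS API):

```python
# Fixed link-local address of the EC2 Instance Metadata Service (IMDS).
IMDS_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(category: str) -> str:
    """Build the IMDS URL for a metadata category such as 'local-ipv4'."""
    return IMDS_BASE + category.lstrip("/")

# On a running EC2 instance you could then fetch a value, e.g.:
#   import urllib.request
#   local_ip = urllib.request.urlopen(metadata_url("local-ipv4")).read().decode()
```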


53
Q

ACM Private CA

A

This service is for enterprise customers building a public key infrastructure (PKI) inside the AWS cloud and is intended for private use within an organization. With ACM Private CA, you can create your own CA hierarchy and issue certificates with it for authenticating internal users, computers, applications, services, servers, and other devices and for signing computer code. Certificates issued by a private CA are trusted only within your organization, not on the internet.

54
Q

AWS Certificate Manager (ACM)

A

AWS Certificate Manager (ACM) — This service manages certificates for enterprise customers who need a publicly trusted secure web presence using TLS. You can deploy ACM certificates into AWS Elastic Load Balancing, Amazon CloudFront, Amazon API Gateway, and other integrated services. The most common application of this kind is a secure public website with significant traffic requirements.
In addition to requesting SSL/TLS certificates provided by AWS Certificate Manager (ACM), you can import certificates that you obtained outside of AWS. You might do this because you already have a certificate from a third-party certificate authority (CA) or because you have application-specific requirements that are not met by ACM issued certificates.
In addition, you have to manually reimport an imported certificate before it expires, since certificates imported into ACM (including self-signed ones) are not automatically renewed, unlike certificates issued by ACM.

55
Q

Amazon RDS Proxy

A

Amazon RDS Proxy is a fully managed, highly available, and scalable database proxy service offered by AWS for Amazon RDS (Relational Database Service) and Amazon Aurora databases. It acts as an intermediary between your application and your database, improving database performance, availability, and security by managing connection pools and reducing the overhead of opening and closing database connections.

56
Q

EBS encryption

A

When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

  • Data at rest inside the volume
  • All data moving between the volume and the instance
  • All snapshots created from the volume
  • All volumes created from those snapshots

Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance.

57
Q

AWS Systems Manager Run Command

A

AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.

58
Q

AWS Identity and Access Management

A

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance.

An authentication token is a string of characters that you use instead of a password. After you generate an authentication token, it’s valid for 15 minutes before it expires. If you try to connect using an expired token, the connection request is denied.

59
Q

scaling policies

A

Amazon EC2 Auto Scaling supports the following types of scaling policies:

**Target tracking scaling** - Increase or decrease the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home – you select a temperature and the thermostat does the rest.

Step scaling - Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach. When you create a step scaling policy, you can also specify the number of seconds that it takes for a newly launched instance to warm up.

Simple scaling - Increase or decrease the current capacity of the group based on a single scaling adjustment.

60
Q

weighted routing

A
  • Application Load Balancers support Weighted Target Groups routing. With this feature, you will be able to do weighted routing of the traffic forwarded by a rule to multiple target groups. This enables various use cases like blue-green, canary, and hybrid deployments without the need for multiple load balancers

When you create a target group in your Application Load Balancer, you specify its target type. This determines the type of target you specify when registering with this target group. You can select the following target types:

  1. instance - The targets are specified by instance ID.
  2. ip - The targets are IP addresses.
  3. lambda - The target is a Lambda function.
  • To divert 50% of the traffic to the new application in AWS and the other 50% to the on-premises application, you can also use Route 53 with a Weighted routing policy. This will divert the traffic between the on-premises and AWS-hosted applications accordingly.

Weighted routing lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (portal.tutorialsdojo.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. You can set a specific percentage of how much traffic will be allocated to the resource by specifying the weights.

You can control the proportion of traffic directed to each endpoint using the weights you assign to each record.
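The traffic split is a simple proportion: each record receives its weight divided by the sum of all weights. A minimal sketch (the `traffic_share` helper is illustrative):

```python
def traffic_share(weights: dict) -> dict:
    """Fraction of traffic each weighted record receives: weight / total weight."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# A 50/50 split between on-premises and AWS endpoints:
#   traffic_share({"on-premises": 50, "aws": 50})
```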

61
Q

VPC endpoint

A
  • A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other services does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
  • As a rule of thumb, most AWS services use a VPC Interface Endpoint, except for S3 and DynamoDB, which use a VPC Gateway Endpoint.
  • There is no additional charge for using gateway endpoints. However, standard charges for data transfer and resource usage still apply.

62
Q

Gateway endpoint vs Interface endpoint

A

Gateway Endpoints for Amazon S3:

  • Use Amazon S3 public IP addresses.
  • Do not allow access from on-premises.
  • Do not allow access from another AWS Region.
  • Are available only for Amazon S3 and DynamoDB.
  • Not billed.

Interface Endpoints for Amazon S3:

  • Use private IP addresses from your VPC to access Amazon S3.
  • Allow access from on-premises.
  • Allow access from a VPC in another AWS Region using VPC peering or AWS Transit Gateway.
  • Billed.

63
Q

web access control list (ACL)

A
  • part of AWS WAF
  • You create a web ACL and define its protection strategy by adding rules. Rules define criteria for inspecting web requests and specify how to handle requests that match the criteria. A web access control list (web ACL) gives you fine-grained control over all of the HTTP(S) web requests that your protected resource responds to.

You can use criteria like the following to allow or block requests:

  • IP address origin of the request
  • Country of origin of the request (create one or more geo match conditions listing the countries that your requests originate from)
  • String match or regular expression (regex) match in a part of the request
  • Size of a particular part of the request
  • Detection of malicious SQL code or scripting

You can also test for any combination of these conditions. You can block or count web requests that not only meet the specified conditions but also exceed a specified number of requests in any 5-minute period. You can combine conditions using logical operators. You can also run CAPTCHA controls against requests.

To allow or block web requests based on country of origin, create one or more geographical, or geo, match statements. You can use this to block access to your site from specific countries or to only allow access from specific countries.
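As a rough sketch, a geo match rule in the WAFv2 API takes the shape below (the rule name and country codes are hypothetical placeholders):

```python
# Illustrative WAFv2 rule that blocks requests originating from listed countries.
geo_block_rule = {
    "Name": "BlockListedCountries",  # hypothetical rule name
    "Priority": 0,
    "Statement": {
        "GeoMatchStatement": {"CountryCodes": ["KP", "IR"]}  # example country codes
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "GeoBlock",
    },
}
```

You would pass a rule like this in the Rules list when creating or updating a web ACL.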

64
Q

AWS Wavelength

A

AWS Wavelength combines the high bandwidth and ultralow latency of 5G networks with **AWS compute and storage services** so that developers can innovate and build a new class of applications.

Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within telecommunications providers’ data centers at the edge of the 5G network, so application traffic can reach application servers running in Wavelength Zones without leaving the mobile providers’ network. This prevents the latency that would result from multiple hops to the internet and enables customers to take full advantage of 5G networks. Wavelength Zones extend AWS to the 5G edge, delivering a consistent developer experience across multiple 5G networks around the world. Wavelength Zones also allow developers to build the next generation of ultra-low latency applications using the same familiar AWS services, APIs, tools, and functionality they already use today.

65
Q

EKS RBAC

A

Amazon EKS uses IAM to provide authentication to your Kubernetes cluster, but it still relies on native Kubernetes Role-Based Access Control (RBAC) for authorization. This means that IAM is only used for the authentication of valid IAM entities. All permissions for interacting with your Amazon EKS cluster’s Kubernetes API are managed through the native Kubernetes RBAC system.

Access to your cluster using AWS Identity and Access Management (IAM) entities is enabled by the AWS IAM Authenticator for Kubernetes, which runs on the Amazon EKS control plane. The authenticator gets its configuration information from the aws-auth ConfigMap (AWS authenticator configuration map).

The aws-auth ConfigMap is automatically created and applied to your cluster when you create a managed node group or when you create a node group using eksctl. It is initially created to allow nodes to join your cluster, but you also use this ConfigMap to add role-based access control (RBAC) access to IAM users and roles.

66
Q

AWS Batch

A

AWS Batch is a powerful tool for developers, scientists, and engineers who need to run a large number of batch and ML computing jobs. By optimizing compute resources, AWS Batch enables you to focus on analyzing outcomes and resolving issues, rather than worrying about the technical details of running jobs.

With AWS Batch, you can define and submit multiple simulation jobs to be executed concurrently. AWS Batch will take care of distributing the workload across multiple EC2 instances, scaling up or down based on the demand, and managing the execution environment. It provides an easy-to-use interface and automation for managing the simulations, allowing you to focus on the software itself rather than the underlying infrastructure.

67
Q

AWS Glue PySpark job

A

AWS Glue has a minimum billing duration of 1 minute (Glue version 2.0 and later).
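The 1-minute minimum can be sketched as a simple floor on the billed duration (the helper name is illustrative):

```python
def billed_seconds(run_seconds: int, minimum_seconds: int = 60) -> int:
    """AWS Glue 2.0+ bills per second with a 1-minute minimum duration."""
    return max(run_seconds, minimum_seconds)

# A 10-second job is billed as 60 seconds; a 150-second job as 150 seconds.
```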

68
Q

Amazon Storage Gateway - Cached Volumes

A

Amazon Storage Gateway - Cached Volumes Overview:
Cached Volumes in AWS Storage Gateway enable the company to store their primary data in Amazon S3 while maintaining a local cache of frequently accessed data on their on-premises infrastructure. This hybrid model allows them to scale their storage without needing additional physical storage, while still providing fast local access to frequently used data.

How Cached Volumes Work:
Primary Storage in S3: All the company’s data is primarily stored in Amazon S3. This ensures durability, scalability, and cost-effectiveness, as they no longer need to rely on their limited on-premises storage capacity.

Local Cache for Fast Access: The Storage Gateway - Cached Volumes service creates a local cache on their on-premises servers, storing frequently accessed data locally. This local cache improves performance because the most frequently accessed data is available with low latency.

69
Q

A company has multiple AWS Site-to-Site VPN connections placed between their VPCs and their remote network. During peak hours, many employees are experiencing slow connectivity issues, which limits their productivity. The company has asked a solutions architect to scale the throughput of the VPN connections.

Which solution should the architect carry out?

A

With AWS Transit Gateway, you can simplify the connectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a single VPN connection.

AWS Transit Gateway also enables you to scale the IPsec VPN throughput with equal-cost multi-path (ECMP) routing support over multiple VPN tunnels. A single VPN tunnel still has a maximum throughput of 1.25 Gbps. If you establish multiple VPN tunnels to an ECMP-enabled transit gateway, it can scale beyond the default limit of 1.25 Gbps.

Hence, the correct answer is: Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway and attach additional VPN tunnels.
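The scaling math is straightforward: aggregate throughput grows roughly linearly with the number of equal-cost tunnels. A sketch, ignoring real-world flow-hashing imbalance:

```python
TUNNEL_MAX_GBPS = 1.25  # maximum throughput of a single VPN tunnel

def ecmp_aggregate_gbps(tunnel_count: int) -> float:
    """Approximate aggregate VPN throughput with ECMP across equal-cost tunnels."""
    return tunnel_count * TUNNEL_MAX_GBPS
```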

70
Q

Which databases are ACID compliant?

A

Aurora and RDS

71
Q

Aurora

A

Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration.
For Amazon RDS MariaDB DB instances, the maximum provisioned storage limit constrains the size of a table to a maximum size of 64 TB when using InnoDB file-per-table tablespaces. This limit also constrains the system tablespace to a maximum size of 16 TB. InnoDB file-per-table tablespaces (with tables each in their own tablespace) is set by default for Amazon RDS MariaDB DB instances.

72
Q

VPC peering connection

A

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).

73
Q

Inter-Region VPC Peering

A

Inter-Region VPC Peering provides a simple and cost-effective way to share resources between regions or replicate data for geographic redundancy. Built on the same horizontally scaled, redundant, and highly available technology that powers VPC today, Inter-Region VPC Peering encrypts inter-region traffic with no single point of failure or bandwidth bottleneck. Traffic using Inter-Region VPC Peering always stays on the global AWS backbone and never traverses the public internet, thereby reducing threat vectors, such as common exploits and DDoS attacks.

74
Q

VPC endpoint inter region

A

VPC endpoints are region-specific only and do not support inter-region communication.

75
Q

AWS Network Firewall

A

AWS Network Firewall is a stateful, managed, network firewall, and intrusion detection and prevention service for your virtual private cloud (VPC). With Network Firewall, you can filter traffic at the perimeter of your VPC. This includes traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect. Network Firewall uses Suricata — an open-source intrusion prevention system (IPS) for stateful inspection.
You can use Network Firewall to monitor and protect your Amazon VPC traffic in a number of ways, including the following:

  • Pass traffic through only from known AWS service domains or IP address endpoints, such as Amazon S3.
  • Use custom lists of known bad domains to limit the types of domain names that your applications can access.
  • Perform deep packet inspection on traffic entering or leaving your VPC.
  • Use stateful protocol detection to filter protocols like HTTPS, independent of the port used.

76
Q

Network Access Analyzer

A

Network Access Analyzer is a feature of VPC that reports on unintended access to your AWS resources based on the security and compliance that you set.

77
Q

Amazon Detective

A

This service just automatically collects log data from your AWS resources to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities in your AWS account.

78
Q

A client is hosting their company website on a cluster of web servers that are behind a public-facing Application Load Balancer (AWS ALB). The client also uses Amazon Route 53 to manage their public DNS.

How should the client configure the DNS zone apex record to point to the load balancer?

A

Create an A record aliased to the load balancer DNS name. You can't use an IP address, as the load balancer's underlying IP addresses are subject to change.
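A Route 53 change batch entry for such an alias record looks roughly like this (the hosted zone ID and ALB DNS name below are placeholders):

```python
# Illustrative Route 53 record set aliasing the zone apex to an ALB.
alias_change = {
    "Action": "UPSERT",
    "ResourceRecordSet": {
        "Name": "tutorialsdojo.com.",  # zone apex
        "Type": "A",
        "AliasTarget": {
            "HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder: the ALB's hosted zone ID
            "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com.",  # placeholder
            "EvaluateTargetHealth": False,
        },
    },
}
```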

79
Q

Amazon FSx

A

Amazon FSx provides you with two file systems to choose from: Amazon FSx for Windows File Server for Windows-based applications and Amazon FSx for Lustre for compute-intensive workloads.

For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for “lift-and-shift” business-critical application workloads including home directories (user shares), media workflows, and ERP applications. It is accessible from Windows and Linux instances via the SMB protocol. If you have Linux-based applications, Amazon EFS is a cloud-native fully managed file system that provides simple, scalable, elastic file storage accessible from Linux instances via the NFS protocol.

For compute-intensive and fast processing workloads, like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre provides a file system that’s optimized for performance, with input and output stored on Amazon S3, and the capability to easily process your S3 data through a high-performance POSIX interface. You can also use FSx for Lustre as a standalone high-performance file system to burst your workloads from on-premises to the cloud. By copying on-premises data to an FSx for Lustre file system, you can make that data available for fast processing by compute instances running on AWS. With Amazon FSx, you pay for only the resources you use. There are no minimum commitments, upfront hardware or software costs, or additional fees.

80
Q

Amazon CloudWatch Application Insights

A

Amazon CloudWatch Application Insights facilitates observability for your applications and underlying AWS resources. It helps you set up the best monitors for your application resources to continuously analyze data for signs of problems with your applications. Application Insights, which is powered by SageMaker and other AWS technologies, provides automated dashboards that show potential problems with monitored applications, which help you to quickly isolate ongoing issues with your applications and infrastructure. The enhanced visibility into the health of your applications that Application Insights provides helps reduce the “mean time to repair” (MTTR) to troubleshoot your application issues.
When you add your applications to Amazon CloudWatch Application Insights, it scans the resources in the applications and recommends and configures metrics and logs on CloudWatch for application components. Application Insights analyzes metric patterns using historical data to detect anomalies and continuously detects errors and exceptions from your application, operating system, and infrastructure logs. It correlates these observations using a combination of classification algorithms and built-in rules. Then, it automatically creates dashboards that show the relevant observations and problem severity information to help you prioritize your actions.

81
Q

Kubernetes Horizontal Pod Autoscaler

A

The Kubernetes Horizontal Pod Autoscaler automatically scales the number of Pods in a deployment, replication controller, or replica set based on that resource’s CPU utilization. This can help your applications scale out to meet increased demand or scale in when resources are not needed, thus freeing up your nodes for other applications. When you set a target CPU utilization percentage, the Horizontal Pod Autoscaler scales your application in or out to try to meet that target.
Autoscaling is a function that automatically scales your resources up or down to meet changing demands. This is a major Kubernetes function that would otherwise require extensive human resources to perform manually.
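The documented HPA scaling formula is desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), which can be sketched as:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """HPA formula: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 10 replicas at 70% CPU with a 50% target scales out to 14 replicas.
```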

Amazon EKS supports two autoscaling products:

  • Karpenter
  • Cluster Autoscaler

The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail or are rescheduled onto other nodes. The Cluster Autoscaler uses Auto Scaling groups.
Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that launches appropriately sized compute resources, like Amazon EC2 instances, in response to changing application load. It integrates with AWS to provision compute resources that precisely match workload requirements.

82
Q

metric to scale based on sqs

A

Scale on the ApproximateAgeOfOldestMessage SQS metric; to act on it, you must use an Auto Scaling group.

83
Q

AWS Storage Gateway Hardware Appliance

A

The AWS Storage Gateway Hardware Appliance is a physical, standalone, validated server configuration for on-premises deployments.
It is a physical hardware appliance with the **Storage Gateway software preinstalled** on a validated server configuration. The hardware appliance is a high-performance 1U server that you can deploy in your data center or on-premises inside your corporate firewall. When you buy and activate your hardware appliance, the activation process associates your hardware appliance with your AWS account. After activation, your hardware appliance appears in the console as a gateway on the Hardware page. You can configure your hardware appliance as a file gateway, tape gateway, or volume gateway type. The procedure that you use to deploy and activate these gateway types on a hardware appliance is the same as on a virtual platform.

84
Q

AWS Directory Service

A

You need AWS Directory Service to integrate with your on-premises Active Directory.

85
Q

Amazon WorkSpaces

A

Use Amazon WorkSpaces to create the needed virtual desktops in your VPC.

86
Q

CloudTrail encryption

A

By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an AWS Key Management Service (AWS KMS) key. You can store your log files in your bucket for as long as you want. You can also define Amazon S3 lifecycle rules to archive or delete log files automatically. If you want notifications about log file delivery and validation, you can set up Amazon SNS notifications.

87
Q

Could I lose my keys if a single HSM instance fails?

A

Yes. It is possible to lose keys that were created since the most recent daily backup if the CloudHSM cluster that you are using fails and you are not using two or more HSMs. Amazon strongly recommends that you use two or more HSMs, in separate Availability Zones, in any production CloudHSM Cluster to avoid loss of cryptographic keys.

88
Q

Can Amazon recover my keys if I lose my credentials to my HSM?

A

No. Amazon does not have access to your keys or credentials and therefore has no way to recover your keys if you lose your credentials.

89
Q

EBS snapshot process

A

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed.

While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume hence, you can still use the EBS volume normally.

When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapshot. The replicated volume loads data lazily in the background so that you can begin using it immediately. If you access data that hasn’t been loaded yet, the volume immediately downloads the requested data from Amazon S3 and then continues loading the rest of the volume’s data in the background.

90
Q

An On-Demand EC2 instance is launched into a VPC subnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group has an inbound rule to allow SSH from any IP address and does not have any outbound rules.

In this scenario, what are the changes needed to allow SSH connection to the instance?

A

In order for you to establish an SSH connection from your home computer to your EC2 instance, you need to do the following:

  • On the Security Group, add an Inbound Rule to allow SSH traffic to your EC2 instance.
  • On the NACL, add both an Inbound and Outbound Rule to allow SSH traffic to your EC2 instance.

91
Q

What is the EASIEST way for the Architect to automate the log collection from the Amazon EC2 instances?

A

Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the **Terminating:Wait** state to delay the termination of unhealthy Amazon EC2 instances. Configure a CloudWatch Events rule for the EC2 Instance-terminate Lifecycle Action Auto Scaling Event with an associated Lambda function. Trigger the CloudWatch agent to push the application logs and then resume the instance termination once all the logs are sent to CloudWatch Logs.

92
Q

Volume Gateway

A

The Volume Gateway is a cloud-based iSCSI block storage volume for your on-premises applications. The Volume Gateway provides either a local cache or full volumes on-premises while also storing full copies of your volumes in the AWS cloud.

There are two options for Volume Gateway:

Cached Volumes - you store volume data in AWS, with a small portion of recently accessed data in the cache on-premises.

Stored Volumes - you store the entire set of volume data on-premises and store periodic point-in-time backups (snapshots) in AWS.

93
Q

AWS Glue

A

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g., table definition and schema) in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL. AWS Glue generates the code to execute your data transformations and data loading processes.

94
Q

Aurora failover

A

Failover is automatically handled by Amazon Aurora so that your applications can resume database operations as quickly as possible without manual administrative intervention.
If you have an Amazon Aurora Replica in the same or a different Availability Zone, when failing over, Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary. Start-to-finish failover typically completes within 30 seconds.

If you are running Aurora Serverless and the DB instance or AZ becomes unavailable, Aurora will automatically recreate the DB instance in a different AZ.

If you do not have an Amazon Aurora Replica (i.e., single instance) and are not running Aurora Serverless, Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance. This replacement of the original instance is done on a best-effort basis and may not succeed, for example, if there is an issue that is broadly affecting the Availability Zone.

95
Q

AWS CloudTrail

A

AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.

There are two types of events that you configure your CloudTrail for:

Management Events provide visibility into management operations that are performed on resources in your AWS account. These are also known as control plane operations. Management events can also include non-API events that occur in your account.

Data Events, on the other hand, provide visibility into the resource operations performed on or within a resource. These are also known as data plane operations. It allows granular control of data event logging with advanced event selectors. You can currently log data events on different resource types such as Amazon S3 object-level API activity (e.g. GetObject, DeleteObject, and PutObject API operations), AWS Lambda function execution activity (the Invoke API), DynamoDB Item actions, and many more.

96
Q

SQS retention period

A

The default message retention period is 4 days.
You can increase the message retention period to a maximum of 14 days using the SetQueueAttributes action.
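As a sketch, the SetQueueAttributes payload takes the retention period in seconds (60 seconds up to 14 days, i.e. 1,209,600 seconds); the helper below is illustrative:

```python
def retention_attributes(days: int) -> dict:
    """Attributes payload for SetQueueAttributes; MessageRetentionPeriod is in seconds."""
    return {"MessageRetentionPeriod": str(days * 24 * 3600)}

# With boto3 (not executed here):
#   boto3.client("sqs").set_queue_attributes(
#       QueueUrl=queue_url, Attributes=retention_attributes(14))
```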

97
Q

AppSync pipeline resolvers

A

AppSync pipeline resolvers offer an elegant server-side solution to address the common challenge faced in web applications—aggregating data from multiple database tables. Instead of invoking multiple API calls across different data sources, which can degrade application performance and user experience, AppSync pipeline resolvers enable easy retrieval of data from multiple sources with just a single call. By leveraging Pipeline functions, these resolvers streamline the process of consolidating and presenting data to end-users.

98
Q

AWS Trusted Advisor

A

AWS Trusted Advisor inspects your AWS environment, and then makes recommendations when opportunities exist to **save money**, improve system availability and performance, or help close security gaps. If you have a Basic or Developer Support plan, you can use the Trusted Advisor console to access all checks in the Service Limits category and six checks in the Security category.

AWS has an example of the implementation of Quota Monitor CloudFormation template that you can deploy on your AWS account. The template uses an AWS Lambda function that runs once every 24 hours. The Lambda function refreshes the **AWS Trusted Advisor Service Limits** checks to retrieve the most current utilization and quota data through API calls. Amazon CloudWatch Events captures the status events from Trusted Advisor. It uses a set of CloudWatch Events rules to send the status events to all the targets you choose during initial deployment of the solution: an Amazon Simple Queue Service (Amazon SQS) queue, an Amazon Simple Notification Service (Amazon SNS) topic or a Lambda function for Slack notifications.

The AWS Trusted Advisor Service limit publishes service limits metric to CloudWatch; thus, you can configure an alarm and send a notification to Amazon SNS. You can also create an AWS Lambda function to read data from specific Trusted Advisor checks. A Lambda function invocation can be scheduled using Amazon EventBridge (Amazon CloudWatch Events) to automate the process.

99
Q

AWS AppSync

A

AWS AppSync is a serverless GraphQL and Pub/Sub API service that simplifies building modern web and mobile applications. It provides a robust, scalable GraphQL interface for application developers to combine data from multiple sources, including Amazon DynamoDB, AWS Lambda, and HTTP APIs.

With AWS AppSync, you can use custom domain names to configure a single, memorable domain that works for both your GraphQL and real-time APIs.

In other words, you can utilize simple and memorable endpoint URLs with domain names of your choice by creating custom domain names that you associate with the AWS AppSync APIs in your account.

100
Q

AWS Outposts

A

AWS Outposts is a fully managed service from AWS that extends AWS infrastructure, services, APIs, and tools to on-premises data centers or co-location facilities. It enables organizations to run AWS services locally on their own hardware while still maintaining the ability to seamlessly interact with AWS’s public cloud.

101
Q

Egress-only Internet gateway

A

An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

Take note that an egress-only Internet gateway is for use with IPv6 traffic only. To enable outbound-only Internet communication over IPv4, use a NAT gateway instead.
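In practice, wiring a subnet to an egress-only internet gateway comes down to a default IPv6 route. A sketch of the parameters one might pass to EC2's CreateRoute API (the resource IDs are hypothetical):

```python
# Default IPv6 route pointing at an egress-only internet gateway.
# Instances in subnets using this route table can reach the internet
# over IPv6, but inbound IPv6 connections cannot be initiated.
create_route_params = {
    "RouteTableId": "rtb-0123456789abcdef0",                 # hypothetical
    "DestinationIpv6CidrBlock": "::/0",                      # all IPv6 traffic
    "EgressOnlyInternetGatewayId": "eigw-0123456789abcdef0", # hypothetical
}
```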

102
Q

Kinesis data stream data retention

A

By default, records of a stream in Amazon Kinesis are accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to up to 7 days by enabling extended data retention, or up to 365 days by enabling long-term data retention.
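A sketch of the IncreaseStreamRetentionPeriod request parameters for extending retention to 7 days (the stream name is hypothetical):

```python
# Extend retention from the 24-hour default to 7 days (168 hours).
increase_retention_params = {
    "StreamName": "my-example-stream",  # hypothetical
    "RetentionPeriodHours": 7 * 24,     # 168
}
```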

103
Q

AWS Elastic Beanstalk

A

**AWS Elastic Beanstalk** supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren’t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

By using Docker with Elastic Beanstalk, you have an infrastructure that automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. You can manage your web application in an environment that supports the range of services that are integrated with Elastic Beanstalk, including but not limited to VPC, RDS, and IAM.

104
Q

AWS Compute Optimizer

A

Compute Optimizer simply analyzes your workload and recommends the optimal AWS resources needed to improve performance and reduce costs.

105
Q

RDS read replica

A

Creating an Amazon RDS for MySQL read replica in the secondary AWS Region is incorrect because MySQL replicas won’t give you a read replication latency of less than 1 second. RDS Read Replicas only provide asynchronous replication measured in seconds, not in milliseconds.

106
Q

Route 53 latency based routing

A

You can create latency records for your resources in multiple AWS Regions by using latency-based routing. In the event that Route 53 receives a DNS query for your domain or subdomain such as tutorialsdojo.com or portal.tutorialsdojo.com, it determines which AWS Regions you’ve created latency records for, determines which region gives the user the lowest latency, and then selects a latency record for that region. Route 53 responds with the value from the selected record which can be the IP address for a web server or the CNAME of your elastic load balancer.

Hence, using Route 53 to distribute the load to the multiple EC2 instances across all AWS Regions is the correct answer.
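The latency records described above might look like the following sketch: two records for the same name, distinguished by SetIdentifier and Region (the IP addresses are hypothetical):

```python
# Two latency records for the same domain name. Route 53 answers queries
# with the record from the region giving the user the lowest latency.
latency_records = [
    {
        "Name": "portal.tutorialsdojo.com",
        "Type": "A",
        "SetIdentifier": "us-east-1",    # must be unique per record
        "Region": "us-east-1",           # region whose latency is measured
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}],  # hypothetical IP
    },
    {
        "Name": "portal.tutorialsdojo.com",
        "Type": "A",
        "SetIdentifier": "ap-southeast-1",
        "Region": "ap-southeast-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.20"}],
    },
]
```

For an Elastic Load Balancer you would use an AliasTarget instead of a literal IP, but the SetIdentifier/Region pairing is the same.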

107
Q

S3 storage class for cheap storage lasting less than a day

A

The scenario requires you to select a cost-effective service that does not have a minimum storage duration since the data will only last for 12 hours. Among the options given, only Amazon S3 Standard has the feature of no minimum storage duration. It is also the most cost-effective storage service because you will only be charged for the 12 hours of storage, unlike other storage classes where you would still be charged for their respective minimum storage durations (e.g., 30 days, 90 days, 180 days). S3 Intelligent-Tiering also has no minimum storage duration, but it is designed for data with changing or unknown access patterns.
S3 Standard-IA is designed for long-lived but infrequently accessed data that is retained for months or years. Data that is deleted from S3 Standard-IA within 30 days will still be charged for a full 30 days.

108
Q

Lambda function URLs

A

Lambda function URLs are HTTP(S) endpoints dedicated to your Lambda function. You can easily create and set up a function URL using the Lambda console or API. Once created, Lambda generates a unique URL endpoint for your use.

In the scenario, creating a function URL is the simplest and most straightforward way of making the Lambda function callable by the analytics service. This also simplifies the architecture since there is no need to set up and manage an intermediary service such as API Gateway.
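A sketch of the CreateFunctionUrlConfig request parameters (the function name and allowed origin are hypothetical):

```python
# Function URL with IAM auth; switch AuthType to "NONE" for a public endpoint.
create_function_url_params = {
    "FunctionName": "analytics-ingest",  # hypothetical
    "AuthType": "AWS_IAM",
    "Cors": {
        "AllowOrigins": ["https://example.com"],  # hypothetical
        "AllowMethods": ["POST"],
    },
}
```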

109
Q

Aurora improve performance

A

The most suitable solution in this scenario is to use Multi-AZ deployments instead, but since this option is not available, you can still set up Read Replicas which you can promote as your primary stand-alone DB cluster in the event of an outage.

Hence, the correct answer here is to create Amazon Aurora Replicas.

110
Q

Application Load Balancer path based routing

A

If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request such as Host field, Path URL, HTTP header, HTTP method, Query string, or Source IP address. Path-based routing allows you to route a client request based on the URL path of the HTTP header. Each path condition has one path pattern. If the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule.

You can use path conditions to define rules that forward requests to different target groups based on the URL in the request (also known as path-based routing). This type of routing is the most appropriate solution for this scenario hence, the correct answer is: Use path conditions to define rules that forward requests to different target groups based on the URL in the request.
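The path-based rule described above can be sketched as request parameters for the ELBv2 CreateRule API (the ARNs are hypothetical and abbreviated):

```python
# Listener rule: requests whose URL path matches /api/* are forwarded
# to a dedicated target group.
create_rule_params = {
    "ListenerArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",  # hypothetical
    "Priority": 10,
    "Conditions": [
        {"Field": "path-pattern", "Values": ["/api/*"]},
    ],
    "Actions": [
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-tg/xyz",  # hypothetical
        }
    ],
}
```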

111
Q

AWS Global Accelerator endpoint groups

A

With AWS Global Accelerator, you can add or remove endpoints in the AWS Regions, run blue/green deployment, and A/B test without needing to update the IP addresses in your client applications. This is particularly useful for IoT, retail, media, automotive, and healthcare use cases in which client applications cannot be updated frequently.

If you have multiple resources in multiple regions, you can use AWS Global Accelerator to reduce the number of IP addresses. By creating an endpoint group, you can add all of your EC2 instances from a single region in that group. You can add additional endpoint groups for instances in other regions. After that, you can associate the appropriate ALB endpoints to each of your endpoint groups. The created accelerator would have two static IP addresses that you can use to create a security rule in your firewall device. Instead of regularly adding the Amazon EC2 IP addresses to your firewall, you can use the static IP addresses of AWS Global Accelerator to automate the process and eliminate this repetitive task.

112
Q

CloudFormation required section

A

The Resources section is the only required section. Some sections in a template can be in any order.

113
Q

vertical vs horizontal scaling

A

Vertical scaling means running the same software on bigger machines, which is limited by the capacity of the individual server. Horizontal scaling means adding more servers to the existing pool, which doesn't run into the limitations of individual servers.

114
Q

EC2 instances

A

Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
Memory Optimized Instances is incorrect because these are designed to deliver fast performance for workloads that process large data sets in memory, which is quite different from handling high read and write capacity on local storage.

Compute Optimized Instances is incorrect because these are ideal for compute-bound applications that benefit from high-performance processors, such as batch processing workloads and media transcoding.

General Purpose Instances is incorrect because these are the most basic type of instances. They provide a balance of compute, memory, and networking resources, and can be used for a variety of workloads. Since you are requiring higher read and write capacity, storage optimized instances should be selected instead.

115
Q

Permissions: EC2 and DynamoDB

A

Rather than storing long-term credentials, the role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests.

Hence, the best option here is to remove the stored access keys first in the AMI. Then, create a new IAM role with permissions to access the DynamoDB table and assign it to the EC2 instances.
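The role assignment above relies on EC2 being allowed to assume the role. A sketch of the trust policy such a role would carry, standard IAM policy JSON expressed here as a Python dict:

```python
# Trust policy allowing the EC2 service to assume the role; the role's
# separate permissions policy would then grant dynamodb:* actions on the table.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
```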

116
Q

RDS in-flight encryption

A

You can use Secure Sockets Layer (SSL) to encrypt connections between your client applications and your Amazon RDS DB instances running Microsoft SQL Server. SSL support is available in all AWS regions for all supported SQL Server editions.

When you create an SQL Server DB instance, Amazon RDS creates an SSL certificate for it. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against spoofing attacks.

There are 2 ways to use SSL to connect to your SQL Server DB instance:

  • Force SSL for all connections — this happens transparently to the client, and the client doesn’t have to do any work to use SSL.
  • Encrypt specific connections — this sets up an SSL connection from a specific client computer, and you must do work on the client to encrypt connections.

You can force all connections to your DB instance to use SSL, or you can encrypt connections from specific client computers only. To use SSL from a specific client, you must obtain certificates for the client computer, import certificates on the client computer, and then encrypt the connections from the client computer.

If you want to force SSL, use the rds.force_ssl parameter. By default, the rds.force_ssl parameter is set to false. Set the rds.force_ssl parameter to true to force connections to use SSL. The rds.force_ssl parameter is static, so after you change the value, you must reboot your DB instance for the change to take effect.
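The parameter change described above can be sketched as a ModifyDBParameterGroup request; because rds.force_ssl is static, the change is applied pending-reboot (the parameter group name is hypothetical):

```python
# Force SSL on all connections to a SQL Server DB instance.
# rds.force_ssl is a static parameter, so a reboot is required.
modify_parameter_group_params = {
    "DBParameterGroupName": "custom-sqlserver-params",  # hypothetical
    "Parameters": [
        {
            "ParameterName": "rds.force_ssl",
            "ParameterValue": "1",            # 1 = force SSL, 0 = default
            "ApplyMethod": "pending-reboot",  # static parameter
        }
    ],
}
```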

117
Q

TDE

A

Specifying the TDE option in an RDS option group that is associated with that DB instance to enable transparent data encryption (TDE) is incorrect because transparent data encryption (TDE) is primarily used to encrypt stored data on your DB instances running Microsoft SQL Server and not the data that is in transit.

118
Q

Amazon S3 File Gateway

A

Amazon S3 File Gateway presents a file interface that enables you to store files as objects in Amazon S3 using the industry-standard NFS and SMB file protocols, and access those files via NFS and SMB from your data center or Amazon EC2, or access those files as objects directly in Amazon S3.

When you deploy File Gateway, you specify how much disk space you want to allocate for local cache. This local cache acts as a buffer for writes and provides **low latency access** to data that was recently written to or read from Amazon S3. When a client writes data to a file via File Gateway, that data is first written to the local cache disk on the gateway itself. Once the data has been safely persisted to the local cache, only then does the File Gateway acknowledge the write back to the client. From there, File Gateway transfers the data to the S3 bucket asynchronously in the background, optimizing data transfer using multipart parallel uploads and encrypting data in transit using HTTPS.

119
Q

Enable cross region replication

A

To enable the cross-region replication feature in S3, the following items should be met:

  • The source and destination buckets must have versioning enabled.
  • The source and destination buckets must be in different AWS Regions.
  • Amazon S3 must have permission to replicate objects from that source bucket to the destination bucket on your behalf.
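Once those prerequisites are met, the replication configuration itself can be sketched as the PutBucketReplication payload (the role ARN and bucket names are hypothetical):

```python
# Replication configuration replicating every object in the source bucket.
# The Role is the IAM role S3 assumes to replicate on your behalf, and both
# buckets must already have versioning enabled.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # hypothetical
    "Rules": [
        {
            "ID": "ReplicateEverything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate all objects
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},  # hypothetical
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }
    ],
}
```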

120
Q

CloudWatch Logs agent

A

The CloudWatch Logs agent is comprised of the following components:

  • A plug-in to the AWS CLI that pushes log data to CloudWatch Logs.
  • A script (daemon) that initiates the process to push data to CloudWatch Logs.
  • A cron job that ensures that the daemon is always running.

CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances hence, CloudWatch Logs agent is the correct answer.

121
Q

EBS encryption

A

You can configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create. For example, Amazon EBS encrypts the EBS volumes created when you launch an instance and the snapshots that you copy from an unencrypted snapshot.

Encryption by default has no effect on existing EBS volumes or snapshots. The following are important considerations in EBS encryption:

  • Encryption by default is a Region-specific setting. If you enable it for a Region, you cannot disable it for individual volumes or snapshots in that Region.
  • When you enable encryption by default, you can launch an instance only if the instance type supports EBS encryption.
  • Amazon EBS does not support asymmetric KMS keys.

122
Q

Amazon Pinpoint

A

In Amazon Pinpoint, an event is an action that occurs when a user interacts with one of your applications, when you send a message from a campaign or journey, or when you send a transactional SMS or email message. For example, if you send an email message, several events occur:

  • When you send the message, a send event occurs.
  • When the message reaches the recipient’s inbox, a delivered event occurs.
  • When the recipient opens the message, an open event occurs.

You can configure Amazon Pinpoint to send information about events to Amazon Kinesis. The Kinesis platform offers services that you can use to collect, process, and analyze data from AWS services in real time. The Amazon Pinpoint event stream includes information about user interactions with applications (apps) that you connect to Amazon Pinpoint. It also includes information about all the messages that you send from campaigns, through any channel, and from journeys. This can also include any custom events that you’ve defined. Finally, it includes information about all the transactional email and SMS messages that you send.

123
Q

Redis server auth

A

Using Redis AUTH command can improve data security by requiring the user to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server.

Hence, the correct answer is to authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.

To require that users enter a password on a password-protected Redis server, include the parameter --auth-token with the correct password when you create your replication group or cluster and on all subsequent commands to the replication group or cluster.
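A sketch of the corresponding CreateReplicationGroup request parameters (the group ID and token are placeholders):

```python
# Redis replication group with AUTH enabled. In-transit encryption must be
# on for the auth token to be accepted.
create_replication_group_params = {
    "ReplicationGroupId": "secure-redis",              # hypothetical
    "ReplicationGroupDescription": "Redis with AUTH",
    "Engine": "redis",
    "TransitEncryptionEnabled": True,
    "AuthToken": "use-a-long-random-secret-here",      # placeholder, not a real secret
}
```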

124
Q

s3 cross account access

A

In Amazon S3, you can grant users in another AWS account (Account B) granular cross-account access to objects owned by your account (Account A). Depending on the type of access that you want to provide, use one of the following solutions to grant cross-account access to objects:

  • AWS Identity and Access Management (IAM) policies and resource-based bucket policies for programmatic-only access to S3 bucket objects
  • IAM policies and resource-based Access Control Lists (ACLs) for programmatic-only access to S3 bucket objects
  • Cross-account IAM roles for programmatic and console access to S3 bucket objects

Not all AWS services support resource-based policies. Therefore, you can use cross-account IAM roles to centralize permission management when providing cross-account access to multiple services. Using cross-account IAM roles simplifies provisioning cross-account access to S3 objects that are stored in multiple S3 buckets. As a result, you don’t need to manage multiple policies for S3 buckets. This method allows cross-account access to objects owned or uploaded by another AWS account or AWS services. If you don’t use cross-account IAM roles, then the object ACL must be modified.

125
Q

Aurora cloning

A

The resiliency of an application pertains to its ability to recover from infrastructure or service disruptions. Both Amazon Aurora and Amazon RDS can give you a highly resilient infrastructure by deploying replicas in multiple availability zones. Both database services can perform an automatic failover to a standby instance in the event of failure. However, only Amazon Aurora has the ability to replicate a database in a fast and efficient manner without impacting performance, thanks to its underlying storage system. By using Aurora cloning, you can create a new cluster that uses the same Aurora cluster volume and has the same data as the original. The process is designed to be fast and cost-effective. The new cluster with its associated data volume is called a clone.

Creating a clone is faster and more space-efficient than physically copying the data using other techniques, such as restoring from a snapshot like you would in Amazon RDS or using the native mysqldump utility.

126
Q

EFS

A

To support a wide variety of cloud storage workloads, Amazon EFS offers two performance modes:

  • General Purpose mode
  • Max I/O mode

You choose a file system’s performance mode when you create it, and it cannot be changed. The two performance modes have no additional costs, so your Amazon EFS file system is billed and metered the same, regardless of your performance mode.

There are two throughput modes to choose from for your file system:

  • Bursting Throughput
  • Provisioned Throughput

With Bursting Throughput mode, a file system’s throughput scales as the amount of data stored in the EFS Standard or One Zone storage class grows. File-based workloads are typically spiky, driving high levels of throughput for short periods of time, and low levels of throughput the rest of the time. To accommodate this, Amazon EFS is designed to burst to high throughput levels for periods of time.

**Provisioned Throughput** mode is available for applications with high throughput to storage (MiB/s per TiB) ratios, or with requirements greater than those allowed by the Bursting Throughput mode. For example, say you’re using Amazon EFS for development tools, web serving, or content management applications where the amount of data in your file system is low relative to throughput demands. Your file system can now get the high levels of throughput your applications require without having to pad your file system.
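Both choices are made at creation time. A sketch of the CreateFileSystem request parameters for a high-frequency workload (the creation token and the 256 MiB/s figure are hypothetical):

```python
# EFS file system tuned for many concurrent clients: Max I/O performance
# mode plus provisioned throughput. PerformanceMode cannot be changed later.
create_file_system_params = {
    "CreationToken": "frequently-accessed-fs",  # hypothetical
    "PerformanceMode": "maxIO",                 # or "generalPurpose"
    "ThroughputMode": "provisioned",            # or "bursting"
    "ProvisionedThroughputInMibps": 256,        # hypothetical sizing
}
```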

In the scenario, the file system will be frequently accessed by users around the globe so it is expected that there would be hundreds of ECS tasks running most of the time. The Architect must ensure that its storage system is optimized for high-frequency read and write operations.

127
Q

ECS service

A

The following predefined metrics are available for ECS Service target tracking scaling:

  • ECSServiceAverageCPUUtilization — Average CPU utilization of the service.
  • ECSServiceAverageMemoryUtilization — Average memory utilization of the service.
  • ALBRequestCountPerTarget — Number of requests completed per target in an Application Load Balancer target group.
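These predefined metrics are referenced by name in an Application Auto Scaling target tracking policy, sketched here (the policy name and target value are hypothetical):

```python
# Target tracking policy keeping the ECS service's average CPU near 60%.
target_tracking_policy = {
    "PolicyName": "cpu-target-tracking",  # hypothetical
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,  # hypothetical target
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
}
```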

128
Q

Capacity reservation

A

When you create a Capacity Reservation, you specify:

  • The Availability Zone in which to reserve the capacity
  • The number of instances for which to reserve capacity
  • The instance attributes, including the instance type, tenancy, and platform/OS

Capacity Reservations can only be used by instances that match their attributes. By default, they are automatically used by running instances that match the attributes. If you don’t have any running instances that match the attributes of the Capacity Reservation, it remains unused until you launch an instance with matching attributes.

In addition, you can use Savings Plans and Regional Reserved Instances with your Capacity Reservations to benefit from billing discounts. AWS automatically applies your discount when the attributes of a Capacity Reservation match the attributes of a Savings Plan or Regional Reserved Instance.

In this scenario, the company only runs the process for 5 hours (from 10 PM to 3 AM) every night. By using Capacity Reservations, they not only ensure availability but can also implement automation to procure and cancel capacity, as well as terminate instances once they are no longer needed. This approach prevents them from incurring unnecessary charges, ensuring they are billed only for the resources they actually use.
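The attributes listed above map directly onto the CreateCapacityReservation request parameters, sketched here (the instance type, zone, and count are hypothetical):

```python
# Capacity Reservation for the nightly batch: reserved capacity is used
# automatically by running instances whose attributes match.
create_capacity_reservation_params = {
    "InstanceType": "c5.2xlarge",       # hypothetical
    "InstancePlatform": "Linux/UNIX",
    "AvailabilityZone": "us-east-1a",   # hypothetical
    "InstanceCount": 10,                # hypothetical
    "Tenancy": "default",
    "EndDateType": "unlimited",         # cancel explicitly when the job ends
}
```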

129
Q

S3

A

Amazon S3 stores data as objects within buckets. An object is a file and any optional metadata that describes the file. To store a file in Amazon S3, you upload it to a bucket. When you upload a file as an object, you can set permissions on the object and any metadata. Buckets are containers for objects. You can have one or more buckets. You can control access for each bucket, deciding who can create, delete, and list objects in it. You can also choose the geographical Region where Amazon S3 will store the bucket and its contents and view access logs for the bucket and its objects.
By default, an S3 object is owned by the AWS account that uploaded it even though the bucket is owned by another account. To get **full access to the object, the object owner must explicitly grant the bucket owner access**. You can create a bucket policy to require external users to grant bucket-owner-full-control when uploading objects so the bucket owner can have full access to the objects.
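A common way to enforce this is a bucket policy that denies uploads missing the bucket-owner-full-control canned ACL, sketched here as a Python dict (the bucket name is hypothetical):

```python
# Deny PutObject requests unless the uploader grants the bucket owner
# full control via the x-amz-acl header.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",  # hypothetical
            "Condition": {
                "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        }
    ],
}
```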

130
Q

EBS Volume and enable Enhanced Networking

A

Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.

131
Q

Resharding kinesis data stream

A

Splitting increases the number of shards in your stream and therefore increases the data capacity of the stream. Because you are charged on a per-shard basis, splitting increases the cost of your stream. Similarly, **merging reduces the number of shards** in your stream and therefore decreases the data capacity—and cost—of the stream.

If your data rate increases, you can also increase the number of shards allocated to your stream to maintain the application performance. You can reshard your stream using the UpdateShardCount API. The throughput of an Amazon Kinesis data stream is designed to scale without limits via increasing the number of shards within a data stream.
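A sketch of the UpdateShardCount request parameters (the stream name and target count are hypothetical):

```python
# Reshard the stream to 8 shards; UNIFORM_SCALING is the scaling type
# the API supports, distributing hash key ranges evenly.
update_shard_count_params = {
    "StreamName": "my-example-stream",  # hypothetical
    "TargetShardCount": 8,              # hypothetical, e.g. doubling from 4
    "ScalingType": "UNIFORM_SCALING",
}
```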

132
Q

Elastic Beanstalk logs

A

AWS Elastic Beanstalk stores your application files and optionally, server log files in Amazon S3. If you are using the AWS Management Console, the AWS Toolkit for Visual Studio, or AWS Toolkit for Eclipse, an Amazon S3 bucket will be created in your account and the files you upload will be automatically copied from your local client to Amazon S3. Optionally, you may configure Elastic Beanstalk to copy your server log files every hour to Amazon S3. You do this by editing the environment configuration settings.

133
Q

cross-account access

A

You can use an IAM role to delegate access to resources that are in different AWS accounts that you own. You share resources in one account with users in a different account. By setting up cross-account access in this way, you don’t need to create individual IAM users in each account. In addition, users don’t have to sign out of one account and sign into another in order to access resources that are in different AWS accounts.

You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You can also get a cost report for each member account that is associated with your master account. Consolidated billing is offered at no additional charge. AWS and AISPL accounts can’t be consolidated together.

The combined use of IAM and Consolidated Billing will support the autonomy of each corporate division while enabling corporate IT to maintain governance and cost oversight.