Random2 Flashcards

1
Q

DynamoDB stream

A

is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
A stream record contains information about a data modification to a single item in a DynamoDB table.
Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams.
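
A minimal boto3 sketch of this setup, assuming a hypothetical table named "Orders" and a Lambda function named "process-orders":

```python
import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# Enable a stream that captures both old and new item images on every modification
dynamodb.update_table(
    TableName="Orders",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)

# Look up the stream ARN and wire it to a Lambda trigger
stream_arn = dynamodb.describe_table(TableName="Orders")["Table"]["LatestStreamArn"]
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="process-orders",
    StartingPosition="LATEST",
    BatchSize=100,
)
```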

2
Q

RDS event subscription

A

RDS events only provide operational events such as DB instance events, DB parameter group events, DB security group events, and DB snapshot events.

3
Q

On which protocol does SSL run?

A

TCP

4
Q

CloudFront signed URL or signed cookie

A

Use signed URLs for the following cases:

  • You want to use an RTMP distribution. Signed cookies aren’t supported for RTMP distributions.
  • You want to restrict access to individual files, for example, an installation download for your application.
  • Your users are using a client (for example, a custom HTTP client) that doesn’t support cookies.

Use signed cookies for the following cases:

  • You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers’ area of a website.
  • You don’t want to change your current URLs.

Field-Level Encryption only allows you to securely upload user-submitted sensitive information (such as credit card numbers) to your web servers; it is not used to restrict who can access content.
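
A minimal sketch of generating a CloudFront signed URL with boto3/botocore, assuming a hypothetical key pair ID, distribution domain, and a local RSA private key file registered with CloudFront:

```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the private key that matches the public key in CloudFront
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JKJMDEHXQW5F", rsa_signer)  # hypothetical key pair ID
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/downloads/installer.exe",
    date_less_than=datetime.datetime(2026, 1, 1),  # URL stops working after this date
)
print(url)
```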

5
Q

S3 Object Lock retention modes

A

Retention modes:
1. Governance mode - some users (with special permissions) can still delete or update the object. Good for testing before moving to Compliance mode.
2. Compliance mode - a protected object version cannot be overwritten or deleted by any user, and the retention period cannot be changed.

Legal hold - used only with Object Lock; it prevents an object version from being deleted or overwritten. It doesn't have a retention period and is simply either in effect or not.
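
A hedged boto3 sketch of applying a retention period and a legal hold, assuming a bucket created with Object Lock enabled and hypothetical bucket/key names:

```python
import datetime
import boto3

s3 = boto3.client("s3")

# Lock an object version in Compliance mode until the given date
s3.put_object_retention(
    Bucket="audit-logs-bucket",
    Key="2024/ledger.csv",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.datetime(2026, 1, 1),
    },
)

# Place a legal hold, which is independent of any retention period
s3.put_object_legal_hold(
    Bucket="audit-logs-bucket",
    Key="2024/ledger.csv",
    LegalHold={"Status": "ON"},
)
```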

6
Q

Which custom metrics in CloudWatch do you have to set up manually?

A

You can also install the CloudWatch agent to collect more system-level metrics from Amazon EC2 instances. Here's a list of custom metrics that you can set up:

  • Memory utilization
  • Disk swap utilization
  • Disk space utilization
  • Page file utilization
  • Log collection
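
As a sketch of what "custom" means here, a script (or the CloudWatch agent on your behalf) has to publish these data points itself, for example with put_metric_data; the namespace, dimension, and value below are made up:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a memory-utilization data point; CloudWatch has no built-in memory metric for EC2
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Unit": "Percent",
        "Value": 72.5,
    }],
)
```
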
7
Q

NAT Gateway

A

NAT Gateways are charged on an hourly basis even for idle time.

8
Q

Amazon S3 gateway endpoint

A

VPC endpoints for Amazon S3 simplify access to S3 from within a VPC by providing configurable and highly reliable secure connections to S3 that do not require an internet gateway or Network Address Translation (NAT) device. When you create an S3 VPC endpoint, you can attach an endpoint policy to it that controls access to Amazon S3.

You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.
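
A minimal boto3 sketch of creating a gateway endpoint for S3, with hypothetical VPC and route table IDs; the service name follows the com.amazonaws.<region>.s3 pattern:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a gateway endpoint and add the S3 prefix-list route to the given route table
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def5678"],
)
```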

9
Q

AWS Cost Explorer

A

AWS Cost Explorer is a service that helps you visualize, understand, and analyze your AWS costs and usage. It provides a comprehensive set of tools and features to help you monitor and manage your AWS spending.
Its primary purpose is to give you insight into your AWS costs and usage patterns over time. It lets you view and analyze your historical spending data, forecast future costs, and identify cost-saving opportunities.
You can programmatically query your cost and usage data via the Cost Explorer API. You can query for aggregated data, such as total monthly costs or total daily usage, as well as granular data, such as the number of daily write operations for DynamoDB tables in your production environment.
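
A small sketch of the Cost Explorer API via boto3, querying monthly unblended cost grouped by service (the date range is an example):

```python
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print cost per service for each month in the range
for result in response["ResultsByTime"]:
    print(result["TimePeriod"]["Start"])
    for group in result["Groups"]:
        print(" ", group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```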

10
Q

Amazon Timestream

A

Amazon Timestream is great for storing and analyzing time-series data.

11
Q

EC2 instance cannot be accessed from the Internet (or vice versa)

A

Be sure that the subnet route table also has a route entry to the internet gateway. If this entry doesn’t exist, the instance is in a private subnet and is inaccessible from the internet.

  • Does it have an EIP or public IP address? It must have a public IP address.
  • Is the route table properly configured?
12
Q

Elastic Fabric Adapter

A

Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by AWS.

13
Q

Which of the following will occur when the EC2 instance is stopped and started?

A
  • The underlying host for the instance is possibly changed.
  • All data on the attached instance-store devices will be lost.

The option that says: The ENI (Elastic Network Interface) is detached is incorrect because the ENI will stay attached even if you stopped your EC2 instance.

The option that says: The Elastic IP address is disassociated with the instance is incorrect because the EIP will actually remain associated with your instance even after stopping it.

The option that says: There will be no changes is incorrect because there will be a lot of possible changes in your EC2 instance once you stop and start it again. AWS may move the virtualized EC2 instance to another host computer; the instance may get a new public IP address, and the data in your attached instance store volumes will be deleted.

14
Q

RDS Storage autoscaling

A

RDS Storage Auto Scaling continuously monitors actual storage consumption, and scales capacity up automatically when actual utilization approaches provisioned storage capacity. Auto Scaling works with new and existing database instances. You can enable Auto Scaling with just a few clicks in the AWS Management Console. There is no additional cost for RDS Storage Auto Scaling. You pay only for the RDS resources needed to run your applications.

15
Q

Kinesis Data Streams

A

Kinesis Data Streams supports changes to the data record retention period of your stream. A Kinesis data stream is an ordered sequence of data records meant to be written to and read from in real-time. Data records are therefore stored in shards in your stream temporarily.

The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis data stream stores records for 24 hours by default, and the retention period can be increased to a maximum of 8,760 hours (365 days).

This is why data is missing in your S3 bucket. To fix this, you can either configure your sensors to send data every day instead of every other day, or increase the retention period of your Kinesis data stream.
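
Increasing the retention period is a single API call; a boto3 sketch with a hypothetical stream name:

```python
import boto3

kinesis = boto3.client("kinesis")

# Raise retention from the 24-hour default to 7 days (168 hours)
kinesis.increase_stream_retention_period(
    StreamName="sensor-data",
    RetentionPeriodHours=168,
)
```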

16
Q

Amazon Aurora Parallel Query

A

This feature enables Amazon Aurora to push down and distribute the computational load of a single query across thousands of CPUs in Aurora's storage layer. With Parallel Query, query processing is pushed down to the Aurora storage layer. The query gains a large amount of computing power, and it needs to transfer far less data over the network. In the meantime, the Aurora database instance can continue serving transactions with much less interruption. This way, you can run transactional and analytical workloads alongside each other in the same Aurora database while maintaining high performance.

17
Q

Aurora cluster and reader endpoint

A

A reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections to the DB cluster. Use the reader endpoint for read operations, such as queries. By processing those statements on the read-only Aurora Replicas, this endpoint reduces the overhead on the primary instance. It also helps the cluster to scale the capacity to handle simultaneous SELECT queries, proportional to the number of Aurora Replicas in the cluster. Each Aurora DB cluster has one reader endpoint.

If the cluster contains one or more Aurora Replicas, the reader endpoint load balances each connection request among the Aurora Replicas. In that case, you can only perform read-only statements such as SELECT in that session. If the cluster only contains a primary instance and no Aurora Replicas, the reader endpoint connects to the primary instance. In that case, you can perform write operations through the endpoint.
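
A sketch of how an application might split traffic between the two endpoints, assuming hypothetical endpoint names and credentials and the PyMySQL driver:

```python
import pymysql

# Endpoint names are examples copied from the RDS console
WRITER = "mycluster.cluster-c1abcdef.us-east-1.rds.amazonaws.com"     # cluster (writer) endpoint
READER = "mycluster.cluster-ro-c1abcdef.us-east-1.rds.amazonaws.com"  # reader endpoint

# Reads go to the reader endpoint so Aurora can spread them across replicas
reads = pymysql.connect(host=READER, user="app", password="example", database="shop")
with reads.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM orders")
    print(cur.fetchone())

# Writes go to the cluster endpoint, which always resolves to the primary instance
writes = pymysql.connect(host=WRITER, user="app", password="example", database="shop")
with writes.cursor() as cur:
    cur.execute("INSERT INTO orders (status) VALUES ('new')")
writes.commit()
```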

18
Q

NACL rule execution

A

Rules are evaluated starting from the lowest rule number; the first rule that matches the traffic is applied and evaluation stops.
NACLs support both allow and deny rules.
A newly created custom NACL denies all traffic by default.
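
A boto3 sketch of the numbering behaviour, using a hypothetical NACL ID: rule 100 is evaluated before rule 200, so inbound HTTPS is allowed even though the later rule denies everything:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0abc1234",
    RuleNumber=100,                    # evaluated first
    Protocol="6",                      # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)

ec2.create_network_acl_entry(
    NetworkAclId="acl-0abc1234",
    RuleNumber=200,                    # only reached if rule 100 did not match
    Protocol="-1",                     # all protocols
    RuleAction="deny",
    Egress=False,
    CidrBlock="0.0.0.0/0",
)
```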

19
Q

AWS config tags

A

Since tags are case-sensitive, giving them a consistent naming format is a good practice. Depending on how your tagging rules are set up, a disorganized naming convention may lead to permission issues like the one described in the scenario. In this scenario, the administrator can leverage the required-tags managed rule in AWS Config. This rule checks whether a resource contains the tags that you specify.
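
A boto3 sketch of creating that managed rule, assuming the tag key "Environment" and a scope limited to EC2 instances (both illustrative choices):

```python
import json
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-environment",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "Environment"}),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)
```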

20
Q

CloudFront OAI

A

Key Features and Benefits of Using OAI:
Restrict Direct S3 Access: With OAI, objects in the S3 bucket are not publicly accessible via the S3 URL. Instead, the objects can only be accessed through CloudFront.

Improved Security: By restricting direct access to S3, you can prevent unauthorized users from bypassing CloudFront and directly accessing your content via S3.

Control Access to Specific Objects: You can create fine-grained access control by specifying which objects should be accessible through CloudFront using the OAI.

Leverage CloudFront Features: CloudFront provides caching, geographic-based distribution, lower latency, and improved performance for delivering your S3 content.

21
Q

Origin Access Identity (OAI)

A

You can configure an S3 bucket as the origin of a CloudFront distribution. OAI prevents users from viewing your S3 files by simply using the direct URL for the file. Instead, they would need to access it through a CloudFront URL.
To require that users access your content through CloudFront URLs, you perform the following tasks:
Create a special CloudFront user called an origin access identity.
Give the origin access identity permission to read the files in your bucket.
Remove permission for anyone else to use Amazon S3 URLs to read the files (through bucket policies or ACLs).
You cannot set OAI if your S3 bucket is configured as a website endpoint.
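
A sketch of the bucket-policy step with boto3, using a hypothetical bucket name and OAI ID; the policy allows only the OAI to read objects, and together with removing other grants it keeps everyone else out:

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE1ABCDE"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-origin-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="my-origin-bucket", Policy=json.dumps(policy))
```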

22
Q

Origin Access Control (OAC)

A

The preferred way (compared with OAI) to restrict access to an Amazon S3 origin

Enables CloudFront customers to easily secure their Amazon S3 Origins by permitting only designated CloudFront distributions to access their Amazon S3 buckets

AWS Signature Version 4 (SigV4) can be enabled on Amazon CloudFront requests to Amazon S3 buckets, with control over whether CloudFront signs requests and when a particular request is signed.

Server-side Encryption with AWS KMS keys (SSE-KMS) can also be enabled when performing uploads and downloads through the Amazon CloudFront distribution.

23
Q

EC2 limits

A
  • You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit per Region, purchasing 20 Reserved Instances per Region, and requesting Spot Instances per your dynamic Spot limit per Region. New AWS accounts may start with limits that are lower than the limits described here.
  • If you need more instances, complete the Amazon EC2 limit increase request form with your use case, and your limit increase will be considered. Limit increases are tied to the region they were requested for.
24
Q

AWS Backup

A

AWS Backup is a centralized backup service that makes it easy and cost-effective for you to backup your application data across AWS services in the AWS Cloud, helping you meet your business and regulatory backup compliance requirements. AWS Backup makes protecting your AWS storage volumes, databases, and file systems simple by providing a central place where you can configure and audit the AWS resources you want to backup, automate backup scheduling, set retention policies, and monitor all recent backup and restore activity.
In this scenario, you can use AWS Backup to create a backup plan with a retention period of 90 days. A backup plan is a policy expression that defines when and how you want to back up your AWS resources. You assign resources to backup plans, and AWS Backup then automatically backs up and retains backups for those resources according to the backup plan.
Note that the maximum retention period for automated backups in Amazon RDS is only 35 days, which is why AWS Backup is used to meet the 90-day requirement.
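
A minimal boto3 sketch of such a plan, assuming an existing vault named "Default" and a daily schedule (both illustrative); resources would still need to be assigned to the plan with a separate backup selection:

```python
import boto3

backup = boto3.client("backup")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-90-day-retention",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",   # 05:00 UTC every day
            "Lifecycle": {"DeleteAfterDays": 90},        # retain each recovery point for 90 days
        }],
    }
)
```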

25
Q

RDS backup

A

You cannot directly download or export an automated snapshot in RDS to Amazon S3. You have to copy the automated snapshot first so that it becomes a manual snapshot, which you can then export to an Amazon S3 bucket.
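
A boto3 sketch of that two-step flow, with hypothetical snapshot names, bucket, IAM role, and KMS key:

```python
import boto3

rds = boto3.client("rds")

# 1. Copy the automated snapshot (note the "rds:" prefix) into a manual snapshot
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:mydb-2024-05-01-06-10",
    TargetDBSnapshotIdentifier="mydb-manual-2024-05-01",
)

# 2. Export the manual snapshot to S3 (requires a pre-created IAM role and KMS key)
rds.start_export_task(
    ExportTaskIdentifier="mydb-export-2024-05-01",
    SourceArn="arn:aws:rds:us-east-1:123456789012:snapshot:mydb-manual-2024-05-01",
    S3BucketName="my-db-exports",
    IamRoleArn="arn:aws:iam::123456789012:role/rds-s3-export-role",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```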

26
Q

File Gateway

A

File Gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols. File Gateway allows your existing file-based applications or devices to use secure and durable cloud storage without needing to be modified. With File Gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares.
To store the backup data from on-premises to a durable cloud storage service, you can use File Gateway to store and retrieve objects through standard file storage protocols (SMB or NFS). File Gateway enables your existing file-based applications, devices, and workflows to use Amazon S3, without modification. File Gateway securely and durably stores both file contents and metadata as objects while providing your on-premises applications low-latency access to cached data.

27
Q

Active-Active Failover

A

Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Route 53 can detect that it’s unhealthy and stop including it when responding to queries.

In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the same routing policy (such as weighted or latency) are active unless Route 53 considers them unhealthy. Route 53 can respond to a DNS query using any healthy record.

28
Q

Active-Passive Failover

A

Use an active-passive failover configuration when you want a primary resource or group of resources to be available the majority of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the healthy primary resources. If all the primary resources are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS queries.

29
Q

Glacier retrieval

A

Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals are typically made available within 1–5 minutes. Provisioned Capacity ensures that retrieval capacity for Expedited retrievals is available when you need it.

To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs. If you have purchased provisioned capacity, then all expedited retrievals are automatically served through your provisioned capacity.

Provisioned capacity ensures that your retrieval capacity for expedited retrievals is available when you need it. Each unit of capacity provides that at least three expedited retrievals can be performed every five minutes and provides up to 150 MB/s of retrieval throughput. You should purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes. Without provisioned capacity Expedited retrievals are accepted, except for rare situations of unusually high demand. However, if you require access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity.

30
Q

AWS Database Migration Service

A
  • AWS Database Migration Service helps you migrate your databases to AWS with virtually no downtime. All data changes to the source database that occur during the migration are continuously replicated to the target, allowing the source database to be fully operational during the migration process.
  • You can set up a DMS task for either one-time migration or ongoing replication. An ongoing replication task keeps your source and target databases in sync. Once set up, the ongoing replication task will continuously apply source changes to the target with minimal latency.
  • It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora.
31
Q

ELB

A

An ELB can only operate within a single Region; it cannot distribute traffic across Regions.

32
Q

AWS Systems Manager

A

AWS Systems Manager is a collection of capabilities to help you manage your applications and infrastructure running in the AWS Cloud. Systems Manager simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.

34
Q

Parameter Store

A

Parameter Store provides secure, hierarchical storage for configuration data and secrets management. You can store data such as passwords, database strings, Amazon Elastic Compute Cloud (Amazon EC2) instance IDs and Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name you specified when you created the parameter. Parameter Store is also integrated with Secrets Manager. You can retrieve Secrets Manager secrets when using other AWS services that already support references to Parameter Store parameters.
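
A boto3 sketch of storing and reading an encrypted parameter, with a made-up parameter name and value:

```python
import boto3

ssm = boto3.client("ssm")

# Store a secret as an encrypted SecureString parameter
ssm.put_parameter(
    Name="/prod/app/db-password",
    Value="s3cr3t-example",
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it at runtime
result = ssm.get_parameter(Name="/prod/app/db-password", WithDecryption=True)
print(result["Parameter"]["Value"])
```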

35
Q

S3 cross account permissions

A

To be sure that a destination account owns an S3 object copied from another account, grant the destination account the permissions to perform the cross-account copy. Follow these steps to configure cross-account permissions to copy objects from a source bucket in Account A to a destination bucket in Account B:

  • Attach a bucket policy to the source bucket in Account A.
  • Attach an AWS Identity and Access Management (IAM) policy to a user or role in Account B.
  • Use the IAM user or role in Account B to perform the cross-account copy.
36
Q

Amazon WorkDocs

A

Amazon WorkDocs is commonly used to easily collaborate, share content, provide rich feedback, and collaboratively edit documents with other users. There is no direct way for you to integrate WorkDocs and an Amazon S3 bucket owned by a different AWS account.

37
Q

ports

A
  • 22 - SSH (runs over TCP)
  • 3389 - RDP
38
Q

Service control policies (SCPs)

A
  • Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines.
  • SCPs alone are not sufficient to grant permissions to the accounts in your organization. No permissions are granted by an SCP. An SCP defines a guardrail, or sets limits, on the actions that the account's administrator can delegate to the IAM users and roles in the affected accounts.
39
Q

AWS Control Tower

A

The AWS Control Tower service is commonly used to set up and govern a secure multi-account AWS environment.

40
Q

Bastion host set up

A

When setting up a bastion host in AWS, you should only allow the individual IP of the client and not the entire network. Therefore, in the Source, the proper CIDR notation should be used. The /32 denotes one IP address, and the /0 refers to the entire network.

41
Q

SNI Custom SSL

A

SNI Custom SSL allows multiple SSL certificates to be served from the same IP address by leveraging the SNI extension of the TLS protocol. It is efficient, cost-effective, and allows secure hosting for multiple domains on a single IP, but requires modern client support. It’s widely used in services like Amazon CloudFront to serve SSL traffic for various domains using a shared IP infrastructure.

42
Q

ALB

A

In order to connect to a service running on an instance, you need to make sure that both inbound traffic on the port that the service is listening on and outbound traffic from ephemeral ports are allowed in the associated network ACL. When a client connects to a server, a random port from the ephemeral port range (for example, 1024-65535) becomes the client’s source port.

The designated ephemeral port then becomes the destination port for return traffic from the service, so outbound traffic from the ephemeral port must be allowed in the network ACL. By default, network ACLs allow all inbound and outbound traffic. If your network ACL is more restrictive, then you need to explicitly allow traffic from the ephemeral port range.

The client that initiates the request chooses the ephemeral port range. The range varies depending on the client’s operating system.

  • Many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000.
  • Requests originating from Elastic Load Balancing use ports 1024-65535.
  • Windows operating systems through Windows Server 2003 use ports 1025-5000.
  • Windows Server 2008 and later versions use ports 49152-65535.
  • A NAT gateway uses ports 1024-65535.
  • AWS Lambda functions use ports 1024-65535.

For example, if a request comes into a web server in your VPC from a Windows 10 client on the Internet, your network ACL must have an outbound rule to enable traffic destined for ports 49152 - 65535. If an instance in your VPC is the client initiating a request, your network ACL must have an inbound rule to enable traffic destined for the ephemeral ports specific to the type of instance (Amazon Linux, Windows Server 2008, and so on).

In this scenario, you only need to allow incoming traffic on port 443 in the security group. Since security groups are stateful, return traffic for an allowed inbound connection is automatically allowed outbound; no separate outbound rule is required.

To enable the connection to a service running on an instance, the associated network ACL must allow both inbound traffic on the port that the service is listening on as well as outbound traffic from ephemeral ports. When a client connects to a server, a random port from the ephemeral port range (32768 - 65535) becomes the client’s source port. Since the return traffic will use an ephemeral port, outbound traffic must be allowed on these ports to destination 0.0.0.0/0.

43
Q

RESERVED INSTANCE after stopping

A
  • The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers’ unused Standard Reserved Instances, which vary in terms of lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity.
  • A stopped instance can still be restarted. Take note that when a Reserved Instance expires, any instances that were covered by it are billed at the On-Demand price, which costs significantly more.
44
Q

AWS Fargate

A

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers.

By default, Fargate tasks are given a minimum of 20 GiB of free ephemeral storage, which meets the storage requirement in the scenario.

45
Q

Denial of Service (DoS) attack

A

A Denial of Service (DoS) attack is an attack that can make your website or application unavailable to end users. To achieve this, attackers use a variety of techniques that consume network or other resources, disrupting access for legitimate end users.

To protect your system from a DDoS attack, you can do the following:

  • Use an Amazon CloudFront service for distributing both static and dynamic content.
  • Use an Application Load Balancer with Auto Scaling groups for your EC2 instances. Prevent direct Internet traffic to your Amazon RDS database by deploying it to a new private subnet.
  • Set up alerts in Amazon CloudWatch to look for high Network In and CPU utilization metrics.
46
Q

AWS WAF

A

AWS WAF is a web application firewall that helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. You can use AWS WAF to define customizable web security rules that control which traffic accesses your web applications. If you use AWS Shield Advanced, you can use AWS WAF at no extra cost for those protected resources and can engage the DRT to create WAF rules.