Random2 Flashcards
DynamoDB stream
is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
A stream record contains information about a data modification to a single item in a DynamoDB table.
Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams.
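As a rough sketch of such a trigger (the attribute handling and print-based processing are illustrative, not a prescribed pattern), a Lambda handler that reads DynamoDB stream records might look like this:

```python
# Minimal sketch of a Lambda function triggered by DynamoDB Streams.
# The event structure ('Records', 'eventName', 'dynamodb') follows the
# documented DynamoDB Streams event format; the handling below is illustrative.
def lambda_handler(event, context):
    records = event.get("Records", [])
    for record in records:
        event_name = record["eventName"]           # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"].get("Keys", {})  # key attributes of the modified item
        if event_name == "INSERT":
            new_image = record["dynamodb"].get("NewImage", {})
            print(f"New item added: {keys} -> {new_image}")
        elif event_name == "MODIFY":
            print(f"Item updated: {keys}")
        elif event_name == "REMOVE":
            print(f"Item deleted: {keys}")
    return {"processed": len(records)}
```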
RDS event subscription
RDS events only provide operational events such as DB instance events, DB parameter group events, DB security group events, and DB snapshot events.
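A minimal sketch of creating such a subscription with boto3 (the subscription name, SNS topic ARN, and event categories are placeholders):

```python
# Hedged sketch: subscribe an SNS topic to operational events for DB instances.
import boto3

rds = boto3.client("rds")

rds.create_event_subscription(
    SubscriptionName="prod-db-events",                            # placeholder name
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-events",  # placeholder ARN
    SourceType="db-instance",
    EventCategories=["failover", "failure", "maintenance"],
    Enabled=True,
)
```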
On which protocol does SSL run?
TCP
CloudFront signed URL or signed cookie
Use signed URLs for the following cases:
- You want to use an RTMP distribution. Signed cookies aren’t supported for RTMP distributions.
- You want to restrict access to individual files, for example, an installation download for your application.
- Your users are using a client (for example, a custom HTTP client) that doesn’t support cookies.
Use signed cookies for the following cases:
- You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers’ area of a website.
- You don’t want to change your current URLs.
Field-Level Encryption only allows you to securely upload user-submitted sensitive information to your web servers; it does not restrict access to content (use signed URLs or signed cookies for that).
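A hedged sketch of generating a signed URL with botocore's CloudFrontSigner (the key pair ID, private key file, distribution domain, object path, and the third-party rsa package are assumptions for illustration):

```python
# Sketch: create a time-limited CloudFront signed URL for a single file.
from datetime import datetime, timedelta

import rsa  # assumed third-party dependency for RSA-SHA1 signing
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "APKAEXAMPLE"           # hypothetical CloudFront key pair ID
PRIVATE_KEY_FILE = "private_key.pem"  # hypothetical private key file


def rsa_signer(message):
    # Sign the CloudFront policy with the private key associated with the key pair.
    with open(PRIVATE_KEY_FILE, "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")


signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL valid for one hour; domain and path are examples only.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/app/installer.zip",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)
```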
s3 retention mode
Retention modes:
1. Governance mode - users with the special s3:BypassGovernanceRetention permission can still overwrite or delete the object version or change its retention settings. Good for testing retention behavior before moving to Compliance mode.
2. Compliance mode - a protected object version can't be overwritten or deleted by any user, including the root user, and the retention period cannot be shortened or changed.
Legal hold - used only with Object Lock; prevents an object version from being deleted or overwritten. It has no retention period and is simply in effect or not until it is removed.
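A minimal boto3 sketch of applying retention and a legal hold (bucket and key names are placeholders; the bucket must have Object Lock enabled):

```python
# Sketch: set Object Lock retention and a legal hold on an object version.
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Governance mode: users with s3:BypassGovernanceRetention can still override it.
s3.put_object_retention(
    Bucket="example-locked-bucket",   # placeholder bucket
    Key="reports/2023-q1.csv",        # placeholder key
    Retention={
        "Mode": "GOVERNANCE",  # or "COMPLIANCE" for an immutable retention period
        "RetainUntilDate": datetime(2025, 1, 1, tzinfo=timezone.utc),
    },
)

# Legal hold: independent of any retention period, stays ON until explicitly removed.
s3.put_object_legal_hold(
    Bucket="example-locked-bucket",
    Key="reports/2023-q1.csv",
    LegalHold={"Status": "ON"},
)
```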
Which custom metrics in CloudWatch do you have to set up manually?
You can also install CloudWatch Agent to collect more system-level metrics from Amazon EC2 instances. Here’s the list of custom metrics that you can set up:
- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection
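The CloudWatch Agent publishes these for you, but as a rough sketch of the same idea, a custom metric can also be pushed with put_metric_data (the namespace, dimension, and value are illustrative):

```python
# Sketch: publish a custom memory-utilization data point to CloudWatch.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # placeholder custom namespace
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Unit": "Percent",
            "Value": 72.5,  # example measurement
        }
    ],
)
```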
NAT gateway
NAT Gateways are charged on an hourly basis even for idle time.
Amazon S3 gateway endpoint
VPC endpoints for Amazon S3 simplify access to S3 from within a VPC by providing configurable and highly reliable secure connections to S3 that do not require an internet gateway or Network Address Translation (NAT) device. When you create an S3 VPC endpoint, you can attach an endpoint policy to it that controls access to Amazon S3.
You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.
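A minimal boto3 sketch of creating a gateway endpoint for S3 (the VPC ID, route table ID, and Region in the service name are placeholders):

```python
# Sketch: create an S3 gateway endpoint and attach it to a route table.
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",     # Region-specific S3 service name
    RouteTableIds=["rtb-0123456789abcdef0"],      # a route to S3 is added to these tables
)
```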
AWS Cost Explorer
AWS Cost Explorer is a service that helps you visualize, understand, and analyze your AWS costs and usage. It provides a comprehensive set of tools and features to help you monitor and manage your AWS spending.
The primary purpose of AWS Cost Explorer is to help you gain insights into your AWS costs and usage patterns over time. It lets you **view and analyze your historical spending data, forecast future costs, and identify cost-saving opportunities.**
You can programmatically query your cost and usage data via the **Cost Explorer API**. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for DynamoDB database tables in your production environment.
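A rough sketch of such a query using the boto3 Cost Explorer client (the date range is illustrative):

```python
# Sketch: pull monthly unblended costs from the Cost Explorer API.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-04-01"},  # example range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

for period in response["ResultsByTime"]:
    amount = period["Total"]["UnblendedCost"]["Amount"]
    print(period["TimePeriod"]["Start"], amount)
```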
Amazon Timestream
Amazon Timestream is great for storing and analyzing time-series data.
EC2 instance cannot be accessed from the Internet (or vice versa)
Be sure that the subnet route table also has a route entry to the internet gateway. If this entry doesn’t exist, the instance is in a private subnet and is inaccessible from the internet.
- Does it have an Elastic IP (EIP) or public IP address? The instance must have a public IP address to be reachable from the internet.
- Is the route table properly configured?
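A hedged sketch of checking the route table with boto3 (the subnet ID is a placeholder; this assumes the subnet has an explicit route table association):

```python
# Sketch: determine whether a subnet's route table has a route to an internet gateway.
import boto3

ec2 = boto3.client("ec2")
subnet_id = "subnet-0123456789abcdef0"  # placeholder subnet

tables = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
)["RouteTables"]

has_igw_route = any(
    route.get("GatewayId", "").startswith("igw-")
    for table in tables
    for route in table.get("Routes", [])
)
print("Public subnet" if has_igw_route else "Private subnet (no IGW route)")
```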
Elastic Fabric Adapter
Elastic Fabric Adapter is just a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by AWS.
Which of the following will occur when the EC2 instance is stopped and started?
- The underlying host for the instance is possibly changed.
- All data on the attached instance-store devices will be lost.
The option that says: The ENI (Elastic Network Interface) is detached is incorrect because the ENI will stay attached even if you stopped your EC2 instance.
The option that says: The Elastic IP address is disassociated with the instance is incorrect because the EIP will actually remain associated with your instance even after stopping it.
The option that says: There will be no changes is incorrect because there will be a lot of possible changes in your EC2 instance once you stop and start it again. AWS may move the virtualized EC2 instance to another host computer; the instance may get a new public IP address, and the data in your attached instance store volumes will be deleted.
RDS Storage autoscaling
RDS Storage Auto Scaling continuously monitors actual storage consumption, and scales capacity up automatically when actual utilization approaches provisioned storage capacity. Auto Scaling works with new and existing database instances. You can enable Auto Scaling with just a few clicks in the AWS Management Console. There is no additional cost for RDS Storage Auto Scaling. You pay only for the RDS resources needed to run your applications.
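A minimal boto3 sketch of turning this on for an existing instance by setting a maximum storage threshold (the instance identifier and limit are placeholders):

```python
# Sketch: enable RDS storage autoscaling by setting a maximum allocated storage.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="example-db",  # placeholder instance
    MaxAllocatedStorage=1000,           # GiB ceiling up to which RDS may autoscale storage
    ApplyImmediately=True,
)
```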
Kinesis Data Streams
Kinesis Data Streams supports changes to the data record retention period of your stream. A Kinesis data stream is an ordered sequence of data records meant to be written to and read from in real-time. Data records are therefore stored in shards in your stream temporarily.
The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis data stream stores records for 24 hours by default, up to a maximum of 8760 hours (365 days).
This is the reason why there is missing data in your S3 bucket. To fix this, you can either configure your sensors to send the data every day instead of every other day, or alternatively, increase the retention period of your Kinesis data stream.
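A rough sketch of extending the retention period with boto3 (the stream name and new retention value are placeholders):

```python
# Sketch: extend a Kinesis data stream's retention period beyond the 24-hour default.
import boto3

kinesis = boto3.client("kinesis")

kinesis.increase_stream_retention_period(
    StreamName="sensor-data-stream",  # placeholder stream
    RetentionPeriodHours=72,          # can be raised up to 8760 hours (365 days)
)
```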
Amazon Aurora Parallel Query
This feature simply enables Amazon Aurora to push down and distribute the computational load of a single query across thousands of CPUs in Aurora's storage layer. With Parallel Query, query processing is pushed down to the Aurora storage layer. The query gains a large amount of computing power, and it needs to transfer far less data over the network. In the meantime, the Aurora database instance can continue serving transactions with much less interruption. This way, you can run transactional and analytical workloads alongside each other in the same Aurora database, while maintaining high performance.
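On Aurora MySQL versions that allow toggling it dynamically, Parallel Query can be switched on through the aurora_parallel_query cluster parameter; a hedged boto3 sketch (the parameter group name is a placeholder, and the parameter applies only to supported engine versions):

```python
# Sketch: enable Aurora Parallel Query via a cluster parameter group setting.
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="example-aurora-mysql-params",  # placeholder group
    Parameters=[
        {
            "ParameterName": "aurora_parallel_query",
            "ParameterValue": "ON",
            "ApplyMethod": "immediate",
        }
    ],
)
```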
Aurora cluster and reader endpoint
A reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections to the DB cluster. Use the reader endpoint for read operations, such as queries. By processing those statements on the read-only Aurora Replicas, this endpoint reduces the overhead on the primary instance. It also helps the cluster to scale the capacity to handle simultaneous SELECT queries, proportional to the number of Aurora Replicas in the cluster. Each Aurora DB cluster has one reader endpoint.
If the cluster contains one or more Aurora Replicas, the reader endpoint load balances each connection request among the Aurora Replicas. In that case, you can only perform read-only statements such as SELECT in that session. If the cluster only contains a primary instance and no Aurora Replicas, the reader endpoint connects to the primary instance. In that case, you can perform write operations through the endpoint.
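A hedged sketch of the read/write routing idea (the hostnames, credentials, table name, and the pymysql dependency are assumptions for illustration):

```python
# Sketch: send SELECT traffic to the reader endpoint and writes to the cluster endpoint.
import pymysql  # assumed MySQL client library

WRITER_HOST = "mycluster.cluster-c7tj4example.us-east-1.rds.amazonaws.com"     # cluster (writer) endpoint
READER_HOST = "mycluster.cluster-ro-c7tj4example.us-east-1.rds.amazonaws.com"  # reader endpoint


def get_connection(read_only: bool):
    # Reads are load-balanced across Aurora Replicas via the reader endpoint;
    # writes must go to the primary through the cluster endpoint.
    host = READER_HOST if read_only else WRITER_HOST
    return pymysql.connect(host=host, user="admin", password="secret", database="app")


conn = get_connection(read_only=True)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")  # read-only query on the replicas
        print(cur.fetchone())
finally:
    conn.close()
```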
NACL rule execution
Rules are evaluated in order, starting from the lowest-numbered rule; the first rule that matches the traffic is applied and evaluation stops.
NACLs support both allow and deny rules.
A custom NACL you create denies all traffic by default (the VPC's default NACL allows all traffic).
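A minimal boto3 sketch of adding a numbered allow rule to a NACL (the NACL ID is a placeholder):

```python
# Sketch: add an inbound allow rule for HTTPS to a network ACL.
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL
    RuleNumber=100,                        # lower rule numbers are evaluated first
    Protocol="6",                          # TCP
    RuleAction="allow",
    Egress=False,                          # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
```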