Section 19: AWS Integration & Messaging: SQS, SNS & Kinesis: Quiz Flashcards
D. Do nothing; SQS scales automatically. No note.
A. Increase the DelaySeconds parameter. Note is: Good job!
An SQS Delay Queue postpones delivery of new messages: Amazon SQS keeps them invisible to consumers for a period of time. With a delay queue, a message is hidden when it is first added to the queue. (default: 0 seconds, max.: 15 minutes)
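As a sketch of how this looks with boto3 (the helper names here are illustrative, not part of any AWS API), the delay is configured via the `DelaySeconds` queue attribute:

```python
def delay_queue_attributes(delay_seconds: int = 60) -> dict:
    """Attributes for a delay queue; DelaySeconds ranges from 0 (default) to 900 (15 min)."""
    if not 0 <= delay_seconds <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900 seconds")
    return {"DelaySeconds": str(delay_seconds)}

def create_delay_queue(name: str, delay_seconds: int = 60):
    """Create the queue (sketch only; requires AWS credentials to actually run)."""
    import boto3  # assumed installed
    sqs = boto3.client("sqs")
    return sqs.create_queue(QueueName=name,
                            Attributes=delay_queue_attributes(delay_seconds))
```

`DelaySeconds` can also be passed per message on `send_message` to override the queue default.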
C. Increase visibility timeout. Note is: Good job!
SQS Visibility Timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing the same message again. With a visibility timeout, a message is hidden only after it is consumed from the queue. Increasing the visibility timeout gives the consumer more time to process the message and prevents it from being read twice. (default: 30 sec., min.: 0 sec., max.: 12 hours)
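A minimal sketch with boto3 (helper names are illustrative): the timeout can be set queue-wide, or extended for a single in-flight message with `change_message_visibility`:

```python
def visibility_attributes(timeout_seconds: int = 30) -> dict:
    """VisibilityTimeout attribute: 0 to 43,200 seconds (12 hours); default 30."""
    if not 0 <= timeout_seconds <= 43_200:
        raise ValueError("VisibilityTimeout must be between 0 and 43200 seconds")
    return {"VisibilityTimeout": str(timeout_seconds)}

def extend_message_visibility(queue_url: str, receipt_handle: str, timeout_seconds: int):
    """Give one in-flight message more processing time (sketch; needs AWS credentials)."""
    import boto3  # assumed installed
    boto3.client("sqs").change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=receipt_handle,
        VisibilityTimeout=timeout_seconds,
    )
```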
B. SQS Dead Letter Queue. Note is: Good job!
SQS Dead Letter Queue is where other SQS queues (source queues) can send messages that can’t be processed (consumed) successfully. It’s useful for debugging as it allows you to isolate problematic messages so you can debug why their processing doesn’t succeed.
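A source queue is wired to its DLQ through a `RedrivePolicy` attribute. A sketch with boto3 (helper names are illustrative; the policy keys `deadLetterTargetArn` and `maxReceiveCount` are the real ones):

```python
import json

def redrive_policy_attributes(dlq_arn: str, max_receives: int = 3) -> dict:
    """RedrivePolicy: after max_receives failed receives, SQS moves the
    message to the dead letter queue identified by dlq_arn."""
    policy = {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": str(max_receives)}
    return {"RedrivePolicy": json.dumps(policy)}

def attach_dead_letter_queue(queue_url: str, dlq_arn: str, max_receives: int = 3):
    """Sketch; requires AWS credentials to actually run."""
    import boto3  # assumed installed
    boto3.client("sqs").set_queue_attributes(
        QueueUrl=queue_url,
        Attributes=redrive_policy_attributes(dlq_arn, max_receives),
    )
```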
D. SQS FIFO Queue. Note is: Good job!
SQS FIFO (First-In-First-Out) Queues have all the capabilities of SQS Standard Queues, plus two features. First, the order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it. Second, duplicate messages are not introduced into the queue.
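Sending to a FIFO queue requires a `MessageGroupId` (ordering is preserved per group) and, unless content-based deduplication is enabled, a `MessageDeduplicationId`. A sketch with boto3 (helper names are illustrative):

```python
def fifo_send_params(queue_url: str, body: str, group_id: str, dedup_id: str) -> dict:
    """SendMessage parameters for a FIFO queue (queue names/URLs end in .fifo).
    MessageGroupId preserves ordering within a group; MessageDeduplicationId
    suppresses duplicates within a 5-minute deduplication window."""
    if not queue_url.endswith(".fifo"):
        raise ValueError("FIFO queue URLs end with .fifo")
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,
        "MessageDeduplicationId": dedup_id,
    }

def send_fifo_message(**params):
    """Sketch; requires AWS credentials to actually run."""
    import boto3  # assumed installed
    return boto3.client("sqs").send_message(**params)
```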
B. SNS + SQS Fan Out Pattern. Note is: Good job!
This is a common pattern where only one message is sent to the SNS topic and is then “fanned out” to multiple SQS queues. This approach has the following features: it is fully decoupled, there is no data loss, and you can add more SQS queues (more subscriber applications) over time.
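A sketch of the wiring with boto3 (helper names are illustrative): each queue is subscribed to the topic, and each queue needs an access policy allowing the topic to send to it:

```python
import json

def sns_to_sqs_policy(queue_arn: str, topic_arn: str) -> str:
    """Queue access policy allowing the SNS topic to deliver messages to the queue;
    set it as the queue's Policy attribute."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })

def fan_out(topic_arn: str, queue_arns: list):
    """Subscribe each SQS queue to the topic (sketch; requires AWS credentials)."""
    import boto3  # assumed installed
    sns = boto3.client("sns")
    for arn in queue_arns:
        sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=arn)
```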
B. Add more shards. Note is: Good job!
The capacity limits of a Kinesis data stream are defined by the number of shards within the stream. The limits can be exceeded by either data throughput or the number of read calls. Each shard allows 1 MB/s of incoming data and 2 MB/s of outgoing data. Increase the number of shards in your data stream to provide enough capacity.
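A sketch of the sizing math and the resharding call with boto3 (the helper names and the 1,000 records/s per-shard write limit used below are stated assumptions; `update_shard_count` is the real API):

```python
import math

def required_shards(in_mb_per_s: float, out_mb_per_s: float,
                    records_per_s: float = 0) -> int:
    """Smallest shard count covering the load: each shard supports 1 MB/s
    (and 1,000 records/s) of ingest and 2 MB/s of egress."""
    return max(
        math.ceil(in_mb_per_s / 1.0),
        math.ceil(out_mb_per_s / 2.0),
        math.ceil(records_per_s / 1000.0),
        1,
    )

def scale_stream(stream_name: str, target_shards: int):
    """Sketch; requires AWS credentials to actually run."""
    import boto3  # assumed installed
    boto3.client("kinesis").update_shard_count(
        StreamName=stream_name,
        TargetShardCount=target_shards,
        ScalingType="UNIFORM_SCALING",
    )
```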
C. For each record sent to Kinesis, add a partition key that represents the identity of the user. Note is: Question 8:
You have a website where you want to analyze clickstream data, such as the sequence of clicks a user makes, the amount of time a user spends, where the navigation begins, and how it ends. You decided to use Amazon Kinesis, so you configured the website to send the clickstream data to a Kinesis data stream. While checking the data sent to your Kinesis data stream, you found that the users’ data is not ordered and that the data for one individual user is spread across many shards. How would you fix this problem?
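Records with the same partition key always hash to the same shard, so using the user id as the key keeps one user's events together and in order. A sketch with boto3 (helper names are illustrative):

```python
import json

def clickstream_record(stream_name: str, user_id: str, event: dict) -> dict:
    """PutRecord parameters: using the user id as PartitionKey routes all of one
    user's events to the same shard, preserving per-user ordering."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": user_id,
    }

def send_click(stream_name: str, user_id: str, event: dict):
    """Sketch; requires AWS credentials to actually run."""
    import boto3  # assumed installed
    return boto3.client("kinesis").put_record(
        **clickstream_record(stream_name, user_id, event))
```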
C. Amazon Kinesis Data Analytics. Note is: Good job!
Use Kinesis Data Analytics with Kinesis Data Streams as the underlying source of data.
C. Kinesis Data Streams + Kinesis Data Firehose. Note is: Good job!
This is a perfect combination of technologies for loading near-real-time data into S3 and Redshift. Kinesis Data Firehose supports custom data transformations using AWS Lambda.
A. Amazon Kinesis Data Streams. Note is: Good job!
Note: Kinesis Data Firehose is now supported, but not Kinesis Data Streams.
B. Amazon SNS. No note.
A. SNS Message Filtering. Note is: Good job!
SNS Message Filtering allows you to filter messages sent to SNS topic’s subscriptions.
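A filter policy is a JSON document attached to a subscription; SNS then delivers only messages whose attributes match. A sketch with boto3, plus a simplified local approximation of exact-match filtering (the `matches` helper is ours, not an AWS API):

```python
import json

def apply_filter_policy(subscription_arn: str, policy: dict):
    """Attach a filter policy so this subscription only receives matching
    messages (sketch; requires AWS credentials to actually run)."""
    import boto3  # assumed installed
    boto3.client("sns").set_subscription_attributes(
        SubscriptionArn=subscription_arn,
        AttributeName="FilterPolicy",
        AttributeValue=json.dumps(policy),
    )

def matches(policy: dict, attributes: dict) -> bool:
    """Simplified exact-match check: every policy key must appear in the
    message attributes with one of the allowed values."""
    return all(attributes.get(key) in allowed for key, allowed in policy.items())
```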
B. Implement a Dead Letter Queue with a redrive policy. No note.
B. Enable Long Polling. No note.
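Long polling makes `ReceiveMessage` wait up to 20 seconds for a message instead of returning immediately, cutting empty responses and API cost. A sketch with boto3 (helper names are illustrative):

```python
def long_poll_params(queue_url: str, wait_seconds: int = 20) -> dict:
    """ReceiveMessage parameters; WaitTimeSeconds > 0 enables long polling
    (0 is short polling; 20 seconds is the maximum)."""
    if not 0 <= wait_seconds <= 20:
        raise ValueError("WaitTimeSeconds must be between 0 and 20")
    return {
        "QueueUrl": queue_url,
        "WaitTimeSeconds": wait_seconds,
        "MaxNumberOfMessages": 10,
    }

def long_poll(queue_url: str, wait_seconds: int = 20):
    """Sketch; requires AWS credentials to actually run."""
    import boto3  # assumed installed
    return boto3.client("sqs").receive_message(**long_poll_params(queue_url, wait_seconds))
```

Long polling can also be enabled queue-wide via the `ReceiveMessageWaitTimeSeconds` queue attribute.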