DynamoDB Flashcards

1
Q

With DynamoDB you don’t have to worry about

A

Hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

2
Q

DynamoDB is a _______ database.

A

NoSQL

3
Q

How much data can a DynamoDB table store?

A

Any amount of data.

4
Q

Two types of backup

A
  1. On-demand
  2. Point-in-time recovery

5
Q

With point-in-time recovery, how many days can you go back in time

A

35 days with per second granularity.

6
Q

How is data made highly available and durable?

A

All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in high availability and data durability.

7
Q

DynamoDB table terminology

A

Tables contain items, and items are composed of attributes.

8
Q

What uniquely identifies each item?

A

Primary key

9
Q

_______ provides more flexible querying.

A

Secondary Index

10
Q

Which attributes in a DynamoDB table are schemaless?

A

All attributes except the primary key attributes.

11
Q

DynamoDB supports nested attributes up to __ levels deep.

A

32

12
Q

Types of primary keys

A
  1. Partition key
  2. Partition key and sort key

13
Q

A simple primary key, composed of one attribute, is known as a ___________

A

Partition key

14
Q

What is a composite key?

A

A composite key is made of two attributes: a partition key and a sort key.

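As a concrete sketch of a composite key, here is roughly what a CreateTable request body looks like with both key attributes declared. The table and attribute names (`Music`, `Artist`, `SongTitle`) are illustrative, not from the cards:

```python
# Hypothetical CreateTable parameters illustrating a composite primary key.
# Table and attribute names are made up for illustration.
create_table_params = {
    "TableName": "Music",
    "KeySchema": [
        {"AttributeName": "Artist", "KeyType": "HASH"},      # partition key
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
}
```

A dict of this shape could be passed as keyword arguments to a DynamoDB client's `create_table` call.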
15
Q

DynamoDB uses the partition key value as input to an _____________

A

internal hash function.

16
Q

The output from the hash function determines the _______ in which the item will be stored.

A

partition

17
Q

All items with the _________ value are stored together, in sorted order by _________.

A

same partition key, sort key value

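Cards 15-17 describe hash-based partitioning, which can be sketched locally. DynamoDB's real hash function is internal and undocumented; `md5` and the fixed partition count below are stand-ins:

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative; DynamoDB manages partition count itself

def choose_partition(partition_key: str) -> int:
    """Stand-in for DynamoDB's internal hash function: the hash of the
    partition key value determines the partition that stores the item."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def store(partitions, item, pk_attr, sk_attr):
    """Items with the same partition key land together, kept in sorted
    order by sort key value."""
    p = choose_partition(item[pk_attr])
    partitions[p].append(item)
    partitions[p].sort(key=lambda i: i[sk_attr])
    return p

partitions = {i: [] for i in range(NUM_PARTITIONS)}
for song in ["B-side", "Anthem", "Coda"]:
    store(partitions, {"Artist": "Acme", "SongTitle": song}, "Artist", "SongTitle")
# All three items share a partition key, so they land in the same
# partition, sorted by SongTitle.
```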
18
Q

The partition key of an item is also known as its ________

A

hash attribute

19
Q

The sort key of an item is also known as its________

A

range attribute.

20
Q

Each primary key attribute must be a ______________

A

scalar (meaning that it can hold only a single value)

21
Q

The only data types allowed for primary key attributes are _____________________

A

string, number, or binary.

22
Q

What is a secondary index?

A

Secondary indexes are optional and let you query the data using alternate keys, in addition to queries against the primary key.

23
Q

Types of secondary indexes.

A
  1. Global secondary index.
  2. Local secondary index.

24
Q

Secondary indexes quota

A

20 global secondary indexes and 5 local secondary indexes per table.

25
Q

How are indexes kept up to date when a table is updated?

A

DynamoDB maintains the indexes automatically.

26
Q

What are DynamoDB Streams?

A

DynamoDB Streams is an optional feature that captures data modification events in DynamoDB tables.

27
Q

How do events appear in DynamoDB Streams?

A

Data about the events appears in the stream in near-real time, in the order that the events occurred.

28
Q

Each event is represented by a _________

A

stream record.

29
Q

If you enable a stream on a table, DynamoDB Streams writes a stream record whenever one of the following events occurs:

A
  1. New item is added
  2. An item is updated
  3. An item is deleted from table
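A stream record distinguishes these three events through its `eventName` field (`INSERT`, `MODIFY`, `REMOVE`). A trimmed, illustrative record shape (real records also carry eventID, eventSource, awsRegion, and sequence/size metadata; the key values below are made up):

```python
# Illustrative stream record for an "item added" event.
stream_record = {
    "eventName": "INSERT",  # MODIFY for updates, REMOVE for deletes
    "dynamodb": {
        "Keys": {"Artist": {"S": "Acme"}},
        "NewImage": {"Artist": {"S": "Acme"}, "Plays": {"N": "1"}},
    },
}

def event_kind(record: dict) -> str:
    """Map a stream record's eventName onto the card's three events."""
    return {"INSERT": "added", "MODIFY": "updated", "REMOVE": "deleted"}[
        record["eventName"]
    ]
```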
30
Q

Each stream record also contains the ______, _______ and __________

A

Name of the table, the event timestamp, and other metadata.

31
Q

Stream records have a lifetime of _________ after that, they are automatically removed from the stream.

A

24 hours;

32
Q

Table names and index names must be between ____ and ____ characters long, and can contain only the following characters:

A

3 and 255; a-z, A-Z, 0-9, _ (underscore), - (hyphen), . (dot)

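The naming rule above maps directly onto a regular expression; a local validation sketch (not an AWS API):

```python
import re

# 3-255 characters drawn from a-z, A-Z, 0-9, underscore, hyphen, and dot.
TABLE_NAME_RE = re.compile(r"^[a-zA-Z0-9_.-]{3,255}$")

def is_valid_table_name(name: str) -> bool:
    """Check a candidate table or index name against the naming rule."""
    return TABLE_NAME_RE.fullmatch(name) is not None
```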
33
Q

Attribute names must be at least ___ character long, but no greater than ____.

A

one; 64 KB

34
Q

These attribute names must be no greater than 255 characters long -

A
  1. Secondary index partition key names.
  2. Secondary index sort key names.

35
Q

The minimum length of a string can be zero, if the attribute is not used as a key for an index or table, and is constrained by the maximum DynamoDB item size limit of _____

A

400 KB.

36
Q

When your application writes data to a DynamoDB table and receives an _____________, the write has occurred and is durable.

A

HTTP 200 response (OK)

37
Q

Data consistency

A

The data is eventually consistent across all storage locations, usually within one second or less.

38
Q

DynamoDB supports ______________ and _________ reads.

A

eventually consistent and strongly consistent

39
Q

Default read type

A

eventually consistent

40
Q

Disadvantages of strongly consistent reads

A
  1. A strongly consistent read might not be available if there is a network delay or outage. In this case, DynamoDB may return a server error (HTTP 500).
  2. Strongly consistent reads may have higher latency than eventually consistent reads.
  3. Strongly consistent reads are not supported on global secondary indexes.
  4. Strongly consistent reads use more throughput capacity than eventually consistent reads.
41
Q

DynamoDB uses __________, unless you specify otherwise

A

eventually consistent reads

42
Q

What are the types of read/write capacity modes

A
  1. On-demand
  2. Provisioned (default, free-tier eligible)

43
Q

What is the purpose of capacity modes

A

Capacity modes decide how you are charged for read/write throughput and how you manage capacity.

44
Q

How do you allocate capacity modes for LSIs?

A

LSIs inherit the capacity mode from the base table.

45
Q

How to serve requests without capacity planning?

A

With on-demand capacity mode

46
Q

How does on-demand capacity mode charge for DynamoDB?

A

On-demand capacity mode offers pay-per-request pricing for reads and writes.

47
Q

How does on-demand mode work?

A

When enabled, on-demand mode accommodates workloads as they ramp up and down.

48
Q

DynamoDB tables that use on-demand mode offer:

A

The same single-digit millisecond latency, SLA commitment, and security that DynamoDB already offers.

49
Q

When is on-demand capacity mode a good option?

A
  1. You have new tables with unknown workloads
  2. You have unpredictable application traffic.
  3. You prefer the ease of paying for only what you use.
50
Q

Default throughput quotas for tables, accounts, and indexes in on-demand mode

A

40,000 read request units and 40,000 write request units per table. Per-account and per-index throughput quotas do not apply in on-demand mode.

51
Q

Can you switch a table to on-demand mode after it is created?

A

Yes. You can enable on-demand mode when creating a table or by updating an existing table.

52
Q

How often can you switch between capacity modes?

A

You can switch between read/write capacity modes once every 24 hours.

53
Q

How much read/write throughput do you specify for on-demand?

A

None; you don't specify read/write throughput in on-demand mode.

54
Q

How does DynamoDB charge for reads in on-demand mode?

A

For reads of up to 4 KB:

  1. 1 RRU for one strongly consistent read
  2. 0.5 RRU for one eventually consistent read
  3. 2 RRUs for one transactional read
  4. Items larger than 4 KB require additional RRUs.
55
Q

How does DynamoDB charge for writes in on-demand mode?

A

For writes of up to 1 KB:

  1. 1 WRU per standard write
  2. 2 WRUs per transactional write
  3. Items larger than 1 KB require additional WRUs.
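The request-unit arithmetic from cards 54-55 can be sketched as a small calculator. Sizes round up to 4 KB chunks for reads and 1 KB chunks for writes; note that an eventually consistent read costs half a read request unit under DynamoDB's pricing model. The function names are illustrative:

```python
import math

def read_request_units(item_size_bytes: int, consistency: str = "eventual") -> float:
    """RRUs consumed by one read, rounded up to 4 KB chunks.
    Strongly consistent: 1 RRU per chunk; eventually consistent: 0.5;
    transactional: 2."""
    chunks = math.ceil(item_size_bytes / 4096)
    per_chunk = {"strong": 1.0, "eventual": 0.5, "transactional": 2.0}[consistency]
    return chunks * per_chunk

def write_request_units(item_size_bytes: int, transactional: bool = False) -> int:
    """WRUs consumed by one write, rounded up to 1 KB chunks;
    transactional writes cost double."""
    chunks = math.ceil(item_size_bytes / 1024)
    return chunks * (2 if transactional else 1)
```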
56
Q

How does on-demand mode adapt to peak traffic?

A

On-demand mode instantly accommodates up to double your previous peak traffic. When your workload drives a new peak, that becomes the new previous peak that further scaling is based on.

57
Q

Since on-demand mode scales up and down with traffic peaks, can throttling still occur?

A

Yes. Throttling can occur if you exceed double your previous peak within 30 minutes.

58
Q

What is the previous peak setting for a newly created table in on-demand capacity mode?

A

The previous peak is 2,000 write request units or 6,000 read request units. You can drive up to double the previous peak immediately, which enables newly created on-demand tables to serve up to 4,000 write request units or 12,000 read request units, or any linear combination of the two.

59
Q

What is the previous peak setting for a table updated to on-demand capacity mode?

A

The previous peak is half the maximum write capacity units and read capacity units provisioned since the table was created, or the settings for a newly created table with on-demand capacity mode, whichever is higher. In other words, your table will deliver at least as much throughput as it did prior to switching to on-demand capacity mode.

60
Q

Table Behavior while Switching Read/Write Capacity Mode

A

When you switch a table from provisioned capacity mode to on-demand capacity mode, DynamoDB makes several changes to the structure of your table and partitions. This process can take several minutes. During the switching period, your table delivers throughput that is consistent with the previously provisioned write capacity unit and read capacity unit amounts. When switching from on-demand capacity mode back to provisioned capacity mode, your table delivers throughput consistent with the previous peak reached when the table was set to on-demand capacity mode.

61
Q

How is throughput specified in provisioned mode?

A

You specify the number of reads and writes per second that you require for your application.

62
Q

When is provisioned mode a good option?

A
  1. You have predictable application traffic.
  2. You run applications whose traffic is consistent or ramps up gradually.
  3. You can forecast capacity requirements to control costs.
63
Q

How does DynamoDB charge for writes in provisioned mode?

A

For writes of up to 1 KB:

  1. 1 WCU per standard write per second
  2. 2 WCUs per transactional write per second
  3. Items larger than 1 KB require additional WCUs.
64
Q

How does DynamoDB charge for reads in provisioned mode?

A

For reads of up to 4 KB:

  1. 1 RCU for one strongly consistent read per second
  2. 0.5 RCU for one eventually consistent read per second
  3. 2 RCUs for one transactional read per second
  4. Items larger than 4 KB require additional RCUs.
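Provisioned-mode sizing applies the same size rounding, but per second of sustained traffic. A hedged sketch (function names are illustrative):

```python
import math

def required_rcus(reads_per_second: int, item_size_bytes: int,
                  strongly_consistent: bool = False) -> float:
    """One RCU covers one strongly consistent read per second (or two
    eventually consistent reads) of an item up to 4 KB."""
    chunks = math.ceil(item_size_bytes / 4096)
    per_read = 1.0 if strongly_consistent else 0.5
    return reads_per_second * chunks * per_read

def required_wcus(writes_per_second: int, item_size_bytes: int) -> int:
    """One WCU covers one write per second of an item up to 1 KB."""
    return writes_per_second * math.ceil(item_size_bytes / 1024)
```

For example, 100 strongly consistent reads per second of 4 KB items need 100 RCUs, but only 50 RCUs if eventual consistency is acceptable.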
65
Q

When calling DescribeTable on an on-demand table, read capacity units and write capacity units are set to ___

A

0

66
Q

For provisioned mode, __________ is the maximum amount of capacity that an application can consume from a table or index.

A

Provisioned throughput

67
Q

For provisioned mode, when does an application experience throttling?

A

If your application exceeds your provisioned throughput capacity on a table or index, it is subject to request throttling.

68
Q

When a request is throttled, it fails with an ________

A

HTTP 400 code

69
Q

If you use the AWS Management Console to create a table or a global secondary index, DynamoDB ________ is enabled by default.

A

auto scaling

70
Q

With ____________, you pay a one-time upfront fee and commit to a minimum provisioned usage level over a period of time.

A

reserved capacity

71
Q

With ___________, you realize significant cost savings compared to on-demand or provisioned throughput settings.

A

Reserved Capacity

72
Q

Reserved capacity is not available in _____________

A

on-demand mode.

73
Q

Any capacity that you provision in excess of your reserved capacity is billed at _________ rates

A

standard provisioned capacity.

74
Q

Amazon DynamoDB stores data in ___________.

A

partitions

75
Q

What is a partition?

A

A partition is an allocation of storage for a table, backed by solid state drives (SSDs) and automatically replicated across multiple Availability Zones within an AWS Region.

76
Q

Partition management is handled entirely by ________

A

DynamoDB

77
Q

DynamoDB allocates additional partitions to a table in the following situations:

A
  1. If you increase the table’s provisioned throughput settings beyond what the existing partitions can support.
  2. If an existing partition fills to capacity and more storage space is required.
78
Q

Global secondary indexes in DynamoDB are composed of _________

A

partitions.

79
Q

The data in a ______________ is stored separately from the data in its base table

A

global secondary index

80
Q

DynamoDB stores and retrieves each item based on its ____________

A

partition key value.

81
Q

To read an item from the table, you must specify the _________ for the item.

A

partition key value

82
Q

In most cases, DynamoDB response times can be measured in single-digit milliseconds. However, certain use cases require response times in microseconds. For these use cases, __________ delivers fast response times for accessing _______

A

DynamoDB Accelerator (DAX); eventually consistent data.

83
Q

What is DAX

A

DAX or DynamoDB Accelerator is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications.

84
Q

DAX supports _________ encryption.

A

server-side encryption and encryption in transit.

85
Q

DAX supports encryption in transit by ensuring all requests and responses between your application and the cluster are encrypted by ___________, and connections to the cluster can be authenticated by ____________

A

Transport Layer Security (TLS); verification of a cluster x509 certificate.

86
Q

DAX writes data to disk as part of _____________

A

propagating changes from the primary node to read replicas.

87
Q

DAX provides access to __________ data from DynamoDB tables, with ____________

A

eventually consistent; microsecond latency.

88
Q

A ________ DAX cluster can serve millions of requests per second.

A

Multi-AZ

89
Q

DAX is ideal for

A
  1. Applications that require the fastest possible response time for reads.
  2. Applications that read a small number of items more frequently than others.
  3. Applications that are read-intensive, but are also cost-sensitive. With DynamoDB, you provision the number of reads per second that your application requires. If read activity increases, you can increase your tables’ provisioned read throughput (at an additional cost). Or, you can offload the activity from your application to a DAX cluster, and reduce the number of read capacity units that you need to purchase otherwise.
  4. Applications that require repeated reads against a large set of data.
90
Q

DAX is not ideal for the following types of applications:

A
  1. Applications that require strongly consistent reads (or that cannot tolerate eventually consistent reads).
  2. Applications that do not require microsecond response times for reads, or that do not need to offload repeated read activity from underlying tables.
  3. Applications that are write-intensive, or that do not perform much read activity.
  4. Applications that are already using a different caching solution with DynamoDB, and are using their own client-side logic for working with that caching solution.
91
Q

DAX supports applications written in ________, using AWS-provided clients for those programming languages.

A

Go, Java, Node.js, Python, and .NET

92
Q

DAX is only available for the ______ platform. (There is no support for the ____________.)

A

EC2-VPC platform; EC2-Classic platform

93
Q

What is memory exhaustion in the DAX cluster?

A

DAX clusters maintain metadata about the attribute names of items they store. That metadata is maintained indefinitely (even after the item has expired or been evicted from the cache). Applications that use an unbounded number of attribute names can, over time, cause memory exhaustion in the DAX cluster. This limitation applies only to top-level attribute names, not nested attribute names. (For example, using a timestamp value as a top-level attribute name causes this problem.)

94
Q

Amazon DynamoDB Accelerator (DAX) is designed to run within an _________ environment.

A

Amazon Virtual Private Cloud (Amazon VPC)

95
Q

You can launch a DAX cluster in your virtual network and control access to the cluster by using __________.

A

Amazon VPC security groups

96
Q

To create a DAX cluster, you use the _________. Unless you specify otherwise, your DAX cluster runs within your _______.

A

AWS Management Console; default VPC

97
Q

To run your application, you launch an Amazon EC2 instance into your Amazon VPC. You then deploy your _________ on the EC2 instance.

A

application (with the DAX client)

98
Q

How are requests handled in DAX?

A

At runtime, the DAX client directs all of your application’s DynamoDB API requests to the DAX cluster. If DAX can process one of these API requests directly, it does so. Otherwise, it passes the request through to DynamoDB.

99
Q

How DAX Processes Requests

A

A DAX cluster consists of one or more nodes. Each node runs its own instance of the DAX caching software. One of the nodes serves as the primary node for the cluster. Additional nodes (if present) serve as read replicas.
Your application can access DAX by specifying the endpoint for the DAX cluster. The DAX client software works with the cluster endpoint to perform intelligent load balancing and routing.

100
Q

If the request specifies _________, it tries to read the item from DAX:

A

eventually consistent reads (the default behavior)

101
Q

If DAX has the item available (known as __________), DAX returns the item to the application without accessing DynamoDB.

A

a cache hit

102
Q

If DAX does not have the item available (known as _______), DAX passes the request through to DynamoDB. When it receives the response, DAX returns the results to the application, and __________________

A

a cache miss; it also writes the results to the cache on the primary node.

103
Q

If there are any read replicas in the cluster, _______ automatically keeps the replicas in sync with the ______

A

DAX ; primary node.

104
Q

What results from DynamoDB are not cached in DAX?

A

If the request specifies strongly consistent reads, DAX passes the request through to DynamoDB. The results from DynamoDB are not cached in DAX. Instead, they are simply returned to the application.

105
Q

DAX does not recognize any DynamoDB operations for ___________

A

managing tables

106
Q

When is Throttling Exception received?

A

If the number of requests sent to DAX exceeds the capacity of a node, DAX limits the rate at which it accepts additional requests by returning a ThrottlingException. DAX continuously evaluates your CPU utilization to determine the volume of requests it can process while maintaining a healthy cluster state.

107
Q

You can monitor the ThrottledRequestCount metric that DAX publishes to ________. If you see these exceptions regularly, you should consider ______

A

Amazon CloudWatch; scaling up your cluster.

108
Q

DAX maintains an _______ to store the results from GetItem and BatchGetItem operations.

A

item cache

109
Q

The items in the cache represent __________ from DynamoDB, and are stored by their ________values.

A

eventually consistent data; primary key

110
Q

The item cache has a ________, which is 5 minutes by default.

A

Time to Live (TTL) setting

111
Q

What is Time to Live (TTL) setting in DAX?

A

DAX assigns a timestamp to every item that it writes to the item cache. An item expires if it has remained in the cache for longer than the TTL setting.

112
Q

If you issue a GetItem request on an expired item, this is considered a _________, and DAX sends the ________ request to DynamoDB.

A

cache miss; GetItem

113
Q

You can specify the TTL setting for the item/query cache when you ___________

A

create a new DAX cluster

114
Q

DAX also maintains a _________ list for the item cache.

A

least recently used (LRU)

115
Q

_______ tracks when an item was first written to the cache, and when the item was last read from the cache.

A

The LRU list

116
Q

If the________ becomes full, ________ evicts older items (even if they haven’t expired yet) to make room for new items.

A

item cache; DAX

117
Q

_________ is always enabled for the item cache and is ___________

A

The LRU algorithm; not user-configurable.

118
Q

If you specify zero as the ___________, items in the item cache will only be refreshed due to an __________

A

item cache TTL setting; an LRU eviction or a “write-through” operation.

119
Q

DAX also maintains a __________ to store the results from Query and Scan operations.

A

query cache

120
Q

The items in this query cache represent _____________

A

result sets from queries and scans on DynamoDB tables.

121
Q

DAX also maintains an _________ list for the query cache

A

LRU

122
Q

The ________ tracks when a result set was first written to the cache, and when the result was last read from the cache

A

LRU List

123
Q

If you specify ___________, the query response will not be cached.

A

zero as the query cache TTL setting

124
Q

A ______ is the smallest building block of a DAX cluster.

A

node

125
Q

Each node runs ____________ and _____________

A

an instance of the DAX software, and maintains a single replica of the cached data.

126
Q

You can scale your DAX cluster by

A
  1. By adding more nodes to the cluster. This increases the overall read throughput of the cluster.
  2. By using a larger node type. Larger node types provide more capacity and can increase throughput. (You must create a new cluster with the new node type.)
127
Q

A ________ is a logical grouping of one or more nodes that DAX manages as a ________

A

cluster; unit

128
Q

One of the nodes in the cluster is designated as the __________ and the other nodes (if any) are ___________

A

primary node; read replicas.

129
Q

The primary node is responsible for

A
  1. Fulfilling application requests for cached data.
  2. Handling write operations to DynamoDB.
  3. Evicting data from the cache according to the cluster’s eviction policy.
130
Q

Read replicas are responsible for

A
  1. Fulfilling application requests for cached data.
  2. Evicting data from the cache according to the cluster’s eviction policy.

131
Q

However, unlike the _________, ___________ don’t write to DynamoDB.

A
  1. primary node
  2. read replicas

132
Q

Read replicas additional purposes:

A
  1. Scalability
  2. High availability

133
Q

For maximum fault tolerance, you should deploy read replicas in ____________

A

separate Availability Zones.

134
Q

A DAX cluster in an AWS Region can interact with DynamoDB tables that are in the _______ Region.

A

same

135
Q

What are parameter groups?

A

Parameter groups are used to manage runtime settings for DAX clusters.

136
Q

___________ ensures that all the nodes in that cluster are configured in exactly the same way.

A

Parameter groups

137
Q

A ____________ acts as a virtual firewall for your VPC, allowing you to control inbound and outbound network traffic.

A

security group

138
Q

When you launch a cluster in your VPC, you add an _________ to your security group to allow _________ traffic.

A

ingress rule; incoming network

139
Q

The ingress rule specifies the________ for your cluster.

A

protocol (TCP) and port number (8111)

140
Q

The applications that are running within your VPC can access the DAX cluster only after ____________

A

Adding ingress rule to security groups.

141
Q

Every DAX cluster provides a __________ for use by your application.

A

cluster endpoint

142
Q

Usage of cluster end point

A

By accessing the cluster using its endpoint, your application does not need to know the hostnames and port numbers of individual nodes in the cluster. Your application automatically “knows” all the nodes in the cluster, even if you add or remove read replicas.

143
Q

Your application can access a node directly by using its _________. However, we recommend that you treat the DAX cluster as a single unit and access it using the___________ instead.

A

node endpoint; cluster endpoint

144
Q

Access to DAX cluster nodes is restricted to ___________. You can use ______ to grant cluster access from Amazon EC2 instances running on specific subnets.

A

applications running on Amazon EC2 instances within an Amazon VPC environment; subnet groups

145
Q

What are Events in DAX

A

DAX records significant events within your clusters, such as adding or removing a node.

146
Q

You can access events using the _________________

A

AWS Management Console or the DescribeEvents action in the DAX management API.

147
Q

After you create your DAX cluster, you can access it from an ____________ running in the ___________.

A

Amazon EC2 instance; same VPC

148
Q

For your DAX cluster to access DynamoDB tables on your behalf, you must create a__________

A

service role.

149
Q

Amazon DynamoDB Accelerator (DAX) is a ________ caching service that is designed to simplify the process of ________

A

write-through; adding a cache to DynamoDB tables.

150
Q

In many use cases, the way that your application uses DAX affects the _____________________

A

consistency of data within the DAX cluster, and the consistency of data between DAX and DynamoDB.

151
Q

To achieve high availability for your application, we recommend that you provision your DAX cluster with at least _________. Then place those nodes in ______________

A

three nodes; multiple Availability Zones within a Region.

152
Q

If you are building an application that uses DAX, that application should be designed so that it can tolerate _________

A

eventually consistent data.

153
Q

Every DAX cluster has two distinct caches—____________

A

an item cache and a query cache

154
Q

DAX caches the results from _________- requests in its query cache.

A

Query and Scan

155
Q

DAX does not invalidate Query or Scan result sets based on _______________

A

updates to individual items.

156
Q

The PutItem operation is only reflected in the DAX query cache when the __________

A

TTL for the Query expires.

157
Q

To perform a strongly consistent GetItem, BatchGetItem, Query, or Scan request, you set the __________ parameter to true.

A

ConsistentRead

158
Q

DAX can’t serve ___________ reads by itself because ___________

A

strongly consistent; it’s not tightly coupled to DynamoDB.

159
Q

Any subsequent strongly consistent reads would have to be ___________

A

passed through to DynamoDB.

160
Q

DAX handles __________requests the same way it handles strongly consistent reads.

A

TransactGetItems

161
Q

DAX passes all TransactGetItems requests to DynamoDB. When it receives a response from DynamoDB, DAX returns the results to the client, but it ________________

A

doesn’t cache the results.

162
Q

What is Negative Cache?

A

A negative cache entry occurs when DAX can’t find requested items in an underlying DynamoDB table. Instead of generating an error, DAX caches an empty result and returns that result to the user.

163
Q

DAX supports negative cache entries in both the ________________

A

item cache and the query cache.

164
Q

A negative cache entry remains in the DAX item cache until ________________

A

  1. Its item TTL has expired.
  2. Its LRU eviction is invoked.
  3. The item is modified using PutItem, UpdateItem, or DeleteItem.

165
Q

For the DAX management APIs, you can’t scope API actions to _______. ___________ This is different from DAX data plane API operations, such as GetItem, Query, and Scan. Data plane operations are exposed through the DAX client, and those operations can be scoped to _______

A

a specific resource
The Resource element must be set to “*”.
specific resources.

166
Q

Establish a ______ for normal DAX performance in your environment, by measuring performance at various times and under different load conditions.

A

baseline

167
Q

To establish a baseline, you should, at a minimum, monitor the following items both during load testing and in production:

A
  1. CPU utilization and throttled requests, so that you can determine whether you might need to use a larger node type in your cluster. The CPU utilization of your cluster is available through the CPUUtilization CloudWatch metric.
  2. Operation latency (as measured on the client side) should remain consistently within your application’s latency requirements.
  3. Error rates should remain low, as seen from the ErrorRequestCount, FaultRequestCount, and FailedRequestCount CloudWatch metrics.
  4. Estimated database size and evicted size, so that you can determine whether the cluster’s node type has sufficient memory to hold your working set.
  5. Client connections, so that you can monitor for any unexplained spikes in connections to the cluster.