CloudAcademy: Knowledge Check: Architecture (SAA-C03) 1 of 2 Flashcards

1
Q

In Amazon Kinesis, a _____ contains a sequence of data records.

A. data lake
B. partition
C. data blob
D. shard

A

D. shard

Explanation:
A shard contains a sequence of data records.
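
To make this concrete, here is a minimal boto3 sketch (the stream name and partition key are made up for illustration). Kinesis hashes the partition key to pick a shard, and within that shard the records form an ordered sequence:

    import boto3

    kinesis = boto3.client("kinesis")

    # The partition key is hashed to select a shard; within that shard,
    # records are stored as an ordered sequence.
    response = kinesis.put_record(
        StreamName="example-stream",      # hypothetical stream name
        Data=b'{"event": "page_view"}',
        PartitionKey="user-42",           # same key -> same shard
    )
    print(response["ShardId"], response["SequenceNumber"])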

2
Q

Amazon Kinesis _____ is used to perform stream processing on binary-encoded data.

A. Video Streams
B. Data Analytics
C. Data Firehose
D. Data Streams

A

A. Video Streams

Explanation:
Kinesis Video Streams is used to perform stream processing on binary-encoded data, such as audio and video.

3
Q

Which Amazon Kinesis solution capability enables you to write standard SQL queries on streaming data?

A. Amazon Kinesis Firehose
B. Amazon Kinesis OpenSearch
C. Amazon Kinesis Analytics
D. Amazon Kinesis Streams

A

C. Amazon Kinesis Analytics

Explanation:
Amazon Kinesis provides three different solution capabilities. Amazon Kinesis Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Amazon Kinesis Firehose enables you to load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon OpenSearch Service. Amazon Kinesis Analytics enables you to write standard SQL queries on streaming data.

4
Q

Which of the following statements about Amazon MSK is false?

A. You can encrypt the data volumes in Broker storage using Amazon EBS server-side encryption and AWS KMS.
B. Amazon will keep an eye on your Broker nodes and replace them if they become unhealthy.
C. Broker storage is housed within EBS volumes.
D. Non-AWS Kafka clusters cannot be migrated over to Amazon MSK.

A

D. Non-AWS Kafka clusters cannot be migrated over to Amazon MSK.

Explanation:
One of the major benefits of using Amazon MSK over a roll-your-own version of Kafka is that Amazon will keep an eye on these Broker nodes and replace them if they become unhealthy. Within Amazon MSK, Broker storage is housed within EBS volumes and gains all the protections that EBS provides, like durability and fault tolerance. You can also encrypt these data volumes using Amazon EBS server-side encryption and AWS KMS, the Key Management Service. If you already have a Kafka cluster that you are managing yourself, either on-premises or within the cloud, you can migrate it over to Amazon MSK.
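
As a sketch of how those pieces fit together, the boto3 call below creates an MSK cluster with Broker storage on EBS volumes encrypted with a KMS key. The cluster name, subnets, KMS key ARN, and sizing are all placeholders:

    import boto3

    msk = boto3.client("kafka")  # Amazon MSK's API namespace is "kafka"

    response = msk.create_cluster(
        ClusterName="example-cluster",            # hypothetical name
        KafkaVersion="3.5.1",
        NumberOfBrokerNodes=3,
        BrokerNodeGroupInfo={
            "InstanceType": "kafka.m5.large",
            "ClientSubnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],  # placeholders
            "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 100}},  # Broker storage on EBS
        },
        # Encrypt the EBS data volumes with a KMS key
        EncryptionInfo={
            "EncryptionAtRest": {"DataVolumeKMSKeyId": "arn:aws:kms:...:key/example"}
        },
    )
    print(response["ClusterArn"])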

5
Q

Amazon Elastic MapReduce is based on the _____ framework.

A. Spring
B. Apache Hadoop
C. ASP.net
D. React

A

B. Apache Hadoop

Explanation:
EMR is based on the popular and solid Apache Hadoop framework, an open-source distributed processing framework intended for big data processing.

6
Q

What is an Amazon Kinesis stream?

A. an ordered sequence of data records meant to be written to and read from in real-time
B. a fully-managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (S3), Amazon Redshift, or Amazon OpenSearch Service
C. a producer that pushes data to an Amazon Kinesis firehose
D. a consumer that receives and processes records from an Amazon Kinesis firehose

A

A. an ordered sequence of data records meant to be written to and read from in real-time

Explanation:
A Kinesis stream is an ordered sequence of data records meant to be written to and read from in real-time.
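
A minimal boto3 read path illustrates the "ordered, read in real-time" part; the stream name is hypothetical:

    import boto3

    kinesis = boto3.client("kinesis")

    # Read the ordered sequence of records from one shard, oldest first.
    iterator = kinesis.get_shard_iterator(
        StreamName="example-stream",            # hypothetical
        ShardId="shardId-000000000000",
        ShardIteratorType="TRIM_HORIZON",       # start from the oldest record
    )["ShardIterator"]

    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in batch["Records"]:
        print(record["SequenceNumber"], record["Data"])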

7
Q

AWS Step Functions operate by reading in your workflow from a(n) _____ file used to define your state machine and its various components.

A. Amazon States Language
B. XML
C. Lambda
D. Python

A

A. Amazon States Language

Explanation:
AWS Step Functions operate by reading in your workflow from an Amazon States Language file, a JSON-based, structured language used to define your state machine and its various components.
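
As a sketch, here is a single-state definition expressed as a Python dict and registered with Step Functions; the function ARN, role ARN, and names are placeholders:

    import json

    import boto3

    # A minimal Amazon States Language definition: one Task state that
    # invokes a Lambda function, then ends.
    definition = {
        "Comment": "A single-step state machine",
        "StartAt": "ProcessInput",
        "States": {
            "ProcessInput": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:example",
                "End": True,
            }
        },
    }

    sfn = boto3.client("stepfunctions")
    sfn.create_state_machine(
        name="example-machine",                                     # hypothetical
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/example-sfn-role",  # hypothetical
    )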

8
Q

In Kafka, _____ read data.

A. consumers
B. partitions
C. topics
D. producers

A

A. consumers

Explanation:
You have producers, who create data, such as a website gathering user traffic flow information. You have topics, which receive the data; this information is stored with extreme fault tolerance. And you have consumers, which can read that data in order and know that it was never changed or modified along the way.
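
A short consumer sketch using the kafka-python client (an assumption here, along with the broker address and topic name) shows the reading side:

    from kafka import KafkaConsumer  # pip install kafka-python

    # Consume records from a topic in the order they were stored.
    consumer = KafkaConsumer(
        "traffic-events",                    # hypothetical topic
        bootstrap_servers="localhost:9092",  # assumed broker address
        auto_offset_reset="earliest",        # start from the oldest record
    )
    for message in consumer:  # blocks, polling for new records
        # Offsets preserve the order in which records were written.
        print(message.offset, message.value)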

9
Q

Regarding Amazon SNS, _____ communicate asynchronously with _____ by producing and sending a message to a topic, which is a logical access point and communication channel.

A. clients; services
B. publishers; subscribers
C. queues; topics
D. brokers; recipients

A

B. publishers; subscribers

Explanation:
In Amazon SNS, there are two types of clients—publishers and subscribers—also referred to as producers and consumers. Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel.
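
A minimal boto3 sketch of that flow, with hypothetical topic name and endpoint:

    import boto3

    sns = boto3.client("sns")

    # The topic is the logical access point between publishers and subscribers.
    topic_arn = sns.create_topic(Name="example-topic")["TopicArn"]  # hypothetical name

    # A subscriber registers interest in the topic (email is one of several protocols).
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="user@example.com")

    # The publisher sends asynchronously; SNS fans the message out to all subscribers.
    sns.publish(TopicArn=topic_arn, Subject="Hello", Message="Message for all subscribers")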

10
Q

In real-time data streaming, the consumers deliver data records to the _____ layer.

A. source
B. stream storage
C. destination
D. stream processing

A

C. destination

Explanation:
The consumers deliver data records to the fifth layer, the destination.

11
Q

Amazon Elastic MapReduce is a managed service designed to _____.

A. process and analyze vast amounts of data
B. provide a way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their life cycles
C. make provisioning and creating IT stacks easier for both the end user and IT admins
D. provide secure, resizable compute capacity in the cloud

A

A. process and analyze vast amounts of data

Explanation:
Amazon Elastic MapReduce is a managed service designed to process and analyze vast amounts of data through the use of jobs, which can be short-running with per-second costs or long-running workloads, allowing you to build high availability into your architecture.
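
For illustration, a short-running job might be launched like this with boto3; the names, instance sizes, and log bucket are placeholders:

    import boto3

    emr = boto3.client("emr")

    # Launch a short-running cluster that terminates when its steps finish.
    response = emr.run_job_flow(
        Name="example-analysis",
        ReleaseLabel="emr-6.15.0",
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": False,  # short-running: shut down after the job
        },
        LogUri="s3://example-bucket/emr-logs/",    # hypothetical bucket
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])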

12
Q

In real-time data streaming, the _____ layer accesses the stream storage layer using one or more applications called consumers.

A. stream ingestion
B. source
C. destination
D. stream processing

A

D. stream processing

Explanation:
The stream processing layer accesses the stream storage layer using one or more applications called consumers.

13
Q

Which AWS service enables you to store message data until your application is able to process it?

A. AWS Lambda
B. Amazon Simple Notification Service (SNS)
C. Amazon Simple Queue Service (SQS)
D. Amazon Simple Email Service (SES)

A

C. Amazon Simple Queue Service (SQS)

Explanation:
Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed. With Amazon SQS, you can quickly build message queuing applications that can run on any computer.

Amazon SQS offers a reliable, highly scalable, hosted queue for storing messages in transit between computers. With Amazon SQS, you can move data between diverse, distributed application components without losing messages and without requiring each component to be always available.
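
A minimal producer-side sketch with boto3 (the queue name and message body are made up):

    import boto3

    sqs = boto3.client("sqs")

    # Create a queue and store a message in it; the message waits in the
    # queue until a consumer is ready to process it.
    queue_url = sqs.create_queue(QueueName="example-queue")["QueueUrl"]  # hypothetical
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')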

14
Q

What is the main reason one would want to use Amazon MSK rather than produce one’s own implementation of Kafka?

A. Amazon MSK is a fully managed service.
B. You get to manage all of your upgrades.
C. You get to orchestrate your cluster tasks and maintain the state of your clusters using Apache Zookeeper.
D. You have total control over your servers.

A

A. Amazon MSK is a fully managed service.

Explanation:
The main reason you’d want to use Amazon MSK over rolling your own implementation of Kafka is that Amazon MSK is a fully managed service. This means you don’t need to take care of any servers, you don’t need to worry about any upgrades, and you also don’t need to bother with handling Apache Zookeeper.

15
Q

After it’s configured, where do Amazon SQS messages go when they have failed to be read after the maximum number of attempts, so that a developer can review the message to determine the cause of failure?

A. An AWS-created S3 bucket
B. A user-created S3 bucket
C. A Dead-Letter Queue
D. They are not stored anywhere. The messages are deleted.

A

C. A Dead-Letter Queue

Explanation:
A dead-letter queue differs from the standard and FIFO queues in that it is not used as a source queue to hold messages submitted by producers. Instead, the dead-letter queue is used by the source queue to send messages that fail to process for one reason or another.
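
In practice the wiring looks roughly like the boto3 sketch below: the dead-letter queue is created first, and the source queue's RedrivePolicy points at it (the names and retry count are illustrative):

    import json

    import boto3

    sqs = boto3.client("sqs")

    # The dead-letter queue is an ordinary queue that the source queue targets.
    dlq_url = sqs.create_queue(QueueName="example-dlq")["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # After 5 failed receives, SQS moves the message to the dead-letter queue
    # so a developer can inspect it and determine the cause of failure.
    sqs.create_queue(
        QueueName="example-source-queue",
        Attributes={
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
            )
        },
    )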

16
Q

Which AWS service is a pub-sub notification service that provides both application-to-application and application-to-person communication?

A. Amazon SQS
B. Amazon API Gateway
C. AWS Fargate
D. Amazon SNS

A

D. Amazon SNS

Explanation:
Amazon SNS is a pub-sub notification service that provides both application-to-application and application-to-person communication. SNS can also act as an event-driven hub similar to Amazon EventBridge; it's just more bare-bones.
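
Both communication modes are visible in a small boto3 sketch; the topic ARN and phone number are placeholders:

    import boto3

    sns = boto3.client("sns")

    # Application-to-application: publish to a topic that other services subscribe to.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:example-topic",  # hypothetical
        Message="order shipped",
    )

    # Application-to-person: send an SMS directly to a phone number.
    sns.publish(PhoneNumber="+15555550100", Message="Your order has shipped")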

17
Q

Apache Kafka provides which of the following services?

A. A project development management and comprehension tool
B. A collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation
C. A platform for stream processing and operates as a publisher/subscriber-based durable messaging system
D. An open-source HTTP server for modern operating systems including UNIX, Microsoft Windows, Mac OS/X, and Netware

A

C. A platform for stream processing and operates as a publisher/subscriber-based durable messaging system

Explanation:
Kafka provides a platform for stream processing and operates as a publisher/subscriber-based durable messaging system. Its key feature is the ability to intake data with extreme fault tolerance, allowing for continuous streams of records that preserve the integrity of the data, including the order in which it was received.

Apache Kafka then acts as a buffer between these data-producing entities and the consumers that are subscribed to it. Subscribers receive information from Kafka topics on a first-in, first-out (FIFO) basis, allowing the subscriber to have a correct timeline of the data that was produced.

18
Q

In stream processing, _____ collect events or transactions and put them into a data stream.

A. consumers
B. event buses
C. sources
D. producers

A

D. producers

Explanation:
Producers collect events or transactions and put them into a data stream.
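
Framed in Kinesis terms, a producer might batch collected events into the stream like this (the stream name and partition key are hypothetical):

    import json

    import boto3

    kinesis = boto3.client("kinesis")

    # A producer batches collected events and puts them into the data stream.
    events = [{"event": "click"}, {"event": "scroll"}]
    kinesis.put_records(
        StreamName="example-stream",   # hypothetical
        Records=[
            {"Data": json.dumps(e).encode(), "PartitionKey": "session-7"}
            for e in events
        ],
    )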

19
Q

A _____ is the base throughput unit of an Amazon Kinesis stream.

A. shard
B. sequence
C. record
D. data blob

A

A. shard

Explanation:
A shard is the base throughput unit of an Amazon Kinesis stream.
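
Since each shard provides a fixed slice of throughput (1 MB/s or 1,000 records/s of writes, and 2 MB/s of reads), sizing a stream is simple arithmetic, as in this sketch with a hypothetical workload:

    import math

    # Published per-shard limits.
    WRITE_MB_PER_SHARD = 1.0
    READ_MB_PER_SHARD = 2.0

    # Hypothetical workload: 12 MB/s in, 18 MB/s out.
    incoming_mb, outgoing_mb = 12.0, 18.0

    shards = max(
        math.ceil(incoming_mb / WRITE_MB_PER_SHARD),
        math.ceil(outgoing_mb / READ_MB_PER_SHARD),
    )
    print(shards)  # 12 shards cover both the write and read rates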

20
Q

What is Amazon Kinesis Firehose?

A. a consumer that receives and processes records from an Amazon Kinesis firehose
B. a fully-managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (S3), Amazon Redshift, or Amazon OpenSearch Service
C. an ordered sequence of data records meant to be written to and read from in real-time
D. a producer that pushes data to an Amazon Kinesis firehose

A

B. a fully-managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (S3), Amazon Redshift, or Amazon OpenSearch Service

Explanation:
Amazon Kinesis Firehose is a fully-managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (S3), Amazon Redshift, or Amazon OpenSearch Service.
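
A minimal boto3 sketch of the producer side; the delivery stream name is made up:

    import boto3

    firehose = boto3.client("firehose")

    # Push a record into a delivery stream; Firehose handles buffering and
    # delivery to the configured destination (S3, Redshift, OpenSearch, ...).
    firehose.put_record(
        DeliveryStreamName="example-delivery-stream",  # hypothetical
        Record={"Data": b'{"metric": 1}\n'},
    )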

21
Q

Which SQS component is responsible for processing the messages within your queue?

A. Producers
B. Consumers
C. SQS servers
D. Dead-Letter Queues

A

B. Consumers

Explanation:
Consumers are responsible for processing the messages within your queue. When the consumer element of your architecture is ready to process the message from the queue, the message is retrieved and is then marked as being processed by activating the visibility timeout on the message.
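
That retrieve/timeout/delete cycle looks roughly like this in boto3; the queue URL is a placeholder:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical

    # Retrieving a message starts its visibility timeout, hiding it from
    # other consumers while this one processes it.
    messages = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        VisibilityTimeout=30,   # seconds this consumer has to finish
        WaitTimeSeconds=10,     # long polling
    ).get("Messages", [])

    for message in messages:
        print("processing:", message["Body"])
        # Delete on success; otherwise the message becomes visible again
        # after the timeout and another consumer can retry it.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])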

22
Q

In Kafka, _____ receive data.

A. consumers
B. topics
C. producers
D. partitions

A

B. topics

Explanation:
You have producers, who create data, such as a website gathering user traffic flow information. You have topics, which receive the data; this information is stored with extreme fault tolerance. And you have consumers, which can read that data in order and know that it was never changed or modified along the way.
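
On the writing side, a producer sends records to a topic; this sketch assumes the kafka-python client and a local broker:

    from kafka import KafkaProducer  # pip install kafka-python

    # A producer writes records to a topic, which receives and durably stores them.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")  # assumed broker
    producer.send("traffic-events", b'{"page": "/home"}')         # hypothetical topic
    producer.flush()  # block until the record is acknowledged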

23
Q

Which AWS Step Functions state executes a group of states as concurrently as possible and waits for each branch to terminate before moving on?

A. Parallel
B. Task
C. Succeed
D. Wait

A

A. Parallel

Explanation:
The Parallel State executes a group of states as concurrently as possible and waits for each branch to terminate before moving on. The results of each parallel branch are combined in an array-like format and passed on to the next state.
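
For reference, a Parallel state in Amazon States Language (shown here as a Python dict) might look like the sketch below; the branch contents and ARNs are illustrative:

    # Both branches run concurrently; their outputs are combined into an
    # array and handed to the next state.
    parallel_state = {
        "Type": "Parallel",
        "Branches": [
            {
                "StartAt": "ResizeImage",
                "States": {
                    "ResizeImage": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:resize",
                        "End": True,
                    }
                },
            },
            {
                "StartAt": "ExtractMetadata",
                "States": {
                    "ExtractMetadata": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:metadata",
                        "End": True,
                    }
                },
            },
        ],
        "Next": "StoreResults",
    }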

24
Q

AWS Step Functions allow you to _____.

A. create, publish, maintain, monitor, and secure APIs at any scale
B. automate code deployments to any instance, including Amazon EC2 instances and servers running on-premises
C. analyze and debug distributed applications, such as those built using a microservices architecture
D. create workflows where your system waits for inputs, makes decisions, and processes information based on the input variables

A

D. create workflows where your system waits for inputs, makes decisions, and processes information based on the input variables

Explanation:
AWS Step Functions allow you to create workflows just like a vending machine, where you can have your system wait for inputs, make decisions, and process information based on the input variables.
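
The "wait for inputs and make decisions" behavior maps to Wait and Choice states; this fragment (a Python dict mirroring the JSON, with illustrative state names and timing) sketches the idea:

    # A Choice state branches on the input; a Wait state pauses, then re-checks.
    states = {
        "CheckStock": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.in_stock", "BooleanEquals": True, "Next": "ShipOrder"}
            ],
            "Default": "WaitForRestock",
        },
        "WaitForRestock": {"Type": "Wait", "Seconds": 3600, "Next": "CheckStock"},
        "ShipOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ship",
            "End": True,
        },
    }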