Analytics | Amazon Kinesis Video Streams Flashcards
What is Amazon Kinesis Video Streams?
General
Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It also durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to quickly build computer vision and ML applications through integration with Amazon Rekognition Video and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV.
What is time-encoded data?
General
Time-encoded data is any data in which the records are in a time series, and each record is related to its previous and next records. Video is an example of time-encoded data, where each frame is related to the previous and next frames through spatial transformations. Other examples of time-encoded data include audio, RADAR, and LIDAR signals. Amazon Kinesis Video Streams is designed specifically for cost-effective, efficient ingestion and storage of all kinds of time-encoded data for analytics and ML use cases.
What are common use cases for Kinesis Video Streams?
General
Kinesis Video Streams is ideal for building computer vision-enabled ML applications that are becoming prevalent in a wide range of use cases such as the following:
Smart Home
With Kinesis Video Streams, you can easily stream video and audio from camera-equipped home devices such as baby monitors, webcams, and home surveillance systems to AWS. You can then use the streams to build a variety of smart home applications ranging from simple video playback to intelligent lighting, climate control systems, and security solutions.
Smart City
Many cities have installed large numbers of cameras at traffic lights, parking lots, shopping malls, and just about every public venue, capturing video 24/7. You can use Kinesis Video Streams to securely and cost-effectively ingest, store, and analyze this massive volume of video data to help solve traffic problems, help prevent crime, dispatch emergency responders, and much more.
Industrial Automation
You can use Kinesis Video Streams to collect a variety of time-encoded data such as RADAR and LIDAR signals, temperature profiles, and depth data from industrial equipment. You can then analyze the data using your favorite machine learning framework, including Apache MXNet, TensorFlow, and OpenCV, for industrial automation use cases like predictive maintenance. For example, you can predict the lifetime of a gasket or valve and schedule part replacement in advance, reducing downtime and defects in a manufacturing line.
What does Amazon Kinesis Video Streams manage on my behalf?
Key concepts
Amazon Kinesis Video Streams is a fully managed video ingestion and storage service. It enables you to securely ingest, process, and store video at any scale for applications that power robots, smart cities, industrial automation, security monitoring, machine learning (ML), and more. Kinesis Video Streams also ingests other kinds of time-encoded data like audio, RADAR, and LIDAR signals. Kinesis Video Streams provides you with SDKs to install on your devices to make it easy to securely stream video to AWS. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest video streams from millions of devices. It also durably stores, encrypts, and indexes the video streams and provides easy-to-use APIs so that applications can access and retrieve indexed video fragments based on tags and timestamps. Kinesis Video Streams provides a library to integrate ML frameworks such as Apache MXNet, TensorFlow, and OpenCV with video streams to build machine learning applications. Kinesis Video Streams is integrated with Amazon Rekognition Video, enabling you to build computer vision applications that detect objects, events, and people.
What is a video stream?
Key concepts
A video stream is a resource that enables you to capture live video and other time-encoded data, optionally store it, and make the data available for consumption both in real time and on a batch or ad-hoc basis. When you choose to store data in the video stream, Kinesis Video Streams will encrypt the data, and generate a time-based index on the stored data. In a typical configuration, a Kinesis video stream has only one producer publishing data into it. The Kinesis video stream can have multiple consuming applications processing the contents of the video stream.
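As an illustration, a video stream resource can be created ahead of time with the AWS SDK. The following is a minimal sketch in Python (boto3); the stream name, region, and 24-hour retention value are hypothetical placeholders, not values from this document.

import boto3

# Control-plane client for Kinesis Video Streams (placeholder region).
kvs = boto3.client('kinesisvideo', region_name='us-west-2')

# Create a stream with 24 hours of retention; with retention enabled,
# Kinesis Video Streams durably stores, encrypts, and time-indexes the data.
response = kvs.create_stream(
    StreamName='example-camera-stream',   # hypothetical stream name
    MediaType='video/h264',
    DataRetentionInHours=24,
)
print(response['StreamARN'])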
What is a fragment?
Key concepts
A fragment is a self-contained sequence of frames. The frames belonging to a fragment should have no dependency on any frames from other fragments. As fragments arrive, Kinesis Video Streams assigns a unique fragment number, in increasing order. It also stores producer-side and server-side time stamps for each fragment, as Kinesis Video Streams-specific metadata.
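The fragment numbers and producer-side/server-side timestamps described above can be read back with the ListFragments API once the stream retains data. A minimal boto3 sketch, assuming the hypothetical stream name used earlier and retention enabled:

import boto3
from datetime import datetime, timedelta, timezone

kvs = boto3.client('kinesisvideo')

# ListFragments is served from a stream-specific data endpoint.
endpoint = kvs.get_data_endpoint(StreamName='example-camera-stream',
                                 APIName='LIST_FRAGMENTS')['DataEndpoint']
archive = boto3.client('kinesis-video-archived-media', endpoint_url=endpoint)

# List fragments produced in the last 10 minutes, by producer timestamp.
now = datetime.now(timezone.utc)
fragments = archive.list_fragments(
    StreamName='example-camera-stream',
    FragmentSelector={
        'FragmentSelectorType': 'PRODUCER_TIMESTAMP',
        'TimestampRange': {
            'StartTimestamp': now - timedelta(minutes=10),
            'EndTimestamp': now,
        },
    },
)['Fragments']

for f in fragments:
    # Each entry carries the Kinesis Video Streams-specific metadata.
    print(f['FragmentNumber'], f['ProducerTimestamp'], f['ServerTimestamp'])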
What is a producer?
Key concepts
A producer is a general term used to refer to a device or source that puts data into a Kinesis video stream. A producer can be any video-generating device, such as a security camera, a body-worn camera, a smartphone camera, or a dashboard camera. A producer can also send non-video time-encoded data, such as audio feeds, images, or RADAR data. One producer can generate one or more video streams. For example, a video camera can push video data to one Kinesis video stream and audio data to another.
What is a consumer?
Key concepts
Consumers are your custom applications that consume and process data in Kinesis video streams in real time, or after the data is durably stored and time-indexed when low latency processing is not required. You can create these consumer applications to run on Amazon EC2 instances. You can also use other Amazon AI services such as Amazon Rekognition, or third party video analytics providers to process your video streams.
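For instance, Amazon Rekognition Video is attached as a consumer through a stream processor. The sketch below is a minimal boto3 example; the processor name, stream ARNs, IAM role ARN, and face collection are hypothetical placeholders.

import boto3

rekognition = boto3.client('rekognition')

# Create a stream processor that reads from a Kinesis video stream and
# writes face-search results to a Kinesis data stream (ARNs are placeholders).
rekognition.create_stream_processor(
    Name='example-face-search',
    Input={'KinesisVideoStream': {
        'Arn': 'arn:aws:kinesisvideo:us-west-2:111122223333:stream/example-camera-stream/1234567890123'}},
    Output={'KinesisDataStream': {
        'Arn': 'arn:aws:kinesis:us-west-2:111122223333:stream/example-results'}},
    RoleArn='arn:aws:iam::111122223333:role/ExampleRekognitionKvsRole',
    Settings={'FaceSearch': {'CollectionId': 'example-collection',
                             'FaceMatchThreshold': 85.0}},
)

# Start processing the live video stream.
rekognition.start_stream_processor(Name='example-face-search')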
What is a chunk?
Key concepts
Upon receiving the data from a producer, Kinesis Video Streams stores incoming media data as chunks. Each chunk consists of the actual media fragment, a copy of media metadata sent by the producer, and the Kinesis Video Streams-specific metadata such as the fragment number, and server-side and producer-side timestamps. When a consumer requests media data through the GetMedia API operation, Kinesis Video Streams returns a stream of chunks, starting with the fragment number that you specify in the request.
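A minimal consumer sketch using boto3, again assuming the hypothetical stream name from earlier: GetMedia is served from a stream-specific data endpoint and returns a continuous stream of chunks.

import boto3

kvs = boto3.client('kinesisvideo')

# GetMedia must be called against the stream's data endpoint.
endpoint = kvs.get_data_endpoint(StreamName='example-camera-stream',
                                 APIName='GET_MEDIA')['DataEndpoint']
media = boto3.client('kinesis-video-media', endpoint_url=endpoint)

# Start reading chunks from the earliest retained fragment
# (use 'NOW' instead for live, non-retained consumption).
response = media.get_media(
    StreamName='example-camera-stream',
    StartSelector={'StartSelectorType': 'EARLIEST'},
)

# 'Payload' is a streaming body of chunks: fragments plus KVS metadata.
stream = response['Payload']
while True:
    chunk = stream.read(8192)
    if not chunk:
        break
    # Hand the raw MKV bytes to your parser or media pipeline here.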
How do I think about latency in Amazon Kinesis Video Streams?
Publishing data to streams
There are four key contributors to latency in an end-to-end media data flow.
Time spent in the device’s hardware media pipeline: This pipeline can include the image sensor and any hardware encoders, as appropriate. In theory, this can be as little as a single frame duration; in practice it rarely is. To perform media encoding (compression) effectively, encoders accumulate several frames to construct a fragment. This process, and any corresponding motion compensation algorithms, adds anywhere from one second to several seconds of latency on the device before the data is packaged for transmission.
Latency incurred on actual data transmission on the internet: The quality of the network throughput and latency can vary significantly based on where the producing device is located.
Latency added by Kinesis Video Streams as it receives data from the producer device: The incoming data is made available immediately through the GetMedia API operation for any consuming application. If you choose to retain data, Kinesis Video Streams encrypts the data using AWS Key Management Service (AWS KMS) and generates a time-based index on the individual fragments in the video stream. When you access this retained data using the GetMediaForFragmentList API, Kinesis Video Streams fetches the fragments from durable storage, decrypts the data, and makes it available to the consuming application (a minimal retrieval sketch appears after this list).
Time latency on data transmission back to the consumer: There can be consuming devices on the internet or other AWS regions that request the media data. The quality of the network throughput and latency can vary significantly based on where the consuming device is located.
Finally, the Kinesis Video Streams management console fetches the supported H.264 media type, trans-packages it for various browsers, and allows you to play back streams for development or test purposes.
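The retained-data path mentioned above (GetMediaForFragmentList) can be sketched in boto3 as follows, under the same hypothetical stream name; the fragment number shown is a placeholder that would normally come from a ListFragments call like the one earlier in this document.

import boto3

kvs = boto3.client('kinesisvideo')

# GetMediaForFragmentList is served from a stream-specific data endpoint.
endpoint = kvs.get_data_endpoint(StreamName='example-camera-stream',
                                 APIName='GET_MEDIA_FOR_FRAGMENT_LIST')['DataEndpoint']
archive = boto3.client('kinesis-video-archived-media', endpoint_url=endpoint)

# Fragment numbers would typically come from ListFragments (placeholder here).
fragment_numbers = ['91343852333181432392682062607743920694397760578']

# Fetch the retained media for those fragments from durable storage.
result = archive.get_media_for_fragment_list(
    StreamName='example-camera-stream',
    Fragments=fragment_numbers,
)
with open('retained-fragments.mkv', 'wb') as out:
    out.write(result['Payload'].read())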
How do I publish data to my Kinesis video stream?
Publishing data to streams
You can publish media data to a Kinesis video stream via the PutMedia operation, or use the Kinesis Video Streams Producer SDKs in Java, C++, or Android. If you choose to use the PutMedia operation directly, you are responsible for packaging the media stream according to the Kinesis Video Streams data specification, handling stream creation, token rotation, and the other actions necessary for reliable streaming of media data to the AWS cloud. We recommend using the Producer SDKs to make these tasks simpler and to get started faster.
What is the Kinesis Video Streams PutMedia operation?
Publishing data to streams
Kinesis Video Streams provides a PutMedia API to write media data to a Kinesis video stream. In a PutMedia request, the producer sends a stream of media fragments. As fragments arrive, Kinesis Video Streams assigns a unique fragment number, in increasing order. It also stores producer-side and server-side time stamps for each fragment, as Kinesis Video Streams-specific metadata.
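Note that boto3 exposes the control-plane side of this flow but not the streaming PutMedia call itself. The sketch below, using the same hypothetical stream name, only looks up the PUT_MEDIA endpoint that a producer (or the Producer SDK) would stream to.

import boto3

kvs = boto3.client('kinesisvideo')

# A producer first asks for the endpoint that accepts PutMedia requests.
endpoint = kvs.get_data_endpoint(StreamName='example-camera-stream',
                                 APIName='PUT_MEDIA')['DataEndpoint']
print(endpoint)

# The producer then opens a long-lived, SigV4-signed HTTP POST to this
# endpoint and streams MKV fragments to it; the Producer SDKs (Java, C++,
# Android) implement that streaming connection for you.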
What is the Kinesis Video Streams Producer SDK?
Publishing data to streams
The Amazon Kinesis Video Streams Producer SDKs are a set of easy-to-use and highly configurable libraries that you can install and customize for your specific producers. The SDK makes it easy to build an on-device application that securely connects to a video stream, and reliably publishes video and other media data to Kinesis Video Streams. It takes care of all the underlying tasks required to package the frames and fragments generated by the device’s media pipeline. The SDK also handles stream creation, token rotation for secure and uninterrupted streaming, processing acknowledgements returned by Kinesis Video Streams, and other tasks.
In which programming platforms is the Kinesis Video Streams Producer SDK available?
Publishing data to streams
The Kinesis Video Streams Producer SDK’s core is built in C, so it is efficient and portable to a variety of hardware platforms. Most developers will prefer to use the C++ or Java versions of the Kinesis Video Streams Producer SDK. There is also an Android version of the Producer SDK for mobile app developers who want to stream video data from Android devices.
What should I be aware of before getting started with the Kinesis Video Streams producer SDK?
Publishing data to streams
The Kinesis Video Streams Producer SDK does all the heavy lifting of packaging frames and fragments, establishing a secure connection, and reliably streaming video to AWS. However, there are many different varieties of hardware devices, and of the media pipelines running on them. To make integration with the media pipeline easier, we recommend having some knowledge of: 1) the frame boundaries, 2) the type of frame used for the boundaries (I-frame or non-I-frame), and 3) the frame encoding timestamp.