Broadcast Flashcards

1
Q

Interlacing

A

Interlacing is a method of displaying video in which each frame is split into two fields rather than drawn in a single pass. First the odd-numbered lines are displayed, then the even-numbered lines in the next field. The technique was used in cathode-ray tube (CRT) TVs to reduce flicker and improve image stability within limited broadcast bandwidth. Modern displays and video content mostly use progressive scan, which displays each frame sequentially, resulting in a smoother and clearer image.
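As a rough illustration (my own sketch, not part of the card), a full frame can be rebuilt by weaving the two fields together; the snippet below assumes each field is simply a list of rows:

```python
def weave_fields(odd_field, even_field):
    """Interleave two fields (lists of rows) into one full frame.

    odd_field holds the odd-numbered display lines (1st, 3rd, ...),
    even_field the even-numbered ones; the names are illustrative only.
    """
    frame = []
    for odd_row, even_row in zip(odd_field, even_field):
        frame.append(odd_row)
        frame.append(even_row)
    return frame

# Tiny example: a 4-line "frame" rebuilt from two 2-line fields.
odd_lines = ["line1", "line3"]
even_lines = ["line2", "line4"]
print(weave_fields(odd_lines, even_lines))  # ['line1', 'line2', 'line3', 'line4']
```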

2
Q

Video resolutions

A
  1. SD (4:3 aspect ratio): 720x480 (NTSC) / 720x576 (PAL) pixels
  2. 720p (HD): 1280x720 pixels
  3. 1080p (Full HD): 1920x1080 pixels
  4. 1440p (Quad HD): 2560x1440 pixels
  5. 2160p (4K UHD): 3840x2160 pixels
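As a quick worked comparison (an illustrative sketch, not part of the card), the pixel counts make the relationships concrete; for example, 4K UHD carries exactly four times the pixels of Full HD:

```python
# Common HD and UHD resolutions as (width, height) in pixels.
RESOLUTIONS = {
    "720p (HD)": (1280, 720),
    "1080p (Full HD)": (1920, 1080),
    "1440p (Quad HD)": (2560, 1440),
    "2160p (4K UHD)": (3840, 2160),
}

full_hd_pixels = 1920 * 1080
for name, (width, height) in RESOLUTIONS.items():
    pixels = width * height
    print(f"{name}: {pixels:,} px ({pixels / full_hd_pixels:.2f}x Full HD)")
# 2160p prints 4.00x Full HD: it doubles both the width and the height of 1080p.
```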

3
Q

Progressive scanning

A

Progressive scanning is a method used in video broadcasting to display images on a screen. Unlike interlaced scanning, which divides each frame into two fields (odd and even lines), progressive scanning displays the entire image in a single pass. This means that each line of a frame is displayed sequentially, resulting in a smoother and clearer picture with no flickering or interlacing artifacts. It is commonly used in modern video displays, such as LCD and LED screens, as well as in digital video formats, where it offers superior image quality and avoids motion artifacts such as combing.
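As a practical aside (my own sketch, not from the card, and assuming ffmpeg is installed and on the PATH), interlaced material is commonly converted to progressive frames with a deinterlacing filter such as ffmpeg's yadif; the file names below are placeholders:

```python
import subprocess

# Deinterlace an interlaced source into progressive frames with ffmpeg's yadif filter.
subprocess.run(
    [
        "ffmpeg",
        "-i", "interlaced_input.mxf",   # hypothetical input file
        "-vf", "yadif",                 # "yet another deinterlacing filter"
        "-c:a", "copy",                 # leave the audio untouched
        "progressive_output.mxf",       # hypothetical output file
    ],
    check=True,
)
```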

4
Q

SDI

A

Serial Digital Interface

SDI is commonly used in professional broadcasting for live events, news, and sports, as well as in post-production for editing and distribution of content. SDI has been around for decades and has evolved over time to support higher resolutions, frame rates, and bit depths. Some common variations include SD-SDI (standard definition), HD-SDI (high definition), and 3G-SDI (which can support up to 1080p HD video at 60 fps).
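As a back-of-the-envelope check (my own illustration, not from the card), the nominal 3G-SDI line rate of roughly 2.97 Gbit/s follows from the full 1080p60 raster, assuming 2200 x 1125 total samples per frame (including blanking) and 10-bit 4:2:2 sampling:

```python
# Nominal 3G-SDI bit rate for 1080p60, assuming the full raster including
# blanking and 20 bits per sample period (10-bit Y plus 10-bit alternating Cb/Cr).
samples_per_line = 2200
lines_per_frame = 1125
frames_per_second = 60
bits_per_sample_period = 20

bit_rate = samples_per_line * lines_per_frame * frames_per_second * bits_per_sample_period
print(f"{bit_rate / 1e9:.3f} Gbit/s")  # ~2.970 Gbit/s, the "3G" in 3G-SDI
```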

5
Q

Video file formats

A
  1. DNxHD/DNxHR: These formats are developed by Avid Technology and are widely used in professional video editing and broadcast workflows. They provide excellent video quality and efficient compression.
  2. XDCAM: Developed by Sony, XDCAM is a popular format used for professional broadcast, offering good video quality and efficient file sizes. It supports both standard definition and high definition content.
  3. AVC-Intra: This format is based on the H.264/AVC codec and is commonly used in professional broadcast environments. It offers high-quality video with efficient compression.
  4. ProRes 422/4444: These formats are Apple’s proprietary video codecs and are widely used in professional post-production and broadcast workflows. They provide excellent video quality and support both standard and high definition content.

These formats are chosen based on the specific requirements of the broadcast environment, such as video quality, bandwidth limitations, and compatibility with equipment and networks.
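As a practical aside (not part of the card, and assuming ffprobe from the FFmpeg suite is installed), you can check which of these codecs a given file actually uses; the file name is a placeholder:

```python
import subprocess

# Print the video codec of the first video stream using ffprobe.
result = subprocess.run(
    [
        "ffprobe",
        "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name",
        "-of", "default=noprint_wrappers=1:nokey=1",
        "broadcast_master.mxf",  # hypothetical input file
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "prores", "dnxhd", or "h264"
```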

6
Q

ProRes

A

ProRes, short for Apple ProRes, is a high-quality video codec commonly used in the broadcast and post-production industry. It was developed by Apple Inc. and is widely supported by Mac-based video editing software and hardware. ProRes is known for its ability to maintain high video quality while keeping file sizes manageable, making it well suited to professional video production and broadcasting.

There are several versions of ProRes, each offering a different balance of compression and quality. Some common ProRes variants include:

ProRes 422: This is a popular choice for broadcast and video editing due to its good balance between file size and image quality.

ProRes 422 HQ: This variant offers higher data rates and even better video quality, suitable for high-quality broadcast content.

ProRes 4444: This version supports high-quality, virtually lossless video with an alpha channel, making it suitable for high-end applications, including visual effects and color grading.

ProRes is often used in professional video editing and broadcasting workflows because it provides excellent image quality, preserves a wide range of colors, and allows for easy post-production editing while keeping file sizes manageable. Many high-end cameras and video equipment support ProRes recording, making it a popular choice for broadcast content production.
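As an illustrative sketch (not from the card, and assuming ffmpeg's prores_ks encoder is available), a source file can be transcoded to ProRes 422 HQ like this; the file names are placeholders and the profile numbering in the comment is the encoder's own:

```python
import subprocess

# Transcode a source file to ProRes 422 HQ using ffmpeg's prores_ks encoder.
subprocess.run(
    [
        "ffmpeg",
        "-i", "camera_source.mov",   # hypothetical input
        "-c:v", "prores_ks",         # ProRes encoder shipped with ffmpeg
        "-profile:v", "3",           # 0=Proxy, 1=LT, 2=422, 3=HQ, 4=4444
        "-c:a", "pcm_s16le",         # uncompressed PCM audio, common in broadcast
        "prores_hq_output.mov",      # hypothetical output
    ],
    check=True,
)
```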

7
Q

FAST Channels

A

Free ad-supported streaming TV (FAST) services host linear channels that deliver scheduled programming to a mass audience through connected devices. (Ad-supported streaming)

8
Q

AES3

A

AES3 is a standard for the exchange of digital audio signals between professional audio devices. An AES3 signal can carry two channels of pulse-code-modulated digital audio over several transmission media including balanced lines, unbalanced lines, and optical fiber.

AES3 was jointly developed by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU) and so is also known as AES/EBU. The standard was first published in 1985 and was revised in 1992 and 2003. AES3 has been incorporated into the International Electrotechnical Commission’s standard IEC 60958, and is available in a consumer-grade variant known as S/PDIF.
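As a quick worked example (my own illustration, not from the card), the raw AES3 data rate follows from its frame structure, assuming the standard frame of two 32-bit subframes per sample period and a 48 kHz sample rate:

```python
# Approximate AES3 interface data rate at a 48 kHz audio sample rate.
sample_rate_hz = 48_000
subframes_per_frame = 2      # channel A + channel B
bits_per_subframe = 32       # preamble, aux, audio, validity, user, channel status, parity

bit_rate = sample_rate_hz * subframes_per_frame * bits_per_subframe
print(f"{bit_rate / 1e6:.3f} Mbit/s")  # 3.072 Mbit/s of data before channel coding
```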

9
Q

IP broadcasting

A

IP broadcasting (or Internet Protocol broadcasting) is a newer technology that allows video and audio to be transmitted over computer networks, such as the internet or local area networks, using the Internet Protocol (IP). Unlike SDI, IP broadcasting uses packet-based transmission, which allows for more efficient use of bandwidth and can enable greater flexibility and scalability.
In an IP-based broadcast system, video and audio signals are encoded into compressed data packets, which are then transmitted over a network using standard IP protocols. These packets can be routed to specific destinations and can also be buffered or queued as needed to ensure smooth transmission of the broadcast stream.
IP broadcasting is often used in streaming video services, webcasts, and other online video applications. It can also be used in professional broadcasting for live events, remote production, and distribution of content. Some of the benefits of IP broadcasting include lower infrastructure costs, greater scalability, and the ability to easily transmit signals over long distances. However, it also requires higher bandwidth and more complex network infrastructure compared to traditional broadcasting methods.
IP broadcasting allows broadcasters to use the same networks and internet connections that they already have in place, which can reduce costs. It is also very flexible since it can easily accommodate changes in the number of users or viewers. With IP broadcast, any device that’s connected to the internet can potentially be a receiver for the video and audio stream, which means it’s possible to reach a much larger audience.
Another benefit of IP broadcasting is that it can be used for remote production and collaboration. For example, an IP-based system can allow cameras and other equipment in a remote location to be controlled and monitored in real-time by a production team in another location. It also enables multiple video feeds to be transmitted over a single network connection, which can help simplify production workflows and lower costs.
Overall, IP broadcasting is a powerful and flexible technology that is rapidly becoming a key part of the broadcasting industry. While it does require careful planning and infrastructure investment upfront, it can ultimately help broadcasters to reach larger audiences, reduce costs, and improve the quality and efficiency of their broadcasts.
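As a minimal sketch of packet-based delivery (my own illustration, not from the card), the snippet below pushes chunks of a media file onto a UDP multicast group, a transport commonly used for contribution feeds on managed IP networks; the address, port, file name, and pacing are arbitrary placeholders, and real senders typically wrap the payload in RTP or MPEG-TS:

```python
import socket
import time

GROUP, PORT = "239.1.1.1", 5004  # hypothetical multicast group and port
CHUNK_SIZE = 1316                # 7 x 188-byte TS packets fit in one UDP datagram

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

with open("stream.ts", "rb") as media:  # placeholder file name
    while chunk := media.read(CHUNK_SIZE):
        sock.sendto(chunk, (GROUP, PORT))
        time.sleep(0.001)  # crude pacing; real senders shape to the stream bit rate
```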

10
Q

TS

A

A Transport Stream (TS) is a standard format used for transmitting audio, video, and data over digital broadcast networks. It is commonly utilized in cable, satellite, and terrestrial television broadcasting systems. The TS format provides a multiplexed stream that combines multiple audio and video streams along with other data, allowing efficient transmission and management of content.
Key characteristics of a Transport Stream include:
* Multiplexing: TS allows the combination of multiple audio, video, and data streams into a single stream, facilitating the simultaneous transmission of different programs or services.
* Packetization: The content in a TS is divided into fixed-size packets called Transport Stream packets. Each packet contains a header followed by payload data, allowing for efficient data transmission and error handling.
* Program Specific Information (PSI): PSI provides metadata about the programs, services, and components within the Transport Stream. It includes tables such as the Program Association Table (PAT), Program Map Table (PMT), and Service Description Table (SDT).
* Error Detection and Correction: TS employs error detection techniques, such as CRC (Cyclic Redundancy Check), to identify transmission errors. Error correction mechanisms, such as Reed-Solomon Forward Error Correction (FEC), are used to recover lost or corrupted data.
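As an illustrative sketch (not part of the card), MPEG Transport Stream packets are 188 bytes long and begin with the sync byte 0x47; the parser below pulls a few header fields out of a single packet:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Extract basic fields from a single 188-byte MPEG-TS packet header."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid TS packet (wrong length or sync byte)")
    return {
        "transport_error": bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],  # which stream the packet belongs to
        "scrambling_control": (packet[3] >> 6) & 0x03,
        "adaptation_field": (packet[3] >> 4) & 0x03,
        "continuity_counter": packet[3] & 0x0F,
    }

# Minimal example: a null packet (PID 0x1FFF) with an empty payload.
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet))
```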

11
Q

SCTE-35

A

signaling protocol used in digital television systems

SCTE-35 (Society of Cable Telecommunications Engineers-35) is a standard that defines a signaling protocol used in digital television systems. It provides a mechanism for inserting time-based triggers, known as SCTE-35 messages or cues, into a Transport Stream.
SCTE-35 messages carry information related to advertisement insertion, program switching, content blackouts, or other scheduled events. These messages are typically used by broadcasters, cable operators, and video service providers to control the presentation of content and accurately manage ad breaks within a program.
Key features of SCTE-35 include:
* Cue Points: SCTE-35 messages are inserted at specific cue points within a Transport Stream to trigger specific actions or events.
* Splicing Information: SCTE-35 messages carry information about splice events, which indicate where commercials or other content should be inserted or removed within a program.
* Timing and Duration: SCTE-35 provides precise timing and duration information for the triggered events, allowing broadcasters to synchronize their systems for seamless content transitions.
* Signal Insertion and Extraction: SCTE-35 messages can be inserted into a Transport Stream using specialized equipment and extracted by compatible devices for proper interpretation and execution.
SCTE-35 plays a critical role in ensuring accurate and synchronized content delivery, especially for targeted ad insertion in broadcast and cable television systems. It enables broadcasters to efficiently manage the timing and presentation of content, providing a seamless viewing experience for the audience.
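As a rough sketch (my own illustration, not from the card, and assuming the fixed-length splice_info_section header defined by the standard), the snippet below reads just far enough into an SCTE-35 section to report which splice command it carries; the buffer at the end is a zero-filled illustration, not a valid cue:

```python
SPLICE_COMMANDS = {
    0x00: "splice_null",
    0x04: "splice_schedule",
    0x05: "splice_insert",
    0x06: "time_signal",
    0x07: "bandwidth_reservation",
}

def splice_command_type(section: bytes) -> str:
    """Report the splice command carried by an SCTE-35 splice_info_section.

    Assumes the standard header layout: table_id (0xFC), section_length,
    protocol_version, pts_adjustment, cw_index, tier, and splice_command_length
    occupy bytes 0-12, so the command type sits at byte offset 13.
    """
    if not section or section[0] != 0xFC:
        raise ValueError("not an SCTE-35 splice_info_section")
    return SPLICE_COMMANDS.get(section[13], f"unknown (0x{section[13]:02x})")

# Illustrative buffer only (header bytes zeroed, no real command body or CRC):
fake_section = bytes([0xFC]) + bytes(12) + bytes([0x05])
print(splice_command_type(fake_section))  # splice_insert
```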

12
Q

EBU

A

European Broadcasting Union
