MPEG-4 system layer and A/V containers for wireless video
By Rick Tewell, Fujitsu Microelectronics America
Courtesy of Video Imaging DesignLine, Jul 14, 2006
Dick Tracy's watch, back in the 1950s, allowed the comic strip detective to make phone calls from his wrist. Sure, he had a limited number of people he could call, but anyone watching him do it was impressed. Now, of course, cell phone technology has become ubiquitous, so younger generations don't think much of Detective Tracy's cool watch. Today's equivalent might be watching video streamed wirelessly to a palm-sized device from anywhere on the planet. This capability, as it emerges, will make handheld real-time video conferencing a reality.
So here we are, 50 years after Dick Tracy and billions of telecommunications dollars later, and we've solved many of the challenges of audio. But what about wireless video? Most of the technology pieces are in place; the major issues remaining are consumer demand and an economic model that makes it feasible for companies to develop the solution. One of the most important technology pieces is the MPEG-4 video standard and its early implementations. MPEG-4 is ideally suited to wireless mobile-device video, as discussed below. First, though, a bit of the technology background that has brought us to this point.
The technology for streaming audio by itself is well established; consider music and phone conversations. Video, however, is different, and it requires audio. In the 1920s, silent movies gave way to movies with sound, but only after the technology advanced enough to make sound viable. Today, silent video is considered one shade shy of "useless" in the mind of the consumer. The two key challenges in adding sound to movies were synchronization and amplification. In a streaming video environment, whether wireless or wired, the basic problem of audio/video synchronization is still with us. We can tolerate occasional glitches in video, but we are extremely sensitive to audio discrepancies such as stuttering or out-of-sync audio. The human ear can detect timing errors of just a few milliseconds, so accurate audio/video synchronization is critical to successful video transmission.
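To make the tolerance point concrete, here is a minimal sketch in C of the kind of drift check a player might run each time it renders a frame, comparing the presentation timestamps of the audio and video currently being played. The structure, threshold values, and corrective actions are illustrative assumptions, not a prescription from the MPEG-4 specification.

```c
/* Sketch: lip-sync drift check against a tolerance window.
   All values here are hypothetical; a real player derives its
   timestamps from the container's system layer. */
#include <stdio.h>

/* Presentation timestamps, in milliseconds, of the audio sample
   and video frame currently being rendered. */
typedef struct {
    long video_pts_ms;
    long audio_pts_ms;
} av_clock_t;

/* Audio arriving ahead of the picture is generally more
   objectionable than audio arriving behind it, so the window is
   asymmetric. The specific numbers are illustrative assumptions. */
#define MAX_AUDIO_LEAD_MS  40
#define MAX_AUDIO_LAG_MS   60

static const char *check_sync(const av_clock_t *clk)
{
    long drift = clk->audio_pts_ms - clk->video_pts_ms;
    if (drift > MAX_AUDIO_LEAD_MS)
        return "audio ahead: drop video frames to catch up";
    if (drift < -MAX_AUDIO_LAG_MS)
        return "audio behind: hold/repeat the video frame";
    return "in sync";
}

int main(void)
{
    av_clock_t samples[] = {
        { 1000, 1010 },   /* 10 ms drift: imperceptible   */
        { 1000, 1055 },   /* audio 55 ms early            */
        { 1000,  920 },   /* audio 80 ms late             */
    };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("drift %+ld ms -> %s\n",
               samples[i].audio_pts_ms - samples[i].video_pts_ms,
               check_sync(&samples[i]));
    return 0;
}
```

Players typically treat the audio clock as the master and adjust video against it, since audio corrections are the more audible artifact; the asymmetric window above reflects that bias.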
To ship audio and video together over either a wired or wireless network, we must first employ techniques to place the video and audio in a container that keeps them together before, during, and after shipping. While much has been said and written about the techniques for encoding and decoding audio and video, the technologies for placing these encoded streams in containers for shipment over wired and wireless networks are less well understood. Our goal in this article is to shed some light on this aspect of audio and video transmission.
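As a rough illustration of what a container's system layer does, the sketch below interleaves encoded audio and video access units by presentation timestamp so the two streams travel together in near-presentation order. The structures and sample values are hypothetical; a real MPEG-4 file adds headers, box structure, and considerably more bookkeeping.

```c
/* Sketch: timestamp-ordered interleaving of encoded audio and
   video access units, the core job a container performs. This is
   a simplified illustration, not the actual MP4 box format. */
#include <stdio.h>

typedef enum { AUDIO, VIDEO } track_t;

typedef struct {
    track_t track;
    long    pts_ms;    /* presentation timestamp              */
    int     size;      /* encoded payload size in bytes       */
} access_unit_t;

/* Emit whichever pending unit presents earliest, so the decoder
   receives both streams in near-presentation order and never has
   to buffer one stream for long while waiting on the other. */
static void mux(const access_unit_t *audio, int na,
                const access_unit_t *video, int nv)
{
    int a = 0, v = 0;
    while (a < na || v < nv) {
        int take_audio = (v >= nv) ||
                         (a < na && audio[a].pts_ms <= video[v].pts_ms);
        const access_unit_t *u = take_audio ? &audio[a++] : &video[v++];
        printf("%s  pts=%4ld ms  %d bytes\n",
               u->track == AUDIO ? "audio" : "video",
               u->pts_ms, u->size);
    }
}

int main(void)
{
    /* Illustrative cadence: ~23 ms audio frames, ~33 ms video frames. */
    const access_unit_t audio[] = {
        { AUDIO,  0, 192 }, { AUDIO, 23, 192 }, { AUDIO, 46, 192 },
    };
    const access_unit_t video[] = {
        { VIDEO,  0, 4200 }, { VIDEO, 33, 1100 },
    };
    mux(audio, 3, video, 2);
    return 0;
}
```

Interleaving granularity is itself a design trade-off in real containers: finer interleaving reduces the buffering a decoder needs, while coarser interleaving reduces per-chunk overhead.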