Scaling a video on demand server
Early performance estimation is key to successful implementation
Illia Cremer, CoFluent Design
EETimes (4/27/2011 9:51 AM EDT)
Abstract
In a growing and increasingly competitive video on demand (VoD) market, system designers face new challenges in defining and sizing VoD server infrastructures. Early performance estimation through abstract modeling is a key enabler for providing the best quality of service and a compelling user experience.
This article illustrates how to model and simulate an example RTP/RTSP video on demand server using the method, notations and tools provided by CoFluent Design.
The objective is to determine the client's frame rate deviation and the average power consumption for different server configurations. The frame rate deviation is the difference between the expected theoretical frame rate and the actual frame rate of the video stream. It directly impacts the user's viewing experience and should be kept as close to zero as possible.
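As a concrete illustration of the metric (this is not code from the article), the short Python sketch below derives the deviation from frame arrival timestamps at the client; the function name, the 25 fps nominal rate and the timestamps are illustrative assumptions.

# Illustrative sketch, not from the article: frame rate deviation taken as
# the nominal (expected) frame rate minus the rate actually observed at the
# client, derived from frame arrival timestamps.
def frame_rate_deviation(arrival_times_s, nominal_fps):
    """Return nominal_fps minus the measured frame rate (frames/second)."""
    if len(arrival_times_s) < 2:
        return 0.0
    elapsed = arrival_times_s[-1] - arrival_times_s[0]
    measured_fps = (len(arrival_times_s) - 1) / elapsed
    return nominal_fps - measured_fps

# Example: a nominal 25 fps stream whose frames arrive every 42 ms,
# i.e. roughly 23.8 fps observed, giving a deviation of about 1.2 fps.
timestamps = [i * 0.042 for i in range(250)]
print(frame_rate_deviation(timestamps, nominal_fps=25.0))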
The impact of different hardware elements of the server, such as HDD type and server buffering, is studied. The example also illustrates how to model multiple instances of the same function and how to define an abstract network of computers.
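The article itself builds these models with CoFluent Design's tools. Purely as a rough, self-contained sketch of the trade-off being studied, the Python snippet below shares one server's HDD bandwidth and RAM buffer among several identical client streams and applies a simple linear power model. All figures (bandwidth, buffer hit ratio, frame size, power) are assumptions for illustration, not results from the article.

# Rough back-of-the-envelope model, not the CoFluent model: N identical
# client streams share a server whose HDD bandwidth and RAM buffer limit
# the aggregate frame rate that can be sustained.
def served_frame_rate(n_clients, nominal_fps, frame_size_bytes,
                      hdd_bandwidth_bps, buffer_hit_ratio):
    """Frame rate each client actually receives, in frames/second.

    buffer_hit_ratio is the assumed fraction of frames served from the
    server's RAM buffer instead of the HDD (0.0 .. 1.0).
    """
    demand_bps = n_clients * nominal_fps * frame_size_bytes * 8
    hdd_demand_bps = demand_bps * (1.0 - buffer_hit_ratio)  # buffer misses only
    if hdd_demand_bps <= hdd_bandwidth_bps:
        return nominal_fps                      # the server keeps up
    return nominal_fps * hdd_bandwidth_bps / hdd_demand_bps

def average_power(hdd_utilization, idle_w, active_w):
    """Simple linear power model: idle power plus a utilization-scaled delta."""
    return idle_w + hdd_utilization * (active_w - idle_w)

# Example: 50 clients at 25 fps with 15 kB frames, a 100 Mbit/s HDD and a
# 30 % buffer hit ratio (all assumed values).
nominal_fps, n, frame_bytes, hdd_bps, hit = 25.0, 50, 15_000, 100e6, 0.30
fps = served_frame_rate(n, nominal_fps, frame_bytes, hdd_bps, hit)
utilization = min(1.0, n * nominal_fps * frame_bytes * 8 * (1 - hit) / hdd_bps)
print("frame rate deviation:", nominal_fps - fps, "fps")
print("average power:", average_power(utilization, idle_w=60.0, active_w=120.0), "W")

In this toy model, a faster HDD or a larger buffer (higher hit ratio) pulls the deviation back toward zero at the cost of higher active power, which is the kind of trade-off the article's simulation explores quantitatively.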