Dealing with the complexity of a wide variety of content, arriving from many sources and destined for many platforms, can become very time consuming and expensive.
Unlike traditional linear TV, where a carefully planned line-up of content goes on air every day, the volume of content that needs to be packaged and delivered has become extremely irregular. Content volumes and publishing plans can differ depending on release window agreements and on platform and device requirements, and last-minute modifications are common.
The hardware capacity required to process and store media files can also strain a media company's resources. Ensuring seamless system performance, to maintain quality and speed during peak load times, is challenging with traditional solutions and infrastructure. In addition, a solution sized for worst-case usage means that capacity will sit underutilized at other times.
Cloud and parallel processing to the rescue
Let’s suppose there is an intensive process running on a server and, due to capacity limitations, it is slower than required. That process could be transcoding a large media file into several different formats for multi-screen distribution, running a quality check to make sure all tracks are perfectly synchronized, in the right language and free of defects, or gathering media and metadata for online packaging and delivery.
The traditional approach would be to look at the KPIs of the existing equipment, identify any bottlenecks and decide whether to scale the system up by upgrading or replacing the hardware. This approach works and delivers the required performance, but only with additional capital investment. Moreover, some of the existing hardware could not be repurposed and would need to be written off. If requirements change further and performance needs to increase again, adding more capacity might work. However, upgrade options are expensive, and they are likely to make your utilization even worse.
What if, instead of spending resources on hardware upgrades and expansions, you rented capacity from a cloud provider only when needed? In that case, during off-peak times there is no underutilized hardware, which means you have solved your utilization problem. At this stage, however, you haven’t actually improved the speed at which you process content, and you still depend on reliable infrastructure to meet your SLAs.
But what if the task is split into parts that can be processed in parallel? Because cloud providers charge for the total compute consumed, running ten nodes for a tenth of the time costs roughly the same as running one node for the full duration: distributing the process across multiple servers in parallel adds little or no cost while significantly shortening processing time.
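The idea can be sketched in a few lines of Python. This is a simplified illustration, not Ericsson's implementation: the function names are hypothetical, and `transcode_segment` is a stand-in for real per-segment media work dispatched to cloud nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_segments(total_frames, segment_size):
    """Divide a job covering total_frames into (start, end) ranges."""
    return [(start, min(start + segment_size, total_frames))
            for start in range(0, total_frames, segment_size)]

def transcode_segment(segment):
    """Stand-in for the real per-segment transcoding work."""
    start, end = segment
    return f"encoded[{start}:{end}]"

def transcode_parallel(total_frames, segment_size, workers=4):
    """Fan the segments out to a pool of workers and collect the
    results, ready to be concatenated into the final output."""
    segments = split_into_segments(total_frames, segment_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order, so the output pieces can be
        # concatenated directly.
        return list(pool.map(transcode_segment, segments))
```

In a real deployment each worker would be a separate cloud node rather than a local thread, but the structure is the same: split, process independently, reassemble in order.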
Splitting a task into pieces and distributing it in the cloud also improves resilience, not just speed. Typically, a failure during a media processing job means the loss and recovery of assets, leading to delayed delivery and additional cost. Splitting the job into parallel tasks increases the overall tolerance to failure: each piece is smaller and processed independently of the rest, so recovery is faster, since only a small fraction of the job needs to be restarted rather than the whole thing. Relying primarily on cloud capacity also allows quick switching between nodes in case of hardware failures, and it adds flexibility and cost efficiency, since capacity is used only “on demand”.
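The recovery benefit can be sketched as follows, again as a hypothetical illustration rather than the actual platform logic: when one segment fails, only that segment is retried, and the rest of the job is untouched.

```python
def process_with_retry(segments, process_fn, max_attempts=3):
    """Process each segment independently; on failure, retry only
    that segment instead of restarting the whole job."""
    results = {}
    for seg in segments:
        for attempt in range(1, max_attempts + 1):
            try:
                results[seg] = process_fn(seg)
                break
            except RuntimeError:
                if attempt == max_attempts:
                    raise  # segment kept failing; surface the error
    return [results[seg] for seg in segments]
```

With a monolithic job, the same transient failure would force the entire asset to be reprocessed from the start; here the cost of a failure is bounded by the size of one segment.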
Hurdles to overcome
Successfully implementing a parallel processing solution comes with technical challenges, though. When media processing is split across nodes, the fragments need to be joined back together in a way that guarantees the required quality of the output. This calls for careful process design, especially for assets with lower bit-rate outputs.
Another challenge relates to the media codecs used. They constrain how video and audio streams can be split – header and timecode information must be handled so that each node can successfully process its fragment and the pieces can be reassembled at the end. This is another major consideration in the logic of software that processes media assets in parallel.
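Two of these constraints can be made concrete with a small sketch (simplified and hypothetical, not the platform's actual logic): split points must fall on keyframes, because inter-coded frames depend on an earlier keyframe, and each node needs the timecode offset of its fragment so that the stitched output has continuous timestamps.

```python
def keyframe_aligned_segments(keyframes, total_frames, target_length):
    """Choose split points only at keyframes: a segment must start on
    one to be decodable on its own."""
    cuts = [0]
    for kf in keyframes:
        if kf - cuts[-1] >= target_length:
            cuts.append(kf)
    return list(zip(cuts, cuts[1:] + [total_frames]))

def with_timecode_offsets(segments, fps=25.0):
    """Attach the start timecode (in seconds) each node needs so its
    output timestamps line up when the pieces are stitched together."""
    return [{"frames": (a, b), "offset_s": a / fps} for a, b in segments]
```

Note that keyframe alignment means segments are only approximately equal in length, which the scheduling logic has to tolerate.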
The Ericsson Broadcast and Media Services team has implemented an innovative solution capable of processing content at unprecedented speed by addressing all of these challenges. The focus of the continuous development of Ericsson’s Cloud Media Processing Platform, the backbone of our media management services, is to deliver efficient, innovative, and high-quality services to content providers. We believe that the combination of a parallel processing architecture and the public cloud has the potential to fundamentally change media processing workflows by increasing the agility of media companies.