What is PERSEUS?
V-Nova’s PERSEUS™ is a new codec format that delivers on the promises of next generation image and video codecs by compressing video at “bandwidth that matters” on available encoders and decoders.
Uniquely, the PERSEUS codec format was designed from the ground up for hierarchical processing (“multiscale”), massively parallel processing and machine learning. This allows better compression and substantial processing power reductions without requiring dedicated hardware acceleration, hence maintaining compatibility with devices deployed and already in the field.
The PERSEUS format is available in two variants: PERSEUS Plus and PERSEUS Pro.
Professional equipment enabled with PERSEUS provides broadcasters, content producers and service operators with the industry’s best and most reliable performance at the lowest cost.
Why should I care about PERSEUS?
For end users, PERSEUS provides a step change in availability and quality of experience of video services, enabling higher quality and more affordable video services for all.
For TV & Media operators, PERSEUS enables better services to more people (i.e., higher revenues), lower investments and lower operating costs, often all at the same time and after a very limited set-up investment. Payback times are often measured in calendar days rather than weeks or months.
For other companies and industry verticals dealing with video, PERSEUS either unlocks video-related use cases that would not otherwise be viable, or significantly improves the quality and cost efficiency of existing services.
What computing platforms does PERSEUS support?
Commercial, optimised, rate-controlled and maintained PERSEUS codec implementations are available from V-Nova as software SDKs or as pre-integrated solutions running on a variety of computing platforms, including x86, ARM, set-top-box SoCs, GPUs and FPGAs. Notably, PERSEUS does not require dedicated hardware acceleration, thanks to its design criterion of leveraging the hardware acceleration already available in all CPUs, GPUs and SoCs.
Does upgrading to PERSEUS require changes in the workflow?
No. The PERSEUS software upgrade happens at a low level, at the encoder (content preparation) and decoder (content playback), with no other changes to the existing workflow.
PERSEUS leverages standard encapsulation and transmission standards, and is thus transparent to remuxers, transmuxers, encryption (DRM, CA), metadata management, packagers, Content Delivery Networks, ad insertion, etc.
Connected devices, PCs and deployed set-top boxes alike can easily be upgraded to decode PERSEUS, transparently to the end user.
What is PERSEUS Plus?
PERSEUS Plus is the ideal solution for high quality video distribution to the highest number of end users, where backward compatibility with billions of deployed low power legacy devices is key. PERSEUS Plus provides an exceptional combination of encoding density, lightweight decoding, use of existing hardware acceleration, visual quality improvement and ease of deployment.
PERSEUS Plus leverages the unique hierarchical characteristics of the PERSEUS format to operate in combination with other codec formats such as H.264, HEVC or VP9 (and, in the future, AV1). By doing so, it builds a higher quality picture (an enhancement) on top of a picture (a base) encoded with a different codec at a lower resolution. The result is a higher quality picture for the same bitrate, or a better compressed image for a given quality.
What performance can I expect from PERSEUS Plus?
PERSEUS Plus enables service providers and operators to deliver high quality live and VoD video at the operating points that matter.
In OTT, IPTV, DTT, DTH Satellite and Cable distribution contexts, PERSEUS-enabled on-premise and cloud distribution encoders can deliver, for example:
Other operating points are, of course, available. Ask yourself what bandwidth you can afford (technically or commercially) for a service, and PERSEUS will deliver the best possible video within it.
Does PERSEUS Plus require a specific media player?
No. PERSEUS Plus can easily be integrated with any media player of choice, just like an additional audio codec; a new integration typically takes ten to twenty man-days of work, including testing. Several pre-integrated commercial media players are already available, and common options such as Android ExoPlayer, Microsoft MFT libraries, iOS libraries and HTML5 scripted decoding are available directly from V-Nova.
Is PERSEUS Plus a competitor of H.264, HEVC, VP9 or AV1?
No. PERSEUS is indeed a next generation codec format, and as such it belongs to the same category as HEVC, VP9 and AV1. However, PERSEUS Plus is designed to work as a turbocharger, in combination with a base codec format of choice, which is used to best leverage available hardware acceleration and for compatibility with the ecosystem of ad insertion, metadata management, packaging, delivery, encryption, etc. As such, PERSEUS Plus is not strictly an alternative to the H.264, HEVC, VP9 or AV1 formats, nor is it an alternative to solutions that optimise the use of those formats. The real choice is whether or not to add the PERSEUS Plus turbocharger.
PERSEUS Plus H.264 provides maximum device compatibility: it delivers unprecedented compression performance while remaining decodable by any device compatible with H.264.
If an operator is able to deploy HEVC or VP9, then PERSEUS Plus HEVC and PERSEUS Plus VP9 can also be used, providing even better compression than HEVC or VP9, and at the same time strongly reducing processing power load (and costs) at both encoding and decoding.
What are the risks of deploying PERSEUS Plus?
None. Both encoders and decoders remain able to encode and play back existing video formats. Since the economic payback from the investment of integrating and deploying PERSEUS is typically measured in a few days or a few weeks, deploying PERSEUS is essentially risk free.
What are the benefits of deploying PERSEUS Plus in OTT?
PERSEUS Plus provides immediate and substantial benefits on all of the major drivers of monetisation for an OTT service:
PERSEUS Plus is fully compatible with the connected device ecosystem and is supported on Android, iOS and Windows, as well as in HTML5-capable browsers.
Importantly, PERSEUS Plus benefits do not come from costly pre-processing or post-processing optimizations (quite the opposite, they free up computing resources), and thus apply to both VoD and linear Constant Bit Rate encoding.
What are the benefits of deploying PERSEUS Plus in payTV?
PERSEUS Plus provides immediate and substantial benefits on all of the major drivers of monetisation of a payTV service:
PERSEUS Plus is compatible with, and can be deployed on, most installed MPEG-4 set-top boxes via an over-the-air update.
Importantly, PERSEUS Plus benefits do not come from costly pre-processing or post-processing optimizations (quite the opposite, they free up computing resources), and thus apply to both VoD and linear Constant Bit Rate encoding.
What is PERSEUS Pro?
PERSEUS Pro is a no-compromise solution for professional content exchange and production applications requiring mathematically lossless, near lossless or visually lossless compression.
PERSEUS Pro’s unique benefits make it ideal for use cases such as multi-feed contribution, low-latency remote production, UHD contribution, IP Production, non-linear editing and short-term production/post-production storage.
What are the benefits of deploying PERSEUS Pro?
PERSEUS Pro combines the most efficient compression with a lightweight, fully parallel machine-learning design, providing:
Is PERSEUS Pro an alternative to JPEG2000 and HEVC-I in mathematically lossless or visually lossless intra-only production and contribution use cases?
Yes.
What is V-Nova?
V-Nova Ltd. is a London-headquartered technology company providing next generation compression solutions to address the ever-growing image and video processing and delivery challenges.
V-Nova provides solutions spanning the entire media delivery chain, including content production, contribution, storage and distribution to end users.
When was V-Nova founded?
V-Nova International Ltd. (“V-Nova”) was founded in 2011 by Guido Meardi, Luca Rossato, Pierdavide Marcolongo and Eric Achtmann, who were quickly joined by a team of senior business and technical leaders. The company was publicly launched on April 1, 2015.
Who is behind V-Nova?
V-Nova is led by MIT graduate and former McKinsey Senior Partner Guido Meardi as CEO and former Goldman Sachs Executive Director Pierdavide Marcolongo as CCO.
They are supported by an executive team of sales, marketing, business development, intellectual property, research and engineering professionals with long-standing track records in the industry.
V-Nova’s board is chaired by McKinsey Director Emeritus David Benello. Mr. Benello acts as a non-executive director on the boards of listed telecoms companies throughout the world and has a successful track record as a professional investor.
V-Nova’s shareholders include institutional investors, UHNIs and strategic partners in TV & Media directly contributing to V-Nova’s business, such as the satellite operator Eutelsat.
Who are V-Nova’s investors?
V-Nova’s shareholding comprises a diversified group of strategic partners in TV & Media, including the satellite operator Eutelsat and the leading entertainment company Sky plc, and financial investors, including the investment company Whysol Investment and the London Stock Exchange-listed investment company Limitless Earth.
How many people does V-Nova employ?
V-Nova employs over 60 people.
Is PERSEUS™ patent protected?
Yes. V-Nova has over 250 globally-registered patents protecting its intellectual property. In early 2017 V-Nova also acquired the highly complementary patent portfolio of renowned video imaging experts Faroudja Enterprises Inc., founded by video pioneer and multiple Emmy Award winner Yves Faroudja.
What is P.Link?
Separately from the codec itself, but instrumental in materialising the game-changing impact of PERSEUS on appliance cost reduction, channel density, reliability and cloud-readiness, V-Nova has also developed a complete embedded PERSEUS-based contribution and production solution called P.Link. It is offered as embedded software running either on V-Nova-specified Commercial-Off-The-Shelf (COTS) appliances or in cloud-based environments.
Beyond its game-changing video compression benefits, validated by multiple reputable third parties, P.Link has since 2015 been successfully and reliably used by Tier 1 broadcasters and service providers for studio-to-studio contribution, remote production and prime-time sports contribution.
What is video?
A video clip is made up of a series of pictures or frames, viewed in sequence at a given speed. Video data is stored, processed and transmitted as “streams” of bits (basic units of data).
What determines video quality?
Video quality depends mainly on four factors: “Resolution” (the number of pixels or picture elements in each frame), “Frame Rate” (the number of frames or pictures per second), “Bit Depth” & “Dynamic Range” (the accuracy with which colour is described), and “Quality” of compression (when applicable).
What is video compression?
Compression is the process of reducing the size of a video stream in order to make it more manageable. The amount of video data that gets produced or transmitted in a given period of time, usually one second, is called bitrate. Compression involves encoding, an operation where the size or bitrate of the stream gets reduced, and decoding, where the data gets uncompressed again so that we can interpret and view it.
Compression works by exploiting the correlation of information that exists within each picture (so-called “intra-frame” or “intra-only” compression) and among similar pictures in a sequence (temporal or “inter-frame” compression).
For example, if one third of a picture depicts a white wall, then intra-frame compression allows us to treat that white wall much more like a single piece of predictable information, rather than thousands or millions of individual white pixels. Similarly, if the white wall is stationary across a number of frames in a sequence, we can treat it as the same object by using temporal compression instead of duplicating the information in every frame.
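The white-wall intuition can be illustrated with run-length encoding, the simplest possible form of intra-frame redundancy removal (a purely didactic sketch, not how PERSEUS or any broadcast codec actually works):

```python
def run_length_encode(pixels):
    """Collapse runs of identical values into [value, count] pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1][1] += 1  # extend the current run
        else:
            encoded.append([p, 1])  # start a new run
    return encoded

# A scanline that is one-third "white wall" (value 255) followed by
# unpredictable texture: the wall collapses to a single run, while
# every alternating texture pixel stays as its own run.
scanline = [255] * 640 + [0, 128] * 640
encoded = run_length_encode(scanline)
print(len(scanline))  # 1920 raw values
print(len(encoded))   # 1281 runs: the wall became one entry
```

The predictable region compresses to almost nothing, while the unpredictable region barely compresses at all, which is exactly why finding and grouping correlated information is the heart of the problem.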
Achieving high quality compression means finding out with painstaking care which information can and must be grouped, and which should not. Additional compression can be achieved by selectively discarding information in order to increase the correlation of the signal (what experts call “lossy encoding” as opposed to “lossless encoding”), at the expense of a certain quality degradation. In other words, lossy encoding involves trading off full mathematical precision in the reconstruction of the video in order to achieve a higher rate of compression.
What makes a good video codec?
Codec is short for (en)COder/DECoder. A good video compression codec:
Rate control, compression speed and latency optimisations are particularly challenging in temporal mode, because of the need to examine and process a much larger amount of data.
How are video compression codecs compared?
The efficiency of a video compression solution (i.e., a given implementation of a certain codec technology) is assessed using both subjective and objective testing.
Subjective testing often involves experts (“golden eyes”) looking at a screen to evaluate the quality of the image in much the same way as a highly skilled sommelier would judge wine. Like sommeliers, the experienced “golden eyes” give a rating of their subjective interpretation of the quality of each picture or video. This is a judgement made by an expert, but it does not necessarily represent the preferences of a majority of viewers.
In other types of subjective testing, a random sample of non-expert viewers is asked to rate the video quality on a simple scale of 1 to 5.
The very nature of video compression leads to subjective likes and dislikes. The same is true for so-called “objective” metrics, which were developed in an attempt to quantify those human likes and dislikes.
There are some industry recognised metrics (like PSNR, DMOS or MS-SSIM) that offer a mechanism to convert the “quality” of compressed video signals into a hard number. Despite the well-known limitations of such techniques, using them to compare various operating points or to tune parameters within a specific codec often makes more sense than trying to overextend them in an effort to simplify comparisons among different kinds of video compression technologies or codecs.
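As an illustration of how such metrics turn quality into a hard number, PSNR is simply a logarithmic measure of the mean squared error between the original and the reconstructed frame. The tiny four-pixel "frames" below are synthetic, purely for demonstration:

```python
import math

def psnr(original, compressed, max_value=255):
    """Peak Signal-to-Noise Ratio in dB between two equal-length pixel lists."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: lossless reconstruction
    return 10 * math.log10(max_value ** 2 / mse)

reference = [100, 120, 140, 160]   # original pixel values
decoded = [101, 119, 141, 158]     # values after lossy encode/decode
print(round(psnr(reference, decoded), 1))  # ~45.7 dB: a small distortion
```

Note that a single dB figure says nothing about *where* the errors sit in the picture, which is precisely the limitation of objective metrics discussed above.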
Typically, different codecs exhibit distinct types of artefacts when put under stress. For example, MPEG-2 can look very blocky and pixelated at low bitrates, whereas JPEG 2000 can exhibit ringing and blurring artefacts. This makes comparisons difficult and highly subjective.
Why is comparing video compression codecs’ performance hard?
First it is important to understand what it is that we are comparing, because while terms like “format”, “codec”, “encoding technology” and “stream optimisation technology” are often used interchangeably in this context, they do indicate very different things:
Having understood what we are comparing, we should consider that it is impossible to provide a single number summarising how much better a given product compresses with respect to another, just as it is impossible to say with a single number how much better one car is than another without knowing the use case.
Different types of images compress differently. Different series of images, even more so. For example, white noise does not compress at all (lossless compression leaves it unchanged), while a monochrome screen can be compressed perfectly, down to a single number. That being said, in practical use cases we seldom find ourselves staring at either white noise or perfectly monochrome screens.
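This difference in compressibility is easy to demonstrate with a general-purpose lossless compressor (zlib here, standing in for the intra-frame stage of a video codec):

```python
import os
import zlib

frame_size = 100_000  # bytes, a stand-in for one small frame

noise = os.urandom(frame_size)   # "white noise": essentially incompressible
monochrome = bytes(frame_size)   # a uniform screen: maximally redundant

print(len(zlib.compress(noise)))       # roughly frame_size: no gain at all
print(len(zlib.compress(monochrome)))  # a few hundred bytes at most
```

The same asymmetry holds for real footage: the more predictable the content, the more any lossless scheme can remove.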
Also, distinct implementations of the same codec format, or even different uses of the same implementation, can surprise us by compressing very differently. It is often enough to tune some of the input parameters—much like tuning a car’s engine—and/or to change the allowed encoding delay (e.g., “fast mode” versus “slow mode”, double-pass versus single-pass, long GOP versus short GOP) to obtain huge variations in compression efficiency, even within the same codec format.
In addition, given that most compression codecs have pronounced strengths and weaknesses, some of them may perform very well in a given situation, and yet be easily surpassed by others in a different scenario.
To make things even more complex, there is no recognised standard for what lossy video should look like. For example, it is obviously impossible to know beforehand whether a viewer will be looking at a video sequence without interruptions or by repeatedly pressing the pause button; however, studies show that viewers tend to be more sensitive to variations of quality in time across a series of pictures rather than to the average quality within a single frame. This makes proper scientific comparisons even more cumbersome, because it is hard to decide how to weigh image quality versus quality consistency.
As a consequence, evaluators of codecs are advised to use a combination of objective and subjective testing on relevant video clips to get a feeling for the type of performance that they can expect. In the end, the true performance of a codec can be defined only for specific content at a given operating point, much like a comparison between the speed potential of a Ferrari and a pickup truck can only be made in the context of a driving surface (e.g., a race track or an unpaved, rutted mountain road).
Even after the most appropriate conditions for evaluation have been established, “better compression” can still be provided by multiple factors:
Why is video compression important?
Video, like pictures before it, has become an integral part of our everyday lives (from television to video conferencing, to medicine, to mining!). Given that video is made up of successive images (often up to 60 per second even with legacy standards), it generates a huge amount of data. During a 2-hour HD movie, your eyes will see almost two hundred thousand individual pictures or frames flashing by. To put it another way, the same movie uncompressed would take up over one terabyte of storage, which is still rather expensive and difficult to manage.
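The one-terabyte figure follows directly from the raw numbers; here is the back-of-the-envelope check, assuming 1080p HD, 24 frames per second and 8 bits per colour channel:

```python
width, height = 1920, 1080   # 1080p HD resolution
bytes_per_pixel = 3          # 8-bit R, G and B, uncompressed
fps = 24                     # typical cinema frame rate
duration = 2 * 60 * 60       # two hours, in seconds

frames = fps * duration
raw_bytes = width * height * bytes_per_pixel * frames

print(f"{frames:,} frames")              # 172,800 individual pictures
print(f"{raw_bytes / 1e12:.2f} TB raw")  # 1.07 TB: just over one terabyte
```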
What are the video quality levels?
Video quality varies substantially among applications. Broadcast TV uses interlaced formats for SD and 1080 HD content (i50 and i59.94) and progressive formats for 720 HD and UHD video (p50 and p59.94). VOD and OTT content is generally delivered over the Internet as progressive and can range in resolution from sub-SD (360p or even less) to UHD (2160p), with compression qualities that can range from much lower to even higher than broadcast video; frame rate can change too, with the most common rates for online video being 25 and 30 frames per second.
We must remember that video resolution/rate and perceived quality are not necessarily the same thing. Viewers often prefer visually perfect SD video to poorly encoded HD video, as artefacts can be much more annoying and distracting than some loss of detail; this stresses once again the importance of scientific testing of representative content at predefined operating points (e.g., resolution, bitrate, colour depth).
What is video streaming, and what are the typical benchmarks for streaming at various quality levels?
Video streaming is the real-time transfer of video. The video signal gets transmitted at a given rate (e.g., 2 megabits per second), which must be less than the available bandwidth in order to ensure uninterrupted viewing. Depending on the use case, video can be encoded and transmitted in Constant Bit Rate (CBR) mode—at the expense of a lower quality during high complexity scenes—or at more or less uniform quality in Variable Bit Rate (VBR) mode.
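The CBR/VBR trade-off can be sketched numerically: under a fixed per-second budget, complex scenes get starved and quality dips, whereas VBR redistributes the same total bits to hold quality steady. This is a toy model for intuition only, not an actual rate-control algorithm:

```python
# Relative complexity of five one-second scenes (higher = harder to encode).
complexity = [1.0, 1.0, 3.0, 1.0, 1.0]
budget_per_sec = 2.0  # megabits per second

# CBR: every second gets the same bits, so quality ~ bits / complexity
# and the complex middle scene suffers.
cbr_quality = [budget_per_sec / c for c in complexity]

# VBR: bits follow complexity; same total spend, uniform quality.
total = budget_per_sec * len(complexity)
vbr_bits = [total * c / sum(complexity) for c in complexity]
vbr_quality = [b / c for b, c in zip(vbr_bits, complexity)]

print(cbr_quality)  # dips on the complex scene
print(vbr_quality)  # flat across all scenes
```

In practice CBR remains essential wherever the channel itself is fixed-rate (e.g., broadcast multiplexes), which is why the document distinguishes the two modes.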
Indicative bitrates vary widely depending on the use case. For instance, contribution video streams and mezzanine video formats used during the early stages of a video production workflow will typically encode each frame (or interlaced field) independently and need very high bitrates. This is due to the stringent requirements that call for excellent quality (since the same video stream may be re-encoded several times before being distributed to the end viewer, and degradations tend to accumulate) and very low latency (in order to minimise the overall “glass to glass” delay, i.e., from the camera to the viewer screen). On the other hand, distribution video streams are typically delivered at lower bitrates, historically associated with lower quality, due to bandwidth constraints, much higher costs of delivery and decoder processing power limitations.