Google Cloud Platform Blog
Product updates, customer stories, and tips and tricks on Google Cloud Platform
Panda achieves greater video quality using motion compensation for frame rate conversion
April 9, 2015
Today’s guest post comes from Ed Byrne, Director at Panda – a cloud-based video transcoding platform. To learn more about how Panda uses Google Cloud Platform, watch their case study video.
Panda makes it easy for video producers to encode their video in multiple formats for different mobile device screen sizes. But delivering blazing fast, high-quality videos to customers is no easy task – especially when your engineers are also dealing with infrastructure. Google Cloud Platform features like Live Migration and Autoscaler have allowed us to cut our infrastructure maintenance load to just half of one developer’s time.
With more resources to direct at innovation, we can put our focus on our customers, making their experience better with new and improved features in Panda. In fact, since relying on Google Cloud Platform for underlying infrastructure, we’ve developed our frame rate conversion by motion compensation technology. Our customers love the video quality they get using this feature, and we’re so excited about it that we agreed to give you the lowdown on how it works.
Introduction to motion compensation
Motion compensation is a technique originally developed for video compression, and it’s now used in virtually every video codec. Its inventors noticed that adjacent frames usually don’t differ much (except at scene changes), and used that fact to build a better encoding scheme than compressing each frame separately. In short, motion-compensation-powered compression tries to detect movement that happens between frames and then uses that information for more efficient encoding. Imagine two frames:
Panda on the left...
aaaand on the right
Now, a motion compensating algorithm would detect the fact that it’s the same panda in both frames, just in different locations:
First stage of motion compensation: motion detection
We’re still thinking about compression, so why would we want to store the same panda twice? Yep, that’s what motion-compensation-powered compression does – it stores the moving panda just once (usually, it would store the whole frame #1), but adds information about the movement. The decompressor then uses this information to reconstruct the remaining data (frame #2 from frame #1).
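The store-once-plus-motion idea can be sketched in a few lines of Python/NumPy. This is only a toy illustration, not Panda’s encoder: a 1-D array stands in for a real frame, and a single motion vector stands in for the per-block motion data a real codec would store.

```python
import numpy as np

# Toy 1-D "frames": a bright blob (our panda) moves 3 pixels to the right.
frame1 = np.zeros(10)
frame1[2:5] = 1.0                      # panda at positions 2..4
frame2 = np.zeros(10)
frame2[5:8] = 1.0                      # same panda, shifted right by 3

# The encoder stores frame1 in full, plus a single motion vector --
# far less data than storing frame2 pixel by pixel.
motion_vector = 3

# The decompressor reconstructs frame2 from frame1 and the vector alone.
reconstructed = np.roll(frame1, motion_vector)

assert np.array_equal(reconstructed, frame2)
```

In a real codec the frame is split into blocks, each block gets its own motion vector, and a residual is stored for whatever the motion model fails to predict.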
That’s the general idea, but in practice it’s not as smooth and easy as in the example. The objects are rarely the same, and usually some distortions and non-linear transformations creep in. Scanning for movements is very expensive computationally, so we have to limit the search space and optimize the code, even resorting to hand-written assembly.
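To make the cost of that scan concrete, here’s a minimal block-matching sketch in Python/NumPy – an exhaustive sum-of-absolute-differences (SAD) search over a limited window. This is an assumption-laden toy, nothing like the hand-optimized code mentioned above, but it shows why limiting the search space matters: the work grows with the square of the search range.

```python
import numpy as np

def find_motion(block, frame, top_left, search_range):
    """Exhaustive block matching: return the displacement (dy, dx) within
    +/- search_range that minimizes the sum of absolute differences (SAD)."""
    h, w = block.shape
    y0, x0 = top_left
    best_sad, best_dxy = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(frame[y:y + h, x:x + w] - block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_dxy = sad, (dy, dx)
    return best_dxy

# Toy frames: a 2x2 bright block moves 2 px right and 1 px down.
f1 = np.zeros((8, 8)); f1[2:4, 2:4] = 1.0
f2 = np.zeros((8, 8)); f2[3:5, 4:6] = 1.0
print(find_motion(f1[2:4, 2:4], f2, (2, 2), 3))  # → (1, 2)
```

Production encoders replace the exhaustive loop with hierarchical or diamond searches and vectorized SAD kernels, which is where the hand-written assembly comes in.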
Frame rate conversion by motion compensation
Motion compensation can be used for frame rate conversion too, often with really impressive results.
For illustration, let’s go back to the moving panda example. Let’s assume we want to change the frame rate from two frames per second (FPS) to three FPS. In order to maintain the video speed, each frame will be on screen for a shorter amount of time (0.5 s vs. 0.33 s).
One way to increase the number of frames is to duplicate a frame, resulting in three FPS, but the quality will suffer. As you can see, frame #1 has been duplicated:
Converting from 2 FPS to 3 FPS by duplicating frames
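As a sketch (plain Python, with frames reduced to labels – not Panda’s encoder), the duplication approach simply maps every input pair to three output frames by repeating the first one:

```python
# 2 FPS -> 3 FPS by duplication: for every 2 input frames, emit 3 output
# frames by showing the first frame of each pair twice.
def duplicate_to_3fps(frames):
    out = []
    for i in range(0, len(frames), 2):
        pair = frames[i:i + 2]
        out.append(pair[0])
        out.append(pair[0])          # frame #1 shown twice
        out.extend(pair[1:])         # then frame #2, if present
    return out

print(duplicate_to_3fps(["f1", "f2"]))  # → ['f1', 'f1', 'f2']
```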
Yes, the output has three frames and the input has two, but the effect isn’t visually appealing. We need a bit of magic to create a frame that humans would see as naturally fitting between the two initial frames – the panda has to be in the middle. That’s a task motion compensation can deal with: detect the motion, but instead of using it for compression, create a new frame based on the gathered information. Here’s how it should work:
Converting from 2 FPS to 3 FPS by motion compensation: Panda's in the middle!
Notice that by creating a new frame, we keep our panda hero at the center.
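Continuing the toy Python/NumPy setup from earlier (a 1-D array as a stand-in frame, a single motion vector assumed already found by motion estimation), the in-between frame is synthesized by moving the panda half the detected distance:

```python
import numpy as np

# Toy 1-D frames: the "panda" moves 4 px right between frame1 and frame2.
frame1 = np.zeros(12); frame1[1:4] = 1.0
frame2 = np.zeros(12); frame2[5:8] = 1.0
motion_vector = 4          # assumed already found by motion estimation

# Synthesize the middle frame by moving the panda half the distance.
middle = np.roll(frame1, motion_vector // 2)

# The same middle frame is reached by moving frame2 halfway backwards.
assert np.array_equal(middle, np.roll(frame2, -(motion_vector // 2)))
```

Real interpolators work per block, blend predictions from both neighboring frames, and fall back to simpler strategies where the motion estimate is unreliable.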
Now for video examples, taken straight from a Panda encoder. Here’s what frame duplication (the bad guy) looks like in action (for better illustration, we slowed the video down after converting the FPS):
While the video on the left is very smooth, the frame duplicated version on the right is jittery. Not great. Now, what happens when we use motion compensation (the good guy):
The movement is smooth and, aside from slight noise, we don’t catch a glimpse of any video artifacts.
There are other types of footage that fool the algorithm more easily. Motion compensation assumes simple, linear movement, so other kinds of image transformations can produce heavier artifacts that may or may not be acceptable, depending on the use case. Occlusions, refractions – you see these in water bubbles – and very quick movements, where too much happens between frames, are the most common examples of image transformations that can produce lower visual quality. Here’s a video full of occlusions and water:
Now let’s slow it down and see frame duplication and motion compensation side-by-side.
Motion compensation produces clear artifacts (those fake electric discharges), but still maintains higher visual quality than frame duplication.
The unanimous verdict of a short survey we ran in our office: motion compensation produces much better imaging than frame duplication.
Google Cloud Platform products like Google Compute Engine allowed us to improve encoding performance by 30%, as well as shift our energy from focusing on underlying infrastructure to innovating for our customers. We’ve also been able to take advantage of sustained use discounts, which have helped lower our infrastructure costs without the need to sign contracts or reserve capacity. Google’s network performance is also a huge asset for us, given that video files are so large and we need to move them frequently. To learn more about how we’re using Cloud Platform, watch our video.
Panda’s excited to be at this year’s NAB Show, one of the world’s largest gatherings of technologists and digital content providers. They’ll be in the StudioXperience area with Filepicker in the South Upper Hall, SU621.