Hacker News
Google releases Anthill to bake VP8 into hardware (cnet.com)
73 points by cma on March 20, 2011 | 23 comments



When I scanned this headline, I first read "bake V8 into hardware" (i.e. the V8 JavaScript Engine). An interesting idea!


Same here. I thought maybe they were moving away from NaCl again and instead trying to make JS so fast that NaCl is no longer needed (since we already have LLVM->JS translation anyway).


While I'm on that topic, an idea I had earlier: maybe V8 could know the LLVM->JS translator's output well enough to identify such JS code and reverse the translation to recover the LLVM IR (this is a bit different from a generic JS->LLVM translator). If that were possible, it would also make NaCl obsolete.
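A toy sketch of just the first step of that idea: before "reversing" anything, V8 would have to recognize that a piece of JS was machine-generated. Everything here is hypothetical -- the `(x|0)` integer coercion is one idiom that LLVM->JS translators such as Emscripten emit, and this heuristic only scores how compiler-like a snippet looks, without doing any actual lifting back to LLVM IR:

```python
import re

# One telltale idiom of LLVM->JS output is the "(x|0)" integer
# coercion, which forces a value to a 32-bit int. This toy
# heuristic counts such coercions relative to statement count.
INT_COERCION = re.compile(r'\(\s*\w+\s*\|\s*0\s*\)')

def looks_machine_translated(js_source, threshold=0.05):
    """Crude heuristic: ratio of (x|0) coercions to statements."""
    statements = max(js_source.count(';'), 1)
    coercions = len(INT_COERCION.findall(js_source))
    return coercions / statements >= threshold

hand_written = "function add(a, b) { return a + b; }"
translated = "function add(a, b) { a = (a|0); b = (b|0); return ((a + b)|0); }"
print(looks_machine_translated(hand_written))  # False
print(looks_machine_translated(translated))    # True
```

A real reverse translator would need to pattern-match the translator's entire output vocabulary this way and map each idiom back to typed IR, which is why it only works if V8 "knows" the specific translator.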


They must be pretty confident that the H.264 patents don't apply to WebM at this point; I'm sure Texas Instruments required some assumption of liability on Google's part.


I don't think there are any "H.264 patents" per se; rather, there are a number of processes used in H.264 that have patents associated with them. It's a subtle difference, but, as an example, US Patent 7,577,304, "Method and device for condensed image recording and reproduction", has some relevance to predictive reference picture selection, which is useful in H.264 as well as VP8.

I've never really understood why something as complex as picture encoding/decoding was not thought to be covered by the patents listed at MPEG LA for the H.264/AVC licensors [1].

It will be interesting to see how this plays out: either Google goes to war with the patent licensors and tries to invalidate a lot of the patents that apply to both H.264 and VP8, pays them a bucket of money to license the patents, or there's some other outcome I can't envision (the patent holders just don't bother going after distributors of VP8 products?).

http://www.mpegla.com/main/programs/avc/Documents/avcCrossRe...


Just because a method for decoding pictures is patented does not mean all methods for doing so are as well.


That was just an example. The AVC Patent Portfolio has hundreds of such patents.

On the Diary of an H.264 Developer [1], the preliminary analysis of intra prediction as implemented in VP8 is that "this is a patent time-bomb waiting to happen" - though there may be prior art to invalidate the relevant H.264 patents.

We'll see; MPEG LA has a call for patents on VP8 out right now.

http://x264dev.multimedia.cx/archives/377#more-377


That section was updated not long after it was published (which must be about a year ago now) with:

"Update: spatial intra prediction apparently dates back to Nokia’s MVC H.26L proposal, from around ~2000. It’s possible that Google believes that this is sufficient prior art to invalidate existing patents — which is not at all unreasonable!"

That's a bit less worrying than your "patent time-bomb waiting to happen" quote from two sentences above.

Also, on the number of patents, which you put at "hundreds": MPEG LA only has 164 non-expired US patents, about 120 of which can be dismissed out of hand because they apply to techniques or technology that WebM simply doesn't use (e.g. CABAC). That leaves only 44 US patents for Google to dodge, work around, invalidate, or come to an agreement with the owners of.


There is not much additional risk for those companies.

If they're already paying MPEG LA fees, then even if VP8 is found to infringe MPEG LA patents, they are covered.

Most of the companies that would entertain adding a VP8 hardware codec already have an H.264 codec, and hence already pay the fees.

The companies that don't pay MPEG LA fees can start paying them.

The important thing to know is that MPEG LA licenses a bundle of patents. They don't care whether those patents are used to implement an H.264 codec or a VP8 codec.

MPEG LA fees are relatively low and not a problem for companies like Texas Instruments.

The people who are really affected by those patents are startups, small companies, open-source developers, etc.


So what's the point of not using H.264?


From the perspective of Texas Instruments?

They get the tech for free, so why not?

More importantly, it keeps MPEG LA in check with regard to raising fees. Monopolies have that tendency.

If VP8 pushes out H.264, TI (and others) can just switch to it and save on licensing fees.

Even if it doesn't, but gains significant popularity, it'll be a bargaining tool when/if MPEG LA tries to raise H.264 licensing fees.

From my perspective?

As someone who might use some sort of video encoding technology in my software, I would rather not pay the licensing fees.

Not to mention that Google is innovating on VP8 faster than all of the H.264 actors combined (see e.g. the constrained quality encoding mode: http://blog.webmproject.org/2011/03/vp8-constrained-quality-...).


Can anyone with some hardware knowledge give us a rundown on what is currently used to decode H.264 in hardware, and how this offering compares? We clearly can't know much before testing actual devices, but is there anything concrete to gather from this announcement?


I'll add, specifically: are FPGAs used? Commonly?

That seems to be where they're going, since Google promises VHDL and Verilog code.

http://www.webmproject.org/hardware/


Google is really doing this to get people to put this in silicon for consumer devices. VHDL/Verilog aren't tied to FPGAs at all - they work equally well for custom silicon.

FPGAs aren't appropriate for high volume - they cost far too much per unit. (Silicon has massive upfront cost, but very little per unit cost.) FPGAs are useful in low volume applications and places where the hardware may need to be modified.

Cell towers were (are?) big users of them because 1) cell specs were changing frequently and the FPGA behavior could be modified, and 2) there weren't enough cell towers to make the front-loaded cost of custom silicon worth it.
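The volume argument above can be made concrete with a quick back-of-the-envelope calculation. All the dollar figures here are invented for illustration; real FPGA prices and silicon NRE costs vary enormously:

```python
# All costs below are made-up illustrative numbers, not real quotes.
FPGA_UNIT_COST = 30.0     # assumed per-unit price of a midrange FPGA
ASIC_NRE = 2_000_000.0    # assumed one-time design/mask cost for custom silicon
ASIC_UNIT_COST = 2.0      # assumed per-unit cost once the masks exist

def total_cost(units, unit_cost, upfront=0.0):
    """Total cost of shipping `units` parts."""
    return upfront + units * unit_cost

# Break-even volume: above this, custom silicon is cheaper overall.
break_even = ASIC_NRE / (FPGA_UNIT_COST - ASIC_UNIT_COST)
print(round(break_even))  # 71429 units with these assumed numbers

# At cell-tower-like volumes (say 10,000 units), the FPGA wins:
print(total_cost(10_000, FPGA_UNIT_COST))            # 300000.0
print(total_cost(10_000, ASIC_UNIT_COST, ASIC_NRE))  # 2020000.0
```

With these assumptions, a consumer phone chip shipping millions of units is firmly in ASIC territory, while a few thousand cell towers never amortize the upfront cost.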


Generally FPGAs aren't used, but there's nothing stopping you from going from HDL to an ASIC.


Here's an overview of TI's OMAP SoC: http://www.omappedia.org/wiki/Ducati_For_Dummies#Ducati_Subs...

Though, as mentioned, there is always the challenge of die space; making the RTL available is a smart step. You could imagine synthesizing this into the IVA-HD block in OMAP.


The uphill climb will be getting 3rd party vendors to actually put this in their chips.

Seeing as WebM/VP8 is really only supported on Android, and won't be supported on other platforms that use these chips (WP7, Chinese Linux phones), it's a gamble to get it into mainstream chips. Space on a die is extremely valuable - for example, nVidia left the NEON FPU out of their implementation of the ARM A9 in the Tegra 2, to the detriment of quite a few workloads, mainly to shrink the chip to maintain high volume and lower cost.

I'm going to guess it'll be at least a year or two before we see phones with this hardware encoder. The big issue is the network effect: this is designed for videoconferencing, so both ends have to have the hardware encoder for it not to drain battery life.

I'm thinking this will be a very slow, very long uptake, especially when H.264 implementations have been in the wild for years now. Ideally, someone will come along with a combined dual-codec VP8/H.264 encoding block (the formats share a lot of internal functionality) that uses less die space than two separate blocks, and software vendors will just get to choose which format to support.


> Seeing as WebM/VP8 is really only supported on Android, and won't be supported on other platforms that use these chips (WP7, Chinese Linux phones), it's a gamble to get it into mainstream chips. Space on a die is extremely valuable - for example, nVidia left the NEON FPU out of their implementation of the ARM A9 in the Tegra 2, to the detriment of quite a few workloads, mainly to shrink the chip to maintain high volume and lower cost.

Luckily, Android is the most popular smartphone platform. WP7 and unnamed "Chinese Linux phones" aren't market makers like Android is.


"software vendors will just get to choose which format to support."

Can they have both?


Please note that this is only an encoder, not a decoder, and therefore not what most seem to expect from the announcement.


They released a hardware decoder a while back. It's been licensed by some chip makers, though I haven't heard which ones.

http://www.webmproject.org/hardware/


"only" an encoder? It is quite big to have hardware encoding because that is where the bottleneck for real-time encoding is. Hence, if you were to choose between the encoder or decoder path, you would definitely focus on the encoder path first.

But it does look like VP8 now has both paths, which is kind of nice. If Google plays this well, they'll give people every reason to include a VP8 hardware decoder in new devices. I don't really buy the space-is-important argument, as Moore's law still holds for transistor counts.


How do you get that?

"Google yesterday released a version of its VP8 video encoder and decoder designed to be baked into hardware."



