
I think the companies backing it, like Google and Netflix, don't care about that. They need hardware decoding support in phones and TVs, and then they will serve AV1 from their platforms and save a lot of money. It might become the dominant codec without you even noticing it.



Sure, but I meant that hardware support only gets implemented when vendors are forced to support something -- such as user-downloaded files.

Netflix will serve AV1, H.265, and H.264, so my TV doesn't have to implement AV1 support.
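
A minimal sketch (Python, with illustrative codec identifiers and preference order; real services negotiate this via DASH/HLS manifests) of what that looks like: the service offers several encodings of the same title, and the device picks the best one it can actually decode.

    AVAILABLE = ["av01", "hvc1", "avc1"]   # what the service offers
    PREFERENCE = ["av01", "hvc1", "avc1"]  # best compression first

    def pick_stream(device_decoders):
        # Return the most efficient codec both sides support.
        for codec in PREFERENCE:
            if codec in AVAILABLE and codec in device_decoders:
                return codec
        raise RuntimeError("no playable stream")

    print(pick_stream({"hvc1", "avc1"}))   # older TV -> 'hvc1'
    print(pick_stream({"av01", "avc1"}))   # newer device -> 'av01'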


HW vendors have already implemented AV1, or are implementing it now, because they were part of the big consortium and the industry is a bit more grown up now.

AV1 is a big compromise between a bunch of vendors. It is definitely the future; the powers that be have already decreed this, but these things move slowly.


Couldn’t they reuse the tensor cores that are shipped in every device at this point? There are already lots of papers on compressing images using deep learning; I don’t see any reason why the companies couldn’t make a video standard that relies on that hardware.
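
For a sense of what those papers do, here is a toy sketch, assuming PyTorch is available (my own minimal example, not any published model): an encoder squeezes the frame into a small latent, which gets quantized; a decoder reconstructs the frame. Real learned codecs add a learned entropy model on top of this.

    import torch
    import torch.nn as nn

    class TinyImageCodec(nn.Module):
        def __init__(self, latent_channels=32):
            super().__init__()
            # Encoder: downsample 4x per dimension into a compact latent.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(64, latent_channels, 5, stride=2, padding=2),
            )
            # Decoder: mirror the encoder with transposed convolutions.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(latent_channels, 64, 5, stride=2,
                                   padding=2, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 5, stride=2,
                                   padding=2, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            latent = self.encoder(x)
            # Hard rounding stands in for quantization; training would use
            # a differentiable surrogate (e.g. additive uniform noise).
            quantized = torch.round(latent)
            return self.decoder(quantized)

    model = TinyImageCodec()
    frame = torch.rand(1, 3, 256, 256)   # fake RGB frame in [0, 1]
    print(model(frame).shape)            # torch.Size([1, 3, 256, 256])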


Having a hardware encoder and decoder on a device is super useful for streaming content off that device. Not sure I would want to use other compute for that; that compute is much better used doing CV on the video stream :)


Why do you think so? Those tensor processors are actually already optimized for video processing: all of the complex postprocessing in the iPhone camera app is done by the tensor cores inside the M1 chip. I wouldn't be surprised if it could already far outperform the mentioned codecs, but of course that needs lots of software development that can only be done by the big companies.


A codec is static, changing very little over a decade. This allows you to implement it as single-purpose hardware, which is orders of magnitude more efficient and faster than code running on a multipurpose chip, tensor or not.

For things that evolve fast, like deep learning, a programmable chip is the right choice.
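
You can see that split from the software side, too: the fixed-function decoder isn't programmed, it's discovered and driven through a stable API. A minimal sketch, assuming the ffmpeg CLI is installed, that just lists which hardware decode backends the build knows about:

    import subprocess

    def available_hwaccels():
        # Ask ffmpeg for its built-in hardware acceleration backends
        # (vaapi, videotoolbox, ...); names vary by platform.
        out = subprocess.run(
            ["ffmpeg", "-hide_banner", "-hwaccels"],
            capture_output=True, text=True, check=True,
        ).stdout
        # First line is a header; the rest are backend names.
        return [l.strip() for l in out.splitlines()[1:] if l.strip()]

    print(available_hwaccels())   # e.g. ['videotoolbox'] on a Mac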


The iPhone doesn't use the M1 yet. Besides, post-processing a video is one thing; encoding is completely different. What Apple does with the neural processing is most likely analysis of the content, not the "editing".


In something like a mobile device, every watt counts. If it takes more energy to decode video on the tensor cores than it does to have a dedicated hardware block, you keep the hardware video decoder.
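
A back-of-envelope version of that argument, with assumed numbers (illustrative, not measurements):

    # All figures are assumptions: a fixed-function decode block drawing
    # ~50 mW vs. ~2 W to keep general-purpose tensor cores busy.
    DEDICATED_W = 0.05   # assumed power of a hardware decode block
    TENSOR_W = 2.0       # assumed power of decoding on tensor cores
    BATTERY_WH = 12.0    # assumed phone battery capacity (~12 Wh)

    for name, watts in [("dedicated block", DEDICATED_W),
                        ("tensor cores", TENSOR_W)]:
        print(f"{name}: ~{BATTERY_WH / watts:.0f} h of decode per charge")
    # dedicated block: ~240 h; tensor cores: ~6 h (for the decode alone)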



