Hacker News

I guess one can break videos into 200-frame chunks and process them independently of each other.



Not if there isn't coherence between those chunks.


Easily solved: just overlap by ~40 frames and fade the upscaled last frames of chunk A into the start of chunk B before processing. Editors do tricks like this all the time.
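Roughly this, sketched with numpy arrays of frames (the `crossfade_chunks` helper and the toy shapes are made up for illustration; a real pipeline would operate on decoded video frames):

```python
import numpy as np

def crossfade_chunks(chunk_a, chunk_b, overlap):
    """Blend the last `overlap` frames of chunk_a into the first
    `overlap` frames of chunk_b with a linear crossfade."""
    # fade weights go from 1.0 (all chunk_a) down to 0.0 (all chunk_b)
    weights = np.linspace(1.0, 0.0, overlap)[:, None, None, None]
    blended = weights * chunk_a[-overlap:] + (1 - weights) * chunk_b[:overlap]
    # chunk_a minus its tail, then the blended region, then the rest of chunk_b
    return np.concatenate([chunk_a[:-overlap], blended, chunk_b[overlap:]])

# toy example: two 100-frame "chunks" of 8x8 grayscale, overlapping by 40
a = np.zeros((100, 8, 8, 1))
b = np.ones((100, 8, 8, 1))
out = crossfade_chunks(a, b, overlap=40)
```

The output is 160 frames: the 40-frame overlap is consumed by the fade, and pixel values ramp smoothly from chunk A's to chunk B's across it.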


Decent editors may try that once, but they will give up right away because it will only work by coincidence.


There has to be a way to do it intelligently in chunks and reduce noise along the chunk borders.

Moreover, I imagine that further research and processing power will make this a lot smarter and quicker.

Don't forget people had Toy Story-comparable games within a decade or so of it originally being rendered at 1536x922.


Or upscale every 4th frame for consistency; upscaling the in-between frames should be much easier.
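A crude sketch of that idea, filling in-between frames by linear blending between upscaled keyframes (the helper name and shapes are made up, and a real interpolator would use motion compensation rather than a plain blend):

```python
import numpy as np

def interpolate_between_keyframes(keyframes, step=4):
    """Given upscaled keyframes sampled every `step` frames, fill the
    in-between frames by linear interpolation between neighbors."""
    frames = []
    for k0, k1 in zip(keyframes[:-1], keyframes[1:]):
        for i in range(step):
            t = i / step  # 0 at the keyframe, approaching 1 at the next
            frames.append((1 - t) * k0 + t * k1)
    frames.append(keyframes[-1])
    return np.stack(frames)

# toy example: three 4x4 keyframes with values 0, 4, 8 -> 9 output frames
keys = np.stack([np.full((4, 4), v, dtype=float) for v in [0.0, 4.0, 8.0]])
out = interpolate_between_keyframes(keys, step=4)
```

Here the in-between frames ramp evenly between keyframe values, which is the sense in which consistency is anchored by the (harder) keyframe upscales.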


And now you end up with 40 blurred frames for each transition.


'before processing'


At 30fps, which is not high, 200 frames would mean chunks of less than 7 seconds (200/30 ≈ 6.7s). Doable, but highly impractical to say the least.


It's not so much that it would be impractical (video streaming, like HLS or MPEG-DASH, requires chunking videos into pieces of roughly this size), but you'd lose inter-frame consistency at segment boundaries, and I suspect the resulting video would flicker at the transitions.

It could work for TV or movies if done properly at scene transitions, though.


7s is pretty alright; I've seen HLS chunks of 6 seconds, which is pretty common I think.


6s was adopted as the "standard" by Apple [0].

For live streaming it's pretty common to see 2 or 3 seconds (reduces broadcast delay, but with some caveats).

0: https://dev.to/100mslive/introduction-to-low-latency-streami...


You will probably need some overlap in time to get the state space to match enough to minimize flicker between fragments.


You could probably mitigate this by using overlapping clips and fading between them. Pretty crude, but it could be close to unnoticeable, depending on how unstable the technique actually is.


Perhaps a second pass that focuses on smoothing out the frames where the clips are joined.


Maybe they could do a lower framerate and then use a different AI tool to interpolate something smoother.



