The Hindenburg disaster denoised, upscaled, and colorized using ML [video] (youtube.com)
192 points by DamnInteresting on Oct 16, 2021 | 109 comments



I don't know if this is a breakthrough but... Frankly this looks awful.

I think I could handle unrealistic colors but the way they flicker so much frame to frame is really jarring.

It's interesting that the algorithm seems to generate chromatic aberration at hard edges? Most clearly around the letters on the title cards.


The de-noising and super-resolution looks quite good to me. It's the colorization that's super unstable and looks ugly. If they'd just left it B&W it would IMHO be more impressive.


I kept waiting for it to get better than a light brown haze with flashes of green... but it never did. The fire looked okay, but even that looked weird at the beginning of the sequence.

What really made me laugh was the blue smoke followed by a headline implying that was all at night. The AI filled in pristine blue skies and inverted the wreckage colors. It actually made up history. I'm calling it: Unintentional automated disinformation.


The original video was interlaced. The deinterlacing part of the algorithm is not very good, and I think they would have gotten better results with a special-purpose pass before handing off to the neural network.


Or using a better source video. As there's no way the original film was interlaced.


I'm now imagining a low-paid 1930s film job called "interlacer" which consisted of taking every frame and drawing tiny, perfectly straight black lines on it, with a tiny fineliner and a tiny ruler, on odd and even rows, every next frame.


I know of a company that did the opposite to handle 2:3 pulldown for rotoscoping. They had a Photoshop action that would select every other line, copy & paste it to a new doc, collapse the empty space, and then paint the frame. Then they'd do that a second time for the other half. Finally, more actions to recombine.

I couldn't make this up. My jaw just hit the floor when it was explained to me the first time. I still shake my head typing it up to post here.
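
For anyone who hasn't handled interlaced material: each interlaced frame stores two fields on alternating scan lines, so that whole Photoshop dance amounts to two strided slices. A minimal numpy sketch:

    import numpy as np

    def split_fields(frame):
        # frame: (H, W, 3) interlaced frame; the two fields live on
        # alternating rows, so each is just a half-height strided slice.
        top = frame[0::2]      # even rows: top field
        bottom = frame[1::2]   # odd rows: bottom field
        return top, bottom

    def recombine(top, bottom):
        # Inverse of split_fields: re-interleave the rows.
        out = np.empty((top.shape[0] * 2,) + top.shape[1:], dtype=top.dtype)
        out[0::2], out[1::2] = top, bottom
        return out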


I had to look this up: https://en.m.wikipedia.org/wiki/Three-two_pull_down

What tool should have been used to do that conversion (in reverse)? To their credit, it sounds like they solved the problem and moved on.


They solved the problem by increasing the work unnecessarily. To explain the issue in more detail, so people don't have to jump to Wikipedia: 2:3 pulldown is how 24 fps film was converted to 29.97 fps interlaced video. This increases the number of frames by roughly 25%, and it does so by repeating fields of the original progressive data. When you step through such 29.97 video frame by frame, you see a repeating pattern of 3 progressive frames followed by 2 interlaced frames. To run this asinine made-up workflow, they then increased the number of frames yet again. Instead, they should have done an inverse telecine (IVTC), which reverses the 2:3 pulldown and returns 24 fps progressive frames. When rotoscoping, you definitely do not want to be creating additional frames "just because". A small amount of logic should have suggested that these interlaced frames were odd, since the original source had nothing but clean progressive frames. If the interlacing was introduced in the conversion to video, surely it can be undone.

They, like you, decided it was acceptable to do whatever needed to be done at whatever expense rather than taking 10 minutes to find the proper workflow, which would have saved them money. A simple phone call to the company they used to transfer the film to video would have explained how to do this in less than 5 minutes. I know because I worked for the company doing the film transfer and had helped several other clients with video-for-film post workflows. Today it's even easier, because there are a bazillion write-ups on how to do this posted on the web.

Edit: I didn't answer the question directly after pontificating. IVTC is the process that needed to be applied. Many, many tools exist(ed) for this. The tools at this company's disposal would have been After Effects, Avid, etc. Later, more tools became available, like AviSynth and FFmpeg, and dedicated tools were created to tackle this directly.
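
For the curious, the FFmpeg incarnation of that IVTC pass is nowadays a one-liner filter chain; a sketch via Python's subprocess, with hypothetical filenames, assuming a clean, constant 2:3 cadence:

    import subprocess

    # fieldmatch reconstructs progressive frames from the 2:3 field pattern,
    # yadif deinterlaces only the frames fieldmatch couldn't match, and
    # decimate drops the duplicated frame, returning ~23.976 fps progressive.
    subprocess.run([
        "ffmpeg", "-i", "telecined_2997.mov",
        "-vf", "fieldmatch,yadif=deint=interlaced,decimate",
        "progressive_23976.mov",
    ], check=True)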


A flow like this could have been someone's job security.


Analogue video (e.g. television) interlacing did actually begin in the 1930s, a few years before the Hindenburg disaster. But the original disaster footage would have been shot on film (not interlaced of course), cut into newsreels and then converted to a television signal via telecine.

Supposedly the original newsreels still exist, preserved by the National Film Registry: https://en.wikipedia.org/wiki/Hindenburg_disaster_newsreel_f...


Original film may have flickered, or the playback process to capture may have induced flicker.


What is missing is object continuity with respect to color. That would quiet things down tremendously. Right now it is as if every object gets re-painted from one frame to the next in a completely new (and often garish or wildly incorrect) color.


I believe there already is quite some continuity, otherwise the colors would flicker much more strongly from frame to frame. In the video they vary smoothly from frame to frame.


There are many instances of frame-to-frame discontinuity that I can't explain other than by a lack of object detection and labeling. It would be less wrong to use the color from the previous frame even if the lighting changes than to use an entirely different hue for the same object.

Only things like TV screens and other displays (and some interesting objects covered with micro-surfaces that cause light interference) can change color that rapidly given the same color of incident light.
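
For what it's worth, the crudest form of that object-level continuity doesn't even need labels: warp the previous frame's colors into the current frame with dense optical flow and blend or penalize against them. A minimal OpenCV sketch (no occlusion handling, purely illustrative):

    import cv2
    import numpy as np

    def propagate_color(prev_bgr, prev_gray, cur_gray):
        # Flow from the current frame back to the previous one, so for each
        # current pixel we know where its color lived in the previous frame.
        flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = cur_gray.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        map_x = xs + flow[..., 0]
        map_y = ys + flow[..., 1]
        # Warp the previous frame's colors along the flow into alignment
        # with the current frame; blend this against the fresh colorization.
        return cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)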


I think they get that not by putting in such a constraint, but only because subsequent frames are similar, which makes the coloring algorithm pick similar colors.


Looks like the AI can handle objects it "knows" (people, grass, sky, etc.) OK-ish, but is completely confounded by the Zeppelin, which is sad, as the Hindenburg is of course present in most of the shots. Maybe they should have trained it with some color(ized) photos of the Hindenburg (like this one: https://www.alamy.de/das-deutsche-zeppelin-luftschiff-die-hi...) first?


This is where a human videographer could trivially do better or add value. The obsession with "automating everything" is a real disease in ML and CS generally. Sometimes it makes things worse! This is one of those situations.


Only thing that looked good was the closeup of the captain. That one short segment was very well done.


For an early alpha version, it is ok-ish. For anything but that, it is terrible.


I had to stop 60% of the way through. It was giving me a headache.

This video proves to me that you can't do denoise, upscale, and color without stabilizing first. The result is too jarring once image stabilization is a known transform.


There are also certain frames where the smoke turns into green trees


People complaining about the colors shifting don't realize it pulsated red in time to the phat beats it was dropping.


Considering it is entirely automated, this is extremely good.

I'm not at all against ML restoration. We paid artists to color B&W film in the early days; this just replaces the artist with a machine.

Judging from the video, it looks like no inter-frame relation is considered, so the color varies wildly from one frame to the next. The video still lacks some form of stabilization, and frames still have defects inherited from the film.

Also, it clearly was recorded at different speeds, so people have an uncanny walk in the last scenes.

I hope, in the not too distant future, these flaws will be taken care of and we'll see restorations that are very hard to differentiate from original footage.


I couldn't possibly disagree more. This is nothing more than the digital equivalent of the monkey Jesus restoration: https://en.wikipedia.org/wiki/Ecce_Homo_(Mart%C3%ADnez_and_G... It's extremely disrespectful. The colours are shifting all over the place. At times it's no better than overlaying a random gradient over the original.

> We paid artists to color B&W film in the early days; this just replaces the artist with a machine.

And some people are very upset over those. (Have you ever actually seen classic movies like "It’s a Wonderful Life" in colour?) But this ML nonsense doesn't even hold a candle to those. In the manual process, for every object, they pick a colour and stick with it.


How is this disrespectful if it's just a demo of the tech progress?

Are painting artists not allowed to train by repainting Picassos until they get good?


The original medium contains information collected at the time, with the technology available at the time. As such, it represents a time capsule that includes much more context than we can possibly realize; especially analog filmstock and audio. "Upscaling" and "autocolorizing" is injecting inferred (and often wrong) context from today using today's technology and context, tainting, corrupting (IMHO), and again, IMHO, ruining what the original artifact represents.

In our current technological, digital zeitgeist, anything that seems higher-resolution is treated as universally better and as something that must absolutely replace what was there before. But this is just a shifting fad, unrecognizable to the people who were alive then, and probably unrecognizable to people in the future.

Preserve, conserve, protect and pass on. Don't corrupt and "better" these things.


Sure, you can train all you want, but showing off something that is inferior to what can already be done is just embarrassing. At least it should be. There's a difference between showing off progress to your parents or investors or whatever and making a public post that screams "look what I did" about something clearly inferior. It's sad, really.


Interesting. This is a common consumer view. Since pure consumers aren't acquainted with anything but the best in the field, they demand very high quality of anything they witness. That makes sense.

At least part time content creators know that creation is harder - because that first step to actually doing something is mentally hard (pure consumers find it impossible). People will make crappy things before they make good things and they'll share them with each other.

Perhaps the problem is when pure consumers, seeking further stimulus, enter creator spaces. Or where part-creators accidentally expand the audience for their fellow creators into pure consumer spaces or more-consumer-than-creator spaces like HN.


If you are attempting to make something from scratch that has a competitor on the market now, why would you even think someone would use your product when it is so clearly inferior to other offerings? Hope to attract business by being the cheaper option? That just means you're going to get the clients nobody wants.

There's a difference between showing your friends and family something and opening it up to the entire public; posting to YouTube and asking "tell me what you think" means you must be pretty proud of it. If you were proud of the results from this, then just wow.

In full disclosure, I've spent many an hour in the chair restoring film and video. I've written many a tool for this endeavor for internal use, as well as used professional tools. I have beta-tested software that later went to production release. I have sent content out to outsource the work and critically evaluated the results. If this were the result someone sent me after suggesting they could do the work, I would never send them the work, and I'd vocally tell others not to waste their time.

This isn't even good enough for converting someone's 8mm footage for viewing on modern devices, let alone professional work.


It’s put up to share, man.

You’re writing this comment of yours and posting it publicly. Are you so incredibly proud of it? It’s riddled with typos and wouldn’t get you a passing grade in middle school.

That’s what I would say if I were being overly critical of someone sharing their opinion. It really doesn’t need this degree of hyperbolic value judgment. If it’s such rubbish it will easily be outcompeted for attention.


Presumably, because they're training on a video of the Hindenburg disaster.


It's not like the original video is full of 'tact'.

> Suddenly - The fatal moment!


Lack of temporal consistency for colors is absolutely horrible.

I think it indicates that the problem posed to ML was defined wrongly.


Exactly. The upscaling is good, but any human intern would have fixed the color for free; even choosing a random one would have produced a result closer to expectations...


Maybe then we'll finally get an HD release of Deep Space 9.


Evidently all efforts of the following HN member to improve consumer-grade DS9 video have reached an end:

https://news.ycombinator.com/item?id=24417255

The studio itself won't commission the necessary professional work for an HD (or at least upscaled) re-release.


He came back to it :).

> Update (7/17/2021): The article below has been superseded by the results discussed in “Far Beyond the Stars” and the accompanying tutorial, “How to Upscale Star Trek: Deep Space Nine.” These articles are the latest that I’ve published and the best showcase for my latest work.

[1] https://www.extremetech.com/extreme/323905-far-beyond-the-st...

[2] https://www.extremetech.com/extreme/324466-tutorial-how-to-u...


The article you linked in that thread had an update from the same person.

https://www.extremetech.com/extreme/324466-tutorial-how-to-u...


It really isn't very good. It's worse than you'd get from low-cost hand retouching outsourced to a low-labor-cost geography, as was done in the 80s.


>as was done in the 80s.

as is done to this day. FTFY


>> The video still lacks some form of stabilization.

Yeah, I wonder why. Once you are able to create colors, I'd think stabilizing the picture must be really easy. And since we're doing fancy interpretation (yeah, I say "interpretation" 'cos ML can't know the exact colors, just see the craft turning red and blue all the time), it'd be nice to go all the way with it: color, stabilization, speed fixing, the whole thing.

Besides, how do we know how close or far such "interpretation" is from reality? What sort of validity testing is done?
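
One sanity check that comes to mind (my suggestion, not anything the video's authors describe) is round-tripping footage whose true colors are known: desaturate a color film, run the colorizer, and score the result against the original. A sketch, where colorize() stands in for whatever model is under test:

    import cv2
    import numpy as np

    def psnr(a, b):
        # Peak signal-to-noise ratio between a colorized frame and the
        # known ground-truth color frame (8-bit images).
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10 * np.log10(255.0 ** 2 / mse)

    # Hypothetical harness:
    # gray = cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)
    # score = psnr(colorize(gray), color_frame)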


The channel Rick88888888 has a ton of restored movies: https://youtube.com/user/Rick88888888


I'd sooner call it vandalism. Restoration is what's shown in "They Shall Not Grow Old" where they paid attention to detail and approached it with respect.

Sharp edges and shifting red-blue colours is not what I would call restoration.


It’s not vandalism unless the originals have been damaged. If someone makes a digital copy and goofs around with it I really don’t see any harm being done.


It's getting sadder and sadder to see people enjoying these fake restorations. Eventually we'll have so much of a mess that the original source will be forgotten or lost.


Hahaha, that's their creative touch so they can claim copyright on this for the next 70 years


In this particular case I think there is the extra complication that the ship skin was made of a special material that likely does not burn with the usual wood fire colors. Combined with the hydrogen on the inside I actually have no idea what color the fire would have been - the most accurate way to restore this would probably have been with an actual eyewitness in the loop, or by restoring a section of the ship including the flammable paint.


The crazy, crazy fact:

Fatalities: 36 (13 passengers, 22 crewmen, 1 bystander)
Survivors: 62 (23 passengers, 39 crewmen)

Looking at the video, you certainly would not expect two thirds of the people on board to survive the hellish fire.


Apparently, most of the passengers were along the window bay in preparation for landing. They were able to hop out of the windows as the guest compartment came within a safe distance of the ground, and before hot wreckage could fall on them.

The three things that saved them were: A) an airship, even one leaking and on fire, falls slowly enough to safely evacuate; B) the majority of fuel sources were above the crew and guests; and C) most of the guests had an easy exit.


About half of the guests had an easy exit; the other half were on the other side of the airship, where the door had become jammed. Most of the people on that side perished in the flames.


Indeed: a little over half a minute from the start of the fire to the total destruction of the airframe. It is amazing that there were that many survivors. It's important to note that the crew quarters and the airframe itself were two separate components, but that the passenger area was embedded in the fuselage.

Even more miraculous, once you have seen that footage, is realizing that some of the passengers and crew walked away without major injury.


It was mostly hydrogen, so most of the energy was dissipated in the first few seconds. Most of the other materials either do not burn well or have very low thermal capacity. The ship was not at its highest altitude when the burning started, and drag slowed the fall to much lower speeds than a free-falling person.

I'd estimate that most people who died were trapped or unable to move. Those who were free to run and had enough "air" were very likely to survive.


Also, hydrogen being much lighter than air, a lot of the burning and associated heat was above the zeppelin.

Slightly related question: this film talks about “white hot steel”. It wasn’t steel (https://en.wikipedia.org/wiki/Duralumin#Aviation_application...). Was it white hot?


Heat rises, and the passengers were all below the fire. I suspect many of the deaths were from being crushed as it fell on them.


I’ve read that most fatalities were people jumping out, and those who stayed inside until the burning airship touched down mostly walked away.

Edit: that’s not quite correct, Wikipedia has a better summary https://en.wikipedia.org/wiki/Hindenburg_disaster


Hydrogen does not burn energetically, especially before it has been mixed with oxygen. The tragedy was caused by the coating used on the exterior which was essentially solid rocket fuel.


That theory has been largely discredited. There weren't nearly enough reactants in the fabric to create a large thermite reaction. An oxyhydrogen explosion is the most likely explanation: there was probably a leak prior to the explosion, which allowed air to mix with hydrogen inside and surrounding some of the gas bladders. Also, a walkway ran through the bladders and acted as an oxygen source once the bladders burst, as evidenced by flames being directed through the axial walkway.

https://en.wikipedia.org/wiki/Hindenburg_disaster#Incendiary...


> Hydrogen does not burn energetically, especially before it has been mixed with oxygen.

Drop the word "especially" and I might agree, but a mere party balloon of hydrogen mixed with oxygen in a stoichiometric ratio going off will sound like a rifle and rattle windows.


When I was young, I volunteered at a science museum that, as part of its chemistry show, would fill a balloon with hydrogen and ignite it with a Tesla coil. One time, we (~15-year-old kids) convinced the person giving the show (maybe a college-age kid?) to also mix O2 into the balloon, instead of only H2.

The boom was so loud that management folks on the other floors sent people down to see WTF was happening. I don't think he got into (much) trouble, but he sure never did that again.


I don't know why this theory is repeated as though it's fact. The explanation that a lifting-gas bag ruptured, mixing its contents with the air, seems a better fit to the witness accounts.


New footage shows the tail-fins catch fire before the gas-bladders erupted. Those tail-fins must, therefore, have been quite flammable. The video suggests that the trigger for the fire was the release of a large build-up of static electricity as the ship approached the ground. An airship is essentially an oversized Leyden jar, and such a build-up would cause problems even for ones using helium for lift.

https://www.youtube.com/watch?v=UFCgipjR2ow


The Hindenburg was also reportedly quite tail-heavy coming in for landing, which suggests a leak. If it was leaking at the tail and this was triggered by static discharge, that would fit most of the facts.


> New footage shows the tail-fins catch fire before the gas-bladders erupted.

In the film you link the entire rear half of the craft is on fire from the first frame.


I've noticed this happens quite often with accidental explosions. Explosions are not as lethal as you might expect, unless someone deliberately ensured they would cause a lot of damage.


It was a design flaw. The flames were able to travel quickly through the central passage which passed through the center of the gas-bladders. The footage shows flames venting from the nose of the airship.


Was everyone in the zeppelin when it caught fire?


Yes. It hadn't landed yet.


One of the deaths was a ground crewman, Allen Hagaman.


I've really come to enjoy these sorts of upscaled videos that allow a view of various cities in the early 1900's.

New York: https://www.youtube.com/watch?v=85WMpJMv8aA

San Francisco: https://www.youtube.com/watch?v=VO_1AdYRGW8

Paris: https://www.youtube.com/watch?v=JOPaxhhgyd8

Wuppertal, Germany: https://www.youtube.com/watch?v=EQs5VxNPhzk


This one of Berlin, though not as early as the ones you posted, is the most spectacular one I've found:

https://m.youtube.com/watch?v=_YSDgruADlE

This gives a much better view into what everyday life must have been like at that time, it almost reminds me in some ways of movies such as Koyaanisqatsi.


This is absolutely astonishing. The colorization isn't great, but you can tell it will get much, much better as the algorithms improve.

The stabilization, however, is ... well, astonishing. There is zero visible judder except in a very few places.


In the early 90's I was working in early streamed digital video. One of the projects was a history of aviation documentary. The original negatives of early flight, including the original Wright Brothers flights are lost. But in many cases paper print outs of them remain. Before copyright was extended to film, to get a copyright a film had to be printed to paper. These paper print outs are all that remain. But this being the early 90's, there was no machine learning nor a formal field of computer vision as we know it now. We painstakingly scanned the thousands of feet of printed film, and reconstructed them as CLUT-127 (color look up table) animations. I seem to remember that project got streaming media awards of some sort. It was a long time ago.


Me too; it's really interesting to see, and somehow easier to connect with when the quality is better, compared to the black-and-white versions with strange frame rates.

Here is another one:

Barcelona, Spain (1911): https://www.youtube.com/watch?v=P-3NlMAdq9I


I agree. Correcting the frame rate jitter and also motion stabilization (in addition to upscaling the resolution), really does result in a much more engaging view into the past.

I'm perfectly happy without colorization, but also willing to overlook it.


Groningen, The Netherlands : https://www.youtube.com/watch?v=ogsTc9JLfWY


This source material is very well done, given its age.


That is truly interesting, but I also find it quite depressing. There's almost not a tree in sight. I'm glad that I did not live in such a place.

Edit: The one from the Netherlands is lovely though, and an example of how life in our cities could be without cars.


The city council started pushing cars outwards from the late 70s onwards and the city is more or less back to the situation at the time of the video.


A beautiful animation of the ship's design and the disaster:

https://www.youtube.com/watch?v=VJy17qZmhjE


The temporal color jitter is pretty bad, but the other processing seemed to crisp it up. Based on my watching Two Minute Papers, I have to guess they can do better on the temporal consistency of the color.


Two more papers down the line, I'm sure they'll be able to fix that.


What a time to be alive!


Károly Zsolnai-Fehér is just so darn charismatic. That has to be one of the reasons Two Minute Papers[0] has >1M subs. Is there anyone in the space even close to that level of engagement?

[0] https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg


My guess is that they are doing this frame-by-frame with an NN built for images, and that is why there is so much jitter.

Providing actual video as input to the NN could probably do away with this.
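
If that's the case, a cheap first step is to hand the network a small temporal window instead of one frame, so it can see local motion. A PyTorch-flavored sketch (the colorizer model itself is hypothetical, assumed to take 3 input channels):

    import torch

    def windowed_input(frames, t):
        # frames: (T, 1, H, W) grayscale video. Stack frames t-1, t, t+1
        # as three input channels so the colorizer sees local motion.
        return torch.cat([frames[t - 1], frames[t], frames[t + 1]],
                         dim=0).unsqueeze(0)

    # colorized_t = model(windowed_input(frames, t))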


I never noticed until now there were big honkin swastikas on the tail.


I did notice them as well. I wonder if the original video had already been edited so that the swastikas were (in my opinion) mostly clipped out, or if this is a more recent edit...


I remember swastikas in the 1970s. I don't think they were ever edited out.


I think it would be cool if you could have some kind of combo human+ML system, where you use ML to actually detect the different objects and track them between frames, and a human can choose what color to use for each one. In a couple shots at least the Hindenburg showed as fully red instead of silver, and I don't think it is always possible for an algorithm to actually know the correct color without external data. If the algorithm could just say "I've identified this object between frames", and then the human can choose the correct color, could be best of both worlds.

Or maybe you could feed in some reference photos with the same objects, but already colorized. And then the algorithm could match objects from the reference photos to those in the black and white ones to get the colors.
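
That combo could be quite simple in principle: the tracker/segmenter emits per-object masks, a human keeps a palette keyed by object id, and the paint step modulates the chosen hue by the original brightness so shading survives while the hue stays fixed across frames. A toy sketch (palette values and masks are hypothetical):

    import numpy as np

    # Human-chosen colors per tracked object id (BGR, purely illustrative).
    PALETTE = {1: (192, 192, 192),   # Hindenburg hull: silver
               2: (60, 130, 60)}    # grass

    def paint(gray, masks):
        # gray: (H, W) uint8 luminance; masks: dict object_id -> (H, W) bool.
        out = np.stack([gray] * 3, axis=-1).astype(np.float32)
        for obj_id, mask in masks.items():
            color = np.asarray(PALETTE[obj_id], dtype=np.float32)
            # Modulate the fixed hue by the original per-pixel brightness.
            out[mask] = color * (gray[mask][:, None] / 255.0)
        return out.astype(np.uint8)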


It’s interesting that the color of the Hindenburg itself is always shifting. The object is so out of sample that the ML colorizer has no idea what to do with it.


I was half-expecting it to turn into the colour of a manatee.


Although visually it appears to be high resolution, it feels like an illusion; my brain still receives no additional information vs. the original. Which is odd, because I can look at individual elements and see more precision.


This sent me down a zeppelin Wikihole and led me to a much greater appreciation of the engineering and operational complexity of these airships. Thanks!


Please add "(very poorly)" to the title.


They should add a regularizing term between frames, like a simple L2 loss between output pixel values, to stabilize the color.
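
Something along these lines, presumably; a minimal PyTorch sketch of such a penalty. A real version would warp neighboring frames with optical flow first, so genuine motion isn't penalized as flicker:

    import torch

    def temporal_l2(colorized):
        # colorized: (T, C, H, W) network outputs for T consecutive frames.
        # Penalizes frame-to-frame change in the output pixel values.
        diff = colorized[1:] - colorized[:-1]
        return diff.pow(2).mean()

    # total_loss = colorization_loss + lambda_t * temporal_l2(outputs)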


And stabilize the frames; that's an early assignment in many computer vision courses. There's no need for the title cards and video images to be so shaky and stuttery.
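
The textbook version of that assignment: track features from one frame into the next, fit a similarity transform, and warp the frame back. An OpenCV sketch (smoothing of the camera trajectory omitted):

    import cv2

    def stabilize_pair(prev_gray, cur_gray, cur_bgr):
        # Find trackable corners in the previous frame and follow them
        # into the current frame with pyramidal Lucas-Kanade flow.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=30)
        pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                      pts_prev, None)
        good = status.ravel() == 1
        # Fit rotation + translation + uniform scale, then undo the motion.
        m, _ = cv2.estimateAffinePartial2D(pts_cur[good], pts_prev[good])
        h, w = cur_gray.shape
        return cv2.warpAffine(cur_bgr, m, (w, h))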


Err...WOW! That was fast...

And I was unaware that Manhattan had so many skyscrapers at the time. Or maybe not. But one can glimpse that for a few seconds in the video. Awesome.


Yeah it's wild how modern Manhattan looks in the background. I've seen pictures of course but in color/motion it is really striking.


Another way of looking at that is how little change a typical city will undergo once it is laid out. This goes for Rome and Amsterdam just as it does for NYC.

I was away from Amsterdam for about a decade before coming back, after having lived there for a stretch of nearly 28 years (with a few longer absences), and what surprised me is how little had changed. And yet, here and there, there were buildings that I was pretty sure weren't there before, or familiar landmarks that had gone. Over many decades or even centuries that kind of change adds up, and even though for Amsterdam in particular there was at some point a plan to 'overhaul' it and make it more modern (which fortunately was arrested at the earliest stage, though it did do a lot of damage to the east side of the river Amstel), the vast bulk of the city has been unchanged for decades.

What real change there is, is expansion wherever there was undeveloped land, but that isn't change so much as it is simply growth; what was there before remains.

https://viewpointvancouver.ca/2019/10/27/the-1960s-when-the-...

https://en.wikipedia.org/wiki/Expansion_of_Amsterdam_since_t...

My mom still remembers that just behind the Olympic stadium in Amsterdam there were meadows and cows grazing! (That's now deep inside the city).

If you're interested in when buildings in Amsterdam were first constructed you can find that information in:

https://www.wozwaardeloket.nl/

Just zoom in on the city, click on any building outline on the map, and on the right-hand side you will find all kinds of interesting information, including the year that ground was broken or the building was formally entered into the registry. It's not always available, but the site contains a wealth of information.



Joseph Goebbels and the Amazing Technicolor Dream-Blimp


Huh. Never noticed the swastikas before.


Manhattan has never looked so green!


Came here to comment on how much park space there was in lower Manhattan. Not the same type of loss as the explosion, but a tragic loss nonetheless.


This reminds me so much of the Gene Wilder Willy Wonka Tunnel Scene


A real tragedy, but also a succinct metaphor for Nazi Germany!


Or for our culture of growth, believing we can outdo nature, like stories in The Onion's reporting on the Titanic: "World's Largest Metaphor Hits Ice-Berg" https://lh4.ggpht.com/-gWV2MSK4odI/T4yLpd1STpI/AAAAAAAASrw/o...


A bunch of the comments in this thread are inappropriate. There’s no justification for the over-the-top insulting language on this forum.




