
Bigger is one option, but more of them is another: combining the data from multiple telescopes placed, say, 10 km apart effectively turns them into a telescope with a 10 km diameter (see e.g. the Event Horizon Telescope, which made the first image of a black hole: https://eventhorizontelescope.org/).
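To put rough numbers on that (purely illustrative: the 10 km figure from above, a 1.3 mm observing wavelength similar to the band the EHT works in, and ignoring the geometry-dependent numerical factor):

    import math

    # Diffraction-limited angular resolution is roughly theta ~ lambda / D.
    # All numbers here are illustrative, not taken from any real instrument.
    wavelength = 1.3e-3   # metres (~230 GHz, the sort of band the EHT observes in)
    baseline   = 10e3     # metres: the hypothetical 10 km telescope separation
    dish       = 10.0     # metres: a single large dish for comparison

    rad_to_mas = math.degrees(1) * 3600 * 1e3   # radians -> milliarcseconds
    print(f"10 km baseline: {wavelength / baseline * rad_to_mas:,.1f} mas")
    print(f"10 m dish:      {wavelength / dish * rad_to_mas:,.1f} mas")

The ratio is just baseline over dish diameter, i.e. a factor of 1000 finer resolution at the same wavelength.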



You get the resolution but not the light-gathering capability of a big telescope. The Event Horizon Telescope had to do a lot of machine learning trickery to get an image. Also, as far as I understand, it's much more difficult to do this at visible wavelengths.
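A back-of-the-envelope way to see the light-gathering point (hypothetical numbers, just for scale): the array only collects photons over the dishes themselves, not over the whole synthesised aperture.

    import math

    # Hypothetical array: eight 10 m dishes spread across a 10 km baseline,
    # compared with an (impractical) single filled 10 km mirror.
    n_dishes, dish_diam, baseline = 8, 10.0, 10e3
    area_array  = n_dishes * math.pi * (dish_diam / 2) ** 2
    area_filled = math.pi * (baseline / 2) ** 2
    print(f"fraction of the light a filled 10 km aperture would collect: "
          f"{area_array / area_filled:.1e}")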


The EHT has used machine learning techniques, but those were not required to get an image that shows the shadow of the black hole. Just using the CLEAN algorithm for deconvolution yielded an image showing the shadow; that algorithm has been in use in radio interferometry for over 40 years and has nothing whatsoever to do with machine learning.
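For reference, Högbom's CLEAN (1974) is conceptually simple: repeatedly find the brightest pixel in the dirty image, subtract a scaled copy of the dirty beam (the instrument's point-spread function) centred there, and record the subtracted flux as a point-source component. A minimal illustrative sketch, not the EHT pipeline, with edge handling simplified to wrap-around:

    import numpy as np

    def hogbom_clean(dirty_image, dirty_beam, gain=0.1, threshold=1e-3, max_iter=1000):
        """Toy Hogbom CLEAN. `dirty_beam` must have the same shape as
        `dirty_image` and be centred in its array; edges wrap for brevity."""
        residual = dirty_image.astype(float).copy()
        components = np.zeros_like(residual)
        cy, cx = residual.shape[0] // 2, residual.shape[1] // 2
        for _ in range(max_iter):
            # locate the brightest residual pixel
            py, px = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            peak = residual[py, px]
            if abs(peak) < threshold:
                break  # residual is down in the noise; stop
            components[py, px] += gain * peak
            # slide the beam so its centre sits on the peak, subtract a fraction of it
            shifted_beam = np.roll(np.roll(dirty_beam, py - cy, axis=0), px - cx, axis=1)
            residual -= gain * peak * shifted_beam
        return components, residual

A real pipeline then convolves the components with an idealised "clean beam" and adds back the residual; the point is just that none of this involves training a model.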


I watched a presentation by a researcher who worked on the EHT. She pointed out that a lot of machine learning was used to rule out wrong pictures. It's not simple interferometry.


I am also a member of the EHT project. I'm not saying no machine learning was used by EHT team members. I'm saying that you can process the 2017 EHT data using standard radio interferometry techniques and get an image showing the M87 black hole shadow without machine learning or any other type of AI.



