That's an interesting article, and relevant in the sense that the "magic kernel" can be used for super-resolution, but Adobe is using a fairly different approach. Instead of analytically derived functions, Adobe uses a deep learning model trained on a large dataset of low-resolution/high-resolution image pairs. The details are proprietary, obviously, but it's likely similar to various deep learning super-resolution algorithms in the academic literature. (Some more info here https://blog.adobe.com/en/publish/2021/03/10/from-the-acr-te...)
This sounds like how Nvidia implemented Deep Learning Super Sampling (DLSS) on its graphics cards for gaming: games render internally at a lower resolution and upscale to a high output resolution (e.g. 4K), and in some cases the upscaled image looks better than native.
tl;dr: if the backstory isn't your sort of thing, the bit at the end (and the linked paper) covers the topic on its own:
"As noted above, in 2021 I analytically derived the Fourier transform of the Magic Kernel in closed form, and found, incredulously, that it is simply the cube of the sinc function. This implies that the Magic Kernel is just the rectangular window function convolved with itself twice—which, in retrospect, is completely obvious. This observation, together with a precise definition of the requirement of the Sharp kernel, allowed me to obtain an analytical expression for the exact Sharp kernel, and hence also for the exact Magic Kernel Sharp kernel, which I recognized is just the third in a sequence of fundamental resizing kernels. These findings allowed me to explicitly show why Magic Kernel Sharp is superior to any of the Lanczos kernels. It also allowed me to derive further members of this fundamental sequence of kernels, in particular the sixth member, which has the same computational efficiency as Lanczos-3, but has far superior properties."
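The quoted claim is easy to check numerically: the rectangular window convolved with itself twice is the quadratic B-spline, whose Fourier transform is sinc³. Here's a small sketch (my own illustration, not the author's code) that convolves a sampled rect with itself twice and compares the result against the standard closed-form quadratic B-spline:

```python
import numpy as np

def magic_kernel(x):
    """Quadratic B-spline in closed form (the continuous Magic Kernel,
    per the claim that it is rect convolved with itself twice)."""
    ax = np.abs(x)
    return np.where(ax <= 0.5, 0.75 - ax**2,
           np.where(ax <= 1.5, 0.5 * (1.5 - ax)**2, 0.0))

# Symmetric grid with an exact center sample so 'same'-mode convolution
# stays aligned with x
x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]

# Rectangular window of unit width and unit area
rect = ((x >= -0.5) & (x < 0.5)).astype(float)

# Convolve twice; multiply by dx each time to approximate the
# continuous convolution integral
tri = np.convolve(rect, rect, mode="same") * dx        # triangle (linear B-spline)
bspline2 = np.convolve(tri, rect, mode="same") * dx    # quadratic B-spline

err = np.max(np.abs(bspline2 - magic_kernel(x)))
print(f"max deviation from closed form: {err:.2e}")    # small discretization error
```

The agreement (up to discretization error from the sampled rect edges) illustrates why the sinc³ Fourier transform is "completely obvious in retrospect": convolution in the spatial domain is multiplication in the frequency domain, and the rect's transform is sinc, so rect ∗ rect ∗ rect transforms to sinc³.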