>> At this point there's huge value in giving better access to the platform itself to start applying machine learning techniques directly on the camera.
What does machine learning have to do with photography? And if there's a good answer to that, I'd also ask: why does it need to happen on the camera?
I'd agree with parent that ML is one of the killer apps of more open professional camera hardware.
Nowhere I can think of (other than Android, and even that comes with a lot of caveats) has an open hardware upstart challenged entrenched commercial players on legacy needs and won.
Where you do win is by targeting emerging needs, especially ones legacy commercial players are ill-equipped to take advantage of (due to institutional inertia or ugly tech stacks).
As for why ML on edge devices: a lot of work is going into running models (or first inference passes) on edge devices with limited resources (see the regular articles on HN). I'd assume the companies funding that work have business reasons.
But offhand, almost everything about vision-based ML in the real world gets better if you can decrease latency and remove high-bandwidth internet connectivity as a requirement.
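To make the on-device point concrete, here's a minimal sketch of running a quantized image classifier with the TFLite runtime, the kind of setup commonly used on resource-limited edge hardware. The model filename is a hypothetical placeholder; any small quantized classifier exported to .tflite would do.

    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

    # Hypothetical path: any small quantized classifier works here.
    MODEL_PATH = "mobilenet_v2_quant.tflite"

    interpreter = Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]

    # Stand-in camera frame sized to the model's expected input
    # (typically 1x224x224x3 uint8 for a quantized classifier).
    frame = np.random.randint(0, 256, size=input_details["shape"], dtype=np.uint8)

    interpreter.set_tensor(input_details["index"], frame)
    interpreter.invoke()  # runs entirely on-device: no network round trip

    scores = interpreter.get_tensor(output_details["index"])[0]
    print("top class index:", int(np.argmax(scores)))

The latency argument falls out of the invoke() call: inference cost is bounded by local compute, not by connectivity, which is exactly what you want on a camera in the field.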
It relies on the fact that nobody takes truly original pictures: you can build a large enough data set and transfer learned techniques/attributes/behaviors to "new" pictures. Computer vision works because real-world photos have a small standard deviation.
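As a toy illustration of that "small standard deviation" claim, here's a numpy sketch: embed images into a feature space, then transfer a known attribute from the nearest training example to a "new" picture. The embeddings and attribute labels are made up for illustration; the point is only that dense coverage of a low-variance space makes nearest-neighbor transfer work.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: feature embeddings (e.g. from a CNN)
    # plus one learned attribute per image, such as a scene category.
    train_embeddings = rng.normal(size=(1000, 128))
    train_embeddings /= np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    train_attributes = rng.integers(0, 10, size=1000)  # e.g. 10 scene classes

    def transfer_attribute(new_embedding):
        # Give a "new" picture the attribute of its nearest neighbor.
        # This works precisely when new pictures land close to existing
        # ones, i.e. when the data distribution has small spread.
        new_embedding = new_embedding / np.linalg.norm(new_embedding)
        similarity = train_embeddings @ new_embedding  # cosine similarity
        return int(train_attributes[np.argmax(similarity)])

    # A "new" photo that is a small perturbation of a known one -- the common case.
    new_photo = train_embeddings[42] + 0.05 * rng.normal(size=128)
    print(transfer_attribute(new_photo))  # almost surely train_attributes[42]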