Apple is going exactly where I predicted they'd go. They've already had machine learning hardware on their phones for multiple generations, mostly "just" doing on-device image categorisation.
Now they're moving into more generic LLMs, most likely supercharging a fully local Siri in either this year's or next year's iOS release.
And I'm predicting that an opportunity for 3rd party developers to plug into said LLM and provide extra data will be announced during WWDC. So you can go "Hey Siri, what's the latest news on CNN" -> the CNN app provides data to the Siri language model in a standard format and it tells you what's going on.
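For illustration, here's a rough sketch of what that plug-in point could look like if Apple built it on top of the existing App Intents framework. To be clear, this is pure speculation: the intent name, the parameter-free design, and the fetchHeadlines helper are all made up, and whatever mechanism Apple actually ships could look very different.

```swift
import AppIntents

// Hypothetical: a news app exposes its current headlines to the system
// so a Siri language model could summarise them on request.
struct GetTopHeadlinesIntent: AppIntent {
    static var title: LocalizedStringResource = "Get Top Headlines"
    static var description = IntentDescription(
        "Returns the latest headlines as plain text for Siri to summarise."
    )

    // Siri could invoke this when the user asks "what's the news on CNN?"
    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // fetchHeadlines() stands in for the app's own networking/cache layer.
        let headlines = try await fetchHeadlines()
        return .result(value: headlines.joined(separator: "\n"))
    }

    private func fetchHeadlines() async throws -> [String] {
        // Stubbed data so the sketch compiles without a real backend.
        return ["Headline one", "Headline two", "Headline three"]
    }
}
```

The key idea is just that the app hands over plain structured text and the on-device model does the talking, rather than the app scripting the whole conversation itself.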
I wonder how hard it would be for Apple to rebuild Shortcuts around an AppleScript backend, so power users could edit the scripts directly.