I'm torn. On the one hand, there are many websites I hate interacting with because they are unnecessarily structured as JS clients exchanging data with a remote server over a network link. On the other hand, that structure makes it easier for me to develop libraries and CLI clients of my own: these days I get quite far with Firefox dev tools' network view and "copy as curl," and only occasionally have to actually read the code in those JS clients. In the old days, I would have had to resort to some ugly and comparatively brittle page scraping.
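The end product of that workflow is usually something tiny, like the sketch below. Everything in it is hypothetical for illustration: the endpoint, the query parameters, and the shape of the JSON are whatever the network view happens to reveal for the site in question, not anything standard.

    # Minimal sketch: a "copy as curl" capture turned into a CLI client.
    # The endpoint, params, headers, and JSON keys are all assumptions;
    # substitute whatever the dev tools' network view actually shows.
    import sys
    import requests

    API = "https://example.com/api/v1/search"  # hypothetical endpoint

    def search(query: str) -> list[dict]:
        resp = requests.get(
            API,
            params={"q": query, "limit": 20},      # copied from the capture
            headers={"User-Agent": "my-cli/0.1"},  # some sites check this
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["results"]  # key name is an assumption

    if __name__ == "__main__":
        for item in search(sys.argv[1]):
            print(item.get("title", item))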
This new world sucks for the interactive user in a browser (which is me often enough), but it's great for the guy who wants to treat a website as just a remote provider of some kind of data or service (also me often enough).