UA "detection" is an unworkable mess (i.e. a bad idea) when it's trying to differentiate between different consumer web browsers. But UA detection works just fine when it's only trying to differentiate between browsers as a whole, and things that are not browsers—like web spiders, or curl(1), or your native client app.
I have a feeling that this is the latter case. The detection code was likely written to give a higher QoS priority to direct browser access, and a lower QoS priority to the native OneDrive sync service built into Windows (since it runs in the background). But in doing so, they probably made the check ask the wrong question—"is this a browser?" instead of "is this our client?"—and so wrote it as a browser whitelist (one that was never tested against UA strings from anything but browsers on Windows), rather than as a blacklist matching their one sync-service UA.
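To make the distinction concrete, here's a hedged sketch of the two ways the check could have been written; "OneDriveSync" and the browser names are placeholders, not Microsoft's actual UA strings or code:

```python
BROWSER_WHITELIST = ("Edge", "Chrome", "Firefox", "Safari")

def high_priority_wrong_question(user_agent: str) -> bool:
    # "Is this a browser?" -- anything not on the whitelist (Linux browsers,
    # unusual UA strings, etc.) silently falls into the low-QoS bucket.
    return any(name in user_agent for name in BROWSER_WHITELIST)

def high_priority_right_question(user_agent: str) -> bool:
    # "Is this our client?" -- only the one known background sync client is
    # deprioritized; every other caller gets normal treatment by default.
    return "OneDriveSync" not in user_agent
```

The second version fails open: an unrecognized UA gets the normal experience instead of being quietly demoted.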
Doesn't serving web crawlers different pages than the user-facing UI get you kicked out of search engine indexes? The original ticket indicated that different pages were served, not just different QoS.
(I think) we're talking about pages served to logged-in users—i.e. the pages for files and folders in a given user's OneDrive account. Web crawlers wouldn't be able to reach those either way, so it (inconveniently, from a fail-early perspective) wouldn't affect anything if the detection code did something stupid/crazy for crawlers.