I've noticed the same in my usage across almost all Google services at this point. My speculation is that it's because they now depend so heavily on neural networks, versus the previous mix of algorithms. The old approaches seemed better at avoiding false positives, i.e., creating a ridiculous mish-mash of previous trains of thought in an effort to recommend something, anything... no matter how wrong.
For example: older versions of the Google keyboard were pretty good about not recommending/autocorrecting non-words. I'd type things like function/variable names (which often use camelCase) and it would either suggest valid English words or perhaps a non-word used earlier in the same message. These days, my use of camelCase names in technical emails spills over, so emails to family now pop up with bizarre camelCase recommendations. I'm seeing similar things happening with YouTube recommendations and elsewhere.
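To illustrate what I mean by the old behavior, here's a minimal sketch (purely hypothetical, not Gboard's actual code): suggestions are gated to a fixed dictionary plus words already typed in the *current* message, so a camelCase identifier can appear as a suggestion within the message where you typed it, but never leaks into a later, unrelated message.

```python
import re

# Hypothetical sketch of a dictionary-gated suggester. The dictionary and
# function names are illustrative assumptions, not any real keyboard's API.
ENGLISH_WORDS = {"my", "function", "returns", "the", "total", "count", "get"}

def suggestions(prefix: str, current_message: str) -> list[str]:
    # Non-words (e.g. camelCase identifiers) are eligible only if they
    # already appear earlier in this specific message.
    session_words = set(re.findall(r"\S+", current_message))
    candidates = ENGLISH_WORDS | session_words
    return sorted(w for w in candidates if w.startswith(prefix) and w != prefix)

# Within a technical message, the identifier is suggested:
print(suggestions("getU", "getUserCount is fine."))  # ['getUserCount']
# In a fresh message to family, it is not:
print(suggestions("getU", ""))                       # []
```

The key design point is that the out-of-dictionary vocabulary is scoped to the session, whereas a model that folds everything it sees into one learned personal vocabulary will happily surface those tokens everywhere.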