Recommendation algorithms don't "conclude" anything, though; they are merely statistical tools for surfacing somewhat relevant content, and they are never perfect because there is a lot of noise in what everyone watches.
A. Jeesh, there's no problem with informal anthropomorphizing in these situations. Humans have goals and get feedback on progress toward them: when a human gets positive feedback that X moves them toward the goal, the human chooses X.
The combined system Google-corp+developer+algorithm is also goal-seeking and making choices, so anthropomorphizing the system is appropriate.
B. The problem we're talking about isn't "noise" but "feedback": a goal-seeking system that muddies its final result with its initial state. Essentially, bias, a situation that's quite common in statistical systems.
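The feedback effect B describes can be shown with a toy simulation (all names and numbers below are illustrative assumptions, not any real recommender): the system always shows the most-clicked item, users like both items equally, yet a one-click head start in the initial state dominates the final result.

```python
import random

random.seed(0)

def simulate(initial_clicks, rounds=1000, p_click=0.5):
    """Toy feedback loop: recommend the most-clicked item each round;
    the user clicks whatever is shown with the same probability p_click,
    i.e. both items are equally likable."""
    clicks = dict(initial_clicks)  # item -> click count so far
    for _ in range(rounds):
        # "Rich get richer": always show the current click leader.
        shown = max(clicks, key=clicks.get)
        if random.random() < p_click:
            clicks[shown] += 1
    return clicks

# Equal true preference, but the initial state decides the outcome:
# item "a" starts one click ahead and "b" is never shown again.
print(simulate({"a": 1, "b": 0}))
```

Item "b" ends with zero clicks despite being just as likable; the final statistics reflect the starting bias, not the noise.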