Humans follow orders given by other humans, on the basis of data that is analyzed by machines and interpreted by humans.
If the machine says "dude is terrorist based on XYZ" and the human cannot realistically verify that all of it is factually correct (perhaps the subject's phone was lost as the subject walked by a mosque?), then it is much easier for the human to say "Data says this dude is terrorist" than it is to say "Data says this dude is terrorist, but the data is probably wrong and we shouldn't..."
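To make "the data is probably wrong" concrete, here is a toy Bayes calculation (every number in it is invented for illustration): even a very accurate classifier produces mostly false positives when the thing it is detecting is rare.

    # Toy base-rate calculation -- all numbers below are invented for illustration.
    base_rate = 1 / 100_000       # fraction of the population that is an actual threat
    sensitivity = 0.99            # P(flagged | actual threat)
    false_positive_rate = 0.01    # P(flagged | not a threat)

    # Bayes' rule: P(threat | flagged) = P(flagged | threat) * P(threat) / P(flagged)
    p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    p_threat_given_flag = (sensitivity * base_rate) / p_flagged

    print(f"P(actual threat | flagged) = {p_threat_given_flag:.2%}")
    # Prints roughly 0.10%: about 999 of every 1000 people this hypothetical
    # system flags are innocent.

Under these made-up rates a flag is wrong about 99.9% of the time, which is exactly the situation where "data says this dude is terrorist" is easy to repeat and hard to check.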
The existence of the data itself is a threat against every subject the data includes, at a minimum.
I believe the core problem there is still making extreme decisions without proper evidence. This could happen if the government knows much less about you (e.g. just the info on your driver's license) or much more about you. That is, the problem in these specific examples is not the existence of the data, but rather the willingness to throw caution to the wind and operate on shaky foundations.
Humans have a threshold where their confidence in the accuracy of something determines their willingness to participate or take action. The machines/algorithms/authority structures and so forth are in place in large part to provide that confidence.
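As a loose sketch of that threshold dynamic (the names and numbers here are entirely hypothetical, not a description of any real system): the decision to act fires once perceived confidence clears a threshold, and the authority of the machinery can inflate perceived confidence well past the actual reliability of the evidence.

    # Hypothetical toy model of the confidence threshold -- purely illustrative.
    ACTION_THRESHOLD = 0.90   # made-up minimum confidence before someone will act

    def perceived_confidence(evidence_reliability: float, authority_boost: float) -> float:
        # Confidence as felt by the decision-maker: the real reliability of the
        # evidence, inflated by the authority of whatever produced it.
        return min(1.0, evidence_reliability + authority_boost)

    evidence = 0.55   # barely better than a coin flip
    boost = 0.40      # "the algorithm said so"

    if perceived_confidence(evidence, boost) >= ACTION_THRESHOLD:
        print("Act -- even though the evidence itself is only 55% reliable.")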
The issue today is that the leadership (in many areas of life, from business to the military to government) who decide whether or not to kill/censor/interrupt business/etc. are saying "we have to follow the data" without having any understanding of what that really means.
Ultimately, this creates false confidence both in the decision-maker and in those following their lead. I find it unlikely that there would be anywhere near the same willingness to act if the 'intelligence' behind many of these decisions didn't seem as rich and unmistakably correct as it often does.
Of course, the practical effect here is that leadership gets to blame the algorithm/model/data instead of having to accept the blame themselves. If only those pesky engineers and nerds in the lab were better at their jobs, we'd bomb fewer foreigners.