I interned in St. Louis one summer and my neighbor worked at Boeing. The only reason I remember him to this day is because he introduced himself as a programmer working on smart bombs. He immediately said it was okay because he didn't fire the bombs, he just made them smarter so they didn't kill innocent people.
Is it a realistic expectation that we have no one working on smart bombs/weapons technology in general?
I’d love to be in the utopia world where there is no war and the entire planet gets along. But even the US has political bipolar disorder, so we’re a looooong way from global peace.
I do think we spend an absurd and inappropriate amount on defense, but as long as we are forced to develop weapons due to the global geopolitical climate, I’d consider it preferable to work on making them more targeted.
My problem is I don’t like or agree with the people picking the targets, but that’s a whole other argument.
I don't believe it is unethical to work on weapons technology any more than I believe it is unethical to be member of the armed forces. We live in a world with bad actors, so some amount of work on defense is necessary for survival.
However, I also believe you are responsible for the consequences of the technology you create. If you decide to work on smart bombs and because of a bad government those end up killing innocent civilians, some of that blood is on your hands. If you're willing to accept the risk of that, I respect your choice.
Personally, I would rather avoid that by not working on that category of technology at all. It's not the kind of mark I want to leave on the world. I do appreciate that I only have the luxury of this choice because I live in a country where others do choose to work on defense so that I can be protected.
The "if" is significant. If you work on weapons that turn out to be used to stop legitimately bad actors with minimal collateral damage, then your hands are as clean as anyone's can be in war.
I don't generally believe in black and white, so I'm making no claims that anything is 100% ethical, 100% unethical, 100% blood on your hands, etc. I think it's possible for good people to work on weapons and still sleep soundly at night. I also think it's possible for good people to work on weapons and end up regretting the consequences of that choice.
If you're working on weapon systems that will be used to attack, say, Nazi Germany, and you know or expect that they will also kill innocent civilians, then you are de facto endorsing that murdering[0] innocent civilians is worth it to stop Nazi Germany, which is a reasonable, if debatable, position.
If you're working on weapon systems that (you know or expect) will just be directly used for murdering[1] innocent civilians, then you're endorsing murdering innocent civilians.
If you don't know where on that scale things are, then you're reasoning under uncertainty, and your ethics are going to have to deal with that.
0: technically most of those would actually be manslaughter, going by the usual definitions, but a: not all, b: that's not a verb.
> However, I also believe you are responsible for the consequences of the technology you create.
I agree with this to an extent, but at the same time, technologies you develop can go well beyond what you intended them to be. Do you think Ritchie knew C would be used the way it is today? It is used in UAVs, smart missiles, and more. Building faster algorithms also means faster weapons. Everyone working at SpaceX is directly or indirectly working on missile technology. You can do this with practically any technology, even as simple as building a better screw. Building materials that are stronger, lighter, and cheaper reduce the prices of homes, but it also has applications in war. Studying diseases and vaccines directly impacts biological warfare research.
It is easy to say you're responsible when you're working on things like smart bombs. But it isn't easy if you're researching fertilizer, the thing that has arguably saved more lives than anything else. Yet how many lives have bombs and bullets taken?
> Personally, I would rather avoid that by not working on that category of technology at all.
So what I'm saying is you're working on that tech, just how removed are you? I also don't think there's a cognitive dissonance. You can be pro putting nitrogen in soil and anti explosive nitrates. But saying the two aren't related is naive.
Engaging in any kind of commerce (i.e. beyond growing food for yourself and your family) means paying taxes, which means funding governments and armies.
Maybe you can’t convince everyone else to behave ethically, but you can hold yourself to that standard. Working on bomb tech is over a line for me personally.
Sure, I get that. And I think it’s a position shared by a lot of people. Probably the majority of people. But it’s an ugly necessity of modern politics.
We could absolutely stop all weapons production in the US, I’d just ask for some time to learn Mandarin first.
As someone involved with the industry, this is my take.
The work defence contractors do and will continue to do is a measurable positive in the world. This work generally falls into three categories:
First is projects that make the lives of armed forces personnel easier. There is no moral or ethical hazard with these projects. They simply make pre-existing work, often non-combat work, less tedious.
Second is projects that actively protect the lives of armed forces personnel and assets. Why would somebody have any reason to feel anything but pride in their work when they know that the only thing it could ever do is save lives?
Third is the arguably morally shaky stuff. Weapons technology would fall under this category. The thing people seem to miss about this category however is what new projects' goals are. Very rarely is it ever to create a more lethal weapon. Almost all development of this kind focuses on either making existing technology cheaper, more precise, or have increased range.
In the first case, you could argue that cheaper weapons are more likely to be used, which is fair; however, if a weapon system is in active use already, it is unlikely that a cheaper version would do anything other than bring down operating costs. In the second case, these projects are actively making these weapons safer. Look at the Hellfire R9X as a prime example: this weapon's sole purpose is to minimise casualties and minimise damage to the surrounding area. Finally, the third case: increasing range is important for force projection reasons, but the most immediate benefit is that it allows armed forces personnel to be further from danger. These people would be in the fight regardless, but now they are able to operate from a safer distance.
Weapons tech gets a bad rap for obvious reasons but I see that as mostly having been a relic of a past era. Nowadays there isn't really any reason to make more lethal weapons. We already have those and you can see a steady trend in new military technology towards lower risks, costs, damage to the surroundings, and minimising civilian casualties.
Additionally, in my experience, you generally have a choice whether you work on a project or not. Leadership fully understands what they do as an organisation and gets that some people may not be comfortable with certain projects. If you can't in good conscience work on a certain type of project, simply say so and leadership will take that into account when planning personnel assignments.
TL;DR Armed conflict and existing weapons will never go away. What we as a society (namely, the modern defence contractor) can do instead is make armed conflict less dangerous to those caught in the crossfire and those who risk life and limb to defend our countries.
Even those first two can have fairly foreseeable negative effects, though. Not necessarily net negative, mind you, but negative. If (e.g.) American soldiers have less tedium in their downtime (e.g., because paperwork is automated) and are safer (e.g. because their body armor is better), it could very reasonably lead to missions or even whole wars being undertaken that would otherwise be unconscionable. It's kind of like boxing gloves or football pads - on the face of it, it makes things safer, but because everyone knows it's "safer", they're willing to push it further.
Not judging your decision (although I personally prefer not working for defense/defense contractors), just pointing out that "it only makes soldiers safer!" isn't an ironclad rebuttal.
That's the point of armies, to be as lethal and effective as possible. A protracted war benefits no one, and a powerful army acts as a deterrent. I think that if you think your country has the right to self-defense, you have a moral obligation to make it as lethal as possible. Ethics is not an excuse for inaction or abandonment of responsibility.
I think this calculus becomes slightly different when you're talking about a nation that will never be in an existential defensive war. A protracted war benefits no one, but knowing that a war would be protracted might benefit the side that would otherwise be the loser by serving as a deterrent to the winner.
A protracted war benefits the military of the weaker side but I doubt their people would like it. Also, I can't recall the last time the United States was deterred from entering war because it would last too long.
I generally agree with your comment but this part is IMO delusional:
> These people would be in this fight regardless but now they are able to operate from a safer distance instead.
I sincerely doubt the US would be fighting five wars with boots on the ground in Afghanistan, Pakistan, Somalia, Yemen, and Iraq if it weren't for remotely operated drones.
If one considers the whole operation of armed forces to be unethical, it's easy to find ethical and moral hazards with the first two points as well.
Analogy: Would you consider it unethical to provide tools and services that make the lives of human traffickers easier? To actively protect assets of drug lords distributing contaminated opiates and meth?
The slave and drug trades are about as likely to go away as armed conflict.
I think this argument is a bit unfair. As much as I would love for our armed forces, and the concept of armed forces as a whole, to be unnecessary, the reality is that there is no realistic way for nation states to coexist without some type of armed forces as a deterrent to conflict, even if they were never to be put into action. Because of that, I don't think it is possible for the simple existence and operation of armed forces to be unethical. Sure, armed forces can be used unethically, and individuals or units can do unethical things, but that is true for all organisations.
This is in the same way that corrupt charity organisations can defraud the causes they claim to support and corporations can knowingly risk the lives of their workers due to either negligence, mismanagement, or for the sole purpose of saving costs or making more money. This doesn't make charities or corporations unethical solely because they have the potential to be.
Additionally, armed forces can and do maintain their operational fitness by using their immense logistics networks and pools of human labour to do very real, quantifiable good in the world. Outside of their use as a deterrent and show of power, the US armed forces spend a large chunk of their time and effort responding to natural disasters, taking part in humanitarian efforts, and building infrastructure.
---
On to the analogy, I would like to reframe both examples a bit.
For the purpose of this discussion, illegal transportation of individuals falls into two categories: Human trafficking and human smuggling. Human trafficking is done without consent of the individuals being transported. Conversely, human smuggling is done with consent. Despite being illegal, human smuggling provides a very important service to individuals. It may be against the law but it can be argued that it can provide a net benefit.
Mind you, it comes with severe downsides, and by no means am I trying to minimise those; I'm just trying to keep this from turning into a 20 page essay on the topic. One of those downsides is that it increases the volume of people being transported, which makes it cheaper and easier for human trafficking rings to operate. Another is that a large number of people die during transportation, both consenting and non-consenting.
Now, moving on to the less clear-cut outcomes: human smuggling allows individuals to enter a country without going through the vetting process. This opens a vector for criminals and terrorists to enter a country; at the same time, it provides a means for people who either aren't willing or aren't able to wait untold years to get a visa, only for it then not to be renewed a few months or years down the road. These may be individuals or families who want to be productive members of society in a country with a higher standard of living, or who are trying to move somewhere they can be treated for medical issues that can't be treated at home. Another common case is individuals using smuggling to cross countries that have closed their borders to them so that they can reach a country that will accept them as refugees. The reasons somebody would illegally enter a country go on quite a bit, and the list is full of both morally sound and morally reprehensible ones.
With that in mind, while I personally wouldn't participate in any type of business connected to the illegal movement of people across national borders, I see no moral or ethical issues with someone who does so for the sole purpose of making the process safer. Despite the fact that it is illegal, the way I see it is that you are reducing the risk that people lose life and limb.
---
Moving on to the other example. My opinion is that most drugs should be legalised and regulated. By moving to a legitimate market and starving these organisations of their source of income, we should see drug lords replaced by corporations. You could argue that the cartels would just find a new market, but I see giving them one less black market to deal in as a plus. If this were to happen, in the US those drugs would be required to meet FDA and ATF standards. Sure, those that don't would still exist, much like improperly distilled/methylated moonshine is still an issue, but I can only hope that deaths and injuries due to contamination would plummet compared to where we are now. Why would you buy contaminated drugs when cheaper, pure, regulated, and mass-produced variants exist legally? At that point, wouldn't working to improve safety standards and effective distribution in this industry be no more morally or ethically problematic than working for existing alcohol and tobacco corporations?
Now past the hypothetical, I would argue that making things less dangerous would be tangential to your example. Making the drugs themselves less dangerous would be working to increase production standards and decrease contaminations and impurities. With regard to the transportation aspect, drug trafficking is extraordinarily dangerous for those involved. Drug mules expose themselves to immense risk when carrying large doses within their bodies across borders. This is by no means the only way the drugs are transported across the border but if somebody was already involved and wanted to make a difference, making this safer would be one way. Drug mules are often either doing so as a way to support themselves financially, as part of a deal to allow them to be smuggled across the border, or against their will in some cases due to human trafficking or blackmail. In most of those cases, the mule is a more or less unwilling participant. Lowering the risk for these individuals is in my eyes both morally and ethically valid.
---
Ultimately I think all of this comes down to this. If you dedicate yourself to work that reduces the loss of life and limb, generally I find that to be a just cause. Defence contractors do this by developing weapon systems that operate at long range with surgical precision to minimise casualties. Armed forces generally do this by serving as a deterrent to violence and by responding to natural disasters and humanitarian crises. In the same sense, there can be people whose work with drug cartels and organisations facilitating illegal human transportation results in a positive impact on people.
Another way to put it would be that I would prefer a drug cartel making a pure product (of quality meeting FDA standards) that is transported without risking people's lives rather than what we have now. In the same way, I would rather we have an organisation facilitating human smuggling and trafficking where nobody died or was permanently injured in transit than one where people died and were injured. Obviously, in an ideal world we would prefer that there wasn't a reason for either to exist, but like you said, they aren't going away any time soon, and just because in these cases the organisation might be unethical, it shouldn't make the people trying to improve the situation from the inside unethical.
Note: This post is a bit rushed and quite long winded as I wrote it up while taking a break and I've spent more time on this than I probably should have already. I would like to be able to actually write up a proper concise response with citations to back up my assertions however this should hopefully get enough of my point across.
It's rather amazing the mental gymnastics people perform to escape cognitive dissonance. I'm much more sympathetic to people who know exactly what they're working on, know the full effects of that work and why some will find it distasteful, but can nonetheless make a case for why those people are wrong and the work important, than to those who agree with the critics but hastily add some transparent excuse to avoid being lumped into the distasteful category.
I mean, that seems legitimate to me. Various governments are going to continue dropping bombs. If there's a market for PGMs for U.S. operations, that means less collateral damage; whether you like to think of it that way or not.
Yeah, and the commenter upthread didn't personally saw Jamal Khashoggi up alive, and he's got a whole headful of justifications as to why he couldn't possibly be complicit in that. When he is. He doesn't have a single atom in his body with the ethics and morals of Tim Bray...
I disagree with that analogy. Palantir isn’t getting them more data. It’s helping them use it more effectively. The organizations already have all the data recorded.
If you don’t like the data that organizations record, I’m right there with you, but demonizing the tools that allow them to access data more effectively is an absurd stance to take. Their system also does things like add an immutable audit log, or enforce requirements on how data can be accessed.
Working with law enforcement, before Palantir there was nothing preventing an officer from looking up their ex-husband or ex-wife, or celebrities, or anyone at random. With it, there's an audit trail of every search, and most of them need a case number as justification. Even then, certain searches would automatically get flagged for review.
We can go back to giving people direct and unaudited SQL access if you’d like, but I’d prefer some more accountability, even if the downside is organizations being able to use data _they already have_ more effectively.
I’ve only worked with a handful of LEOs in my time there, but the ones I did work with were genuinely concerned about it.
If you’ve never worked with a government agency before, let me tell you, Hanlon’s Razor is in full effect. Never attribute to malice that which is adequately explained by stupidity.
The higher-ups know that free rein for their officers to look up anything is a recipe for legal nightmares. The individual officers are often just too dumb to realize that they shouldn't search for something, and do so out of curiosity. Having blocks in the way, like requiring a case number to run a search, and knowing that the search and results are forever associated with that case, cuts down a lot on abuse.
Are all forms of abuse the same, or so easily mitigated? No, but I’d like for some system to audit that and at least try to enforce it.
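The case-number-gated, audited search described above can be sketched in a few lines. This is a hypothetical toy, not Palantir's actual design; all names, the hash-chained log, and the in-memory data are my own illustrative assumptions:

```python
import hashlib
import time

class AuditedSearch:
    """Toy sketch: every search requires a case number and is recorded
    in a tamper-evident (hash-chained) audit log. Illustrative only."""

    def __init__(self, records):
        self.records = records          # data the agency already holds
        self.audit_log = []             # append-only log entries
        self._prev_hash = "0" * 64      # chain seed

    def search(self, officer_id, case_number, query):
        if not case_number:
            raise PermissionError("a case number is required to run a search")
        results = [r for r in self.records if query.lower() in r.lower()]
        entry = {
            "officer": officer_id,
            "case": case_number,
            "query": query,
            "hits": len(results),
            "ts": time.time(),
            "prev": self._prev_hash,
        }
        # Chaining each entry to the previous one makes silent edits detectable.
        self._prev_hash = hashlib.sha256(repr(entry).encode()).hexdigest()
        entry["hash"] = self._prev_hash
        self.audit_log.append(entry)
        return results
```

The point is that the gate and the log live in front of the data, so "who searched for whom, and under what case" is itself queryable later.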
During a training, demonstrating how to do geo searches on license plate data from ALPRs, a detective insisted we look up their car.
The hits were all more or less expected, a bunch around their home and the precinct, local supermarkets, etc. But then a weird grouping way out of town. Detective insisted it was a mistake and wanted more data, thinking someone stole his license plate (insert eye roll here).
Anyways, zooming in and looking at the highest density of hits, it ended up being a strip club that he frequented. He got bright red and his buddies didn't stop giving him shit for it all day.
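The geo-density step in that story amounts to binning (lat, lon) hits into a coarse grid and finding the busiest cell. A toy sketch with invented coordinates (the ~0.01-degree cell size, roughly a kilometre, is an assumption for illustration):

```python
from collections import Counter

def densest_cluster(hits, cell=0.01):
    """Bin (lat, lon) hits into a grid of `cell`-degree squares and
    return the center of the busiest cell plus its hit count."""
    bins = Counter((round(lat / cell), round(lon / cell)) for lat, lon in hits)
    cell_key, count = bins.most_common(1)[0]
    center = (cell_key[0] * cell, cell_key[1] * cell)
    return center, count
```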
This right here is a great argument for why "helping them use [the data they already have] more effectively" (as you've said) is probably not as great a mission as you're making it out to be.
Not sure why you're being downvoted. All your comments have been well and politely argued (whether people agree with them or not), and this is the first time I got a peek at what Palantir is actually doing. Thanks for the info.
Meh, it’s fine, downvotes don’t bother me. Especially when you take the position that goes against common opinion, it’s to be expected.
The irony to me is that when I was at Palantir they talked extremely openly about all their technology. A ton of it was open sourced, and in depth tech talks were posted to YouTube, but almost no one watched them.
I think people just like writing them off as shady or sketchy because it’s easier than needing to reason about the clear ethical nuance that exists in the government contractor space.
> I think this position is a bit naive. Like saying "but child porn is just bits, like any other type of file".
Everything has a domain it operates in, so I believe that statement has nuance. In particular when you look at how general or specific the domain is.
From the perspective of something uploaded to S3? It sure is like any other file.
From the perspective of a website dedicated to the dissemination of child porn? It clearly is more than "just bits" in that context.
How about from the perspective of a search engine, where that site may be indexed? Welcome to the grey area that all of these debates are rooted in. Technically it's just indexed strings or ngrams, so it is like any other type of file. But there's an argument that a search engine should "know more" than just the raw data, and should be able to understand that context somehow.
Palantir is in this grey area. The software isn't built for spying. It's built for managing and understanding vast amounts of data. Can this be used for spying? Yes. It can also be used for double blind clinical trials. Or maintaining insurance systems. Or coordinating disaster relief efforts.
It operates technologically in a very general domain, but their flexible user-defined ontology makes it very powerful at operating in more specific domains. So it's a lot less cut and dry than the dissenters make it seem, IMHO.
Well, in the context of this post, that is irrelevant at best, or whataboutism at worst. I notice you don't offer a defense of Palantir here. Why is that?
> Well, in the context of this post, that is irrelevant at best, or whataboutism at worst. I notice you don't offer a defense of Palantir here. Why is that?
HN won't let me reply to that, so I'll do it here. I didn't feel the need to offer a "defense" of Palantir there since I already outlined it above and don't feel like there's value in repeating myself.
But you asked, here you go - The government is going to use the data regardless, so long as they have it. The only effective recourse is to make them collecting or storing the data illegal, but good luck with that, seeing as even when it is illegal they do it anyways (see: Snowden).
Do you think if Palantir weren't there they'd just say "Ah well, I guess that's that" and be done with it? Nope; they'd go to Lockheed, Raytheon, IBM, or another long-time contractor to build a replacement almost immediately. The technology is genuinely nothing special. It federates searches across disparate databases and stitches the results together.
But for me, having worked at Palantir, I know what mechanisms are provided to attempt to mitigate abuse. So as long as that data does exist and is going to be used, I'd prefer to have it be as well controlled and audited as possible. And sure, you can argue that the government may forego auditing, or be entirely corrupt, but that doesn't seem to me to be a good reason to not use tools that at least provide that capability.
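"Federating searches across disparate databases and stitching results together" is conceptually simple. Here is a minimal sketch under my own assumptions (the sources, schemas, and namespacing scheme are invented; this is not Palantir's code):

```python
# One query fanned out to several independent sources, with whatever
# fields come back merged into a single profile. Toy illustration.

def federated_search(key, sources):
    """Query each named source for `key` and merge the returned fields."""
    merged = {}
    for name, lookup in sources.items():
        record = lookup(key)          # each source has its own lookup function
        if record:
            for field, value in record.items():
                # Namespace fields by source so conflicts stay visible.
                merged[f"{name}.{field}"] = value
    return merged

# Two stand-in "databases" with different schemas:
dmv = {"ABC123": {"owner": "J. Doe", "make": "Toyota"}}
warrants = {"J. Doe": {"status": "none outstanding"}}

sources = {
    "dmv": dmv.get,
    # A source can itself chain through another: plate -> owner -> warrant.
    "warrants": lambda plate: warrants.get(dmv.get(plate, {}).get("owner")),
}
```

The interesting (and contested) part isn't the code; it's that the second source silently joins two datasets that were collected separately.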
> The only effective recourse is to make them collecting or storing the data illegal, but good luck with that, seeing as even when it is illegal they do it anyways (see: Snowden).
So, basically "screw rule of law, people are gonna do it anyway, so let's make money off it!"
And we would think the same about Lockheed, Raytheon or IBM. Just because there'll always be someone without morals doesn't mean we just shrug and say anything goes.
I’m not saying we should shrug and say anything goes. I’m saying all these companies share one commonality: they are building things for the US government.
If you want to see change that is more than just superficial, that’s where you need to make it.
Sure. That doesn't mean that what Palantir are doing isn't wrong - selling tools to an entity which is going to use them for harm is wrong, even if they might result in slightly less harm than selling them some other tool.
If someone comes into a gun shop and tells you they want to shoot up a classroom, selling them a pistol instead of a machine gun doesn't make you immune from judgement.
What makes Palantir different than any other software vendor that sells to the government?
Want to know how a lot of that data is generated? Microsoft Excel. Arguably, without Excel there would be less data to act abusively with. If only Microsoft just refused to sell Excel to the government.
People are happy to accept that Excel is general enough that it isn’t made for that one purpose, but refuse to apply the same reasoning to Palantir; that may seem like whataboutism, but they genuinely are comparable tools if you take away the marketing and fear mongering.
As for your analogy, it’s more like you own a shop that makes metalworking tools, and someone buys some. You don’t know what they are going to do with the tools, and (here’s where it gets opinionated) you shouldn’t need to care. You sold a tool to someone. Can the tool be used to make a gun? Sure. But it can also be used for anything else involving metalwork. At some point we need to accept that the responsibility for how a tool is used needs to be placed on the person using it.
Palantir develops tools to be better at tasks that are pitched to them in some degree of detail through the RFP process and similar processes. Palantir as a company would not exist if it were not for the less savoury of those tasks.
We do not hear, "Government X purchases a couple thousand Excel licenses to keep a database on people it'd like to kill", and I imagine that Microsoft does not sell Excel to Government X in response to an RFP for tooling to keep track of people it'd like to kill.
(Of course, it'd probably stick its metaphorical fingers in its ears if it did hear about Excel being used for that purpose.)
I'm talking about neutering the surveillance state. That's not something that demonizing individual companies is ever going to do. You need to push for that change at the government level.
Unless you're pro-surveillance, but your previous comments made it seem like the opposite.
The analogy seems apt; weapons dealers don't give warlords armies (if they did, they'd be called mercenaries.) They give the armies of warlords the tools they need to be more effective. The violent acts, killing people or holding PII, are still performed by the warlord's organization, but does that excuse the people selling the tools used?
I guess we’ll just have to agree to disagree. I just don’t think that complaining about a tech company when what you actually dislike is what governments or other organizations are doing is going to impact anything.
But Palantir has been the “tech boogieman” since before I even left in 2014, so it probably always will be. It is admittedly easier to blame a smaller tech company as a scapegoat than try to address the larger governmental or organizational actors that you actually have grievances against.
Just know it won’t do anything. The people who complain online aren’t their target market and never will be. And the paranoia lends way more credence to their dressed up search engine than it really deserves on technical merit alone.
So in a way the people complaining about it are making the issue they’re so upset about worse.
The reason why Palantir is the "tech boogieman" is because it willingly and enthusiastically sells its tech to governments specifically for surveillance reasons, and even consults them as to how best integrate it all. And then we look at the politics of its owner, and it's explicitly anti-democratic - so it's not a coincidence.
And yes, of course, complaining about Palantir online isn't going to change the government policy. But if working for Palantir means that no other self-respecting software engineer will want to shake your hand, then fewer people will want to work there, and their surveillance tech will be lower quality and have more holes in it.
> It is admittedly easier to blame a smaller tech company as a scapegoat than try to address the larger governmental or organizational actors that you actually have grievances against.
Or you can do both. You might as well suppose that people are critical of the mercenary company Blackwater because they want to avoid criticizing American foreign policy and military exploits. I don't think that's accurate at all; it's been my experience that people who are critical of one are very often critical of the other as well.
Similarly, it's been my experience that people critical of Palantir are also critical of the organizations and governments who use the services of Palantir.
Do you buy things from Amazon? They treat their warehouse employees terribly, so I hope not, lest you facilitate an immoral act.
Do you ever use Uber? They work to keep their drivers legally declared as contractors, despite treating them as employees in all other regards. That’s pretty immoral, so I hope you don’t encourage it by supplying them with business.
Machine Learning/AI adds an entirely new perspective on big data. It can see patterns in data that would be meaningless without it, correlations that can't be found in other ways. Some data is OK for a government to have if they can just look it up in exceptional cases, but not if they act on every single bit of data collected.
In essence, using ML is creating new data (or at least: Information from the raw data). This information can be very revealing and not at all in line with the scope the data was initially collected under, which is one of the principles of GDPR.
You can't really view data as static once it's collected. Combining with other existing data and new methods of data analysis make it a totally different picture in terms of privacy.
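A concrete way to see "analysis creates new information": raw data collected under one scope can, once analysed, yield a fact that was never explicitly recorded. This toy uses invented location pings to derive a likely home address that appears nowhere in the raw data's stated purpose:

```python
from collections import Counter

def infer_home(pings):
    """Return the most frequent location seen during nighttime hours (0-5),
    or None if there are no nighttime pings. Purely illustrative."""
    night = Counter(loc for hour, loc in pings if 0 <= hour < 6)
    return night.most_common(1)[0][0] if night else None

# Pings ostensibly collected "for traffic analysis":
pings = [
    (2, "Elm St"), (3, "Elm St"), (4, "Elm St"),   # nightly pattern
    (9, "Main St"), (13, "Main St"),               # daytime locations
]
```

No single ping says "this person lives on Elm St", yet the aggregate does, which is exactly the scope-creep concern the GDPR's purpose-limitation principle is aimed at.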
I could argue the exact same about literally any database used by the government. Their tools are just easier to use and more expressive.
If you don’t like the bullshit the government does, that seems like an issue you should take up with the government, no? You’re using a tech company as a scapegoat.
> Their tools are just easier to use and more expressive.
And that's the problem: they're giving governments tools that let them more easily and effectively infringe on the privacy rights of their citizens. Privacy isn't invaded when the data is collected, although that's a necessary step; it's invaded when a cop or bureaucrat queries that data to see what a citizen has done. Building them that tool makes you morally complicit.
~ "That's not my department," said Wernher von Braun. ~
As soon as the data is collected it is going to be used. That is where the privacy invasion is. If a massive data set on citizens leaks and becomes available online, people will still consider it an invasion of privacy, even if it was never "used" prior.
This is a ridiculous stance to take. If true, then open sourcing any form of software is morally corrupt since the government could then use it for evil.
Quite frankly, I don't think any reasonable person would consider selling software to the government to be morally corrupt unless they were reasonably confident the government would use it for evil. Right now, nobody has made a compelling-enough argument to make me believe the government will use this software for evil.
Intent matters. Palantir specifically makes surveillance tech, and specifically markets it to governments.
As for compelling arguments... is the history of most of the world's governments not enough for you? In the US, you can look at the census for a case in point: that data was used to chase draft dodgers during WW1, and to compile lists of Japanese Americans for internment during WW2. Curiously, by WW2 the federal census law had specific provisions preventing the Bureau from disclosing that information to any other government agency, precisely to prevent this kind of use; Congress simply repealed those provisions. The Bureau subsequently denied sharing that data and buried any leads, so we didn't have definitive proof until this century.
> This is a ridiculous stance to take. If true, then open sourcing any form of software is morally corrupt since the government could then use it for evil.
Some databases used by governments are for things like making sure people driving cars have received the required training/certifications to share public roads, or that their vehicles carry the necessary insurance.
Other databases get used to track down, torture, and murder critical journalists - by dismembering them with a bone saw while they're still alive.
Remind me again which kind of database Palantir staff work on?
Well, when I was at Palantir I helped police departments integrate a dozen or so databases, so looking someone up took one query rather than several, drastically cutting down the time needed for traffic stops.
The software was also used to run double-blind clinical drug trials; the ACL granularity was great for that. You could have different roles for drug manufacturers, doctors, and patients, none of which had complete information. Trial supervisors could then open up the data for analysis at the end of the trial.
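For readers unfamiliar with how ACL granularity keeps a trial double-blind, here is a minimal Python sketch. The role names, fields, and filtering logic are invented for illustration, not Palantir's actual API; the idea is just that each role sees only the fields it needs, so no single party can link patients to treatment arms until the trial closes.

```python
# Hypothetical sketch of role-based field-level access control for a
# double-blind trial. Roles and fields are illustrative only.
RECORD = {
    "patient_id": "P-001",
    "assignment": "treatment",   # drug vs. placebo arm (the blinded fact)
    "dosage_mg": 50,
    "outcome_score": 7.2,
}

# Each role may see only a subset of fields.
ROLE_FIELDS = {
    "manufacturer": {"dosage_mg"},
    "doctor": {"patient_id", "outcome_score"},
    "patient": {"patient_id"},
    "supervisor": set(RECORD),   # full view, but only after trial close
}

def view(record, role, trial_closed=False):
    """Return only the fields this role is allowed to see."""
    if role == "supervisor" and not trial_closed:
        allowed = {"outcome_score"}  # supervisors stay blinded mid-trial
    else:
        allowed = ROLE_FIELDS[role]
    return {k: v for k, v in record.items() if k in allowed}

print(view(RECORD, "doctor"))
# {'patient_id': 'P-001', 'outcome_score': 7.2}
```

The key property is that the blinded `assignment` field is invisible to every role until `trial_closed=True`, at which point supervisors can open the full data for analysis.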
Around the same time, the ability to use GPS-equipped phones to upload data over unreliable networks was put to work when Palantir teamed up with the Clinton Foundation and Team Rubicon to improve disaster relief coordination for hurricanes.
My coworkers and I (“Palantir staff”) worked on all of these.
It isn't simply a database, as I'm sure you know. There are multiple pipelines of information that need to be glued together, cross-correlated, etc. Palantir provides services that make policing citizens easier and allow police departments to overstep their boundaries.
Also, I'd like to point out that this isn't an "either/or" situation. Obviously the government is involved, as they no doubt put out the contracts, but Palantir is the one cashing the check. They've built their entire business model around big, secretive government contracts.
The same argument is actually used w.r.t. DHS/ICE using GitHub to work on software to better track illegal immigrants.
This is software that will ostensibly cut down on errors and speed up processing times for things like asylum claims, yet for some reason GitHub was called out as being complicit.
I think the argument against Palantir is better formed than an argument against Github based on specificity of the tool, but I think both are driven more by viral outrage than careful consideration of any moral calculus or actual hands-on knowledge of how the tools are used.
Palantir is _explicitly_ contracted by a government to create a tool for combining surveillance information to make tracking individuals more effective.
GitHub is providing tools (GitHub Enterprise, IIRC) that are presumably being used to create tools which track and deport individuals.
It’s a giant federated search engine... that’s about it. They don’t even hold any data themselves, it all stays on site with the customer.
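The "federated search" description can be sketched in a few lines of Python. Everything here is invented for illustration (the source names, records, and matching logic); the point is only the architecture: one query fans out to independent sources, and results are merged on the fly without copying any records into a central store.

```python
# Hypothetical sketch of federated search: the query is sent to each
# source, and each source answers from its own data. Nothing is
# copied into a central database.
def make_source(name, records):
    """Build a toy searchable source holding its own records."""
    def search(term):
        return [(name, r) for r in records if term in r.lower()]
    return search

# Two independent sources, each keeping its data "on site".
sources = [
    make_source("dmv", ["alice smith, license 123",
                        "bob jones, license 456"]),
    make_source("warrants", ["bob jones, warrant 789"]),
]

def federated_search(term):
    """Fan the query out to every source and merge the hits."""
    hits = []
    for source in sources:
        hits.extend(source(term))
    return hits

print(federated_search("bob jones"))
# [('dmv', 'bob jones, license 456'), ('warrants', 'bob jones, warrant 789')]
```

This also illustrates the earlier comment about traffic stops: one query against the federation replaces a separate lookup against each underlying database.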