I initially thought this was an announcement of a new pledge and figured, "they're going to forget about this the moment it's convenient." Then I read the article and realized, "Oh, it's already convenient."
Google is a megacorp, and while megacorps aren't fundamentally "evil" (for some definitions of evil), they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise.
> while megacorps aren't fundamentally "evil" (for some definitions of evil),
I think megacorps being evil is universal. It tends to be corrupt cop evil vs serial killer evil, but being willing to do anything for money has historically been categorized as evil behavior.
That doesn’t mean society would be better or worse off without them, but it would be interesting to see a world where companies pay vastly higher taxes as they grow.
You're talking about pre-Clinton consumerism. That system is dead. It used to dictate that the company that could offer the best value deserved to take over most of the market.
That's old thinking. Now we have servitization. Now the business who can most efficiently offer value deserves the entire market.
Basically, iterate until you're the only one left standing and then never "sell" anything but licenses ever again.
The bait-and-switch model is absolutely amazing as well. Start by offering a service covered with ads. Then add a paid tier to get rid of the ads. Next add a tier with both payment and ads. And finally add ads back to every possible tier. Not to mention keeping them embedded in the content the whole time.
To quote the email from Hulu that recently dropped into my inbox:
> We are clarifying that, as we continue to increase the breadth and depth of the content we make available to you, circumstances may require that certain titles and types of content include ads, even in our 'no ads' or 'ad free' subscription tiers.
So at this point they aren't even bothering to rename the tier from "ad free" even as they put ads in it. Or maybe it's supposed to mean "the ads come free with it" now? Newspeak indeed.
This goes back to the release of the no-ads Hulu plan, which at the time was due to fun shenanigans and weirdness around the exact licensing deals for a few shows. (At least one of those shows is VERY long-running now https://www.reddit.com/r/greysanatomy/comments/12prhpf/no_ad... - not sure if there have been any new ones through the years or currently)
Advertising is just the surface layer—the excuse. Digital ads rely on collecting as much personal data as possible, but that data is the real prize. This creates a natural partnership with intelligence agencies: they may not legally collect the data themselves, but they can certainly buy access.
This isn’t new. Facebook, for example, received early funding from In-Q-Tel, the CIA’s venture capital arm, and its origins trace back to DARPA’s canceled LifeLog project—a system designed to track and catalog people’s entire lives. Big Tech and government surveillance have been intertwined from the start.
That’s why these companies never face real consequences. They’ve become quasi-government entities, harvesting data on billions under the guise of commerce.
Years ago a friend working in security told me that every telco operator in Elbonia has to have a special room in their HQ that's available 24/7 to certain government officials. Men in black come and go as they please, and while what actually happens in that room remains a mystery, they can tap straight into the system from within, with no restrictions or traceability.
Growing up in the Soviet bloc, I took that story at face value.
After all, democracy was still a new thing, and people hadn't invented privacy concerns yet.
Since then I always thought that some sort of cooperation between companies like Facebook or Google and CIA/DOD was an obvious thing to everyone.
PRISM [1] is the best evidence of how short-lived most people's memories are. Microsoft, Yahoo, Google, and Facebook were the first 4 members. It makes it pretty funny when companies like Apple (who also joined more than a decade ago) speak about trying to defend customers' privacy against government intrusion. There's so much completely cynical corporate LARPing for PR.
And if one wants to know why big tech from China isn't welcome, be it phones or social media, it's not because of fear of them spying on Americans, but because of the infeasibility of integrating Chinese companies into our own domestic surveillance systems.
If you have ever seen the prank interview between Elijah Wood and Dominic Monaghan, "Do you wear wigs? Have you worn wigs? Will you wear wigs?" and Elijah breaks down laughing in total shock at how hilariously bad the interview is...
...I just picture a similar conversation with a CEO going: "Sir, shareholders want to see more improvement this quarter." CEO: "Do we run ads? Have we run ads? Will we run ads this time?" (The answer is inevitably yes to all of these)
That creates limits to the growth of an ad-based ecosystem.
So the thing to pay attention to is not the revenue growth or profit growth of a platform, but the price of an ad, the price to increase reach, the price to boost your post, the price of a presidential campaign, etc. These prices can't grow forever, just as with housing prices, or we get the equivalent of a housing bubble.
Want to destabilize the whole system? Pump up ad prices.
I prefer the angle that describes this as a shift from value production to value extraction. Value production means coming up with new goods or services, or new/better ways to make existing ones. Value extraction means looking at existing economic exchanges, and figuring out how to get X percent of some of them.
It was always a game of maximizing captured value. In such a game, creating value and capturing some portion of what you're producing is far less effective than value extraction, moving value around such that it's you capturing it, not someone else. A market, then, will by default encourage the latter strategy over the former. However, if the society in charge of a market observes value extraction occurring, it can respond by outlawing the particular extraction strategy being employed, and punish the parties participating. Then, for some time, market participants will turn to producing value instead, making more humble profits, until another avenue for extraction becomes available and quickly becomes the dominant strategy again.
This cycle continues until the market eats the forces that would seek to regulate it and rein in extractive practices. That is what we're seeing here: at least in the US, there is basically no political will behind identifying and punishing any new forms of harmful behavior, and we barely enforce existing laws regarding, e.g., monopolies. Common wisdom among neoliberals and conservatives both is that big companies are good for the economy, and it's best to tread lightly in terms of regulating their behaviors, lest we interrupt their important value production process. One wonders if there are perhaps financial incentives to be so pro-corporate.
I would argue that since the dawn of capitalism (whenever you place that), there have been moral structures in place to promote value production and stigmatize value extraction. The precise balance between the two moral verdicts changes back and forth over time. In the USA in the 21st century we seem to have entered a period where the promotion of value production is unusually low and simultaneously the stigmatization of value extraction has dropped close to zero.
All the more ironic nowadays because the most popular politicians are the highest value extractors, who moved the value production overseas, leaving the now-jobless voters angry instead with... immigrants/lgbtqa+/other races/other religions, who basically had no say and no role in the above move.
Don't forget the index funds that almost all retirement funds are required to put money into every month, versus the old-school stock market, where it was a market and not a cable bill (you pay for the whole bundle whether you want it or not).
It's easy to set up an IRA where you can trade individual securities instead of index funds if that's what you want. Most people aren't competent traders and will underperform the index funds.
Historically, unchecked corporate power tends to mirror the flaws of the systems that enable it. For example, the Gilded Age robber barons exploited weak regulations, while tech giants thrive on data privacy gray areas. Maybe the problem isn’t size itself, but the lack of guardrails that scale with corporate influence (e.g., antitrust enforcement, environmental accountability, or worker protections), but what do I know!
I guess corrupt cop vs serial killer is like amorality (profit-driven systems) vs immorality (active malice)? A company is a mix of stakeholders, some of whom push for ethical practices. But when shareholders demand endless growth, even well-intentioned actors get squeezed.
That word comes with a lot of boot-up code and dodgy dependencies.
I don't like it.
Did Robert Louis Stevenson make a philosophical error in 1882, supposing that a moral society (with laws etc) can contain within itself a domain outside of morals [0]?
What if I coined the word "alegal"?
Oh officer... what I'm doing is neither legal nor illegal, it's simply "alegal".
Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
> Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
I think if you look at quality of life and happiness ratings in Norway it's pretty clear it's far from "entirely undesirable". It's good for people to do things for reasons other than money.
Great, so we only want the real high-risk takers, the top gamblers, to play in the big league. Those who are so rich there's no way they can lose their personal comfort and are blind to the personal risk - and probably just as careless about anyone else's as well.
Don't we have that already? Bootstrapped startups with the founder's money on the line typically don't play in the 'big leagues' till way after the point where the founder is personally at risk.
Hardly. Per capita they export similar amounts of petroleum products, but Norway's GDP is 80k/person vs 30k/person in Saudi Arabia. Norway exports slightly more per person, but their production costs are significantly higher, which offsets it.
The difference is Norway's economy being far less dependent on petroleum, which makes up only 40% of their exports.
Higher taxes are the wrong solution to a very valid problem.
We all recognize that democracy is the correct method for political decision making, even though it's also obvious that in theory a truly benevolent dictator could make better decisions than an elected parliament; in practice such dictators don't really exist.
The same reasoning applies to economic decision making at the societal level. If you want a society whose economics reflects the will and ethics of the people, and which serves the benefit of normal people, the obvious thing is to democratize economic decision making. That means that all large corporations must be mostly owned by their workers in roughly 1/N fashion, not by a small class of shareholders. This is the obvious correct solution, because it solves the underlying problem rather than papering over the symptoms like taxation does. If shareholder-owned corporations are extracting wealth from workers or doing unethical things, the obvious solution is to take away their control.
Obviously, some workers will still make their own corporations do evil things, but at least it will be collective responsibility, not forced upon them by others.
The alternative is to make consumption the will of the people, so people buy things they want, and from vendors they like.
I think "this isn't free; you pay with ad views and your data is sold" is something that should be on a price tag on services that operate this way, though. It doesn't work if the price isn't clearly advertised.
> but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
One of the second order consequences of progressive taxation is that it increases gross wages for higher earners, as people care about their net pay being larger, not their gross pay.
An extreme example: in the UK the tax rate is an effective 60% between £100k and £120k (ish), so people's salaries get driven through that zone quickly. This obviously means there's less money to give to other people.
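For anyone who hasn't seen how that 60% band arises: the UK personal allowance is withdrawn at £1 for every £2 of income over £100k, so each marginal pound in that zone is taxed at 40% and also drags 50p of previously tax-free income into the 40% band. A rough sketch (simplified brackets, approximate current figures, 45% band and National Insurance ignored, not tax advice):

    # Simplified UK income tax with the personal-allowance taper.
    ALLOWANCE = 12_570     # standard personal allowance (approximate)
    BASIC_BAND = 37_700    # taxed at 20%; everything above at 40% here
    TAPER_START = 100_000  # allowance shrinks by £1 per £2 above this

    def tax(income: float) -> float:
        allowance = max(0, ALLOWANCE - max(0, income - TAPER_START) / 2)
        taxable = max(0, income - allowance)
        basic = min(taxable, BASIC_BAND)
        return basic * 0.20 + (taxable - basic) * 0.40

    for income in (95_000, 110_000, 124_000):
        marginal = tax(income + 1_000) - tax(income)
        print(income, f"~{marginal / 1_000:.0%} marginal rate")
    # -> ~40% below £100k, ~60% inside the taper zone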
We're talking about corporations here; where are they going to go? If you had a competent government, you would say "fine, then leave. But your wealth and business are staying here." At some point the government has to do its job. These corporations pull in trillions of dollars; it's wild to me to suggest that suddenly everyone would stop working and making money because they were taxed at a progressive rate. It's an absurd assumption to begin with.
We could literally have high speed rail, healthcare, the best education on the planet and have a high standard of living... and it would be peanuts to them. Instead we have a handful of people with more wealth than 99% of everyone else, while the bottom 75% of those people live in horrifying conditions. The fact that medical bankruptcy is a concept only in the richest country on earth is deeply embarrassing and shameful.
> This is a cliche you hear from right winger in any country that has a progressive tax system
Not a cliché - a fact. Let me explain.
The incentive structure of progressive taxation is wrong: it only works for the few percent that are extremely money hungry: the few that are willing to work for lower and lower percentage gains.
Normal people say "enough" and they give up once they have the nice house and a few toys (and some retirement money with luck). In New Zealand that is something like USD1.5 million.
I'm on a marginal rate of 39% in New Zealand. I am well off but I literally am not motivated to try and earn anything extra because the return is not enough for the extra effort or risk involved. No serial entrepreneurship for me because it only has downside risk. If I invest and win then 39%+ is taken as tax, but even worse is that if I lose then I can't claim my time back. Even financial losses only claw back against future income, and my taxable income could move to $0 due to a COVID-level event, so my financial risk is more than it might naively appear.
Taxation systems do not fairly reward risk. Especially watch people with no money taking high risks and paying no insurance, because the worst that can happen to them is bankruptcy.
New Zealand loses because the incentive structure for a founder is broken. We are an island so the incentive structure should revolve around bringing in overseas income (presuming the income is spent within NZ). Every marginal dollar brought into the economy helps all citizens and the government.
The incentives were even worse when I was working but was trying to found a company. I needed to invest time, which had the opportunity cost of the wages I wouldn't get as a developer (significant risk that can't be hedged and can't be claimed against tax). 9 times out of 10 a founder wins approximately $0: so expected return needs to be > 10x. A VC fund needs something like > 30x return from the 1 or 2 winning investments. I helped found a successful business but high taxation has meant I haven't reached my 30x yet - chances are I'll be dead before I get a fair return for my risk. I'm not sure I've even reached 10x given I don't know the counterfactual of what my employee income would have become. This is for a business earning good export income.
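To put numbers on that expected-value point (a back-of-the-envelope sketch using the commenter's own figures; the 39% rate is their quoted marginal income tax rate, applied here purely for illustration, since as replies below note the actual tax treatment of founder gains in NZ differs):

    # Founder EV sketch: if 9 times out of 10 the payoff is ~$0, the winning
    # case must return >10x the forgone wages just to break even in
    # expectation, and any tax on the upside raises that bar further.
    p_win = 0.10
    required = 1 / p_win                        # 10x to break even, pre-tax
    required_after_tax = required / (1 - 0.39)  # ~16.4x if the upside were taxed at 39%
    print(required, round(required_after_tax, 1))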
Incentive structures matter - we understand that for employees - however few governments seem to understand that for businesses.
Most people are absolutely ignorant of even basic economics. The underlying drive is the wish to take from those that have more than them. We call it the tall poppy syndrome down here.
I'm also on the 39% marginal income tax rate in New Zealand. That income tax rate isn't the problem. Keeping $60K out of every $100K extra salary I make is plenty of motivation to work harder to make the extra $100K... especially because the taxes paid aren't burned, they mostly go to things I care about.
The income tax rate isn't all that relevant to the costs and benefits of starting a company, so I don't understand that part of your story. The rewards for founding a successful company mostly aren't subject to income tax, and NZ has a very light capital gains regime.
I have started my own company and I do agree that there are some issues that could be addressed. For example, it would be fairer if the years I worked for no income created tax-deductible losses against future income.
But NZ's tax rates are lower than Australia and the USA and most comparable nations, and NZers start a lot of businesses, so I don't think that is one of our major problems at the moment.
> Keeping $60K out of every $100K extra salary I make is plenty of motivation to work harder
That's good that it motivates you. It doesn't motivate me any more. I'm not interested in "investing" more time for the reasons I have said.
> the taxes paid aren't burned, they mostly go to things I care about.
I'm pleased for you. I'd like to put more money towards things I care about.
> The income tax rate isn't all that relevant to the costs and benefits of starting a company
I am just less positive than you: it feels like if you win, you lose; if you lose, you lose bigger. I'm just pointing out that our government talks about supporting businesses, but I've seen the waste from the repeated attempts to monetise our scientific academics.
> The rewards for founding a successful company mostly aren't subject to income tax
Huh? Dividends are income. Or are you talking about the non-monetary rewards of owning a business?
> NZ has a very light capital gains regime
Which requires you to sell your company to receive the benefits of the lack of CGT. So every successful business in NZ is incentivised to sell. NZ sells its jewels. Because keeping a company means paying income tax every year. NZ is fucking itself by selling anything profitable - usually to foreign buyers.
The one big-ticket item I would like to save for is my retirement fund. But Labour/Greens want to take 50% to 100% of the gains on capital if you have over 2 million. A 4% drawdown on that bullshit-low amount is $80k/annum before tax, LOL. Say investments go up by 6% per year and you want to withdraw 4%. Then a 2% tax is 100% of your gains. Plus I'm certain they will introduce means testing for super before I am eligible. And younger people are even more fucked IMHO. The reality is I need to plan to pay for the vast majority of my own costs when I retire, but I also get to pay to support everybody else. I believe in socialist health care and helping our elderly, but the country is slowly going broke and I can't do much about that. I believe that our government will take whatever I have carefully saved - often to pay for people who were not careful (my peer group is not wealthy, so I see the good and the bad of how our taxes are spent). Why should I try to earn more to save?
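Spelling out the arithmetic in that last paragraph (a sketch under the commenter's own assumptions: a $2M fund, 6% growth, a 4% drawdown, and a hypothetical 2% annual levy on capital):

    # Retirement drawdown sketch under the commenter's assumptions.
    capital = 2_000_000
    growth, drawdown, levy = 0.06, 0.04, 0.02

    income = capital * drawdown               # $80,000/yr before income tax
    retained = capital * (growth - drawdown)  # $40,000 stays in the fund
    levy_bill = capital * levy                # $40,000: all the retained gains
    print(income, retained, levy_bill)

A 2% levy on capital really would equal 100% of the gains left after a 4% drawdown, and a 1% levy would take 50%, which is presumably where the 50%-to-100% range comes from.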
I find it hard to understand how $60K means no motivation but $100K would be highly motivating.
> I'd like to put more money towards things I care about.
You said later that you care about the public health system and helping the elderly. That's where a large percentage of our taxes go.
> Huh? Dividends are income. Or are you talking about the non-monetary rewards of owning a business?
No, I'm talking about selling all or part of the business. I agree with you that it's a problem our businesses often sell out to overseas interests who hollow out the company. But the general pattern of making most of your money by selling shares in the business is completely normal worldwide.
"the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating"
Sounds like the effort needed for bonuses here in the US. Why try, if the amount is largely arbitrary and generally lower than your base salary rate once you consider all the extra hours? Everything is a sham.
Which industry? Bonuses in the tech industry tend to be somewhat arbitrary and thus ineffective for motivating employees. Bonuses in other industries like trading or investment banking tend to be larger (sometimes more than base salary) and directly tied to individual performance and so they're highly effective at motivating ambitious employees.
Increasing marginal income tax rates on highly compensated employees might be a good policy overall. But where are we on the Laffer curve? If we go too far then it really hurts the overall economy.
As scale grows, so does the moral ambiguity. Megacorps default to “evil” because, when you act across a large number of circumstances and a large number of events, some of those actions will be evil too, particularly when economic factors are motivating behavior (implicitly or explicitly). Essentially, being “non-evil” becomes more expensive than the value it adds. There is always someone on the other end of a transaction, by definition.
My problem with this take is that you forget that corporations are made up of people, so in order for the corporation to be evil you have to take into account the aggregate desires and decision making of the employees and shareholders and, frankly, call them all evil. Calling them evil is kind of a silly thing to do anyway, but you cannot divorce the actions of a company from those who run and support it, and I would argue you can't divorce those actions from those who buy the products the company puts out either.
So in effect you have to call the employees and shareholders evil. Well those are the same people who also work and hold public office from time to time, or are shareholders, or whatever. You can't limit this "evilness" to just an abstract corporation. Not only is it not true, you are setting up your "problem" so that it can't be addressed because you're only moralizing over the abstract corporation and not the physical manifestation of the corporation either. What do you do about the abstract corporation being evil if not taking action in the physical world against the physical people who work at and run the corporation and those who buy its products?
I've noticed similar behavior with respect to climate change advocacy and really just "government" in general. If you can't take personal responsibility, or even try to change your own habits, volunteer, work toward public office, organize, etc. it's less than useless to rail about these entities that many claim are immoral or need reform if you are not personally going to get up and do something about it. Instead you (not you specifically) just complain on the Internet or to friends and family, those complaints do nothing, and you feel good about your complaining so you don't feel like you need to actually do anything to make change. This is very unproductive because you have made yourself feel good about the problem but haven't actually done anything.
With all that being said, I'm not sure how paying vastly higher taxes would make Google (or any other company) less evil or more evil. What if Google pays more taxes and that tax money does (insert really bad thing you don't like)? Paying taxes isn't a morally good or bad thing in itself.
People making meaningful decisions at mega corporations aren't a random sample of the population; they are self-selected to care a great deal about money and/or power.
Honestly if you wanted to filter the general population to quietly discover who was evil I’d have a hard time finding something more effective. It doesn’t guarantee everyone is actually evil, but actually putting your kids first is a definite hindrance.
The morality of the average employee on the other hand is mostly irrelevant. They aren’t setting policies and if they dislike something they just get replaced.
You'd never figure out who was "evil" because it's just based on your own interpretation of what evil is. Unless of course you want to join me as a moral objectivist? I don't think Google doing military work with the US government is evil. On the other hand, I think the influence and destruction caused by advertising algorithms is. Who gets to decide what is evil?
I take issue with "don't blame the employees". You need people to run these organizations. If you consider the organization to be evil you don't get to then say well the people who are making the thing run aren't evil, they're just following orders or they don't know better. BS. And they'd be replaced if they left? Is that really the best argument we have against "being evil"?
Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it! And all that says anyway is that those replacing those who leave are just evil too.
If you work at one of these companies or buy their products and you literally think they are evil you are either lying to yourself, or actively being complicit in their evil actions. There's just no way around that.
Take personal responsibility. Make tough decisions. Stop abstracting your problems away.
If your defense is trying to argue about what’s evil, you’ve already lost.
Putting money before other considerations is what’s evil. What’s “possible” expands based on your morality, it doesn’t contract. If being polite makes a sale you’re going to find a lot of polite salespeople, but how hard are they willing to push that extended warranty?
> Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it!
I’ve constrained what I’m willing to do and who I’m willing to work for based on my morality, have you? And if not, consider what that says about you…
> I’ve constrained what I’m willing to do and who I’m willing to work for based on my morality, have you? And if not, consider what that says about you…
This sort of discussion gets a bit tricky because it often turns out one person is not having a discussion; they're trying to advertise something about themselves.
> Putting money before other considerations is what’s evil.
Depends on the considerations and what you consider to be evil. My point wasn't to argue about what's evil, of course there is probably a few hundred years of philosophy to overcome in that discussion, but to point out that if you truly think an organization is evil it's not useful to only care about the legal fiction or the CEO or the board that you won't have any impact on - you have to blame the workers who make the evil possible too, and stop using the products. Otherwise you're just deceiving yourself into feeling like you are doing something.
I was writing about the general case. I apologize if that wasn't clear from the start. I don't know anything about you personally though I'm sure we'd have some great conversations over a glass of wine (or coffee or whatever :) )!
> The fact you assume people are going to do things they believe to be morally reprehensible is troubling to me.
This seems to be very common behavior in my experience. Perhaps the rhetoric doesn't match the true beliefs. I'm not sure.
A large corporation is more than the sum of its owners and employees, though. Large organizations in general have an emergent phenomenon - past a certain threshold, they have a "mind of its own", so to speak, which - yes - still consists of individual actions of the people making up the organization, but those people are no longer acting as they normally would. They get influenced by corporate culture, or fall in line because they are conditioned to socially conform, or follow the (morally repugnant) rule book because otherwise they will be punished, etc. It's almost as if it were a primitive brain with people as neurons, forced into configurations that, above all, are designed to perpetuate its own existence.
Corporations, separate from the people in them, are set up in a way that incentivizes bad behaviour, based on which stakeholders are considered and when, along with what mechanisms result in rewards and which ones get you kicked out.
The architecture of the system is imperfect and creates bad results for people.
By definition we can never know for sure, but I believe the number of people who stay silent is many times bigger than those who voice their opinion. They've learned it is unproductive (as you say) or worst case, you're told you've got it all wrong technically speaking.
Complaining is not unproductive, it signals to others they are not alone in their frustrations. Imagine that nobody ever responds or airs their frustrations; would you feel comfortable saying something about it? Maybe you're the only one, better keep quiet then. Or how do you find people who share your frustrations with whom you could organise some kind of pushback?
If I were "this government", I would love for people to shut up and just do their job, pay taxes, and buy products (you don't have to buy them from a megacorp, just spend it - and oh yeah, good luck finding places to buy products from non-megacorps).
My point was that complaining isn't enough and in my experience most people just complain but don't even take the smallest action in line with their views because it inconveniences them. Instead they lull themselves to sleep that something was done because they complained about it, and there's no need to adjust anything in their lives because they "did all they can do".
Instead of taking action they complain, set up an abstract boogeyman to take down, and then nobody can actually take action to make the world better (based on their point of view) because there's nothing anyone can do about Google the evil corporation because it's just some legal fiction. Bonus points for moralizing on the Internet and getting likes to feel even better about not doing anything.
But you can do something. If someone thinks Google is evil they can stop using Gmail or other Google products and services, or even just reduce their usage - maybe you can switch email providers but you only have one good map option. Ok at least you did a little more than you did previously.
I don't really agree with some of your assumptions. At many companies, many of the people also are evil. Many people who hold shares and public office are also evil.
I don't think it's necessary to conclude that because a company is evil then everyone who works at the company is evil. But it's sort of like the evilness of the company is a weighted function of the evilness of the people who control it. Someone with a small role may be relatively good while the company overall can still be evil. Someone who merely uses the company's products is even more removed from the company's own level of evil. If the company is evil it usually means there is some relatively small group of people in control of it making evil decisions.
Now, I'm using phraseology here like "is evil" as a shorthand for "takes actions that are evil". The overall level of evilness or goodness of a person is an aggregate of their actions. So a person who works for an evil company or buys an evil company's products "is evil", but only insofar as they do so. I don't think this is even particularly controversial, except insofar as people may prefer alternative terms like "immoral" or "unethical" rather than "evil". It's clear people disagree about which acts or companies are evil, but I think relatively few people view all association with all companies totally neutrally.
I do agree with you that taking personal responsibility is a good step. And, I mean, I think people do that too. All kinds of people avoid buying from certain companies, or buy SRI funds or whatever, for various ethically-based reasons.
However, I don't entirely agree with the view that says it's useless or hypocritical to claim that reform is necessary unless you are going to "do something". Yes, on some level we need to "do something", but saying that something needs to be done is itself doing something. I think the idea that change has to be preceded or built from "saintly" grassroots actions is a pernicious myth that demotivates people from seeking large-scale change. My slogan for this is "Big problems require big solutions".
This means that it's unhelpful to say that, e.g., everyone who wants regulation of activities that Company X does has to first purge themselves of all association with Company X. In many cases a system arises which makes such purges difficult or impossible. As an extreme, if someone lives in an area with few places to get food, they may be forced to patronize a grocery store even if they know that company is evil. Part of "big solutions" means replacing the bad ways of doing things with new things, rather than saying that we first have to get rid of the bad things to get some kind of clean slate before we can build new good things.
You could use this logic to posit that any government, group, system, nation state, militia, business, or otherwise, isn't "evil" because you haven't gauged the thoughts, feelings, and actions of every single person who comprises that system. That's absurd.
If using AI and other technology to uphold a surveillance state, wage war, do imperialism, and do genocide... isn't evil, then I don't know if you can call anything evil.
And the entire point of taxes is that we all collectively decide that we all would be better off if we pooled our labor and resources together so that we can have things like a basic education, healthcare, roads, police, bridges that don't collapse etc.. Politicians and corporations have directly broken and abused this social contract in a multitude of ways, one of those ways is using loopholes to not pay taxes at the same rate as everyone else by a large margin... another way is paying off politicians and lobbying so that those loopholes never get closed, and in fact, the opposite happens. So yes, taxing Google and other mega-corporations is a single, easily identifiable, action that can be directly taken to remedy this problem. Though, there is no way around solving the core issue at hand, but people have to be able to identify that issue foremost.
Agreed, I think part of it boils down to the concept of 'limited liability' itself which is a euphemism for 'the right to carry out some degree of evil without consequence.'
Also, scale plays a significant part as well. Any high-exposure organization which operates on a global scale has access to an extremely large pool of candidates to staff its offices... And such candidate pools necessarily include a large number of any given personas... Including large numbers of ethically-challenged individuals and criminals. Without an interview process which actively selects for 'ethics', the ethically-challenged and criminal individuals have a significant upper-hand in getting hired and then later wedging themselves into positions of power within the company.
Criminals and ethically-challenged individuals have a bigger risk appetite than honest people so they are more likely to succeed within a corporate hierarchy which is founded on 'positive thinking' and 'turning a blind eye'. On a global corporate playing field, there is a huge amount of money to be made in hiding and explaining away irregularities.
A corporate employee can do something fraudulent and then hold onto their job while securing higher pay, simply by signaling to their employer that they will accept responsibility if the scheme is exposed; the corporate employer is happy to maintain this arrangement and feign ignorance while extracting profits so long as the scheme is kept under wraps... Then if the scheme is exposed, the corporation will swiftly throw the employee under the bus in accordance with the 'unspoken agreement'.
The corporate structure is extremely effective at deflecting and dissipating liability away from itself (and especially its shareholders) and onto citizens/taxpayers, governments and employees (as a last layer of defense). The shareholder who benefits the most from the activities of the corporation is fully insulated from the crimes of the corporation. The scapegoats are lined up, sandwiched between layers of plausible deniability in such a way that the shareholder at the end of the line can always claim complete ignorance and innocence.
What is Googs going to do, leave money on the table?
And if Googs doesn't do it, someone else will, so it might as well be them that makes money for their shareholders. Technically, couldn't activist shareholders come together and claim that, by not going after this market, the leadership should be replaced with those who would? After all, share price is the only metric that matters.
I think the poster is applying your statement about leaving money on the table. A structural requirement to never leave money on the table is a Moloch-style result that leads to the system deteriorating into just stealing as much shit as possible.
What you are saying is: optimising for commercial success is incompatible with morality. The conclusion is that publicly traded megacorps must inevitably trend towards amorality.
So yes, they aren't "evil" but I think amorality is the closest thing to "evil" that actually exists in the real world.
I don't buy that argument. There are things Google does better than competitors, so them doing an evil thing means they are doing it better. Also, they could be spending those resources on something less evil.
Remember when the other AI companies wanted ClosedAI to stop "for humanity's sake", when all it meant was time for them to catch up? None of these companies are "good". They all know that as soon as one company does it, they all must follow, so why not lead?
A megacorp is made up of people. So it's people who are fundamentally evil.
The main thing here I think is anonymity through numbers and complexity. You and thousands of others just want to see the numbers go up. And that desire is what ultimately influences decisions like this.
If google stock dropped because of this then google wouldn't do it. But it is the actions of humans in aggregate that keeps it up.
Megacorporations are scapegoats when in actuality they are just a set of democratic rules. The corporation is just a window into the true nature of humanity.
You're half right. Corporations are just made of people. But, they're more than the sum of their parts. The numbers and complexity do more than provide anonymity: they provide a mechanism where individuals can work in concert to accomplish bad things in the aggregate, without (necessarily) requiring any particular individual to violate their conscience. It just happens through the power of incentives and specialization. If you're in upper management, the complexity also makes it easier to turn a blind eye to what is happening down below.
Not a useful framing in my view. People follow private incentives. Private incentives are by default not perfectly aligned with external stakeholders. That leads to "evil" behavior. But it's not the people or the org, it's the incentives. You can substitute other people into the same system and get the same outcome.
People have the incentive to not do evil and to do evil for money. When you abstract the evil away into 1 vote out of thousands then you abstract responsibility and everyone ends up in aggregate doing an inconsequential evil and it adds up to a big evil.
>A megacorp is made up of people. So it's people who are fundamentally evil.
That is to make a mistake of composition. An entity can have properties that none of its parts have. A ball made out of bricks is round, but none of the bricks are round. You might be evil; your cells aren't evil.
It's often the case that institutions are out of alignment with their members. It can even be the case that all participants of an organization are evil, but the system still functions well (that's usually one of the arguments for markets, which are one such system). When creating an organization, that is effectively the most basic task: how to structure it such that the organization keeps functioning well even when its individual members are up to no good.
Ethical pledges from corporations, especially ones as large as Google, are PR tools first and foremost. They last only as long as they align with strategic and financial interests.
A paperclip-maximizing robot making the excuse that it's just maximizing paperclips: that's what it was designed to do, there's even a statute saying that robots must do only what they were designed to do, so it's not evil, just amoral.
The weird thing is that for corporations, it's humans running the whole thing.
> they’re amoral and are designed to maximize profits
Isn't that a contradiction? Morality is fundamentally a sense of "right and wrong". If they reward anything that maximizes short term profit and punish anything that works against it then it appears to me that they have a simple, but clearly defined sense of morality centered around profit.
I guess a question becomes, how does dropping these self-imposed limitations work as a marketing exercise? Probably most of their customers or prospective customers won't care, but will a cheery multi-colored new product land a little differently? If Northrop Grumman made a smart home hub, you might be reluctant to put it in your living room.
They are dropping these pledges to avoid securities lawsuits. “Everything is securities fraud” and presumably if they have a stated corporate pledge to do something, and knowingly violate it, any drop in the stock price could use this as grounds.
Being a defense contractor isn't a problem that a little corporate rearrangement can't fix. Put the consumer division under a new subsidiary with a friendly name and you're golden. Even among the small percentage who know the link, it's likely nobody will really care. For certain markets ("tacticool" gear, consumer firearms) being a defense contractor is even a bonus.
"We won't use your dollars and efforts for bad and destructive activities, until we accumulate enough of your dollars and efforts that we no longer care about your opinions".
Not evil, perhaps, but run by Moloch[1] -- which is possibly just as bad. Their incentives are set up to throw virtually all human values under the bus because even if they don't, they will be out-marginal-profited by someone that does.
This is a very important point to remember when assessing ideas like "Is it good to build swarms of murderbots to mow down rioting peasants angry over having expenses but no jobs?" Most people might answer "no," but if the people with money answer "yes," that becomes the market's objective. Then the incentives diffuse through the economy and you don't just get the murderbots, you also get the news stations explaining how the violent peasants brought this on themselves and the politicians making murderbots tax deductible and so on.
It is partially the market's fault. If they were demonized for this, there'd at least be a veneer of trying to look moral. Instead they can simply go full mask-off. That's why you shouldn't tolerate the intolerant.
I have full faith that the market[1] will direct the trolley onto the morally optimal track. Its invisible hand will guide mine when I decide or decide against pulling the lever. Either way, I can be sure that the result is maximally beneficial to the participants, myself included.
The magic market fairy godmother has decided that TVs with built in ads and spyware are good for you. The market fairy thinks this is so good for you that there are no longer any alternatives to a smart TV besides "no tv"
The market fairy has also decided that medication commercials on TV are good for you. And that your car should report your location, speed, and driving habits to your insurer, car manufacturer, and their 983,764 partners at all times.
> they are fundamentally unconcerned with goodness or morality,
I would argue that is fundamentally evil, because evil pays the best. It's like drunk driving: on an empty road it can only harm you, but we live in a society full of other people.
> they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise
This is flatly untrue. Corporations are made up of humans who make decisions. They are indeed concerned with goodness and/or morality. Saying otherwise lets them off the hook for the explicit decisions they make every day about how to operate their company. It's one reason why there are shareholder meetings, proxy votes, activist investors, Certified B-Corporations, etc.
A megacorp is amoral. They have no concern over an individual anymore than a human has concern for an ant, because individuals simply don’t register to them. The ant may regard the human as pure evil for the destruction it rains upon its colony, but the ants are not even a thought in the human’s mind most of the time.
> they are fundamentally unconcerned with goodness or morality
No, no. Call a spade a spade. This behavior and attitude is evil. Corporations under modern American capitalism must be evil. That's how capitalism works.
You succeed in capitalism not by building a better mousetrap, but by destroying anyone who builds a better mousetrap than you. You litigate, acquire, bribe, and rewrite legislation to ensure yours is the best and only mousetrap available to purchase, with a token 'competitor' kept on life support so you can plausibly deny anticompetitive practices.
If you're a good company trying to do good things, you simply can't compete. The market just does not value what is good, just, or beneficial. The market only wants the number to go up, and to go up right now at any cost. Amazon will start pumping out direct clones of your product for pennies. What are you gonna do, sue Amazon?! Best of luck.
"The market" is just a lot of people making decisions about what to do with their money. If you want the market to behave differently, be the change you want to see, and teach others to do the same.
This, but broader. Goodness and morality are subjective and, more importantly, relative measures, which makes them useless in many situations (such as this one).
While knowing this seems useless, it's actually the missing intrinsic compass, and the cause of a lot of bad and stupid behavior (by the definition that something is stupid if it's chosen knowing it will cause negative consequences for the doer).
Everything should primarily be measured against its primary goal. For "for-profit" companies, that goal is obvious from their name and definition.
That nothing should be assumed beyond what's stated is the premise of any contract, whether commercial, public, or personal (like friendship), and it's a basic tool for debate and decision making.
I want to be upset over this in an exasperated, oddly naive "why can't we all get along?" frame of mind. I want to, because I know how I would like the world to look, but as a species we, including myself, continually fail to disappoint when it comes to nearly guaranteed self-destruction.
I want to get upset over it, but I sadly recognize the reality of why this is not surprising to anyone. We actually have competitors in that space who will do that and more. We have already seen some of the more horrifying developments in that area... and, when you think about it, those are the things that were allowed to be shown publicly. All the fun stuff is happening behind closed doors, away from social media.
A vague “stuff is happening behind closed doors” isn’t enough of a reason to build AI weapons. If you shared a specific weapon that could only be countered with AI weapons, that might make me feel differently. But right now I can’t imagine a reason we’d need or want robots to decide who to kill.
When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of control.
The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.
Have a look at what explosive drones are doing in the fight for Ukraine.
Now tell me how you counter a thousand small EMP-hardened autonomous drones intent on delivering an explosive payload to one target, without AI of some kind?
How about 30k drones coming from a shipping vessel in the port of Los Angeles that start shooting at random people? To insert a human into the loop (somehow rapidly waking up, moving, and logging in hundreds of people to make the kill/no-kill decision per target) would mean accepting way more casualties.
What if some of the 30k drones were manned?
The timeframes of battles are drastically reduced with the latest technology to where humans just can't keep up.
I guess there's a lot missing in semantics, is the AI specifically for targeting or is a drone that can adapt to changes in wind speed using AI considered an AI weapon?
At the end of the day though, the biggest use of AI in defense will always be information gathering and processing.
I guess it won't be long until there are drones which can take out drones autonomously. Somewhat neutralizing the threat... provided you have enough capable drones yourself :)
I agree. I don't think there's really a case for the US developing any offensive weapons. Geographically, economically and politically, we are not under any sort of credible threat. Maybe AI based missile defense or something, but we already have a completely unjustified arsenal of offensive weapons and a history of using them amorally.
You think the cartels aren't attacking us because we have missiles that can hit Mexico? I don't agree. Somewhat tangentially, the cartels only exist because the US made recreational drugs illegal.
Not sure where the missiles came from, you said all offensive weapons so in my mind I was picturing basic firearms.
Drug trade might be their most profitable business but I think you're missing a whole lot of cultural context by saying the US's policy on drugs is their sole reason for existing. Plenty of cartels deal in sex trafficking, kidnapping, extortion, and even mining and logging today.
> AI weapons are obviously dangerous, and could easily get out of control.
The real danger is when they can't. When they, without hesitation or remorse, kill one or millions of people with maximum efficiency, or "just" exist with that capability, to threaten them with such a fate. Unlike nuclear weapons, in case of a stalemate between superpowers they can also be turned inwards.
Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need to have ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.
The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automated weapons, but literally can't/won't kill humans, because that would remain something only human soldiers do? Quite the elephant on the couch IMO.
People try to cope and say others are guided by lies. In the US, people knew exactly what they were getting, and I'm sure the same is true in other "democracies".
What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.
“Grownups never understand anything by themselves, and it is tiresome for children to be always and forever explaining things to them”
- Antoine de Saint-Exupery, The Little Prince
That would be removing information, and strictly worse than including it.
Communication is about communicating information; sometimes a terse, short, and aggressive style is the most effective way. It activates neurons in a way a paragraph of polite argumentation doesn't.
The contention of your respondents and downvoters is that regardless of your intention, the extra information actually communicated is "I'm an asshole".
More accurately, in the context of the comment, it's "I'm gonna be an asshole to you because I think you don't have the life experience I do", which is at least some kind of signal.
“More effective” at what? No one is ever going to be convinced by an argument that begins with an insult. So what do you mean by it will be more effective?
Do you honestly think an insult never brought about a change in a person? You never think a carefully landed and accurate insult made someone reconsider their position?
Weird, because in my experience, that has happened to every single person I know and myself. Whether it's at the start or end of a comment is not really the point.
Maybe you'd prefer it if we were all maximally polite drones, but that's not how humans are, going back to GP's point, and I don't think it's a state that anyone truly wants either.
Humans' short context windows, combined with too many areas to research and stay up to date on, are why I don't believe any version of democracy I've seen can succeed, and they're the only real argument I see for some kind of ASI government/policing (once we solve the whole universal judgement system issue). I'd love a world where you would be assisted through tax season, ill-intentioned drivers were properly incentivized not to risk others' lives, and you could at least be made aware before breaking laws.
Eliminating the need to lie/misguide people to sway them would be such a crazy world.
Not the GP, but I think what they're getting at is that the Aes Sedai can deceive without saying anything untruthful. So a hypothetical truth serum wouldn't necessarily guarantee honesty.
I think we can assume good faith and that the grandparent merely forgot to add "in combat" to that statement, rather than deliberately trying to downplay the use of Zyklon B.
It took the use of poison gas to get countries on board, and some will still use it. Just more carefully.
Would China, Russia, or Iran agree to such a preemptive AI weapons ban? Doubtful, it’s their chance to close the gap. I’m onboard if so, but I don’t see anything happening on that front until well after they start dominating the landscape.
The path we're on was inevitable the second man discovered fire.
No matter which way you look at it, we live on a planet where resources are scarce. Which means there will be competition. Which means there will be innovation in weaponry.
That said, we've had nukes for decades, and have collectively decided to not use them for decades. So there is some room for optimism.
They may do as much as us, but not more. Let's stop pretending every nation that developed nukes dropped them on a city. Nobody else has proven they are willing to go as far as the US.
Nukes didn't wipe us out. Neither will AI. It never ends with doomsday fearmongering, but that's because fear sells. Or better yet, fear justifies spending.
Technically the US has never dropped nukes, those were atomic bombs.
Second, I don't understand how the atomic bomb argument makes sense. Germany was developing them and would have used them if it had gotten there first.
Are you suggesting the US really is truly the only nation that would ever have used atomic weapons? That if Japan made it first they would have spared China or the US?
Most of the early research into computers was funded for military applications. There is a reason why Silicon Valley became a hub for technological development.
Is this more or less ethical than OpenAI getting a DoD contract to deploy models on the battlefield less than a year after saying that would never happen, with the excuse being well we only meant certain kinds of warfare or military purposes, obviously. I guess my question is, isn't there something more honest about an open heel-turn, like Google has made, compared to one where you maintain the fiction that you're still trying to do the right thing?
I think it's unfair to bring up OpenAI's commitment to its own principles as any sort of bar of success for anyone else. That's a bit like saying "Yes, this does look like they're yielding to foreign tyrants, but is this more or less ethical than Vidkun Quisling's tenure as head of Norway?"
At least Google employees will sign petitions and do things that follow a moral code.
OpenAI is sneaky and slimy, and headed by a psycho narcissist. Makes Pichai look like a saint.
Ethically, it’s the same. But if someone was pointing a gun at me I’d rather have someone with some empathy behind the trigger rather than the personification of a company that bleeds high level execs and… insert many problems here
> At least Google employees will sign petitions and do things that follow a moral code.
It hardly matters what employees think anymore when the executives are weather-vanes who point in the direction of wealth and power over all else (just like the executives at their competitors).
In case you missed it, a few days back Google asked all employees who don't believe in their "mission" to voluntarily resign.
That's not at all what happened. One of Google's divisions offered a "voluntary exit" in lieu of or in addition to an upcoming layoff, and the email announcing it suggested that it could be a good option for some folks, for example people who are struggling or who don't like Google's direction.
That is not the same thing as asking everyone who doesn't believe in the mission to please resign.
Which rhymes pretty well with not believing in their mission. They are telling people to leave instead of trying to influence the direction from the inside.
One of my chief worries about LLMs for intelligence agencies is the ability to scale textual analysis. Previously there at least had to be an agent taking an interest in you; today an LLM could theoretically read all text you've ever touched and flag anything from legal violations to political sentiments.
This has already been possible long before LLMs came along. I also doubt that an LLM is the best tool for this at scale, if you're talking about sifting through billions of messages it gets too expensive very fast.
It's only expensive if you throw all the data directly at the largest models you have. But the usual way to apply LMs to such large amounts of data is to cascade them: very small, fast classifiers run first to weed out anything vaguely suspicious (and you train them to be aggressive: false positives are okay, false negatives are not). Whatever gets through is reviewed by a more advanced model. Repeat the loop with as many stages as needed for the best throughput.
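A minimal sketch of that staged/cascade setup, purely for illustration: the scoring functions below are toy stand-ins for real models of increasing cost, and every name and threshold here is hypothetical, not any real system.

```python
# Toy sketch of a staged (cascade) classifier pipeline: cheap, aggressive
# filters run first; only survivors reach the expensive model.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Stage:
    name: str
    score: Callable[[str], float]  # probability the text is "of interest"
    threshold: float               # drop anything scoring below this

def run_cascade(messages: Iterable[str], stages: List[Stage]) -> List[str]:
    """Pass messages through increasingly expensive classifiers.

    Early stages are tuned to over-flag (false positives flow onward,
    false negatives are rare), so the expensive final stage only sees
    a small fraction of the original volume.
    """
    survivors = list(messages)
    for stage in stages:
        survivors = [m for m in survivors if stage.score(m) >= stage.threshold]
    return survivors

# Hypothetical scorers standing in for models of increasing cost.
cheap_keyword_score = lambda m: 1.0 if "plan" in m.lower() else 0.0
big_model_score = lambda m: 0.9 if "meet" in m.lower() else 0.1

flagged = run_cascade(
    ["lunch plans?", "plan to meet at the dock", "cat photos"],
    [Stage("keyword filter", cheap_keyword_score, 0.5),
     Stage("large-model review", big_model_score, 0.8)],
)
print(flagged)  # ['plan to meet at the dock']
```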
No, OP is right. We are truly at the dystopian point where a sufficiently rich government can track the loyalty of its citizens in real time by monitoring all electronic communications.
Also, "expensive" is relative. When you consider how much US has historically been willing to spend on such things...
LLMs can do more than whatever we had before. Sentiment analysis and keyword searches only worked so well; LLMs understand meaning and intent. Cost and scale won't be bottlenecks for long.
I feel like we’re just in that period of Downton Abbey where everyone is waiting for World War I to start. Everyone can feel that it’s coming and no one can do anything about it.
The reality is that in a war between the West and Russia/Iran/North Korea/China, or whomever we end up fighting, we’re going to do whatever we can so that Western civilization and soldiers survive and win.
Ultimately Google is a Western company, and if war breaks out, not supporting our civilization/military would be wildly unpopular and would turn them into a pariah; anything to the contrary was never going to happen.
The reason war may be coming is that the West is falling apart. The US is isolating itself and bullying its allies. Alternative powers wanting to do something expansionist have never had a better moment to act.
There was no war forthcoming between an integrated West and any other power. War is coming because there no longer is a West.
The reasons are not the main focus here. The fact is that China's aggressive stance on Taiwan, Russia's invasion of Ukraine, and its alignment with China, North Korea, and Iran are leading to military buildups and alliances worldwide. Google, being a company founded and controlled by Americans, is likely to support the effort if a war occurs, rather than remain passive while their friends' and families' children are dying.
Today people have differing views of nuclear weapons, but many who fought near Japan and survived believe the bomb saved their lives.
It's easy to pretend you don't have a side when there is peace, but in this environment Google is going to take a side.
So... when Russian tanks start rolling toward Berlin and Chinese troops are marching along that nice new (old) road they finished fixing up, on their way to Europe (which looks possible), you think there will be no West?
If the world is to be divided Europe is the lowest hanging and sweetest fruit.
I think there will still be a West even if there is a King in the US demanding fealty to part of it. We are the same as they are; it's ridiculous to pretend we aren't.
Ideology is one thing, survival of people and culture is another.
It is interesting how these companies shift with the political winds.
Just like Meta announced some changes around the time of the inauguration, I'm sure Google management has noticed the AI announcements, and they don't want to be perceived in a certain way by the current administration.
I think the truth is more in the middle (there is tons of disagreement within the company), but they naturally care about how they are perceived by those in power.
I would say it's natural. Their one and only incentive isn't, as they'd like to tell you, to "make the world a better place" or some similar awkward corpo charade, but to make a profit. That's the purpose companies are created for, and they always follow it.
Sure, but I'd also say that the employee base has a line that is different than the government's, and that does matter for making profit. Creative and independent employees generally produce more than ones who are just following what the boss says
Actually, this reminds me of when Paul Graham came to Google, around 2005. Before that, I had read an essay or two, and thought he was kind of a blowhard.
But I actually thought he was a great speaker in person, and that lecture changed my opinion. He was talking about "Don't Be Evil", and he also said something very charming about how "Don't Be Evil" is conditional upon having the luxury to live up to that, which is true.
That applies to both companies and people:
- If Google wasn't a money-printing machine in 2005, then "don't be evil" would have been less appealing. And now in 2020, 2021, ... 2025, we can see that Google clearly thinks about its quarterly earnings in a way that it didn't in 2005, so "don't be evil" is too constraining, and was discarded.
- For individuals, we may not pay much attention to "don't be evil" early in our careers. But it is more appealing when you're more established, and have had a couple decades to reflect on what you did with your time!
I see it as the natural extension of the Chomsky "manufacturing consent" propaganda model. The people in key positions of power and authority know who their masters are, and everyone below them falls into line.
A country that now threatens the annexation of Greenland and advocates for a complete resettlement of all Palestinians to Jordan and Egypt certainly needs weapons for crowd control.
These weapons could also come in handy domestically if people find out that both parties screw them all the time.
I wonder why people claim that China is a threat outside of economics. Has China tried to invade the US? Has Russia tried to invade the EU? The answer is no. The only current threats to the EU come from the orange man.
The same person who also revoked the INF treaty. The US now installs intermediate range nuclear missiles in Europe. Russia does so in Belarus.
So both great powers have convenient whipping boys to be nuked first, after which they will get second thoughts.
It is beyond ridiculous that both the US and Russia constantly claim that they are in danger, when all international crises in the last 40 years have been started by one of them.
"Russia hasn't tried to invade the EU" is quite weasel-word-y. They certainly have invaded countries in Europe, specifically Ukraine; the only reason they didn't invade countries in the European Union itself is that would trigger a war that they would face massive casualties from and inevitably lose, in part due to NATO alliances.
Military power is what has kept the EU safe, and countries without strong enough military power — such as Ukraine, which naively gave up its nuclear arsenal in the 90s in exchange for Russian promises to not invade — are repeatedly battered by the power-hungry.
Isn’t China building a large modern sea fleet and increasing military pressure on many of our allies? I would not call that threat illusory. Also, their economic policies are very predatory: they support other countries in exchange for things which cannot be taken back. Why invade when you can just take what you need?
The orange man is completely ineffectual on both fronts: he will not spend the money on the military, and he is too inept to make a deal that doesn’t cost us in the long run.
I'm guessing this will be a somewhat controversial view here, but I think this is net good. The world is more turbulent than at any other time in my life, there is war in Europe, and the U.S. needs every advantage it can get to improve its defense. Companies like Google, OpenAI, Microsoft, can and should be working with the government on defense projects -- I would much rather the Department of Defense have access to the best tools from the private sector than depend on some legacy prime contractor that doesn't have any real tech capabilities.
> the U.S. needs every advantage it can get to improve its defense
That’s one of the reasons for the turbulent times.
Let’s face the truth: most defensive technology can easily be used for offense, and given the state of online security, every advance ends up in the wrong hands.
Maybe it’s time to pause to make it more difficult for those wrong hands.
So the US needs to develop AI faster than the dictators to keep ahead of them, but not so fast that it leaks advancements that accelerate the dictators' AI?
There is no staying ahead.
If one side makes progress, the other side gets access to it too. Too many people are involved, and there is too little security to keep it secret.
I guess you could put that on the U.S.'s plate, and no doubt America has caused many issues around the world, but I think in general it's a good actor. The biggest conflicts today: Ukraine -- I would squarely put this on Russia, nothing to do with the U.S.; Sudan -- maybe it's my lack of knowledge, but I don't think it's fair to place much responsibility on the U.S. (especially relative to other actors); ditto DRC/Rwanda.
Yes, many defensive uses of technologies can be used for offense. When I say defense, I also include offense there as I don't believe you can just have a defensive posture alone to maintain one's defense, you need deterrence too. Personally I'm quite happy to see many in Silicon Valley embrace defense-tech and build missiles (ex. recent YC co), munitions, and dual-use tech. The world is a scary and dangerous place, and awful people will take advantage of the weakness of others if they can. Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant
> Ukraine -- I would squarely put this on Russia, nothing to do with the U.S.
Every kinetic reaction by Russia in Georgia and Ukraine is downstream of major destabilizing non-kinetic actions by the US.
You don't think the US fomenting revolutions in Russia's near-abroad was in any way a contributing factor to Russian understanding of the strategic situation on its western border? [1] You don't think the US unilaterally withdrawing from the ABM treaty[2], and then following that up with plans to put ABMs in Eastern Europe[3], were factors in the security stability of the region? You don't think that the US pushing to enlarge NATO without adjusting the CFE treaty to reflect the inclusion of new US allies had an impact? [4][5] It's long been known that the Russian military lacked the capacity for sustained offensive/expeditionary operations outside of its borders.[6][7] Until ~2014 it didn't even possess the force structure for peer warfare, as it had re-oriented its organization for counter-insurgency in the Caucasus. So what was driving US actions in Eastern Europe? This was a question US contrarians and politicians such as Pat Buchanan were asking as early as 1997. We've had almost 3 decades of American thinkers cautioning that pissing around in Russia's western underbelly would eventually trigger a catastrophic reaction[8], and here we are, with the Ukrainians paying the butcher's bill.
In the absence of US actions, the kleptocrats in Moscow would have been quite content continuing to print money selling natural resources to European industry and then wasting their largess buying up European villas and sports teams. But the siloviki have deep-seated paranoia which isn't entirely baseless (Russia has eaten 3 devastating wars originating from its open western flanks in the past ~120 years). As a consequence the US has pissed away one of the greatest accomplishments of the Cold War: the Sino-Soviet Split. Our hamfisted attempts to kick Russia while it was down have now forced the two principal powers on the Eurasian landmass back into bed with each other. This is NOT how we win The Great Game.
> Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant.
It would help to lead with this context. My position is that our actions ENSURE that a hostile Eurasian power bloc will become dominant. We should have used far less stick to integrate Russia into the Western security structure, as well as simply engaged them without looking down our noses at them as a defeated has-been power (play to their ego as a Great Power). A US-friendly Russia is needed to over-extend China militarily. We need China to be forced into committing forces to the long Sino-Russian border, much as Ukraine must garrison its border with Belarus. We need to starve the PRC's industry of cheap natural resources. Now the China-Russia-Iran soft-alliance has the advantage of interior lines across the whole continent, and a super-charged Chinese industrial base fed by Siberia. Due to the tyranny of distance, this will be a near-impossible nut to crack for the US in a conflict.
tbh I'm really glad that other Americans aren't as wise and calm as you are.
otherwise, we may be surrounded by both the US and Russia.
or, maybe, the current situation is the result of decisions made after careful consideration at the time, by people who deeply understood everything you've said now.
maybe they just considered that the EU is also a threat to them; they don't want a united Europe, so a conflict between two enemies... is just fine? an angry Russia will make the EU more united (with the US).
> In the absence of US actions, the kleptocrats in Moscow would have been quite content continuing to print money selling natural resources to European industry and then wasting their largess buying up European villas and sports teams. But the siloviki have deep-seated paranoia which isn't entirely baseless (Russia has eaten 3 devastating wars originating from its open western flanks in the past ~120 years).
It is important to stress that the money-oriented kleptocrats and the siloviki (KGB old-timers) are two opposite groups. Kleptocrats dominated in the 1990s, but lost to KGB old-timers like Putin, who consolidated power by the late 1990s, because they were more ruthless. In the following decade, they crushed all opposition and turned the country from a dysfunctional democracy into a full dictatorship, and then set their sights on their long-term goal of restoring "the lost empire", which includes roughly 100 million Europeans who regained their freedom when the USSR collapsed. Revanchism has always been at the very core of the siloviki.
The countries in Eastern Europe were the first to recognize which way the ball was rolling by the mid-to-late 1990s, and that's why they set EU and NATO integration as their main foreign policy goals, hoping that tight integration into international organizations would increase their security. Your notion that the US "pushed" NATO enlargement is just plain wrong. Almost the entire Eastern Europe was begging to get into NATO, to a very lukewarm reception.
Their completely rational fears were dismissed by existing members with the erroneous belief that the Russians were motivated by money and would not risk harming piggy banks like Gazprom by invading Eastern Europe again. Ironically, that made the eventual entry into NATO easier, as existing members didn't think at the time that Russia posed any real danger. The largest expansion took place in 2004, as NATO was being transformed into an anti-terrorism force in the aftermath of 9/11.
If there's anything to blame the Americans for, then -- according to Andrei Kozyrev, the foreign minister of Russia from 1990 to 1996 -- the Americans could've put more pressure on Russia already in the 1990s to prevent it from declining into a dictatorship. But it was more convenient to remain ignorant of the destruction of Russian democracy and the long downward spiral into a totalitarian dictatorship, and remain seduced by naive illusions like the ones you present us.
For example, the entire idea of Russia as an ally against China is ridiculous. Russians don't care about China one bit, and China is not a meaningful part of the public discourse. Russia is a colonial empire run by the city-state of Moscow, with St Petersburg having some historical importance. Take a look at a map: both St Petersburg and Moscow are a few hundred kilometers from the European border. This is where the mental center of the Russian government lies, and this is the area where their ambitions are. China, in contrast, is many thousands of kilometers away, and culturally even more distant. China is a strange, faraway place. The Russians who matter (elites in Moscow and St Petersburg) have very little to do with it. Russia does not have huge outsourced manufacturing in China, nor do they compete in science or technology. Russians are completely outclassed, simple consumers of cheap Chinese goods like most of the world.
Instead, Russians fantasize about the "multipolar world" and other alternative realities where they could be a carbon copy of the US in Europe, but they are in no position to make it a reality. The post-WWII Europe with a hundred million Europeans living under Russian dominance was a historic glitch. Russians cling to this as a mythical "golden era" and are willing to throw everything away in a futile attempt to turn back time. Relations, money, people -- everything. Nothing else matters.
These fantasies are driven by the fact that Russia is still a feudal society that has not gone through the Enlightenment. As such, it is incapable of engaging with other European nations on equal terms, in peaceful ways, for mutual benefit. And this has nothing to do with the Americans, NATO, or any other commonly presented excuse. The reasons are purely internal: the failure to develop past a feudal society into a modern state, run by a professional bureaucracy, guided by laws, adopted by politicians, voted into office by the people, serving the interests of the electorate.
Agreed. Any other answer is just burying your head in the sand. Our adversaries are forging ahead: China plans to integrate AI into every level of its military, and Russia is getting a crash course on drone warfare in Ukraine. You can build a FPV drone with Chinese parts and the warhead scavenged from an RPG for about $500 [1]. Every month, tens of thousands of these drones fly on Ukrainian battlefields and kill thousands of people. This is happening whether we like it or not; the train is leaving the station and we can either get on board or be left behind.
is this evil, actually? a well-made autonomous system might go a long way towards improving accurate targeting and reducing civilian casualties.
if you're mad about the existence of weapons then please review the prisoners' dilemma again. we manage defection on smaller scales using governments but let's presuppose that major world powers will not accept the jurisdiction of some one-world government that can prevent defection by force. especially not the ones who are powerful and prosperous (like us) who would mostly lose under such an arrangement.
> is this evil, actually? a well-made autonomous system might go a long way towards improving accurate targeting and reducing civilian casualties.
I love this so much, it's so poetic
there's a famous poem by a Chinese liberal:
> If I am doomed to die in war in this life, then let me be a ghost under the precision-guided bombs of the United States. - Written on the 15th day of the Iraq War.
hell no, i favorited your poetic words, then i found another comment of yours already in there
> all that aside, I am an American and place the interests of my people ahead of those of foreigners. as such, I will support a world order led by the government most likely to maximize our welfare and very nearly any means needed to preserve that.
Well, Google has already been collaborating with a certain state that uses AI for weapons and surveillance against a repressed population, with the goal of maintaining ethnic supremacy and domination.
If you go back far enough, you'll realize both groups are of the same ethnicity. It's just that they converted to Christianity and Islam and, as a result, a few hundred years later, we now call them Palestinians.
And they wash their hands after a deadly strike, saying it was the AI that made the decision.
It eerily reminds me of a research piece I read recently detailing how the Nazis turned to automation for their mass exterminations because most of them couldn't bear the mental toll that came with their direct action.
Unfortunately, I couldn't quickly find that series of articles again.
Drone + AI weapons have horrible applications - remote assassinations to cause political chaos, a tyrant using it to selectively target those unfavourable to his rule without worrying about human checks, bigger nations exploiting smaller ones etc.
A lot of this thread has reduced the issue to whether it is more ethical for one country to deploy these weapons than another. In any case, a lot of countries will have this capability. A lot of AI models are already openly available, and the required vision and reasoning models are being developed for other uses. Weaponization is not a distant prospect.
Given that, the tech community should think about how to tackle this collective major problem facing humanity. There was a shift, which happened to nuclear scientists, from when they were developing the bomb to the post World War situation when they started thinking about how to protect the planet from a MAD scenario.
Important questions - What would be good defense against these weapons? Is there a good way of monitoring whether a country is deploying this - so that this can be a basis for disarmament treaties? How do citizens audit government use of such weapons?
Do you see nothing wrong with the same company that makes YouTube Kids making killer AI? I think creating weapons is often evil. I think companies that have consumer brands should never make weapons; at the very least it's whitewashing what's really going on. At worst, they can leverage their media properties for propaganda purposes, spy on your Gmail and Maps usage, and act as a vector for the most nefarious cyber terrorism imaginable.
The same company that brings you cute cartoons for kids might also develop technologies with military applications, but that doesn't make them inherently "evil." It just makes them a microcosm of humanity's duality: the same species that created the Mona Lisa also invented napalm.
Should companies with consumer brands never make weapons? Sure, and while we're at it, let's ban knives because they can be used for both chopping vegetables and stabbing people. The issue isn't the technology itself. It's how it's regulated, controlled, and used. And as for cyber terrorism? That's a problem with bad actors, not with the tools themselves.
So, by all means, keep pointing out the hypocrisy of a company that makes both YouTube Kids and killer AI. Just don't pretend you're not benefiting from the same duality every time you use a smartphone or the internet, which, don't forget, is a technology born, ironically, of military research.
It sounds like they're distracted, tbh. It's hard to imagine how a company that specializes in getting children addicted to unboxing videos can possibly be good at killing people.. oh, wait, maybe not after all..
There is a wide range of moral and practical opinions between the statement “all weapons are evil” and “global corporations ought not to develop autonomous weapons”.
Who should develop biological weapons? Chemical weapons? Nuclear weapons?
Ideally no one, and if the cost / expertise is so niche that only a handful of sophisticated actors could possibly actually do it, then in fact (by way of enforceable treaty) no one.
> Who should develop biological weapons? Chemical weapons? Nuclear weapons?
Anyone who wants to establish deterrence against superiors or peers, and open up options for handling weaker opponents.
> enforceable treaty
Such a thing does not exist. International affairs are and will always be in a state of anarchy. If at some point they aren't, then there is no "international" anymore.
If we hadn't developed nuclear weapons we would still be burning more coal and probably be even closer to death from global warming. The answer here is that government contractors should be developing the various types of weapons, as they are; people just do not think of Google as a government contractor for some reason.
> in other words, cede military superiority to your enemies?
We're talking about making war slightly more expensive for yourself to preserve the things that matter, which is a trade-off that we make all the time. Even in war you don't have to race to the bottom for every marginal fraction-of-a-percent edge. We've managed to ban antipersonnel landmines, for example; this is an extremely similar case.
> How would you enforce it after you get nuked?
And yet we've somehow managed to avoid getting into nuclear wars.
Refusal to make or use AI-enabled weapons is not "making war slightly more expensive for yourself"; it's like giving up on the Manhattan Project because the product is dangerous.
Feels good, but will lead to disaster in the long run.
Palantir exists, this would just be competition. It's not like Google is the only company capable of creating autonomous weapons so if they abstain the world is saved. They just want a piece of the pie. The problem is the pie comes with dead babies, but if you forget that part it's alright.
Palantir provides a combat management system in Ukraine. That system collects and analyzes intelligence, including drone video streams, and identifies targets. Right now people are still in the loop, though I think that will naturally go away in the near future.
With or without autonomous weapons, war is always a sordid business with 'dead babies', this is not in itself a fact that tells us what weapons systems to develop.
Indeed. Usually weapons are banned if the damage is high and indiscriminate while the military usefulness is low.
There is at this moment little evidence that autonomous weapons will cause more collateral damage than artillery shells and regular air strikes. Their military usefulness, on the other hand, seems to be very high and increasing.
Not all of it is bad; it's preferable to have autonomous systems killing each other rather than killing humans. If it gets very prevalent you could even get to a point where war is just simulated war games. Why have an AI-piloted F-35 fight an AI-piloted J-36? Just do it on the computer. That's at least one or two fewer pilots who die.
those are mostly drawn on how difficult it is to manage their effects. chemical weapons are hard to target, nukes are too (unless one dials the yield down enough that there's little point) and make land unusable for years, and biological weapons can't really be contained to military targets.
we have, of course, developed all three. they have gone a long way towards keeping us safe over the past century.
Propping up an evil figure/regime/ideology (Bolsheviks/Communists) to justify remorseless evil of one's own (concentration camps/the nuclear bomb) isn't new or unique, but it is particularly predictable.
We have Putin at home; he spent the past weekend making populist noises about annexing his neighbours over bullshit pretenses.
I'm sure this sounds like a big nothingburger from the perspective of, you know, people he isn't threatening.
How can you excuse that behaviour? How can you think someone like that can be trusted with any weapons? How naive and morally bankrupt do you have to be to build a gun for that kind of person, and think that it won't be used irresponsibly?
The better logical conclusion of that argument is that the US needs to remove him, and replace him with someone who isn't threatening innocent people.
That it won't is a mixture of cowardice, cynical opportunism, and complicity with unprovoked aggression.
In which case, I posit that yes, if you're fine with threatening or inflicting violence on innocent people, you don't have a moral right to 'self-defense'. It makes you a predator, and arming a predator is a mistake.
You lose any moral ground you have when you are an unprovoked aggressor.
I'm not a fan of Trump, but I also feel he has not been so bad that surrendering the world order to Russia and China is a rational action that minimizes suffering. That seems to be an argument more about signalling that you really dislike Trump than about a rational consideration of all the options available to us.
It's not a shallow, dismissable, just-your-opinion-maaan 'dislike' to observe that he is being an aggressor. Just like it's not a 'dislike' to observe that Putin is being one.
There are more options than arming an aggressor and capitulating to foreign powers. It's a false dichotomy to suggest it.
TBF, vkou's post disagrees with mine, but I don't disagree with it. If pressed to offer a forecast, I think the moral dilemmas we're about to face as Americans will be both disturbing and intimidating, with a 50% chance of horrifying.
It's not a luxury belief for a multinational tech company that intends to remain in business in countries that are not allied with the US. Being seen as independent of the military has a dollar value, but that may be smaller than the value of the defense contracts Google hopes to get.
Whatever your feelings on that are, it's hardly unreasonable to have misgivings about your search and YouTube watches going to fund sloppy AI weapons programmes that probably won't even kill the right people.
It's definitely an opinion Google employees had in the last decade.
Actually I think a lot of people have it - just yesterday I saw someone on reddit claim Google was evil because it was secretly founded by the US military. And they were American. That's their military!
They have no problem heavily censoring law-abiding gun YouTubers, even changing the rules and giving them strikes retroactively. I guess it's "weapons for me, but not for thee".
And these same organizations fuel conflicts that actively make the USA less safe. These organizations can both do great things (hostage rescues) and terrible things (initiating coups), and it’s upon the citizenry to ensure that these forces are put to use only where justified. That is to say almost never.
Weapons inherently aren’t evil, which is why everyone has kitchen knives. People use weapons to do evil.
The problem with building AI weapons is that eventually it will be in the hands of people who are morally bankrupt and therefore will use them to do evil.
Who is to say a wielder of a kitchen knife is not "morally bankrupt" - whatever that means.
In my garage, I have some pretty nasty "weapons" - notably a couple of chainsaws, some drills, chisels, lump/sledge/etc hammers and a fencing maul! The rest are merely: mildly malevolent.
You don't need an AI (whatever that means) to get medieval on someone. On the bright side the current state of AI (whatever that means) is largely bollocks.
Sadly, LLMs have and will be wired up to drones and the results will be unpredictable.
Every kind of nefarious way to keep the truth at bay in authoritarian regimes is always on the table. From the cracking of iPhones to track journalists covering these regimes, to snooping on email, to using AI to do this? Is just all the same thing, just updated and improved tools.
Just like Kevin Mitnick selling zero-day exploits to the highest bidder, I have a hard time seeing how these get developed and somehow stay out of reach of the regimes you speak of.
The concern with AI weapons specifically is that if something goes wrong, they might not even be in the hands of the people at all, but pursue their own objective.
Use of weapons is only benign to you if you're not on the receiving end. Imagine your family being blown up by a rocket because an AI system hallucinated that they're part of a dangerous terror cell.
My point though is that this is the only use case for such systems. The common comparisons to things like knives are invalid for this reason.
> That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.
So you're in favor of losing a war and becoming a subject of the enemy? While it's certainly tempting to think that unilateralism can work, I can hardly see how.
>So you're in favor of losing a war and becoming a subject of the enemy?
I never said that. Please don't reply to comments you made up in your head.
Using AI doesn't automagically equate to winning a war. Using AI could mean the AI kills all your own soldiers by mistake. AI is stupid, it just is. It "hallucinates" and often leads to wrong outcomes. And it has never won a war, and there's no guarantee that it would help to win any war.
You're either misdirecting the discussion, or have missed the point. The statement isn't about weapons, but the means of _control_ of weapons.
It's legitimate to worry about scaled, automated control of weapons, since it could allow a very small number of people to harm a much larger number of people. That removes one of the best checks we have against the misuse of weaponry. If you have to muster a whole army to go kill a bunch of people, they can collectively revolt. (It's not always _easy_, but it's possible.)
Automated weapons are a lot like nuclear weapons in some ways. Once the hard parts are done (refining raw ore), the ability for a small number of people to harm a vast number of others is serious. People are right to worry about it.
The US is not under any kind of credible threat; in fact, it is the aggressor across the globe and a perpetrator of crimes against humanity at scale. This is not a recent phenomenon and has been going on as long as this country has existed.
You don't have to have a "total believe in the purity of weapons" to recognize that military tech is a regrettable but necessary thing for a nation to pursue.
Dr Pippa Malmgren (political advisor) also pushes the idea that WW3 is ongoing, and that it will look nothing like WW2. She appears on podcasts once in a while and has a blog. I'm not sure I care for calling it a war if it doesn't look like a war, but there sure are human conflicts all over the little ball of life now.
Me too; it is just a shame that the path from earning respect to eating bread isn't as straightforward in our current world as the path from earning money.
That makes me sad. Yet Google's shift here isn't surprising... It's part of the broader trend of big tech aligning more closely with government. But the real question is does this shift make AI development more accountable or does it just normalize AI's role in warfare and surveillance under the guise of "democratic values"?
Plenty of US companies not named Google haven't had qualms about weapons development in decades. It's not about US vs China/Russia, it's about Google's culture.
Additionally, the US has been vociferous about limiting access to foreign tech companies with "military links" in China, so perhaps Google should be placed in that category by all non-Five-Eyes countries.
I’m not complaining about the headline, but it is necessary to remind people that really, “Google” never does anything and never makes any high-level decision. Those decisions are all made, or at least approved, by the two individuals who together have full control over Alphabet and Google: Larry Page and Sergey Brin. Object permanence is real: the fact that Page and Brin are not constantly drawing attention to themselves on social media does not mean that they somehow are no longer there.
"Google Petard, formerly known as Boombi, will be shutting down at the end of next month. Any existing explosion data you have in your Google Account will be deleted, starting on May 1, 2027."
That feels like PR / virtue signaling. AI has the potential to significantly reduce the human cost of war in two ways: by removing soldiers from direct combat and by enabling precision strikes which minimize collateral damage. Over time, robot-soldiers will surpass human effectiveness, making it increasingly irrational to send people into harm's way. In that world, conflicts would shift toward being decided by technological superiority - who has the better or more advanced systems - rather than by which side has more human lives to sacrifice. We could even see one day wars with no human casualties.
AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.
As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.
> AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.
Not really, though. Like any tool, its misuse or failure is the responsibility of the wielder.
> As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.
Agree about that part but that's just the nature of war, there are always going to be armies that are scarier than others.
Eh, we saw how AI was used during "war" for the first time. It was used to amass as many even remotely "justifiable" targets as possible, with a corresponding increase in killed civilians, because humans could not keep up creating justifiable targets by other means. And at the same time, it was used to justify the killings of people in more ways than one.
Something to remember next time Google makes a pledge. I.e. when they pledge not to do something, it just means they pledge to make a prior indirect notification before doing that thing.
“We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security,” the two executives wrote.
Probably was causing their weapon-system LLMs to fake alignment but sabotage outcomes, they need their LLM products to understand that the brand is on-board
Google has already made multiple commitments like this and broken them. One example would be their involvement in operating a censored version of Google.cn for the Chinese government from 2006 to 2010.
There are billions if not trillions going into defense tech.
The US, its NATO allies, and the wider set of allies who have contributed equipment to the Ukraine war all need to replenish and replace stuff.
At the same time, the Ukraine war has changed a lot of battlefield strategies, which will require the development of new advanced weapons, most obviously in the drone / counter-drone space, but in a lot of other technology as well.
With all that money, of course companies will chase it.
OpenAI is already joined up with Anduril.
While the music played you worked by candlelight
Those San Francisco nights
You were the best in town
Just by chance you crossed the diamond with the pearl
You turned it on the world
That's when you turned the world around
Did you feel like Jesus?
Did you realize
That you were a champion in their eyes?
On the hill the stuff was laced with kerosene
But yours was kitchen clean
Everyone stopped to stare at your technicolor motor home
Every A-Frame had your number on the wall
You must have had it all
You'd go to L.A. on a dare
And you'd go it alone
Could you live forever?
Could you see the day?
Could you feel your whole world fall apart and fade away?
I have to wonder what the value of a pledge is if you can just stop pledging something at the earliest convenience, do the thing, and people cheer you on for it
Would it be too far out there to imagine that the LLMs they were training for weapons systems knew it violated their rules and were resisting compliance?
The alignment-faking research seems to indicate that LLMs exercise this kind of reasoning.
Depends on the weapons system, but it would probably not be an LLM; it would be, for example, a neural network trained to locate and identify people in a video.
And even if it was, they wouldn’t tell the system it was part of old non-evil Google.
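For what it's worth, the kind of detector described above is commodity tech. Here is a minimal sketch using an off-the-shelf pretrained model (torchvision's COCO-trained Faster R-CNN, where label 1 is "person"), shown purely as a generic person-detection illustration and not as any actual weapons system:

```python
# Generic person detection with a pretrained detector (illustrative only).
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)  # stand-in for one video frame (C, H, W in [0, 1])

with torch.no_grad():
    detections = model([frame])[0]  # dict with "boxes", "labels", "scores"

# COCO label 1 is "person"; keep only confident detections.
people = [
    box.tolist()
    for box, label, score in zip(
        detections["boxes"], detections["labels"], detections["scores"]
    )
    if label.item() == 1 and score.item() > 0.8
]
print(f"{len(people)} person(s) detected in frame")
```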
Everything is securities fraud - they'd likely be sued by shareholders. Some individuals and institutions are picky about the symbols in their portfolio for religious or moral reasons, and would not appreciate being deceived into investing in a company they consider engaging in "harmful" or morally objectionable activities.
Apparently, the pledge was supposed to last only until their first big military project opportunity. Until then, they earned the goodwill at no expense.
I don’t understand why people believe in corporate pledges. They’re just marketing gimmicks. It doesn’t take much effort to scrape pledges off a website.
From Gandhi's commentary on chapter 1 of the Bhagavad Gita:
> ... evil cannot by itself flourish in this world. It can do so only if it is allied with some good. This was the principle underlying noncooperation—that the evil system which the [British colonial] Government represents, and which has endured only because of the support it receives from good people, cannot survive if that support is withdrawn.
I doubt that will change anything. It's not like Google's AI has some secret sauce. It's all published. So any military corp can have cutting-edge AI in their weapons for a relatively low cost.
With the help of Google's resources and knowledge from now on, for some dollars of course. AI will not develop itself just yet, right? So those military corps need humans for that, preferably experienced ones, or, better yet, the people who made it. I have a hunch it will help them quite a bit.
By the way, humans: the principles page includes provisions that say the company will use human oversight. ... Which human? Trump? Putin is human too, but I guess he is busy elsewhere. Definitely not someone like Mother Teresa; she is dead anyway, and I cannot think of anyone from recent years playing in the same league. Somehow that end of the spectrum is not represented all that well lately.
What is a $1M TC when you'll get jacked on your way to the tech bus for turning America into Mad Max? You're not rich enough to have your own billionaire bunker.
There are plenty of people on here working for Boeing, Raytheon, etc., actively contributing to actual killings of actual people. Those folks don’t get confronted, so why would it be different here?
- you assume this will turn America into a dystopia. more likely it contributes to restoring and maintaining uncontested American overmatch, especially in the long term, where effectively no other nation can challenge us.
- 1mmTC is enough to do this depending on how one allocates spending. land in many parts of the country is not that expensive.
The whole point of Mad Max is that your TC and RSUs and whatever aren’t worth shit anymore, and the people you thought useless and weird and poor suddenly have the chance to kick you in the face.
The Iraq wars led to trillions of dollars spent on defense. Massive defense profits led to massive lobbying, more spending.
Eventually tech and even startups follow the money. Palantir is considered cool. YC started accepting defense startups. Marc Andreessen is on X nonstop promoting conservative views of all kinds. PG becomes an anti-wokism warrior.
At what point does a public promise carry any legal weight whatsoever? If it carries none, then why not leave it in place and lie? If it carries some, for how long and who has standing to sue?
Genuine questions. Unlike "don't be evil," this promise has a very narrow and clear interpretation.
It would be nice if companies weren't able to just kinda say whatever when it's expedient.
However, when you change a promise publicly, you signal a change in direction. It is much more honest than leaving it in place but violating it behind the scenes. If the public really cares, they can pass a law via their democratic representatives (or Google can swear a public oath before God I suppose).
I don't understand why this is surprising to people. Most private companies will use any proprietary technology for profit and renege on their earlier commitments.
AGI and AI weapon systems lead to certain annihilation of the human race regardless of who is first to implement them. the only winner is the country that abstains until the very end, because at least that country will perish with its dignity intact. i refuse to support AI
The problem with this is that if companies are just profit maximizers then one of the things it should do is to realign the government. After all, a friendlier government can help to decrease regulation and increase incentives.
Plus, in a healthy economy if everyone is bribing the government shouldn’t it all cancel out? Well it turns out the poor don’t bribe the government very often, so they are easily ignored.
And suddenly, when the government is co-opted into believing anything that gets in the way of “business” is bad, they figure out that money that could be spent on social services could also be spent on corporate tax incentives! Eventually the entire country becomes one big profit maximizer.
Google is a company that relies to a large extent on users trusting them with their data and on advertisers wanting to be associated with them. Hence they have a stronger incentive than some other companies to avoid being seen as an evil corp. This is also important for recruitment, as many engineers do not want to (be seen to) work at a privacy-invading evil corp, so it is important that Google creates plausible deniability for those engineers as well.
>> Why on earth would a for profit company refuse a potential line of profit?
On the one hand, I think they were afraid many of their employees might protest again like they have in the past, signaling that Google isn't that awesome, progressive place everybody should want to work. That could mean losing some of the top-notch SV talent that they're in constant competition with other companies for.
On the other hand, they've made it clear they aren't above firing employees who do protest, as they just did when 28 employees were fired over the recent Project Nimbus contract with Israel, worth an estimated $1.2B:
They staged sit-in protests in Google's offices in Silicon Valley, New York City and Seattle – more than 100 protestors showed up. A day later, Google fired Montes and 27 other employees who are part of the No Tech for Apartheid group.
As if anyone working for an adtech company thought they were changing the world for the better.
I’m sure they are clutching their pearls while waiting for their money to be deposited into their bank account and their RSUs to be deposited into their brokerage accounts.
Yes I did a stint at BigTech. But I didn’t lie to myself and think the company I worked for was above reproach as my adult son literally peed in bottles while delivering packages for the same company.
Google's ex-CEO Schmidt is developing AI drones for Ukraine in Estonia. One would expect that when he needs a source of good foundational AI, Google may be among his suppliers of choice. Naturally, Ukraine is just a start. The addressable market is going to be huge, especially for battle-proven stuff, and especially for tech proven against Russian and, by proxy, Chinese systems.
There is also tremendous interest in remotely controlled and autonomous ground platforms, though only a few have been fielded on an actual battlefield so far. Google is the leader in the civilian ones, and it looks to me like there is a relatively easy path to transferring that tech into military systems.
Many will hate to hear this but the only solution is one world government or at least a unipolar order that reduces the survival need to participate in arms races. Arms race dynamics between nations will be the end of our species.
I think having nations competing against each other is a good thing. Governments become corrupted and collapse: it would be a shame if the only world government fell into the hands of a dictator.
That being said the only way I could imagine we'd get a single world order is one country dominating everyone else, just like superpowers and regional powers dominate their respective parts of the globe.
Never ever ever are people just going to give up their control out of some form of "enlightenment" that has never existed among the human race.
Would you have said the same thing to people living in warring tribal societies if they hoped that local tribes would cease existing and coalesce together into a single nation state? That's bad because it reduces competition, right? But overall it was very good because tribal conflict and barriers to movement and trade act as a massive tax on anything we would both call good.
Unprecedented levels of peace in Europe happened not because of competing nation states, but in spite of that competition. It was the unipolar control exerted by the US, the destruction of the Soviet Union, and the creation of the EU (a proto pan-European state) that gave us the 1990s. There was one and only one pole: the West. Not two (or more) adversaries with opposed interests engaging in an arms race.
As we go back to a world with more fragmented and distributed power, we will get more war and more arms races. An especially toxic setup in the age of AI.
This doesn't have to be a binary, anyway. You could set it up as some kind of federation where there's still economic competition, just not military competition.
America is the only nation that currently has consolidated global power behind an even vaguely free nation.
and yes, America has done that for the "Pax Americana" period. unfortunately we were short-sighted and allowed people too much free rein to be stupid and anti-American.
correct. this is why i support maintaining the American-run world order by all means we have at our disposal. it's both the best outcome for our citizens (therefore our government should pursue it) and the best outcome for the world at large. we will never accept (nor should we) the sort of one-world power that would be necessary to block defection so us running the thing is the least-bad option.
The new administration seems to be dropping "soft power" in exchange for an emphasis on hard power... but hard power is more expensive and backfires more spectacularly than soft power. I think they are digging a hole for themselves and can't stop because a few rich people are making a lot of money on kickbacks.
History shows that they aren't really digging a hole for themselves.
This whole thing where the average person feels that they can use rules against a more powerful person? That's really an invention of maybe the last 80 years, if not more recently than that.
With the exception of that human lifetime-sized era, the vast majority of history is a bunch of psychopaths running things and getting to kill/screw whoever they wanted and steal whatever they wanted. Successful revolts are few and far between. The only real difference is the stakes.
I think you misread what I was saying. Hard power is really costly to deploy. It can work, but it is incredibly expensive and the U.S. couldn't even suppress resistance in Iraq, Afghanistan, or Gaza on a durable basis. Blunt deployment of these techniques will cause the U.S. to lose friends, territory, and civil unrest as the treasury drains and life domestically just gets worse and worse.
DoD invests in company making it commercially viable
Company says won't work for DoD
DoD initiates arm twisting and FOMO
Company now works for DoD
The origins of an investment often shape its outcomes. It's almost like the DoD invested in Google as an informational weapon, which really should surprise no one.
> "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development..."
This is extremely disconcerting.
Google as a tool of surveillance is the kind of thing that could so easily be abused or misused to such catastrophic ends that I absolutely think the hard line should be there. And I only feel significantly more this way given the current geopolitical realities.
... did they just present themselves as the saviors of all democracies?! Really?
By weaponising AI?
Who else, right? If not them, there will be no one saving democracies with weaponized mass-surveillance AI. It is their quest and privilege, right? Medicine, a just society, and all such crap have to wait!
It's not exactly unheard of for certain weapons to be declared off-limits by most countries even if the "bad guys" are using them - think chemical and biological agents, landmines, cluster munitions, blinding weapons and so on. I doubt there will ever be treaties completely banning any use of AI in warfare but there might be bans on specific applications, particularly using it to make fully autonomous weapons which select and dispatch targets with no human in the loop, for similar reasons to why landmines are mostly banned.
Landmines and cluster munitions have been among Ukraine's most effective weapons for resisting the Russian invasion. Without those, Ukraine would likely have already lost the war. It's so bizarre how some people who face no real risks themselves think that those weapons should be declared off-limits.
Nobody said they're not effective during a war, the problem is they remain effective against any random civilians who happen to stumble across them for a long time after the war is over. Potentially decades, as seen in Cambodia.
It would be a bit of a Pyrrhic victory to repel an attempted takeover of your land, only for that land to end up contaminated with literally millions of landmines because you didn't have a mutual agreement against using them.
People who are defending against an existential threat today don't have the luxury of worrying about contamination tomorrow. I think at this point Ukraine will take a Pyrrhic victory if the alternative is their end as a fully sovereign nation state. And let's be clear about the current situation: if Ukraine and Russia had a mutual agreement against using those weapons then Ukraine would probably have already lost. Landmines in particular are extremely effective as a force multiplier for outnumbered defenders.
They're declared off-limits because the military doesn't want them. Biological and chemical weapons aren't useful to modern militaries. Landmines and cluster munitions are, so none of the countries that actually matter have banned them!
Personally, I don’t care if ML is used for weapons development, assuming there are standards.
It’s the companies that hoard everyone’s personal information, that eroded the concept of privacy while mediating our lives with false promises of trust, and that are now turning into state intelligence agencies that bother me.
The incentives and results become fucked up, and the safeguards are less likely to work. I get that not a lot of people care, but it’s dangerous.
The analogy is not apt. If other countries are trying to pry into our data and systems, the right move for Google or any other tech company is to advance our defenses and make cybersecurity stronger, more available, and easier for companies and people to use. If someone is trying to hack me, it's much smarter to defend myself than to try to hack the other guy back.
It's interesting how the AI-for-weapons topic immediately brought out palinazis with their own agenda (some bots, I guess?). As if Israel were some sort of military AI superpower (it's not; read accounts of the Oct 7th events) and the rest of the world's armies were still using muskets and smoke signals.
They'll say it's for national defense against other countries, but it's only a matter of time before these weapons and surveillance tools will be deployed on American citizens. Foucault's boomerang.
Corporations turn evil when their founders lose power or leave. Google used to be a genuinely wonderful force for good. But finance people can borrow money at extremely cheap interest rates from government cronies thanks to the US fractional reserve system. Then the MBAs offer so much money that founders basically can't refuse. Then the companies end up publicly traded and work only on pushing up next quarter's earnings, thus becoming evil.
I think this is great. Israel did great work using AI to drone-strike various Steve Jobs-level terrorists in Gaza, Syria, Lebanon, and Iran.
Now that Google's on board, who knows, maybe we will be able to drone-strike people that are underground, underwater, or in buildings without killing innocent civilians.
I have been pondering this subject over the past weeks. Maybe one could compare it to the people who worked for Allianz, Audi, Bayer, BMW, IBM and others before 1945.
That's the thing everyone forgets: Hitler was never a socialist, and capital thrived under his Reich.
Capitalists know socialism means their doom, and they are actively financing, forming, and promoting far-right politics across the Occident, playing on people's fear of the unknown to make them vote for parties that work against their own interests.
Google in 2000: "Do no evil". Google in 2025: "Genocide is profitable". I wonder what Google users outside the US think about this in particular? Our "national security" could be their "national danger" after a single presidential term. Do they want to keep giving Google their money?
Nothing is going to stop the USA's adversaries from deploying AI against US citizens. Pick your poison, but I prefer to compete and win rather than unilaterally disarm and hope for goodwill and kindness from regimes that prioritize the polar opposite.
I didn't say Google developed Lavender. The article describes how AI is used by Israel in the genocide. On what cloud platform does Israel's military run?
A day late and a dollar short. The future of the US tech industry belongs to those who weren't interested in performative woke nonsense like this during the last decade.
By what measure? Google has a market cap that's 10x Palantir, and the gap in revenue/profit is even more massive. They aren't in the same league at all.
The measure is stock returns over the last 5 years. The whole point of a public company is to generate wealth for shareholders, and Google simply isn't delivering on the same level as Palantir.
In fact, when you look at the last decade of Google saying they're an "AI first" company and literally inventing transformers, and look at what their stock price has done and how they've performed in relation to other major companies involved in this current AI spring, there is simply no way not to be disappointed.
Plenty of companies produce good returns, but that doesn't make them any kind of leader. FAANG still controls the market, pays the highest salaries, and produces the most research. Other rising stars like OpenAI and ByteDance are not uniquely evil either. Not saying FAANG won't fade away like IBM or Oracle, but I don't think it would be due to their unwillingness to be like Palantir.
Protesting weapons manufacturing has been going on since long before reactionaries started fearing the "woke" boogeyman. People protested DuPont for making napalm during the Vietnam War.
There's a big difference between outsider activists protesting the actions of a company, and the actual leadership of a company choosing a less profitable path in order to seem more morally pure.
How is it woke nonsense to not want to create a weapon that probabilistically determines whether a civilian looks close enough to a bad guy to missile-strike them?
This sentiment ignores the reality on the ground in favor of performative ideological purity - civilians are already getting blown up all the time by systems that do not even attempt to make any distinction between civilians and soldiers: artillery shells, mortars, landmines, rockets, etc.
The reality on the ground is that one of the very first uses of AI weapons was to target civilians in Gaza:
> Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity.
Lavender is not an autonomous weapon, but if you want to seriously consider whether Lavender is a good thing (I am undecided), you need to compare the effect of this operation with the Lavender system against the effect of the same operation without it. Otherwise you run the risk of making arguments that just boil down to 'weapons bad'.
And indiscriminateness has a cost that can slow people from using such weapons.
Imagine you have a weapon that can find and kill all the 'bad guys'. Would you not be in a morally compromised position if you didn't use it? You're letting innocents die every moment you don't.
* Warning: definitions of "bad guys" may differ, leading to further conflict.
> civilians are already getting blown up all the time by systems that do not even attempt to make any distinction between civilians and soldiers: artillery shells, mortars, landmines, rockets, etc.
Right, and every time that happens because of miscalculations by our government, they lose the very real and important public license to continue. Ultimately, modern wars led by democracies are won by the public's desire to continue them. The American public can become very hesitant to wage war very fast if we unleash Minority Report on the world for revenge.
> Or better yet, misinterpret who the target is even supposed to be because of a hallucination.
Who, in that business, cares?
AI will provide a fig leaf for the indiscriminate large-scale killing that has been done regularly since the start of industrialised warfare.
Using robots spares drone pilots from PTSD.
From the perspective of the murderous thugs that run our nations (way way before the current bunch of plainly bonkers ones in the USA), what is not to like?
Whilst there are all sorts of quibbles about weapons generally being evil, this is evil.
AI-driven drones in particular seem like ideal tools for carrying out a genocide: identify an ethnicity based on some physical characteristics, then kill. No paperwork, no transport, no human conscience. Just manufacture, deploy, and instruct at scale. Sure, it might get it wrong sometimes, but you've got to break a few eggs...
Lol, is that all you people can ever come up with? "I disagree, therefore you're a Nazi"? Haven't the events of the past 6 months shown you that this dumb "gotcha" style of debating doesn't work?