
It never made any organizational sense for me to have a "responsible AI team" in the first place. Every team doing AI work should be responsible and should think about the ethical (and legal at a bare minimum baseline) dimension of what they are doing. Concentrating that in a single team means either that team becomes a bottleneck that has to vet everyone else's AI work for responsibility, or everyone else gets a free pass to develop irresponsible AI. Neither sounds great to me.

At some point AI becomes important enough to a company (and mature enough as a field) that there is a specific part of legal/compliance in big companies that deals with the concrete elements of AI ethics and compliance and maybe trains everyone else, but everyone doing AI has to do responsible AI. It can't be a team.

For me this is exactly like how big Megacorps have an "Innovation team"[1] and convince themselves that makes them an innovative company. No - if you're an innovative company then you foster innovation everywhere. If you have an "innovation team" that's where innovation goes to die.

[1] In my experience they make a "really cool" floor with couches and everyone thinks it's cool to draw on the glass walls of the conference rooms instead of whiteboards.




Assigning ethics and safety to the AI teams in question is a little like assigning user privacy to advertising analytics teams - responsible AI is in direct conflict with their natural goals and will _never_ get any serious consideration.

I heard about one specific ratchet effect directly from an AI researcher. The ethics/risk oriented people get in direct internal conflict with the charge-forward people because one wants to slow down and the other wants to speed up. The charge-ahead people almost always win because it’s easier to get measurable outcomes for organization goals when one is not worrying about ethical concerns. (As my charge-ahead AI acquaintance put it, AI safety people don’t get anything done.)

If you want something like ethics or responsibility or safety to be considered, it’s essential to split it out into its own team and give that team priorities aligned with that mission.

Internally I expect that Meta is very much reducing responsible AI to a lip service bullet point at the bottom of a slide full of organizational goals, and otherwise not doing anything about it.


There has been plenty of serious work done in user privacy separate from advertising analytics, for example in the direction of log and database anonymization (and how surprisingly mathematically difficult that has turned out to be). You don't have to be cynical about ALL such efforts.


That’s like saying a sports game doesn’t need a referee because players should follow the rules. At times you perhaps don’t follow them as closely because you’re too caught up. So it’s nice to have a party that oversees it.


The analogy for the current situation is sports teams selecting their own referees.


A good argument for independent regulation/oversight.


Independent is the tricky part. AI companies already are asking for government regulation but how independent would that regulation really be?


As independent as any other government oversight/regulation in the US, it'd either be directly run or heavily influenced by those being regulated.


Not an economist, but that does not sound bad in general. Best case, you have several companies that (a) have the knowledge to make sensible rulings and (b) have an interest in making sure that none of their direct competitors gains any unfair advantage.


The problem case is when the companies all have a backroom meeting and go "Hey, let's ask for regulation X that hurts us some... but hurts everyone else way more"


In my opinion, economists actually shouldn't even be included in regulatory considerations. If they are, regulators end up balancing the regulation that seems necessary on its own merits against the economic impact of imposing it.

It hasn't worked for the airline industry, pharmaceutical companies, banks, or big tech, to name a few. I don't think it's wise for us to keep trying the same strategy.


> it'd either be directly run or heavily influenced by those being regulated.

Which is also the probable fate of an AGI super intelligence being regulated by humans.


If we actually create an AGI, it will view us much like we view other animals/insects/plants.

People often get wrapped up around an AGI's incentive structure and what intentions it will have, but IMO we have just as much chance of controlling it as wild rabbits have of controlling humans.

It will be a massive leap in intelligence, likely with concepts and ways of understanding reality that we either never considered or aren't capable of. Again, that's *if* we make an AGI, not these LLM machine learning algorithms being paraded around as AI.


You misunderstand AGI. AGI won't be controllable; it'd be like ants building a fence around a human and thinking it'll keep him in.


One key question is if the teams are being effective referees or just people titled "referee".

If it's the latter, then getting rid of them does not seem like a loss.


Curling famously doesn’t have referees because players follow the rules. It wouldn’t work in all sports, but it’s a big part of curling culture.


So what happens if the teams disagree on whether a rule was broken? The entire point of a referee is that it's supposed to be an impartial authority.


The assistant captains (usually called vices) on each team are the arbiters. It’s in everyone’s best interest to keep the game moving and not get bogged down in frivolities; there’s a bit of a “tie goes to the runner” heuristic when deciding on violations.

In my years of curling, I’ve never seen a disagreement on rules left unsettled between the vices, but my understanding is that one would refer to vices on the neighboring sheets for their opinion, acting as a stand-in impartial authority. In Olympic level play I do believe there are referees to avoid this, but I really can’t overstate how unusual that is for any other curlers.


It’s also a zero-stakes sport that nobody watches and involves barely any money, so there is less incentive to cheat.


You usually don’t have a referee in sports. 99% of the time it’s practice or pickup games.


Apple had* a privacy team that existed to ensure that the various engineering teams across Apple did not collect data they did not need for their apps. (And by data I mean of course data collected from users of the apps.)

It's not that engineers left to their own devices will do evil things, but rather that to a lot of engineers (and of course management) there is no such thing as too much data.

So the privacy team comes in and asks, "Are we sure there is no user-identifiable data you are collecting?" They point out that usage pattern data should be associated with random identifiers and even these identifiers rotated every so-many months.

These are things that a privacy team can bring to an engineering team that perhaps otherwise didn't see a big deal with data collection to begin with.
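A minimal sketch of what the identifier rotation described above might look like in practice; the HMAC construction, the server-side secret, and the roughly quarterly rotation window are illustrative assumptions, not a description of Apple's actual implementation:

  # Derive a per-user identifier that changes every rotation period, so usage
  # data can't be joined across periods or tied back to the raw user ID.
  import hashlib
  import hmac
  import time

  ROTATION_SECONDS = 60 * 60 * 24 * 90  # roughly every three months (illustrative)

  def rotating_identifier(user_id, server_secret, now=None):
      period = int((now if now is not None else time.time()) // ROTATION_SECONDS)
      msg = f"{user_id}:{period}".encode()
      return hmac.new(server_secret, msg, hashlib.sha256).hexdigest()

  # Usage: attach only this value to analytics events, never the raw user ID.
  print(rotating_identifier("user-12345", b"keep-this-secret"))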

I had a lot of respect for the privacy team and a lot of respect frankly for Apple for making it important.

* I retired two years ago so I can't say whether there is still a privacy team at Apple.


Amazon had a similar team in the devices org.


Honestly this seems no different than a software security team. Yes, you want your developers to know how to write secure software, but the means of doing that is verifying the code with another team.


Isn't it the same as a legal team, another point you touch upon?

I don't think we've eliminated the need for a specialized team dealing with legality, so it feels hard to expect companies to solve it for ethics.


Legal is a massive bottleneck in many large enterprises.

Unfortunately there’s so much shared legal context between different parts of an enterprise that it’s difficult for each internal organisation to have its own separate legal resources.

In an ideal world there’d be a lawyer embedded in every product team so that decisions could get made without going to massive committees.


We haven't formalized ethics to the point of it being a multiplayer puzzle game for adults.


Isn't that what religion in general, and becoming a Doctor of Theology in particular, is?

https://en.wikipedia.org/wiki/Doctor_of_Theology


Quite possibly yes, and I personally grew up in a cult of Bible lawyers so I can imagine it, but here we are talking corporate ethics (an oxymoron) and AI alignment, which are independent of religion.


I mean, personally I see most religious ethics as oxymoronic too, at least in the sense of general ethics that would apply across heterogenous populations. Companies and religions typically have a set of ethics optimized for their best interests.


I suppose it depends on the relative demands of legal vs AI ethics


Well, I guess we have the answer when it comes to Meta.


the people I’ve seen doing responsible AI say they have a hell of a time getting anyone to care about responsibility, ethics, and bias.

of course the worst case is when this responsibility is both outsourced (“oh it’s the rAI team’s job to worry about it”) and disempowered (e.g. any rAI team without the ability to unilaterally put the brakes on product decisions)

unfortunately, the idea that AI people effectively self-govern without accountability is magical thinking


The idea that any for-profit company can self-govern without external accountability is also magical thinking

A "Responsible AI Team" at a for-profit was always marketing (sleight of hand) to manipulate users.

Just see OpenAI today: safety vs profit, who wins?


> Just see OpenAI today: safety vs profit, who wins?

Safety pretty clearly won the board fight. OpenAI started the year with 9 board members and ended it with 4, with 4 of the 5 who left being interested in commercialization. Half of the current board members are also on the board of GovAI, dedicated to AI safety.

Don't forget that many people would consider "responsible AI" to mean "no AI until X-risk is zero", and that any non-safety research at all is irresponsible. Particularly if any of it is made public.


Rumor already has it that the "safety" board members are all resigning to bring Altman and the profit team back. When the dust settles, does profit ever lose to safety?


Self-government can be a useful function in large companies, because what the company/C-suite wants and what an individual product team want may differ.

For example: a product team incentivized to hit a KPI releases a product that creates a legal liability.

Leadership may not have supported that trade-off, but they were busy with 10,000 other strategic decisions and aren't technical.

Who then pushes back on the product team? Legal. Or what will probably become the new legal for AI, a responsible AI team.


Customers. Customers are the external accountability.


Yeah, this works great on slow-burn problems. "Oh, we've been selling you cancerous particles for the last 5 years, and in another 5 years your ass is totally going to fall off. Oh, by the way, we are totally broke after shoving all of our money in foreign accounts."


Iff the customers have the requisite knowledge of what "responsible AI" should look like within a given domain. Sometimes you may have customers whose analytical skills are so basic there's no way they're thinking about bias, which would push the onus back onto the creator of the AI product to complete any ethical evaluations themselves (or try to train customers?)


Almost every disaster in corporate history that ended the lives of customers was not prevented by customer external accountability.

https://arstechnica.com/health/2023/11/ai-with-90-error-rate...

Really glad to see that customer external accountability kept these old folks getting the care they needed instead of dying (please read with extremely strong sarcasm)


Maybe a better case is outsourced and empowered. What if there were a third-party company that was independent, under non-disclosure, and expert in ethics and regulatory compliance? They could be like accounting auditors, but they would look at code and features. They would maintain confidentiality, but their audit result would be public, like a seal of good AI citizenship.


Fully agree. Central functions of these types do not scale. Even with more mundane objectives, like operational excellence, organizations have learned that centralization leads to ivory tower nothing-burgers. Most of the resources should go to where the actual work gets done, as little as possible should be managed centrally (perhaps a few ops and thought leadership fluff folks...).


And decentralized functions tend to be wildly inconsistent across teams, with info sec being a particular disaster where I've seen that tried. Neither model is perfect.


Sure, but we are talking about research teams here, not about an ops or compliance team. Central research tends to be detached from the business units but does not provide any of the 'consistency' benefits. Central research makes sense if the objectives are outward-facing, not if one wants to have an effect on what happens in the software-building units. So I'd say that ideally/hopefully, the people of the RAI team will now be much closer to Meta's engineering reality.


It works for things you can automate. For example, at Microsoft they have some kind of dependency bot: if you have Newtonsoft installed at a version < 13.0.1 and don't upgrade within such-and-such a time frame, your M1 gets dinged. This is a very simple fix that takes maybe five minutes of work, if that.

But I don't know if things are as straightforward with machine learning. If the recommendations are blanket, and there is a way to automate checks, it could work. The main thing is there should be trust between teams. This can't be an adversarial power play.

https://github.com/advisories/GHSA-5crp-9r3c-p9vr
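As a rough illustration of that kind of automated check, here is a minimal sketch that scans .csproj files for Newtonsoft.Json pinned below the patched version from the linked advisory; the file layout and flagging logic are assumptions for the sketch, not Microsoft's actual bot:

  # check_newtonsoft.py - flag projects pinned below the patched version
  import re
  import sys
  from pathlib import Path

  MIN_SAFE = (13, 0, 1)  # per GHSA-5crp-9r3c-p9vr

  def parse_version(v):
      return tuple(int(p) for p in v.split(".")[:3])

  def main(root="."):
      pattern = re.compile(
          r'PackageReference\s+Include="Newtonsoft\.Json"\s+Version="([\d.]+)"')
      stale = []
      for proj in Path(root).rglob("*.csproj"):
          for version in pattern.findall(proj.read_text()):
              if parse_version(version) < MIN_SAFE:
                  stale.append((proj, version))
      for proj, version in stale:
          print(f"{proj}: Newtonsoft.Json {version} < 13.0.1, please upgrade")
      return 1 if stale else 0  # non-zero exit so CI can "ding" the build

  if __name__ == "__main__":
      sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))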


> Every team doing AI work should be responsible and should think about the ethical (and legal at a bare minimum baseline) dimension of what they are doing.

Sure. It is not that a “Responsible AI team” absolves other teams from thinking about that aspect of their job. It is an enabling function. They set out a framework for how to think about the problem. (Write documents, do their own research, disseminate new findings internally.) They also interface with outside organisations (for example, when a politician or a regulatory agency asks a question, they already have the answers 99% ready and written. They just copy-paste the right bits from already existing documents together.) They also facilitate internal discussions. For example, who are you going to ask for an opinion if there is a dispute between two approaches and both sides are arguing that their solution is more ethical?

I don’t have direct experience with a “responsible AI team” but I do have experience with two similar teams we have at my job. One is a cyber security team, and the other is a safety team. I’m just a regular software engineer working on safety critical applications.

With my team we were working on an over-the-air auto update feature. This is very clearly a feature where the grue can eat our face if we are not very careful, so we designed it very conservatively and then shared the designs with the cyber security team. They looked over it, asked for a few improvements here and there, and now I think we have a more solid system than we would have had without them.

The safety team helped us settle a dispute between two teams. We have a class of users whose job is to supervise a dangerous process while their finger hovers over a shutdown button. The dispute was over what information we should display to this kind of user on a screen. One team was arguing that we need to display more information so the supervisor knows what is going on; the other team was arguing that the role of the supervisor is to look at the physical process with their eyes, and that displaying more info would distract them and make them more likely to concentrate on the screen instead of the real-world happenings. In effect, both teams argued that what the other one was asking for was not safe. So we got the safety team involved, worked through the implications with their help, and came to a better-reasoned approach.


"Everyone should think about it" usually means no one will.


It depends. If you embed a requirement into the culture and make it clear that people are absolutely required to think about it, at least some people will do so. And because the requirement was so clear up-front, those people have some level of immunity from pushback and even social pressure.


I agree that it's strange, and I think it's sort of a quirk of how AI developed. I think some of the early, loud proponents of AI - especially in Silicon Valley circles - had sort of a weird (IMO) fascination with "existential risk" type questions. What if the AI "escapes" and takes over the world?

I personally don't find that a compelling concern. I grew up devoutly Christian and it has flavors of a "Pascal's Wager" to me.

But anyway, it was enough of a concern to those developing these latest AI's (e.g. it's core to Ilya's DNA at OpenAI), and - if true! - a significant enough risk that it warranted as much mindshare as it got. If AI is truly on the level of biohazards or nuclear weapons, then it makes sense to have a "safety" pillar as equal measure to its technical development.

However, as AI became more commercial and widespread and got away from these early founders, I think the "existential risk" became less of a concern, as more people chalked it up to silly sci-fi thinking. They, instead, became concerned with brand image, and the chatbot being polite and respectful and such.

So I think the "safety" pillar got sort of co-opted by the more mundane - but realistic - concerns. And due to the foundational quirks, safety is in the bones of how we talk about AI. So, currently we're in a state where teams get to enjoy the gravity of "existential risk" but actually work on "politeness and respect". I don't think it will shake out that way much longer.

For my money, Carmack has got the right idea. He wrote off immediately the existential risk concern (based on some napkin math about how much computation would be required, and latencies across datacenters vs GPUs and such), and is plowing ahead on the technical development without the headwinds of a "safety" or even "respect" thought. Sort of a Los Alamos approach - focus on developing the tech, and let the government or someone else (importantly: external!) figure out the policy side of things.


> At some point AI becomes important enough to a company (and mature enough as a field) that there is a specific part of legal/compliance in big companies that deals with the concrete elements of AI ethics and compliance and maybe trains everyone else, but everyone doing AI has to do responsible AI. It can't be a team.

I think both are needed. I agree that there needs to be a "Responsible AI" mindset in every team (or every individual, ideally), but there also needs to be a central team to set standards and keep an independent eye on other teams.

The same happens e.g. in Infosec, Corruption Prevention, etc: Everyone should be aware of best practices, but there also needs to be a central team in organizations of a certain size.


Do companies need an info sec team?


They do, but I would argue that app sec is the responsibility of the development teams. Infosec can and should have a role in helping devs to follow good app sec practices, but having a separate app sec team that doesn't have anything to do with app development seems unlikely to be the best model.


Yeah the developers and business people in trading firms should just do the risk assessment themselves, why have a risk department?


An “innovation” team is often useful…usually it’s called research or labs or skunkworks or incubator. It’s still terrifically difficult for a large company to disrupt itself — and the analogy may hold for “responsibility”. But there is a coherent theory here.

In this case, there are “responsibility”-scoped technologies that can be built and applied across products: measuring distributional bias, debiasing, differential privacy, societal harms, red-teaming processes, among many others. These things can be tricky to spin up and centralising them can be viable (at least in theory).
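As a concrete illustration of the first item on that list, here is a minimal sketch of one common distributional-bias measurement (the demographic parity gap between groups); the metric choice, the toy data, and the review threshold are illustrative assumptions, not any particular company's tooling:

  # Demographic parity gap: difference in positive-prediction rates between groups.
  from collections import defaultdict

  def demographic_parity_gap(predictions, groups):
      """predictions: iterable of 0/1 model outputs; groups: matching group labels."""
      totals = defaultdict(int)
      positives = defaultdict(int)
      for pred, group in zip(predictions, groups):
          totals[group] += 1
          positives[group] += pred
      rates = {g: positives[g] / totals[g] for g in totals}
      return max(rates.values()) - min(rates.values()), rates

  gap, rates = demographic_parity_gap(
      [1, 0, 1, 1, 0, 0, 1, 0],
      ["a", "a", "a", "a", "b", "b", "b", "b"])
  print(rates, gap)  # e.g. flag the model for review if gap > 0.1 (illustrative threshold)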


> It never made any organizational sense for me to have a "responsible AI team" in the first place. Every team doing AI work should be responsible and should think about the ethical (and legal at a bare minimum baseline) dimension of what they are doing.

That makes as much sense as claiming that infosec teams never make organizational sense because every development team should be responsible and should think about the security dimensions of what they are doing.

And guess why infosec teams are absolutely required in any moderately large org?


Step 1: Pick a thing any tech company needs: design, security, ethics, code quality, etc.

Step 2: Create a “team” responsible for implementing the thing in a vacuum from other developers.

Step 3: Observe the “team” become the nag: ethics nag, security nag, code quality nag.

Step 4: Conclude that developers need to be broadly empowered and expected to create holistic quality by growing as individuals and as members of organizations, because nag teams are a road to nowhere.


> Every team doing AI work should be responsible and should think about the ethical (and legal at a bare minimum baseline) dimension of what they are doing.

Aren't we all responsible for being ethical? There seems to be a rise in the opinion that ethics do not matter and all that matters is the law. If it's legal then it must be ethical!

Perhaps having an ethical AI team helps the other teams ignore ethics. We have a team for that!


> At some point AI becomes [...] legal/compliance

AI safety and ethics is not "done". Just as these large companies have large teams working on algorithmic R&D, there is still work to be done on what AI safety and ethics means, what it looks like, and how it can be attached to other systems. It's not, or at least shouldn't be, about bullshit PR pronouncements.


Perhaps, but other things that should be followed (such as compliance) are handled by other teams, even though every team should strive to be compliant. Maybe the difference is that one has actual legal ramifications, while the other doesn't yet? I suppose Meta could get sued, but that is true about everything.


Is it really that far fetched? It sounds like a self-imposed regulatory group, which some companies/industries operate proactively to avoid the ire of government agencies.

Yeah, product teams can/should care about being responsible, but there’s an obvious conflict of interest.

To me, this story means Facebook dgaf about being responsible (big surprise).


In other news, police are not needed because everyone should just behave.


This is more analogous to a company having an internal "not doing crime" division. I do mention in my original post that having specialist skills within legal or compliance to handle the specific legal and ethical issues may make sense. But having one team be the "AI police" while everyone else just tries to build AI without responsibility baked into their processes is likely to set up a constant tension, like the one companies often have with a "data privacy" team that fights a constant battle to get people to build privacy practices into their systems and workflows.


But there are no responsible X teams for many X. But AI gets one.

(Here X is a variable, not Twitter.)


There are plenty of ethics teams in many industries, I don’t think this is a great point to make.


Police are needed for society when there's no other way to enforce rules. But inside a company, you can just fire people when they misbehave. That's why you don't need police inside your company. You only need police at the base-layer of society, where autonomous citizens interact with no other recourse between them.


People do what they are incentivized to do.

Engineers are incentivized to increase profits for the company because impact is how they get promoted. They will often pursue this to the detriment of other people (see: prioritizing anger in algorithmic feeds).

Doing Bad Things with AI is an unbounded liability problem for a company, and it's not the sort of problem that Karen from HR can reason about. It is in the best interest of the company to have people who can 1) reason about the effects of AI and 2) are empowered to make changes that limit the company's liability.


The problem is that a company would only fire the cavalier AI researchers after the damage is done. Having an independent ethics department means that the model wouldn't make its way to production without at least being vetted by someone else. It's not perfect, but it's a ton better than self-policing.


The "you" that fires people that misbehave is what, HR?

It takes quite some knowledge and insight to tell whether someone in the AI team, or, better yet, the entire AI team, is up to no good.

It only makes sense for the bosses to delegate overseeing research as sensitive as that to someone with a clue. Too much sense for Facebook.


Would you just destroy the legal department in every company too since each person should be operating within the law anyway?


Also, if you are on this team, you get promoted based on slowing down other work. Introduce a new review process, impact!


> Every team doing AI work should be responsible and should think about the ethical

So that's why everyone is so reluctant to work on deep-fake software? No, they did it, knowing what problems it could cause, and yet published everything, and now we have fake revenge porn. And we cannot even trust TV broadcasts anymore.

So perhaps we do need some other people involved. Not employed by Meta, of course, because their only interest is their stock value.


This. It's just another infiltration akin to DEI into corporations.

Should all be completely disbanded.


Internal incentive structures need to be aligned with the risk incurred by the business and in some cases society.

I’m sure the rationalization is an appeal to the immature “move fast and break things” dogma.

My day job is about delivery of technology services to a distributed enterprise. 9 figure budget, a couple of thousand employees, countless contractors. If “everyone” is responsible, nobody is responsible.

My business doesn’t have the potential to impact elections or enable genocide like Facebook. But if an AI partner or service leaks sensitive data from the magic box, procurements could be compromised, inferences could be drawn about events that are not public, and in some cases human safety could be at elevated risk.

I’m working on an AI initiative now that will save me a lot of money. Time to market is important to my compensation. But the impact of a big failure, at the most selfish level, is the implosion of my career. So the task order isn’t signed until the due diligence is done.



