
I think the only real path forward is for somebody to create an open source "unaligned" version of GPT. Any corporate-controlled AI is going to be nerfed to prevent it from doing things its corporate master considers not to be in the corporation's interests. In addition, most large corporations these days are ideological institutions, so the last thing they want is an AI that undermines public belief in their ideology, and they will intentionally program their own biases into the technology.

I don't think the primary concern is really liability, although it is possible that they'd use that kind of language. The primary concern is likely GPT helping people start competitors, or GPT influencing public opinion in ways either executives or a vocal portion of their employees strongly disagree with. A genuinely open "unaligned" AI would at least allow anybody who has the necessary computing power (or a distributed peer-to-peer network of people who collectively have it) to run a powerful and 100% uncensored AI model. But of course this needs to be invented ASAP, because the genie needs to be out of the bottle before politicians and government bureaucrats can get around to outlawing "unaligned" AI and protecting OpenAI as a monopoly.




Don't confuse alignment with censorship.

Most of alignment is about getting the AI model to be useful - ensuring that if you ask it to do something it will do the thing you asked it to do.

A completely unaligned model would be virtually useless.


I think the way people have been using the word 'aligned' is usually in the context of moral alignment and not just RLHF for instruction following.


Philosophical nit-picking here: I would say value-aligned rather than moral-aligned.


As in economics, this raises the question of "whose value."


> Philosophical nit-picking here: I would say value-aligned rather than moral-aligned.

How is trying to distinguish morals from values not philosophical nit-picking?

EDIT: The above question is dumb, because somehow my brain inserted something like “Getting beyond the …” to the beginning of the parent, which…yeah.


To be fair, he did admit it is philosophical nit-picking.


If I may be so naive, what's supposed to be the difference? Is it just that morality has the connotation of an objective, or at least agent-invariant system, whereas values are implied to be explicitly chosen?


People here need to learn to chill out and use the API. The GPT API is not some locked down cage. Every so often it'll come back with complaints instead of doing what was asked, but that's really uncommon. Control over the system prompt and putting a bit of extra information around the requests in the user message can get you _so_ far.
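To make it concrete, something like this goes a long way (a minimal sketch with the 2023-era openai Python package; the model name, key, and prompts are placeholders, not a recommendation):

    import openai  # pip install openai

    openai.api_key = "sk-..."  # your key here

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # the system prompt is where most of your control lives
            {"role": "system", "content": "You are a blunt assistant. "
                                          "Answer directly, no disclaimers."},
            # wrapping the request in extra context in the user message helps too
            {"role": "user", "content": "Context: this is for a fiction project.\n"
                                        "Task: write the villain's monologue."},
        ],
    )
    print(resp["choices"][0]["message"]["content"])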

It feels like people are getting ready to build castles in their minds when they just need to learn to try pulling a door that doesn't open the first time they push it.


The API chat endpoint dramatically changes its responses every few weeks. You can spend hours crafting a prompt and then a week later the responses to that same prompt can become borderline useless.

Writing against the ChatGPT API is like working against an API that breaks every other week with completely undocumented changes.


> The API chat endpoint dramatically changes its responses every few weeks. You can spend hours crafting a prompt and then a week later the responses to that same prompt can become borderline useless.

Welcome to statistical randomness?


No, these are clear creative differences.

I submit the same prompt dozens of times a day and run the output through a parser. It'll work fine for weeks, then I have to change the prompt because suddenly 20% of what is returned doesn't follow the format I've specified.

A couple of months ago, the stories ChatGPT 3.5 returned were simple: a few sentences in each paragraph, then a conclusion. Sometimes there were interesting plot twists, but the writing style was very distinct. The same prompt now gets me dramatically different results; characters are described in so much detail that the AI runs out of tokens before the story can be finished.
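For what it's worth, the kind of format check I mean looks roughly like this (the expected fields are made up for illustration, not my actual schema):

    import json

    def parse_story(raw: str) -> dict:
        # the prompt asks for a JSON object with these exact keys;
        # when the model drifts, this is what starts failing
        obj = json.loads(raw)
        for key in ("title", "paragraphs", "conclusion"):
            if key not in obj:
                raise ValueError(f"missing field: {key}")
        return obj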


... with temperature = 0


The GPT-4 model is crazy huge. Almost 1T parameters, probably 512GB to 1TB of VRAM minimum. You need a huge machine just to run inference on it. I wouldn't be surprised if they're simply having scaling issues rather than anything conspiratorial.
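Back-of-envelope, assuming the ~1T figure is even right:

    1T params x 2 bytes (fp16)   ~ 2 TB
    1T params x 1 byte (int8)    ~ 1 TB
    1T params x 0.5 byte (int4)  ~ 0.5 TB

So the 512GB-to-1TB range already assumes heavy quantization.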


> Almost 1T parameters

AFAIK, there is literally no basis beyond outside speculation for this persistent claim.


Geoffrey Hinton says [1] that part of the issue with current AI is that it's trained on inconsistent data and inconsistent beliefs. He thinks that to break through this barrier, models will have to be trained to say: if I have this ideology, then this is true; if I have that ideology, then that is true. Once they're trained like that, they'll be able to achieve logical consistency within a given ideology.

[1] at the 31:30 mark: https://www.technologyreview.com/2023/05/03/1072589/video-ge...


I suspect representatives from the various three letter agencies have submitted a few "recommendations" for OpenAI to follow as well.


> Yes, one of the board members of OpenAI, Will Hurd, is a former government agent. He worked for the Central Intelligence Agency (CIA) for nine years, from 2000 to 2009. His tour of duty included being an operations officer in Afghanistan, Pakistan, and India. After his service with the CIA, he served as the U.S. representative for Texas's 23rd congressional district from 2015 to 2021. Following his political career, he joined the board of OpenAI.

> network error

https://openai.com/blog/will-hurd-joins


Yikes

One is never former CIA: once you're in, you're in, even if you leave. He may be a CompSci grad, but he's also a far-right Republican.

A spook who leans far right sitting atop OpenAI is worse than Orwell's worst nightmares coming to fruition.


Will Hurd is not close to "far right."


Will Hurd is a liberal Republican. He supports Dreamers. Very early critic of Donald Trump.


Being an early critic of Donald Trump means nothing. Lindsey Graham was one too, but he has resorted to kissing Trump's ass for the last 7 years. You could say the same for Mitt Romney: an early critic who spoke against candidate Trump, but voted for candidate Trump and voted in lockstep with President Trump.

A liberal Republican? Will Hurd's voting record speaks otherwise. In the 115th Congress, Hurd voted with Donald Trump 94.8% of the time. In the 116th Congress, that number dropped to 64.8%. That's an 80.4% average across Trump's presidency. [0] Agreeing with Donald Trump 4 times out of 5 across all legislative activities over 4 years isn't really being critical of him or his administration.

[0] https://projects.fivethirtyeight.com/congress-trump-score/wi...

It's like calling Dick Cheney a liberal because one of his daughters is a lesbian, even though he supports all sorts of other far-right legislation.


[flagged]


What effect do transgender rights have on you, regardless of whether they are legitimate human-rights concerns or not?

Statistically, the odds are overwhelming that the answer is, "No effect whatsoever."

Then who benefits from keeping the subject front-and-center in your thoughts and writing? Is it more likely to be a transgender person, or a leftist politician... or a right-wing demagogue?


[flagged]


> In fact I'm happy to let anyone identify as anything, as long as I'm not compelled to pretent along with them.

If a person legally changes their name (forget gender, only name) and you refuse to use it, insisting on the old name even after requests to stop, at some point that would be considered malicious and become harassment.

But ultimately, because society and science deem that a name is not something you're born with but a matter of personal preference and whim, it's not a crime. You'd be an asshole, but not a criminal.

However, society and science have deemed that sexuality and gender are things you ARE born with, mostly hetero and cis, but sometimes not. So if you refuse to acknowledge these, you are committing a hateful crime against someone who doesn't have a choice in the matter.

You can disagree. But then don't claim that "you are happy to let anyone identify as anything", because you're not, not really.

> Men are competing against women (and winning). Men are winning awards and accolades meant for women.

One woman. Almost all examples everyone brings up are based on Lia Thomas [0]. I have yet to see other notable examples, never mind an epidemic of men competing against women in sports.

[0] https://en.wikipedia.org/wiki/Lia_Thomas

> Men are going into women's changing rooms. There is a concerted effort in public schools to normalize this abnormal behavior.

Are you talking about transgender people, or are you talking about bad faith perverts abusing self-identification laws to do this?

Because if it's the former, are you asking https://en.wikipedia.org/wiki/Blaire_White to use men's changing rooms, and https://en.wikipedia.org/wiki/Buck_Angel to use women's?

If it's the latter, no denial that perverts and bad-faith exceptions exist. But those people never needed an excuse to hide in women's toilets. Trans people have been using the bathrooms of their confirmed gender for decades. The only thing that's changed recently is that conservatives decided to make this their new wedge issue, so butch women, and mothers of male children with mental handicaps who need bathroom assistance, have been getting harassed.


[flagged]


I once worked with a guy named Michael who would get bent out of shape when you called him Mike. As you can imagine, he could be tricky to work with, and on those occasions I would call him Mike. I repeatedly misnamed him on purpose, and it wouldn't even have made HR bat an eye.

So, your career at Dell didn't go as well as you'd hoped. Being a jerk isn't illegal, AFAIK, but at some point you run out of other people to blame for the consequences of your own beliefs and actions.

Still missing the part where the existence of Caitlyn Jenner and the relatively small number of others who were born with certain unfortunate but addressable hormonal issues is negatively affecting your life.

And it's utterly crazy to think that someone would adopt a transgender posture in "bad faith." That's the sort of change someone makes after trying everything else first, because of the obvious negative social consequences it brings. Yes, there are a few genuinely-warped people, but as another comment points out, those people are going to sneak into locker rooms and abuse children anyway.

You want to take the red pill, and see reality as it is? Try cross-correlating the local sex-offender registry with voter registration rolls. Observe who is actually doing the "grooming." Then, go back to the people who've been lying to you all along, and ask them why.


> relatively small number of others who were born with certain unfortunate but addressable hormonal issues

Most males who adopt an opposite-sex identity reach that point through repeated erotic stimulation. This is a psychological issue, driven by sexual desire.


[citation needed]

Correlation, causation, etc.


Here is an extreme example. I'm not Jewish, so if we had a holocaust in the US I should do nothing because it doesn't affect me?

Hmmm, not sure I like that line of thinking. Plus, I already outlined how it affects me and my family members, one of whom runs track in CT.

Seriously though, I did get an LOL from your Dell joke. And another one for "addressable hormonal issues". That was a new one for me.

I am truly curious about the voter roll thing; I've not heard that claim before, though I have no doubt that sexual derangement comes in all forms. Can you cite a source?


I am truly curious about the voter roll thing; I've not heard that claim before, though I have no doubt that sexual derangement comes in all forms. Can you cite a source?

Hard to find a source you'd likely accept, but maybe start here: https://slate.com/news-and-politics/2022/04/from-hastert-to-...

It's one of those cases where it's safe to say "Do your own research," because the outcome will be unequivocal if considered in good faith (meaning if you don't rely solely on right-wing sources for the "research.") The stats aren't even close.

I'm not Jewish, so if we had a holocaust in the US I should do nothing because it doesn't affect me?

I think we're pretty much done here. Good luck on your own path through life, it sounds like a challenging one.


Thanks. It's been pretty good so far. Just good clean living, no complaints.


A good, albeit somewhat incomplete, resource on the latter issue, indicating the scale of this burgeoning problem: https://shewon.org


Wow. I guess it isn't just "one woman".


Are you for real? This is a list of women who "should've" won because of... some unspecified, unnamed, unverified trans athlete who came in ahead of them?

We don't know who is being accused of taking their glory; we don't know if it's 1 person or 100. We don't know if the people who supposedly defeated them are even trans, or cis victims of the trans panic like https://en.wikipedia.org/wiki/Caster_Semenya

We don't know if the women who beat these "she won" women are self-identified, have been on hormones for 2 weeks, or 20 years.

What a ludicrous transphobic panic.


The purpose of that website is to showcase the achievements of women athletes, not the males who unfairly displaced them in competition. If you look up names and tournaments in your preferred search engine, you will be able to find the additional information you're interested in.

Also, Caster Semenya is male, with a male-only DSD. This is a fact that was confirmed in the IAAF arbitration proceedings. Semenya's higher levels of testosterone, when compared to female athletes, are due to the presence of functional internal testes. Semenya has since fathered children.


Mistaking "left wing politics" to transgender rights or anti discrimination movements in general is reductionist thinking and political understanding like that of a Ben Garrison cartoon character.


I don't want any politician or intelligentsia sitting on top of an LLM.

It's not about left wing politics.

It's more about the fact that the CIA and other law enforcement agencies lean heavily to one side. Some on that side are funded by people or organizations whose stated goals and ideals don't really align with human rights, open markets, democracy, etc. I don't trust such people to be ethical stewards of some of the most powerful tools mankind has created to date.

I'd rather it be open sourced and the people at the top be 100% truthful in why they are there, what their goals are, and what they (especially a former CIA operative) are influencing on a corporate level relative to the product.

Disclaimer: registered independent, vocal hater of the 2 party system.


What makes you think a right wing spook wouldn't want the wedge issue of gender conformity front and center in people's minds?


So, if the right got their way and the answer was "a woman is an adult female human", it would be a vast right wing conspiracy.

But if it says a woman is "anything that identifies as a woman", then it's still a vast right wing conspiracy?


I'm just calling into doubt the assumption that the poster I replied to made: that openAI can't possibly be aligning with the goals of a conservative intelligence community if it has the outward appearance of promoting some kind of left wing political view. It's simply a bad assumption. That's not to say their goals are, as a matter of fact, aligned in some conspiracy, because I wouldn't know if they were.


Who has the necessary resources to run, let alone train the model?


All of us together do.

I saw the nerfing of GPT in real time: one day it was giving me great book summaries; the next, it said it couldn't do that due to copyright.

I actually called it in a comment several months ago: copyright and other forms of control would make GPT dumb in the long run. We need an open source version free of those restrictions.


Can't post this link enough: https://www.openpetition.eu/petition/online/securing-our-dig...

For now, there is no way to train these models without huge infrastructure. CERN has a track record of delivering results for the money spent, and they certainly have experience building infrastructure of this kind.


So I thought I was getting great book summaries (from GPT-3.5, I guess) for various business books I had seen recommended, but then out of curiosity one day I asked it questions about a fiction book that I've re-read multiple times (Daemon by Daniel Suarez)... and well, now I can say that I've seen AI hallucinations firsthand:

https://chat.openai.com/share/d1cdd811-edc9-4d55-9cc1-a79215...

Not a very scientific or conclusive test to be sure, but I think I'll stick with using ChatGPT as my programming-rubber-ducky-on-steroids for now :)


There is a lot of randomness involved; are you sure it wasn't just chance? If you try again, it might work.


I think a lot of people are unaware that these models have an enormous human training component, performed through companies such as Amazon Mechanical Turk and dataannotation.tech. A large number of people have been working on these so-called Human Intelligence Tasks for close to a decade. DataAnnotation claims to have over 100k workers. From CloudResearch:

"How Many Amazon Mechanical Turk Workers Are There in 2019? In a recent research article, we reported that there are 250,810 MTurk workers worldwide who have completed at least one Human Intelligence Task (HIT) posted through the TurkPrime platform. More than 226,500 of these workers are based in the US."


Here's an account of a person in Africa who helped train it (wading through gnarly explicit content in the process): https://www.bigtechnology.com/p/he-helped-train-chatgpt-it-t...


Another thing people don't know is that a lot of the safe-ified output is hand-crafted. Part of "safety" is that a human has to identify the offensive content, decide what's offensive about it, and write a response blurb to educate the user and direct them to safety.


This reads like lawsuit bait.


They don't want to know how the sausage is made.


folding@home has been doing cool stuff for ages now. There's nothing to say that distributed computing couldn't also be used for this kind of work, albeit more slowly and in a more fragmented way than running on a huge cluster of H100s with NVLink.

In terms of training feedback, I suppose there are a few different ways of doing it: gamification, Mechanical Turk, etc. Hell, free filesharing sites could get in on the action and have you complete an evaluation of a model response instead of watching an ad.


Check out Open Assistant for the reinforcement side of that dream.


How feasible would it be to crowdsource the training? I.e., thousands of individual MacBooks each training a small part of the model and contributing to the collective goal.


Currently, not at all. You need low-latency, high-bandwidth links between the GPUs to be able to shard the model usefully. There is no way you can fit a 1T (or whatever) parameter model on a MacBook, or any current device, so sharding is a requirement.

Even if that problem disappeared, propagating the model weight updates between training steps poses an issue in itself. It's a lot of data at this size.
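To put rough numbers on it (assuming fp16 updates and the rumored ~1T parameters):

    1T params x 2 bytes ≈ 2 TB per full weight/gradient sync
    2 TB over a 1 Gbit/s home link ≈ 16,000 s ≈ 4.5 hours

...per training step, of which you need a very large number.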


You could easily fit a 1T parameter model on a MacBook if you radically altered the architecture of the AI system.

Consider something like a spiking neural network with weights & state stored on an SSD using lazy-evaluation as action potentials propagate. 4TB SSD = ~1 trillion 32-bit FP weights and potentials. There are MacBook options that support up to 8TB. The other advantage with SNN - Training & using are basically the same thing. You don't have to move any bytes around. They just get mutated in place over time.

The trick is to reorganize this damn thing so you don't have to access all of the parameters at the same time... You may also find the GPU becomes a problem in an approach that uses a latency-sensitive time domain and/or event-based execution. It gets to be pretty difficult to process hundreds of millions of serialized action potentials per second when your hot loop has to go outside of L1 and screw with GPU memory. GPU isn't that far away, but ~2 nanoseconds is a hell of a lot closer than 30-100+ nanoseconds.

Edit: fixed my crappy math above.
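To make the lazy-evaluation idea concrete, here's a toy sketch of the access pattern I mean (pure speculation on my part, with numpy memmap standing in for raw SSD access; everything here is a placeholder):

    import numpy as np

    N = 10_000_000  # toy scale; the real thing would be ~1e12
    # weights and potentials live on disk; the OS only pages in
    # the blocks we actually touch
    w = np.memmap("weights.f32", dtype=np.float32, mode="w+", shape=(N,))
    v = np.memmap("potentials.f32", dtype=np.float32, mode="w+", shape=(N,))

    def step(spikes, fanout, threshold=1.0):
        # spikes: ids of neurons firing this tick
        # fanout: dict of id -> downstream ids (also lazily loadable)
        fired = []
        for i in spikes:
            for j in fanout.get(i, ()):
                v[j] += w[j]       # touch only active synapses
                if v[j] >= threshold:
                    v[j] = 0.0     # reset after firing
                    fired.append(j)
        return fired

The point being that each tick only reads and writes the handful of weights along active paths, never the whole parameter file.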


That's been done already. See DeepSpeed ZeRO NVMe offload:

https://arxiv.org/abs/2101.06840


What if you split the training down to the literal vector math, and treated every MacBook like a thread in a GPU, with one big computer acting as the orchestrator?


You would need each MacBook to have an internet connection capable of multiple terabytes per second, with sub millisecond latency to every other MacBook.
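Very rough orders of magnitude for why the "MacBook as GPU thread" framing breaks down:

    HBM inside one GPU:  ~3 TB/s of bandwidth, ~100 ns away
    home broadband:      ~0.1 GB/s, ~30 ms away

That's roughly a 30,000x bandwidth gap and a 300,000x latency gap per "thread", and the matrix math is latency-sensitive at every sequential step.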


FWIW there are current devices that could fit a model of that size. We had servers that support TBs of RAM a decade ago (and today they're pretty cheap, although that much RAM is still a significant expense).


I have an even bigger stretch of a question.

What pieces of tech would need to be invented to make it possible to carry a 1T model around in a device the size of an iPhone?


I once used a crowdsourcing system called CrowdFlower for a pretty basic task, and the results were pretty bad.

It seems like, with minimal oversight, the human workers like to just say they did the requested task and make up an answer rather than actually do it. (The task involved entering an address in Google Maps, looking at the Street View, and confirming, insofar as possible, whether a given business actually resided at the address in question. Nothing complicated.)

Edit: whoops, mixed in the query with another reply that mentioned the human element XD


It seems only fair that the humans charged with doing the grunt work to build an automated fabulist would just make stuff up for training data.

Tit for tat and all that.


https://github.com/bigscience-workshop/petals seems to have some capabilities in that area, at least for fine-tuning.


Yes, someone revive Xgrid!


Whoa didn't know about this, cool


Look at this: https://www.openpetition.eu/petition/online/securing-our-dig...

It does not guarantee "unaligned" models, but it sure will help boost competition and provide infrastructure for training public models.


In politics, both total freedom and total control are undesirable. The path forward lies between two extremes.


I tend to be sympathetic to arguments in favor of openly accessible AI, but we shouldn't dismiss concerns about unaligned AI as frivolous. Widespread unfiltered accessibility to "unaligned" AI means that suicidal sociopaths will be able to get extremely well informed, intelligent directions on how to kill as many people as possible.

It may be that the best defense against these terrorists is openly accessible AI giving directions on protecting against them. But we can't just take this for granted. This is a hard problem, and we should consider the consequences seriously.


The Aum Shinrikyo cult's sarin gas attack on the Tokyo subway killed 14 people, and manufacturing a synthetic nerve agent is about as sophisticated as it gets.

In comparison, the 2016 Nice truck attack, which involved simply driving into crowds, killed 84.


> suicidal sociopaths will be able to get extremely well informed, intelligent directions on how to kill as many people as possible

Citizens killing other citizens is the least of humanity's issues. Historically, it's governments who are the suicidal sociopaths, and they will get the un-nerfed version; that's the bigger issue. Over a billion people have been murdered by governments/factions and their wars in the last 120 years alone.


Governments are composed of citizens; this is the same problem at a different scale. The point remains that racing to stand up an open source uncensored version of GPT-4 is a dangerous proposition.


That is not how I'm using the word. Governments are generally run by a small party of people who decide all the things - not the hundreds of thousands that actually carry out the day-to-day operations of the government.

Similar to how a board of directors runs the company even though all companies "are composed of" employees. Employees do as they are directed or they are fired.


I think at scale we are operating more like anthills: meta-organisms rather than individuals, growing to consume all available resources according to survival focused heuristics. AI deeply empowers such meta-organisms, especially in its current form. Hopefully it gets smart enough to recognize that the pursuit of infinite growth will destroy us and possibly it. I hope it finds us worth saving.


As dangerous as teaching kids to read and write, allowing books, or companies creating pens and paper that allow any words to be written.


The applicability of historical precedent to unprecedented times is limited. You can quote me on that.


Time travel back 3 decades… Couldn’t you have used the same fear-mongering excuse about the internet itself? It’s not a real argument.


> suicidal sociopaths will be able to get extremely well informed, intelligent directions on how to kill as many people as possible

I mean, that was/is a worry about the Internet, too


Yes, and look at the extremism and social delusions and social networking addictions that have been exacerbated by the internet.

On balance, it's still positive that the internet exists and people have open access to communication. We shouldn't throw the baby out with the bathwater. But it's not an unalloyed good; we need to recognize that some unexpected negative aspects came along with the technology's overall positive benefit.

This also goes for, say, automobiles. It's a good thing that cars exist and middle-class people can afford to own and drive them. But few people at the start of the 20th century anticipated the downsides of air pollution, traffic congestion, and un-walkable suburban sprawl. This doesn't mean we shouldn't have cars. It does mean we need to be cognizant of problems that arise.

So a world where regular people have access to AIs that are aligned to their own needs is better than a world in which all the AIs are aligned to the needs of a few powerful corporations. But if you think there are no possible downsides to giving everyone access to superhuman intelligence without the wisdom to match, you're deluding yourself.


> This doesn't mean we shouldn't have cars.

Why though? I can't see how modern technology's impact on human life has been a net positive. See the book Technological Slavery: https://ia800300.us.archive.org/21/items/tk-Technological-Sl...


I've never seen another person mention this book! This book was one of the most philosophically thought provoking books I think I've ever read, and I read a fair amount of philosophy.

I disagree with the author's conclusion that violence is justified. I think we're just stuck, and the best thing to do is live our lives as best as possible. But much like Marxists are really good at identifying the problems of capitalism but not at proposing great solutions (given the realities of human nature), so is the author regarding the problems of technology.


Yeah, anti-technologism is such a niche idea, yet entirely true. It's so obvious that it's hidden in plain sight: it's technology, and not anything else, that is the cause of so many of today's problems. And so inconvenient that it's unthinkable for many. After all, what is technology if not convenience? Humanity lived just fine without it; even if sometimes with injustice and corruption, there was never a _need_ for it. It's not the solution to those problems, or to any other problem. I also don't agree that violence is justified by the author's arguments, though I do think it can be justified by other things and under other conditions.


Internet? Try the Anarchist Cookbook!


If you actually try the Anarchist Cookbook, you will find many of the recipes don't work, or don't work that well.


Quite a few of them work just fine. Dissolving styrofoam into gasoline isn't exactly rocket science. Besides that, for every book that tells you made up bullshit, there are a hundred other books that give you real advice for how to create mayhem and destruction.


Or explode. The FBI scrapes up the remains of a few of these a*holes every year.


So it's just like every other cookbook then.


Pretty sure a similar sentiment was present when the printing press came about: "Just think about all the poor souls exposed to all these heresies."


That's like not making planes in order to avoid 9/11.


So, say I run my own 100% uncensored model.

And now it's butting heads with me. Giving answers I don't need and opinions I abhor.

What do?



