Hacker News | random_cynic's comments

ChatGPT provides far more value than StackOverflow currently. It's not just trained on SO answers but all of the manuals/help pages, Github issues and forum posts. In addition you can continue a conversation. No rigid format or gatekeeping like stackoverflow. I don't see a real use case for Stackoverflow now. If I want to ask humans, Discord/IRC channels are far better option.


> No rigid format or gatekeeping like stackoverflow.

What bothers you about gatekeeping? I could guess, but I'm asking so you say it out loud. Then you can weigh it against other problems, such as moats (competitive barriers).

OpenAI spent something like $3M on training GPT-3. This is a pretty big moat. But almost certainly more valuable in dollar terms is the first-mover advantage which provides millions of human eye-hours used for RLHF.

I wouldn't be so eager to trade the gatekeepers you so fear for even an openly available chat service that is happy to automate away as much information work as possible.

The Stack Overflow model is (was) pretty darn good -- people help each other out, the company made money, some people got noticed for their skills, products got built faster and better (on the whole, I hope). Contrast the human-generated content era with what we have now, which appears to be the machine-ingested content era. There are legions of lawsuits against companies scraping data without permission and/or attribution.


Those companies know it is unethical at best, but they're making a quick buck before the laws and lawsuits catch up. It's the Wild West era, and they found the gold.

If it is unregulated then it will be exploited to the maximum profit, consequences be damned.


> I wouldn't be so eager to trade the gatekeepers you so fear for even an openly available chat service that is happy to automate away as much information work as possible.

Don't flatter yourself. People want to solve their problems so that they can build what they want to. They don't have time for shenanigans from internet jerks who get their validation from imaginary internet points.


It can't reliably cite its source for an answer.


Hardly matters for Stack Overflow-style questions if the provided solutions work and solve the problem you're having. Which, for me, happens the majority of the time (with GPT-4, not the free version).


If you copy-paste solutions from SO then please at least cite your sources and their license (CC-BY-SA).


You might not want to hear this, but no one does this. Should they? Probably. But most people don't use Ctrl+C, Ctrl+V in the first place for SO answers.


Just a single data point, but when I copy & paste a snippet from Stack Overflow, I always add a comment "// source: https://stackoverflow.com/questions/xxx#yyy".

I find it both respectful of whoever wrote the answer in the first place and useful for future readers of the code: the Stack Overflow answer often provides context and explanation for what would otherwise be an obscure piece of code.

Pretty darn useful if you ask me: those who want more information can follow the link, casual readers can skip it, and the whole process is fair to the author.
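
For what it's worth, an attribution comment like the one described might look like this. The snippet and the URL placeholder are hypothetical, not from a real SO post; the pattern of crediting the question and noting the CC BY-SA license is the point:

```python
import json

# Order-independent hash of a flat dict (serialize with sorted keys, then hash).
# source: https://stackoverflow.com/questions/xxx#yyy (CC BY-SA)
def dict_hash(d: dict) -> int:
    return hash(json.dumps(d, sort_keys=True))
```

The comment costs one line and preserves the link to the original explanation for whoever reads the code next.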


I don't think I've ever copied enough from Stackoverflow for copyright to become relevant. Rarely more than one line verbatim.

It embarrasses me to think that somebody should feel obliged to cite me when they use one of my answers. I don't know how to take the partnership with OpenAI, though. They bill me when I use their service; it's not collaborative like Stack Overflow.


No one should copy-paste solutions from anywhere. FWIW, 99% of the content on SO is hardly "original"; most of it is itself copy-pasted from previous solutions or the original user guides/manuals.


In general I'd agree that it's best to use answers just as a guide. That said, I wasn't trying to pass judgment, just asking for attribution, which is a best practice and often required by the license itself.


I'd rather not go around in circles while ChatGPT feeds me bullshit information. When this happens I go to Google and read an SO answer with the correct information, and also get an informed discussion around the subject.

For the easy answers LLMs are fine, but I usually want an answer to a niche issue or edge case, where LLMs have to be told repeatedly that they are plain wrong before getting to something resembling an answer.


[flagged]


You've been breaking the site guidelines so often and so badly that I've banned this account:

https://news.ycombinator.com/item?id=40306506

https://news.ycombinator.com/item?id=40306495

https://news.ycombinator.com/item?id=40304632

https://news.ycombinator.com/item?id=39686999

https://news.ycombinator.com/item?id=39406496

https://news.ycombinator.com/item?id=38374129

https://news.ycombinator.com/item?id=38327047

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


No it doesn't. It is overly censored.


The 1% who actually work on AI don't use terms as generic as "AI". Way to reveal yourself as a college undergrad who read a couple of popular-science books, downloaded the MNIST data, and thinks they're an "expert".


Yes, the horse carriage drivers had similar lines of thought when they saw the first-gen automobiles.


Your point being? The horses were replaced, all right; the carriage drivers have been thriving ever since.


Lmao, what are you talking/coping about? What happened to carriage drivers is pretty well documented. Maybe ask one of these AI chatbots; they will summarize it for you.


Thank you very much good sir for your kind advice.

I, in turn, suggest looking out of the window: modern carriage drivers are called truck drivers and their number is edging towards 10M worldwide.

Now add to this rail carriage drivers, sea carriage drivers and air carriage drivers. Years of progress changed the way you feed your "horse" and made reins somewhat more complex, but fundamentally nothing changed.


They drive for Uber now


And that's thriving by what measure?

No employment stability, no benefits, no health insurance, no vacations, at complete mercy of basically a single company.

Yeah, thriving.


Unlike those extremely rich horse-carriage drivers of days of yore.


That's a false metric. With exponential progress, we have to adjust equally rapidly. It's quite obvious that photos and videos will remain usable as proof of something for far less time than the written medium did.


You're completely missing the point. Who cares what VFX artists and studios want if anyone with a small team can create high-quality entertaining videos that millions of people would pay to watch? And if you think that bar is too high for AI, then you haven't actually seen the quality of the average videos and films generated these days.


I was specifically responding to this point which seemed to be the thesis of the parent commenter.

> I think we will see studios like ILM pivoting to AI in the near future. There's no need for 200 VFX artists when you can have 15 artists working with AI tooling

Yes, this will bring the barrier to entry for small teams down significantly. However, it's not going to replace 200-person studios like ILM.


I believe this to be a failure of imagination. You're assuming Sora stays like this. The reality is we are on an exponential curve and it's just a matter of time. ILM will be the last to go, but it'll eventually go, in the sense of needing fewer humans to create the same output.


This could just be a damage control attempt. Irrespective of whether the original report is true, the extra attention at the current stage is not very desirable.


No, that hasn't at all been the case. The board acted like the most incompetent group of individuals who've ever been handed any responsibility. If they had gone through due process, notified their employees and investors, and put out a statement of why they were firing the CEO, instead of doing it over a 15-minute Google Meet call and then going completely silent, none of this outrage would have taken place.


Actually, the board may not have acted in the most professional way, but in the process they kind of proved Sam Altman is unfireable for sure, even if they didn't intend to.

They did notify everyone. They did it after the firing, which is within their rights. They may also choose to stay silent if there is a legitimate reason for it, such as the reasons being even more harmful to the organization if made known. This is speculation, obviously.

In any case, they didn't omit anything they needed to do, and they didn't exercise a power they didn't have. The end result is that the board they chose will be impotent for the moment, for sure.


Firing Sam was within the board's rights. And 90% of the employees threatening to leave was within their rights.

All this proved is that you can't take a major action that is deeply unpopular with employees, without consulting them, and expect to still have a functioning organization. This should be obvious, but it apparently never crossed the board's mind.


A lot of these high-up tech leaders seem to forget this regularly. They sit on their thrones and dictate wild swings, and are used to having people obey. They get all the praise and adulation when things go well, and when things don't go well they golden parachute into some other organization who hires based on resume titles rather than leadership and technical ability. It doesn't surprise me at all that they were caught off guard by this.


Not sure how much the employees threatening to leave had to do with negotiating Sam back; it must have been a big factor, but not all of it. During the talks, Emmett, Angelo and Ilya must have decided that it wasn't a good firing, that it was a mistake in retrospect, and that it had to be fixed.


I get your point, but the fact that something is within your rights may or may not mean it's also the proper thing to do, right?

Like, nobody is going to arrest you for spitting on the street, especially if you're an old grandpa. Nobody is going to arrest you for saying nasty things about somebody's mom.

You get my point: to some extent both are kind of within somebody's rights, although you can be sued or reported for misbehaving. And that's the key point: misbehavior.

Just because something is within your rights doesn't mean you're not misbehaving or acting in an immature way.

To be clear, I'm not claiming the board of directors did or didn't act in an immature way. I'm just arguing against the claim in your text that, because someone is acting within their rights, it's necessarily also the "right" thing to do, which is not always the case.


> proved Sam Altman is unfireable [without explaining why to its employees].


Their communication was completely insufficient. There is no possible world in which the board could be considered "competent" or "professional."


If you read my comment again, I'm talking about their competence, not their rights. Those are two entirely different things.


> They may also choose to stay silent

They may choose to, and they did choose to.

But it was an incompetent choice. (Obviously.)


> The board acted like the most incompetent group of individuals who've ever been handed any responsibility.

This is overly dramatic, but I suppose that's par for this round.

> none of this outrage would have taken place.

Yeah... I highly doubt this, personally. I'm sure the outrage would have been similar, as HN's current favorite CEO was fired.


HN sentiment is pretty ambivalent regarding Altman. Yes, almost everyone agrees he's important, but a big group thinks he's basically landed gentry exploiting ML researchers, another thinks he's a genius for getting MS to pay for GPT costs, etc.


I think a page developed by YC thinks rather more highly of him than that ;)


Just putting my hand up as one of the dudes who happened to enter my email on a YC forum (not "page") but really doesn't like the guy, lol.

I also have a Twitter account. Guess my opinion on the current or former Twitter CEOs?


Agreed. It's naive to think that a decision this unpopular somehow wouldn't have resulted in dissent and fracturing if only they had given it a better explanation and dotted more i's.

Imagine arguing this in another context: "Man, if only the Supreme Court had clearly articulated its reasoning in overturning Roe v Wade, there wouldn't have been all this outrage over it."

(I'm happy to accept that there's plenty of room for avoiding some of the damage, like the torrents of observers thinking "these board members clearly don't know what they're doing".)


Exactly. 3 CEO switches in a week is ridiculous


Maybe it came at the advice of Rishi Sunak when he and Altman met last week!


Four CEO changes in five days to be precise.

Sam -> Mira -> Emmet -> Sam


Those are three changes. Every arrow is one.


Classic fence post error.


And technically 2 new CEOs
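
The fencepost arithmetic in this subthread is easy to sanity-check: n names in sequence give n - 1 arrows, and the count of genuinely new CEOs is a different question again. A quick sketch using the sequence quoted above:

```python
# The week's CEO sequence, one entry per person in charge (Sam -> Mira -> Emmett -> Sam).
ceos = ["Sam", "Mira", "Emmett", "Sam"]

# Changes are the arrows between consecutive names: n states, n - 1 transitions.
changes = len(ceos) - 1

# "New" CEOs are the names who weren't the CEO at the start of the week.
new_ceos = {name for name in ceos if name != ceos[0]}

print(changes)        # 3
print(len(new_ceos))  # 2
```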


The three hard problems: naming things and off-by-one errors


I always heard:

There are two hard problems: naming things, cache invalidation, and off-by-one errors.


1 hard problems.

naming things, cache invalidation, off-by-one errors, and overflows.


Thank you for not editing this away. Easy mistake to make, and gave us a good laugh (hopefully laughing with you. Everyone who's ever programmed has made the same error).


Set semantics or list semantics?


Edit: Making no excuses, this one is embarrassing.


> The board acted like the most incompetent group of individuals who've ever been handed any responsibility.

This whole conversation has been full of appeals to authority. Just because us tech people don't know some of these names and their accomplishments, we talk about them being "weak" members. The more I learn, the more I think this board was full of smart people who didn't play business politics well (and that's OK by me, as business politics isn't supposed to be something they have to deal with).

Their lack of entanglements makes them stronger members, in my view. Their miscalculation was in underestimating how broken the system that undermined them is. And you and I are part of that brokenness, even in how we talk about it here.


> If they went through due process, notified their employees and investors, and put out a statement of why they're firing the CEO

Did you read the bylaws? They have no responsibility to do any of that.


  Here lies the body of William Jay,
  Who died maintaining his right of way –
  He was right, dead right, as he sped along,
  But he's just as dead as if he were wrong.

    - Dale Carnegie


That's not the point. Whether or not it was in the bylaws, this would have been the sensible thing to do.


You don't have a responsibility to wash yourself before getting on a mass transit vehicle full of people. It's within your rights not to, and to be the smelliest person on the bus.

Does that mean it's right or professional?

I get your point, but I hope you get mine as well: just because you have no responsibility for something doesn't mean you're right, or not unethical, in doing or not doing that thing. So I feel like you're missing the point a little.


> none of this outrage would have taken place.

It most certainly would have still taken place; no one cares about how it was done. What they care about is being able to make $$, and that was clearly going to be less heavily prioritized without Altman (which is why MSFT embraced him and his engineers almost immediately).

> notified their employees and investors

They did notify their employees, and they have no fiduciary duty to investors as a nonprofit.


If he has great sway with Microsoft and OpenAI employees, how has he failed as a leader? Hacker News commenters are becoming more and more like Reddit every day.


No, that's horseshit, since it does not constitute a valid legal reason for his removal, nor is it in line with their blog post. They would get sued out of their a* if they acted based on this.


ITT: people in denial while the writing on the wall is literally shoved into their face.


Alternatively, ITT: people who have been through the design-to-dev handoff process and understand the comparative advantages of both teams.

Tools like this can be good for indie developers, the ones who in the past may have had to learn a bit of dev/design to release something. The division of labour in larger teams is different. The product manager may have a user-research background instead of a software one. The designer may be good with semi-complete prototypes in Framer, but the responsibility for delivering production code may still rest with the dev team.


I am still erring on the side of skepticism about AI "taking all jobs with computers", but I have to admit that seeing the progress has made me doubt my position a bit. Even if it doesn't take ALL jobs, only relatively low-level ones, that is still an enormous amount of work that will at the very least change drastically.

What really shook me is that GPT-4 can spit out quite solid code for various things. I know there is a lot more to software development than just writing code but if you had asked me 3 years ago if AI would be able to code AT ALL within 10 years I would have said "no chance" with 100% certainty. Had to accept I was very wrong about that and don't have the technical background to really assess how far/fast this stuff can go.


Writing was on the wall over a year ago, shocked how many were in complete denial then. Even more surprising now.

People don't even seem to grasp that the next gen of these tools won't be rolling the dice once; it'll be rolling it 1,000 times and then you pick the one that nailed it. The generation after that will roll 10,000 times and pick the one that nailed it without your input at all.
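
The "roll the dice 1,000 times and pick the winner" idea is essentially best-of-n sampling: generate many candidates, score each with some evaluator, keep the best. A toy sketch, where `generate` and `score` are hypothetical stand-ins for a model and a verifier:

```python
import random

def generate(prompt: str, rng: random.Random) -> str:
    # Stand-in for sampling one candidate answer from a model.
    return f"{prompt}: candidate {rng.randrange(100)}"

def score(candidate: str) -> float:
    # Stand-in for a verifier / reward model; here just a toy heuristic
    # that reads the trailing number back out of the candidate string.
    return float(candidate.rsplit(" ", 1)[-1])

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    # Roll the dice n times and keep the highest-scoring candidate.
    rng = random.Random(seed)
    return max((generate(prompt, rng) for _ in range(n)), key=score)

print(best_of_n("fix the bug", n=1000))
```

Whether the scheme helps in practice hinges entirely on how good the scorer is; with a weak verifier, more rolls just pick a more confidently wrong answer.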


Meh. We've been here before: https://en.wikipedia.org/wiki/AI_winter


I see it: the majority of knowledge work in the next 20 years will be done better by computers than by humans. This will destroy the middle class worldwide.

That's the fear anyway.


To be fair, people also said that 20 years ago: https://en.wikipedia.org/wiki/AI_winter


"Create an app that prints hello world"

> print("hello world")

holy shittttt :o :o

