Hacker News
Paul Graham AI Essay Writer (paulgrahamai.com)
73 points by aubthread on Jan 24, 2023 | 33 comments



Well, I asked for a discussion of Smurfette as a sex symbol, and instead I got an article about She-Ra, the sexualization of cartoons, and some interesting discourse about Pixar and being able to do things in animation that you can't do otherwise [0]. Still, it was pretty good, even though it didn't mention the Smurfs.

[0] https://www.paulgrahamai.com/recPKFXrH5aI6KUks


Wandering off from the topic seems legit. Any human would do the same.


The output from asking it to write an essay on "Lisp" is a real gem [1]! There are some absurd things in there, like "Lisp used to be Scheme", but also some real gems of insight, like "most languages weren't designed by their users, but Lisp kinda was". And there's the obvious AI garbage nonsense, like "MIT programmers are people and people are MIT programmers" :P

1: https://www.paulgrahamai.com/recAjXpYUQCDArMUT


Haha I think you've summarized it well! I've gotten some gems of one-liners, but often the arc of the essay is unintelligible or facts are reversed.

On Sam Altman: "He's not just a fighter pilot. He's a helicopter pilot."

On Stripe: "Online payments is a huge, broken mess. [...] And once you've done it, you have to deal with a Byzantine spaghetti of fraud detection, chargebacks, and so on."


It's hard to unsee these articles, isn't it?

Although I know it's AI-generated nonsense (fun to read, though), it's as if parts of my brain believe it's real and want to update themselves based on it.

Almost a bit scary. I cannot read the texts without feeling that I'm affected by them. The brain lacks a `--dry-run` flag?

In the future: people will read AI-generated nonsense, it'll change their worldview, and by the time they learn that what they read was AI nonsense, their brains and beliefs will already have been altered. No undo.

(It's like that already with ordinary news (or "news") media, but with AI this can now be mass-produced.)


Perhaps this will cause people to think for themselves more. There will be no zeitgeist at all, as there will be so many diverging AI-generated beliefs. Then again, people said the same thing about the internet. But perhaps there still isn't enough diverse thought on the internet, just enough to force people into their own echo chambers. Maybe AI (ML, DL, whatever) will force people out of those chambers somehow.


> Perhaps this will cause people to think for themselves more

Seems more likely that people will think whatever the best-funded and best-operated LLMs tell them to believe.


This is a good point. More and more people will unknowingly encounter AI content and then feel disconcerted when learning of that fact. It seems likely that it will lead to a general distrust of the technology, insofar as the AI content contains falsehoods.

I suspect we'll see new developments in systems and policies relating to provenance.


Great advice on minimizing distractions: "You may be thinking 'I can't just delete my browser or delete my PC.' But you can delete your browser, or delete your PC, or move to a country where there are no PCs. If you want to get things done, you have to be willing to make sacrifices."

https://www.paulgrahamai.com/rect2Rp1eu7V1I8JG


Haha! Only real ones understand.

This has echoes of his Disconnecting Distraction piece!


I turned GPT-3 into PGT-3. Browse or create essays using a model fine-tuned on Paul's 200+ essays. Sorry, Paul!


In the late 19th century, some Native Americans refused to have their photos taken, fearing the process would steal a person's soul and very nature. How prescient of them to know that, 100 years later, the algorithms would be coming: to chew on our photos, average our voices, scan our faces, integrate and differentiate our words, and generalize our gait, demeanor, and ideas into a multilayered deep network that steals all our essence and soul. We are destined to be cloned into thousands of insipid Agent Smiths. :-( I say we burn it now....


https://www.paulgrahamai.com/recERuX0piB0V6Xfa

I think I broke it? There's tons of repetition and then it fails.


Oof, sorry about that. I guess being a great software engineer isn't so straightforward :)


Thanks everyone for checking this out! Anyone at OpenAI here that can expedite my limit increase request?


Good job! Some very funny pieces


It requires username and password. I couldn't see anything.


I am having the same problem


Yes, and when it works, it sometimes takes forever. Seems I'll never learn how to teach my parents' cat to fly!


The Paul we didn't want but the product team said we needed


If Paul were Clippy.


It's hilarious, I got some really fun results.

https://www.paulgrahamai.com/recNhTl3FeA0hHt9f


I think once you get to the third paragraph, it's best to start scrolling down quickly, sampling the opening sentence of each successive one. Then, imagine a few HN comments praising such lucid and profound analysis.


> Then, imagine a few HN comments praising such lucid and profound analysis.

Feels like praising Paul's thoughts on a topic with a surreptitious link to the AI version should be the HN equivalent of rickrolling in future.

(I'd link an essay on never giving you up, but it's let me down by not generating one)


This is really bad; it looks like the kind of output you'd get from a self-hosted GPT-2 instance. The article's "contents" don't include any reference to the given topic; it's just repeated mumbo jumbo: https://www.paulgrahamai.com/recnOmtVDt9C4C8ss


Yeah, that's certainly a wonky result. What do you think would cause it? My guesses would be (i) the lack of fine-tune content specifically related to 'herding' or 'cats', or (ii) the ratio of completions (~200) to completion length (~2k words each) in the fine-tune data. I'm new to this, so I appreciate any feedback.
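For anyone curious what "~200 completions of ~2k words each" looks like in practice, here's a minimal sketch of preparing that kind of training data in the prompt/completion JSONL shape the (legacy, 2023-era) OpenAI fine-tuning endpoint expected. The prompt template, separator, and word budget are my assumptions, not the site author's actual pipeline.

```python
import json

def essays_to_finetune_records(essays, max_completion_words=2000):
    """Turn (title, body) pairs into prompt/completion records.

    Long essays are split into chunks so each completion stays under
    the word budget. The '###' separator and ' END' stop marker follow
    the conventions the legacy fine-tuning docs suggested; the exact
    prompt wording here is a hypothetical choice.
    """
    records = []
    for title, body in essays:
        words = body.split()
        for i in range(0, len(words), max_completion_words):
            chunk = " ".join(words[i:i + max_completion_words])
            records.append({
                "prompt": f"Write an essay titled: {title}\n\n###\n\n",
                # Completions conventionally start with a space and
                # end with a fixed stop sequence.
                "completion": " " + chunk + " END",
            })
    return records

def write_jsonl(records, path):
    """Write one JSON object per line, the format the API ingested."""
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")

# Tiny stand-in corpus: one 4500-word "essay" becomes 3 chunked records.
essays = [("Beating the Averages", "word " * 4500)]
records = essays_to_finetune_records(essays)
print(len(records))
```

With only ~200 such records, each chunk's influence on the fine-tuned model is large, which could plausibly explain off-topic or repetitive output for prompts far from the training distribution.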


And .. now it's behind an HTTP Basic Auth login.

I couldn't even get it to work once. All I wanted to know is what pg has to say about teledildonics. I guess the answer will remain: "nothing".


Paul is writing ... window hangs for me after countdown reaches 0:00 but maybe it's just overloaded. Or maybe Paul is overthinking the writing on this one.


I'd be interested to hear about how the infrastructure works, the queueing, the storage, how it's hosted, how it's throttled...


This validates what Paul Graham has been saying about AI and quality of writing.


Heads up: I'm getting a 502 when I try to use it.


(Example)

How to Start a Successful Business

January 2023

I've been writing for a while about how to start a startup, and I've noticed a pattern in the mistakes founders make. I thought I'd write a little about what I think is the most common mistake, because it's so easy to avoid.

The most common mistake founders make is to underestimate the founders. They think they're smarter than they are.

This is the mistake I made when I started Viaweb. I thought I was smarter than I was. I thought I could build a web application, but what I actually built was a Lisp application that ran on a web server. That was not what I wanted to build. Only when I realized I couldn't build what I wanted did I switch to building something I could build: the web.

(I was lucky to realize this when I did. I was only a few months from running out of money.)

The reason this mistake is so common is that it's usually harmless. It's usually harmless to overestimate your own abilities. If you think you can do a 10 day project in 7 days, and you're right, you save 3 days. If you think you can do a 10 day project in 4 days, and you're right, you save 6 days. The difference between 7 days and 4 days is the same as the difference between 4 days and 1 day. [1]

Only when you're building something new can underestimating your own abilities be harmless. Only then can you be sure you're going to hit a point where you realize you can't do what you thought you could. And then you're in trouble. [2]

The reason founders make this mistake is that they're optimists. It's the nature of founders to overestimate their own abilities. They have to be. If they thought about how hard it would be to start a successful startup, they'd never do it.

But it's not just founders who overestimate their own abilities. It's everyone. This is a mistake that can only be made by people who are both optimists and trying to do something new.

Fortunately, it's also a mistake that's easy to avoid. The reason founders underestimate their own abilities is that they're pessimists. They have to be. Otherwise they wouldn't be founders; they'd be investors. So if you want to avoid this mistake, you have to consciously take on some of the pessimism of a founder. [3]

If you want to start a successful startup, you have to be as pessimistic as a founder, but only about yourself. You have to be as pessimistic as a founder about the chances of success, and as optimistic as a founder about your own ability to make it.

Notes

[1] There is one case where overestimating your abilities can be a good thing: when you're doing something you know you can do. Then you can save time by underestimating.

[2] The reason this mistake is so dangerous when you're building something new is that you're forced to make it at exactly the worst time. You're forced to realize you can't do what you thought you could at exactly the moment when you need to be able to.

[3] There are two ways to be pessimistic. You can be pessimistic about the chances of success, or you can be pessimistic about your own ability to achieve it. I've found the latter to be more effective.

For example, if you believe you can achieve a 10x improvement over existing solutions, but believe there's only a 1% chance you can succeed, you're overestimating your own abilities. You should instead believe you can achieve a 10x improvement, but there's only a 10% chance you can succeed.

In the first case you're overestimating your ability to achieve a 10x improvement. In the second you're underestimating it. But the second way of being pessimistic leads to the right answer.

For example, if you believe there's only a 10% chance you can succeed, you should either not start a startup, or only start one as a hobby. But if you believe there's only a 10% chance you can achieve a 10x improvement, you should definitely start a startup to try.

If you believe there's only a 10% chance you can achieve a 10x improvement, you have a 90% chance of failing. But the payoff if you succeed is not merely a 10x improvement. It's a 10x improvement multiplied by the percentage of the market you capture. So the expected value of the bet is a 100x improvement in the world. Which is a good bet.

(There is a case where overestimating your ability to achieve a 10x improvement is also a good bet: when you're so good at what you do that

Thanks to John Collison, Patrick Collison, Elon Musk, and Geoff Ralston for reading drafts of this.



"The difference between 7 days and 4 days is the same as the difference between 4 days and 1 day." ... and they say ChatGPT can't do math!



