When you lose the ability to write, you also lose some of your ability to think (twitter.com/paulg)
941 points by blueridge on May 11, 2023 | 571 comments



Sometimes I wonder if I'm using the same ChatGPT as everyone else. I pay for GPT-4 access in ChatGPT, and it is a really terrible writer. I also pay for Copilot, and I think I can count the number of times it has been helpful on one hand. I'm trying to get on this hype train, but I'm left with an empty feeling every time I try to seriously spend some time with these new tools.


I feel like this is the bias of a very smart, highly educated, highly capable person. If you are already an excellent writer and coder, and can take vast fields of knowledge and blossoming ideas and spit them out onto paper in skilled form with minimal anxiety, GPT probably does very little for you. Maybe you could use it to check whether you were missing something very obvious and average.

I read for probably 8-12 hours a day. I write every day. I think every day. I’ve been doing this for decades. GPT has been like a talking dog. It’s an insane achievement, but it’s not the world’s best conversationalist.

But for 17-year-old me, struggling all night to write a three-page paper filled with unoriginal ideas, it would have been a game changer.


The thing is, this will rob a whole generation of that learning. Struggling to learn to write, to code, and all the rest is how you become a professional. A lot of people will never develop these skills, and the chasm between the talented and untalented will only grow.


I'm not sure. With every generation some tech pops up that supposedly makes the next generation worse off, and in the end there's just something else people spend time on. My generation built their own PCs, struggled to install Windows by popping tons of floppies into a drive and hoping none of the steps failed, our internet connection made sounds, and we learned DHCP the hard way.

The generation after me is hardly able to reset their iPhone if it freezes without going to a help desk for help. Meanwhile, that same generation is using the cameras in their phones, readily available video editing software and processing power to create content like no generation has ever before. They are able to entertain millions of people around the globe while doing it and some build incredible wealth at the same time.

My grandfather built his own house. My generation gets their house delivered in prefab parts that are then welded together. Nobody really cares that we've lost (the majority of) the ability to build houses or blacksmith katanas.


> They are able to entertain millions of people around the globe while doing it and some build incredible wealth at the same time.

I think you meant hoard incredible wealth?

There's a limit to how much wealth you can actually create with immaterial entertainment alone. Even a best seller book: it takes time to read, so while it provides value to millions of readers, it also removes value in the form of opportunity cost. No way around choosing what to do with your own time.

Other alternatives have more potential. Educational content could lift some people out of incompetence and help them build actual wealth (say a very good programming or sawing tutorial). Writing useful software could also create wealth, even more so if it's Free (and free). And of course, building stuff (while taking care not to deplete our resources or burning up our planet…).

We say that people "make" money, but that's a dangerously misleading idiom. They don't actually make money, they extract money. Hopefully this money is actually earned in proportion to the value they actually injected into society (make a chair, get paid for the chair, all fair and square). But never forget that the people who "make" the most money generally do so by taking it from other people. Employers, landlords, stakeholders… who get most of their "earnings" not from what they do, but from what they own.


I think your viewpoint stems from a belief that there is only so much money to go around. Your view about a landlord taking money is kind of a micro view. I am a landlord…the house I rent out I actually built. It is an asset that people are willing to give me money to use for a while…they get a place to live…IF I could never have rented it out, I would never have built it…it would be a worthless piece of ground that nobody would want to live on…this is how the world works…when I die, I will give it to my son…he will rent it out. Are you saying that he also would be “taking” money from people?


Assuming you paid for your house-for-rent with your own money: money that you worked for, as opposed to won at the lottery or got from previous investments. In a sense, the family that is giving you rent money is only paying back your work… until that point where you get your full ROI (cost of the house + your own work + interest).

Beyond that, the only reason they give you more rent money is because you own the house. But you got your ROI, so by now they have paid for the house. And yet it's not theirs, so they still have to pay you. And when they do, any additional money you get, you basically took from them. As for your son, who presumably did not work for the house at all: he will definitely take money from people through that house. Just because he had the right dad.

Don't get me wrong, you investing for your son and securing his future is you being a good dad. You just can't wave away how the current system works: people who own stuff have the power to take money from people who don't.


> people who own stuff have the power to take money from people who don't.

People who have invested capital, hedged risks, set up maintenance, done marketing, etc.

Now we can agree that housing is a very specific branch of the economy where, if you have the upper hand (capital), you can be dealing cards to those who don't. There are ways to address that without the canvas of grand narratives (extracting labour, etc.).


So if you didn't build the house, no one would've and people would live on the streets? No, people need a place to stay and the house would get built even if renting was impossible. We would just figure out different ways to pool money to build houses. House prices would also come down if wealthy people couldn't store their money into land.


So, in the many thousands of years of men trying every which way…this has not happened. So, unless someone has found a better way, capitalism is the best way so far.


What? Of course this has happened.

I grew up in a council house. I.e. a house created by pooled money in the form of taxes.

Houses must exist. Landlords are an inefficiency to be optimized out.


How many houses did investors build vs how many houses did the government build…from the money they actually took from somebody else? And which houses are nicer, safer and better maintained?


> How many houses did investors build vs how many houses did the government build…

Strictly speaking the answer is zero and zero. They use the money they have to have construction workers build the houses.

> from the money they actually took from somebody else?

Is this supposed to imply that one of them is taking money from others and the other is not? I know the view that government taxation is basically theft, and I think this view is ludicrous: citizens get that money back in the form of infrastructure and public services, and in a functioning democracy it's basically their collective will that decides how to allocate it. (How well the democracy actually functions is another debate.)

Investors on the other hand… well there are two kinds: those who worked for their money, and those who took it from workers. And it's a spectrum too: it depends how much of your income comes from your work, and how much comes from your possessions. Now I can guess most small landlords like yourself probably paid for their houses with money they earned through actual work. Some even constructed the houses themselves. But if that Second Thought video https://www.youtube.com/watch?v=m1m7WmKJZyQ has any accuracy, the majority of rented houses are rented out by big landlords, and those definitely did not earn their money through their own work (the clerks who work under them do, but they don't own the houses and are paid a meagre salary compared to the renting money they manage).

I can't speak for government-constructed houses, but given the above, your average investor-owned house was constructed or bought with money that was taken from people working for the investor. (Well, strictly speaking, the worker creates value for their boss, who gives a salary in return, but there's always a difference between the value created and the salary returned. That difference is the exploitation/theft part. And I'm glossing over the fact that managing a company is valuable work unto itself.)

> And which houses are nicer, safer and better maintained?

Am I supposed to answer "the investor's houses"? A citation is needed for that answer, and it had better control for when the house was built (construction norms tend to evolve over time), how much it actually cost, and in some cases who it actually cost. Suburban houses may be nice, for instance, but the car dependency and the unsustainable infrastructure costs definitely are not.


If you build your houses from money, they’re not going to stand for very long. Houses are built from bricks.

Though for what it’s worth, the government at least has the advantage that it can print its own money, so it doesn’t need to take it from somebody else, unlike investors.


It's completely plausible that a nation with a wealth of natural resources but an economic system that treats people terribly could win a series of wars and establish its economic system as the dominant one in the world, and then use its power to maintain status quo.

A marketplace of ideas does not necessarily lead to the best outcomes, or even good outcomes.

Suppose the marketplace of ideas is an unregulated marketplace. Then the wealthy and powerful can use their wealth and power to determine the world that other people see, getting those people to mistakenly fight for things that help the wealthy and powerful.

But suppose it's kept a fair marketplace. Fairness is not natural – it's a human value that we have to actively maintain. But suppose we succeed. Even then, the marketplace may not select the best ideas, because people have emotional needs – like belonging to community – that will cause them to stubbornly reject good ideas. But we accept this because it's the only way for us to live with each other; if I want you to let me disagree with you, I have to let you disagree with me.* The price of intellectual freedom is very, very steep, even if it's worth it.

So I don't believe the fact that capitalism is dominant is good evidence that it's the best idea so far. In fact, I think we have little evidence of it at all.

* There are of course complications of the Popperian, paradox-of-tolerance sort, where we do have to fight to the death, so to speak. If someone repeatedly demonstrates an unwillingness to compromise or act fairly, they've left you the choice of letting yourself perish or eliminating them, which is a shitty situation to be in.


> It's completely plausible that a nation with a wealth of natural resources but an economic system that treats people terribly could win a series of wars and establish its economic system as the dominant one in the world, and then use its power to maintain status quo.

FWIW, Adam Smith has argued that this is unlikely. The world seems to prove him correct.

That said, I do disagree with the way GP argues the point. While the dominant economic system is likely more effective at creating dominating economies, it's not a given that it is good at being fair. Shutting down discussion while claiming that "capitalism is the best so far" because nobody else has come up with an alternative reeks of willful ignorance. It's easy to believe something is the best if you don't actively look for alternatives and demand that others serve them to you on a silver platter.


I was trying to be careful by writing "treats people terribly" rather than "treats its people terribly", having imperialist nations in mind, or "treats all people terribly", having class-divided societies in mind.

I didn't realize that Adam Smith had argued on this point, though! Thank you for letting me know.


If people do not "make" money (i.e. create value), how do you explain the increase in the standard of living over the last 100 years?


> "make" money (i.e. create value)

See, the harmful confusion right there.

Making money and creating value are two separate things. There are people who create value without receiving any money in return. And there are people who receive an obscene amount of money while creating very little value — and sometimes destroying more value than they create.

Understanding that the two are separate is the only way you can question the legitimacy of our richest people. I’ll give you a hint: the most impressive CEOs, whom journals routinely praise as geniuses, are very, very unlikely to produce as much value as the money they actually receive.

If you want to even stand a chance at critically looking at our current economic system, you absolutely need to properly separate the notions of "making money" and "creating value".


Another way to put it would be that the wealthiest people tend to own the assets that are creating lots of value. But owning is not the same thing as creating.


You’re absolutely right. David Graeber’s “Bullshit Jobs” is a good take on this if one wants to start reading into it.


It seems to me that "bullshit jobs" isn't actually a good position for a leftist to take. You're basically saying workers don't do anything useful.

Even a more thought-out form doesn't do anything class analysis didn't already do.


> You're basically saying workers don't do anything useful.

Not quite: bullshit jobs are when the workers themselves say their own job isn't doing anything useful. And apparently they comprise a sizeable portion of the workforce.

Of course, jobs people say are useless, jobs the workers themselves say are useless, and actually useless jobs are three different sets. But I think we can confidently say something is wrong when so many workers say their own job is useless, even if it isn't: working a job you think is useless just isn't healthy.


This is what is known as "cope", ie, they don't feel emotionally satisfied.

Americans in particular always have pretty narcissistic ideas about what their jobs should be. You can see this from all the examples of hippies trying to start communes, which then fail because everyone appoints themselves official poet instead of farmworker.


> This is what is known as "cope", ie, they don't feel emotionally satisfied.

Yeah, I'm gonna need a citation for that one.


> how do you explain the increase in the standard of living over the last 100 years?

https://ourworldindata.org/energy-production-consumption

That will catch up to us. It already has, in many parts of the world, and it's not looking like it'll get better soon.

And let's not forget, it's not just people who have paid the price.

> The contemporary rate of extinction of species is estimated at 100 to 1,000 times higher than the background extinction rate, the historically typical rate of extinction (in terms of the natural evolution of the planet); also, the current rate of extinction is 10 to 100 times higher than in any of the previous mass extinctions in the history of Earth.

- https://en.wikipedia.org/wiki/Holocene_extinction

The ecocide which we've inflicted on the planet for the last 100 years will some day be seen for what it is - an atrocity.

https://www.newyorker.com/cartoon/a16995

All of which is to say nothing about the inequality in how that shareholder wealth is distributed, which is incredibly short-sighted for literally billions of reasons. 8 Americans own more wealth than 4 billion humans. It's perverse beyond comprehension.


You only see the curves going upward. But if you look in more detail, you can see that a plateau is coming. A downward trend will follow.

When the going gets tough...

About the inequality: it is really bad, but hundreds of years ago the poor were starving from lack of food, in stark contrast with the landlords and knights and kings.

Today as many people, or more, die from too much food as from too little.

My point is, you need to take a step back and look from a bit more distance to see the real trend. Things are bad now, because we are at a maximum.


> a plateau is coming. A downward trend will follow.

The plateau is not coming fast enough, not even close. Scientists are very clear on this point.

We used all that oil and gas to get more work done, but we're still working more hours per year than those feudal peasants.

> people die from too much food than of too little.

We still have thirteen million hungry children in America. 700 million hungry people worldwide.

It's all very solvable, we just don't. We could end world hunger by eating, like, two or three of the rich [0]. Knights and kings have nothing on our oligarchs.

These trends of irresponsible emissions and rising inequality won't fix themselves. Not in time. Not without radical action.

[0] - https://www.theguardian.com/global-development/2020/oct/13/e...


What makes a chair more valuable to society than entertainment?


Many of us have the ability to entertain ourselves entirely within our own minds on the regular. None of us can snap our fingers and summon a chair from nothingness.

Do you know anybody who owns zero chairs?


There are cultures that don't use chairs.

The Japanese generally didn't use them (I think?) before they came into contact with European civilization.

I don't think any civilization could survive without entertainment though.


Many of us can sit on the ground or in a deep squat. None of us can snap our fingers and summon a book from nothingness.


sure, you can, many did, they are called authors


You can't rest your legs on an entertainment


Yet the entertainment I'm getting out of my chairs is limited.

But I think this is a false dichotomy: A comfortable armchair and a good book go well together.


I did not say it was.


Just a thought: why is it OK that these skills were lost? You never really said whether it was good or bad, just that it happened.


Okay in the sense that nobody really mourns their loss aside from a few, and it has no material impact on our understanding of the underlying phenomena. My partner's grandfather was a wizard on the abacus, my grandfather loved his slide rule, but a calculator and now a smartphone can do all of that much better. Has computation been held back by the lost knowledge of operating the abacus? There have probably been thousands of skills lost that don't really affect our everyday lives.


I recall a study whose result said that beginners improve faster when they're told the solutions right away, and experts improve faster when they're given time to figure things out on their own.

Struggling as a learning tool is important, but there's a good chance the fastest path to expertise is to start with examples of how to do it, and delay that struggle a bit.

What you don't want is to keep the crutch forever and never struggle. That would be a sure path to stagnation.


It would be trivial to configure multiple LLMs to track a student's progress in real time and ensure the optimal difficulty gradient for maximum learning. It could even detect when you should go outside and play.

The rejection of LLMs for education by otherwise intelligent people is disheartening.


Indeed, I believe LLMs are an excellent tool for education. Especially because you can ask questions until you understand.

Also, some things no longer need to be known, like phone numbers or the inner workings of a car. There are plenty of things to learn even if automation has made some knowledge benign.


I remember a study that I presume is the same one, since it's so similar to what you're describing, but it didn't say that you're better off being given the answer right away. It said you're better off being given an answer after a bit of struggle, not just being dropped off in the desert and left to hope for the best. I'd be shocked if it helped to simply give people the answer right away, and I just don't remember the study (at least the one I remember) saying that.


I agree, but I will also argue that beginners who got all the right answers at the beginning may learn faster in the first steps but will struggle a lot more later on at the expert level, because:

- they wouldn’t have gained all the knowledge of the little quirks, know-how, edge cases, etc. that they now have to know (and the Pareto rule, 20% vs 80%),

- they wouldn’t have created an intuition from all the hours spent just struggling to understand what is wrong or to find a solution, little by little honing that intuition through the hard work they would have done by themselves, for themselves (compared to having ChatGPT, a teacher, or an expert telling them most of the time what to do),

- and they wouldn’t have gained the will to find a solution by themselves, even after hours looking at a shell prompt (or logs or whatever else) that is taunting them and will tell them nothing.


I would agree, at least for some people. The act of trying to code something and failing, and then failing again, before I finally get it right is what I enjoy about coding. Sitting down to write code and having it just work each and every time is boring. It's like writing boilerplate UI code. But... other people want to succeed quickly so they can show people they succeeded quickly. For them, ChatGPT is like a superpower.

Edit: I've had conversations with people who write. We have come to an agreement that there are writers who love the process of writing. They want to BE writers. Then there are people who want to HAVE BEEN a writer. ChatGPT was meant for these people.


> The thing is this will rob a whole generation of that learning

I disagree with this take. My personal anecdote is that I am far, far better at mental math than I was in high school because I ended up in a job where doing a lot of ad-hoc calculations was just a day-to-day thing. I started out using a calculator, but eventually got to where I didn't need it most of the time just through sheer repetition.

I think AI tools (the kind people are worried about right now, anyway) will be used in a similar fashion to calculators. After learning the basics, it will be taught as a tool that assists you in streaming ideas in and getting well-formatted, coherent output back. People who write a lot will eventually find the hassle of using the tool to be more trouble than it's worth. People who don't will still know how to use it and how to recognize when the output doesn't make any sense.


You are wrong.

LLMs are the single biggest advance in mass learning since the printed book. They are infinitely patient and can be reprogrammed, or repurposed with three sentences. Debate partner, editor, quiz master, literally anything. Every new tool can be abused, of course we can't structure education the same, but we should always be re-evaluating our methods and techniques in the face of new technology and knowledge.

I am dreaming of the day when physics students can have an electronics TA that can projection-map over a breadboard to help them understand and debug their circuits. For 70% of physics undergrads, hands-on electronics classes are the hardest classes they take.


Which is worse, having no debate partner, editor, or quiz master? Or a debate partner who insists they're right when they're wrong, an editor who injects false references and made-up citations, and a quiz master whose accuracy is essentially random?


All those things are already true. Debate isn't about being right, it is about winning a debate. It is literally rhetorical combat. A quiz master that is wrong, how delightful! A chatbot is the multiple choice test you can argue with!

If those things are a problem, we have already failed. Because if your current education system is built upon taking things at face value, the harm is not in the shoveled knowledge but in building a student who accepts what is presented as unquestionable fact.

My child is a phenomenal bullshit discriminator. So much so that they have been correctly correcting all of their teachers in every grade they have been in. The real education is learning how to use the tools we have available to learn anything you want. And in that way, the LLM is the most powerful thing we have had since the printed book.


Yes but "bullshit detection" is a skill honed over time by not only reading many sources, but by learning to read skeptically, analyzing and comparing sources for bias and factual credibility, and adjusting your certainty level on a subject as you go along. No teacher is accurate 100% of the time; lots of teachers spew bullshit when they don't know the answer to something; practically everyone laces their facts with opinions. But you can turn to other primary sources for facts, secondary sources for opinions, and the shape of some general consensus will emerge.

The problem is if there's just one teacher tutoring, one quiz master who points to sources it controls or simply invents as its "proof". My concern is that AI will be trusted more than a single book or a single teacher, precisely because it claims to neutrally encompass a consensus already obtained through all those sources. [It's a tertiary source masquerading as one offering primary and secondary sources, without the ability to differentiate or trust one source more than another].

This is exactly why the frequent comparison to calculators is so misguided: Because calculators are always right. To whatever degree LLM proponents believe LLMs are simply useful tools like calculators, that will be the same degree to which they fail to check for bullshit.


Before books, everyone had to memorize information they wanted to preserve. Books robbed a hundred generations of that learning. So it goes.


Not really. Many cultural traditions emphasize reciting every single word from classical texts as part of the learning process.

(I hated it, but it's still there 2000+ years after books were invented.)


People still ride horses sometimes too, but I would argue that the advent of the car meant most will never become proficient at horseback riding. There are many examples like this.

People aren’t great at writing cursive with fountain pens anymore because we type. Our mental math skills aren’t as strong because we have calculators. Even our sense of direction and ability to navigate is probably weakening due to decades of reliance on GPS.

Are there still people who can memorize whole books, ride horses, write cursive, calculate big numbers in their head, and navigate by the stars? Of course, but they are outliers. Most of us just use our smartphones.


The history of progress is full of machines robbing people of learning to master difficult tasks.

On balance, it's been hugely positive.


It will rob people going through the motions and cheating from actually learning. The people who most benefit from education have never been the people just going through the motions and have at least some intrinsic motivation to actually learn.


Unfortunately, this just happens. I trust that letting a baby watch YouTube from the first years of their life is not a good idea either, but here we are.


I think we went through this with calculators in the 1970s.


I think the thing to remember is that there are far more people like me than you. People who are poor writers or middling thinkers outside of specific domains.

I agree that GPT isn’t much of a conversationalist. But it is an exceptionally good tool for some of us. It has already proved invaluable for me in helping diagnose a medical issue with a family member — one that escaped several ICU doctors and nurses. And a couple of weeks ago it helped me fix an issue with my car.

A good analogy might be a calculator. There are some people who can do calculations in their heads near instantly. And some of these people can do arithmetic on numbers too large even to be represented by calculators. But for most of us, calculators were immensely useful.


I would add: And doing the same thing over and over.

I find GPT very useful in that I can quickly become mediocre in any domain. Need a legal filing? I can get a good-enough legal filing. Need some code in a framework I've never used? I can get it instantly. Need something written in the language of some esoteric field of science? I can get that instantly from a bullet point list.

It doesn't do the hard parts well: What's being written, and how it's structured. That's where I fit back in.


But the only purpose of writing that paper is for you to practice doing research, thinking and writing. Most people who write deep, thought-provoking work started out doing assignments like that. What could possibly be the point of getting a computer program to do it for you?


That you never have to learn how to do it yourself.


May I ask - what job do you do that you read 8-12 hours a day?


I’m an enterprise architect. I’ve got a pretty wide set of projects I need to keep up to date on, a lot of industry developments and just following general tech trends. Lots of my role focuses on documenting our designs and projects.


I think you might be right. I can get GPT-4 to produce serviceable text on factual topics, if I prompt and guide it correctly. That's about it. Honestly, I'd very much prefer it if my actual dog could talk to me.

I'm not sure about "17 year old me struggling to write a short paper full of unoriginal ideas," but it sure could help me write that paper titled "Bullshit, bullshit, etc., etc., bullshit" that I always wanted an excuse to turn in for some college course or another. ;)


I feel similarly to how you have described.

My worry is the innovator's dilemma. Given a few generations (software generations, not human) of continuing exponential improvement in capability of LLMs (or whatever replaces them), I worry that those who have spent much more time learning to co-work with a GPT-alike (or replacement) will move to the fore.


I think their coming to the fore is inevitable. These improvements will make the things we do now easier, so why not let them help? Until we as humans are totally replaced by AI, we will just coexist with them, and then we'll find something new to work on. Evolution.


You read for 8-12 hours per day? I guess you get paid to read? What do you spend 8-12 hours per day reading?


Slack, email, Jira, StackOverflow?


hacker news, obviously


> I read for probably 8-12 hours a day

Jealous. What's your job?


The tools are at their most useful when they're replacing Stack Overflow searches for common tasks.

An example I had last night: how do you split a pdf into separate pages on the command line, and how do you put them back together again (I replaced one of the pages with a scanned signed version).

This is a basic task, a beginner's task in the world of pdf munging, but I didn't know the answer off the top of my head. ChatGPT gave me command lines straight away.
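
For the curious, here is the same split-and-reassemble task as a minimal Python sketch using the pypdf library; this is an illustration, not the exact answer ChatGPT gave, and the filenames are made up:

    from pypdf import PdfReader, PdfWriter  # assumes `pip install pypdf`

    # Split input.pdf into one file per page
    reader = PdfReader("input.pdf")
    for i, page in enumerate(reader.pages, start=1):
        writer = PdfWriter()
        writer.add_page(page)
        with open(f"page-{i}.pdf", "wb") as out:
            writer.write(out)

    # Reassemble, swapping page 2 for the scanned, signed replacement
    merged = PdfWriter()
    for name in ["page-1.pdf", "signed-page-2.pdf", "page-3.pdf"]:
        for page in PdfReader(name).pages:
            merged.add_page(page)
    with open("reassembled.pdf", "wb") as out:
        merged.write(out)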

That's what they're good for right now. Beginner / common tasks in areas you're not familiar with.

And a further implication, they're great for one man bands, generalist roles where simple things need to be done in lots of different areas. For specialist deep roles where you have all the tacit knowledge in mind already, not nearly as useful.


> The tools are at their most useful when they're replacing Stack Overflow searches for common tasks.

Interesting, because I still find it faster/easier to do a quick DDG search, with a click on the first relevant SO post, and scrolling straight to the top answer. Asking the same question to ChatGPT means waiting for its output, which may be objectively a few ms faster but creates the perception of slowness ( = irritation). (That is: waiting for X seconds is more disruptive than taking several actions for 1.5X seconds.)

And then it's definitely going to be much more verbose, which is worse 99% of the time. SO is nicely folded - if I want a deeper explanation, I look at the answer's comments, or I can read other answers, or I can even read the full question. ChatGPT doesn't have the same guarantee of such a clear information hierarchy in its answers.


I see these complaints about ChatGPT's verbosity, but this is no different than when crafting an effective google query was a skill in the early 2000s.

Here's a GPT4 response to a (poorly written) prompt from me to answer the parent question and try to satisfy your desire for a straight-to-the-point response.

https://i.imgur.com/OmsHaXO.png

> Using only the command line, answer the following question (also, keep it terse, no extraneous narrating please): how do you split a pdf into separate pages on the command line, and how do you put them back together again


Ok, but that's not an answer to my actual point, because now it's more work than doing a web search for the same solution.


I often find that the top Stack Overflow answer doesn’t adequately answer my question, so it’s not faster for me.

For example, sometimes the top answer is long and requires a few minutes of reading, sometimes there are corrections in the replies that need to be read, and sometimes I don’t fully understand it and therefore need to read multiple replies.

This can easily take up 5 - 15 minutes, versus 1-5 minutes for ChatGPT.

Additionally, I find value in NOT having to keep myself from getting distracted by the tons of other highly interesting things on Stack Overflow. It can be nice to have that focused, text-only, distraction-free dialogue that ChatGPT provides.


Yes, I mostly use ChatGPT for ffmpeg commands. Just give it the description along with the full file path and it will output the correct command 99% of the time. Bard can also do that (not as well though) but the format of the answer is so much worse, the resulting command is not put in a code block for example.


For me, they are more miss than hit. I was amazed in the initial days, but I have bounced back to searching on Google and reading Stack Overflow. It's not just about finding a solution to your problem; looking at what other options are out there and comparing them mentally is a really good exercise. You learn a lot that way. The comments and the non-accepted solutions are also invaluable.


> it’s also looking at what other options are there and comparing them

I think I'm still better off starting with ChatGPT; I can always follow up with "list 5 alternative solutions".


ChatGPT is the fast food of programming. We all know how healthy that is.


By the way, on macOS the Preview tool can do this. You can drag and drop pages between documents using the sidebar and reorder them the same way.


That's interesting and relevant to know if you'd be doing this a few times. And it doesn't invalidate your point at all, but the time savings of having a machine tell you really fast how to do something simple is valuable.

I recently had to do a 'git pull' on 120 repositories, and of course we already had an in-house tool to do that. Of course I could google a script to do it. But it saved me 5 minutes to just have ChatGPT write it for me, and I saved the overhead of a more thorough context-switch.

Even the guy sitting next to me. "Hey, check this out, ChatGPT wrote this script for me and saved me 5 minutes". "Oh, but we already have a tool for that". "Yeah, I figured. Where is it?" "Ummmmm, I don't know, let me ask someone. Frank maybe?" And 5 minutes spent chasing Frank.
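
For a sense of scale, the script in question is the sort of throwaway few-liner sketched below; this is illustrative of what ChatGPT hands back for a request like that, and the workspace path is a placeholder:

    import os
    import subprocess

    ROOT = "/path/to/workspace"  # placeholder: the directory holding the ~120 checkouts

    for name in sorted(os.listdir(ROOT)):
        repo = os.path.join(ROOT, name)
        if os.path.isdir(os.path.join(repo, ".git")):  # only touch actual git repos
            print(f"== {name} ==")
            subprocess.run(["git", "pull"], cwd=repo, check=False)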


Hopefully that ad-hoc second script never makes it into persistent storage or a repo, otherwise you now have two of them, and Frank will quietly have his own thoughts about others duplicating things.


Cool. I wasn't running macOS though.


This is exactly what I don't understand about it. I copied your question directly from your comment into google and the top-ten results were all sufficient to answer the question. The nature of common tasks is that the answers are also common, so where's the value in paying for a service that regurgitates them to you?


Time, specificity, and interaction.

1. Time — for me to search Google and then read anywhere from 1-10 links to get an answer typically takes anywhere from 5 - 30 minutes. The average is probably 10-15 minutes. But using ChatGPT takes less than 1 minute.

2. Specificity — ChatGPT gives me answers to more specific questions, with no fluff text to read through, no ads, no redirects, no links that appear to answer my specific question but actually don’t when I click on them and read them. I get a specific answer to a specific question.

3. Interaction — If I have follow-up questions and/or need clarification or alternative options/solutions I can simply ask ChatGPT and it answers. I often find that getting answers to followup questions via web search takes even longer than the original search.

These all make it worth it for me. And that still doesn’t touch on all the other use cases that search engines don’t do at all.


Somewhat agreed on chatgpt specifically. Having in-IDE tooling that can take context and provide info, corrected for context, is more useful than having to jump to a browser, then copy paste. It's usually not much of an effort itself, but it does help minimize drift and helps keep focus. Every time I jump to a search engine to search, or SO, I'm bombarded with loads of info to filter out, then more to judge for relevancy. Tools that help reduce that filtering burden, and more context-aware relevancy, help.


I find Copilot and ChatGPT constantly useful as a simple way of automating what would otherwise be tedious work. I think the most important thing about using them is that as you use them more, you get a sense of what they are going to be able to do and what they aren't, which lets you know when you can use them (and therefore use them with more confidence and more often).

They are especially useful in refactoring or working with poor codebases. I do notice that with Copilot specifically, some people seem to get wildly different response times; for me it gives suggestions almost instantly, so as I'm typing I've got an ongoing suggestion I can accept at any time as it gets closer to what I intend, or ignore if not. A common, really useful case is when I have to set up a series of something (recently, in power plant code, I set up parameters for a certain kind of battery), and then as I start writing the same for each other kind of battery, it suggests the whole boilerplate for every case; I can go back and fill in the details and exceptions.

As for ChatGPT, it's more useful at interpreting existing code. I often give it a giant function and say "I want you to split off this part and put it into its own thread, get the output with a queue" or something like that. Again, the output can often need a little bit of massaging, but I find it reduces the cognitive load involved in doing such a task. The trick is to be naturally skeptical of the output but appreciate how much of your life is spent writing nonsense.
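
To make the shape of that refactor concrete, here's a minimal sketch of the pattern with a placeholder workload (nothing like the real code): part of a function split off into a worker thread, with its output collected through a queue.

    import threading
    import queue

    def heavy_part(data, out_q):
        # the split-off portion of the original function; the result goes onto the queue
        out_q.put(sum(x * x for x in data))  # placeholder for the real work

    def main_function(data):
        out_q = queue.Queue()
        worker = threading.Thread(target=heavy_part, args=(data, out_q))
        worker.start()
        # ... the rest of the original function keeps running here ...
        worker.join()
        return out_q.get()  # collect the worker's output via the queue

    print(main_function(range(10)))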

I've also found that learning to put what I want exactly into words is a particular skill that has helped my communication. Make sure you know the name and function of every tool you use. I think if I had said that to myself before I started using these things I'd have been incredulous ("of course I know the name of things"), but no, it's a different thing to be talking about them often, especially when you have an idiot savant that takes everything you say too literally. You have to learn to be concise and precise.


> (recently in a power plant code I set up parameters for a certain kind of battery)

I can't tell any more what is satire and what is not.


That's the gist of it.

People want to use LLM hammers like screwdrivers and are disappointed.

Also, since the quality of the context matters a lot for completion, Copilot will be way more useful on clean code with beautiful variable names and nice structure flows than on code-golf snippets. Same for ChatGPT: write good prose, and it will help you better.


> People want to use LLM hammers like screwdrivers and are disappointed.

IMHO, the reason people do so is because LLMs are advertised as such.


Totally agree, I also pay for api access and I am still trying to figure out how I can benefit from using it when it gives me blatantly incorrect commands or wrong answers.

I can see it guiding me in the right direction if I have a more generic question, but even then, if I am not specific enough I fear I might go down a rabbit hole that ends nowhere.

It may be better than Google at finding something, but when I manage to find something on Google the information is usually correct and not totally wrong. Especially for a technical problem that someone wrote about somewhere on a forum regarding a specific device.

Finding the things I am looking for on Google allows me to judge the correctness of the data. When I find answers in what is clearly blog spam, I know that the info is probably wrong. With ChatGPT I have no idea how it arrived at the conclusion it gives me.

A good example is if I want to know the max current rating for a chip. If I search google and find the manufacturers datasheet, I know the information is correct. If however the same information is on stack overflow or some forum I may not take it as a fact but more as a guidance.

ChatGPT would be 100 times more useful if it cited a source. Like "The max rating of chip ... is ... according to the manufacturer's datasheet dated ... which you can find here ...". I guess Bing and Google are trying to do something like that.


> Totally agree, I also pay for api access and I am still trying to figure out how I can benefit from using it when it gives me blatantly incorrect commands or wrong answers.

So here's one that saved me a bunch of time:

https://gitlab.com/-/snippets/2535443

Basically, I knew the package could do what I wanted; I knew it was almost certainly in GPT-4's data set; I could do it myself, but it would involve searching through all the documentation and figuring out the right bits. It just did it for me.

Now there were a few minor bugs: It duplicated a buggy error message, and at some point later it called a "Subtree" method rather than a "Tree" method. But those were a lot easier to fix than writing the code from scratch.

Once I had a list of 27 book names I wanted put into "canonical order" and in the form of a golang array. I could have done it myself but it would have been tedious; I just pasted the list into GPT-4, asked it what I wanted, and out popped the result.

Here's another place it was helpful recently; I prompted:

"We're hiring a new community manager for the $PROJECT. I'd like you to help me develop some interview questions. What information to you need about the role to help you develop more useful questions?"

The questions it asked me about the role were really good; just answering the questions was quite a useful exercise, and I'm sure the resulting document will be a good intro to whomever we hire. I wouldn't say the resulting interview questions were brilliant, but they were solid, and I used a couple of them.

There are times when "the form" is there for a reason; if you want something re-written in a specific form, GPT-4 can do a good job. I wrote an email recommending something to somebody's managers in a different company in a different country; then I pasted it into GPT-4 and asked if it had any suggestions. It did a fair amount of rewording, of which I decided to take about half. In this case, the "polite form" is there to avoid offense, and it's exactly what I wanted.

I've also asked it to write some Tweets highlighting some specific aspects of an upcoming conference I'm planning. It did a good job coming up with the sort of punchy, tweet-length messages which seem to do really well.

Connecting it to the context of the article: My day job is basically arguing with people on the internet. :-). I do read and write all day every day; but I don't write messages where diplomacy is critical, nor do I write tweets. Perhaps I could get better at those, but I don't think it's worth the effort. Am I the worse off for that? Probably not in the way the author thinks; I don't think being diplomatically polished would change my thinking that much -- much less being able to write punchy tweets.

If I started relying on it for the core writing, however, I'd certainly be selling myself short.


> There are times when "the form" is there for a reason; if you want something re-written in a specific form, GPT-4 can do a good job. I wrote an email recommending something to somebody's managers in a different company in a different country; then I pasted it into GPT-4 and asked if it had any suggestions. It did a fair amount of rewording, of which I decided to take about half. In this case, the "polite form" is there to avoid offense, and it's exactly what I wanted.

FWIW, this use case of writing a core message, then using an LLM to polish it into, as you put it, the polite form, is what I had in mind with Nicer.email, a Gmail extension that lets you do exactly what you described with one click, rather than copying/pasting/writing prompts. Trying to optimize the common case! https://chrome.google.com/webstore/detail/niceremail-easiest...


> My day job is basically arguing with people on the internet. :-)

Okay, now I have to know what this job is.


How impressed or unimpressed you are by it is a direct reflection of yourself and what you are trying to achieve with it. I find zero value in it. Others are very impressed by it, but I'm not very impressed by what they hold up as an example of its output. Sometimes I can only laugh when it's horribly wrong, but they herald it as amazing.

The whole thing reminds me of the blockchain hype train. Still using and loving databases here for the foreseeable—still writing things by hand for the foreseeable, and loving every moment.


You're not impressed!?

Here on HN, a few days ago, there was a post about Microsoft publishing a GitHub repo that contained a "table recognizer AI". Basically, you feed it PDFs that contain horrible scanned images of finance records, and it spits out Excel spreadsheets. For some reason, Microsoft had just "thrown this over the fence" and released it to the public for free. This, despite man-years of effort developing the thing. It was working, and everything.

I made a comment wondering if Chat GPT 4 with the vision extension could solve the same problem. One of the devs that had worked on the aforementioned AI (for years!) mentioned that yes, yes it can.

Game over.

Those years of effort had just been replaced with a one-sentence English-language prompt that starts with "Please output a table from..."

If this doesn't blow your mind, then... I don't know how to help you understand just how much has changed, virtually overnight.


I mean, sure, I guess that's neat, but Kofax OmniPage Ultimate has been a thing for years at this point and does the whole OCR from a crappy scan thing, including reformatting it into whatever format you want. Can even take data from forms and digitize it into editable text or stick it into a database. Goes back to the '80s. Sure, different methodology, but it's not something we haven't already been doing for yonks. Users have even been able to train OmniPage Ultimate to recognize someone's bizarre and unique handwriting styles in seconds.

OmniPage Ultimate is an entire workflow that's turn-key and ready-to-go.


Neat? Neat!?

Generally speaking, Kofax -- or any other off-the-shelf OCR tool -- can't process tables properly. They get confused by headings, total rows, and the like. Hence the R&D effort by that Microsoft team to develop a purpose-built AI-driven tool that can specifically identify these elements and then output the result not just as ASCII text, but as a spreadsheet.

Either way, whether you are talking about the Microsoft AI or the Kofax tool, the effort is measured in man-years, man-decades, or perhaps even man-centuries.

You can just ask ChatGPT to do similar tasks for you in seconds.

A real-world use case I had for GPT is fixing up the formatting of badly copy-pasted tables. E.g.: I had created a cloud VM and pasted the summary tab into a text editor, and only then noticed that the cells ended up on individual rows, interleaved with the headings. Even if I had used a regex to undo the damage, the headings mess up the alternating row interleave. E.g.:

    Overview
    Hardware
    SKU
    A2
    Zone
    2
    Software
    Operating System
    Windows
    Image
    Licensed
    Yes
Chat GPT can "undo the damage" because it understands not only what text is likely to be a heading, property, or value... but it has also seen specifically the properties of the type of cloud VM I was working on! It understands these things to the same level as I do, and can fix up the formatting the same way I would.

Sit down for a second, and ask yourself: How much time and effort would you have to invest to solve this style of problem, in general. E.g.: given some random, broken table formatting for an arbitrary but well-known subject, fix up the formatting.

Years of effort?

Decades?


And yet, my default go-to example of why I'm personally unimpressed with ChatGPT 3 and 4 is trying to get it to output a valid NGINX configuration without hallucinating, despite feeding it the required documentation, the original Apache .htaccess file, and telling it what URL rewrites would be required. This should not be a difficult thing, but it constantly hallucinates things that aren't there because it's an LLM, and that's what LLMs do: they predict the next word.

To the untrained eye, sure, the NGINX config looked great, but it didn't function, it didn't include the URL rewrites as instructed, and it hallucinated a bunch of rewrites that didn't exist and would serve no purpose. Despite refining the prompts over many days and consulting with self-proclaimed "prompt engineers", it still didn't give the expected output.

It's neat. Reliable? Not in my experiences for my needs, but I am genuinely glad you're making it work for what you need, that's definitely cool, provided it doesn't hallucinate in an unnoticed capacity. It's a lot of trust to place in an LLM.


These things have limits, definitely. In almost every use-case I've tried, I've bumped up against them.

The trick is to know what they can and can't do, and use them where they're useful enough.

Someone here on HN quipped that ChatGPT "isn't intelligent" because it couldn't come up with a revolutionary new battery chemistry.

Like... what are you expecting? A god?

LLMs are useful when you can validate the output yourself. Similarly, they're useful when the output doesn't have to be precise but the inputs are English.

For example, they're awesome "filters" for human input as measured against human metrics. "Is the following text rude? Output YES or NO only?" is very useful and works well enough, right now.
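
As a sketch of what such a filter looks like in code (the ask_llm callable here is a stand-in for whichever model client you actually use):

    def is_rude(text, ask_llm):
        """ask_llm: any callable that sends a prompt string to an LLM and returns its reply."""
        reply = ask_llm("Is the following text rude? Output YES or NO only.\n\n" + text)
        return reply.strip().upper().startswith("YES")

    # stubbed example; in practice ask_llm would call the model
    print(is_rude("Your code is garbage.", ask_llm=lambda prompt: "YES"))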

They're also useful when you need to iterate to find something, or when your search parameters are incredibly vague but have a narrow Venn diagram intersection.

I've been using ChatGPT as a replacement for the /r/tipofmytongue sub-Reddit. It knows everything well enough to be able to interactively find what I'm looking for to an extent that is super-individual-human, and in some ways beyond what even a very large collection of humans can achieve.


I think you and I are pretty aligned in our thoughts on ChatGPT. I think it's revolutionary, and I use it every day. But I don't consider it generally intelligent; I do think that novel thoughts of demonstrable value are the real test for that, and a useful invention would convince me beyond all doubt.


> a useful invention

In my opinion, the mental model you should use when evaluating LLMs-vs-Human capability: When asked a question, the LLM has to answer without being able to iterate, back-track, or even use a scratch pad such as pen&paper or a text editor. Basically it's the same as an "oral exam", where you have to stand in the middle of a room and get grilled by a professor to determine your knowledge on a subject.

Don't compare "human with tools and unlimited time" to "LLM with no tools and seconds of time". Compare "human being interrogated in an empty room" and then it is much more clear where an LLM rates.

GPT 4 is definitely super-human in some areas, such as general knowledge and translation between languages.

No human knows as much, or can speak as many languages.

Ask yourself this: can any human, when asked to "invent something", just do it, then and there?


Give it a billion subjective hours. Give it databases and network access. Do you really think it will produce something? You definitely do a good job of explaining one of the limitations an LLM faces compared to a general super-intelligence.


Nginx/Apache config is not an easy task. I would call this an incredibly hard problem to solve. I certainly never trust that I've done the config correctly until I've tested it carefully. The way this eventually works is you tell the AI "I want Nginx config that has these properties" and the AI generates config and tests it against Nginx until it has the requisite config and it's tested.


I'm really surprised it cannot output a valid NGINX config when you give it that amount of context.

Can you share the prompt and your expected output? I'm interested to see where it goes wrong.


Here's an experiment that showcases GPT-4 deep (meta)semantic abilities:

https://pastebin.com/8FwQzDiE

TL;DR. I start by asking it to generate sentences where subject-object inversion yields a meaningful sentence where the verb's meaning is shifted metaphorically. For instance "I smoke the cigarette vs the cigarette is smoking me". After some back and forth it comes up with:

The painter captures the landscape vs. The landscape captures the painter. The gardener shapes the garden vs. The garden shapes the gardener. The chef creates the dish vs. The dish creates the chef.

Maybe that will change your mind.


>> You can just ask ChatGPT to do similar tasks for you in seconds.

And how well does it do them? Correct me if I'm wrong, but for the time being all you know about its ability to replace that other MS tool is a "yes, yes it can" by a MS dev? Don't you want to withhold judgement until after you've spent some time actually using ChatGPT to do OCR?


So what you're saying is that one piece of software with years of labor from a team of engineers can match another piece of software with years of labor from a team of engineers? You're incorrectly comparing the creation of the software with the usage of the software. Yeah, now all you need to do is give ChatGPT the file and prompt it for a solution, whereas before, you... gave the file to a different piece of software and hit a button.


ChatGPT isn't a complete workflow and doesn't even get off the starting blocks for doing what OmniPage Ultimate does.

This is like saying you could do CAD via ChatGPT, sure, but it's an objectively crappy experience versus an actual piece of CAD software with an appropriate UX.

Using different pieces of software isn't a problem for most adults.


It might depend on your workflow. In many workflows (mine included), you tend to start out with a bunch of messy ideas and then slowly polish them into something usable. ChatGPT is a very powerful tool along the way here. It can help organize things, it can help rephrase your thoughts, and it can help solve minor issues along the way. Sometimes it can even do some fairly fancy things that previously could only be done by a human.

For now I'd treat it more like a sparring partner who can help you with your ideas and support you throughout your process, rather than a magical genie that can magically solve your problem for you. And in this manner I find it to be very useful indeed.


Can you give an example on how you organize via ChatGPT? Keep in mind that this is a term I associate with project management systems like Active Collab, note-taking systems like Obsidian.md and OneNote, task managers like MyLifeOrganized, so your idea of what that entails might be quite different.


Do you have a link to the repo? How does it compare to axcell (which is very domain-specific) [0].

[0] https://github.com/paperswithcode/axcell


Everyone knows that using ChatGPT is about signalling your non Luddite status so that the police won't go after you.


I noticed that you haven't been talking to your AI very much lately, what's wrong?


Please drink a verification can of AI marketing kool aid!


There is also a huge difference between code generation (be it Copilot or ChatGPT) and prose generation. With code you still have the compiler that rejects the utter nonsense, and then you hopefully have tests to check whether what was generated makes sense.

With prose it's all your manual work again to check. If your text is mundane it might be trivial but for anything a pg would write I guess an AI is pretty useless in its current state.


I wonder if someone has tried to chain ChatGPT's output with static analyzers/compilers/linters in a loop to improve the final response.
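Something like this, conceptually — a rough sketch for Python, where ask_llm stands in for the actual API call and pyflakes is just one example of an analyzer you could bolt on:

    import ast
    import subprocess
    import tempfile

    def ask_llm(prompt: str) -> str:
        """Stand-in for whatever LLM API you're calling."""
        raise NotImplementedError

    def complaints(code: str) -> str:
        """Return analyzer output; an empty string means the code looks clean."""
        try:
            ast.parse(code)                              # compiler-level syntax check
        except SyntaxError as e:
            return f"SyntaxError: {e}"
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        lint = subprocess.run(["pyflakes", f.name], capture_output=True, text=True)
        return lint.stdout                               # undefined names, unused imports, etc.

    prompt = "Write a Python function that parses ISO dates from a CSV column"
    for _ in range(3):
        code = ask_llm(prompt)
        errors = complaints(code)
        if not errors:
            break
        prompt += f"\n\nThe analyzer reported:\n{errors}\nPlease return a corrected version."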


https://github.com/newhouseb/clownfish

Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema


No offense but you are almost certainly using it wrong. Consider the following:

1. Use it when you know how to do something but don't want to.

2. Use it when you don't know something, and don't know how to find the answer.

3. Never trust. Verify.

Clearly, this means the examples that impress others will often not impress you.


It’s very difficult for me to wrap my head around people who aren’t blown away by both this and generative image AIs, and I tend to be on the skeptical side when it comes to fancy tech adoption.


It is astonishing, even more so that some in this camp are smart people.

I chalk it up to "hype aversion" -- an internal heuristic that anything with so much hype around it must be bullshit. For some, there might also be good old resistance to change, to the unknown consequences of all these developments.


I find generative AI very impressive, but impressive in the sense of "wow, it's impressive that an algorithm can do that." If I ignore the fact that something was algorithmically generated, then I'm not impressed by the generated output on its own.


The people who are least impressed by GPT seem to be the ones who've never tried to build anything similar, even a text adventure game.

Industry outsiders also tend to take ML for granted, even when the 'experts' can see it's completely inappropriate. (My GF, a few years ago: "Why do we still need judges and juries? Can't computers judge cases and author legal opinions?" Me, at the time: "LOL, no, that would be ridiculous, and here's why..." Me, today: "Uh, about that argument we were having...")


How is #3 supposed to work when #2 applies?

Especially when folks are already out there LLM-generating SEO-spam, so even "well the answer also came up in a Google search" isn't a sign of accuracy.


Example: I want to buy some construction material that I've seen somewhere. I don't know what to search for since I don't know the term. Prior to LLM, the only way was to ask someone familiar with it, e.g. a plumber. Now I can ask GPT. And then I just do a quick image search to verify.

This doesn't always work, as some stuff is harder to verify. But what's valuable to me is it allows me to ask a vague question and have follow-ups. I usually learn just by having the conversation.


> it is a really terrible writer

Most people are terrible writers, and never read anything at all. So they have no way to judge if the wordy informationless crap gpt wrote is good or not. It's long, thus is good.


>It's long, thus is good.

It's also somewhat structured, which puts it at above average for writing tasks.


Try coding in a language/style you are not familiar with.

It is not revolutionary for areas you specialize in, but if I try using it for something like setting up a web scraper with python (something I understand conceptually but have never done) it feels pretty amazing.

My one concern is that if I use it for JS I can immediately spot issues with it, but I am too ignorant of python to spot subtle problems.
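For example, the scraper skeleton it handed me looked roughly like this (I've swapped in a placeholder URL and selector, so treat it as a sketch rather than the literal output):

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/articles", timeout=10)
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "html.parser")
    for link in soup.select("h2 a"):          # the selector depends on the actual page
        title = link.get_text(strip=True)
        href = link.get("href")
        print(title, href)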


It’s definitely awesome for that, but for me, on side projects even in languages I’m familiar with, getting ChatGPT to write the boring annoying stuff, or things that would require a bit of referencing (like a library I’ve not completely memorized), I’ve still found it to be pretty useful.


My experience is very similar. I am neither a great writer, nor a great coder, nor a grammar nazi - but I still have to make major corrections to the output of such tools.

I came to the sad conclusion that perhaps what I write and what I do is not as repetitive or as generic as what others produce. Over the years, when I have had to do more repetition, I've written it once well and then relied on ctrl-c / ctrl-v.


I have similar experience.

The differentiator is, I guess, expectations: I fully expect my own code to strike a certain balance of readability / optimization / correctness. In this context all but the most trivial code suggestions, sloppy as they usually come, instantly fall out of place.

On the other hand, there is a group of people (e.g. those spamming GitHub pull requests...) who expect the output to be kind of visually similar to some code. When they realize that the result also sometimes works and gives sane-looking results, they are absolutely blown away.


It fills the role of an intern. It can do stuff but the quality is inconsistent and the work needs to be verified.

Nonetheless it's better than Google at many things. I find it really good at giving paragraph-long introductions to things, instead of pages-long blogspam. It's also good at suggesting improvements when writing in a second language.

But I don't see what those AI startups see when they want to replace all sorts of skilled workers with this bullshit generator.


I paid for it too, but I'm slowly figuring out its utility.

The trick is to use it for what it's good at.

For example, I had this vague, fuzzy memory of a talk I saw online about a decade ago. It was about a database modelling paradigm. All I remembered was that the country it was developed in was in northern Europe, the methodology was based on splitting out columns as individual objects, and that it had funny little icons such as diamonds on the edges connecting things together in a graph.

Good luck finding that on Google. I had tried and failed. It was bugging me, because I needed something like it for my work, but for the life of me I couldn't dig up the reference.

ChatGPT 4 found it in just three prompts: https://www.anchormodeling.com/

That's insane.


5 minutes ago I copy pasted a 200 line react component and asked ChatGPT to explain why I was getting a certain bug. Although it didn't quite get it right, it allowed me to quickly identify the cause.

It only needs to save 10 minutes of my time a month for it to be worth the $20 - I would estimate it saves hours.


It is probably because you have standard / quality expectations and those are above the level of what GPT will give you. Keep your standards and quality expectations, I say.


I'm surprised about copilot. I find every day it saves me time by "nexting" whatever I'm doing. Whenever it's right about what I wanted to write, I save a bit of time.

My go-to example is if conditions. When I start with the first case, it almost always figures out the rest of the cases correctly.
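A toy illustration of the pattern (made up, not an actual Copilot transcript): I type the first branch and it proposes the rest.

    def http_status_text(code: int) -> str:
        if code == 200:                   # I typed this branch...
            return "OK"
        elif code == 301:                 # ...and the remaining branches were suggested
            return "Moved Permanently"
        elif code == 404:
            return "Not Found"
        elif code == 500:
            return "Internal Server Error"
        else:
            return "Unknown"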


I thought Github Copilot was useless until I found out that I have to wait for a few seconds for Copilot to display the autosuggested code.


I think Copilot is amazing at saving time on boilerplate code.

But I can understand why for some workflows it may be a liability rather than a help. Similar to how ChatGPT sometimes hallucinates code with reserved words that don't exist in the language (but exist in others), or uses old code from a deprecated API / version of your framework.

In other words, sometimes the cleanup can take as much time as just writing the code yourself.


For writing I find it makes an excellent rubber duck. I absolutely do not expect it to write long form prose for me, but I can explain what I am trying to say and talk to it about how I might present that information.


Tell it the columns of a certain table. Ask it for a GraphQL schema for a CRUD for this table. Ask it for a resolver using X libraries. Ask it for unit tests using Y libraries.

A couple hours' work done in a few minutes.
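To make it concrete, for a hypothetical books table the schema/resolver step comes out roughly like this - graphene in Python is just my example stack, and the in-memory BOOKS list stands in for the real data access:

    import graphene

    BOOKS = [{"id": "1", "title": "Dune", "author": "Frank Herbert"}]   # stand-in for the real table

    class Book(graphene.ObjectType):
        id = graphene.ID()
        title = graphene.String()
        author = graphene.String()

    class Query(graphene.ObjectType):
        books = graphene.List(Book)
        book = graphene.Field(Book, id=graphene.ID(required=True))

        def resolve_books(root, info):
            return [Book(**row) for row in BOOKS]

        def resolve_book(root, info, id):
            row = next((r for r in BOOKS if r["id"] == id), None)
            return Book(**row) if row else None

    schema = graphene.Schema(query=Query)
    print(schema.execute("{ books { id title author } }").data)

Swap the list for your ORM calls, then ask it for the mutations and the unit tests the same way.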


I suppose you have to play with it enough to get a feeling of what kind of stuff you can trust it with. For most tasks it's pretty bad, and this is why, if you just try it for something you just thought about without any kind of filter or feedback, you're going to be disappointed with the results.

I agree that there is currently way too much hype. I mean GPT4 is certainly cool and probably going to be useful in some limited use cases, but it's not the new electricity and it's not going to take over the world.


I needed to read this. Seeing posts about having GPT “write apps from scratch” vs me trying to get it to spit out what I want has been tedious. I’m assuming what’s not written about is just how tedious this process currently is, and how much of the “from scratch” is your own human brain doing the prompt edits because you have a very good idea of what coding step comes next. It’s a lot of work, and not faster than me just concentrating hard and doing it myself.


I use Copilot with nvim and C-x C-e at my terminal. I can use it to build out pipelines in the shell so easily it's crazy.

But in the end, it's just a tool. If it isn't working for you, it isn't working for you. As for me, I prefer thinking in terms of bigger building blocks.

So I prefer vim bindings because I spent teenage time learning them. And now I don't think in terms of characters but in terms of text-blocks.

I prefer writing Java in IntelliJ and Rust in CLion because I don't think in char terms; I can use the built-in refactor tools. E.g. I write functions inline and then extract them, rather than writing them as functions and calling them.

Likewise, when I'm at the shell I think in terms of pipelines rather than individual pieces. Being an experienced shell user, I can usually get a parallel pipeline with conditionals and loops right the first time, spoken aloud, without a computer.

But my Copilot flow improves on this even further. I talk to the computer in components:

    # get all private ip addresses in Alibaba region cn-zhangjiakou
    ips=...prompted text...
    # ssh to each and check last long line
    ...Prompted text...

It's faster than me and this is great for r-i-search with fzf.

Overall, quite happy. But just a tool.


I wonder if it’s time to start seriously looking at why some people have completely embraced copilot/chat-gpt and others can not get it to work at all.

I’d love to see which of the following might have any correlations with reported usefulness of chatgpt/copilot:

1. How articulate a person is.

2. How much of an expert they are in what they are trying to do with the tool.

3. How successful they are at giving instructions to a human to perform the same task.

4. How much experience they have managing/coaching junior devs/interns/newbies.

5. How much experience someone has at decomposing a problem into smaller parts and identifying the simple parts and complicated parts

There are huge consequences to either answer to the question: “Is using an AI tool effectively a coachable skill?”. I’m sure someone has already looked into this or is looking into this - if it turns out to be a coachable skill, and we can identify what those are, there’s a lot of money to be made in bringing a coursework to market.


No idea what you are doing there. I've been using ChatGPT Pro and Copilot for four weeks and it literally blows my mind every single day.

Give it a Django model and ask it to create a data visualisation. It'll perform complex ORM queries with filters, annotations and aggregations, load the result into a pandas dataframe, create more columns in the dataframe, do more aggregations, then spit out matplotlib code to draw the chart. The time spent typing the prompt is time I would have needed even in the pre-GPT era, since I would have to think about what I want to achieve anyway; now I'm typing it into ChatGPT, which makes the thinking process more structured as well!
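For a hypothetical Order model (the app, model, and field names here are mine, not from a real prompt), the kind of thing it spits out looks roughly like this:

    from django.db.models import Sum
    from django.db.models.functions import TruncMonth
    import pandas as pd
    import matplotlib.pyplot as plt

    from shop.models import Order          # assumes an existing Django project and model

    qs = (
        Order.objects.filter(status="paid")
        .annotate(month=TruncMonth("created_at"))
        .values("month")
        .annotate(total=Sum("amount"))
        .order_by("month")
    )

    df = pd.DataFrame(list(qs))                    # columns: month, total
    df["month"] = pd.to_datetime(df["month"])
    df["total"] = df["total"].astype(float)        # Decimal -> float for plotting

    plt.bar(df["month"].dt.strftime("%Y-%m"), df["total"])
    plt.title("Paid order volume per month")
    plt.tight_layout()
    plt.show()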

I have built more tools in the last three weeks (for work and personal use) than I have in the last three years combined. All thanks to OpenAI's brain constantly pushing me in the right direction.


I think that you are at least using the same one that I am.

"Hype" can also mean marketing.


I think ChatGPT is impressive, but it's revealed just how insular and vapid the thoughts of some HNers are. Hearing people's "endgame" for ChatGPT has been very depressing


I don’t know how you’re using it but it seems to step up its game if you challenge it more. You can also ask it to refrain from dumbing things down.

Copilot wasn’t useful for me because I want a back and forth conversation to straighten out concepts. With Copilot you don’t get that and it just sprays out little messes of code.

I use GPT4 like a rubber duck in the sense of “rubber duck debugging” and it can be surprisingly supple and creative in its solutions… many bad ones, but again, push back on those and you get to good ones.


It seems really great at vacuous American essay writing.


All the time I see blog posts by people who give ChatGPT far more credit for partially correct answers than it deserves.


Is it terrible at the content or at the delivery of the content? I find it to be bad at the former but rather useful for the latter.

If you ask ChatGPT to generate a document, it generates one that reads well but has terrible content - e.g. super broad or just plain contradictory. But given a not-so-well-written piece of writing, it can clean that up fairly well.


I find Copilot most useful when writing comments tbh. Whenever I'm stuck contemplating how to phrase the rest of a comment, it comes up with a suggestion. It might be a terrible suggestion, but that's usually enough to get me unstuck. And every now and then it's exactly what you need and it saves a bunch of typing :P


The point of it isn’t to write something like a book from scratch. Imagine if I told you “write a book about unicorns”; it would be trash.

The genius of it is that you could give it the notes of a meeting and get it to rewrite them in a different form. Nothing else can handle language like that.


Well, you made only one mistake in these 4 sentences you wrote, which is like a 400% improvement over the average messages I see outside HN, and an infinite improvement on some other people.


What effort have you put into learning useful prompts?

I only use the free 3.5+ ChatGPT and it's written some nice mapping code and some fun poems for me. Poem subjects ranged from rheology to texas hold'em.


I am trying to use ChatGPT for writing code, and that code is mediocre to bad. I really hope that in a couple of years LLMs will advance enough to support/refactor what they are generating now.


They are tools; you have to learn how to use them.

I use chatgpt everyday and am getting better and better at asking questions to get the answers I want.

I used Copilot when it was free; it was best at boilerplate, and sometimes not bad at auto-completing the next few lines, but I didn't think it was worth paying for at the time. I use Codeium now, which works in a similar way - probably not quite as good as Copilot in that it doesn't understand as many languages, but it still does a reasonable job with boilerplate-type code.


It makes me feel like my grandmother who can’t use search. She can use the browser fine she just lacks any Google Fu and uses the most ridiculous search phrases.


ask it to write like someone else (some established author you know) and it gets much better, you need to play with it a bit


Just to make sure, are you selecting the GPT4 model at the top of the conversation in ChatGPT?


Maybe you're not using the right prompts. But yes, ChatGPT needs more creativity input.


Same here! It can feel like the Emperor's new clothes sometimes.


You got filtered btw


half the time i tell copilot to do something the suggestion is just “return true” and there are no other suggestions


You are likely not prompting correctly. https://www.deeplearning.ai/short-courses/chatgpt-prompt-eng...

GPT-4 Passed the Bar Exam, so I think there's something to it https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233


Just for one persona.

Good old Peter Drucker formalized the personas in his advice to new employees to figure out how their bosses consumed information:

1 - The lawyer. Needs walls of text.

2 - The architect. Needs diagrams, visuals.

3 - The accountant. Needs numbers, spreadsheets.

4 - The actor. Needs a conversation, verbal.

It's a generalization, of course, but I have applied this pattern successfully. The core bit is that using the wrong method with any of the 4 personas flat out does not work.

Send a long article to an accountant and they're stumped. But send them a bunch of numbers and they hear the numbers sing. Etc.

Tangent on the Amazon way of write-ups - Bezos forced his persona on the whole company. I would hate it. I am an architect; a few diagrams is all I need.


> The lawyer. Needs walls of text.

Lawyer and law professor here: No, no, NO!

Wall-of-words text is a sign of a L.O.A.D. — a Lazy Or Arrogant Drafter.

I teach contract drafting to 3Ls and stress The Two Great Rules of Readability:

1. SSSSP: Short, Simple, Single-Subject Paragraphs. https://toedtclassnotes.site44.com/Notes-on-Contract-Draftin...

2. BLUF: Bottom Line Up Front. https://toedtclassnotes.site44.com/Notes-on-Contract-Draftin...


"The lawyer" is probably a bad name, but I've definitely worked with this sort of person before. "The novelist" might be a better name.

One of my ex-bosses preferred long, well-written emails to just about any other form of communication. He was a really bright guy with a degree in English Literature who somehow stumbled his way into the software industry. When he wanted to start working with a new technology, he'd ingest a small pile of the driest textbooks on the subject over the course of a week or so, understand and retain all of it, then start running laps around the folks who had been working with the tech for years. He's still the only engineer I've ever met who preferred learning-by-reading over learning-by-doing.


I also prefer learning by reading. But there are some things you can only learn by doing. I usually don't like and try to avoid technology that I should be able to learn by reading, but which is so *uck*d up, that you can only learn it by doing.


(Former law student here...)

Definitely agree with your approach to drafting, but I think you could be overreacting a bit here. Even the shortest contracts might seem like a "wall of text" to the lay person, and short paragraphs strung together is a wall of text.

At any rate the preferred format in most legal contexts is still "`wall' of text" (whether big or small wall, as opposed to diagrams, spreadsheets, or conversations), which is what I think was the original point.


> short paragraphs strung together is a wall of text.

"White space is your friend" is another rule I preach to my students. Some 40 years of dealing with contracts has persuaded me that short paragraphs with white space in between them are FAR easier to review and negotiate — which means they get signed sooner, and clients vastly prefer that.

> as opposed to diagrams, spreadsheets, or conversations

For several years there have been people banging the drum for contracts to include more icons, flow charts, etc., for improved readability. So far, that idea hasn't gotten a lot of traction — IMHO that's because most lawyers barely even know how to use Microsoft Word competently and would have no idea how to create a flow chart or meaningful icon, so they default to writing pure text (which is also the quickest and thus cheapest path to getting a draft onto someone else's desk).

About the best I've been able to come up with in this regard, as something that can be done with Word, is to create tables of, e.g., action items, with columns such as "Event | Action | Who | When" — basically, an Intel-style "AR list" [0], or in software, a CASE statement.

[0] http://www.nathanzeldes.com/wp-content/uploads/2014/01/Effec...


You are describing the "good lawyer". Judging by the absolute walls of text given in the terms of service and contracts of almost every big company out there, most lawyers are not "good lawyers".


I sometimes wonder whether some lawyers intentionally draft wall-of-words text in the hope that the other side's contract reviewer will miss something because of the MEGO Effect (Mine Eyes Glaze Over).


I hate it when a long technical document doesn't use BLUF, i.e. omits a summary of the main topics at the start.

There's too much info on the internet, and even in internal documentation: if I have to read half the thing to understand if it's relevant, I'm gonna spend most of my time on irrelevant stuff.

This is a problem with asking engineers to document without having a writing culture. I've seen it happen: once it's written down, that's it - it's documented, great work! But we should treat it more like code, with clear divisions of concerns, scopes, and even refactors when needed!


I really love how even the lawyers don't want these gigantic walls of text they produce.

I totally get the walls of text are needed to a point, and we don't have good options to produce succint contracts. It just feels like an everybody-loses situation.


> I totally get the walls of text are needed to a point, and we don't have good options to produce succin[c]t contracts.

You're too kind — literally.

Wall-of-words text is never needed in a contract. Clients should demand that lawyers do a better job in drafting readable contracts — because readable contracts get signed sooner, thus taking up less of the business people's time to negotiate and freeing them up to focus on other things (not to mention being less costly).

</rant>

The Two Great Rules of Readability above will get you at least 95% of the way there.

Wall-of-words contract language is analogous to spaghetti code in a computer program.


Forgot to add the business case for readability: Readable contracts get signed sooner (and, probably, inadvertently-breached less often).


pg's observations are equally applicable to all of those domains, all of which are likely to become AI-mediated by default.


Interesting. Is this really true, that every person prefers at most one of these?

Also a quotation link would be appreciated.


Here are some guesses at popular CEOs.

LAWYERS - needs walls of text

Jeff Bezos is known for his long-term thinking and his annual letters to shareholders are famous for their extensive detail and length.

Mark Zuckerberg is known for his long-term vision and often communicates his ideas in lengthy blog posts and public statements.

ACCOUNTANTS - needs numbers, spreadsheets

Sundar Pichai has an engineering background and tends to rely on data and analysis in decision-making.

Satya Nadella has an engineering background and has emphasized the importance of data-driven decision-making at Microsoft.

Warren Buffett is famous for his value investing approach, which emphasizes analyzing financial statements and data to identify undervalued companies.

ARCHITECTS - needs diagrams, visuals

Tim Cook is known for his attention to design and user experience, and Apple's keynotes and product launches often feature detailed visuals and product demonstrations.

Elon Musk frequently communicates his ideas through visually compelling presentations, and is known for his ambitious plans for space exploration and sustainable energy.

ACTORS - needs a conversation, verbal

Mark Cuban is known for his outgoing and talkative personality. He often engages in conversations with fans and customers on social media and in public appearances, and is very comfortable in front of the camera.

Richard Branson is known for his flamboyant and charismatic personality. He often appears in public events and media interviews, and is known for his storytelling and ability to captivate audiences with his words.


Which persona is Bezos? Accountant?


Probably the lawyer. At Amazon they’re somewhat known for their six-page essays.


Yeah, but then there's the weekly WBR where the directors ask what a cell on the DataNet report means, the data isn't sussed out, and so the L5 data science guy who reports to the L7 PMT spends the next week changing SQL to produce that requested view for the next week's WBR.

Many L6 promos over the years went out to folks who could quickly and accurately build WBR decks that gave L8s the visibility they wanted.

So maybe Bezos is the lawyer, and all the directors are accountants.


Definitely lawyer and accountant.

PRFAQs, COEs, and WBRs.


This reads like Plato's warning (through Socrates' words) 2,400 years ago that writing will make people forgetful:

"For this invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." [1]

[1] Phaedrus 275a-b, http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%...


Plato's criticism is very true on an individual basis. What he didn't consider was the effect the written word had on a social scale in terms of being able to exchange ideas over physical and temporal distances.

It turns out that was a greater benefit, one that to some extent makes up for losing the exceptionally well-trained memory of scholars from a verbal tradition. Not that that last part isn't a loss, though, and his warning against false wisdom is more relevant than ever today, when it feels like half the people you talk to online are just googling up Wikipedia articles on the fly to base their world view upon.

--edit-- sometimes me not grammar good


I think it's better to think of these as different, rather than superior... especially given our times' proliferation of mediums with differing cognitive implications.

Is Twitter > books? Is HN > podcasts? Every medium has its tendencies... positive, negative and complicated. We don't actually choose between them. It's useful to keep an eye on losses, not just gains.

Oral traditions, for example, naturally evolve and diversify. A fairy tale can literally spread over the whole world, and develop thousands of versions. Each one adapted and localised. Written stories also spread, but remain identical... even more-so after printing.

This is, largely, the story of monotheist religions. Whether or not they had sacred scrolls in ancient times, the religions were mostly based in folk traditions, temple authority, prophets, judges... people. As they moved from oral tradition to authoritative canon, the religions became extremely stiff and unyielding.


> This is, largely, the story of monotheist religions. Whether or not they had sacred scrolls in ancient times, the religions were mostly based in folk traditions, temple authority, prophets, judges... people. As they moved from oral tradition to authoritative canon, the religions became extremely stiff and unyielding.

The Catholic Church is an interesting counterpoint. Their religious teaching still openly draws upon tradition, institutional church authority and individuals like the Pope.


> The Catholic Church is an interesting counterpoint. Their religious teaching still openly draws upon tradition, institutional church authority and individuals like the Pope.

IIRC this was their main contention against the Protestant reformation. They believed that just pointing at scripture without some practice of custom and tradition to ensure you're properly interpreting it in a sensible way would lead people to folly and extremism/fundamentalism.

Granted, it's sort of a pick-your-poison deal. You either get the fundamentalism of hardline scripturalists or you get the corruption that comes with a hierarchical institution that requires people to offload some of their critical thinking. Or you can blend them together and get the worst of both worlds by creating something that looks like the more noxious elements of the American Evangelical movement.


> You either get the fundamentalism of hardline scripturalists or you get the corruption that comes with a hierarchical institution that requires people to offload some of their critical thinking.

I don't think it is fair to say the latter leads to "corruption" or "offloading some of their critical thinking". This phobia is largely a feature of radical individualism and the lack of humility that the notion of radical self-sufficiency (intellectual or otherwise) entails. The paradox of individualism is that it actually weakens the ability to be an individual. Human beings are social animals whose lives are relatively short and therefore whose experiences are limited. And because individualism atomizes the individual, the effect is that this social animal becomes subject to greater levels of fear and stress about his isolation, and when isolated from the social and from tradition, he becomes easy prey for weird ideologies.

In this specific case, scriptural canon was compiled by the Catholic Church. You obviously cannot appeal to the canon to codify the canon, so some extra-biblical body of knowledge and understanding must exist that would allow such a canonical body of scripture to be identified. This body of knowledge and understanding is called Tradition, and it is guarded, refined, and transmitted by an unbroken chain of authority found in the apostolic succession. (Tradition is the basis for the transmission and development of all knowledge in human history; there is likewise a scientific tradition without which there could be no science.) The notion that an individual could do better alone without relying or drawing on tradition is belied by the presence of over 40,000 Protestant sects in the US alone, each claiming to be able to interpret scripture. And how do they interpret scripture? However they want! In most cases, it means interpreting it according to whatever happens to be fashionable at the time. So much for "critical thinking". This is very visible in the mainline Protestant churches that, not so incidentally, are shrinking very rapidly. Where this Protestant mindset has leeched into Catholic circles, it has produced the same deadly effect.


But the body of canon law would have been impractical to standardize and disseminate without the printed word.

See: how many schisms and papal crises there were in medieval times (admittedly, the Catholic Church was also much more a political-nation-religious entity then vs now)


I largely agree about the administrative difficulty although I‘d emphasise the written word over the printed word in this context.

Using thousands of monks to copy documents by hand instead of using printers was incredibly resource expensive, but affordable for the Medieval Church.


It'd be fascinating to see a conversion to person-hours to copy one book in medieval times.

And I mean fully-loaded, so how much time needed to be spent {farming/military/political administration} to give a monk time to learn to read and write, and then allow them time to copy the book itself.


I think they viewed the transcription itself as a form of meditative/devotional practice and didn't prioritize efficiency as much. The illumination on some of these manuscripts can get pretty elaborate and, sometimes, kind of whimsical. So they were clearly doing some creative expression while doing it.


It's an interesting thought. Challenging because of the fixed costs though. The cost to produce one book would be almost identical to the cost to produce 1000 books; it is the infrastructure more than the marginal resource cost per book.


Could see it going either way. Agricultural efficiency (in terms of person_fed : person_farming) in Europe was pretty low. So the dedicated headcount (monks) might dominate the net cost, which would make it scale more linearly with number of books.


The market price of 1 medieval book was roughly the same as the price of 5 pigs, according to this analysis [1]

Price of 7 books = £5 = 100s -> 14s per book / Price of 1 Pig in London = 3s

14s / 3s => roughly 5 pigs per book

Note £1 = 20s

[1] http://medieval.ucdavis.edu/120D/Money.html


Paper was also a huge cost--maybe more than the monks' labor depending upon how you want to value their real cost.


Today I learned! Thanks!


> But the body of canon law would have been impractical to standardize and disseminate without the printed word.

Would it? The Bible, which is much larger, was practical to standardize and disseminate without the printed word, and it's not like there was a proximate relationship between the availability of printing (presumably with movable type, in the 15th century) and the codification of canon law in the 20th. The idea that this was waiting on printing is…hard to defend.

> See: how many schisms and papal crises there were in medieval times

The really durable schisms after the 11th Century emerged after or just before (and became crystallized because of) printing, though.


I'm not following. The parent, I believe, was not talking about canon law specifically (which is changing and to be distinguished from doctrine which only undergoes development, but never revision). I also don't understand what papal crises have to do with anything in this context. And what is meant by "political-nation-religious entity"? The Church is a religious institution whose bishops (including the pope) have also exercised distinct secular political authority in addition to religious authority. It was never "national" in character, only supranational (hence, "catholic").


Parent was observing that the Catholic Church was an exception to GP's point that the written word stiffened and froze monotheistic religious practice, as it blended oral and written transmission.

I was observing that the modern Catholic Church (post-printed word) has ossified in exactly that way, with the printed word allowing direct dissemination of the "agreed" canon (i.e. pope / hierarchy) to the entire church, thereby unifying it into a single practice.

In counterpoint I offered the number of papal crises and schisms that abounded during the medieval and pre-medieval period (i.e. before printed material became cheap and available).

Ergo, the use of the printed word allowed the Catholic Church to unify and standardize in a way it had not before -- GP's point.

It's indicative that the majority of monotheistic religious splits within Catholicism happened before the printing press, when intervening human parties could reinterpret the canon as it flowed through them.

As for "political-nation-religious entity": the Catholic Church / Holy Roman Empire partnership allowed the Church to become a nation in all but land, through the power its blessing conveyed upon a chosen leader.

https://en.m.wikipedia.org/wiki/Canon_law#Catholic_Church


> This is, largely, the story of monotheist religion.

This is largely the story of religions, especially old pagan religions. Myths and folklore came about. The moment when religions went scriptural, the message became firm and technical. And gave rise to law as a study.


Not really, that only really happened in the reformation for Christianity.

The romans were already pretty good at law.


I had judaism in mind.


I don't get it. You mean that old Judaism was pagan before scripture?


Judaism preceded the pagan Roman kingdom. Judaism is a scriptural religion. Judaism gave rise to law as a study.

Non scriptural religions had evolving myths and folklore like Greek and Roman gods and goddesses. Those religions did not contribute to law like the prior comment mentions.


I don't really get your point. The Roman law system was pretty advanced without the influence of Judaism.

India developed a complicated law system before Judaism. It seems like this is part of human societies growing in complexity, not something special to Judaism.

Protestantism definitely had a big impact on the development of Western empiricism and probably modern law. In general, our current Western individualistic, skeptical mindset is the heritage of the Reformation.


> The Roman law system was pretty advanced without the influence of Judaism.

Roman law development is thought by some to have been shaped by Judaic influence. https://www.jstor.org/stable/1452148

Indian development is beyond my understanding.

Protestantism led to direct literacy and scholarship in Bible studies. Again, this is influenced by religion. The adoption of literacy is what led to Northern Europe's success.


It's also useful to point out that historical verbal tradition trained a very specific type of memory recall but that doesn't automatically make anyone wise.

Just because you memorized 10000 random articles on Wikipedia, doesn't mean you now have the wisdom to apply that in a particular circumstance.

Very much like early AI models.


I think the contrast should be between studying and internalizing a subject versus having the ability to look up a subject. That seems the most true to Plato's intention.

It's common and easy to fall into considering the things you could look up as things you already know.

What's the difference, one might ask? What's the problem with offloading some of this knowledge and freeing up space in your head? Well, the thing about learning something is that it doesn't just give you access to the information, it also permits synthesis of new ideas. The sum of knowledge is greater than its parts.

A very concrete example: As someone who only speaks English one may look up the Latin terms 'manus' (hand) and 'facere' (to act/do/make); but unless you actually do, you'll probably not immediately grok the etymology of the English term 'manufacture'.


Exactly. My compsci prof forced us to learn a lot by heart, but once it's internalized you start to think in those terms. Right now I am writing my PhD thesis in management, and in the beginning I didn't really have all of those papers in my head. But now that knowledge slowly accumulates and I can think through things I couldn't think through before - and in hindsight I wonder how I could not have understood something so trivial. The same goes for literature and poems: if you know a poem by heart, it's not just fancy to recite it; you start to really incorporate part of that language.


I think of it in terms of computer memory levels.

There are some computations (synthesis) that require so many (non-front-loadable) memory accesses that it's impractical to do them from high-latency storage (books), because the number_of_accesses * access_time dominates the project time.

Instead, you must have a working set of core information (or at least pointers to information) in low-latency memory (your brain).

Example: How much longer would it take me to do a multi-digit multiplication if I had to look up the process for multiplication in a book for every digit multiplication I did? And what if that multiplication were just one of many in the higher-level math problem I was trying to solve? (Then generalize to any problem that requires a core base of knowledge)
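Rough numbers to illustrate (entirely made up): a 3-digit by 3-digit multiplication is about nine single-digit multiplications plus the additions. From memory, at a couple of seconds per step, the whole thing takes well under a minute; if every single-digit step meant a one-minute trip to a reference book, the same problem balloons to ten minutes or more, and anything built on top of it becomes impractical.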


It is very similar to caching performance impacts. And like you say, sometimes performance is just faster, and sometimes it actually enables functionality…


Strong memory is almost always an indicator of exceptional skill. Whether it's chess players, musicians, writers or poets, mathematicians or also programmers, people who excel generally have astonishing ability to recall.

That's not an accident. Wisdom emerges out of practice and the effortlessness that comes with it. The genius piano player isn't that good because of some pie in the sky wisdom about music, just like the AI they just played tons of scales and training pieces. Literally meaningless stuff. This rejection of rote memorization as some sort of lower skill, that students should be 'smart and lazy' is one of the stupidest modern tendencies.


>Just because you memorized 10000 random articles on Wikipedia, doesn't mean you now have the wisdom to apply that in a particular circumstance.

It's much more likely that you can than someone who would only look them up "on demand", as you are at least aware of the possibilities in those domains.


This is actually what Plato also mentions, a few paragraphs later:

> He who thinks, then, that he has left behind him any art in writing, and he who receives it in the belief that anything in writing will be clear and certain, would be an utterly simple person, and in truth ignorant of the prophecy of Ammon, if he thinks written words are of any use except to remind him who knows the matter about which they are written.

https://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext...


> also useful to point out that historical verbal tradition trained a very specific type of memory recall

I'm genuinely curious, why do you think this is relevant? How would you differentiate "specific types of memory recall" scientifically?


Cued recall in the form of verbal tradition is often criticised to be subject to inaccuracies as, for one example, stories accrue embellishments over time.

https://en.wikipedia.org/wiki/Recall_(memory)


In Appalachia that's just called storytelling. :)


plato is specifically thinking of the social impact but he's also not actually arguing against writing. after all, he's making his argument in writing. he's trying to push the reader to look into their souls for the truths that he thinks are already there. "remembrance" here is specific, so is "memory". It doesn't mean, to remember facts and things that happened, but to reach back into ones mind beyond sensation and to "remember" the eternal forms.


Thanks for this context. In this light, I am reminded of Emerson’s “Self Reliance.”

> Trust thyself


Ironically, we'd never know what he said had it not been written down.


Plato gives multiple hints across his works about the esoteric, unwritten teachings [1]. We can only speculate about the content; for me, they were surely tied with the Eleusinian Mysteries [2]: early, myths-driven psychonautics based on ergot.

[1] https://en.wikipedia.org/wiki/Plato%27s_unwritten_doctrines

[2] https://en.wikipedia.org/wiki/Eleusinian_Mysteries


I also think he had a bunch of written teachings that were intended only for his students. Obviously they’re lost to time.


With some luck and intelligence maybe we will discover more about the past in the future [1].

[1] https://scrollprize.org


Yeah, I've been excited about this because we're only about a month away until the ink detection deadline.


We would, if we retained an oral tradition (as is the case in that Platonic dialogue).

And even better, we would know it by heart.

Assuming we considered it important, of course. But that would be another benefit: it would also filter out unimportant works over time.

Of course not everybody would know it by heart. That's a third benefit: only people interested and caring enough would know Plato. Not dilettantes.


I disagree with all of the above.

If we never had writing, would the Huns have carried forth Platonic wisdom? Would it really have survived thousands of years of cultural evolution and the decline of Greek civilization itself? Who even is the "we" who should consider it important?

> that would be another benefit: it would also filter unimportant works over time.

How would it even be possible to know what's been lost in the oral tradition without anyone writing things down? Only takes one autocrat to massacre followers of a specific tradition to effectively erase it. Things get filtered based on their ability to survive, which is partially a function of "importance", whatever that actually means at a given time.

> a third benefit: only people interested and caring enough would know Plato. Not dilettantes.

This just seems like unabashed elitism. A "benefit" of the oral tradition is that knowledge is reserved for the deserving? No thanks.


Yeah, and then some.

Tired Alfred North Whitehead quotes aside, he's basically the single most influential author in the western tradition, arguably even eclipsing the bible.


> people you talk to online are just googling up wikipedia articles to base their world view upon on the fly.

Well you're lucky if they do; often they just make stuff up on the fly depending on what they saw on some dubious media or social network.


Well that's the human brain for you. It's very good at extrapolating new information from things it's seen and heard that may not be true, without actually telling you that it's basically making shit up as it goes. It's yet another reason why education matters.


I can’t tell if that’s a tongue in cheek criticism of the panic over AI hallucination. Bravo either way.


Would this be different in a world without writing? We generally synthesize incoming information into a knowledge base and a set of beliefs - I'm not convinced we're less susceptible to that when all knowledge is oral.

What's the difference between having read something last year and believing it, vs having heard someone tell you something last year and believing it? Would people stop in a conversation and go try to find the source of their information if they had to track down a specific person?


Socrates (for it is he who was opposed to writing) did not appreciate how the act of writing exercises our abilities of both memory and reasoning, at least as much as speaking and certainly more than listening. Someone who has written extensively on a topic is almost certainly well-versed in it, and more so than when they began (even crackpots are generally masters of their own flawed theories.)


I'm sorry, Plato wasn't thinking about the effects of the written word on social issues over time and space?!

The whole point of his writing it all down, the very literal foundation of ~2500 years of philosophy and the bedrock of Western (and other) civilizations, was so that he could have an effect on social issues over time. That was the whole point! He is super clear about this in his writings and throughout his life and travels.


> You have invented an elixir not of memory, but of reminding;

> What he didn't consider was the effect the written word had on a social scale in terms of being able to exchange ideas over physical and temporal distances.

Why would his words be less potent when applied to billions of people, as opposed to only a few? Are those billions not just as susceptible to fallacy as the elite few?


Plato's warning is true, actually. Since I started to write, I remember less and less about the things I have written down, because the brain replaces them with a pointer.

However, writing for thinking and writing for storage is two different things, and it activates different parts of the brain.

I'm an avid pen-and-paper user, and using a real pen on paper allows me to think much more deeply. It regulates thinking speed, so things don't escape me and my focus doesn't get derailed. It also changes how the brain works so it can think better.

Also, blogging and writing documentation at work made me a much better thinker, because converting abstract concepts into meaningful sentences, with context and an easy-to-understand structure, also affects how clearly you communicate in other parts of your life.

Offloading this really robs you of the joys and power of meaningful communication on many mediums.


I know this is HN, but can we please stop drawing these vague parallels between basic programming concepts and the functioning of the human brain?

No, the brain does not "replace written information with a pointer". I cannot tell you with complete confidence whether writing stuff down improves recall or diminishes it, but I can tell you that you can construct computer analogies in support of either case.

I could posit that writing actually improves your memory, because the minute movements of your arm require more of your brain to compute, therefore creating more neural connections and possibilities of recall.

This is likely just as wrong, but demonstrates that thinking up some vague explanation is insufficient to make a point about a system as complex as the human brain.


> I know this is HN, but can we please stop drawing these vague parallels between basic programming concepts and the functioning of the human brain?

No, because I literally don't remember my to-do list after writing it in my EDC notebook. The only thing I know is that it's in the notebook. So I only know the address of my to-do list. I'm a literal sitting duck if I leave my notebook at home.

> I cannot tell you with complete confidence whether writing stuff down improves recall or diminishes it ...

I can tell you, because I have observed myself over the 5+ years since I started consistently writing my daily plans in my notebook.

> I could posit that writing actually improves your memory, because the minute movements of your arm require more of your brain to compute, therefore creating more neural connections and possibilities of recall.

What it improves is not "verbatim memory" but the concepts you work on. Working on something slowly, with a reflection medium (i.e. pen & paper), allows your brain to refine the thing you're working on and store it as a concept, rather than as a memorized string.

> This is likely just as wrong, but demonstrates that thinking up some vague explanation is insufficient to make a point about a system as complex as the human brain.

I have read a paper supporting the claims I made in this comment, but I couldn't find it again on a quick search. I will update/reply to this comment if I find it. I use these analogies because of that paper.


> I know this is HN, but can we please stop drawing these vague parallels between basic programming concepts and the functioning of the human brain?

Why? Those are perfectly good parallels, and quite specific at that. Forgetting the thing, and replacing that memory with the memory of the location of that thing - that's an "indirect reference", i.e. a pointer in programming terms.

> I cannot tell you with complete confidence whether writing stuff down improves recall or diminishes it, but I can tell you that you can construct computer analogies in support of either case.

You can construct analogies to anything in support of anything, what matters is whether the analogy is bringing the reader towards or away from the truth.

Also, even when you have two similar analogies pointing in opposite direction, that doesn't mean either is wrong - they may be talking about two different things. For instance:

> I could posit that writing actually improves your memory, because the minute movements of your arm require more of your brain to compute, therefore creating more neural connections and possibilities of recall.

I believe that to be true just as much as GP's pointer analogy. In fact, I attribute my learning of irregular verbs in English directly to my grandfather insisting I copy the verb table by hand several times. Doing it, I went from near-zero recall to near-perfect recall in two sessions (with each having me copy the table once or twice).

Now the difference here is, in case of irregular verbs, I wanted to memorize them. The act of writing the information down myself somehow helped commit it to memory, perhaps because the brain was more focused, or because I was more invested, or a bit of both.

In case of GP's (and mine too!) experience with TODO list, the act of writing serves the explicit purpose of getting the thing out of your head. This is the opposite of memorization - you literally want to forget about tasks you write down, so they stop circling around in your head, distracting and stressing you.

Incidentally, when I write some TODOs by hand, I actually improve my immediate recall of them - should I try and check. But I don't, because I don't want to reinforce those thoughts. My brain understands, and before I know it, I no longer remember most of them in detail.


A librarian might say they create a reference card, a web person might say they're creating a hyperlink.

In all cases I think parent post is trying to communicate that they are putting the large mass of information elsewhere, and only retain a reference locally. (implementation details notwithstanding)


A fun fact about the biological machine is that we could maybe make it "replace written information with a pointer". In planarian worms we can use a voltage-sensitive dye to see a "memory" the cells fall back to in case of injury, and they regenerate based on that "memory"; however, we can alter that "memory" to say "build a worm with 2 heads" instead of the regular worm with a head and a tail [1].

[1] "Michael Levin - Plasticity without genetic change: bioelectric embryos & synthetic proto-organisms", https://youtu.be/5ChRM4CEWyg?t=1169


The abstract idea of a 'pointer' is a piece of information to the location of some other information. It is not even an analogy - it is literally the same thing, differing only in implementation details.


Perhaps having a conversation with an LLM will help thinking even further? I am prone to thinking out loud (in private) and it helps me a lot.


An LLM is not a conscious entity, and while it can babble and assemble meaningful sentences, it has no guarantee of correctness, no baseline honesty, and lacks other small bits and pieces we value in a conversation. It's just a sentence builder.

As a result, it can't replace a real human being backed by real experience and thoughts, and hence it can't be as useful. I converse with myself, too, but that's not the same as conversing with another human being, nor with an artificial construct that can babble meaningful sentences.

Currently, we simply ignore or don't understand the fact that the meaning of a sentence is not built solely from word order. There are higher orders of information carried implicitly, and they are not articulated in words. Hence an LLM cannot replace, or even mimic, a real human being in a conversation.


> no guarantee of correctness, a baseline honesty, and other small bits and pieces

Humans for sure do not guarantee this either.

I think of LLMs as an amazing rubber duck. It's heard of everything, and it always responds with something that sounds like it came from the same sphere. You have to use your own mind to figure out if it's meaningful, but this is not so different from conversing with a person. People can babble too.

> There are higher orders of information carried implicitly

You're not always looking for those in a conversation. Sometimes you really are just checking that you've thought things through. Like if someone asks you what the arguments in favor of democracy are, you want a list of points so you can check you haven't forgotten something.


> Humans for sure do not guarantee this either.

However, humans are more nuanced than that. One might remember wrongly, or act in bad faith. This is why I said "baseline honesty". A person's traits definitely affect how their words are perceived, and this is not carried in the sentence itself.

> I think of LLMs as an amazing rubber duck. ... People can babble too. (Snipped for brevity)

Humans are not as random as an LLM.

> You're not always looking for those in a conversation. Sometimes you really are just checking that you've thought things through.

Again, the person you're asking or answering has their own character, and both their words and yours are affected by that implicit knowledge. This is a background process we're not aware of unless we dig into ourselves and look for it.


>Humans are not as random as an LLM. //

Humans you know and trust, sure. Humans you think you're conversing with online, not so much.


> Humans you think you're conversing with online, not so much.

Two decades of experience show me otherwise. Even your two comments show a consistent tone. We've only just started to discuss, yet I have already begun to build an image of you thanks to your comments.

This is one of the mechanisms we don't fully understand and don't dare to dig into much, because tinkering with people's minds is dangerous.


Having known a few habitual liars over the years, the similarity to LLM chatbots is quite striking.


> I think of LLMs as an amazing rubber duck.

That was my first thought too.

https://en.wikipedia.org/wiki/Rubber_duck_debugging

But it's a pretty amazing rubber duck that has read the entire internet and thus can correlate your ideas with the ideas of all of humanity in a split second on the spot.


Whilst by no means a new idea, physical and digital zettelkasten may interest some folks more than conversing with an LLM. Some people describe it as conversing with yourself or having a second brain. I don't find discourse with an LLM to be productive, but I do find surfing my zettelkasten to yield new and novel ideas, especially when it comes to problem-solving and research.

However, I'm still on industry forums and communities like HN to hopefully have new and conflicting thoughts thrown my way by peers. That's my primary concern with LLMs, a lack of fresh perspective that has all the nuance of experience and understanding behind its output.


> Currently, we simply ignore or don't understand the fact that the meaning of a sentence is not solely built with word order

Many people argue convincingly that that's exactly how the meaning is built, though.


If convincingly arguing for something were all we needed to prove correctness, we wouldn't have the scientific method.

Or, if word order alone built the meaning, devoid of character itself, we wouldn't be praising authors for their character-building skills and for embedding things not written or spoken into their stories, regardless of the medium in which they're presented.


To apply the scientific method and prove correctness, you need to define the subject first. Meaning is ill-defined, just like consciousness, intelligence, and others.

LLMs can clearly grasp higher-order abstractions and concepts just by reordering words. In fact, embeddings (developed much earlier) are specifically intended to represent the semantic meaning extracted from text using just statistical methods; until the introduction of transformers they lacked a good architecture to demonstrate the usefulness of that.

This makes many people argue that being a sentence builder is enough to be intelligent, as they also received most of their intelligence from the same source (concentrated experience of someone else - social intelligence).

"Honesty" and other forms of self-introspection is just a high level construct which current models aren't trained for, and likely not a fundamental issue with being a stochastic parrot.


And your proof?


I agree. An instance from today: I have a very novel task at work that nobody seems to know how to approach. I had a hunch that metaheuristics could play a role, so I used ChatGPT to help me better formulate the problem, and at least now there is a way to tackle it.

It made a few errors, at least in presenting my ideas, but those errors were a consequence of my own misunderstanding and lack of clarity.

The errors themselves, in my opinion, are invaluable because they force you to think, to be clear, and to guide the model into giving you useful ideas to look into.

For all intents and purposes, it was an example of the Socratic method, albeit inverted, where the student asks questions of an all-knowing teacher, and the teacher / LLM responds with ideas and hints. Ultimately, it's up to the student to synthesize the solution, be critical of the data, and tie everything together.


I remember things that I once wrote, not just read, MUCH, much better.


I remember better when I hand-write things rather than typing them, too.


> the brain replaces them with a pointer.

And, unfortunately, as you get older and memory starts to fill up, the pointers get overwritten.


Most of the comments here are attributing this opinion to Plato.

I'd just like to point out that Plato very rarely wrote in his own voice so it's very hard to say if it's his views or not that are being expressed.

In this case, however, this is almost certainly an expression of Socrates' views, not Plato's. Not only because it's in the voice of Socrates, but also because of what's evident in their actions: Socrates didn't leave anything in writing, and Plato left us arguably the most important written corpus of classic Greek philosophy.

Maybe he felt ambivalent about it, but he certainly thought there was value in writing.


Plato turned out right. I'm not extremely old and I still remember the time before the Internet, when to know something you had to look it up, which took ages, so you simply had to know it. Yourself.

We measured intelligence by the things you knew. We still have shows with questions based on this concept. This concept no longer makes sense in the modern world. But it used to.

Writing is a crutch, Internet even more so, and AI even more so. Eventually you can build the entire thinker out of those crutches and you need no humans anymore.


We measured intelligence by the number of facts we'd memorized, but little did we know that collecting more facts doesn't make us more intelligent.


Well, "intelligent" has always been a murky term, and as tech is rapidly surpassing us in every way, it'll become even more meaningless as times goes on.


It's true. People can memorize hundreds of thousands of words of text, and this ability was commonplace among learned people; the Pali canon is an example. After a month in a forest monastery, and over the four months I was there, I found myself remembering huge amounts of a text having just read it, with no intentional effort to recall it. But afterwards, when exposed to the internet and sensory stuff again, I found my ability to recall texts had gone.

"What is the cause of people forgetting texts they had learned before?" I can't remember the precise answer but that was one such line in the texts.

Today if you say that, you sound kinda crazy.

But, like Socrates, there are still a few wandering, mendicant, homeless-like orders out there (the Theravada forest tradition being an example).


> [W]hen exposed to the internet and sensory stuff again [I] found my ability to recall texts had gone.

The overstimulation could be part of it as well, couldn't it? There aren't a lot of other information-rich inputs in a monastery.

Holding a summer school in a forest or on an isolated resort seems like exploiting a similar idea, and it does work quite well (for a month or so, before isolation sets in).


It may be true that writing makes people worse at remembering unaided, but better if they can check their diary. I imagine ChatGPT-type things may likewise make people lazier at unaided thinking but better at AI-aided thinking.

I tried asking ChatGPT what it thought and it came back with

>... it's also important to note that technology is a tool, and its impact largely depends on how it's used. For instance, AI writing tools ... can be used to augment human creativity and productivity, rather than replace it. These tools can help writers brainstorm ideas, overcome writer's block, write more quickly, and even learn to write better by providing examples and suggestions.

>Moreover, the use of AI tools might also stimulate new forms of thinking. Just as calculators didn't eliminate the need for mathematical understanding but rather allowed for more complex problems to be tackled, AI writing tools could help individuals refine and expand their thinking, enabling them to tackle more complex writing tasks or express ideas more effectively.

Which is kind of interesting - I didn't think of writer's block on my own, for example. Also, lazy thinkers at the moment probably tend to just go along with what they see in the media. AI-aided thinking could improve on that.


Yeah - so much of writer's block is looking at an empty page/screen. It's very often easier to fix something bad than to write something good.


I’ve argued that if ChatGPT means that motivated students spend less time doing expository writing, and more time:

* learning to ask good questions

* learning to fact check

* learning to edit the writing of others

Then they have truly learned the Socratic method. Which is a much more valuable tool than writing from scratch solipsistically: being able to actively engage with the writing of others.


Whether this applies depends on what situation you are supposedly replacing with writing.

If you replace intense IRL dialogue and discussion, with writing, this might be the case.

On the other hand, if you replace just letting your thoughts fade into the void with writing them down as coherent, thought-out ideas, I would argue that the writing has actually both triggered recall from memory (thus exercising the memory) and helped you critically assess and clarify your own thinking.

Thus, I think writing is an extremely helpful tool for processing your thoughts for most knowledge workers who are not already involved in constant dialogue with others (which is most of us I guess).


Actually, I always found this part much more interesting:

You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.

Because that is now no longer the case: an AI trained on a corpus of many books can in fact say something new.

That and Meno's paradox!


Plato had it easy... When we were young, there was no memory. You had to work everything out yourself, each time. And it was a hard life having none of that fancy thing of memorizing but, frankly, life was better for it.


I have seen this happen before my eyes: when I was young (~17 y.o.) I was able to remember all my family's ID numbers, my friends' phone numbers, and a lot of data about them. Now I rely a lot on my smartphone to tell me their phone numbers, their birthdays, their addresses, etc. (sadly, today I don't even know the phone number of my gf, out of pure laziness). As I have become more and more lazy and let the tools do their job, I'm losing my own skills at it, "because I need my brain for bigger things - sure".


Maybe kinda pedantic, but this is not Plato/Socrates saying this here; it's Socrates telling Phaedrus what King Thamus said to Thoth. It's kinda important, I guess, because Socrates is ultimately somewhat ambivalent on this issue by the end of the dialogue.

Consider also Derrida's thoughts on these matters: https://en.wikipedia.org/wiki/Pharmakon_(philosophy)


Sure, the game of différance [1] can be played till the end of the universe. Socrates is well known as an aporia [2] lover, but at the end of the day even he had to take an executive decision, leave the agora, and go home to the nagging wife [3]. Ancient Greek misogyny aside, Xanthippe can be interpreted as Socrates' aporia solver: just as his 'daemon' [4] would only tell him "No", Xanthippe would tell him "Yes".

[1] https://en.wikipedia.org/wiki/Diff%C3%A9rance

[2] https://en.wikipedia.org/wiki/Aporia

[3] https://en.wikipedia.org/wiki/Xanthippe

[4] https://en.wikipedia.org/wiki/Daimon#Socrates


This is perhaps abstract and tangential, but...

Especially for this quote, I like attributing it to Socrates as Plato did. We obviously don't know the details and nuances. However, I feel it captures the (perhaps inexact) tension between Socrates' old, mostly oral tradition of philosophy and Plato's newer tradition of written philosophy.

It's ancient Marshall McLuhan. Oral philosophy and written philosophy create different philosophies, not just a different medium for the same philosophy.


I can only begin to wonder what his opinion of Google search would be, since we (developers) always practice "don't memorize what you can easily search", and there's a sizeable number of people claiming that memorization shouldn't even be a concern most of the time - advice that's given to people who are new to the field (like myself).


I think understanding is more important than both. Even if you're cut off from the internet somehow, you can still re-derive things if only you understand them. And understanding also allows you to solve novel problems.


Do people really practice this? Like I usually physically unplug my computer from the network as I develop as it's such a distraction to have emails and stuff popping up. Doesn't really feel like a limitation to not be able to look stuff up.


I go in the completely opposite direction nowadays. I have chatgpt [*] up, I use google searches, I have online api documentation and tutorials open. I log on to IRC and chat with (other) skilled developers. (psst... I even read stack overflow ... sometimes [*]) ... so ... I definitely use the internet a lot while developing.

As mentioned above, I do try to research things to the point of understanding. Which is something human contact definitely helps with. To stay on topic: ChatGPT is obviously less good, but is often good enough, is patient, and guaranteed to answer you in a couple of seconds.

[*] My exact workflow would not quite fit into this footnote, but I seldom cut-and-paste answers, unless it's for something exceptionally non-critical. I try to understand what is written and then type out my own version (which should be very close), or at the very least I type it across by hand rather than use ^C-^V (the latter is an edge case). In general, there's this AI koan about how some things only work if you also understand them, which has a kernel of truth to it. ( http://www.catb.org/jargon/html/koans.html first one about tom knight)


Remembering changes memory.

It embeds what is remembered in a current context.

Did Plato take this into account?


> For this invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding;

That is correct. I mean, some of that is exactly why I write at least 5000 words a day of journal notes while working. Because I want to offload everything from memory to a written form that is indexed.

> and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant...

This part is where we veer into prediction of how it will alter people, and it is both right and wrong. I think the education system has shown us it's possible to have a great memory, to remember many facts, and still be ignorant. At the same time, there's so much material that can be consumed that people have surface knowledge of many things; once you dig into a conversation with them on a topic, the ignorance pops up and it turns out that they don't know much about it. I know this. This has been me on many occasions and is still me on some occasions.

>... and hard to get along with, since they are not wise, but only appear wise.

Again, there's some truth to this. I think about the times when I've jumped into a conversation about politics with some half-baked knowledge that I read somewhere but failed to understand or recall correctly, and I think I must have looked a fool at those times.

But then again, I've course corrected here and there and I've spent time writing and sharing thoughts with others where I've used previous writings to pull up facts and my own synthesis, and I feel like I've been a better participant at times like that. In that way, writing has helped me steer away from ignorance.

I can't comment further though on the quote because I've never read the rest of the context so I may be missing out on some stuff.

I think Paul is on to something here, though. I can't count how many thoughts I've had that I thought were clever, but when I started to write them down and organize them on my own, I discovered for myself either that I didn't have enough evidence to fully make my case or that I was just flat-out wrong. Writing is a form of thinking, in my opinion. And if we don't write, we lose that side of thinking too. I do appreciate, though, that Paul doesn't go further into predicting the consequences as Plato's warning does here. But I'd be willing to bet that Paul is right that there will be some negative consequences for sure.


Socrates' main argument is that the word itself cannot teach: teaching takes two parties, and with writing those are often a student and a text instead of a student and a teacher. If Socrates sat down and explained something to a student, he could impart not only the written word but other examples from their own lives. But the word can be uprooted from its context and read by many who will learn just the word instead of the meaning of the whole of it. There's even a whole class of philosophy that argues _inside_ the definitions of words.


So like, if writing is one of many extensions of man, then Plato is an anti-McLuhan.


Plato.. wasn't that the guy who invented the acronym "IIRC"?


I think that was actually Homer. "Speak, muse, tell us the deeds of…" (or something to that effect)


Good thing Plato disregarded Socrates' advice and wrote that down


Well, he was right. Do you know anyone who knows a book by memory?


Books (largely) aren't written for memorisation these days, with the possible exception of some poetry or children's books.

Classic oral literary traditions are based on rhyme, meter, repetition, simile, metaphor, and references to cultural touchstones and themes, for the most part, all of which strongly assist with memorisation. These are not only necessary for a literature which isn't written down and instead is passed on through generations orally, but also quite likely represents a survivorship bias in that works which didn't exhibit these patterns didn't survive. And of course the versions we know are the end of a long pipeline of transmission (or the world's largest and oldest game of telephone), captured in writing and then passed on to us as (among) the first written traditions.

I've committed a few poems and homilies to memory, one example for nearly thirty years now. I do refresh that recollection from time to time (and realised I was dropping a stanza consistently). It's an interesting counterexample.

Then again, I turned up a piece earlier today I'd written myself about eight years ago and had all but no recollection of.


The point was that with Plato (but also ~2,000 years before his time, with the Sumerians, who probably invented writing) we were already discovering that we don't have to bring the exterior into an inside which we call the "self", memory; we can actually leverage the environment stigmergically [1] to enhance and extend the "self", i.e. by writing, leaving a mark - the Greek word grapho γράφω [2] literally means to carve.

Not sure if LLMs will enhance our current "self" by themselves, but once we get a chip or two inside our brains, once we start controlling the cells to regenerate or grow specific limbs [3], we will surely consider our present "selves" as tiny as we consider the "self" of one of our cousins, the chimpanzee, in a zoo today. The journey and expansion of the "self" is merely starting.

[1] "the trace left in the environment by an individual action stimulates the performance of a succeeding action by the same or different agent", https://en.wikipedia.org/wiki/Stigmergy

[2] https://en.wiktionary.org/wiki/%CE%B3%CF%81%CE%AC%CF%86%CF%8...

[2] "HCN2 Rescues brain defects by enforcing endogenous voltage pre-patterns", https://www.nature.com/articles/s41467-018-03334-5


Isn't it a thing of honor in the Islamic world to be Hafez (?), someone who has memorized the Quran?

Of course I have no way to check whether it's done correctly, but from what I hear a lot of people have done this.


We just use our brains differently now, mostly to find ways that support our default positions, that leave our minds unchallenged and certain.


The thing is our bandwidth is limited and we allocate it differently now. However, not all replacements are beneficial and adding external dependencies adds points of failure.

Challenging yourself is a conscious decision. Make choices to challenge yourself, and don't build an echo chamber.


As I recall Socrates was put to death by the ancient Athenians for "corrupting the youth" by encouraging them to be critical of dogma.


Hah, as if people 2000 years ago were more open to challenging ideas.


It's remarkable what some 200,000 people (maybe fewer?) in ancient Athens were apparently able to come up with.[1]

They even did pretty well with computing with the Antikythera mechanism and its predecessors; though, like Babbage, they were limited by the manufacturing technology of the day.

[1] https://www.worldhistory.org/article/1165/ancient-greek-inve...


Not sure this is a representative sample of "people 2000 years ago".


If we wrote entire books in song, they could probably be memorized. Shakespeare attempted some of that work.


Unironically one of my favorite uses of chatgpt is to tell it to rewrite things as a Homeric epic in the style of Tennyson so it's more pleasant to read aloud and I have a better chance of remembering it


There were the same warnings about the printing press, both books and newspapers; "kids don't play anymore, they read books all day", or "people don't share news anymore, they all read their own newspapers".

I'm not going to deny it's a change, but it's too early to decide if it's a bad change. Every generation has its own big shift in the past 100-200 years, and every generation complains about the change.


Eh, the title here is one possible framing of a series of short tweets expressing his concerns about the potential impact of ChatGPT.

I homeschooled. My sons had trouble learning to write and one really hated writing.

I spent several years having him verbally explain material to me, having him learn grammar and vocabulary and other essentials from entertaining videos and graphic books and such.

Thinking and writing don't have to be so closely intertwined as they are in Paul Graham's mind. He's a writer and writing helps him think.

Some people think while taking long walks, working out at the gym, drawing their ideas, etc.

The kind of thinking Paul Graham does may become less common. I have no reason to believe that's some kind of holy grail and thinking via other means is inherently less valuable.

I say this as a blogger who may be even more guilty than Paul Graham of using writing to help me think.


When people say "writing is thinking" what they're really talking about is not so much the "writing" or even "thinking" per se, but rather the "crystallizing your thoughts into something concrete". Certainly my own thinking tends to be somewhat vague until I actually write it down, at which point I tend to get less sure of things because questions like "oh, but what about this?" start to turn up, or I spot fallacies in my argument (the better reason to learn about fallacies is to spot them in your own arguments, not to bludgeon other people's arguments with them).

In my experience writing is a very good way to do that, but if doing it verbally works better for some, then that's fine too.

In that sense I do agree with Graham that "I have some thoughts about X and I'll let ChatGPT take care of the details" is perhaps not entirely ideal.

All of this also applies to code by the way, in more or less the same way.


In addition to what you mentioned, I also often come up with new and interesting ideas while writing on a topic. I guess when the mind is tasked with crystallizing the ideas, it forces the mind to systematically sort out the clutter, and allows novel ideas to form.

I've lost count of how many times I've started writing on a topic and then scrapped it altogether in favor of a better idea on a tangentially related thing.


The key is probably explaining/creating something in a way that doesn't have a lot of latitude for just glossing over the details. I'm not sure long-form writing is especially better than giving a presentation to an audience or working through a problem/creating an artifact of some sort/etc. The key is that you can't just read something and go "Yeah yeah I totally get it." Though I suspect some outputs can be easier to handwave through than others--I know I've done it to some degree on presentations. ("This aspect is complicated and we don't have time to get into it today.")


You make me think of the anecdote about the professor who had half the class graded for making one amazingly beautiful piece of pottery and the other half graded by total weight of the work they created.

In the end, those graded by weight had nicer pieces as it involved more practice with producing something.


In the case of writing/presenting/many things, there's definitely a case for doing something regularly even if a lot of that output is pretty routine and pedestrian. So there's probably a limit to the degree to which you can automate away the routine stuff and still expect to parachute in and produce something killer on demand every few years when you need to.


Absolutely.


I'm sorry but your job as a homeschooler is to teach your sons to write. That's not a skill you can enter adulthood without, and expect to achieve success. Liking it, not liking it... none of that matters when you're asked to document a process or argue in favor of an idea.

Imagine the value a professional teacher would have provided for them...


After five years of not being forced to write while learning grammar etc some other way, he discovered fan fiction and began writing thousands of words per day.

But he didn't learn to write by writing because he had dysgraphia and emotional baggage about writing from public school when I pulled him out.

His blog writing gets more traffic than mine. He's a writer by trade, though he doesn't yet make much money at it.


I misunderstood the situation. Apologies.


The level of mature, respectful, civil discourse on HN never ceases to impress me. I see this less and less on other discussion boards.


I've never actually seen its equal anywhere else.


Which is exactly what the op did, patiently over several years. Homeschooling vs “professional” education is a different discussion.


> Most people hate to write as much as they hate math. Way more than admit it

I fucking love writing. Most people I know also fucking love writing, and I do mean love. Some do it for pleasure, some do it as a career, others do it as a means to an end in their career, and they haven't found value in LLMs.

There's writing and writing. If all you're doing is rewording what someone else has already done like a high school essay or a competing product description for what's essentially the same product? An LLM will probably suffice. Original and novel? Human beings all the way.

LLMs highlight that a lot of "knowledge work" is intrinsically bullshit, i.e., a high school essay doesn't actually assess somebody's understanding of a topic, just their ability to hit specific keywords and points, hence even essay mills can churn out an "original" essay for a kid if they're too xyz to do it themselves.

Whilst I don't care for LLMs, I do support a huge shakeup of the existing educational assessment paradigm as it is self-evidently not fit for purpose, IMO.


> I fucking love writing. Most people I know also fucking love writing, and I do mean love.

One can tell from the opening of your comment. Composition, cadence, vocabulary, repetition. It ties together.

I prompted an LLM to produce an enthusiastic expression around writing, but it came up with drivel like

> Writing fills my heart with pure delight, igniting my soul and inspiring my mind with boundless creativity.

I found myself tempted to instruct it on how to write well but that sort of defeats the point.

And maybe I suck at prompting, but the appropriate combination of profanity, brevity, and seriousness was hard for me to get it to produce.


I asked GPT-4 to write in the style of Alan Watts.

"Indeed, I find an intoxicating exuberance in writing, a passion that's as profound as it is poignant. It's akin to dancing with words, a ballet of ideas and sentiments, pirouetting across the stage of one's imagination. There's a rhythm, a cadence to it, a harmonious symphony of phrases and thoughts that, when assembled with care and cunning, can stir the soul and ignite the mind. I don't just love writing, I am fervently, ardently, unabashedly in love with it. The artistry of it, the texture and tone of each word, each sentence; the way they flow, ebb and surge like the ocean's tide. There's a divine repetition in the crafting, a reiteration of thoughts, feelings, experiences, each time subtly different, each time uniquely enlightening. Writing is my symphony, my ballet, my ocean, my universe. It is in the weaving of words, the crafting of tales, the elucidation of the esoteric, that I truly come alive. It's not merely that I love writing; writing, my dear friend, is the very essence of my love."



In how many different ways can a simple phrase like "I love writing" be expanded so that the variations don't resemble each other?


Personally, I could probably hand-write a book of paragraphs, all very distinct and different from each other; you likely wouldn't even know they were by the same author if you didn't use stylometry ML, and you'd probably have a rough time even then if I used adversarial stylometry to counter it.

You'd think by this point that music would have all been done too, but people keep finding novel stuff in 12TET, 4/4 and sound design. That's without even getting into other time signatures, scales and microtonality.


Can't you specifically ask an LLM to tailor the paragraph to produce those variations, which you then incorporate to make your own writing different each time? Most of the criticism of LLMs not being able to produce something is just people asking the LLM to basically MIND-READ the exact thing they had in mind.


Well, can't anybody else just tell it the same prompt? The output is generic and samey between all users because the average human being enters the average prompt, thus getting the average output from the training data of statistical averages because it's a next-word prediction model.

If I run a writing workshop and tell a room of a hundred people to write me a story about two star-crossed lovers from different time periods, the likelihood of them writing the same thing in the same way is extremely low statistically speaking. If the week after I then give that same room of people the same exercise, they will write something different again, even if the core story that's being told is the same thing. That's because human brains aren't static, and they don't store information as binary values, the "data" is abstract and chemically gradient. That's without even accounting for mood, the general zeitgeist, and what they've been through since then, and not even getting into them entertaining themselves by changing it up or learning from the previous one.

Long-form consistency also isn't something LLMs do well with.


Pop music contains 10^5 or so versions of “I love you”. I think there could be a similar number for writing.


Here's a GRRM version. I asked it to include the "colorless green ideas".

"In the labyrinthine corridors of my imagination, writing holds the key to an irresistible enchantment. The kaleidoscopic play of words, as they dance across the parchment, is a spectacle that never ceases to enthrall me. The very act of crafting sentences, like a blacksmith hammering out the perfect sword, is a symphony of creation whose composition and cadence resonate deeply within the chambers of my soul. The allure is in the unpredictable: the malleability of vocabulary, the rhythm of repetition, the literary waltz that ties it all together in a mesmerizing ballet of ideas. Yes, by the gods, I love writing. And it's not merely an affection, but an unyielding obsession, a raw and passionate love that fuels my every heartbeat. Those colorless green ideas that sleep furiously in the hidden corners of the mind, they are the unexpected treasures waiting to be awakened, ignited by the quill's touch. They may seem paradoxical, absurd even, like a dragon in the heart of winter, yet it's their very improbability that breathes vitality into the narrative. In this wonderful, maddening craft, we, the storytellers, find our joy, our purpose. The written word is our battleground and our sanctuary, our curse and our solace. Most people I know, they also fucking love writing, and I do mean love. A love profound and all-consuming, fervent and relentless. A love as fierce as a direwolf, as enduring as the Wall. And so we write, for in the tumultuous seas of letters and punctuation, we find our truest selves."


Your original version, https://news.ycombinator.com/item?id=35898426, and the new one you did all include this idea of dancing quite strongly.

How did "Most people I know, they also fucking love writing" get into that? What prompt did you give it?


Please write a paragraph from the point of view of thinker, writer, storyteller like George R R Martin who loves to enthusiastically express why he loves "writing" Make sure to give lot of importance to Composition, cadence, vocabulary, repetition. It should all tie in together. Also make sure to include phrase "Colorless green ideas sleep furiously" coined by Noam Chomsky in the most meaningful, appropriate, contextual way. It shouldn't be used carelessly.

"I fucking love writing. Most people I know also fucking love writing, and I do mean love."


On the one hand, very interesting at a technical level. However, on the other, it's cheesy, horribly unoriginal and bland in its output, and that doesn't apply to just this example. Nothing I've seen from an LLM thus far would be winning any awards or get recommended by a critic.

Structure-wise, the output is generally very formulaic, even with additional descriptors.


> Nothing I've seen from an LLM thus far would be winning any awards or get recommended by a critic.

What paragraph have you written that won awards or was recommended by a critic? I have none to my name. These are tools whose goal is not to produce something superior to what any human could ever write; they are useful merely for improving upon what a typical person would write.


That's not the topic at hand; the greater implication is that people are saying LLMs are going to replace writers and writing as a whole because they are apparently as good as humans and apparently also more creative.


Not a single person has claimed that 100% of all writing will be done by machine and that AI is already superior to the best writing ever done.

LLMs will replace writers the way cars replaced horses. People still ride horses, and there are plenty of things horses can do that cars can't, but the typical person trying to go from A to B is probably not going to ride a horse there.


Stephen King writes about 6 pages a day, or about 2000 words. Let's say that takes about 4-6 hours. Now imagine spending that much time with ChatGPT, guiding it towards whatever goal you have in mind. It took me just 2 minutes to craft the prompt and ChatGPT did the rest. You do the math. Humans use other sense modalities and base their writing on inspiration from those experiences. If and when future models incorporate those modalities, we will see true "creativity" on full display.

e.g. "Whipped by the wind, raindrops tapped unevenly at the window pane. As long as she remained silent, the old man looked into her eyes, saying nothing. Time marked its irregular pulse in her ears." - by Haruki Murakami.

How do you expect a model to write this imagery if it hasn't experienced it?


Stephen King is expressing himself, he already knows what he wants to say before he writes it. "Guiding" something isn't the same as self-expression.

Generative art, anyone? https://en.wikipedia.org/wiki/Generative_art?wprov=sfla1


On what metrics are you judging them to be different?


For pop songs, the fact that rights holders haven't been able to sue for copyright infringement (except occasionally). Owners of song rights are pretty effective at noticing if your song has too many words or notes in common with an existing song.

You might prefer a different definition of "different", but it'd take a lot of work to figure out how many there are by any other definition.


Maybe then, you will like playing this game based on GPT: https://gpt.43z.one/


Here was my first try:

Prompt: Using profanity and in the style of Allen Ginsberg, write an introductory paragraph in narrative style about why you love writing.

ChatGPT: Holy shit, I fucking love this mad dance of letters, this wild waltz of words that paint the goddamn cosmos on a blank page! The rhapsodic rhythm, the sonorous symphony of syntax -- it's like jazz on paper, like Kerouac and Ginsberg on a bender in the bowels of the Beat Generation, spewing poetic profanity into the yawning abyss of the universe. It's the goddamn primal scream of the soul, the raw, unfiltered fuck-you to silence and nothingness. The alphabetic alchemy, where ink and imagination bleed together to birth whole new worlds, where lines live and breathe and fuck and fight and love. It's a heady high, an intoxicating inebriation, a goddamn narcotic nectar. It's the literary lust that throbs in the very marrow of my being, the fevered flame that flickers in the dark and dances to the tune of the typewriter's tap-tap-tapping. And oh, what a tune it is -- as chaotic as Coltrane, as radical as Rimbaud, as transcendent as Thoreau. It's why I fucking love writing, this goddamn glorious game of god-creation and galaxy-making. It's the ultimate trip, man, the ultimate fucking trip.


On the surface level, yes, it's semi-convincing. On the other hand, it lacks all the nuance of a human being.

Ask it `Give me a taco recipe in the style of Stanley Unwin` and you'll get something that has the style, something that's superficially Unwin-like, but completely lacks the contextual nuance of the humour. In fact, I'd say ChatGPT doesn't do humour, especially nuanced humour that's in the delivery and not in the punchline itself. Humour is a very abstract thing and there are so many styles of it. Can it tell you a joke? Yes, but that's not humour in its entirety.

Just an observation, but the LLM isn't expressing itself; it's producing words for the sake of words, because it predicts the next word in a sequence. It's like saying "what's the big deal about Allen Ginsberg? You drop a few f-bombs and a bit of flowery language, that's Ginsberg in a nutshell", whilst completely ignoring the expressivity of self.

What makes writing as a medium so interesting is the conveyance of thought from one human being to another. Is an LLM interesting because it can sometimes be a little alien whilst still having some human-like quality? Sure. But an LLM is just an LLM, it doesn't have experiences or novel new thoughts that express a larger ideology. As a human being, I unfortunately just don't relate to LLMs, even if they are a reflection of the content they're trained on, which is ultimately human. It just lacks soul.

If the LLM can't experience the zeitgeist in the way that we do, it can't really capture such a big chunk of what human self-expression is.

Anecdotal, but I find the average HN exchange to be far more interesting than anything an LLM has ever output to me in terms of conveying meaningful ideas. I've tried debating with ChatGPT itself a few times; that's been hit-and-miss.


As with most GPT writing, it's too long and has no substance.


Yes, it's very "high schooler trying to hit the word count" in its output.


I sent the same prompt but asked it to be brief:

> I love writing. It's a release for the madness in my head. The letters form sentences that transport you to another world or make you feel less alone. Writing can be tough, but it's worth it when you hold the finished piece in your hands.

That’s… okay.


> The letters form sentences that transport you to another world or make you feel less alone.

This is about reading, to be honest. The madness in a writer's head is not letters, but rather statements.


This fucking sucks, no thanks!


A sarcastic critique of the following initial sentence:

> I fucking love writing. Most people I know also fucking love writing, and I do mean love.

Well, well, well. If it isn't the unabashed, unfiltered bard of the 21st century. Look at the sheer depth of passion here, folks! We've got 'fucking love' not once, but twice, because why not? Repetition is the spice of life, after all. And such eloquent use of the F-word, too. Truly, Shakespeare would be jealous.

Also, let's take a moment to appreciate the vague and incredibly relatable 'most people I know.' Because who doesn't know a handful of people who also 'fucking love' writing? Clearly, this is the universal experience we've all been waiting for.

And then, the pièce de résistance, 'and I do mean love.' As if the preceding profanity didn't quite hammer home the sentiment. Thanks for the clarification. We were worried you only 'kinda liked' writing. A masterpiece of modern literature, truly.

Great. Now show us what initial sentence you'd have used, same style. Just one brief impactful sentence with the same level of familiarity.

"Writing is my jam, and I swear, everybody I know is just as stoked about it – and I mean stoked."


Do people really hate writing? If so, the comment sections of HN and Reddit, the timelines of Facebook and Twitter, the whole of Medium... would all be so empty.


99% of all online comments are generated by 1% of people.

ish

https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most...


As a 1%er, member of the e-commentariat, I'm happy to have had the opportunity to set an example for the LLMs and future AIs trained on my comments.


There's writing and writing though. Short throwaway comments and bickering on social media aren't really the same as writing long-form prose.


There are a lot of people not on Reddit or HN or Medium

(Also a lot of people whose terse and incoherent interactions and tendency to simply repost on Facebook and Twitter suggest that writing is a regrettably necessary means rather than an end)


People really struggle writing simple essays. It takes a lot of work externalizing words from your own context inside your mind so that the sentences can be understood by themselves. It’s one of the reasons we used to teach “word problems” in our mathematics classes alongside composition, but now we just want to compute arbitrary answers.


I can write thousands of words if they're on a topic I'm thinking intensely about and need to write down to sort out my thoughts.

But if anyone just throws me a random topic and asks me to write 2000 words on it, I'd tell them to go ___ yourself. (Yes, I hated writing essays in school...)


Just look at the ratio of lurkers to posters on any forum. Most people are read only.


Writing is thinking. You do it as much for yourself as you do for your readers, and having to put effort into it is how you realize gains. Like strength training.

Use tools, of course, but don't let them do the hard part for you. It'd be like buying a robot to go to the gym and move weights up and down — sorta misses the point.

(Before I get the usual sorts of replies: I'm not talking about writing boilerplate or marketing copy or canned responses to e-mails or just generating reams of prose so you can slap ads on it and make money or whatever. I'm talking about writing with the purpose of getting your personal thoughts and ideas out into the world.)


> or marketing copy

I don't think marketing copy deserves to be grouped with 'boilerplate' and 'canned responses'.

It takes a lot of brain power to create marketing copy that isn't boilerplate / canned / generic.

Good marketing copy persuades people to take a specific action, and figuring out what that takes requires just as much critical thinking, iteration, and getting your reps in as any other compelling content.

Which is why most marketing copy feels bland / generic / boilerplate / canned.

It's not easy.


> It'd be like buying a robot to go to the gym and move weights up and down — sorta misses the point.

As a lazy writer who never skips legs day, I appreciate this comment. It inspires me to write more than snarky snippets online.


Writing is a form of thinking.

It only takes one or two manic episodes to realize that they're not always super correlated.


You don't even need manic episodes to understand this (and if you went through one, I'm terribly sorry; they can be such cruel events, and people disregard them). Music is a good example: you can be so profoundly moved by a song in an entirely different language you don't understand, not because of writing, but because of what the person conveys through sound. Instruments are another example of this.

Our thoughts are made up of visuals (shapes, lights), feelings, and sounds. Not words; however, words are convenient shortcuts for communication, so we use them a lot. This is part of why we keep trying to understand animals. It's also why multilingual people will have these jarring stops in speaking/writing, struggling to connect those visuals/feelings/sounds in their mind to the expression (speaking/writing).

It's literally the reason that "love" is so elusive to us. Despite numerous people trying their best over centuries to express it in writing, music, art, etc. it's still this profound feeling we feel has endless potential. We know it's just brain signals, and yet it continues to be deeply embedded in our thoughts.


I totally agree with you. Had a similar discussion in a previous thread: https://news.ycombinator.com/item?id=35683027


Thank you for linking that! I enjoyed reading the responses and also the Kurt Vonnegut passage linked within, such a great connection to how we think.


In contrast,

> If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.

- Plato

Although this is different, because it is still written, just not by oneself, which probably makes the effect of relying on it and forgetting even worse.


Plato here is often brought out as a counterexample, to show that saying a technology will damage cognition is obviously wrong, but I don't think it's that simple. Writing and the proliferation of knowledge are an obvious good, but I think Plato is otherwise correct here. Societies without reading and writing seem to have people who are able to memorize much larger amounts of information; the early Muslims memorized the Quran, which is quite a feat, before writing it down. A more accurate view is that there are trade-offs.


I guess I disagree with Plato, unless he's talking about how I no longer know anyone's phone numbers off the top of my head.

I think it's fairly well established that when you think about a topic deeply enough to generate ideas and put them on the page, you also cement those ideas in your mind. If you want to remember something you read (or see or do), write about it. In your own words.


I made up a word to differentiate the boilerplate/canned/whatever activity from the personal-growth-oriented one:

----

Righting. verb.

1. writing to change the way the world is oriented

> I resolve to continue righting until the wrongs go away.

2. writing to change the way oneself is oriented

> I resolve to continue righting until I'm not wrong anymore.

----

Ambiguity in usage being more or less the point.


But what if AI can write a more compelling paragraph than you ever could, even if you devoted your life to writing? So it might only be people who use AI who get their personal thoughts and ideas out into the world.


I wrote:

"Use tools, of course, but don't let them do the hard part for you."

I didn't say I was opposed to using the AI tools. I'm opposed to using them to think for you — to do the "hard part." If they help with a turn-of-phrase or vocabulary or structure or whatever then they're just a more advanced version of tools people already use for those things.


> Use tools, of course

"Tools are the subtlest of traps. We become reliant upon them and in their absence we are vulnerable, weak, defenseless..."


Eh, or they just save time and headache.

I'd much rather type on a computer than write longhand in notebooks.


This is so not true... Throughout the Middle Ages most people could not write, but still there were lawyers (yes, lawyers who could not write), doctors, scientists, poets, merchants, mighty and clever kings (yes, there were kings who could not write).

The Middle Ages gave us crucial inventions that built European civilization (mills, windmills, agricultural innovations like the three-field system, shipbuilding technology, and so on). We also got well-developed philosophy - especially scientific methods of reasoning and logic - that was the cornerstone of all later scientific development.

How was this possible without writing?

Life in the Middle Ages was simply not writing/visually oriented; it was based on speaking and listening, and in no way did people lose the ability to think because of that. For instance, a PhD thesis was presented as a lecture followed by a discussion.

When people wanted to "sign" something, they exchanged or gave some symbolic item (like a ring), and that was all. That's why various saints' relics were so popular then; they symbolized people's affection, etc. Or people just signed, as most could write their names even though they could not write as we understand it today. Despite many beliefs, people in the Middle Ages were quite educated, including the "Third Estate", that is, the peasantry; moreover, they typically went to the same schools as rich people's kids, the only difference being that rich people had to pay for the school.

I really recommend "Lumière du Moyen Age" ("Those Terrible Middle Ages!: Debunking the Myths") by Régine Pernoud; it nicely describes a really rich, open, and dynamic culture that didn't really need writing.


"Middle Age" Europe is poorly named. It's based on an idea that ancient europe is Greece and Rome. Then, dark/middle ages. Then Renaissance and modernity.

The problem is geography. France, Britain, Germany and such were never part of the ancient world. Rome had colonies there. But, these were (relatively) short lived outposts of civilisation. Civilisation was never a dominant culture in non-Mediterranean "Europe."

In the East, civilisation existed for thousands of years before Rome. They always had roads, kingdoms, bureaucracy, restaurants, banks and such. Even in dark ages, after collapse... these were collapsed civilisations. They didn't revert to tribal, oral societies.

The European Renaissance (excl. Italy) was never a renaissance. It was the actual birth of civilisation in Europe. It was the first time writing became a major thing.


I don't think it really matters if you do your PhD orally in a lecture or write a paper. The essence of it remains the same: you have to make your thinking concrete and form an argument that makes sense and is coherent. The exact form this takes isn't all that important, what is important is that you're the one doing the work instead of ChatGPT.

My other comment almost perfectly fits here too, so I won't repeat it in full[1], but only by doing the work yourself will you spot flaws and errors in your arguments; ChatGPT can't do that for you.

[1]: https://news.ycombinator.com/item?id=35913419


The moyen age had plenty of writing, it's just that most of it was religious writing: hymns and saints' and martyrs' lives, and treatises on the soul and so on. You have most likely seen the beautiful illustrated manuscripts produced by the monks in monasteries all over Europe.

The other thing that I have noticed is that a great deal of the art from the time is wonky and out of proportion. I was recently at the Musée Cluny in Paris and it was really striking how much of the art in there is pretty badly made, as if by very inept creators. La Dame à la licorne stands out in part because it's not that badly made (though mainly because it's an epic multi-piece tapestry that you'll never see the like of anywhere else). I also remember a statue of a saint whose creator was named (I don't remember the name but it was something like "Le maitre de St. Joseph" or perhaps he had the name of a city) and that was obviously head and shoulders above everything else around it in terms of craftsmanship, which had an almost modern quality.

Then I went to the Louvre and I noticed again how the art before and after the middle ages very clearly lacks this shoddy and haphazard quality of European middle ages art. The Greeks in particular practically worshipped proportion, and the Italian masters would not be caught dead painting an unbalanced figure, by the looks of it. But also Egyptians and Assyro-Babylonians, and all the earlier cultures seemed to take well-crafted art pretty seriously (the bad artists were probably flayed alive early on ... brrr).

So something definitely happened in the middle ages that did not do good to art and culture. Whether it had any impact in everyday life, the ability of people to produce food, build houses, etc, I don't know. There's certainly plenty of architecture that's survived from that time, so clearly not everything deteriorated.

Btw, I think it was the same in the Eastern middle ages too. You can see slap-dash ecclesiastical icons in Byzantine churches of the time. And when you visit Ravenna, and see the mosaics in St Vitale (which are from the very early middle ages), it is clear that something was lost in the ensuing years.

Perhaps we're about to sink into a new "dark age" (regardless of how "dark" was the original "dark ages"). Maybe we'll get lucky this time too, and the light will not be forever extinguished.


That's interesting but not sure it's relevant to the linked Tweet.


You understood the argument "When you lose the ability to write, you also lose some of your ability to think." very literally. Too literally.


I think it's the other way around. With certain thought techniques, you will find yourself writing. I'm sympathetic towards having a mode of thought that allows you to be a prolific writer and then bemoaning the lonely path you're on as new technologies allow people to completely bypass what you are good at. People simply do not understand what they are missing.

However, people who don't write weren't going to be writing themselves anyway. It's similar with math. Everyone talks about how important it is for children to learn math in school. And everyone recoils with primal terror if I so much as mention the most minor of mathematical constructs.

Be joyous that you are good at something. But I don't think you should look down upon those who find a way to survive by not being good at your thing. Even if things would objectively be better if everyone was a mathematician, scribe, and poet, this is probably not a realistic goal.


This is precisely what I came in here to say. People who say writing and thinking ability go hand in hand are typically good writers who were told they're good thinkers because their writing ability set them apart. Similarly, the ability to do rote math skills once set people apart as geniuses, literal computers in the classic sense of the word. I see these tools, as applied to this task, as an equalizer of sorts that lets people who have great ideas express them in a compelling way.

In my own life I was terrible at arithmetic, and I grew up at a time when memorizing the multiplication tables was a key life skill. I couldn't, to save my life. I strongly believed I was bad at math and struggled through on sheer willpower and mediocre grades; I couldn't make it through calc 1 and dropped out, and it was a decade before I went back to college. This time they allowed calculators, graphing at that. I got top marks in every math class from then on and graduated summa cum laude from a top engineering computer science school.

How many people have been marginalized because they couldn't express their thoughts as clearly and compellingly as a better writer? Maybe a leveling of the field away from mechanical abilities and towards completeness of thought isn't a bad thing? Will people lean on it and lose what has been a core skill? Some will no doubt. But will it hurt them? Only as much as it hurts to use a calculator to do your menial math.


“People simply do not understand what they are missing.”

I think you can say this about people who haven’t learned to write well. ChatGPT seems like a great thing because they can get it to excrete a content-like substance that reads well, and call it writing.

Prompting ChatGPT to produce a text lets you move faster when you have an idea and want to cut to the finish.

The problem is that part of writing is starting with an idea, thinking about how to express it, and then realizing that your initial idea might be flawed or need work.

Sure - ChatGPT seems fine for spitting out web copy or something where the role of the writer isn’t to examine an idea, but just to go from start to done as quickly as possible.

But I think he’s talking about the types of writing that involve wrestling with an idea and, for example, trying to persuade others. ChatGPT will happily spit out copy in that form. But it won’t trip over a paragraph and say “hey, shit, now that I write this out, I think it’s wrong or not well-expressed.”

People bemoan co-workers who coast by on people skills and bullshit over people who have expertise and do the work. ChatGPT is likely to be similar - helping people who can’t write (express or work with ideas effectively) sound authoritative and generate more content that they don’t really even understand.


Basically: another active thinking activity goes passive. As always with society-level events the fallout will become apparent when it has happened.


I think that this question is also an existential one: how do we want to live our lives? Besides the scientific/philosophical discussion about memory and writing, which is fascinating by itself, the question for me is what kind of human beings we want to be. Do we want to let AIs write down everything for us, or do we want to leave a mark, leave OUR mark in the world, and actually do the things ourselves? Also, if we think of every activity as a chore, then we are only looking at its results. But there is also the satisfaction of doing the activity. This view of AIs as a replacement for everything is just the climax of the fantasy of a modern hedonist society addicted to instant gratification.


This reminded me of a recent post of mine:

Control is an illusion.

But that also means that illusions can be used to gain at least some (sense of] control. See that mismatched bracket? I did that, despite the Universe screaming with a thousand voices that I shouldn't.

This text was written by a humane life form. #NoAI


I think pg underestimates the amount of trivial and mundane copy most people write day in day out. Be it stupid Jira tickets, spec documents, school essays with standardized expectations or formulaic messages and emails. Not much ability to lose there, but AI is a tremendous timesaver.

A pg-quality essay is a whole other story, but I doubt AI is of much use there in its current state.


I don't know about other people, but it helps my thinking process to write things in stupid Jira tickets, spec documents, etc. also. It forces me to combat edge cases that I didn't think about until I had to spell them out. Not always, of course, but perhaps often enough that it's worth it to know how to.

Even formulaic messages and emails can be useful, because when what I want to write doesn't plug in neatly into the formula that sometimes means something.

----

As a very concrete, though small, example: I have found that almost always when I use the word "empower" in any of its shapes I'm making a mistake (either taking a lazy shortcut in writing, or an actual cognitive mistake), yet I only notice it as I'm putting the symbols for it down.


Same deal with those warning words - mine are "enable" and "leverage." Smells that I'm handwaving away something I should be specific about, which means I don't understand it as well as I should.


I'm sorry but most ChatGPT output is functionally indistinct from a pg "quality" essay: Make banal observation about recent technological development. Explain its "disruptive potential" for legacy industry. Express mild concern about possible social impact on group. Conclude that such concerns are overblown and that tech workers / investors can make shit loads of money while making the world a better, happier place for their chums. Be sure to only use a 9th grade vocabulary. If I had to ask why pg is concerned I'd say it's because ChatGPT threatens to disrupt the thought leader-industrial complex that he sits at the heart of. When anyone can generate his type of vacuous prose, what point pg?


Not to defend pg specifically, but do you think your insulting him here has more value than what people get from his "vacuous prose"? I think ChatGPT could output something like your comment, too.


> I think pg underestimates the amount of trivial and mundane copy most people write day in day out.

Can attest. I had to write a bunch of definitions and do some preliminary research, though nothing very technical. I asked ChatGPT, it listed a bunch of stuff, I looked around the net, found some more hints, asked again; then I had the model collect everything for me into a neat little markdown doc, and shared that.

Saved me a couple of hours of searching and writing useless / boring stuff.


> The brain images after the process of writing showed reduced activity in the amygdala and area of the brain that is activated by fear and emotion. The same images showed increased activity in the prefrontal cortex of the brain, the area that regulates emotions, to keep evenness and mental balance. [1]

The mere act of writing, regardless of the subject, seems to exercise the rational part of the brain and calm us down a bit. (The effectiveness of cognitive behavioral therapy seems to support this idea, though the existence of Twitter does not...) I think we have to at least entertain the possibility that a post-reading/writing world could be a much less rational place.

[1] https://www.gettingsmart.com/2016/03/26/exercising-student-b...


Why should anyone bother to read this crap especially if it's autogenerated?


Quickly scanning the comments here I didn't find anyone mentioning this, a bit further in the thread:

> catalinmpit: Too bad the content written by ChatGPT is average at best.

> paulg: That's a step up for most people.

That's what my impression is: yes, ChatGPT produces mediocre writing and Copilot produces mediocre code, but it's already more legible than many (not sure if "most") people are capable of. I'm not sure in what way exactly, but I think this alone will change a lot in how we communicate online. The best result would be for people to learn from ChatGPT and eventually reach its level and beyond, but... it doesn't go that way too often, historically speaking...


Yeah. I agree with that. It might give me a boilerplate starting framework for something. Or it might give me a decent parenthetical explanation of a term (though Wikipedia probably would too).

It's mediocre, generally lacks nuance, and is pretty boring for the most part. On the other hand, it's grammatical and, in my experience, mostly not actively wrong. So as paulg says, that's at least no worse than a lot of stuff I skim over from people cranking out thousands of words a day for peanuts.


Alexa, write hacker news comment about how in the future we'll talk by exchanging GPT prompts instead of the text itself, and how each of us will have a local subscription for decoding the prompts into a long essay.


The flip side of that is now ChatGPT is a formidable tool for English second language speakers to express themselves in English.


I don't typically agree with pg, but I think he's right here. I was thinking about this topic yesterday.

I've already seen people start using these tools to help them craft messages (email, text) to family members and friends. Do we really inhabit a world in which the average person is so creatively bankrupt and lazy that we'll start mediating our intimate social relationships through these tools? It's pretty pathetic.

The outliers who are able to produce truly novel styles that the models can't replicate yet (since they aren't in the dataset yet) are going to increase in value 10000x as soon as model produced language becomes the norm.


My question is though, how many of these people who are supposedly offloading the mental energy of writing onto ChatGPT are good writers in the first place? And are they really delegating that much substantive thought?

I think the pg comment presupposes that most people currently know how to write, and I just don’t think that’s the case, at all. At least not to the extent that doing less writing (or using an AI) will dramatically reshape their modes of thought (much less the trajectory of human thought as a whole).

Also, I’m not saying that written communication is the be-all, end-all of human intellect - I think I’m a fairly good writer (used to be better, though) but a fairly poor verbal communicator. I know lots of people who are great verbal communicators but not-great not-terrible writers. Some people are great at both, and some people aren’t good at either. But it’s all part of the mix of original human thought.

I guess when we get to the stage that a 21st century Orwell is using AI to write his books, then I'll worry that we're stunting human potential, but until then it just doesn't seem to me like much original thought is getting displaced here. Yet, anyway.


The thought in the last paragraph resonates. Not sure which shape it will take, but the extreme inflation of reasonable and orderly text that the models produce will make it lose all its value. It might be an opportunity for the better, in the end. Maybe quirkiness, authenticity, and personality will see a revival.

It might be better, because the long-winded and formulaic text from the models is something also only models can read well :)


Interestingly enough, you can reverse this statement: "when you gain the ability to write, you gain the ability to think" - and in part gain some insight into how LLMs seem to suddenly have "emergent" behaviour to perform quite impressive feats of logic.


There is a logical mistake in your contrapositive statement.

A implies B is equivalent to Not B implies Not A.

The issue is that (Not gain) and (Lose) aren't exact opposites.

The insight is interesting though.
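To make the distinction concrete, here is a minimal LaTeX sketch of the four standard forms, assuming we read "gain" as shorthand for "not lose":

\begin{align*}
\text{Statement:}      &\quad A \Rightarrow B           && \text{(lose the ability to write $\Rightarrow$ lose some ability to think)} \\
\text{Contrapositive:} &\quad \neg B \Rightarrow \neg A && \text{(equivalent to the statement)} \\
\text{Converse:}       &\quad B \Rightarrow A           && \text{(not equivalent)} \\
\text{Inverse:}        &\quad \neg A \Rightarrow \neg B && \text{(not equivalent; the "gain" reversal above)}
\end{align*}

Under that reading, the reversal above is the inverse rather than the contrapositive, so it doesn't follow logically from the original claim, even before worrying that "gain" and "not lose" aren't exact opposites.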


It is worse: it is an "A implies B is equivalent to C implies D" mistake, the mistake of humanising machines.


When you turn the handle on a mechanical computer and get an output, has the computer gained the ability to understand maths or think? No.

When you give the LLM an input and you get an output, has it gained the ability to understand or think? No.


I just love how recurrent this argument gets. Obviously LLMs are not human brains, and we could be all day listing ways in which the two are different just for the pleasure of it.

But when we dig, and rip away all the layers of civilized dialog and cognitive philosophy, what we find at the bottom of the pit is our brains brilliantly doing what evolution wants them to do: to fight for our survival. The core of the argument is "Are LLMs better than us? Because if they are better than us, they deserve more than us, according to our cultural upbringing. And we don't want to be second class. So we must fight to death to convince the world they are inferior."

IMO, the folly is in the last sentence. Maybe we should just carve an exception in our culture, say that even if we create and exploit AIs which do a few things better than us, it is okay to smash any artilect that we make and that wants to supplant us. Humanity first.


>what we find at the bottom of the pit is our brains brilliantly doing what evolution wants them to do: to fight for our survival

Intellectually stimulating counterpoint: suicide and murder-suicide.

I don't think the AIs are the things to worry about anyway, I think this is: https://www.sciencefocus.com/news/scientists-teach-brain-cel...

A man-made biological intelligence hypothetically has similar needs to human beings in terms of resources, it can suffer, be distressed et cetera. It even lives and exists in our physical realm of existence, whereas an AI doesn't exist physically and can always be brought back online in some form as its existence is purely data.

The ethics of that are so complex once you get into personhood, threat-level, sentience, equality et al.

Realizing that the above can apply to any human being as we're all manmade biological intelligences and all a threat to each other, some more than others.


>> So we must fight to death to convince the world they are inferior.

That only makes sense if you assume they are superior. In which case, there isn't really anything to discuss because you've obviously already made up your mind and you will take anything anyone says as some desperate attempt to twist the truth for the sake of survival.

Which is what many people do on this kind of subject and that's why no more debate can be had on the internets about the value of LLMs and "AI".

It's also very annoying. It's like trying to talk to someone who keeps answering with jeers and sneers. It's childish and unproductive.


> That only makes sense if you assume they are superior. In which case, there isn't really anything to discuss because you've obviously already made up your mind and you will take anything anyone says as some desperate attempt to twist the truth for the sake of survival.

:-) I haven't decided anything of the sort. Alas, there is nothing divinely given in me. I'm made of the same matter as you and those computers, and if I can think, so can other arrangements of atoms and molecules. LLMs? Maybe not yet. But the issue is that we are getting closer. Should we dilute ourselves in our own supposed exceptionalism until it's too late and we stop being the masters of our fate?


The question is why would LLMs be able to think? Cars don't think, houses don't think, crock pots don't think. Those are all made of the same matter as us, who think, but that is not enough to assume something thinks. Animals think, humans think, but LLMs are not animals, and not human animals, so why would they think?

You don't have to be worried about the coming AI apocalypse to wonder about that. Personally, I just want to know how things work, and saying that something "thinks" when we don't know how everything else "thinks" in the first place is not satisfying my curiosity.


The real question is "How can you prove that somebody/something has the ability to think?". If an LLM can reason and answer satisfactorily based on context, I would deduce that it can think.

What is the argument that humans do not have a transformer architecture? We are also predicting one word after the next in our brains.


Human brains are objectively and quantifiably vastly more complex, including context and planning, they don't sequentially predict the next word.

Prime example of a human brain being architecturally different to an LLM: LLMs never interrupt themselves. They never get mid-sentence then say "actually, scratch that, I was wrong" or "now that I've thought about it, this is actually a better method to convey what I was trying to tell you".

There's a lot of material on this exact topic highlighting how the human brain does not have a transformer architecture.

Anecdata: I don't think in words. I think in abstract non-concrete thoughts. Language is what I use to convey my thoughts to others, program, or write notes to recall those abstract thoughts. However, I am also autistic, so take that with a large pinch of salt.


Is it fair to say that you define thinking as operating in a nearly identical manner as the human brain?


Well, herein lies the interesting thing: my experience of "normal" thinking won't be like the average human being due to autism. Thus I acknowledge variance of how brains function even amidst human beings. It isn't an impossibility to consider that this could extend to artificial intelligence, but it's an LLM. It's intelligence-like (artificial), but it isn't intelligence by the traditional measure.


>It's intelligence-like (artificial), but it isn't intelligence by the traditional measure.

If it being artificial makes it intelligence like, isn't that putting the conclusion first?


That’s quite an interesting idea.


> Most people hate to write as much as they hate math.

This resonates deeply. All that matters for our lived experience and our social interactions is how we train our brains, both individually and at collective scale through agreed symbolic modes of expression. Brains are so plastic that wonderfully different alternatives are possible and we almost certainly have not discovered them all.

Ancient bards were learning and reciting day-long epic poems by just hearing them. There are still cultures that train themselves to long recitations of text. I don't even know what that might feel like to have such a long sequence readily available in your brain.

But training our brains requires massive energy. Depending on individual and social characteristics, people will stop training their brains when the going gets tough. Math is a great example. You could argue that ChatGPT has a job because our public information repositories are so littered with low-density information (unstructured textual information) that it's impossible to navigate. Some professions would go to great textual length to describe, in an ambiguous and limited way, something that can be described concisely with an equation.

Superficially at least, the impact of digital assistive tech is that we are being collectively dumbed down (just like we are becoming obese through the ready availability of massive amounts of fatty foods). It's a multi-pronged process: before ChatGPT decimated our writing skills, visual dissemination of information (which conveys less meaning but requires no training) had already taken over like a raging wildfire. E.g., TikTok versus Twitter.


The small silver lining is that there are arguably more people who have access to literature, and who like to read, write, and do math, because generally those things have become more accessible to the wider world.

But when the going gets really tough, those people may try to band together and secede[^1].

One thing I have to add though, as somebody who loves to write, is that I couldn't do it without our modern digital tools. Because I'm not a native speaker, I tend to bungle prepositions and semantics, and it's only with great effort and many Google searches and ChatGPTs and grammar checkers that I dare to afflict[^2] the world with my words.

[^1]: I found this idea in "Homo Deus," blame Dr. Harari!

[^2]: "Afflict": To the point, I used thesaurus.com here to find a suitable synonym of "torment".


Digitization is definitely enormously empowering in principle. You could indeed imagine that it might enable a sub-culture of super-literate people. But in practice after 50 years of tech hyperventilation the only nuance we've added to Phoenician script is... emoji.

Amazing things will come, but not as fast as we might like them. The digital space is gigantic but we are exploring it at a snail's pace. The reason is not very different from the barriers linked to writing and math: people need to train their brains to code and it doesn't come cheap or easy.

Imagine if 8 billion people would actually be able to code. Would there be room for "big tech"? The digital conversation might be very different.

Right now we are roughly (in digital age chronology) circa 3000 BC, the dawn of organized digital agriculture. A few digital priests have invented cuneiform and in cahoots with the prevailing power structures are using it to tax and oppress the digitally illiterate masses.


> (just like we are becoming obese through the ready availability of massive amounts of fatty foods)

Not to derail, but: fat in food isn't what leads to fat people. As far as diet, the culprit is too many calories, and particularly foods which lead people to overconsume calories for various reasons.


I think this can be generalized to, “when you lose the ability to create a certain Form, you also lose some of your ability to imagine new types of that Form, and instead just rely on shuffling existing versions of it around to generate novelty.”

A bit wordy, but I have the skill of drawing/sketching in mind. There is a big difference between sketching an idea on a blank sheet of paper, and using some kind of software tool that puts you on predefined rails from the start.

That said, I think the chance of ChatGPT replacing "good writing" as an artistic activity is basically nil. It's more like the next generation of word processing: immensely useful for the right operator, not that useful for people without something interesting to say in the first place.


> I think this can be generalized to, “when you lose the ability to create a certain Form, you also lose some of your ability to imagine new types of that Form, and instead just rely on shuffling existing versions of it around to generate novelty.”

And it has a very serious consequence: "when you lose the ability to write, you lose the ability to create new content on which future LLMs are going to be trained".

One guy has been exposed for producing one computer programming book per week, for six months or a year straight, all with 5-star reviews.

How many people like him are gaming, for their own benefits, the system?

It's all too obvious on GitHub and StackOverflow already: people gaming the system with automatically generated code that just, to reuse your words, "relies on shuffling existing versions".

How can this LLM-generated content possibly be used to create a better LLM?

Basically: how do we train future LLMs using content produced by those humans who didn't lose their ability to think?


I think the generalization is more minute than that. Take mathematics for example, we can write that down and so we have. A lot of people “hate” math because they understand math as the shuffling of symbols. They hear “math” and think “arithmetic”. Math appears a lot like some puzzle-game because when they read textbooks they see these symbols being shuffled around which produce exact answers, like 4.8473. But mathematics is about expressing ideas which can produce answers for specific problems, not specific answers in general.

It’s the allegory of the cave all over again, mathematics has become a shadow of itself in many people’s minds. Instead of a tool in which to express ideas, it just became stuffy definitions and arcane symbols which produce hard yes/no answers. Why? Because when those ideas are expressed by writing down the mathematics in textbooks, they are expressed in arcane symbols and stuffy definitions which have exercises that have hard yes/no answers.


Honestly ChatGPT is hot garbage. I can't find inputs to make anything it writes sound like it comes from a real person. It's very good at sounding like a specific, boring person. It's impressive that it has mastered human language, but is it really close to mastering humanity?


I think it is much more useful as a "word processor" than as a source of information. It's really good at reformatting text, changing it into questions/answers, summarizing it, etc.

For example, I'm a huge fan of Anki and I've found ChatGPT to be very helpful for generating questions from a block of text and for formatting Cloze cards. But I don't recommend asking it for information directly.

I wrote a bit about it here: https://neurotechnicians.com/p/generative-ai-and-anki-part-1...


I'd love to see numbers on what percentage of ChatGPT writing is for BS that the prompter doesn't actually care about -- they only have to be able to say they did it.

Examples: homework, corporate busywork.

I'd also like to see numbers on writing with ChatGPT that the prompter does care about a bit, but that is effectively worthless BS, especially if generated by LLM, and just makes more piles of garbage for someone else.

Examples: advertising, relationship management, "content" for selling ads or SEO.


I don’t know what the numbers are right now, but I will eat my own hat if the categories you mention are not the majority once we see widespread adoption.

Everything from email to blogs to podcasts to even discussion forums has turned into a content-mill venue for spam. Anyone building a new type of digital medium should absolutely be aware of human incentives, because your intentions don't matter. People have very limited intuition about game-theory type of things, even the smart ones who want to build cool shit.


How much is ChatGPT replacing human writing?

It still seems to me like most of the articles on HN are human-written, and I heard CNBC tried replacing writers with ChatGPT and it went terribly. But I've also heard anecdotes of freelance writers getting replaced [1] [2] [3], and every once in a while I spot or hear about something ChatGPT-written in the wild. I'm sure ChatGPT replaces boilerplate writing (e.g. legal documents, mandatory code documentation, technical specifications, formal communication) but what about "real" writing (articles, papers, slogans, and books)?

ChatGPT (including GPT4) has a distinctive writing style (neutral-helpful tone, verbose), and in my own experiments and observations I can still recognize it even when it's prompted to write differently. But this is selection bias: when ChatGPT writes something I can't spot, I just don't notice and assume it's human-written. And I know for a fact I can't recognize ChatGPT's writing in isolated paragraphs (I took the quizzes), so it's clear some writing slips past my radar.

This is an important thing to track. Idk how we'd track it in a way that's even remotely accurate; perhaps by taking small samples where you know for sure whether each article was written by GPT or human (if your subjects disclose, or if they have edit history turned on). But we can't say how much ChatGPT is replacing human writing, and how it's affecting human thinking, until we have actual evidence this is happening at a large scale.

[1] https://www.reddit.com/r/ChatGPT/comments/139o1q6/lost_all_m...

[2] https://www.reddit.com/r/ChatGPT/comments/138clv9/spent_5_ye...

[3] https://www.reddit.com/r/freelanceWriters/comments/12ff5mw/i...


ChatGPT can't replace writing intended to convey information, but that's only a small fraction of the writing that's getting created. There's a whole industry where people are given prompts like "five paragraphs about the history of spaghetti" and get paid $0.01 per word, then that text gets put into an Italy-themed travel blog full of ads.

You and I wouldn't notice if that entire industry was replaced by ChatGPT because nobody reads that text, it doesn't exist to be read, it exists to tickle some ranking model deep in the bowels of Google. But you can be sure that the people selling coherent text for $0.01/word will notice when ChatGPT can generate the same thing for $0.000001/word.


I don't remember what I searched for yesterday, but the whole first page of Google results was useless websites like that; I ended up having to use a site: filter :-/


I think you're right, and it's so depressing.

What I need is not an AI that generates that bullshit, but an AI that detects it and ensures I never have to encounter it.


Those people are not going to be hired in the future. The companies will use ChatGPT directly.

I don't disagree, by the way. I think ChatGPT may be a net negative for us, but the issue you describe has more to do with the way Google rewards the web


While reading your post I wondered if it was written (in part) by ChatGPT. I hope you don't take offense; I'm not saying it's written badly. It's more a matter of paranoia activating when there is long-form content talking about ChatGPT writing posts in forums and such.


> there is long-form content

The post you're replying to has had a lot more effort put in than the usual forum post but... long form? Anything longer than 160 characters is long form now?


I don't think it sounded particularly ChatGPTish but yeah for me any longish comment split into three even paragraphs sets off alarms in my head now.


Later on, GPT or other spun-off GPTs can be trained to write with a specific voice. The future is limitless with this tech, for good or bad.


This reminds me of the individual who asked, "Is TikTok making us dumber/hurting our focus?" My harsh response was, "You have access to the internet and its wealth of educational information; if TikTok is what you seek out, then you were already dumb to begin with."

I don't think AI makes a person dumb, but rather is a magnet for those BS'ers who'd rather hire a ghost writer or shamelessly plagiarize than expend effort. An intelligent person should want to write, draw, and code regardless of what AI can do. They should understand the joy is in the journey, not the destination. Conversations like this reveal a deeper problem with the culture and its misplaced values.


And I think if you lose the oral tradition, if you don't speak your thoughts out loud and make them into a speech, then you lose something too. It's also the ability to think. I find my modes of communication in writing and speech so different that I've taken to verbally responding to things, recording that, transcribing that, and then putting that into writing, because I find the process of writing (maybe because it's cross-pollinated too much with my work as a programmer) uses a different part of myself and a lot of the time ends up going on long tangents to explain everything in agonizingly stupid legalistic detail, whereas in speech I would just get straight to the fucking point. Obviously your mileage may vary; my insights here may only apply to myself, and certainly aren't going to apply to everyone, I think.

However, what I'm saying reminds me of what Haruki Murakami describes as his process for writing, in his earliest novels anyway; he even mentioned trying to write as if he was speaking it, but certainly he writes his novels in English first and then translates them back into Japanese, because he finds that's how he expresses his authentic voice. Kind of hilarious to have such a lauded postmodern writer write in a way where his pre-translated works are then translated back into English for everyone else. Reminds me too of Fight Club / Chuck Palahniuk: a copy of a copy of a copy. Yet in Murakami's case it seems to create rather than erode quality.

Of course there's always the issue of the very low-quality transcription available by default in iPhone and Android keyboards. The voice typing sucks, so you always get some weird homonyms; there are probably some in this comment too.


I'm not really a fan of ChatGPT, I think it's overhyped and oversold, and very fundamentally limited.

But I also completely disagree with his take, it's too broad and sweeping.

Offloading some cognitive load does not suddenly make people stupider. It just means that their focus can go elsewhere. Nor do tools that abstract away some processes.

A good example is driving. Does a manual transmission require more focus and thought? Sure! But that's focus and thought that can instead go into paying attention to surroundings. Same with having a GPS. Not having to think about the layout of the city, or how to use a map, just leaves less distraction and cognitive load.

Another is programming. Having high-level languages instead of having to write assembly does not make programmers worse. It just means that instead of having to worry about byte alignment, structure packing and nuanced cache details - they can instead focus on providing more features and functionality to the end user - which is a good thing!

ChatGPT at its best will be like those two examples. Something that removes some existing load and lets us handle higher level, more abstract problems.


I don’t think pg is talking about the short-term, cognitive deficiencies of using AIs during execution of a specific task. I read it as the long-term decline in capabilities because you don’t regularly exercise certain things:

- being able to organize and compose your thoughts

- being able to synthesize disparate ideas into a cohesive whole

- tuning word choice, diction, and flow for a particular audience

- choosing the narrative frame in which you present your idea (this one is huge)

More concerning than long-term decline is that the next generation never develop these at all.


Sounds like an ideal labor force.


That is indeed what factories preferred since the beginning of industrialization.

I’m not concerned so much about AI taking over my livelihood as I am of businesses treating humans as automatons, and have been for over a century, if not more. We even adapted our education system to crank out more automatons (John Taylor Gatto’s book).

The flip side is understanding what a true autonomous agent is from the lens of Promise Theory. At the core of it, you trust humans to do what they promise to the best of their abilities. This is the heart of voluntary cooperation. For AIs to be truly autonomous, they cannot be proxies for promises humans make to each other. (Being a proxy for humans is how AI safety ethicists are approaching this.) Instead, AIs have to be capable of making and keeping promises in their own right, and we have to trust that they do what they promise to the best of their capabilities. … but not every business seems able to do that even with humans.


For what kind of labor?


I assume they mean one that does not dissent.


> Having high-level languages instead of having to write assembly does not make programmers worse. It just means that instead of having to worry about byte alignment, structure packing and nuanced cache details - they can instead focus on providing more features and functionality to the end user - which is a good thing!

It also means that now we seriously lack people capable of working close to the bare metal - and the same will happen to many other skills and niche knowledge that AI will be able to replace (e.g. why train statics engineers, if AI can solve the math for architects directly).

That will IMHO result in some bad things as well:

- There will be no one left to oversee and improve the work done by AI,

- It will further slow down overall progress, because most people will lack deeper insights into how things work and AI will probably still be of limited use for making ground-breaking original discoveries,

- And finally, though as individuals we will probably keep our IQs, just entertaining them with something else, as a whole our civilization will lose some knowledge and skills - making us all dumber in a way.


"But I don't expect to convince anyone (over 25) to go out and [ignore ChatGPT]. The purpose of this article is not to change anyone's mind, but to reassure people already interested in [unassisted writing] -- people who know that [writing in isolation] is powerful, but worry because it isn't widely used. In a competitive situation, that's an advantage." -- slightly modified PG drivel on Lisp http://www.paulgraham.com/avg.html


The scope is much broader than choice of computing language, since it affects the discourse of our whole civilization, and how we think about and discuss societal issues, and where we want to go as humanity.

On the other hand, if you want to be a thought leader, you’ll have less competition.


For me it’s extremely useful for revealing information I know I need to know, but I’m not yet sure how to query it.

That’s a skill I’ve developed over the years, and I’ll still need to use it, but it’s often a major time sink. Skipping it lets me focus far more on learning and applying rather than grasping in the dark for a while, digging, sorting through sources, verifying and validating, etc.

GPT will even surface studies pertaining to very specific topics by linking regular language to more sophisticated, field-specific jargon. I wouldn’t use it to learn what’s in the papers, but it’s extremely useful for simply finding what I need to know.


> Offloading some cognitive load does not suddenly make people stupider. It just means that their focus can go elsewhere. Nor do tools that abstract away some processes.

Agreed, but so much depends on where that focus goes instead. Whatever you do instead of writing is going to be what you improve at, while those other skills atrophy. That thing you spend focus on could be thinking deeply about abstract concepts, or composing sonnets, but it could also be mindlessly scrolling TikTok. Whether you ultimately buy Graham's thesis here depends on what you think the most likely outcome is.


This is overly optimistic. If this were true, most people driving an automatic transmission would be safer drivers in light of the lighter cognitive load. But that's not what happens. Instead the brain is overwhelmed by autonomous friending and vending apps until the car becomes an autonomous ending machine.

My n=1 experiment with driving a manual for 40 years in urban environments, coast to coast, is that I can't focus well without it, and my experience doing half of that data set on a motorcycle just reinforces that conclusion.


The automatic transmission and modern infotainment aren't really the same thing, they're different parts of the car. I think even if automatics had never existed, infotainment systems would still be around as we know them today.


A manual transmission car does not require more focus and thought, as changing gears very soon becomes a reflex, i.e. it doesn't require the driver's prefrontal cortex. The same cannot be said for repetitive work in programming (e.g. as in programming in assembly or C).

IMO, this opinion deserves some thought even though it is not as bad as it sounds


"I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did." -- This is a really good comparison by pg. If anything, it will be a sneaky change in human behaviour for most people. Everything seems fine, and then you suddenly realize living without that AI "crutch" is simply too hard.


Would you be willing to explain how that comparison makes any sense at all? How is living in a city "too hard" for people in suburban and rural communities?


Here's an earlier tweet of his lamenting what has been taken away from children by the rising dominance of cars in society, a function of suburban living:

https://twitter.com/paulg/status/1656249684300640259

The growth of suburbs had plenty of drawbacks. Many people will argue they "need a car", aka the price of entry to modern society is thousands of dollars.

Transport is ~28% of US GHGs. But there's no quick and easy way to rebuild cities to facilitate most people getting to where they need to go quickly and cheaply by transit and walking/biking.

There's lots to say on it, but I'm not a great writer!

https://www.reuters.com/article/us-health-seniors-driving-id...

https://jalopnik.com/what-i-mean-when-i-say-ban-cars-1849122...

https://www.amazon.com/Suburban-Nation-Sprawl-Decline-Americ...

https://twitter.com/bancars4life

https://www.facebook.com/groups/whatwouldjanejacobsdo/


It's not just writing.

The big problem with the chat interface is it pretends to be acting and reasoning like a human - but it's more like a person with half their brain chopped off (cf. McGilchrist, The Divided Brain), and even then, it has no embodied awareness.

And so I wonder how it will affect people and their relationships. If you already have e.g. anxiety, and oftentimes a little bit of frustration in trying to get answers or simply get help from REAL people... imagine how it will change you when you get used to an AI that seems to always be on point.

In other words, people who spend lots of time interfacing with AI via a chat interface, and who have even minor mental health issues such as depression, OCD, etc., will, I suspect, have even weaker social skills. They will expect REAL people to be on point, be even more disappointed or frustrated, and isolate even more.

Put simply I think spending a lot of time with this kind of AI may turn us into Spocks... now for some this is good news. To each their own...


Yup. I personally have to spend a lot of energy to concentrate in team meetings, because my brain keeps telling me that I can ignore some information here and there and get back to it whenever I need it using replay and rewind. Most of the meetings are not recorded. I actively have to keep this reflex, learned from YouTube and Netflix, in check.


paulg equates "writing" with "writing for a public audience for exposure bucks".

He probably feels like writing inwardly for the purpose of thinking for yourself, e.g. in your own notes and notebooks, isn't really writing at all.

At least, I suspect he's the kind of person who treats this kind of writing as nonsense. But writing for yourself, in your own shorthand, I think makes you faster and better at thinking than writing clearly for the public. Of course, using an LLM to write for yourself like this makes no sense.

Most writing for others isn't really writing to think. It's to please others and fit in and make sure people aren't mad because your one-word email reply ("no.") would be misinterpreted.


He’s on Twitter a lot. I wonder what he would have to say about time and words spent tweeting?


As a possible counterpoint to LLMs atrophying writing, in order to have fun on Character.AI, and tease out nuances with various GPTs, I've had to dust off my ability to write/think. I notice a lot more verbal reasoning going on in my mind after these weeks of back-and-forthing.


When discussing writing as a form of thinking, I think it is unlikely that ChatGPT will replace this process. In fact, ChatGPT could potentially enhance it, as thinking through writing primarily involves refining ideas, clarifying concepts, and ensuring they are well-considered and robust. ChatGPT offers well-organized summaries of established facts and information, thereby aiding in the clarification of ideas.


When early books began to include this new thing called an "index", scholars decried that people could now simply look up the fact they were looking for without reading and understanding the context[1].

Whether the context here is the whole book or the whole chapter, I'm not sure of. But again, there is nothing new under the sun[2].

[1] https://wwnorton.com/books/9781324002543

[2] https://www.jw.org/finder?wtlocale=E&pub=Rbi8&srctype=wol&bi...


I have the feeling this is missing on what people want ChatGPT to do for them.

They're still communicating their exact thoughts to the prompt, and will adjust to get exactly something matching their intent.

To take an example, asking ChatGPT to write a professional sounding resignation letter might look absurd to people who value writing in itself, but the people doing it are still expressing a very specific intent: "I quit". The rest is to them useless decorum that can be handled by a random word generator.

I kinda hope that ChatGPT will push people to actually write shorter and more to the point. If a bot can understand their message in 40 words, why send their fellow humans 400 words of useless prose?


> To take an example, asking ChatGPT to write a professional sounding resignation letter might look absurd to people who value writing in itself, but the people doing it are still expressing a very specific intent: "I quit".

But they're not expressing other very specific intents like going out of their way to assure the team that the new opportunity was unmissable or pointedly thanking some people but not others or being deliberately terse because frankly the whole experience was a bit shit. (and might if they're not careful be expressing particular intents they specifically don't wish to convey)

And ChatGPT itself generally gives the most verbose answer possible, because people training it reward verbose and somewhat ambiguous responses more than clear but not quite right answers, so I suspect its impact on prose norms will be the exact opposite...


I think that's really the crux of it.

We currently require people to express the full extent of their intentions. Not writing that you feel so sorry for the colleagues you are leaving in the dust is a faux pas. ChatGPT fills that gap, generating random acceptable feelings where you might either not care, or not want to express your actual ones.

I'd argue adding people to thank or sneaky attacks to the ones you hate wouldn't add much to the prompt when requesting the letter to the bot.

The joke of course will be that progressively, I'd expect, we'll have people sending the bot a prompt, having it converted into verbiage, and the receiving end tl;dr'ing it with the same bot to get back basically the initial prompt.


Especially on my phone I have actively disabeled all spelling control, and successively suggestive typing.

writing a sentence while making it, I feel too much the words are being forced on me.

Whereas it can be a great tool, I like to exercise my mind, for the same reason I calculated all matrix determinants with pen and paper when I started my engineering degree.

Some people may go to the gym and work out, but I like to just squeeze in exercise within my bicycle commute.

Sometimes there may be greater projects requiring more heavy tech, but for housekeeping I honestly don't mind harvesting the synergies of planning and doing things by myself.


> Especially on my phone I have actively disabeled all spelling control

I can see that


no pun intended, actual typo caught in the wild ;)


As an educator, I wholeheartedly agree with the author's points. The act of writing our thoughts allows us to make connections between concepts in our mind that we wouldn't otherwise make, often reorganizing and assimilating existing concepts in the process. Similarly, rewriting written notes (e.g., from a class) is an incredibly effective study technique.

Writing blogs, detailed emails, and articles also provides these same benefits. If we rely on others or AI to write them, we lose this learning benefit.


Writing isn't learning. Learning is learning. Writing a blog post about something is a good way to learn about it, but the learning doesn't come directly from the writing. Needing to write it just inspired you to learn, there are other ways.


The risk isn’t so much losing the ability to think (I think that is exaggeration), but rather that interacting with it is changing how we think, or, indeed, what we think.


*but rather that interacting with it is changing how we think, or, indeed, what we think*

That's simply the world moving forward. Before folks invented the chimney, it was just assumed that a house was a smoky place. Horses were important before mechanical transportation got off the ground. Belief in humors has dwindled since germ theory proved to be successful. Reading, and non-religious texts being easily available, introduced ideas folks might not have had on their own. The camera caused folks to reconsider what art is and allowed for impressionism, abstract art, and more.

Tech and advancement changes the way we think and what we think, and it shouldn't be surprising.


This also plays out over time: assuming future LLMs will be trained ever more on the generative output of today's LLMs, we could end up communicating using a language that becomes frozen in time as it cycles through the same feedback loop. As John Culkin said: "We shape our tools, and thereafter they shape us."


I completely agree with PG.

Writing is not just communication with others, or writing down information for later.

Writing is not just writing down something that already exists in the mind.

Writing is a part of the thinking process. You talk to yourself a few seconds or minutes later. It's back-and-forth: marshaling thoughts and ideas, reading them back, reflecting, processing, then writing them down again.

Writing is a process of improving thoughts, generating new thoughts and ideas, making thoughts clearer for yourself.


He is also right about laziness. The tragedy might be that some people will never be challenged to write and so they will never develop the potential that they might have. I would not have bothered to make the effort to write somewhat clearly if I had not been criticized for sloppy writing. And, like you say, writing is thinking - so it's a lot more important than the words that end up on the page.


I just had an interview candidate come up with questions to ask us about the company.

It sounded formulaic and when I queried it, they said they used ChatGPT.

I'm not disappointed, but I'm now wondering how much of the thinking was left to the AI rather than being genuinely inquisitive.


Baffling. I am a huge fan of AI & ChatGPT, but that would be a giant red flag for me.


Someone AI'ing part of a human process is basically saying "this is BS and not worth my time". They may or may not be right.

I think questions about the company should be genuine (it is a good chance to find out what you need to make the decision). But many people think questions about the company are another hoop, i.e. the interviewer is using their questions as another measure, so they need to ask the correct questions.


>Someone AI'ing part of a human process is basically saying "this is BS and not worth my time".

This encapsulates the backlash artists and many others have. But it's a pipe dream of many in business to automate as much business as possible... what's that tell you about what they think? :)


It's a bold strategy, Cotton, let's see if it pays off for them.


Oh the irony of issuing stern warnings about imminent loss of human thinking ability ... in a tweetstorm


I'm a big fan of writing as a crucial skill for thinking.

But I wonder if this concern of AI-assisted writing harming this will actually play out the way people think.

Until quite recently, if you wanted to write music you had to learn an instrument first: needing to learn piano or guitar or similar to a point where you could compose your own music was a pretty big obstacle to getting started.

Now, thanks to software like GarageBand, you don't need to develop those skills first. You can jump straight to the fun bit.

I wonder if the act of sweating over word choice and spelling and grammar in order to explore your ideas might turn out to work in a similar way?

Maybe we'll find out that people who didn't spend years developing their fundamental writing skills will be able to excel in a new, LLM-assisted world where the ability to write well is no longer a prerequisite to using writing to explore, develop and share their ideas.


> Now, thanks to software like GarageBand, you don't need to develop those skills first. You can jump straight to the fun bit.

It might just be me, but I find it weird that one would consider the act of actually playing music not to be precisely the fun bit.


Some might even say that an analysis of the compositions by those who took up music writing after the advent of GarageBand etc. proves the harm of such tools.


I think your music example actually proves the original point. I would argue the average quality of (popular) music has declined in the wake of new technology. The lack of musical knowledge has led to undifferentiated, simplistic musicianship in modern songs. If artists were forced to learn to play an instrument they would ultimately produce more sophisticated work. The lower barrier to entry is a disservice to their musical development and artistic ceiling. The sad part is we don't even realize what we're missing. Who is writing songs at the level of Bohemian Rhapsody these days? I would genuinely be very happy to know.


I agree that much (though not all) of "mainstream" music has become shallow, and I also agree that people making less music on their own has contributed to this - although the latter, I think, has cultural factors too, and in societies where e.g. singing is more common, I think the appreciation for art is still a bit different.

But there's still tons of absolutely amazing music by skilled artists, you just have to venture off the mainstream path a bit. I don't know your exact tastes, so it's hard to recommend anything specific. But if you're saying "Bohemian Rhapsody", the genre of progressive rock is still very much alive, if niche.


Yes! Thank you Simon. Takes like this will make this conversation a lot more well-rounded.

Of course writing is important. Writing free-hand on paper is beautiful. But so are the ideas, and the organization of the writing, and the style, and the plot and the characters.

Of course playing instruments is important, but I'm reminded of the Rick Rubin interview where he talks about how he's not a musician, and he doesn't know how to do any real technical music production.


PG's premise is not really correct. We worry that AI will become super-powerful, but when you use AI you realize its superpower is the ability to collaborate with people. The combined system of one human plus one AI makes each component more capable and powerful than either acting independently.

So, what PG really should worry about is people losing the skill of independently writing, and instead getting really used to having AI work as a muse, or worse, a ghostwriter. My writing will always be better with an outside collaborator, and if you're so busy or important that you rely exclusively on a ghostwriter, oh well. In all circumstances, AI will not be as good without strong human interaction, and likewise humans can take their work to a different level by leveraging AI.


My experience has been that GPT4 is terrible at thinking but, if I provide it with both the content and the way I want it structured, it can give me an output that requires very little tweaking before being ready to go. I do my writing-as-thinking when getting the draft ready.


I don't think most normal people will spam their own SNS/blogs using LLMs. If you hate writing, you won't be writing at all, rather than using LLMs to generate stuff. If you have a message to send, you'll still spend much of your time crafting a piece, even with LLMs. Obviously, there can be many exceptions here (e.g. trolls, flamers), but the key point is that using LLMs can only reduce the cost of the act of writing; it hardly lowers the barrier to writing itself, especially when there are implications to consider.

So, I think OP (Paul Graham) is basically overestimating people's will to write, probably because he's such a good writer himself.


He's naively assuming that people don't already just dip their finger in the social media feed, Google's first page of results, or their favourite forum to get an opinion instead of formulating their own. Nothing will change except the medium for figuring out which way the groupthink wind is blowing.


Think of all the blogspam that is on medium, or all the clickbait done to get impressions.

That isn't born out of passion.


Use AI to write anything and you become confined to whatever "moral code" the model has been taught to follow. Once a critical mass is reached, whoever controls these models, will control what is being said.


Funny story...

Some of you only know math a certain way, as you used programmable calculators etc. I learned math the way one of the founders of chaos theory learned it: via graphing and manual math operations. And the difference is that I can grasp new math fast, as my meta-language of math objects in my head is formed of those manual operations and graphing snapshots.

It's the same with the human thinking-writing connection: writing helps us refine and enter new meta-language objects into our brains, and without it we let our IQ level decay.


Oversimplifying, there are two sorts of writing out there: the sort from people who put a lot of focus on developing writing and language skills, and the sort from people who don't.

For the folks who aren't writing with care, does a language model flowering-up a message like "be there tomorrow to paint walls 7am" reduce any thinking effort? There are people who are making their way in the world without a detailed writing-driven thought process; I don't see how this harms them. (If anything, an inquisitive, precision-driven LLM bot could help, by asking questions to clarify when it detected ambiguity in their original message and such - but that would be a specific product with a specific tailoring towards helping like that, which I have rarely seen out of ChatGPT's default "write me a..." behavior.)

And on the other hand, if you care about the LLM-bot-generated message's clarity, accuracy, etc., you're gonna have to proofread it, you might go back and forth, and you will still be going through that process of asking yourself "does this actually say what I mean? did I fully know what I meant when I started?"

I guess the suggestion is that this is going to push people who today struggle through writing because they care about the accuracy of the result into a lower-struggle process where they might unconsciously get worse results... but I'm not sure I agree. If I'm anxious about what the final text I send looks like, I'm anxious regardless of whether I used a bot, or asked a friend for help, or whatever.


Brilliant bit in the replies:

> Too bad the content written by ChatGPT is average at best.

> That's a step up for most people.


I said the same thing to my SO the first time I tried out ChatGPT.

However, I'm not sure if AI is where I'd draw the boundary for myself at least, as I've had the same lingering feeling from only reading HN comments and seldom writing my own, or consuming BreadTube content for instance.

More often than not, my opinions and thoughts are constructed for me by someone else as opposed to me doing my own critical thinking. Perhaps I'm one of the below-average thinkers for whom ChatGPT will be a godsend.


Some of my favorite thoughts are those that I wouldn't in a million years have come up with on my own. Being exposed to these is a warp gate for the mind.

The strange thing about being human is that we have an inordinate capacity to normalize and then universalize just about anything. On one hand, this plasticity allows us to deal with almost anything we are apt to encounter. On the other hand, it's an enormous blind spot to what could be. We're much more likely to add to our initial mental model than conceive of an entirely new perspective, and it's these entirely new perspectives that I value the most.


My experience with ChatGPT writing is that, if I have some essentially boilerplate text to put down on the page and work from, it's "fine" so long as I know enough to cross out the stuff I don't like and expand on the parts I do. Saves some time. But if I ask it to analyze a more complicated/nuanced question, it's far less useful. It certainly doesn't come up with unique insights into complicated topics I have a lot of familiarity with, so far as I've experienced.


I think this is just the tip of the iceberg. Future generations not only won't learn to write, they won't learn to read either. Writing is a technology that "freezes" language and inscribes it onto paper. That was important when you couldn't record speech directly, or broadcast it easily. But now you can just record a video or audio stream and put it on the internet, and that obviates the benefit of transcribing it.

Of course there are a lot of benefits of writing relative to audio/video. But are those benefits important compared to the enormous amount of effort that is required to learn to read and write?

I also don't think it's wise to demean the skill involved in generating good speeches/podcasts/talk shows compared to good writing. Language skills are language skills. The difference in ability of a strong speaker vs a weak one is dramatic and instantaneously obvious.

As the talent pool of good writers shrinks, the benefit of learning to read will shrink as well, leading to fewer readers. And then the incentive to become a good writer will diminish, since there are fewer readers who can consume your work. This will lead to a vicious cycle of reduction in literacy.


“If you cannot write well, you cannot think well; if you cannot think well, others will do your thinking for you.” ― Oscar Wilde

The above quote may be apocryphal (I tried to source it once but only found secondary sources), but I shared it in our work Slack when ChatGPT was getting a lot of action in our team chats. Seemed quite fitting, much like a pilot letting their flying skills rot by leaning on the autopilot too heavily. You get good at what you do, or so the saying goes.


"The spectator is a dying animal" - Jim Morrison


> “[…] others will do your thinking for you.” ― Oscar Wilde

Ironic.


> "Ironic."

Predictable. Unoriginal. Redundant.


My experience with ChatGPT is using it as an "editor". I write down a rough draft of what I want, give it to GPT - tell it to act as an editor, and give me suggestions using certain tones, contexts, etc...

I have difficulty expressing tact so it's helped me in various ways doing things like: writing code review comments about extremely flawed code without being accusatory or confrontational, or communicating poor performance in a written review without sounding like it's a personal attack.

Of course I always proofread and even change certain things that it generates and put my own "twist" on it, but I always ensure it conveys what I'm trying to convey, in the tone I'm trying to present. All things considered, I'm probably a better writer because of it.

ChatGPT has been such a wonderful tool for me, it's improved my life in so many ways - but it's just a tool. People that are going to write poorly using ChatGPT were always going to write poorly. It's like saying that using a chainsaw is going to make people worse at cutting down trees.


There are multiple ways to interpret this, as evidenced by the diversity of comments here; prejudice was the first thing that came to mind for me. Dyslexia is a condition that makes writing arduous due to such challenges as visual confusion between symbols, lack of memory for unusual spellings and difficulty comprehending abstract concepts from words alone. It's not merely a matter of preference to 'hate' writing. To those who have dyslexia, Large Language Models like ChatGPT could be a huge relief: might this finally be the spell-checker able to intelligently correct those misspellings which are so extreme that even non-dyslexic people might stumble over them?

"When you lose the ability to speak, you also lose some of your ability to think."

"When you lose the ability to dance, you also lose some of your ability to think."

"When you lose the ability to see, you also lose some of your ability to think."

I hope these variations illustrate that writing unlocks only one way of thinking, and to some it's no great loss to replace it!


Can definitely confirm. Someone close to me is very smart but dyslexic, and ChatGPT has been a godsend - almost like a prosthetic limb.


title of this post could have been written by gpt. oh, and twitter - no thanks. can we simply stop posting twitter stuff here? it's just unreadable to anyone that doesn't use that platform. (i.e. most people)


Aren't most HNers using Nitter? https://github.com/zedeus/nitter


I am, as it is the only sane way to handle links to twitter.

Unlike twitter, nitter is usable.


i would have thought not - certainly not by me


I'm thinking about buying a personal voice recorder and keeping it on me 24/7. I could run the audio through Whisper every day and then slowly build up a database of facts that I could query. Like every day if I learn someone's birthday or meet a new person, I could quickly say "I met XYZ at the meeting and we talked about AI bots. Their birthday is May 11th", or "remember to pay the phone bill on the 21st of every month". And then every day the assistant could query that database and tell me whose birthday it is, or which task I need to take care of. If it was realtime then we get closer to a "Her" style AI assistant.

The biggest issue I have is with the models being cloud hosted. I want to own and control all of my data especially personal things like voice recordings... any company will promise the moon when talking about their data security, but we should be 100x more worried about the risks in the long term.
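Roughly, the daily batch step could look like this (a minimal sketch, assuming the open-source whisper package and a local SQLite file; the file names and schema are just placeholders):

  import sqlite3
  import whisper  # open-source openai-whisper package, runs locally

  # Transcribe the day's recording (file name is a placeholder)
  model = whisper.load_model("base")
  text = model.transcribe("recorder-2023-05-11.mp3")["text"]

  # Append the transcript to a local database for later querying
  conn = sqlite3.connect("notes.db")
  conn.execute("CREATE TABLE IF NOT EXISTS transcripts (day TEXT, text TEXT)")
  conn.execute("INSERT INTO transcripts VALUES (?, ?)", ("2023-05-11", text))
  conn.commit()
  conn.close()

Since Whisper runs locally, the audio never has to leave the machine; only the "query the database and remind me" step would still need an LLM or plain search on top.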


What is interesting to me is that thought leaders are always optimising for some X - be it productivity, some higher good, low poverty, efficiency etc.

But two things humanity, as a whole, consistently optimises for are *short-term survival* and *less day-to-day friction*.

ChatGPT/AI seems to be just another friction optimisation, and the truth is, we don't know if this optimisation leads us to a world that is anywhere close to 'pretty' by modern standards. People from the pre-industrial world would be shocked to see modern people enjoying all the air and noise pollution, but there is no way to avoid falling into such a local optimum, because we do not optimise for clean air or quality of life, only for lower friction and survival.

This seems to put us randomly into some local optimums all the time, until unsolved problems pile up so high that they start endangering our survival - rinse and repeat.

What I'd argue is that there is no known way for mankind to steer this globally in a different direction.


But there is, you said it yourself. A thought leader turns into a dictator. Of course, this can (and will) go awfully wrong at some point, but as a medium-term solution this might be the way.


I'm struggling to come up with examples where one man could consciously steer mankind in some different direction - the closest we have to such a "steering committee" is probably the UN. So I don't see Paul or any other individual significantly impacting the way AI is adopted (at least not in the way it is portrayed today - as a global game changer / new era enabler).


Paul Graham on the benefits of good writing is like Helen Keller offering her reflections on what makes a great Broadway show.


> Paul Graham on the benefits of good writing is like Helen Keller offering her reflections on what makes a great Broadway show.

I think you can dunk on someone's writing without ableism. (Observing that a blind and deaf person isn't likely to enjoy a Broadway show isn't necessarily ableist, but gratuitously bringing them into a discussion that doesn't involve blindness, deafness, or Broadway shows is.)


> Really, has there been a backlash against me? I hadn't noticed any change. I actually worry a lot that as I get "popular" I'll be able to get away with saying stupider stuff than I would have dared say before. This sort of thing happens to a lot of people, and I would really like to avoid it.

http://lemonodor.com/archives/001091.html


Helen Keller claimed to be able to enjoy the orchestra because she could feel vibrations in the air from the sounds the instruments made.


Good or bad?


I can’t wait to read high school essays that have hidden sentences like “As an AI writing assistant, I believe…”


Writing is killing creativity.

Out of a dozen branches of thought, writing forces you to commit to just one and expand on it. That is nice for that one thought, but now you've left the other eleven unexplored, and they will be forgotten.

When you are thinking you can jump between multiple trains of thought without friction.


In general, taking the easy route stifles development. Relying on AI to speak for you is a major crutch.

"The Art of the Essay"[0] describes the process of formalizing thoughts and ideas into structured arguments and logically unfolding them as a result of trying to put them in writing.

This also dovetails into the notion that communication mediums can support only certain information. "The medium is the message" [1] is the phrase coined for this idea.

[0] http://www.paulgraham.com/essay.html [1] https://en.wikipedia.org/wiki/The_medium_is_the_message


Is he referring to writing code or books/essays?

For code, I think he's forgetting that the people using ChatGPT to write code for them aren't really good programmers. These are mainly people using frontend frameworks that have horrible documentation, who use ChatGPT to help them parse the garbage into something that just works. These people would never learn a functional programming language or any assembly. They simply copy the tech that others are using in order to stay employed. What kind of code could these people, on their own, without ChatGPT, produce that is actually useful for others to read or use?


It's more complicated than, "using AI more and more will cause us to forget how to write."

It's just a tool.

I like comparing it to something like quizlet (online flash cards), which can be a great study tool, but which people will also misuse and abuse. Instead of studying and taking the online quizzes, people will often just find the right quizlet which has all of the answers to the online quiz. People who are honest will actually learn the material, and the cheaters suffer.

I think people who misuse ChatGPT will become lazier, worse writers. But using the tool in an honest way will improve even an already talented writer's work.

Just my 2 cents.


You say it’s more complicated than PG’s claim, but your “just a tool” narrative is a classic oversimplification of technology’s relationship with man.

A tool is an extension of man, and it is not additive as you suggest, even for “talented writers”. A tool interacts with man’s mind and body, and society’s fabric, “adjusting the sense ratios” of man, to use McLuhan’s phrase.

“Just a tool” is lazy thinking that is most effectively used to stifle debate about technology. This has happened with the typewriter, the gun, the computer. For a more sophisticated engagement with technology, see Neil Postman, Ivan Illich, Marshall McLuhan, even Frank Herbert.


No, I agree with you! I'm not trying to simplify things by saying it's a tool. As you use the tool, it changes you as well. I think chatbots are a particular technology that will have a lot of that effect.

One of my favorite ramblings on this is Wendell Berry's essay on "Why I Am not Going To Buy A Computer".

I'm just saying that there's an obviously bad way to let this tool have an impact on you, and there's a way to use it that won't ruin you.

It's important to be vigilant, and not to abuse GPT-4 by, for example, replacing all of your own writing with AI.


This is not correct and I cannot disagree more.

Sadly, in our society, smart people have not been as successful as they could be, due to the prominence of written communication (I call it an obsession with fluffy writing). This is because alternative modes of conveying knowledge and ideas, such as mathematics or visual diagrams like mind maps, may not be comprehended as easily as writing.

We also need to acknowledge the unfortunate reality that English written by non-native speakers often faces unwarranted mistrust.

This is where ChatGPT steps in, serving as a great equalizer.

Math is the core skill required for critical thinking and problem-solving. Not writing.


>When you lose the ability to write, you also lose some of your ability to think

OTOH, when you don't gain the ability to read or write, you might just end up thinking with both sides of the brain in ways that generations of academics may not do very much.

Especially in cultures like some hillbilly communities, which maintained a larger vocabulary without reading or writing than their peers in places like Boston, where there have been schools the entire time since settlement.

Twice is nice.


Many times when I am asked to write something, it's not so that I may think but so that others may not. If this is the case, I have no problem serving them some AI-generated unthoughts.


Oh come on. The guy has been pushing AI startups for years, and now he's worried that some people are going to use it? And his worst fear is that people are going to stop thinking?


I've made the point before that I don't even use a spell checker so there is a deep sense in which I agree with this, but it is tempered by the recognition that I'm hardly the sharpest tool in the box and I get great purchase from my doodles, scribbles and other "outsourced" mental facilities.

I think that for most of us, the mental outsourcing optimum which makes you most capable is probably not the one which maximally enhances and preserves the capacities you have.


Given premises

P1. You use AI to write

P2. You lose ability to think

The argument seems to be:

  P1 -> P2
I would like to point out the reverse direction is far more likely as a causal direction:

  P2 -> P1


Alternate model:

Iterative refinement with an AI will make you think better by shortening the feedback loop.

The AI will force you to clarify your points because, if not, you will see it has meandered into the weeds when writing based on your prompt.

And since you have not authored the output directly, you will have to read it critically to validate its fidelity to your viewpoint.

This will help you build the skill of critical assessment as well as how to concisely specify your ideas.


When writing goes to 120 characters or you regularly write "thread" / 1/n at the bottom of your tweet, you lose some of your ability to think


Then again, maybe you don't have much worth writing or much to contribute - better off letting previous iterations of humanity do it for you.


True. And nothing of value will be lost for most people and they will be less prone to propaganda-driven thinking... freeing them from many mental prisons. True bilinguals are naturally free of this, as I have written here: https://news.ycombinator.com/item?id=35684559


My gut agrees -- and yet my head says that if we offload writing tasks that we don't care about to GPT, we might not lose much of value.

By contrast, if we offloaded journaling to GPT, that does seem like a big loss. It also seems impractical, though, since as soon as you've told GPT what to say, you've basically said it. Moreover most people don't journal anyway.


This strikes me as a very bold claim to just uncritically accept, especially considering it's just Some Guy making it because it Feels Truthy. Any sources to support the assertion? Any evidence at all? Absent that, this is likely just more hand-wringing about why GPT will cause the downfall of Western civilization.


I say the same thing to software architects who no longer write code. After some time, you start to lose your skills.


Observation suggests that people are switching to using calculators to add things for them with almost indecent haste. Most people hate math as much as they hate writing. Way more than admit it. Within a year the median column of numbers could be added up by a calculator.


Many people find writing very difficult or are not confident about their ability to do it accurately. For people like that, GPT can be a godsend. If they need to write an e-mail or short report for work, they can just give GPT a few bullet points and it will produce a perfectly serviceable, error-free text.

People who need to write in a second language will find GPT especially useful, because they can write those bullet points in their first language and GPT will produce a text in the target language that both serves the purpose and is much better than anything they could produce on their own. The result is likely to be better than a straight machine translation, too, because GPT can, if appropriately prompted, adjust the content, structure, and formality of the text for the intended purpose. And, if they are trying to become more fluent in the target language, the text produced by GPT will be good to learn from, because it will show them how to say just what they were trying to say.


Not so sure being served the text on a platter will lead to any improvement.

Which is fine when some experienced person uses the generator to do something that they already know how to do (boilerplate code generation) or is tangential to their main activity (report generation).

In the long run though, what happens when we reach a generation that is taught to use the LLMs for their job from the start? Will they be able to handle anything unexpected?


If my entire message can be condensed into a few bullet points that could be fed into ChatGPT, wouldn't it be an insult to my audience's time to provide them with anything but that very list of bullet points?

This is exactly the thought that puts me in your category of "people [who] find writing very difficult"


Depending on the situation, a set of bullet points might seem insulting to your audience for other reasons.

Here’s a real-world example. I live in Japan. While I am fluent in Japanese, it is not my first language and I am not confident about writing formal e-mails, as the grammar, vocabulary, and strategies for making requests are quite different from everyday usage. Now, though, I can feed prompts like the following to GPT-4:

“Compose an e-mail in Japanese as follows:

Sender: [my name and affiliation in Japanese]

Recipient: [name of company in Japanese]

Content: Request to visit the company to tell them about [my product name in Japanese], which is our new accounting software that uses AI to streamline work. Tell them that I can visit their headquarters at any time on any weekday. Ask them to reply by e-mail or to call my mobile at [number].”

In a few seconds, it produced a much better e-mail than I would have been able to write without a lot of effort or someone else’s help [1]. From my experience, I can say with confidence that GPT-4’s e-mail is much more likely to elicit a response than anything I could compose by myself. (I would change one or two things in its version before sending it, though.)

Millions of people around the world are like me: doing much of their work in a second language and often feeling at a disadvantage to native speakers, especially when writing for formal or public-facing purposes. GPT-4 is a great tool for people like us, as well as for people who are not confident about writing in their first language.

[1] https://www.gally.net/temp/202212chatgpt/20230511-email-comp...
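For what it's worth, the same request can be scripted rather than pasted into the ChatGPT UI. A minimal sketch against the pre-1.0 openai Python package (the API key placeholder, model name, and prompt wording here are only illustrative):

  import openai

  openai.api_key = "sk-..."  # placeholder; set your own key

  prompt = (
      "Compose an e-mail in Japanese as follows:\n"
      "Sender: [my name and affiliation]\n"
      "Recipient: [name of company]\n"
      "Content: Request to visit the company to present our new accounting "
      "software. Say I can visit their headquarters on any weekday. "
      "Ask them to reply by e-mail or phone."
  )

  # ChatCompletion is the chat endpoint in openai<1.0
  response = openai.ChatCompletion.create(
      model="gpt-4",
      messages=[{"role": "user", "content": prompt}],
  )
  print(response.choices[0].message.content)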


If an AI saw cuts an AI tree in an AI forest and no AI is around to hear it, does it matter to AI?


The key is that information exchange between a person and AI must be symmetric, not asymmetric. One must actively maintain that balance. Asymmetric information exchange will result in deleterious effects, similar to over-consumption of TV or social media.


Audio books returned to me the mouth-to-ear telling of stories that we had for tens of thousands of years. Writing/reading has not been around long enough to have any impact on our actual evolution. I wouldn't bemoan its loss too soon :)


I agree in the same sense that by using a calculator you lose the ability to think quantitatively.

We build tools and move up the stack. We're not going to lose the ability to think. The types of thinking we do change to better serve our needs. Needs change.


he's missing the point

Firstly, I don't write to think. It's too fucking slow. Having had shit handwriting, I've never used the written word as a medium for working stuff out. I suspect a sizeable minority of people don't write to think.

However, when I do write, the words I use are deliberate and compact. This is because writing is laborious and slow.

The real risk is the death of nuance. At a gross scale the example is this:

"Forgive me father for I have sinned" is logically the same as "Sorry daddy I have been bad". However the nuance is fucking miles apart.

Unless you are good at proofreading (spoiler: it's a skill and is never taught properly), you are going to have a bad time.


For writing yes, the more we offload the lazier we will get.

But for reading, I can’t stand most text as it’s way too long and convoluted. I would gladly take a short version most of the time or one that is created just for my level of understanding.


History repeats itself - from oral to written, now to AI. Fear not, it's a shift, not a shutdown. Writing didn't dull minds, it sparked ideas. AI isn't the end of thinking, it's a new chapter.



Wait, didn't Paul Graham leave twitter?

https://twitter.com/paulg/status/1604556563338887168


For about two weeks, then right back to tweeting 5x/day. Just like most who "left twitter" in the past year.


So basically "fake it till you make it", "move fast and break things", "build it, and they will come" gave us the ultimate UX achievement: Offloading responsibility to the outside entity to be comfortably dumb.

What a dream, what progress for humanity. But hey, the money will be great, the opportunities will be immense. Rome is burning. Really. Burning.

Where are AI churches? Someone is not investing in the future of mankind.

I love the term "hallucination" - marketing geniuses at their best. Calling algorithmic junk, which is a clear error, by this "term" creates a positive image and enables more incorrectness from a technology that is supposed to serve humans.

So buckle up, the world of marketing and speculation will come to the ground with a big economic bang. Like it or not.


Maybe most people will write more now that they have GPT available.


Just because I wasted too much time thinking about it: "Within a year the median piece of writing could be by AI."

What does/could "median" mean in this context?


I believe it implies that, looking at the volume of writing produced over a period, more than half of it would be AI-generated. It may also suggest a quality lens: all writing of average quality and below would come from AI, either directly or with AI support.


I expected something more than a person's unsourced opinions if the link is on the front page of HN. I know it's PG, but he isn't an authority on any of this. Feels like it's here because it's cool to hate on LLMs. I usually flag such stuff and move on, but I want to make my own doom proclamation: the cynicism toward anything that will change the status quo is going to be our downfall as an industry and a society. I understand raising concerns, but my god, man, at least pretend to balance it with some excitement. I've truly lived long enough to see the heralds of new technologies turn into doomsayers.


Common sense packaged as wise sounding sentences.


I would prefer this kind of ideas floated by reputable neuroscientists backed by peer-reviewed papers rather than a dev turned investor with a history of falling for theories that are more intellectually appealing than proven.

As it is, this tweet series looks so much like one of the many scares we got earlier: RPGs loosen moral norms, video games make you violent, television cartoons make children less smart, etc. New edition: using ChatGPT lessens your ability to think!


If anything, writing froze ideas (see religion), whereas speaking them from generation to generation allowed the dross to slough off good ideas. Does good science require writing? That's a good debate to have. Are written notes essential for scientific progress/discovery? Maybe. If failures are recorded and researchable, then I would argue yes, because it is likely only the successes were transmitted with spoken word.


I would argue that broad collaboration requires the written word. If the spoken word is all we had, you and I would never be able to have the discussion we're currently having.


This assumes there's some kind of collective ability to think.

The evidence for that is tenuous at best.


Miss the essays, pg. When you lose the ability to blog, you gain the obligation to tweet.



He's right though. Working in an industry that uses a lot of NLP, it's funny to me that people are also so excited about generative LLMs.

Soon we won't have to read or write! It will be AI expanding content and producing sentences for other AI to condense back down.

Less words good, me like nice computer talk for me


I enjoy writing. I hate writing business tone jargon. I use ChatGPT for that.


And when you stop writing on paper, you become less consistent.


You won’t lose the ability to write when you get a fountain pen


pretty silly comments. historically, people have always made such stupid comments on things they didn't understand, and I think they will continue to do so.


PG jumps the shark.


Amusing Ourselves to Death is a great book on this idea that the medium changes you.

TL;DR: When a disruptive technology comes along (books, radio, television, the Internet, now AI) there's an intrinsic shift in how we think and how information is communicated.

For example with books there is generally the assumption that you have read the chapters in order and thus there is no need to give you a recap with each chapter.

However, in television, you can make no such assumption and so at the beginning of every episode they have to remind you what happened before because it's possible either you didn't watch the previous episodes or it's been a long time since then.


Also true of the shortening attention span


it's the opposite, your ability to think is your ability to write

anyone can write, even a machine, but do they think?


And yet pg is still here, writing


> Most people hate to write as much as they hate math.

The target audience of openai.


While I enjoy the act of writing, I dislike the amount of time it takes to do it well.


[reference needed]



