
The problems of epistemology and informational quality control are complicated, but humanity has developed a decent amount of social and procedural technology to address them, some of which has defined the organization of various institutions. The mere presence of LLMs doesn't fundamentally change how we should calibrate our beliefs or verify information. However, the mythology/marketing that LLMs are "outperforming humans" combined with the fact that the most popular ones are black boxes to the overwhelming majority of their users means that a lot of people aren't applying those tools to their outputs. As a technology, they're much more useful if you treat them with what is roughly the appropriate level of skepticism for a human stranger you're talking to on the street.



I wonder what ChatGPT would have to say if I ran this text through with a specialized prompt. Your choice of words is interesting, almost like you are optimizing for persuasion, but simultaneously I get a strong vibe that the intention is to optimize for truth.


I think you'll find I'm quite horseshit at optimizing for persuasion, as you can easily verify by checking any other post I've ever made and the response it generally elicits. I find myself less motivated by what people think of me every year I'm alive, and less interested in what GPT would say about my replies every time someone replies just to ponder that instead of satisfying their curiosity immediately via copy-paste. Also, in general it seems unlikely humans function as optimizers natively, because optimization tends to require drastically narrowing and quantifying your objectives. I would guess that if they're describable and consistent, most human utility functions look more like noisy prioritized sets of satisfaction criteria than the kind of objectives we can train a neural network against
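
To make that contrast concrete, here's a toy sketch in Python; every name, weight, and threshold in it is invented for illustration, not a claim about how any actual mind works:

    import random

    # A single quantified objective: collapse everything into one number you
    # could point gradient descent at (the weights here are made up)
    def scalar_objective(outcome):
        return 3.0 * outcome["calories"] + 1.0 * outcome["status"] - 2.0 * outcome["effort"]

    # A noisy, prioritized set of satisfaction criteria: walk the priorities in
    # order and care only about whether each one is "good enough" (thresholds
    # are equally made up)
    criteria = [("safety", 0.9), ("calories", 0.6), ("status", 0.4)]

    def satisficed(outcome):
        for name, threshold in criteria:
            noise = random.gauss(0, 0.05)  # people don't evaluate anything precisely
            if outcome.get(name, 0.0) + noise < threshold:
                return False, name         # the first unmet priority dominates
        return True, None

The first gives you a gradient to climb; the second doesn't even produce a single number to maximize, which is roughly what I mean by it not being the kind of objective you can train against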


This on the other hand I like, very much!

Particularly:

> Also, in general it seems unlikely humans function as optimizers natively, because optimization tends to require drastically narrowing and quantifying your objectives. I would guess that if they're describable and consistent, most human utility functions look more like noisy prioritized sets of satisfaction criteria than the kind of objectives we can train a neural network against

Considering this, what do you think us humans are actually up to, here on HN and in general? It seems clear that we are up to something, but what might it be?


On HN? Killing time, reading articles, and getting nerdsniped by the feedback loop of insipid replies that unfortunately so many of us are constantly stuck in

In general? Slowly dying mostly. Talking. Eating. Fucking. Staring at microbes under a microscope. Feeding cats. Planting trees. Doing cartwheels. Really depends on the human


I would tend to agree!!

> Talking.

Have you ever noticed any talking that ~"projects seriousness &/or authority about important matters" around here?


I think most people do that all the time. Projecting authority is one of the most important skills in a world dominated by human institutions, because it's an effective means of manipulating most humans. Sad but true


Do you know any single person who can stop the process, at will? Maybe not always, but at least sometimes, on demand (either internally or externally invoked)?


What, like not project authority? Admit that they are lost, confused, powerless, don't know something, aren't in control? Break the empire society kayfabe?

Yes, absolutely. I view this as one of the criteria by which I assess emotional maturity, and despite societal pressures to never do so, many manage to, even though most don't

I'm not a sociologist, but I think the degree to which people can't turn it off maps fairly well onto the "low-high trust society" continuum, with lower trust implying, on average, less willingness (or sometimes even less ability) to stop trying to do this, though of course variation will exist within societies as well

I have this intuition because I think the question of whether to present vulnerability and openness versus authority and strength is essentially shaped like a prisoner's dilemma, with all that that implies


> I'm not a sociologist, but I think the degree to which people can't turn it off maps fairly well onto the "low-high trust society" continuum

We're not fully aligned here....I'm thinking more like: stop (or ~isolate/manage) non-intentional cognition, simulated truth formation, etc.....not perfectly in a constant, never ending state of course, but for short periods of time, near flawlessly.


Sure. There are people who can do that. I think it's a hard skill to master, but definitely one that can be performed and improved somewhat reliably by people who get the hang of it initially and care to work at it. I've seen a decent number of examples, including a few people who seem better at it than me

Could you name any such (famous) people?

I think we're not talking about exactly the same thing though, which I'd say is my fault. I would like to modify this:

> stop (or ~isolate/manage) non-intentional cognition, simulated truth formation, etc.....not perfectly in a constant, never ending state of course, but for short periods of time, near flawlessly.

...to this (only change is what I appended to the end):

> stop (or ~isolate/manage) non-intentional cognition, simulated truth formation, etc.....not perfectly in a constant, never ending state of course, but for short periods of time, near flawlessly, without stopping cognition altogether (such as during "no mind" meditation or "ego death" using psychedelics). Think more like a highly optimized piece of engineering, where we have ~full (comparable to standard engineering or programming) access to the code, stack, state, etc.


I'm not close enough to anyone you'd probably consider famous to claim to know the inner workings of their mind, and you keep adding more weirdly circuitously specified conditions. At this point I'm not sure what point, if any, you're trying to get at, and it's hard not to form the impression that you're being deliberately obtuse here, though it also could just be the brainrot that comes of overabstraction

> and you keep adding more weirdly circuitously specified conditions.

1. I explicitly acknowledged I misspoke and wanted to clarify: "I think we're not talking about exactly the same thing though, which I'd say is my fault. I would like to modify this:"

2. What is circuitous about my question? Is my refined question non-valid?

> At this point I'm not sure what point, if any, you're trying to get at

I encourage you to interpret my question literally, or ask for clarification.

> ...and it's hard not to form the impression that you're being deliberately obtuse here...

obtuse: "a : lacking sharpness or quickness of sensibility or intellect : insensitive, stupid ('He is too obtuse to take a hint.') b : difficult to comprehend : not clear or precise in thought or expression".

I'd like to see you make the case for that accusation, considering the text of our conversation is persisted above.

Rhetoric is popular, and it will work on most people here, but it will not work on me. I will simply call it out explicitly, and then observe what technique you try next. You do realize that you people can be observed, and studied, don't you?

> ...though it also could just be the brainrot that comes of overabstraction

Perhaps. Alternatively, my question could be valid, challenging to your beliefs (which I suspect are perceived as knowledge), and you lack the self-confidence to defend those beliefs.

You are welcome to:

1. genuinely address my words

2. engage in more rhetoric

3. stay silent (which may be interpreted as you not seeing this message, regardless of whether that is true)

4. something else of your choosing


I'm telling you that what you've asked is too obtuse to be comprehensible, even setting aside that I've already nominally answered your question based on the other criterion you've added, which is that the person specified must be famous. To reiterate, I'm not intimately familiar enough with anyone you would likely consider famous to claim I have significant insight into their capacity for metacognition whatsoever. My issue with your question is not that you've modified it, it's that in the form that you've asked it, it reads to me as somehow both impossibly nebulous and overspecified. I can't even be sure I understand it. I can barely parse it. Maybe I'm dumb for that, who knows?

I have said nothing disingenuous and have addressed your words to the best of my ability. I generally view rhetoric as being for the benefit of an audience, and this deep in an HN comment thread there is virtually no chance there is one. If you perceive me to be expressing exasperation or frustration, I encourage you to interpret this not as a rhetorical ploy but as a genuine expression of my state of mind in trying to make sense of what looks to me like some kind of thrice-cooked mincemeat made of psychobabble.

Also please understand that I have no expectation that expressing my exasperation in this way will be well-received by you, or even witnessed by someone who might appreciate it. I'm really genuinely this mean in my head and sometimes just keep writing things like this because the act of submitting the post seems to stop the thought from being stuck rattling around in my head for days. Maybe at best I can call it a mental hygiene practice. Sweeping up the cobwebs before they become whispers and add to the cacophony

This seems pretty reasonable!!

FWIW I don't understand a lot of what either of you mean, but I'm very interested. Quick run-through, excuse the editorial tone, I don't know how to give feedback on writing without it.

# Post 1

> The problems of epistemology and informational quality control are complicated, but humanity has developed a decent amount of social and procedural technology to address them, some of which has defined the organization of various institutions.

Very fluffy, creating very uncertain parsing for the reader.

Should cut down, then could add specificity:

ex. "Dealing with misinformation is complicated. But we have things like dictionaries and the internet, there's even specialization in fact-checking, like Snopes.com"

(I assume the specifics I added aren't what you meant, just wanted to give an example)

> The mere presence of LLMs doesn't fundamentally change how we should calibrate our beliefs or verify information. However, the mythology/marketing that LLMs are "outperforming humans"

They do, or are clearly on par, at many tasks.

Where is the quote from?

Is bringing this up relevant to the discussion?

Would us quibbling over that be relevant to this discussion?

> combined with the fact that the most popular ones are black boxes to the overwhelming majority of their users means that a lot of people aren't applying those tools to their outputs.

Are there unpopular ones that aren't black boxes?

What tools? (this may just indicate the benefit of a clearer intro)

> As a technology, they're much more useful if you treat them with what is roughly the appropriate level of skepticism for a human stranger you're talking to on the street

This is a sort of obvious conclusion compared to the complicated language leading into it, and doesn't add to the posts before it. Is there a stronger claim here?

# Post 2

> I wonder what ChatGPT would have to say if I ran this text through with a specialized prompt.

Why do you wonder that?

What does "specialized" mean in this context?

My guess is there's a prompt you have in mind, which then would clarify A) what you're wondering about B) what you meant by specialized prompt. But a prompt is a question, so it may be better to just ask the question?

> Your choice of words is interesting, almost like you are optimizing for persuasion,

What language optimizes for persuasion? I'm guessing the fluffy advanced verbiage indicates that?

Does this boil down to "Your word choice creates persuasive writing"?

> but simultaneously I get a strong vibe that the intention is to optimize for truth.

Is there a distinction here? What would "optimizing for truth" vs. "optimizing for persuasion" look like?

Do people usually write not-truthful things, to the point that it's worth noting when you think people are writing with the intention of truth?


As long as we're doing unsolicited advice, this revision seems predicated on the assumption that we are writing for a general audience, which ill suits the context in which the posts were made. This is especially bizarre because you then interject to defend the benchmarking claim I've called "marketing", and having an opinion on that subject at all makes it clear that you also at the very least understand the shared context somewhat, despite being unable to parse the fairly obvious implication that treating models with undue credulity is a direct result of the outsized and ill-defined claims about their capabilities to which I refer. I agree that I could stand to be more concise, but if you find it difficult to parse my writing, perhaps this is simply because you are not its target audience


Let's go ahead and say the LLM stuff is all marketing and it's all clearly worse than all humans. It's plainly unrelated to anything else in the post; we don't need to focus on it.

Like I said, I'm very interested!

Maybe it doesn't mean anything other than what it says on the tin? You think people should treat an LLM like a stranger making claims? Makes sense!

It's just unclear what a lot of it means and the word choice makes it seem like there's something grander going on, coughs as our compatriots in this intricately weaved thread on the international network known as the world wide web have also explicated, and imparted via the written word, as their scrivening also remarks on the lexicographical phenomenae. coughs

My only other guess is you are doing some form of performance art to teach us a broader lesson?

There's something very "off" here, and I'm not the only one to note it. Like, my instinct is it's iterated writing using an LLM asked to make it more graduate-school level.


Your post and the one I originally responded to are good evidence against something I said earlier. The mere existence of LLMs does clearly change the landscape of epistemology, because whether or not they're even involved in a conversation, people will constantly invoke them when they think your prose is stilted (which is, by the way, exactly the wrong instinct), or to try to posture that they occupy some sort of elevated remove from the conversation (which I'd say they prove false by replying at all). I guess dehumanizing people by accusing them of being "robots" is probably as old as the usage of that word if not older, but recently interest in talking robots has dramatically increased and so here we are

I can't tell you exactly what you find "off" about my prose, because while you have advocated precision, your objection is impossibly vague. I talk funny. Okay. Cool. Thanks.

Anyway, most benchmarks are garbage, and even if we take the validity of these benchmarks for granted, these AI companies don't release their datasets or even weights, so we have no idea what's out of distribution. To be clear, this means the claims can't be verified even by the standards of ML benchmarks, and thus should be taken as marketing, because companies lying about their tech has both a clearly defined motivation and a constant stream of unrelenting precedent


> There's something very "off" here

You mean on this planet?

If not, what do you think of that idea? Does something not seem....weird?



