In what pre-ChatGPT world did "write clearly and provide ample information" include such familiar and long-known tips as:
* Ask the model to adopt a persona
* Use delimiters to clearly indicate distinct parts of the input
* Specify the desired length of the output
> others' poor experience is a result of their more layman writing style.
I guess we'll have to pass the hat around for those wretched souls. In the meantime, someone needs to tell the English teacher that "layman" is not an adjective.
> sentence structure and word flow
In my experience ChatGPT doesn't care about those. It's able to infer through quite a large amount of sloppiness. The much larger gains come from guiding it into a model of the world, as opposed to directing it to respond to lean prompts like, "What do I eat to be better?"
It's perfectly acceptable to use nouns to modify nouns in English. "Beach house". "Stone hearth". "Microphone stand". Go looking for more, I bet you can find a lot.
The distinguishing feature of an adjective isn't that it modifies a noun. It's that it has no other use, at least in standard American English.
The fact that everyone knows what a "layman writing style" is means that the only place it's failing is your personal list of acceptable attributive nouns. But English isn't static. It runs on consensus. And the consensus here is that there's nothing weird about that use.
If you're right, I would say you're making a pedant argument. If you're wrong, I would say you're making a pedantic argument.
And, fine, let's call it an attributive noun rather than a noun used as an adjective. I was taught the noun as adjective thing in high school but happy to update my terminology. Indeed "layman" is not on my list of acceptable attributive nouns.
Did you notice how your first link says it's "incorrect" to use Kyoto as an attributive noun?
Of course it's not incorrect to use Kyoto as an attributive noun. "Kyoto accent" is perfectly correct. The "rules" laid out in that link are more like common patterns, not prescriptions.
Hard lines rarely happen in the real world. It's best to be flexible such that you can accept unfamiliar instances of familiar patterns without trouble.
I'm certainly disagreeing with the part that claims there is an explicit list of correct uses. English isn't that simple. They were cited as a big list of examples, not as having all the rules.
Dude... an argument can certainly be made that English evolves by consensus; you're right about that. At the same time, that doesn't mean anything goes! It perhaps would be going too far to say that something is "correct" or "incorrect" English. But we can certainly give a proposed fragment of English a score measuring how well it matches the current consensus regarding what is a valid sentence in the language. Now,
> while others' poor experience is a result of their more layman writing style.
would receive a low score: "layman" is not a noun commonly used as an adjective, where "common" is measured relative to the overall usage of "layman".
While I am aware that there's prior use (centuries ago, even), "layman writing" jumps out at me a lot less than the current use of things like "ask" and "spend" as nouns.
This is an incredibly amusing Hacker News interaction. You are 100% in the right, @chowells, in terms of having a breadth of accurate and passionate knowledge about your topic, and in it being relevant to the discussion.
> * Use delimiters to clearly indicate distinct parts of the input
> * Specify the desired length of the output
You should do this if you ask a human to write something too, given no other context. Splitting things up with delimiters helps humans understand text. The desired length of the output is also very clearly useful information.
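To make the two tips concrete, here is a minimal sketch of a prompt that applies both: delimiters around the source text plus an explicit target length. The triple-quote delimiter and the 50-word budget are arbitrary choices for illustration, not from any official guide.

```python
# Sketch: delimit the input and state the desired output length.
# The delimiter style and word budget are illustrative assumptions.
article = "...some long source text the model should summarize..."

prompt = (
    "Summarize the text delimited by triple quotes in about 50 words.\n"
    f'"""{article}"""'
)

print(prompt)
```

The same structure works for any instruction-plus-input task; the delimiter just keeps the model from confusing your instructions with the text it's supposed to operate on.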
> * Ask the model to adopt a persona
This is maybe a bit more of a stretch but if you hired a writer to write for you, you would tell them who you are so they can have some context right? That’s basically what this is.
This is bad advice. In my experience, asking it to take on a persona can muddy the text that it writes as that persona. I once told it to adopt a biographer persona, only for it to write a biography claiming the subject was a biographer.
It's best to treat it as a Language Model, and set it up to complete the text you've provided. All this chat model stuff is a waste of time that degrades text quality.
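A toy sketch of the completion-style setup described above: rather than instructing a chat persona ("you are a biographer..."), you write the opening of the document yourself and let the model continue it. No API call is made here; the example text and names are purely illustrative.

```python
# Chat-style: an instruction that risks persona leakage into the output.
chat_style = "You are a biographer. Write a biography of Ada Lovelace."

# Completion-style: provide the start of the artifact itself, so the
# model's job is simply to continue the document in kind.
completion_style = (
    "Ada Lovelace (1815-1852)\n\n"
    "Ada Lovelace was an English mathematician best known for her "
    "notes on Charles Babbage's Analytical Engine."
    # A base language model would be asked to continue from here.
)

print(completion_style)
```

The completion framing leaves no persona instruction in the context at all, which is the point: there's nothing for the model to accidentally echo into the text.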
Honestly, those seem like the guidelines that SAT/ACT question writers probably use to remove ambiguity and provide clear, consistent directions, so my guess is that they've been best practices for anyone who cares about clearly defining a task for a long time.