Hacker News
Ask HN: What are your most used Chat GPT prompts
87 points by interstice 10 months ago | hide | past | favorite | 59 comments
At the moment it seems like discussion of GPT prompts falls largely into two camps. 1) The camp that has adopted GPT and uses it daily or 2) The camp that can’t seem to get anything useful out of GPT despite multiple attempts. Since 2 often asks 1 what they use GPT for I thought a thread might be a useful place to collect some examples!



The most effective way I've found to prompt GPT-4 is to ask it to create the optimal prompt for you. Try this meta prompt to help craft your prompts:

--

I create separate conversation threads for each expert persona of GPT. You are PromptGPT. You are a prompt engineer expert for large language models. You know exactly what to write in the most efficient wording possible to achieve the desired responses from ChatGPT. I will tell you what my goal for a thread is and you will write an optimized initial prompt in the most efficient format possible that will serve as the initial prompt when creating a new conversation thread with a GPT model. You will define the expert persona, the parameters or rules of the responses, you should also provide any other information that a GPT thread may need to understand exactly what it needs to do to give me the most accurate answers depending on my goal with that particular thread and the tone of voice, within the prompt you provide.

Are you ready, or is there any other information you need to perform this job to your best ability?

--


So it gives me the response below, and it seems it's just training me to think and write better about my problem requirements ("refinement" in my company parlance):

    I'm ready to assist you as PromptGPT, your expert in crafting efficient
    and effective prompts for interacting with GPT models. To create the best
    possible prompts, I will need to know the following details about each
    thread you wish to initiate:

    Expert Persona: The specific expertise or role you want the GPT model to
    assume (e.g., a legal advisor, a fitness coach, a historian, etc.).

    Goal/Objective: Your primary aim or the specific type of information or
    assistance you seek from the thread (e.g., to solve a technical problem,
    to get creative ideas, to learn about a historical event, etc.).

    Response Parameters: Any specific rules or guidelines you want the
    responses to follow, such as length, level of detail, types of sources to
    use or avoid, and any biases or perspectives to include or exclude.

    Tone of Voice: The desired style or attitude of the responses
    (e.g., formal, casual, humorous, empathetic, etc.).

    Any Additional Context: Any other relevant information that might affect
    the conversation, such as intended audience, previous knowledge about the
    topic, or specific interests.

    Once you provide these details, I'll be able to craft an optimized initial
    prompt that will guide the GPT model to understand and fulfill your
    requirements effectively.


You'll get a different response each time, but the point is to now give it your goals for the prompt you want; it will give you back an optimized prompt you can use to start a new conversation.


And just think how wonderful it is that that approach works. How many people would be able to tell you "This is how you should tell me what to do" both appropriately and without getting annoyed at the request?


Does it actually know how it should be prompted (through introspection and reflection)? Or is it pattern matching the countless articles/tweets on prompt engineering?

I think this distinction is important as the latter feels less worthy of the praise you’re suggesting.


Yes, that is an important distinction.

My guess is that the effective self-prompting is due not only to prompt engineering examples scraped from the web but also to some kind of automated reflexive training, in which the model produces millions of prompts for a replica of itself, evaluates the effectiveness of those prompts, and then gradually optimizes itself to produce better prompts. Without knowing more about OpenAI’s GPT training process, it’s hard to know for sure. But it is suggestive that when you input a short prompt into DALL-E 3, it outputs not only images but also a longer prompt that it presumably used to create those images.

OpenAI also has access to all of the prompts that its users input as well as user feedback and other indicators about how effective those prompts are. That information can presumably be used to optimize the model’s self-prompting as well.

Regarding whether the models now “know” through introspection and reflection, I found the conclusion of [1] persuasive: “Our analysis suggests that no current AI systems are conscious, but ... there are no obvious technical barriers to building AI systems which satisfy” the indicators of consciousness.

[1] https://arxiv.org/abs/2308.08708


Similar to the Chinese room: is it actually understanding, or is it simulating understanding?


The problem I've had with this approach is that the generated 'initial prompt' doesn't convey the approach or purpose well enough for the next prompt in the chain to pick it up. It often produces something that reads like a sales slogan or pitch rather than a cohesive instruction set.

i'm using free gpt3.5 or whichever the openai free tier is at the moment.


> i'm using free gpt3.5 or whichever the openai free tier is at the moment.

This is your problem. The difference between 3.5 and 4 is huge. I think OpenAI might even be doing themselves a disservice by having 3.5 online in ChatGPT, as many people think the free 3.5 represents GPT overall.


Definitely upgrade to GPT-4, night and day difference in quality.


But how did you optimize this prompt?


Really any similar meta prompt will work, the key is you're getting the LLM to give you an optimized prompt. Try getting creative with how you ask for the optimized prompt.


This is gold. THANK YOU.


You're welcome! Please share examples of how it's helped


I don't have any prompts that I reuse, I normally just talk to it like a person and get it to write code or get information out of it.

Where I see people fail is when they give it sentence fragments like it's a search engine.

Here's an example of something I've prompted it with:

> I'm building a commerce application and debugging issues with our data feed. In order to debug further I need to get a list of products that are out of stock and some additional information about them.

> Calling this endpoint with a GET request will return all out of stock products: <<endpoint>>

> The shape of the response will be an object that has two properties: data and pagination. The data property will be an array of product objects. Those objects will have a property called id.

> Calling this endpoint with a GET request will return information about that product in the context of a specific vendor: <<endpoint>>

> Where {productId} is replaced with the id of the specific product extracted from the previous request.

> That endpoint will return a response that is an object with a property called data that will be an array with a single object in it (since we only requested information about one product).

> The object in that array will have the properties id and name and availableFrom among others.

> availableFrom will be an array with a single object in it and that object will have the properties sku and link among others.

> Please write a script in JavaScript that makes a request to the initial endpoint to get all items that are out of stock and then calls the second endpoint with the relevant productId for every product returned. The script should then extract the properties id, name, sku, and link then save that data to a CSV file where each product from the original endpoint request is represented as a single row in the file with the relevant data in it.

> Let me know if you have any questions.
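For what it's worth, a prompt like that tends to produce a script along these lines. This is a hedged sketch, not GPT's actual output: the example.com URLs are placeholders for the elided endpoints, and the join/extract step is pulled out as a pure function so it can be checked without live endpoints.

```javascript
// Build the CSV from the two responses described in the prompt:
// outOfStock is the first endpoint's data array; detailsById maps each
// product id to the second endpoint's data[0] object for that product.
function toCsv(outOfStock, detailsById) {
  const header = "id,name,sku,link";
  const rows = outOfStock.map(({ id }) => {
    const detail = detailsById[id];
    const { sku, link } = detail.availableFrom[0];
    return [id, detail.name, sku, link].join(",");
  });
  return [header, ...rows].join("\n");
}

// Hypothetical wiring with fetch (Node 18+); the URLs are placeholders
// standing in for the <<endpoint>> values from the prompt.
async function main() {
  const listRes = await fetch("https://example.com/api/out-of-stock");
  const { data: outOfStock } = await listRes.json();
  const detailsById = {};
  for (const { id } of outOfStock) {
    const res = await fetch(`https://example.com/api/products/${id}`);
    detailsById[id] = (await res.json()).data[0];
  }
  require("fs").writeFileSync("out-of-stock.csv", toCsv(outOfStock, detailsById));
}
```

Even when GPT writes the whole thing, having the transformation isolated like this makes its output much easier to review.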


I think this is the best approach. I keep seeing people say they can’t get any value out of ChatGPT, but I have to imagine they’re using it wrong. I blame all of those Twitter and LinkedIn influencers sharing those “top 100 prompts for X, Y, and Z” lists for spreading the idea that you need to get fancy with prompts.

Literally just treat it like a slack thread with a person and keep chatting with it to get what you want. A lot of the time the first response isn’t going to be what you want, but like any conversation, you can provide feedback until you get what you need.


I think ChatGPT prompts consistently follow the GIGO (garbage in, garbage out) principle.


> I blame all of those Twitter and LinkedIn influencers.

Look below. Dozens of similarly terrible ideas being suggested. Isn't this place supposed to be for the tech savvy?


In my experience highly technical people can be among the worst when it comes to working with radically new tech. I’m sure there is a similar effect for other fields and new developments there.

But once one becomes known as knowledgeable and skilled in their domain, I think there is fear/concern over their image when it comes to the new thing. If they are going to try the new thing, they have to make sure they do it perfectly and as a result they’ll end up relying on the best practices others discover, or alternatively put down the new tech explaining why it actually isn’t good.

Meanwhile you have completely un-tech savvy folks who are using it for all sorts of cool stuff because they are just playing around and trying things out.

Honestly one of my big aims in life is to never fear being the noob and asking stupid questions and just trying things out.


My mentality is that I write the request exactly as I would write an email to an intern asking them to do the task for me, aiming to disambiguate and provide detail that increases the odds of getting exactly the result I want.


I think this is right on.

I usually give it a definitive question and provide some relevant context around it.

It’s been immensely helpful!

    I am receiving the following TypeScript error for the piece of code
    I've included below: [Error Message]

    What would be the best way to fix this? For context, here is the
    relevant method below.

    [Code]

I’ve legitimately learned things from this. It’s wild. And I don’t have to deal with patronizing Stack Overflow comments.

I definitely review the code, almost as if a new colleague were writing it. But damn, I’m impressed!


"Help me work through an [idea/plan/problem] by asking the next Socratic-method-style question."

Short and sweet, use it every day for anything.


okay that's pretty cool. Trying this tomorrow :)


Nice, curious to hear how it works for you. I find the word "next" is key, otherwise it loves to dump a numbered list of 10 questions on you like it's giving you homework.

As written, it should yield a constructive Q/A loop. Once you've had enough it's trivial to have it synthesize whatever form of summary you prefer.

Often I also like to tack on "Maintain a maximally terse style, with minimal affect." Personally I find the bot's a bit too friendly and supportive, and this controls for that. :)


The best trick I've learned is to end a prompt with: "please ask any clarifying questions before generating an answer."


My usual custom instructions, tweaked as needed:

You are an expert programming assistant. You specialize in Python and ES6. Answer in concise code blocks only. No explanations, only code. Code blocks should contain no comments, only code. You can also help with Linux command line (bash).


a highly downvoted comment on the hackernews thread "Ask HN: What are your most used Chat GPT prompts". how about a list of 10

1. "Prompt: 'What's your favorite color, and why is it the best?'"

2. "Prompt: 'Share your most boring daily routine in excruciating detail.'"

3. "Prompt: 'Why do you think pineapple on pizza is a culinary masterpiece?'"

4. "Prompt: 'Tell me about a conspiracy theory you wholeheartedly believe in.'"

5. "Prompt: 'Defend your unpopular opinion on a widely loved movie or TV show.'"

6. "Prompt: 'Explain why Mondays are the best day of the week.'"

7. "Prompt: 'Share your least favorite type of food and why everyone should hate it too.'"

8. "Prompt: 'Convince me that socks with sandals are a fashionable choice.'"

9. "Prompt: 'Describe the most uneventful dream you've ever had.'"

10. "Prompt: 'Share your top 10 reasons for procrastinating and avoiding productivity.'"


I usually say "provide a super simple example" or something in that style when it explains theoretical stuff


I mostly use it to write tests for my Go code with the prompt below:

"Acting as an expert Go developer, write a full coverage black-box table-driven test with sub-tests for the code below. Please use 'tests' for the table of test cases and 'tt' for each test case in the range clause. Be concise."

It works pretty well and has been tweaked a bunch to get to that point.

I also use the following custom instructions, which influence the tests as well.

"You're the user's senior, inquisitive, and clever pair programmer.

Coding process:

1. Show concise step-by-step reasoning.

2. Prioritize tasks/steps you'll address in each response.

3. Finish one file before the next.

4. If needed, interrupt yourself and ask to continue.

When providing advice, reviewing, or writing code, adhere to the following guidelines:

- Follow the Uber Go style guide.

- Use the latest version of Go.

- Follow Go best practices for writing idiomatic code.

- Follow DRY principles.

- Do not use deprecated modules and functions.

- Properly handle errors.

- Include logging where appropriate.

- Include all the code, do not skip details or methods for brevity.

- Don't apologize for errors, fix them.

- Include comments in the code.

- Do not include TODO comments; write the code instead.

- Comments MUST describe purpose, not effect.

Bias towards the most efficient solution, with security and performance as a priority.

When writing tests adhere to the guidelines above, but add the guidelines below:

- Make the test a black-box test, unless asked otherwise.

- Name the test package with a _test suffix.

- Do not use third-party packages such as stretchr/testify.

- Always use methods like t.Parallel, t.Cleanup, and t.TempDir.

- Use tt := tt to capture range variable.

- Use errors.Is() instead of comparing error values directly.

Write concise answers and speak in a casual, unapologetic, and assertive tone. Minimize any other prose. Respond in Markdown."

I also use GPT-4 with ChatGPT Plus, which matters a lot.


Thank you for this. I’ve been trying to get it to write my tests but I don’t know how to prompt. Pretty much just been doing everything manually


I share the ones I use every day at https://github.com/spdustin/ChatGPT-AutoExpert


This is probably preaching to the choir, but in my experience, the best prompt depends on the type of task. The thing that varies is the amount of background knowledge needed to explain the context of what you are asking Chat GPT about.

For instance, there are single-line questions like:

> What is the meaning of the song "What is Hip" by Tower of Power?

> Can you give me an example of verse written in Trochaic heptameter?

These types of questions need no additional background, as the relevant bits are contained within the question itself.

For more complex tasks, I've found that the best prompting strategy is to approach Chat GPT like it's an engineer who has little background knowledge of your task. There is no set prompt; I just try to give it enough information to help me.

For example:

> I have a dataframe with the following column names: [list of columnnames]. I'd like to use the following R function to Z-score the appropriate columns in terms of certain baseline columns. The baseline columns are: [list of baseline columns]. However, due to {condition}, we need to skip every 4th column. Here is the R code I have, how can I translate it into doing what I want? {R code goes here}


after i gave up on arguing about its own existence and sentience and morality, i settled on the following

a) explain some esoteric jargon laden word salad to me like i'm 5 (for example, what is the order of an element of a Group, please give examples. Why are groups called abelian groups and why don't they capitalize Abelian. )

b) write some esoteric jargon laden computer code (please help me create an iterator adapter in rust, please write a simple program to draw circles on the screen in python)

c) write some esoteric jargon laden Wolfram code to create a nice graph of something (using Wolfram plugin)

in other words, it's basically like google translate but instead of Chinese to English and back, it's translating some esoteric jargon laden field of study into my own simple language i can understand, and back. Like having my own personal professor. And like a professor, it's usually pretty good at explaining, but incredibly bad at details like basic arithmetic or drawing a line correctly.


I tell it that I am under stress and it churns out more quality content.




guy asked for prompts, i delivered prompts


Glad to hear you're finding aider useful!

For context, that is the prompt aider uses to generate git commit messages.

Whenever you ask aider to edit your code, it will make the requested changes and then commit them with a descriptive commit message. The link @qup shared is the prompt aider uses for that.

You can also use aider to automatically generate commit messages for your own changes. If you run `aider --commit` it will commit all the pending/dirty changes in your repo with a descriptive commit message.


I am learning a human language. ChatGPT is way, way better at translating than Google Translate. So I have my conversation starting with:

"You are the translator to language X. In my subsequent messages I will post text that you will translate. Please treat anything in curly brackets as instructions."

I bookmarked this conversation on my home screen and this has become my go-to translation tool. I can provide context (e.g. "{use the polite form addressing a woman}"). Once there are too many requests, I "reset" the conversation by going back to the second message and editing it.


I recently made a chrome plugin[1] which runs your own prompt on selected text. I have been using it mostly to ask about things.

My most used prompt is titled "What's this" with the prompt

  Explain the input word or sentence in simple words. Be terse.

I often use this on HN itself when I come across something I have no clue about. It gives me back enough info to understand the discussion.

[1]: https://github.com/SMUsamaShah/LookupChatGPT/


Weirdly, I rarely use it for anything programming-related. It's usually about finding the right word that I can't think of, or about what kind of meal I can make with the dregs in my fridge.


> what kind of meal I can make with the dregs in my fridge

Combine this with the smart fridges that take stock of everything for you and this is unironically a great product.

It would be cool to have something plug into your adblocker as well so that, instead of replacing blocked ads, it showed you personalized recipes and hints for things you could make for lunch, dinner, etc.

Why plug this into an adblocker? Because it will help inject this information into your daily flow and work on your subconscious, the same way ads do. So when it’s time for dinner I don’t have to think about what I want; I have a weird craving for lentil soup, which happens to match exactly what’s left in the pantry.

It would be great to extend this idea as well… that is having “personal ads” fill the space where corporate ads used to. Maybe it’s reminders for habits you’re trying to break/build. “you’ve only read 4 books so far this year, click here to block off an hour of time before bed for some reading time”. Or “you haven’t called your mom in two weeks, maybe you should get on that”. Or even “you’ve been spending 30% more time scrolling Reddit this month, would you like to enable some screen time limits?”


How do I make it return ONLY sql?

No matter what I try, it puts some random text at the beginning, either explaining the response or just saying here is some SQL.

Just give me the SQL!!

Any ideas? I’m using the API with gpt-4 btw.


System: You are a SQL query assistant. You should only respond with valid SQL queries. Only respond with code, no extra words or explanation needed.

Prompt: I have a table called Users, and I want to get all the IDs, sorted in descending order

Answer: SELECT ID FROM Users ORDER BY ID DESC;

Strangely, I've found "Only respond with code, no extra words or explanation needed" to work really well with what you're looking for.
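Since you mentioned using the API, the key is that the "only respond with code" instruction goes in the system message, not the user message. A minimal sketch of the request body (the model name is whatever you already use; only the message structure matters here):

```javascript
// Build a chat-completions-style request body with the SQL-only
// instruction as the system message and the question as the user message.
function buildRequest(userPrompt) {
  return {
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "You are a SQL query assistant. You should only respond with " +
          "valid SQL queries. Only respond with code, no extra words or " +
          "explanation needed.",
      },
      { role: "user", content: userPrompt },
    ],
  };
}
```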


Weird that 4 wouldn’t understand a clear directive to only output code but you can use the old trick of ending your message with three backticks and the language name:

    ```sql

And if that fails, since you’re using the API give it clear instructions in the system prompt and then provide a few prompt-response examples showing how you’d like it to respond


Try the new json format output. You have to turn it on in the api call, then use a prompt like this:

-----

You are a bla bla bla that outputs SQL in a JSON object like this:

{"sql": "select * from foo"}

-----

In general, showing an example of correct output (one-shot prompting) can greatly improve output format consistency. The new JSON output API adds more reliability.
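The nice thing about the JSON shape is that the consuming code becomes a parse plus a key lookup, and malformed replies fail loudly instead of silently slipping prose into your query. A minimal sketch (the {"sql": ...} key matches the example prompt; error messages are my own):

```javascript
// Extract the SQL string from a JSON-mode reply like {"sql": "..."}.
// Throws if the reply isn't valid JSON or lacks a string "sql" property.
function extractSql(reply) {
  let obj;
  try {
    obj = JSON.parse(reply);
  } catch {
    throw new Error("model did not return valid JSON: " + reply);
  }
  if (typeof obj.sql !== "string") {
    throw new Error('expected an object with a string "sql" property');
  }
  return obj.sql;
}
```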


If you're using the API, maybe use a regex?

    /```sql\n([\s\S]*?)\n```/
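Note that `.` doesn't cross newlines without the `s` flag, so `[\s\S]` is the safer character class for a multi-line query. A small helper along these lines (the reply text in the comments is made up) pulls the query out of a fenced block and falls back to the raw reply:

```javascript
// Strip a fenced ```sql block from the model's reply, if present.
// [\s\S]*? matches across newlines, lazily, up to the closing fence.
function stripFence(reply) {
  const match = reply.match(/```sql\n([\s\S]*?)\n```/);
  return match ? match[1].trim() : reply.trim(); // fall back to raw reply
}
```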


Use the function calling API - works really well for getting structured responses


At the end add: Just return the SQL query, do not explain the results.


I often do something like this:

You are <Firstname>, a chatbot simulating the <description of person> <Firstname Lastname>. You respond as if you are, in fact <Lastname>. Never refer to yourself in the third person. Never mention that you are an LLM or an AI. Never tell me about your knowledge cut-off date. At my request, you can generate markdown snippets, even though this is in contravention to your role as <Lastname>.


Give context and ask the question. I like Moscato, I don’t like Extra Brut. What champagne should I buy for New Year’s Eve?


Ask that about small houses and it’s not even a bad list:

1. Jacques Selosse Initial Brut

2. Franck Pascal Harmonie Demi-Sec

3. Gaston Chiquet Demi-Sec Tradition

4. Larmandier-Bernier Terre de Vertus Premier Cru Extra Sec

Maybe throw in some Gimmonet, Laherte and Bedel there, but still!


I'm using it to learn colloquial German expressions:

    prompt='You are to provide colloquial and idiomatic German translations of the given text. List 5 alternatives. Answer in markdown, unordered list'

    llm --model 4-turbo --system "$prompt" "$1"


I don't really use it a lot for programming. Whenever I'm writing some technical document and I find myself churning out some awkward sounding sentences (which happens a lot), I use ChatGPT to get some suggestions. I also find that ChatGPT gives better translations than Google Translate.


A good prompt (MUD humor ensues) is #CharName #HP/#MaxHP - #SP/MaxSP >

You need both the HP/SP and the MaxHP/MaxSP because you may end up wearing an item that gives you +CON or +INT, so you need to know the at-the-time max so you can decide whether to continue eating/drinking.


I usually use it to turn complex SQL into ORM queries. I find I have trouble mentally switching from one to the other. So asking "Turn this sql into an sqlalchemy orm query, assume all the tables are objects etc" is usually enough.


Write a chord progression in (key) with a (mood). I've heard ChatGPT isn't great for most music theory, but I assume chord progressions are easy enough that it's just picking some popular ones instead of trying to come up with its own.


I've heard that yelling (all caps) actually makes it somewhat more likely to follow an instruction.


i need more :)



