Apple Intelligence System Prompts (github.com/explosion-scratch)
15 points by explosion-s 6 months ago | 17 comments



Suggestion for this repo: prompts embedded in JSON files are quite hard to read, especially in mobile browsers.

You could add some code (perhaps in a GitHub Action) that extracts the prompts out into accompanying Markdown files to fix this.

You’ve already done most of the work to create this file: https://github.com/Explosion-Scratch/apple-intelligence-prom...
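The extraction step could be as small as this (a minimal sketch of what a GitHub Action might run; it assumes a flat {name: prompt} JSON mapping, which may not match the repo's actual schema):

```python
# Mirror prompts stored in a JSON file as one readable Markdown file each.
# Assumes the JSON is a flat mapping of prompt name -> prompt text; the
# real repository's layout may differ.
import json
from pathlib import Path

def export_prompts(json_path: str, out_dir: str) -> list[str]:
    prompts = json.loads(Path(json_path).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for name, text in prompts.items():
        md = out / f"{name}.md"
        # Wrap the prompt in a fenced block so whitespace survives rendering.
        md.write_text(f"# {name}\n\n```\n{text}\n```\n")
        written.append(str(md))
    return written
```

A workflow step would run this on push and commit the generated .md files back to the repo.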


I built an Observable Notebook that pulls the JSON from GitHub and renders it as HTML: https://observablehq.com/@simonw/apple-intelligence-prompts


Is there an emerging best practice for complex prompt management?

We use markdown and simple string replacement for most of our prompts, but it feels hacky and potentially error prone.


Markdown w/ string replacement seems fine to me. JSON works well when you need to structure the rest of the config along with the prompt. I assume you could combine markdown + JSON.
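In Python that combination can be as simple as this (a minimal sketch using string.Template; the template text and config keys are illustrative, not anyone's actual prompts):

```python
# "Markdown + string replacement": the template lives in a markdown file,
# the variables live in JSON-shaped config. Keys and text are made up.
from string import Template

def render_prompt(markdown_text: str, config: dict) -> str:
    # safe_substitute leaves unknown $placeholders intact instead of
    # raising, so a missing variable is visible in the rendered output.
    return Template(markdown_text).safe_substitute(config)

template = "You are $role. Please limit the reply to $max_words words."
config = {"role": "a summarizer", "max_words": 50}
print(render_prompt(template, config))
# -> You are a summarizer. Please limit the reply to 50 words.
```

safe_substitute (rather than substitute) is one way to make the "error prone" part reviewable: forgotten variables show up as literal $placeholders instead of exceptions at runtime.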

You will start running into issues with referential integrity, etc. with just a plain text file though. You may also want outside collaborators other than devs. So you might need a more involved prompt management system on top of that.

What feels error prone to you?


Yes, we use JSON heavily as well.

The string replacement feels error prone, and ideally we would have better methods for reviewing full prompts beyond running the code.

I’m imagining something like Storybook[0] but for our prompt management.

Something that renders the markdown and can do realtime replacement across the different paths we use for generating prompts.

For context, we reuse many “wrapper” prompts and then have a few context-specific replacement prompts that are nested within the larger prompts. It works OK for one level of depth, but multiple levels make it hard to interpret from the raw code itself.

During dev/runtime, we have some color coding for the different levels, but again, ideally this would be built into the way we store/version our prompts.

We’ve looked into a few different prompt analytic/versioning tools, but they all seem too simple for our use-case.

[0] https://storybook.js.org/
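The multi-level wrapper setup described above can be sketched as recursive template expansion (a hedged illustration; the {{name}} syntax, prompt names, and cycle check are all invented here, not your actual scheme):

```python
# Multi-level "wrapper" prompts: a prompt may reference other prompts via
# {{name}} placeholders, and expansion recurses until none remain.
# All names, text, and the placeholder syntax are illustrative.
import re

PROMPTS = {
    "wrapper": "System rules:\n{{rules}}\n\nTask:\n{{task}}",
    "rules": "Be concise. {{tone}}",
    "tone": "Use a friendly tone.",
    "task": "Summarize the user's article.",
}

def expand(name: str, prompts: dict, seen: tuple = ()) -> str:
    if name in seen:  # guard against cycles, e.g. a -> b -> a
        raise ValueError(f"circular reference: {' -> '.join(seen + (name,))}")
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: expand(m.group(1), prompts, seen + (name,)),
        prompts[name],
    )
```

A "Storybook for prompts" could then render expand() output per prompt, with each nesting level highlighted, instead of making reviewers trace the references through code.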


Hmm, mind if I shoot you an email? We might be able to help then.


Your placeholder site describes a product that suits one of our use cases as well.

Please launch with "Sign in with" for Microsoft as well as the typical GitHub / Google, so that anyone on an M365 Business plan can give their employees poor-man's SSO.

See the "Continue with" section here, for instance:

https://www.xsplit.com/user/auth

Or here:

https://id.atlassian.com/login


Sure! [username]@gmail.com


FWIW, viewing the "raw" version of a file is a little better since long lines will be wrapped.


I'm honored to have the great SimonW comment (lol). I've just pushed an update that incorporates a lot of the rendering from your notebook into the main repo. Thanks for the suggestion!


Seeing “Do not hallucinate.” included tells us a lot about AI in 2024.


Did you know that if you write "do not hallucinate," the LLM won't hallucinate? (: Totally useless prompts.


I wonder why they keep writing the word "please": "Please limit the reply," "Please keep your summary." I think they don't know precisely how LLMs work.


LLMs are trained on texts reflecting real-life conversations. Saying “please” helps in those, hence it also tends to help in LLM inference, which follows patterns that are prevalent in the training data.


Wasn’t there some research that LLMs respond better when you ask politely?


As I recall, at one point you could get better results from ChatGPT by offering it a $50 tip. I think they fixed that.


Doesn’t this repository constitute a copyright violation?



