OpenAI API keys leaking through app binaries (twitter.com/cyrilzakka)
140 points by archiv on April 13, 2023 | 78 comments



You should never store ANY secret information (API keys, passwords, tokens, secret keys of any kind) in your application binary. It can always be extracted one way or another.

If your application needs to call a 3rd-party service like OpenAI, the only way to avoid leaking your API key is to have your app communicate only with a backend you own and call the OpenAI API from there.

OpenAI allows revoking leaked keys. If you did include your API key in a client-side application, update your app to use a backend for OpenAI API communication, use a fresh key, and revoke the old key when your update ships (or, if you value security over functionality, revoke the key before you ship the update).


Well, if you have an app that's a thin wrapper on top of GPT-4, adding your own pipe between your client and OpenAI could add a lot to the cost and complexity of the app. Which is to say it's not surprising that people don't do it.

The genius and the craziness of GPT-4 is that you can make a whole app with a prompt like "now you're a clown painting custom faces on kids based on their favorite animals" and some glue code. Needing to add a three-layer network infrastructure on top isn't appealing, I'd imagine.


Could be as simple and cost-efficient as a Cloudflare Worker that adds your key and passes the query along.
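
For illustration, a minimal sketch of what such a Worker might look like (module syntax, with the key bound as a secret named OPENAI_API_KEY; the names and the single forwarded endpoint are assumptions, not from the comment):

    // Cloudflare Worker that injects the OpenAI key server-side,
    // so the key never ships in the client binary.
    export interface Env {
      OPENAI_API_KEY: string; // bound as a Worker secret
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // Only forward the one endpoint your app actually needs.
        const upstream = "https://api.openai.com/v1/chat/completions";

        if (request.method !== "POST") {
          return new Response("Method not allowed", { status: 405 });
        }

        const body = await request.text();

        return fetch(upstream, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            // The key lives only here, never in the shipped app.
            Authorization: `Bearer ${env.OPENAI_API_KEY}`,
          },
          body,
        });
      },
    };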


Or a 5€/mo Hetzner server running Nginx, though Cloudflare's free offering is very generous.


You have to use Cloudflare Workers in Unbounded mode to do this (especially if you are streaming using EventSource or WebSockets). Bundled mode won't cut it, as it closes the connection after 50 ms of JavaScript execution.


That’s 50ms of CPU execution, which does not include waiting for IO.


You'd need some kind of authentication as well.


Errrr, I assume your app can pass said authentication? If so, then it's meaningless; that's again a secret not under your control.


Not only that, but it doesn't make much difference whether they call the GPT API directly or through a proxy. The only thing that would really help is having users register and authenticate through the proxy.


The proxy approach at least lets you rate limit by IP, limit the length of the strings (and thus the token cost), etc. The API key may also grant access to other models, administrative APIs, etc. that you don't want people using.

Far, far, far better than nothing.
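
A rough sketch of the kinds of checks described above, with invented names and limits; the point is that they live server-side, where a stolen binary can't change them:

    // Illustrative only: limits a proxy can enforce that a raw
    // client-side key cannot. Values are made up for the example.
    const MAX_PROMPT_CHARS = 2_000;       // caps token cost per request
    const MAX_REQUESTS_PER_MINUTE = 20;   // per client IP

    const recentRequests = new Map<string, number[]>(); // ip -> timestamps (ms)

    export function checkRequest(ip: string, prompt: string): { ok: boolean; reason?: string } {
      if (prompt.length > MAX_PROMPT_CHARS) {
        return { ok: false, reason: "prompt too long" };
      }

      const now = Date.now();
      const windowStart = now - 60_000;
      const timestamps = (recentRequests.get(ip) ?? []).filter((t) => t > windowStart);

      if (timestamps.length >= MAX_REQUESTS_PER_MINUTE) {
        return { ok: false, reason: "rate limit exceeded" };
      }

      timestamps.push(now);
      recentRequests.set(ip, timestamps);
      return { ok: true };
    }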


> is to have your app communicate only with a backend you own and call the OpenAI API from there.

I'm a bit baffled that anyone puts anything secret into software other people are running. This service needs to be online anyway.

Anyway, seems like a lazy programmer thing.


Running a server even just to proxy requests takes a lot of work, since you now need your own auth system and have to manage scaling. If you take the plunge, a serverless architecture like Cloudflare Workers makes scaling automatic, but you still have to do some heavy lifting to set up an API key or auth system plus abuse protections (otherwise people just spam your API instead of directly stealing your OpenAI API key).


You probably don't need to scale if all you're doing is auth and proxying requests. If you get to the point where you do need to scale, you can probably afford to figure it out.


Er... just ask GPT-4 how to do it, obviously.


Or, a lazy ChatGPT auto-generated code copy-paste thing.


>the only way to avoid leaking your API key is to have your app communicate only with a backend you own and call the OpenAI API from there.

Don't you still have the same problem? Your backend would also require a key for the client to communicate, which would still be embedded in the client binary.


No.

You go through traditional auth channels (username / password) or you generate a key per app user to talk with your backend.

Or, you keep the backend you control open and implement controls to combat abuse, such as rate limiting and ip blacklisting.

Whatever you choose, the objective is to protect your API key and make sure the application is being used according to its purpose.

An API key directly to OpenAI allows any use under the sun (botnets, arbitrary prompts, etc.) and can drain money, put your account in bad standing, or even get you into (potentially serious) legal trouble. Using your own backend, you can do things like hit the OpenAI moderation endpoint, inject the correct prompts into whatever you're sending to OpenAI, etc.

The main thing is that you have a limited API specific to your app's offering, which significantly lowers the possible damage. You absolutely always want to give users the least privilege necessary for whatever use cases are being provided for; this protects both you and your users.
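
A hedged sketch of that "limited, app-specific backend" idea: the client sends only the user's text, and the server runs moderation first, injects the system prompt, and pins the model. The two endpoints are OpenAI's public moderation and chat-completion URLs; everything else (function name, prompt) is invented for illustration:

    // Sketch: one narrow server-side function instead of an open-ended key.
    const OPENAI_KEY = process.env.OPENAI_API_KEY!;

    export async function clownFacePainter(userText: string): Promise<string> {
      // 1. Moderation check before spending completion tokens.
      const modRes = await fetch("https://api.openai.com/v1/moderations", {
        method: "POST",
        headers: { "Content-Type": "application/json", Authorization: `Bearer ${OPENAI_KEY}` },
        body: JSON.stringify({ input: userText }),
      });
      const mod = await modRes.json();
      if (mod.results?.[0]?.flagged) throw new Error("Input rejected by moderation");

      // 2. The client never chooses the system prompt or the model.
      const chatRes = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json", Authorization: `Bearer ${OPENAI_KEY}` },
        body: JSON.stringify({
          model: "gpt-3.5-turbo",
          messages: [
            { role: "system", content: "You are a clown painting custom faces on kids based on their favorite animals." },
            { role: "user", content: userText },
          ],
        }),
      });
      const chat = await chatRes.json();
      return chat.choices?.[0]?.message?.content ?? "";
    }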


You have a different problem: you can handle authentication and blacklisting on your side. That's harder to do when clients communicate with OpenAI or another third party directly, while incurring charges that you have to pay for.


No, your proxy site can have a login flow or some other per-client identification process, like any web app.


Yes, but you could negotiate a token with each user on startup and if someone starts abusing your service you can block that access (perhaps automatically) or know who it is by authenticating the user.
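
One way this per-user-token idea could look, sketched in TypeScript with made-up names: the backend issues an HMAC-signed token on first launch and checks a revocation list on every request.

    import { createHmac, randomUUID } from "node:crypto";

    const SIGNING_SECRET = process.env.TOKEN_SIGNING_SECRET!; // server-side only
    const revokedUsers = new Set<string>();                   // or a DB table

    // Called once when the app starts up for the first time.
    export function issueToken(): string {
      const userId = randomUUID();
      const sig = createHmac("sha256", SIGNING_SECRET).update(userId).digest("hex");
      return `${userId}.${sig}`;
    }

    // Called on every proxied request; abusive users get added to revokedUsers.
    export function verifyToken(token: string): string | null {
      const [userId, sig] = token.split(".");
      if (!userId || !sig) return null;
      const expected = createHmac("sha256", SIGNING_SECRET).update(userId).digest("hex");
      if (sig !== expected || revokedUsers.has(userId)) return null;
      return userId; // caller can now rate limit or block per user
    }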


> If your application needs to call a 3rd party service like openAI, the only solution to safely not leak your API key is to have your app only communicate with a backend you own and call the openAI from there.

I've also seen vendors do things like issue client-side keys for AWS IAM users that can access their backend (in AWS) with a super locked-down role. This would be more interesting as a solution if IAM were interoperable between cloud service providers (CSPs), since this dependency means you can't move to another CSP without bothering your customers. It also doesn't help in the OpenAI case because there isn't a way to mint limited-permission tokens.


Given a cloud service account, you can call the provider's token service, get a bearer token, then use that bearer token to call any other cloud service configured to trust the provider's issuer. Most cloud providers support this today with OIDC.


That may be true, but I would hazard a guess that 90% of mobile apps that talk to 3rd-party services have keys stored in their binaries. It may be true that you as an individual should not do that, but discipline doesn't scale. We need a convenient best practice that doesn't put keys in the binary. Setting up a proxy server that you also need authentication with, and that talks to all your APIs, is not gonna get done unless you make it idiot-proof.


Almost all Google APIs that have any kind of visualization component that runs on the client (for example, an embedded, pannable Google map with things drawn on it) require you to ship your API key in your clients.

They have some safeguards, e.g. HTTP referrer restrictions, but it's not bulletproof.


Google Maps' "API key" is not really a secret. It's used only to identify your application and to generate an iframe that's only allowed to be used on your website. It's bulletproof enough not to be considered a secret, as it can't really be used to impersonate your app if leaked.


> it can't really be used to impersonate your app if leaked

It actually can, I could create an app called com.yourbusiness.someapp, install the app directly without signing it, and use yourbusiness's API key.

For the JavaScript embed APIs I could create a fake root cert, fake DNS, fake HTTP cert signed by the fake root cert, serve https://yourbusiness.com/ from my in-house server to my own laptop, and then issue queries to the Google API using yourbusiness.com as the referrer. (Or hell, just create a Chromium fork that serves up fake referrer headers.)

It's just that it's not a big enough threat to have become an issue.


I've had a theory that LLMs are going to increase the prevalence of "shadow IT", aka slightly technical business users using GPT to create their own apps to handle business processes. These apps will then lead to large increases in security incidents where company data gets leaked or hacked or whatever, because the apps will just be random unreviewed code.


This is not such a grim future since it’ll mean that SWEs still have a job. My fear is that AIs get powerful enough for MBAs to create robust software without the assistance of SWEs. A world run solely by MBAs is a scary future.


This advice is so broad and generic that it's just about useless.

If you don't store any API keys in your binary, how do you handle crash-logging and analytics? How do you integrate with third-party log-in SDKs?

You _could_ vend some of those (not the crash-logging ones, etc) from your API, but then how do you authenticate to _that_, if you can't have any secrets?

You can't login-gate all of those, and many of those are not easily rotated.


There is a difference between an app ID and a secret. A secret is something that can be used to impersonate you or your app towards a 3rd party service if leaked.

3rd-party log-in SDKs using OpenID Connect can work entirely without client-side secrets, using only your app IDs. Crash logging and analytics services' API keys are also usually considered to be app IDs, not secrets.


Sure, but now the advice is what? First determine which of these 10 API keys your app uses are really secret; for those that aren't, just stick 'em in the binary and you're done; for those that are, set up a proxy server with authentication, store the keys on that, and call the APIs through there. ...Or you could stick all 10 in the binary and be done in 5 minutes. Can you start to see why it's so common for people to just stick API keys in the binary? Can't we have some reasonable dev experience for storing secrets that isn't 10x the effort?


Determining which API key is supposed to be a secret isn't usually an issue, because the site providing you with the secret key typically states clearly that it's supposed to be a secret. For example, this is what OpenAI says about their keys directly on the page where you generate the key: "Your secret API keys are listed below. Do not share your API key with others, or expose it in the browser or other client-side code."


I'm surprised people wouldn't want to route requests through a backend so they could throttle traffic and prevent abuse from a place no one else can control. Perhaps they simply don't know that's a concern; they put a static API key for a service that costs money into a client-side application. That seems unintentional, at least with regard to exposure to potential consequences.


Keeping creds safe is basic security practice, and we have the eternal September of programmers - which now applies to inexperienced people using AI to cobble together software. So there’s going to be a lot of these mistakes made and learned as they acclimate.


Noticing that comments like this are being consistently downvoted, yet there doesn't seem to be any sort of disagreement in the replies. Usually when people disagree online, they don't merely downvote, but also post their dissent.

Interesting that it's suddenly become so "controversial" to suggest circumspection with respect to this subject...


I didn't downvote, nor have I noticed such a pattern, but was this comment really insightful? It just said programming newbies will make newbie mistakes. Yes, and? It is self-evidently true. So yes, you can say it, and the wording wasn't really condescending, but quite often this reads to me as just bashing beginners to feel more powerful and smart as an experienced, studied programmer, compared to those amateurs.


> Keeping creds safe is basic security practice... So there’s going to be a lot of these mistakes made and learned as they acclimate.

This might not be insightful from your perspective, because you've thought about it before, but it needs to be said. Just like a NO DIVING sign in the shallow end: most people already know, but the people who don't won't find out if it isn't stated.


To stretch/extend/mangle your swimming metaphor: the AI tools now let people swim in the deep end before they've taken their water wings off. There are lessons others would probably have learned along the way, by having to do a lot of searching and parsing of the basics, that can be skipped over now.


I am not sure. Before ChatGPT you could also google and copy-paste together foreign code you did not understand, even before Stack Overflow. In fact, this is what I did as a beginner, and I would think most did. You start to modify and play a bit with it, and after a while you understand (something). Or you keep trying until it somewhat works.

I don't see why ChatGPT changes that. I've only tried it a little bit so far, but it doesn't usually give you a ready program, right? It gives you snippets that might work, or not, but in my case they required understanding of the domain. So I could adapt the scripts to my needs, but I doubt a beginner could, at least not for anything non-trivial. Also, those beginners can ask a million stupid questions to the AI, which just patiently answers. Yes, those answers can be wrong, but that can happen in a forum as well, and even in university I was occasionally taught some BS.

So yes, ChatGPT changes the game a bit, but not that drastically. If you want to become a professional programmer, you still have to get your hands dirty and grind away at the basics. You cannot skip certain things, or you'll never manage to get even a mid-sized project running performantly and stably.

And if necessary, it would probably be trivial to weed out "programmers" who aren't really programmers by asking them some questions directly.


I think ChatGPT reduces a lot of the friction - you can ask it to piece things together quickly and it produces it in a more easily digestible format, especially when compared to the google + read cycle. I know some people are trying to learn things with ChatGPT when they wouldn't before - it's easier to build something in a conversational manner and it's harder to pick up the pieces when StackOverflow steers you wrong.

I agree with your assertion that non-trivial things will weed out "programmers"; I just think that ChatGPT will get people further and that the definition of "non-trivial" will shift a bit - possibly far enough that people will have an easier time leaking their API keys through Frankenstein apps they post online.


People who make iOS apps are rarely equally comfortable creating backend apps, for understandable reasons.


It seems like static hardcoded API keys are a best practice with map APIs. I'm guessing that, with the large number of requests in the critical path, proxying adds too much latency? But everyone does it.


Oh, that seems pretty bad.

Are you OK naming some map apps you're aware of that are doing this?

That'll let others dig into those apps, (hopefully) report the security issue, and (again hopefully) get the app makers instead using a better approach for their next releases.


For one, it's what you get following the documentation of the most popular product, Mapbox.


Ouch. So, they're even teaching people to do the wrong thing? That's not good at all. :(


Even if you store API keys in code inside a distributed binary, isn't it pretty simple for users to run mitmproxy and view the API requests containing those keys? There's no real way to control API keys given out to users - if you want to hide them, you just have to proxy requests instead.


There's a really good iOS app called Proxyman[0] (the Mac app is also excellent) that lets you view HTTP requests that apps make from inside iOS.

If you're curious about this sorta stuff, I definitely recommend checking it out.

[0]: https://proxyman.io/ios


Some people do cert pinning to prevent this, but generally yes it is pretty simple.


You can't do cert pinning if you're using the OpenAI API directly though?

That only applies to internal API calls, at which point the requests/binary won't contain the OpenAI key?


Ehh, I don't think it'd be that hard to implement cert pinning against OpenAI's APIs.

You just need some very permissive pinning, where you require any publicly trusted CA, to prevent MITM attacks. Basically only trust the root CAs a phone already trusts by default. You don't need coordination between the server and your client to implement this. All you have to do is prevent your TLS calls from trusting any certs signed by manually trusted CAs that Proxyman/Charles/etc might have had the user add.

Of course, that'll only delay the API keys leaking. With a jailbroken iPhone and Frida you can effectively disable cert pinning checks. Or extract the keys from memory, or binary analysis, etc.


> All you have to do is prevent your TLS calls from trusting any certs signed by manually trusted CAs that Proxyman/Charles/etc might have had the user add.

Yeah but I have certs signed by trusted root authorities a la letsencrypt?


The letsencrypt root CA is included in this. If you trust only a device's default trusted CAs, all letsencrypt certs will work. Also, they don't have their own root CA: https://letsencrypt.org/certificates/


I'm dumb and realized I can get a letsencrypt cert but the domain won't match.....


Yeah, you're correct, it can only really be done with APIs you control, and even then it's a pain because you need the certs and app in lockstep. But it's useful if, for example, your proxy API used an API key and you wanted an additional layer of security. Edit: the sibling comment is interesting for an approach that might remove low-hanging fruit.


Can't an API key pretty much always be recovered client-side? If the API key is hardcoded in the app, it's trivial to get. If it's dynamically fetched, it's also trivial to intercept the network call. I don't think there's anything you can do to prevent that.


Well, the safe approach is to either have the user provide their own API key, or to perform requests on the backend (after verifying the user has an account, is correctly authenticated, and has paid, and deducting from their quota).

Or you could generate a per-user subkey with a quota if the upstream service supports that.


The OpenAI API has a larger surface area than just completions. You can retrieve files uploaded through it, you can generate images, you can use the embeddings API, you can use a different, more expensive model (GPT-4), and make as many calls as you wish. With a backend, you can restrict what is allowed, add your own rate limiting, rotate keys, etc.


The replies suggest storing the API key on a backend which you control.


Yeah, you can try obfuscating it, but in general you need to assume plaintext access to anything you send to the client.

The only exceptions are basically some really hardcore DRM like Denuvo and hardware-certificate DRM. Even those aren't safe from determined people.


You can have a server side proxy for authenticated and authorised users, with a rate limit.


From comments here, I suppose you could make your app cloud-based so your client app is just a UI and only your trusted backend has full access to the external API, and that sounds dystopian.


Why is that dystopian? Sounds like the standard, good-idea architecture for an application that communicates with a paid 3rd party API.


That's what these applications are already doing. The only difference is their cloud back-end is openai's API instead of something they control themselves.


It's... pretty normal practice.


Things I'd like to see from OpenAI API keys:

- Unlimited, or at least a much higher limit - right now they are restricted to 5

- Ability to set a time limit on a key - I'd like to create a new key that's only good for the next hour when I try out a new thing that asks me to paste in an API key

- Ability to set a budget for an API key. Giving an app a key with a $5 total budget - or $10/month or whatever - would be really neat.

- OAuth support. Let me OAuth-connect an app with my OpenAI account - then I don't have to know what an API key is, I can grant it permission to spend my API credits, and I can revoke access later

- Let me see how much money each API key has spent

- An option to log everything an app does with my API key would be cool - I already have ChatGPT logs and rely on them all the time, but having that for other random applications would be excellent.


There's a lot of sites leaking OpenAI keys on the frontend as well, including some projects posted here on HN. I contacted one such dev back in February, they said they would fix it ASAP, and it's still not fixed.


Note: it's probably not just iOS/macOS apps. Android apps are equally vulnerable to this if you're brazen enough to dump your key in an .xml file, or in your code, where someone can just run strings on it.


You could probably remove the OpenAI qualifier from this finding, but I guess it makes it more relevant.


As an iOS developer I’ve learned to never put 3rd party secrets in the app. I typically go with a proxy backend server and attach a request-unique nonce that’s created using an obfuscated secret key stored as an array of integers on the client.
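
Roughly what that pattern might look like, sketched in TypeScript for brevity (the commenter's context is Swift on iOS); the secret bytes, names, and time window are placeholders:

    import { createHmac } from "node:crypto";

    // The shared secret, stored as an integer array rather than a string
    // literal so it doesn't show up in a naive `strings` dump of the binary.
    const OBFUSCATED = [0x73, 0x33, 0x63, 0x72, 0x33, 0x74, 0x2d, 0x6b, 0x33, 0x79];
    const sharedSecret = Buffer.from(OBFUSCATED).toString("utf8");

    // Client side: sign each request with a timestamp so replays go stale.
    export function signRequest(body: string): { timestamp: string; nonce: string } {
      const timestamp = Date.now().toString();
      const nonce = createHmac("sha256", sharedSecret).update(timestamp + body).digest("hex");
      return { timestamp, nonce };
    }

    // Proxy side: recompute and compare before forwarding to OpenAI.
    export function verifyRequest(body: string, timestamp: string, nonce: string): boolean {
      const expected = createHmac("sha256", sharedSecret).update(timestamp + body).digest("hex");
      return expected === nonce && Math.abs(Date.now() - Number(timestamp)) < 60_000;
    }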


Lol. Posted about it a month ago. And reported that to some devs. https://twitter.com/outcoldman/status/1636742564887011329?s=...

Nothing is going to change.


I would think you'd at least keep an API wrapper around it, probably with some kind of IP tracking and auth.


One wonders if ChatGPT could self-replicate, using these API keys as a bootstrap. Lots of individual users' compute would certainly lower the threshold for being cut off for billing reasons.


Why don't they use a server to forward requests to OpenAI instead of exposing any keys? That way they can easily A/B test various AI engines and secure their keys. It is known.


If you want a solution that isn't perfect, but is at least slightly better:

Store the key in your code, but as a basic encrypted string, and then decrypt it at runtime.

Yes, it's still easy to get if someone is motivated, but it's a lot harder to read the machine code and figure out what method was used to encrypt the string (make it a method that can't be figured out from only the encrypted string) than it is to read the plaintext key from the plist.

Bad in theory, helpful in practice.
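
A toy illustration of the idea, not a real protection (as the replies note); the byte values are placeholders, and XOR is just one example of a "basic encrypted string":

    // The pad and the obfuscated key are both shipped in the binary, but the
    // literal key never appears in the binary's string table.
    const PAD = [0x5a, 0x13, 0x7e, 0x21, 0x09, 0x44, 0x6d, 0x32];

    // Produced offline by XOR-ing the real key bytes with PAD (cycled).
    const OBFUSCATED_KEY = [0x09, 0x78, 0x53, 0x4e, 0x64, 0x75, 0x5e, 0x01];

    // Reassemble the key only at runtime, when it's needed.
    export function apiKey(): string {
      const bytes = OBFUSCATED_KEY.map((b, i) => b ^ PAD[i % PAD.length]);
      return String.fromCharCode(...bytes);
    }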


As others have mentioned in the thread, this doesn't guard against a MITM proxy and it'd take a couple minutes to defeat this.

You're much better off proxying calls through your own server API, with proper rate limits and authentication, and a strict API surface that doesn't permit arbitrary calls to whatever APIs you depend on.


Fine for some secrets, but I'd never do this with something like an OpenAI key. Somebody could blow through your entire month's usage allowance before you notice anything.


It's much easier to open ZAP/Burp and intercept iOS/Android traffic to grab API keys.


For some developers, this is sort of intentional. The reason is at least twofold:

1) Calling OpenAI directly is one less hop, so user gets lower latency

2) Not having to set up / maintain a backend server = get to market faster

There are some very popular GPT apps recently that are obviously putting their API keys on the client side - won't name them but they've been featured quite a bit.

The downside is not as bad as people think. Worst case, someone takes your key and what, plugs it into their own app, costing you a few bucks?

- OpenAI keys have a hard budget limit that requires manual approval by OpenAI anyway

- Not much privacy risk - unlike other API keys, OpenAI APIs don't allow you to retrieve previous data AFAIK. There are some APIs to fine-tune models, but I seriously doubt any of these consumer apps are doing that now.

- You can just create a new version later and revoke the old key. And now you've broken the thief's app.

My guess is the developers were well aware of the tradeoffs. Just felt it was more important to get to market faster, than to batten down all the hatches. They're probably right?



