Cache your CORS (httptoolkit.tech)
417 points by aloukissas on Sept 20, 2022 | 112 comments



Unfortunately this caching is still per-path. For example:

    GET /v1/document/{document-id}/comments/{comment-id}
For every new document-id or comment-id, there will be a new pre-flight request.

Alternative hacks: Offer a variant of your API format that either

1. Moves the resource path to the request body (or to a header that is included in "Vary"). Though the rest of your stack (load balancing, observability, redaction) might not be OK with this, e.g. do your WAF rules support matching on the request body? You'll also no longer get automatic path-based caching for GET requests.

2. Conforms to the rules of a CORS "simple" request [1], which won't trigger a pre-flight request. This is what we did on the Dropbox API [2]. You'll need to move the auth information from the Authorization header to a query parameter or the body, which can be dangerous with respect to redaction, e.g. many tools automatically redact the Authorization header but not query parameters.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...

[2] https://www.dropbox.com/developers/documentation/http/docume... (see "Browser-based JavaScript and CORS pre-flight requests")
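A sketch of what option 2 can look like with fetch (the endpoint and parameter names here are illustrative, not Dropbox's actual API; see [2] for that). It also demonstrates option 1, since the resource IDs move into the body:

    // A CORS "simple request": POST with a text/plain content type and no
    // custom headers, so the browser skips the OPTIONS preflight entirely.
    // The access token moves from the Authorization header into the body.
    fetch('https://api.example.com/v1/get-comment', {
      method: 'POST',
      headers: { 'Content-Type': 'text/plain; charset=utf-8' },
      body: JSON.stringify({
        token: '<access-token>', // beware: bodies leak into logs more easily than headers
        document_id: 'doc-123',
        comment_id: 'comment-456',
      }),
    })
      .then((res) => res.json())
      .then(console.log);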


3. Don't allow cross-origin requests in the first place; have your API consumers go through a server-side proxy on the same domain instead, or host the API on the same domain in the first place.


This is the only valid solution and the easiest one to implement. However, for some reason unknown to me, younger devs and various organizations simply refuse to go down this route and make up reasons why it doesn't work for them, opting for more time-consuming alternatives.


Many devs (young or not, age doesn't matter) simply have no idea how CORS works and don't understand the same-origin policy. I've seen hundreds of hours wasted on CORS / OPTIONS request implementations that could've been saved with a reverse proxy, if only they knew what one was.


CORS is one of my favorite interview questions (front-end/React dev), as it has the potential to tell me whether the interviewee has researched the problem and implemented solutions. There is a lot of potential discussion, from how it works and why it's necessary to how it's solved in production vs. development.


CORS is something I 'fixed' once, five years ago. Hard to talk in detail about that anymore. I wish we had the time to implement it safely, but alas. We still can't produce an allowlist of allowed domains :/


> CORS is something I 'fixed' once, five years ago

I'm careful that I don't demand anyone go into depth on any particular subject. CORS is just one problem that seems to need a fix with every new project. It also has a variety of solutions, which is again an opportunity for interviewees to show what they know.

There is little point in looking for what a candidate doesn't know. I've got that covered.


I'll bite. I'm working on an application that uses Firebase, so the front-end is hosted on Firebase hosting (probably with some kind of CDN before that) and available through a firebase-supplied domain, and the back-end runs on Cloud Run behind a cloud-run-supplied domain. The domains are different, so CORS happens.

I'd like you to back up your claim that a server-side proxy or using the same domain is the easiest solution, and in particular, easier than the solutions suggested by cakoose.


There's one way to solve this, and it's to have the same domain that the end client (browser, app) uses. It means you'd create one domain that's exposed to the client, and at the web server level you perform routing (proxying) to the appropriate Firebase/Cloud Run domains.

As for whether that's easier: I've been doing it like this since forever, so I'm biased and it's easy for me. It's easy because I don't have to worry about problems related to cross-origin resource sharing and because I know how to write the necessary configs. If you don't have to walk through a minefield, you never worry about mines. That's the easy part I refer to. I don't even need to test whether CORS is set up properly, worry about preflight and whatnot; it works, forever.

Whether it will be as easy for you, I can't tell; you may have a completely different opinion and be correct about it, but the fact remains that a proxy between two resources removes the problem. I consider a removed problem to be easy; you might not.


I would argue it's all about not throwing a bunch of buckets and rakes on the floor during the networking design phase, which your entire dev, QA, and then DevOps teams will proceed to walk into at every stage that follows.

If you're dealing with a bunch of separate black boxes (as our firebase poster is) then maybe you do have to wrangle CORS but if you're developing your own applications then there is no good reason to introduce these issues into your pipeline.


Somehow, I think putting a proxy in front of Firebase is not the right solution here at all. The same goes when the website is served by any other CDN. Great, we have this thing that can serve a nearly infinite number of requests, has DDoS protection, and is always close to the customer; let's put a proxy in front of it because we don't know how to set up CORS.

CORS ain't that hard.


Why put anything in front of Firebase hosting? Why not use Firebase hosting as the front?

This confirms my initial comment: people making up reasons.

The solution is always the same, it's always as easy, and there's never a need to introduce something new, since you can use what you already have.


> on the web server level

Does that mean that I'd have to introduce a web server in front of Firebase Hosting that I'd have to maintain and scale, not to mention that it negates all advantages of using FB hosting in the first place?

I'll stick with CORS, thank you.


You're jumping to conclusions too fast. You already have the web server, and you can already achieve no-CORS without "introducing" anything new. I'm not trying to convince you, but since you're someone who works in this field, you're drawing the wrong conclusions. Stick to what you know; sooner or later you'll realize that CORS is not as simple and as benign as it may seem.

There's a comment below mine that tells you you can achieve this kind of proxy with Firebase itself. And your comment, again, proves my hypothesis: people just make reasons up with improper arguments.


I've worked with Firebase Hosting for around three years, using Firebase Functions instead of Cloud Run, but in your particular example, you can configure your Firebase Hosting to rewrite requests against your Cloud Run instances as documented in https://firebase.google.com/docs/hosting/full-config#rewrite....

This avoids handling an OPTIONS request for every endpoint. I agree that handling them could be fairly simple, but the Firebase Hosting configuration could very well be less invasive than that change in the rest of the codebase.
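For reference, a minimal sketch of the kind of firebase.json rewrite those docs describe (the serviceId and region values are placeholders):

    {
      "hosting": {
        "rewrites": [
          {
            "source": "/api/**",
            "run": {
              "serviceId": "my-api-service",
              "region": "us-central1"
            }
          }
        ]
      }
    }

With this, the browser only ever talks to the Firebase Hosting origin, and /api/** requests are forwarded to Cloud Run server-side, so no CORS is involved.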


Thanks a lot, I'll have a look into this.


If you serve your static files from a CDN, it's simply not possible to do so.

It's a very common case.


Can you provide an example, just so we can be on the same page? You do an XMLHttpRequest or fetch() to a static asset and it's a non-trivial request to the CDN; I just wonder what it looks like and why it exists in the first place.


Say that you want to serve your API from the same origin as your static files (HTML, CSS, JS, etc.) but you also want said static files to be served by a CDN.

Basically, that won't work, short of making your CDN somehow proxy requests for which no static file exists.


It's an extremely common scenario. The obvious and straightforward solution is to configure path-based routing rules exactly as you describe, to proxy requests which don't correspond to static assets. CloudFront allows you to do this out of the box, as do other major CDN providers. If your CDN doesn't support this, consider switching, or introduce an edge proxy to facilitate it.


We use Fastly (the Hosts feature) to do exactly this. Basically it routes requests to different backends based on the URL path. If it starts with /assets, the backend is S3. If it starts with /api, the backend is our application. If it starts with /blog, the backend is Wordpress. All on different hosting platforms.

(In case it’s not clear, Fastly is also caching and serving these requests as a CDN.)


So that would indicate that you don't do end-to-end TLS on your infrastructure for the API, which means that Fastly man-in-the-middles your API.

In a lot of sensitive businesses that wouldn't be allowed.


Are you sure about that? Have you talked to a good lawyer about it?


This is not necessarily only a legal issue; it's also an information security issue.

You can't guarantee the integrity and privacy of the full request and response chain.

This goes against various security standards and ISOs.

You might be able to get away with it and trust the CDN but that's an awful lot of trust.


Unless you are terminating TLS entirely on owned hardware, you are paying a 3rd party to manage TLS for you.

A lot of people seem to think that there is a big difference between paying a lessor (e.g. Hetzner) for a server on which you terminate TLS, paying a cloud host (e.g. Amazon) to terminate TLS, and paying a CDN (e.g. Fastly) to terminate TLS. Legally there is no difference aside from the specific language of the contracts, which you can review and negotiate in advance.

The difference security-wise is entirely down to the operations of each company, which again you can review and discuss in advance. Strictly speaking, a CDN should have lower risk than a host, since it is not persisting sensitive data (if you set your cache headers correctly). And as discussed above, using one domain helps avoid cross-domain security concerns.


You're putting a lot of trust in your CDN, anyway. If your CDN gets hacked, what's stopping your frontend code from being updated to send your API requests somewhere else? Maybe they get rerouted to a proxy, then back to you...


Serving said JS on a CDN implies that the JS performs XHR/fetch, which means I have control over said CDN.

Given the fact I have control over domain(s), I'd have http://domain.tld/api for API and serve static js from http://domain.tld/static/*.js

Given the fact I have the need for CDN, it means I've got enough traffic that justifies the bill incurred


> you want to serve your API from the same origin as your static files (HTML, CSS, JS, etc.)

What is the benefit or purpose of doing this?


You don't have CORS configuration to worry about. It simplifies development and deployment.


You don't need to set up CORS to load assets on a page from a CDN's origin. Cross-origin taint only matters when trying to read asset data from a script.

I think we may be talking past each other and one of us is misunderstanding OP.


Yes, but you need it if your API isn't on the same origin as your static assets.


Only if you need to read those assets from JavaScript with a cross-domain request… right? (nb: you can always insert cross-origin assets into a web page, with e.g. a video or img tag; you just can't read the data from JS)

Say you stand up a website with an API at https://example.com. You host your static assets on the CDN, at https://example.myfastcdn.com/. So example.com's home page looks like:

    <head>
      <script src="https://example.myfastcdn.com/app.js"></script>
    </head>
    <body>
      <div id="site-root"></div><!-- app.js renders some application here -->
    </body>
The origin when you load your site is https://example.com, so the scripts hosted on the CDN can still make API requests to https://example.com.

What am I missing here?


IIRC Cloudflare has page rules, which make this possible? In fact, with page rules, Cloudflare can proxy a subpath (like /api) to a completely different domain.


As I mentioned above, this doesn't work for all CDNs, and it also involves trusting your CDN to MitM your API.


This kind of comment falls under what I posted originally - people making up reasons why they can't go with proxy solution.


I don't think that security requirements are a "made up" restriction.

It's like saying that a house built without a lock is a made-up issue and that no lock or door is needed.


They're not, but you're blatantly refusing to read what's being written.

A public CDN should never be trusted. If you use a CDN in the first place and have strict security requirements, then you create your own private CDN. And if you control that private CDN, you have all the ingredients to avoid CORS.

It's really that simple. No one is saying you are wrong, but you're refusing to look at the entire picture and you focus only on a subset, in which, of course, your argument works.


So your point is that there is no reason not to serve everything behind the same origin; it only requires setting up a full-fledged CDN to do so.

I'm sorry but that's simply not an acceptable constraint.


I'm sorry that we ended up discussing this, because all you did was invent situations and argue with people who didn't even state any of what you managed to read.

No one is telling you not to deal with CORS your way. The fact of the matter is that you can avoid it, but you're making up reasons why you can't. The only reason you can't is that you won't. You're free to use whatever approach you like, there's no police here; just don't state that I or anyone else wrote what we didn't. It'd be the grown-up thing to do. Thanks, and best of success with your projects.


Well, sure. But, as a non-expert, CORS kind of makes sense to me in development. What would you suggest? It's an honest question.


A web server (nginx) that can proxy your request to the other domain, but your browser sends everything to one domain, thus avoiding CORS.

Example: you have http://ui.localhost and you have http://api.localhost

UI speaking to API = CORS

But, instead of doing fetch('http://api.localhost/resource'), you do fetch('http://ui.localhost/api/resource')

In the nginx config for the ui.localhost domain, you create a rule that says "everything that starts with /api: intercept it, remove /api from the start of the path, and send the rest to http://api.localhost, ending up with http://api.localhost/resource".

I do frontend and backend development and I have this setup with docker-compose, the config for nginx is really trivial and widely available in many tutorials.
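A minimal sketch of that nginx rule (names and paths are illustrative, assuming the API container is reachable as api.localhost):

    server {
        listen 80;
        server_name ui.localhost;

        # Everything starting with /api is forwarded to the API backend.
        # The trailing slash on proxy_pass strips the /api/ prefix, so
        # /api/resource becomes http://api.localhost/resource.
        location /api/ {
            proxy_pass http://api.localhost/;
            proxy_set_header Host $host;
        }

        # Everything else is served as the UI.
        location / {
            root /usr/share/nginx/html;
            try_files $uri $uri/ /index.html;
        }
    }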


Scenario: in production, assume ui.example.com is only for static resources/SSG and api.example.com is for dynamic API endpoints. We usually protect the API domain with a WAF in the CDN, which costs extra and is typically unnecessary for the UI domain. So in this case, by doing this reverse proxy, we will bypass the WAF layer, or at least feed the WAF incorrect data (our server is requesting instead of the user directly). Since a WAF usually has extra (significant) costs, what would you suggest in this case?


Forward the correct data; then it makes no difference to the WAF whether it's you or the user requesting.

That's why we have various controls with proxies, such as including the original requester's IP etc.

If you pass the HTTP request info along unaltered (except for the path), it's irrelevant who actually asks for the data; the WAF can do its job. That's the beauty of HTTP and its stateless nature. You can scale infinitely, do various things such as this one, and get the expected result.
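For example, the conventional forwarding headers in nginx look like this (a sketch; check which headers your particular WAF actually keys on):

    location /api/ {
        proxy_pass http://api.example.com/;
        # Pass the original requester's details through the proxy so the
        # WAF/backend sees the real client, not the proxy.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }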


Running a proxy on localhost isn't very difficult; it requires maybe 10 or 20 lines of nginx config, less with Caddy? Certainly something that can be stuffed into the README for developers.


Or better yet, into the devcontainer


Authentication with a SAML provider can make this a pain in the ass. Especially if you don't have control over the provider.


Alternatively, stick to "simple requests". That's HEAD, GET, and POST, without any custom headers or a non-form content-type set. This adds some further limitations (no ReadableStream body being one of them). If the backend responds with an appropriate Access-Control-Allow-Origin, then the request will just succeed.


> 3. Don't allow cross-origin requests in the first place; have your API consumers go through a server-side proxy on the same domain instead, or host the API on the same domain in the first place.

That works for first-party JS. Doesn't work for a public API used by others.

Edit: Specifically purely client-side apps. For someone hosting a static HTML+JS app, it's annoying to have to set up and run a server-side route just to circumvent CORS.

(Maybe not so bad with something like Next.js, where it's easy to add a backend route to your primarily static website.)

And it adds an extra hop of latency to every request.


Yes, CORS is best avoided when your own back end is the "third party." (In some cases, it may be impossible though.)


I feel like this is overkill when you have something like OPTIONS which is very minimalist in its response. Unless I'm missing some obvious drawback with calling OPTIONS?


Well, you double the latency for every. single. request.


CORS does not double latency for every single request. It adds the additional overhead of a separate OPTIONS request for every single request. Unless the responses are as trivial to compute/serve as OPTIONS responses are, that will be way less than a doubling in latency.


What I mean by latency is the transport latency, even assuming 0ms OPTIONS compute time, you still need twice the round trip time.


> Offer a variant of your API format that either: 1. Moves the resource path to the request body

GraphQL

ducks


> GraphQL

And now you have two problems


I got N+1 problems but CORS ain't one


But you have a nice schema for your problem :)

GQL is a bit ugly, but works well, kind of standardized, etc.

Is there something similar for providing a batch endpoint for OpenAPI requests?


  GET /blog/1?with=comments,author&only=title,body,created_at,comments.body,author.name


> Moves the resource path to the request body

JMAP is very well suited to CORS due to this: https://www.rfc-editor.org/rfc/rfc8620.html


Yeah. For example, the Meilisearch search engine recommends submitting idempotent searches over POST and not GET due to this: https://docs.meilisearch.com/reference/api/search.html

I wish they'd standardize HTTP QUERY soon: https://datatracker.ietf.org/doc/draft-ietf-httpbis-safe-met...


> Conforms to the rules of a CORS "simple" request [1], which won't trigger a pre-flight request.

I was about to ask if OPTIONS would be sufficient, but it looks like some of the MDN URLs suggest just that.


To hijack the thread a bit, if you are still with Dropbox, could you get them to implement what you did in #2 in the official Dropbox JS SDK? Right now it still does a pre-flight request for everything.


No, I left Dropbox 5 years ago.

But it might be easy to add? https://github.com/dropbox/dropbox-sdk-js/blob/main/src/drop...

Make sure to always set the URL parameter "reject_cors_preflight=true", which will make sure you're not inadvertently triggering pre-flight requests.


I built fetch-robot (https://github.com/krakenjs/fetch-robot) to avoid dealing with CORS preflight requests. And the associated maze of request and response headers you need to use to negotiate in the preflight.


This is pretty great. We have a number of applications that have different API and frontend origins, and it's frustrating to see that every request needs another roundtrip.

So I started thinking about how we did things pre-CORS and I've been wanting to build this frame-proxy in our API. Pretty helpful!


This is actually a very clever hack. What are the gotchas?


It's 44kb. Although I'm sure it could be made much, much smaller.


Hey, I'm the author of this post, thanks for sharing it @aloukissas!

I've also built a tiny "just tell me what to do" SPA for debugging your CORS configurations that everybody reading this might find useful too: https://httptoolkit.tech/will-it-cors/


Sorry, I am not an expert, but I just wanted to raise that I recall that in our firm we had to disable CORS caching a while back, as it was non-compliant with some security standards. I think it had to do with this: https://vulncat.fortify.com/en/detail?id=desc.dynamic.html.h...


The article you mention targets prolonged caching, which it defines as more than 30 minutes. Disabling might not have been necessary?


> In practice, almost all cross-origin API requests will require these preflight requests, notably including

At Clerk, we took the opposite approach, and restructured our API so it fits within the narrow window that does not require a preflight

I wrote a little about it here: https://clerk.dev/blog/skip-cors-options-preflight


Very good and important advice; I would advise caution when implementing it, though: consider a shorter period when you make changes to CORS headers, so that if you make a mistake you don't accidentally cut off your frontend for 24 hrs. I guess you could hack your way around it in an emergency, but it'd still be painful.
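A sketch of such a conservative rollout, assuming an Express-style backend (the origin, route, and values are illustrative):

    const express = require('express');
    const app = express();

    const ALLOWED_ORIGIN = 'https://app.example.com'; // illustrative

    app.options('/api/*', (req, res) => {
      res.set({
        'Access-Control-Allow-Origin': ALLOWED_ORIGIN,
        'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE',
        'Access-Control-Allow-Headers': 'Authorization,Content-Type',
        // Start low (10 minutes) while the config is in flux; raise it
        // toward the browser caps (2h Chrome, 24h Firefox) once proven.
        'Access-Control-Max-Age': '600',
      });
      res.sendStatus(204);
    });

    app.listen(3000);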


The article mentions that Chrome max value is 7200 seconds (2h) but FF is 24h.

In practice, for our app we rolled out with 600 seconds to make sure things were working. We now run at 28800 (8h) (but effectively 2h on Chrome).


Oh yeah, I did observe that, but I couldn't be bothered to make the distinction. In my head, that's the worst case for a decent portion of users (2-5%, depending on where your users predominantly live and other demographics), so I consider it a 24 hr outage.

Your approach definitely sounds sane; I would encourage others to do the same rather than following the article from the get-go.


Just like DNS!


Oh yeah, DNS can be very painful. Definitely been burnt in the past. I generally lower the TTL to 5 minutes a day or so ahead of making any changes just to reduce risk, but DNS is even worse given that not everyone even respects TTL.


I've taken to keeping my TTLs at 5 minutes as a default for personal stuff. The potential extra latency of a full lookup every time you access a rarely accessed resource is fine, and even for commonly accessed things the performance difference is negligible. Though I'm aware I'm putting a little extra load on DNS caches elsewhere, as they need to make extra recursive queries, so I might not do that for a high-traffic service.

> not everyone even respects TTL

This used to be a problem with at least one common DNS cache, where it would see a very small value as an error and apply its own default (24 hours IIRC) instead. 10 minutes was fine, but 9m59s and it would not update until next day (the threshold may not have been 10 mins, it could have been 500s (8m20s), but it was something of that order).

I'm pretty sure that is no longer a common DNS daemon, or if it is, that behaviour has been fixed, so these days I'm not really concerned for my projects. For work things I might be a bit more restrained with short TTLs, just in case (for personal projects I can take the “it is not my fault your DNS setup is broken” line, but that sort of attitude doesn't always fly in a commercial environment!).


I used to work for a very large DNS service that charged by query count. The metrics said that reducing the DNS TTL from 24h to 10 mins only increased the number of requests by some small percentage (my memory is failing; I want to say 10%) due to a hundred external factors, including companies not respecting TTLs. We usually recommended they keep it below 5 mins, and could show that query counts wouldn't scale linearly.


Absolutely. The SOA record is important too, to cover queries not taking the happy path.

However, once I confirm that everything is working alright, I crank the TTL up to one day to reap some performance gains. For sites with few visits, which are unlikely to have the DNS query cached, the query latency can add up, sometimes going as high as 100ms if the recursive resolver has to take several round-trips to resolve the whole domain chain at the EDNS level.


Excellent suggestion.

These days Chrome hides preflight requests by default, so you fail to notice the latency added to each of those CORS calls.

Also, we don't deal with CORS unless it's an external plugin that we include in our site. Earlier we had subdomains like api..com and static..com to parallelize network requests, which required CORS to be set up. With H2 we got rid of all of them and load everything from a single domain. This reduced the CORS surface area.


What’s H2?


There is an in-memory database called H2. Anyway, I have never seen h2 used as a short form of HTTP/2.


HTTP/2


Yes, it is HTTP/2. "h2" is technically HTTP/2 over TLS.


Access-Control-Max-Age unfortunately has a big security caveat, which is that the result is cached on a per-endpoint basis. Because Access-Control-Allow-Origin only allows a single origin specification, if you previously used the Origin header to determine who could access the API, your next API requester will effectively get your last response.

For example, to allow abc.com AND bcd.com, you could check Origin and, if correct, return Access-Control-Allow-Origin: *. In this case, setting Access-Control-Max-Age will mean this applies to any site after a single successful request. If you instead return the requesting site's name, this will break CORS for the other sites for that amount of time if an attacker makes a single request.


This is because you have forgotten to return a `Vary: Origin` header in the response. If you don’t do this, caches will presume the response is the same regardless of the Origin header in the request and so you will get the bug you describe.
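The usual multi-origin pattern being described, sketched with Node's built-in http module (the origin list is illustrative; note the reply below, which points out that Vary does not affect the browser's own preflight cache, only intermediary HTTP caches):

    const http = require('http');

    const ALLOWED = new Set(['https://abc.com', 'https://bcd.com']);

    http.createServer((req, res) => {
      const origin = req.headers.origin;
      if (origin && ALLOWED.has(origin)) {
        // Reflect only the validated origin, never '*'.
        res.setHeader('Access-Control-Allow-Origin', origin);
      }
      // Tell intermediary HTTP caches that the response varies by Origin.
      res.setHeader('Vary', 'Origin');
      if (req.method === 'OPTIONS') {
        res.setHeader('Access-Control-Allow-Methods', 'GET,POST');
        res.setHeader('Access-Control-Max-Age', '600');
        res.writeHead(204);
        return res.end();
      }
      res.end('ok');
    }).listen(3000);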


Vary has no effect on the CORS preflight cache: https://stackoverflow.com/questions/42848208/cors-preflight-...


Of course, note that some common CDNs just ignore the `Vary` header. cough Cloudflare cough. Also, IIRC, not refusing to cache anything with a `Vary` header, but caching it anyway and serving it no matter what headers the client sends.


I'm a big fan of "BFF" (Backend For Frontend), which among other benefits, sidesteps CORS altogether. See eg https://remix.run/docs/en/v1/guides/bff


Damn, I am kinda surprised we don't have this enabled on our site. Thanks for dropping this tonight.


A few years ago (last I looked), most big ecommerce sites missed this. Even ones that had invested quite a lot in the performance of their site.


We just use a path prefix and a reverse proxy to save the pre-flight request altogether.


It could have other security consequences, e.g. an XSS in the API is all of a sudden exploitable, just as an example.


There are solutions for that; for example, it can also be avoided in prod by having a JS-specific subdomain that's the only domain whitelisted by the CSP, separate from the main API. HTTP/2 connection pools should be recycled and simple <script> inclusions don't require CORS so I don't expect many downsides. As an added bonus, such a configuration would be easier to use in combination with a CDN.


"just use"!

What if the API is shared between multiple domains? Do you need to reverse proxy it everywhere?

What if it is a public API for third-party sites?


Yes, just use. The fact that less common edge cases exist doesn't undermine the solution. The example in the article was example.com and api.example.com, the most common setup imo.

> What if the API is shared between multiple domains? Do you need to reverse proxy it everywhere?

It would depend on your specific setup. We use a single nginx server entry to proxy many domains.

> What if it is a public API for third-party sites?

Then this approach would not be viable.


I could be in some sort of bubble, but I can hardly remember a case where the dynamic part of a request has not flowed through some sort of reverse proxy or fully capable web server (like Apache) anyway. On dev environments, of course, I've seen it, but not in production.


This is a good practical article. One minor clarification:

> cross-origin API requests will require these preflight requests, notably including: … Any request including credentials

No, setting XMLHttpRequest's withCredentials:true and fetch's credentials:"include" to send the user's Cookie with the request does not imply that a preflight request must be made, since <script> and <form> sent the Cookie with cross-site requests (back in the CSRF days, when the default Cookie SameSite flag was effectively SameSite=None). Maybe he was referring to a custom header such as Authorization, which does trigger a preflight.
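A quick sketch of the distinction (illustrative URL; for the credentialed case to be readable, the server must also send Access-Control-Allow-Credentials: true and a non-wildcard Access-Control-Allow-Origin):

    // No preflight: a plain GET with cookies is still a "simple" request.
    fetch('https://api.example.com/me', { credentials: 'include' })
      .then((res) => res.json())
      .then(console.log);

    // Preflight: the custom Authorization header makes the request
    // non-simple, so the browser sends an OPTIONS request first.
    fetch('https://api.example.com/me', {
      headers: { Authorization: 'Bearer <token>' },
    })
      .then((res) => res.json())
      .then(console.log);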


I hate CORS; it's the biggest pain in the ass I constantly have to deal with.


Doesn't moving the API from the subdomain to the /api path on the same subdomain as the website solve the problem?


Yes, but 1) you would need a proxy to route the /api path to another application, 2) the API could easily be hosted separately, so you would send your request via the main origin server just for it to actually go somewhere else, and 3) you are not necessarily talking about a private API for one app, but a shared API for multiple domains, in which case this is not a solution.


A quick hack is to cache OPTIONS in Varnish.


Wow, thank you, I'd never heard about this before. I really hope this is not entirely correct, but I will check our sites for sure. If OPTIONS requests often bypass the edge servers, some of the services we use are way better than expected.


CORS = Cross-Origin Resource Sharing.

There, that wasn't so hard, was it? I assume this is written for web developers who find this the most familiar acronym in the world, but ... still, it would not kill anybody to include the definition of the acronym, perhaps even with a friendly link [1] to make it Even More Accessible.

I'll be off looking at the lawn mowing robot, now.

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS


In the web development world CORS is as common an acronym as REST or TCP. I sympathise with the endless need to look up acronyms but in my view anyone involved enough with web to need the above advice would (or should) know the acronym.

I do appreciate your point of view, but inclusiveness shouldn't come at the cost of brevity when those who would be included wouldn't benefit from it.


If the domain wasn't literally called "httptoolkit" it would be hard to know that this article relates to the "web development world" at all. Though I do imagine most people on HN are part of that world.


Brevity? It's a 1,500+ word article.


[flagged]


> They can't even spell rest.

Neither can you. It is REST.


Downvotes for a comment like this are really the low side of Hacker News.

Me: web developer since 1996. I had to look it up. Came here to make the same comment, and see you being lambasted for it.

Don't let these haterz get you down. It's standard practice across ALL domains to define acronyms, and those who give this article a pass because ReASonS!! aren't people I'd willingly choose to work with: anglo-saxons who believe the whole world shares their lived experience, and their mental model.

Hacker News is just horrible for that attitude. Accessibility is a thing.


Yeah, thanks for the support.

I guess I need to work on my tone, less snark more helpful. It's a bit comforting at least that people like you exist, who are web developers but still didn't know this one. :)


I guess the passive-aggressive tone was what triggered the downvotes.

If instead this said something like: "For the uninitiated, CORS stands for ...", I don't think it would've attracted any downvotes.


as far as acronyms go this is a pretty common one that a hacker news crowd should know without spelling out. shrug




