Hacker News | e1g's comments

Yes. As a rule, US early-stage investors will not write a check to a non-US entity. The only exception I've seen is when the investor had personal connections to the founders, but no rules apply then.


Our front end is ~200k LOC of TypeScript and all changes are instant (<1s).

The TypeScript compiler is too slow for hot module replacement, so it's used only for IDE feedback. During development, all transformation happens via esbuild/swc and is abstracted away by Vite: https://vitejs.dev/
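For reference, a minimal Vite setup along these lines looks roughly like the sketch below (option values are illustrative, not the poster's actual config); esbuild does the TypeScript-to-JS transform by default, with no type checking in the loop:

```typescript
// vite.config.ts — minimal sketch; Vite uses esbuild for TS transpilation
// by default, so types are stripped but never checked during dev
import { defineConfig } from "vite";

export default defineConfig({
  esbuild: {
    target: "es2020", // transpile target; type checking stays in the IDE/CI
  },
  server: {
    hmr: true, // hot module replacement: sub-second updates on save
  },
});
```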


esbuild does not do type checking. You must invoke tsc explicitly to do that.


Type-checking is helpful in your IDE (for developer hints) and in your CI (for verification), but you don't want type-checking in your hot-reloading dev loop.
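A typical way to wire up that split (script names here are hypothetical) is to keep the dev loop transpile-only and run tsc as a separate verification step:

```json
{
  "scripts": {
    "dev": "vite",
    "typecheck": "tsc --noEmit",
    "build": "vite build"
  }
}
```

CI runs `typecheck` on every push; `dev` never waits on the type checker.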


I pointed that out because your previous comment could be misinterpreted to mean you do full type checking in your dev cycle, which you probably don't.


On what hardware? I have an M3 and, yeah, it's terrible with TS. Instant (milliseconds) with CL (or even Elisp). Go is not terrible.


Same, M3. The DX within a modern frontend stack is indistinguishable from Bret Victor’s ideas (even if a decade late).


Ok, when can we meet? I have never seen it work, and, as said, I review 100s of projects a year; everything TS is super slow so far. Maybe you have something.


Their hot-reload cycle is fast because esbuild doesn't type-check the code; it just strips the types from TypeScript so it turns into JS (it can also minify and tree-shake, but in dev those are probably disabled). I've written some esbuild plugins and can confirm that on incremental builds, esbuild will probably never take more than a few ms even on larger projects: it doesn't matter how big your project is, esbuild only rebuilds what changed, which is usually just a few files.


No one wants to show me, though. Is that not weird? Fanboys say it's fast, but not one person has even sent an open-source project that demonstrates it. I don't understand that. Please show me a non-trivial project on GitHub that does this in milliseconds like you say. All I try are slow af. Emacs is notoriously slow as a Lisp; it is always faster, for me, than anything TS, and it's definitely not trivial. SBCL/CL blows it all away. Please, an example in TS, or it didn't happen.

Download some horror show like Unkey and show a video of its millisecond hot reload…


You are asking someone else to demo something trivial that takes 15 minutes to set up yourself (with Vite). Nobody is biting because it's a strange (and, frankly, lazy) request.

If you tried and still think such a setup is not possible, send me an email (in profile) and I can do a 10-15 minute show&tell.


I moved to NYC and London and both were great for integration because so many “locals” were not born there, and do not fall back on their college friend group.


RHEL is free for up to 16 systems (physical or VMs). For CI, you can run Rocky/Alma to ensure continuous compatibility if you grow past 16 beefy VMs.


>RHEL is free for up to 16 systems

Only until they change their mind. I would not build my business on a promise made by IBM ;)


But you'll build your business on software you get for free on the internet with absolutely no commitment behind it?


>absolutely no commitment behind it?

I think commitment to your baby (your project) is morally superior to commitment to just making your stakeholders happy... no?

BTW: I buy GitHub DVDs; I don't load it from the "Internet".


Until they have an actual baby and don't have free time to spend maintaining large projects for free.


>to spend maintaining large projects for free.

What is your point, anyway?

That all of Debian will have a baby? Or that all of Arch Linux gets impregnated at once? Big projects are usually not led by a single person... oh wait, Linux is, and now Linus cannot have sex anymore. Thank you, OP.


> They were all looking to raise nearly identical rounds: $1.5 million to $2 million

On top of $500k from YC. A tiny $2M+ total; barely anything for two people and a dream.


Perplexity recently released something like this https://www.perplexity.ai/hub/blog/perplexity-pages


That is almost certainly what's happening here. They raised $3M three years ago, at the peak of valuations, and don't have the metrics to raise a Series A in the current climate. They're running out of money and want to leave some artifact behind. A very difficult and emotional transition.


> The CDN can maintain a persistent connection to the backend that is shared across users

We considered using Cloudflare Workers as a reverse proxy, and I did extensive testing of this (very reasonable) assumption. Turns out that when calling back to the origin from the edge, CF Workers established a new connection almost every time, and so had to pay the penalty of the TCP and TLS handshake on every request. That killed any performance gains, and was a deal breaker for us. It’s rather difficult to predict or monitor network/routing behavior when running on the edge.


This didn't sound right to me so I did some investigation and I think I found a bug.

Keep in mind that Cloudflare is a complex stack of proxies. When a worker performs a fetch(), that request has to pass through a few machines on Cloudflare's network before it can actually go to origin. E.g. to implement caching we need to go to the appropriate cache machine, and then to try to reuse connections we need to go to the appropriate egress machine. Point is, the connection to origin isn't literally coming from the machine that called fetch().

So if you call fetch() twice in a row, to the same hostname, does it reuse a connection? If everything were on a single machine, you'd expect so, yes! But in this complex proxy stack, stuff has to happen correctly for those two requests to end up back on the same machine at the other end in order to use the same connection.

Well, it looks like the heuristics involved here aren't currently handling Workers requests the way they should. They are designed more around regular CDN requests (Workers shares the same egress path that regular non-Workers CDN requests use). In the standard CDN use case, where you get a request from a user, possibly rewrite it in a Worker, then forward it to origin, you should be seeing connection reuse.

But, it looks like if you have a Worker that performs multiple fetch() requests to origin (e.g. not forwarding the user's requests, but making some API requests or something)... we're not hashing things correctly so that those fetches land on the same egress machine. So... you won't get connection reuse, unless of course you have enough traffic to light up all the egress machines.

I'm face-palming a bit here, and wondering why there hasn't been more noise about this. We'll fix it. Talk about low-hanging fruit...

(I'm the tech lead for Cloudflare Workers.)

(On a side note, enabling Argo Smart Routing will greatly increase the rate of connection reuse in general, even for traffic distributed around the world, as it causes requests to be routed within Cloudflare's network to the location closest to your origin. Also, even if the origin connections aren't reused, the RTT from Cloudflare to origin becomes much shorter, so connection setup becomes much less expensive. However, this is a paid feature.)
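The connection-reuse effect under discussion is easy to observe locally with Node's keep-alive agent. This is a self-contained illustration of why handshake reuse matters (a local loopback demo, not Cloudflare's egress machinery): with a pooled agent, the second sequential request rides the already-open TCP socket.

```typescript
// Local demo: with a keep-alive agent, the second sequential request reuses
// the pooled TCP socket instead of paying for a new connection setup.
import http from "node:http";

function get(agent: http.Agent, port: number): Promise<boolean> {
  return new Promise((resolve, reject) => {
    const req = http.get({ port, agent, path: "/" }, (res) => {
      res.resume(); // drain the body so the socket returns to the pool
      res.on("end", () => resolve(Boolean(req.reusedSocket))); // true if pooled
    });
    req.on("error", reject);
  });
}

export async function demo(): Promise<{ first: boolean; second: boolean }> {
  const server = http.createServer((_req, res) => res.end("ok"));
  await new Promise<void>((r) => server.listen(0, r));
  const port = (server.address() as { port: number }).port;

  const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });
  const first = await get(agent, port);  // opens a fresh connection
  const second = await get(agent, port); // reuses the pooled socket
  agent.destroy();
  server.close();
  return { first, second };
}
```

The same principle is what makes a shared egress connection from the edge to origin so valuable: only the first request in a window pays the TCP (and, for HTTPS, TLS) setup cost.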


> So if you call fetch() twice in a row, to the same hostname, does it reuse a connection?

In my testing, the second fetch() call from a worker to the same origin ran over the same TCP connection 50% of the time and was much faster.

We want to use Workers as a reverse proxy: pick up all HTTP requests globally and route them to our backend. So our use case is mostly one fetch() call (to the origin) per incoming call. The issue is that incoming requests arrive at a ~random Worker in the user's POP, and it looks like each Worker isolate has to re-establish its own TCP/TLS connection to our backend, which is slow (and happens ~90% of the time).

What I want is Hyperdrive for HTTPS connections. I tried connecting to the backend via CF Tunnel, but that didn't make any difference. Our backend is accessible via AWS Global Accelerator, so Argo won't help much. The only thing that made a difference was pinning the Worker close to our backend: connections to the backend became fast(er) because the TLS roundtrip was shorter, but that's not a great solution.


> The issue is that incoming requests arrive at a ~random Worker in the user's POP, and it looks like each Worker isolate has to re-establish its own TCP/TLS connection to our backend, which is slow (and happens ~90% of the time).

Again, origin connections are not owned by isolates; there are proxies involved before we get to the origin connection. Requests from unrelated isolates can share a connection, if they are routed to the same egress point. The problem is that they apparently aren't being routed to the same point in your case. That could be for a number of reasons.

It sounds like the bug I found may not be the issue in your case (in fact it sounds like you explicitly aren't experiencing the bug, which is surprising, maybe I am misreading the code and there actually is no bug!).

But there are other challenges the heuristics are trying to solve for, so it's not quite as simple as "all requests to the same origin hostname should go through the same egress node"... like, many of our customers get way too much traffic for just one egress node (even per-colo), so we have to be smarter than that.

I pinged someone on the relevant team and it sounds like this is something they are actively improving.

> The only thing that made a difference was pinning the Worker close to our backend - connections to the backend became fast(er) because the TLS roundtrip was faster, but that's not a great solution.

Argo Smart Routing should have the same effect... it causes Cloudflare to make connections from a colo close to your backend, which means the TLS roundtrip is faster.


Thank you for looking into it in such detail based on an unrelated thread!

Cloudflare seems to consistently make all kinds of network improvements behind the scenes, so I'll continue to monitor for this "connection reuse" fix. It might just show up unannounced.


Were you using a Cloudflare tunnel for your origin?


Yes, tried tunnels too. There is significant variability among individual requests, but when benchmarking at scale I found no meaningful difference in p50 and p90 between "Worker -> CF Tunnel -> EC2 -> backend app" and "Worker -> AWS Global Accelerator -> EC2 -> backend app".


sama recently said they want to allow NSFW content for personal use but need to resolve a few issues around safety, etc. OpenAI is probably not against sexting philosophically.


This deal makes sense if it's connected to revenue, not tenure.

If the contract is framed around time, then understand that you'll be just one of many hobbies for a newly retired exec who wants to feel young and important by dabbling in startups. Those nine months can go by quickly with minimal part-time participation, and then you'll have dead weight on your cap table.

A better alternative is to base their earned equity on milestones: for the next 12 months, every $100k ARR they directly bring in earns them 2% of the company, up to 10%. If they play golf while you grow the business, they get nothing. If they are a killer and bring in $500k of ARR, you'll have your GTM, you'll raise a strong Seed round at a valuation of $15-$20M, and they'll get their 10% (less the shared dilution of the Seed round).
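The milestone math above reduces to a one-line formula. A quick sketch (the function name and rounding choice are illustrative; the numbers are from the comment):

```typescript
// Milestone-based advisor equity: 2% per full $100k of ARR brought in,
// capped at 10% of the company.
function earnedEquityPct(arrBroughtIn: number): number {
  const pct = Math.floor(arrBroughtIn / 100_000) * 2;
  return Math.min(pct, 10);
}
```

So $0 of ARR earns 0%, $250k earns 4% (only full $100k increments count in this sketch), and $500k or more hits the 10% cap.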


That's a great way to put it. It makes a lot of sense. Exactly what you said: there must be an incentive for him to do the work, and we need a way to measure it. Thank you very much for the insights! This is super helpful. I'm not sure how he will see this, but I think it's a good option to put on the table.

