darsoli's comments | Hacker News

Although I read about the supply chain bottlenecks, I don't feel like it's impacted me yet. In fact it feels like there are more "Same Day" options than ever before (Amazon specific, but ditto for in-person shopping elsewhere). I live in NYC so not sure if that makes a difference or not.


I think it heavily depends on what type of goods you are buying. I've not seen issues for basic consumer goods, but in my professional role laptops and other goods that require chips are experiencing some significant delays. Steel roofing for the gazebo I'm building doubled in price, steel tubing for welding has gone way up and orders are delayed. My argon tanks for welding cost 25% more to fill today than pre-covid. Hard to say how much of this is shortage vs price gouging though.


Is there only one transform option, or could you offer several? That might be a good way of satisfying both those who want a very fancy-looking image and those who want specific functionality (magnifying certain portions of a window, etc.).


Help centers, along with "self-service knowledgebases", are just ways of making it harder to find the contact us page.


That's sometimes true, but I think (like mentioned in the article) a combo of a knowledge base with clear ways to contact a human usually leaves a good taste in my mouth.


I agree that SEO has made it harder and harder to find good content on the internet.

I don't agree with Google being blamed for this. They're trying desperately to fix the problem. Maybe they're not as effective as you'd hope, but why would a "multipolar" search world be better? Wouldn't all search engines be plagued with people trying to game the system?


Google's dominance creates a monoculture, so that any weakness in their ranking is immediately exploited by SEO folks. If there were multiple real search engines using different ranking algos, it would be harder to game them all at once, which would shift the balance somewhat towards just making good content.


Even with multiple search engines, if they are all trying to solve the same problem, you'll get convergent evolution.

Let's say you have 10 search engines each with their own algorithm. 1 of them has an exploitable weakness. Either they close this weakness by switching to one of the superior algorithms or they get exploited and become much less useable than their competitors. In either case, you soon have 9 search algorithms. Repeat this process a few times and you're back to a monoculture. Best case scenario, you reach a stable equilibrium where a few different search engines are all plagued by different but roughly equivalent issues, such that none is good enough to beat out the others.
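The winnowing process described above can be sketched as a toy simulation (purely illustrative; the engines and weakness scores are made up, and real ranking dynamics are far messier):

```python
def rounds_to_monoculture(engines):
    """Toy model: each round, the engine with the most exploitable
    algorithm abandons it for the least exploitable one, so the number
    of distinct algorithms shrinks until only one remains."""
    algos = list(engines)  # (algorithm_name, weakness_score) pairs
    rounds = 0
    while len({name for name, _ in algos}) > 1:
        best = min(algos, key=lambda e: e[1])   # least exploitable
        worst = max(algos, key=lambda e: e[1])  # most exploitable
        algos = [best if e == worst else e for e in algos]
        rounds += 1
    return rounds

# Ten engines with distinct weaknesses collapse to one algorithm in nine rounds.
```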


> Wouldn't all search engines be plagued with people trying to game the system?

Remember that a big selling point for Mac computers is that "they don't get viruses". Since that's not literally true, it's taken to mean that the number of viruses targeting macOS is minuscule compared to Windows.

An alternative search engine that doesn't play favourites (AMP, Youtube results) and isn't predicated on selling ads could definitely do better in answering queries in specific niches.


Google is the source of the problem: If they didn't rank search results and aggregate "eyeballs" there would be nothing to optimize.


If you don't want ranked results, you can always type random strings into the url bar.


Yeah, they're trying to combat it, but they also fill your screen with ads, which aren't any better than the "SEO optimized" trash on the first page.

Half a screen of shit, either way. The difference is in who makes the money from it. And it's not the user.


The product seems to hit a really important use case (I have created my own SaaS products in the past, and subscriptions, authentication, and permissions are huge PITAs).

One suggestion - your homepage copy is very developer oriented and 'feature' oriented, whereas it could be less technical and more benefit-oriented.

I think changing the copy / adjusting the focus could go a long way. See Outseta.com (I have no affiliation to them) for an example of a company that seems to be doing something similar to you but with more benefit-oriented, business/revenue-focused copy.


Thanks for the suggestion!

I agree with you re: the copy and being more benefit-oriented. As is probably obvious, I am a developer first and a marketer a distant second.

Also, I appreciate the pointer to Outseta.


Now if it can generate different styles of pies (Detroit, Chicago, NYC, Neapolitan) instead of what looks like Pizza Hut fare (not an insult, just a fact), then Instagram pizza-porn account, here we come!


The trick would be to hook this up to some kind of automatic biological feedback loop so that it could measure how delicious the pizza seems _to you_ and iterate until it has produced your perfect slice.


Hawaiian will always win for me


You are a person of fine tastes


This post seems to go into more detail on the technical architecture.

I've read a lot of posts in this style, and oftentimes the results don't end up being that interesting - but I have to say this post is different. Nice job, and a good overview of all the different layers in the stack. I didn't realize there was a meaningful difference between Cloudflare Workers and Lambda, but now I'll have to check it out.


Indeed, Cloudflare Workers have much lower latency and much, much quicker cold starts (to the point that you probably won't notice them). IMO this makes them much more useful for typical web APIs than Lambda.


Workers and cold starts? If you count 0ms as a cold start, then sure.

Also, Workers supports HTTP3 (QUIC) and ESNI already.

https://twitter.com/eastdakota/status/1288855462931177472


My understanding is that 'when hot' AWS Lambdas are pretty quick, and so long as there is basically any traffic, they remain hot? Do you have any details?


There are also cold starts every time the number of concurrent requests increases. Also: lambdas are only really useful for the low-to-no traffic situation. If you have constant traffic you're better off using a small VM.


"Also: lambdas are only really useful for the low-to-no traffic situation. If you have constant traffic you're better off using a small VM."

I don't understand this.

The entire point of Lambdas is to offload the overhead and complexity of managing servers - and - to allow for large occasional spikes in traffic.

Even with the limitation of a '5 second warmup per concurrent Lambda' - that's not a big deal. It means maybe 100-200 users have to wait a few seconds while 100-200 Lambdas warm up - and with a 1.5 user spike ... you really don't know what you're going to get, but with Lambdas at least you're pretty much guaranteed it will work.

For cost efficiency, with stable tech, stable/predictable traffic, and enough scale that you have the right team to be able to manage your EC2s properly - yes, that makes sense. But you need some scale to get to the point where that cost efficiency is worth it, given that Lambdas 'just work' fairly well, fairly easily.

But I definitely could be missing something.
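The Lambda-vs-VM cost question being debated here can be roughed out with a quick sketch. The pricing figures below are illustrative (roughly AWS's published Lambda rates at the time; check current pricing before relying on them), and the workload numbers are assumptions:

```python
def lambda_monthly_cost(rps, ms_per_req=100, mem_gb=0.128,
                        per_million_req=0.20, per_gb_second=0.0000166667):
    """Rough monthly Lambda cost at a steady request rate.
    All pricing and workload numbers here are assumptions, not quotes."""
    requests = rps * 86400 * 30                       # requests per month
    gb_seconds = requests * (ms_per_req / 1000) * mem_gb
    return requests / 1e6 * per_million_req + gb_seconds * per_gb_second

# At a steady ~10 RPS this lands around $10-11/month -- already in the
# ballpark of a small VM, which is the crux of the constant-traffic argument.
```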


I believe what is being implied is that because Cloudflare Workers operate inside of a much lighter construct, they are able to burst to higher concurrency inside of shorter time windows. Lambda can burst to 3000 and then an additional 500 more per minute after that.

https://docs.aws.amazon.com/lambda/latest/dg/invocation-scal...
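Those burst numbers (an initial pool of 3000, then 500 more per minute, per the linked AWS docs; actual quotas vary by region) make the scale-out time easy to estimate:

```python
def minutes_to_concurrency(target, initial_burst=3000, per_minute=500):
    """Estimate minutes for Lambda to scale out to `target` concurrent
    executions, given an initial burst quota and a per-minute ramp."""
    if target <= initial_burst:
        return 0.0
    return (target - initial_burst) / per_minute

# e.g. 5000 concurrent executions: (5000 - 3000) / 500 = 4 minutes
```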

Based on the scale mentioned in the article (hundreds of RPS) it's likely Lambda would also have been able to handle it just fine.

On another note, using non-provisioned infrastructure (aka "serverless") for an expected bursty load (a TV campaign) is bordering on negligence. It sounds like lots of potential donations were missed because the Stripe account was not set up to cope with the load. It turned out to be a wash because Stripe donated 100k, but if you change the business context of this system from "receiving donations" to "taking credit card payments" ... this outcome would not be considered acceptable.


Thanks, I see more details on how Cloudflare may be more performant than Lambda.

But why is running a bursty tv load on serverless negligent?

"the Stripe account was not set up to cope with the load." - I've used Stripe and I don't know what this means. I don't see any 'product' limitations towards accepting many payments, that said, they could have some internal financial controls which should have been accommodated by calling ahead and letting them know about the burst.

And how would there be a technical limit with Stripe? If payment processing was directed to Stripe.com - surely they can handle the traffic.


"Why is running a bursty load on serverless negligent?"

Because Lambda and any other FaaS platform are always going to have some limitations as to how fast they can scale out. When you put stuff on TV, this is as bursty a load as it gets. You are literally telling everyone viewing that TV channel at that time to pick up their phone and browse to some link.

If you are expected to handle all this load, the only way to ensure you can handle it is to over-provision. The FaaS may promise some burst rate, but ultimately it's a shared hosting platform, and if they have to choose between giving you your 3000 containers or keeping the other X customers running on those same boxes healthy, they will always opt for the latter.

Worst case for them, they refund you the few bucks you paid for Lambda during that time. Worst case for you is you miss a ton of payments/donations/whatever. Welcome to the cloud.

The Stripe API has rate limits of 100/sec: https://stripe.com/docs/rate-limits

They put that in place to protect themselves from getting overloaded, and I'm sure these limits are more than plenty for typical use of Stripe, which would be well spread out throughout the day. But when you put something on TV, 100 RPS is nowhere near going to be enough.
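For what it's worth, the usual way to live within a limit like that is to retry throttled requests with exponential backoff. A minimal sketch (the `RateLimitError` and `request_fn` names are illustrative, not Stripe's actual client API):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 'too many requests' error."""

def call_with_backoff(request_fn, max_retries=5, base_delay=0.5):
    """Retry a rate-limited call with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # 0.5s, 1s, 2s, ... plus jitter to avoid a thundering herd
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Note that backoff only smooths short spikes; it can't push sustained throughput past the cap, which is exactly the problem with a TV-scale burst.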


Ok thanks for that.

Stripe: 100/second seems like perfectly ample headroom to support their payments. At $10/payment, that's $1K/s, which is $3.6 million in one hour.

If that's 'The Stripe Limit' that every Stripe customer has, wouldn't it kind of imply that's plenty enough for even much bigger customers?
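Sanity-checking that arithmetic (the $10 average payment is an assumption from the comment above):

```python
rate_limit_rps = 100        # Stripe's documented default rate limit
avg_payment_usd = 10        # assumed average payment
dollars_per_hour = rate_limit_rps * avg_payment_usd * 3600
# 100 * 10 * 3600 = 3,600,000 -> $3.6M/hour at the rate-limit cap
```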

As for 'FaaS' scaling - as far as I knew, that was the entire point of Lambdas: that they could scale quite quickly.

The alternative, EC2s, would need to also 'scale very quickly' and could very well run into the same 'prioritization' problems, no?

In reality, I don't think there's a problem here with Lambda - while it's possibly not the most ideal option, AWS has ample overhead to supply this little company with their little burst.

Amazon accommodates some pretty big workloads. A call to AWS support may very well have supplied them with the answer as to the real limits on scale.


Agree with many points in this article. However, there are some enterprise software tools (Salesforce for example) that have terrible UIs for both casual and power users, and offer no performance upside despite longer term usage.

JIRA used to be this bad too, but over the years I've found their UI to improve bit by bit as they adopted more consumer UX standards around their interfaces + adding workflow rules. These improvements have seemed to work well for casual and power users alike.


Ah, to write a whole article about a known concept without giving it due credit. Methinks "The butterfly effect" deserves a mention.


I think it's a bit different than the butterfly effect. When a major event affects everyone somewhat, it's like billions of chances for a butterfly effect all happening simultaneously. An aggregate butterfly effect where major change is practically guaranteed.


Some of these cards are wild!

