
  Location: Philadelphia, PA
  Remote: Optional
  Willing to relocate: Yes (Philly, RVA, SF, or NY preferred)
  Technologies: engineering leadership, JS (particularly a fan of Next.js), & more
  Résumé/CV: https://joewoods.dev/resume
  Email: hnwhoishiring@joewoods.dev
  LinkedIn: https://www.linkedin.com/in/tjwds/
I'm currently Head of Engineering, US at Fastmail. Previously, I worked in eCommerce, and before that, I managed the digitization program at a university library. I'm looking for my next opportunity, likely as a player-coach, but I'm open to basically anything on the management/IC spectrum if it gives me the chance to help build something. Even if you're not hiring, feel free to shoot me an email — I would love to have a chat if you're working on something you're passionate about.

https://joewoods.dev


I am very slowly working on a non-profit community management / RSVP collection platform. Think of it as an alternative to the sites that come to mind when you want to collect RSVPs for a social club or meetup group, but run and maintained by a Wikimedia Foundation-style non-profit.


This is, in my view, one of the primary benefits of using your own domain for email.


In my opinion, there really is a lot of value in the Samuel Beckett quote:

> Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.


Yes, but courage is needed to try again. I feel disappointed :(


(n.b. I am not the author; the "I" in the title is Douglas Sonders, not me)


`// I am so, so sorry for this code.`

Nice.


If you go to the playlist's "playlist radio," you can find even more variations; I figured > 100 was a good number to stop at. https://open.spotify.com/playlist/4pyIM5We0dd01U4KQyVmkU


JQBX is amazing. IWYM forever.


Sure: https://dev.joewoods.dev/img/june-to-present.png

But my website seems to have disappeared, so I'll have to update the post later!


Thanks for hugging my site to death! It's hosted on a $5 DO droplet and I'm honored to have this problem. https://archive.ph/AVbPV


A $5 DO droplet will easily handle peak HN traffic with a bit of optimization. Set cache headers, cache static assets on the server side (or even better, with an external CDN), or even cache the entire page if there's no dynamic content.
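
For example, a minimal nginx sketch of the static-asset part (file extensions and TTLs here are illustrative, not tuned values):

  # Inside the server {} block: let browsers cache static assets
  # for a long time; the HTML itself can stay short-lived or uncached.
  location ~* \.(css|js|png|jpg|jpeg|gif|svg|woff2?)$ {
      expires 30d;
      add_header Cache-Control "public";
  }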


Even without optimization. Peak HN traffic is on the order of 20 requests per second.


Without optimization, Nginx defaults to allowing a maximum of one worker process with 512 connections and a 75s keepalive timeout. 20 unique visitors per second will lead to 500 errors in less than a minute.
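
For reference, those limits live in nginx.conf and are cheap to raise; a sketch with more generous (illustrative, not benchmarked) values:

  worker_processes auto;        # default: 1
  events {
      worker_connections 4096;  # default: 512
  }
  http {
      keepalive_timeout 15s;    # default: 75s; shorter frees idle connections sooner
  }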


Are you assuming each request takes a second, or how long? A blog should return fast enough that this isn't the case.

Edit: Yes, even WordPress. Unless there are 50 plugins, an unoptimized theme, zero caching, and a resource-constrained server.


It does not matter how fast the backend is. A persistent HTTP connection is held open for the keepalive timeout (75 seconds by default) after the latest client request, unless the browser goes out of its way to explicitly close the connection.

P.S. OP's website uses Apache, but the same issue of overly conservative limits still applies.


There's no way the connection just sits idle and the worker can't serve other requests for the full timeout, right? That just sounds... wrong. And it's not consistent with load testing I've done before with an nginx setup.


Apache will spawn a process (at least with prefork) and the process will wait for a keep-alive connection to send a new request.

Everything old is new again. Gotta tune it.


Is this true? So Apache basically launches a Slowloris attack on itself for every connection?


With prefork there is one process per connection. Look at server-status. There was a threaded version as of 2.4 but I don’t think it worked well.


Depends on the web server's capacity for concurrency (async) or parallelism (threads).


I can’t imagine a case where the browser wouldn’t have the decency to close the connection.


I would expect they keep it open for an improved user experience. Much like the prefetching that browsers seem to do by default.


> zero caching

<strike>When each request is from a different person, you get essentially zero cache.</strike> Nope, server-side caching reduces back-end processing.


There are multiple strategies for caching on the server; without them, IIRC, the PHP code is interpreted on each request, files are parsed, and the database is hit on every request.

There's fastcgi caching in nginx, PHP opcode caching in php-fpm, and WP-specific cache plugins like Super Cache. At least this was the case ~10 years ago.
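
A minimal fastcgi_cache sketch for that nginx + php-fpm combination (the zone name, socket path, and TTL are made up for illustration):

  # In the http {} block: declare an on-disk cache zone.
  fastcgi_cache_path /var/cache/nginx keys_zone=wpcache:10m inactive=60m;

  # In the server {} block: serve cached PHP responses for a minute.
  location ~ \.php$ {
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_pass unix:/run/php/php-fpm.sock;
      fastcgi_cache wpcache;
      fastcgi_cache_key "$scheme$request_method$host$request_uri";
      fastcgi_cache_valid 200 60s;
  }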


Also, for $5/mo you can use Cloudflare APO to cache WordPress pages at the edge. Yes, it will cache even the "dynamic" pages (unless you're logged in, of course).


$5/mo is roughly my server electricity cost. The Cloudflare offer is not for me.


That's fine, I was just listing another option. It should be noted that you should still do server-side caching; this just lets you serve from Cloudflare's caching layers too.


Also something like Varnish, which is what Wikipedia uses.


You are correct. I didn't think of the back-end delay because I have a static site (with a comments plugin loaded separately).



Are you sure? My understanding was that nginx would fill up all free connections up to the max, but then would begin draining idle connections so it could create new ones for new visitors.


I can only find a reference to what you said with regard to upstream connections. Are you sure this also happens with downstream clients?

https://www.nginx.com/blog/overcoming-ephemeral-port-exhaust...


Do you serve static files? A static file server on a $5 DO droplet should handle the HN front page.
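
Roughly all the nginx config a static site needs (a sketch; the domain and root path are placeholders):

  server {
      listen 80;
      server_name example.com;
      root /var/www/site;    # wherever the generated files live
      index index.html;

      location / {
          try_files $uri $uri/ =404;
      }
  }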

There are also free tiers on many CDNs like Fastly/Cloudflare. GitHub Pages is also free.


I used to use Zola hosted on GitHub Pages, but I wanted to improve the publishing workflow, and WordPress was a good fit for that. (Notes on the migration here: https://blog.joewoods.dev/technology/technical-notes-on-migr...)


I run a few big sites on WordPress, and would totally recommend you use the Simply Static plugin and configure nginx to serve those cached files directly so it never hits PHP. I usually put Cloudflare in front as well, but with just that trick you can have a screaming-fast WordPress site that scales really far on its own.
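
The trick looks roughly like this, assuming Simply Static exports to a directory such as /var/www/static (path invented for illustration): nginx checks the export first and only falls through to PHP on a miss.

  # Inside the server {} block.
  root /var/www/static;

  location / {
      # Serve the pre-generated file if Simply Static produced one...
      try_files $uri $uri/ @wordpress;
  }

  location @wordpress {
      # ...otherwise fall back to the live WordPress install.
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME /var/www/wordpress/index.php;
      fastcgi_pass unix:/run/php/php-fpm.sock;
  }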

If you need some dynamism, W3 Total Cache is also a great choice.

If you're not an nginx hacker, there are some great examples around the internet. This page is pretty helpful: https://wordpress.org/support/article/nginx/


Check out this thread for recommendations on static site generator plugins on top of WordPress. Best of both worlds for folks who want a high-level publishing workflow.

https://news.ycombinator.com/item?id=31396961


You don't even need those.

A cache with a long enough TTL will do the magic for you.

https://mish.co/posts/always-cache-your-website/
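
In nginx terms, that's one header on the HTML itself (an hour here is an arbitrary TTL, not a recommendation):

  location / {
      # Let browsers, and any CDN in front, reuse the page for an hour.
      add_header Cache-Control "public, max-age=3600";
  }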


A (local) cache is mostly useful for repeat visitors, not for thousands of unique visitors unless you use a CDN.


Putting the Cloudflare CDN free plan in front should solve the hug of death problem.

https://www.cloudflare.com/performance/ensure-application-av...


Funny to hear that!

I actually migrated from WordPress to Zola.


DO even has a free tier for static hosting with a global CDN and DDoS mitigation:

https://www.digitalocean.com/community/tutorials/how-to-depl...

Looks like the free one has only 1GB of outbound transfer, which wouldn't be enough for a single hug, but the $5/month one has 40GB, which might be enough for a hug with text-only content.


Now you can write another article about how you optimized the site to handle HN traffic. :)


I love this idea: "Cheapest way to run a static site that can handle hugs of death".


I run https://www.hntrends.com on a $5/month box and never had a problem with spikes. Generated, static resources all the way.



Then you need a faster web framework or CMS. Nexus is made with Nim and is designed to be fast (https://github.com/jfilby/nexus).

But most problems like that can be solved with caching no matter which tech you're using.


They were grateful the thing got so much attention, and deferred to other hosts to supplement serving that attention. Not looking for uptime solutions.


Faster software will always save you time, money and effort.


For simple static sites like this, sticking a default Cloudflare stack in front helps a lot


You’d (I’d) think a managed CDN serving an S3/blob origin would be a no-brainer. Hell, even GitHub Pages: BYODomain, plus hosting/automation/CDN…

Where’s the catch in that line of thinking? In which cases is a droplet better?


Being able to recycle the VPS for other self-hosted apps is nice (e.g. VPN, DNS-level adblock, etc.).

Also, not having to learn another tech for something you'd only set up once every year or few years.


(Now that I’ve tried to RTFA, it seems homie has a runtime dependency (DB?), or something colocated on the host is failing? OWASP, right? A resource limit being reached, maybe? Next.js seems like a cool approach to managing partially runtime-dynamic systems like this/CMSes/etc.)


Hey, nice post.

I've been thinking about this same problem for a couple of days. Another question I have: has the number of top-level comments per month gone up or not?


So the "pullback" seems similar to the one that was seen in pre-pandemic 2019.

That about correlates to my impression that this is not the recession.


It's a static site; there's no reason a server running something like Nginx or Apache couldn't handle thousands of connections at once with almost no configuration changes. Or even a domain that points directly to an S3 bucket. Hope you're not looking at devops roles in Who Is Hiring posts.


It's a WordPress blog

