Thank you so much. Please just make a free site and add your email with notifications enabled, or sign up for the newsletter. I'll be posting some pretty detailed and mind-blowing updates soon.
The video is a good start, but it's a little long without cuts. For the kind of background music you chose, you probably want bigger/more dynamic text and videos to match the energy.
Thanks! It's been a fun side hobby project for me. I typically crank out contract work day in and day out.
I got the proof of concept done in a month or so. It took a little longer since I wanted to build it 100% on Cloudflare Workers, ideally for low cost, scale, and speed. I was experimenting with some other low-pressure things too.
It has sat unlaunched for almost half a year now, while I've just added small features here and there.
Over the next few months, I plan to release a ridiculous number of high-quality Tailwind themes for people to start from, alongside some other marketing efforts.
Thanks Ryan. Great question. It's more of "build a full site via small Tailwind playgrounds". I chose that language to describe the idea quickly. Hope that helps clarify.
It looks like you’re using the same domain for the app and user-generated content? You usually want to host the user sites on a separate domain, because the whole domain can be penalized in search engines for the content users create.
I don’t think it will “feel” much faster the way Intel -> M1 did, where overall system latency, especially around swap and memory pressure, got much, much better.
If you do any amount of 100% CPU work that blocks your workflow, like waiting for a compiler or typechecker, I think M1 -> M4 is going to be worth it. A few of my peers at the office went M1->M3 and like the faster compile times.
Like, a 20 minute build on M1 becoming a 10 minute build on M4, or a 2 minute build on M1 becoming a 1 minute build on M4, is nothing to scoff at.
I guess it’s only worth it for people who would really benefit from the speed bump — those who push their machines to the limit and work under tight schedules.
I myself don’t need so much performance, so I tend to keep my devices for many, many years.
I have an MBP M1 16GB at home and an MBP M3 128GB at work. They feel the same: very fast. When I benchmark things I can see the difference (or when fiddling with larger LLM models); other than that, the M1 is still great and feels faster and more enjoyable than any Windows machine I interact with.
I do a lot of photo processing (high-end mirrorless camera, ~45MP, 14-bit-per-pixel raw files). There are many individual steps in Photoshop, Lightroom, or various plug-ins that take ~10 seconds on my M1 Max MBP. It definitely doesn't feel fast. I'm planning to upgrade to one of these.
“Extremely minor server-side functionality thing” is where SSGs fail miserably. All of a sudden you’re using SaaS forms or whatever else, or you’re self-hosting some other tremendously inferior CMS, and your margins go out the window.
A WordPress killer that accomplishes the things you mentioned would interest me. Statamic looks interesting in this context, but it wasn’t super well formed four years ago when I dug deep into this ecosystem.
20 years ago, CGI covered this use case pretty well. I wonder if anyone has tried to make a modern equivalent.
Maybe something along the lines of a Cloudflare Worker (but using an open source stack), or possibly something minimal and flexible based on WASM or JS that could be invoked from different servers, could work.
CGI / PHP is really not a bad way to work in 2024, even though I was traumatized by PHP in my early days. I’ve only experimented with it, though; I think it would be hard to maintain at scale, and I’ve heard there are not-insignificant security concerns. It’s a lot easier to hire somebody to maintain a WordPress install than it is to mess with Apache or PHP.
When I figured out that WordPress literally executes every piece of PHP every time the site loads, it was kind of a “woah” moment for me.
A problem with WordPress, and with most CGI setups, is that there is no privilege separation between the script and anything else on the site. It would be nice to let individual pieces of server-side script be deployed such that they can only access their own resources.
I don’t think Cloudflare workers, as deployed by Cloudflare, really tick that box either. Some of the university “scripts” systems, with CGI backed by AFS, came kind of close.
> I don’t think Cloudflare workers, as deployed by Cloudflare, really tick that box either.
They mostly do. You can map different Workers to different paths on your site. A Worker can only access the resources it is explicitly bound to. E.g., if you create a KV namespace for storage, a Worker can only access that namespace if you configure it with a "binding" (a capability in an environment variable) pointing at the KV namespace. Workers on your account without the binding cannot access the KV namespace at all. There's more on the philosophy in this blog post:
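To make the binding idea concrete, here's a rough sketch of a KV binding (the binding name, namespace, and key are made up for illustration, not from the post above):

    // wrangler.toml declares the capability; a Worker without this
    // stanza has no handle to the namespace at all:
    //
    //   [[kv_namespaces]]
    //   binding = "SITE_DATA"
    //   id = "<namespace-id>"
    //
    // The Worker can then only reach the namespace via `env.SITE_DATA`.
    // (KVNamespace is the ambient type from @cloudflare/workers-types.)
    export default {
      async fetch(request: Request, env: { SITE_DATA: KVNamespace }): Promise<Response> {
        const value = await env.SITE_DATA.get("greeting");
        return new Response(value ?? "no greeting set");
      },
    };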
There are a couple of caveats that exist for legacy reasons, but that I'd like to fix, eventually:
* The HTTP cache is zone-scoped. Two Workers running on the same zone (domain name) can poison each other's cache via the Cache API. TBH I want to rip out the whole Cache API and replace it with something entirely different; it is a bit of a mess (partly the spec's fault, partly our implementation's fault).
* Origin servers are also zone-scoped. All workers running on a zone are able to send requests directly to the zone's origin server (without going back through Cloudflare's security checks). We're working on introducing an "origin binding" instead, and creating a compat flag that forces `fetch()` to always go back to the "front door" even when fetching from the same zone.
Note that if you want to safely run code from third parties that could be outright malicious, you can use Workers for Platforms:
The worker binding system seems pretty great. I'm thinking more about the configuration / deployment mechanism.
In the old days, if I wanted to deploy a little script (on scripts.myuniversity.edu, for example), I would stick the file in an appropriate location (~username/cgi-bin, for example), and the scripts would appear (be routed, in modern parlance, but the route was entirely pre-determined) at a given URL, and they could access a certain set of paths (actually, anything that was configured appropriately via the AFS permission system). Notably, no interaction was needed between me and the actual administrator of scripts.myuniversity.edu, nor could my script do anything outside of what AFS let it do (and whatever the almost-certainly-leaky sandbox it ran in allowed by accident).
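To be concrete about how little ceremony that took, a cgi-bin "deployment" was a single executable file. A modern sketch of the same contract (illustrative only; classic setups used Perl or shell rather than Deno):

    #!/usr/bin/env -S deno run --allow-env
    // Drop this file into ~username/cgi-bin, mark it executable, and the
    // web server runs it per-request, passing request metadata in
    // environment variables. No interaction with the admin needed.
    const query = Deno.env.get("QUERY_STRING") ?? "";
    const name = new URLSearchParams(query).get("name") ?? "world";

    // The CGI contract: headers, a blank line, then the body, on stdout.
    console.log("Content-Type: text/plain");
    console.log("");
    console.log(`Hello, ${name}!`);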
But Cloudflare has a fancy web UI [0], and it is 100% unclear that there's even a place in the UI (or the command-line API) where something like "the user survey team gets to install workers that are accessible at www.site.com/surveys, and those workers may be bound to resources that are set up by the same team" would fit. And reading the "role" docs:
does not inspire confidence that it's even possible to pull this off right now.
This kind of thing is a hard problem to solve. A textual config language like the Worker binding system (as I understand it) or, say, the Tailscale ACL system is nice in that a single person can see it, version it, change it, search-and-replace it, ask an LLM about it, etc. But it starts to get gnarly when the goal is to delegate partial authority in a clean way. Not that monstrosities like IAM or whatever Google calls their system are much better in that regard. [1]
[0] Which I utterly and completely despise, but that's another story. Cloudflare, Apple, and Microsoft should all share some drinks and tell stories of how their nonsense control panels evolved over time and never quite got fixed. At least MS has somewhat of an excuse in that their control panels are really quite old compared to the others.
[1] In the specific case of Google, which I have recently used and disliked, it's Really Really Fun to try to grant a fine-grained permission to, say, a service account. As far as I can tell, the docs for the command line are awful, and the UI kind-of-sort-of works but involves a step where you have to create a role and then wait, and wait, and wait, and wait, and maybe the UI actually notices that the role exists at some point. Thanks, Google. This is, of course, a nonstarter if one is delegating the ability to do something useful like create two resources and link them to each other without being able to see other resources.
1. If you have a relatively small number of users whom you want to permit to deploy stuff on parts of a Cloudflare account, you may need to wait for finer-grained RBAC controls to be fleshed out more. It's being worked on. I really hope it doesn't end up as hopelessly confusing as it is on every other cloud provider.
2. If you have a HUGE number of users who should be able to deploy stuff (like, all the students at a university), you probably want to build something on Workers for Platforms. You can offer your own completely separate UI/API for deploying things such that your users never have to know Cloudflare is involved (other than that their code is written in the style of a Cloudflare Worker).
Workers for Platforms looks pretty neat, and I hadn’t seen it before. I don’t think it’s targeted at the low-effort, CGI-like "little bit of script on an otherwise mostly static site" market, though. But maybe someone could build that on top of it?
Heck, one could probably even build middleware to deploy regular workers for this type of use, where the owner of the worker has no Cloudflare credentials at all and only interacts with the middleware. (Other than the origin and cache API issues.)
Right, that's exactly the idea. You could build your own CGI-like hosting platform using WfP to run untrusted JavaScript.
To be clear, the two caveats don't apply to WfP. The Cache API is disabled there. The origin issue can be solved by installing an "outbound worker", which intercepts all outbound requests from the untrusted workers and so can block unwanted requests to origin.
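A rough sketch of what the dispatch side of such a platform could look like (the binding name and path-based routing scheme here are invented for illustration):

    // Dispatch Worker for a hypothetical CGI-like platform on Workers
    // for Platforms. Assumes a dispatch namespace binding declared as:
    //
    //   [[dispatch_namespaces]]
    //   binding = "USER_SCRIPTS"
    //   namespace = "my-platform"
    export default {
      async fetch(request: Request, env: any): Promise<Response> {
        // Route e.g. /surveys/... to the uploaded worker named "surveys".
        const name = new URL(request.url).pathname.split("/")[1];
        try {
          const userWorker = env.USER_SCRIPTS.get(name);
          return await userWorker.fetch(request);
        } catch {
          return new Response("No such script", { status: 404 });
        }
      },
    };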
I agree re: the simplicity of the old way of doing things. There's another benefit that most cgi-bin systems had: no build step or exotic runtime requirements.
E.g., you'd drop some HTML into the public_html folder and an executable into the cgi-bin dir. I would performance-engineer some scripts into C++ binaries and just check out the source and run make to produce binaries in place. This approach made it easy to use local dev tooling to test/debug stuff instantly, via old-school Emacs TRAMP/sshfs.
There is a system that replicates the simplicity of what we lost (while letting you use fancy modern JS frameworks): https://www.smallweb.run/. It also offers a path to Cloudflare-like edge computing migration without any code change, via Deno Deploy. With smallweb, one drops a bunch of files into one's own dir (e.g., you could give a dir to each student), which results in https://<dir>.domain.name running stuff on demand in that dir. No build step, no exotic runtime to transpile into, full ability to use local dev tooling to test/debug stuff instantly. It's still early days for smallweb, but it's specifically designed around the philosophy of "edit some files and stuff runs... while remaining compatible with the standard Deno way of doing things".
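If I understand smallweb's convention correctly, an app is just a directory with an entry file along these lines (treat this as a sketch of the model, not their exact API):

    // ~/smallweb/hello/main.ts -- assuming the "directory = app"
    // convention, this single file should make https://hello.domain.name
    // serve responses on demand. No build step, no lockfile required.
    export default {
      fetch(req: Request): Response {
        const { pathname } = new URL(req.url);
        return new Response(`Hello from ${pathname}`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };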
I love the concept of Cloudflare Workers [1], their fancy state management, bindings, etc., and the fact that they took inspiration from cgi-bin. However, the fact remains that it's an exotic system with weird restrictions (hello, restructuring your apps around 500MB chunk limits, a la https://github.com/cloudflare/serverless-registry). These restrictions can make it difficult to work with libraries that aren't tested against the Cloudflare runtime. 90% of the code I write would run better on Cloudflare than on Deno (thanks to the awesome cold starts), but dealing with these restrictions is too much engineering overhead upfront.
In contrast, with Deno/smallweb, I just drop some files into a directory and don't need to bother with package.json, lockfiles, etc., but I can gradually opt into those and then gradually switch to a CI/CD mode of operation. You can't expect a student new to web development to deal with the exoticness of Cloudflare's solution from day 0.
[1] Kenton, it's a fantastic design; I sang its praises in https://taras.glek.net/posts/cloudflare-pages-kind-of-amazin.... But after trying equivalents in the Deno ecosystem like val.town and smallweb, I would love for it to be less exotic (I know you guys have more Node compat work happening).
To be honest, I think comments-as-a-pro is the worst possible example in modern times. I am (unfortunately?) fairly deep in the industry.
Pretty marketing sites that look fancy are what WordPress is primarily used for and what people expect from it.
“Hey devs and designers, make me something better than Squarespace but give me edit access.”
Comments and everything possible get disabled. The real point of value for organizations is that it's self-hosted and self-updating: essentially simple, portable auth with an easy template system.