There's never been a better time to build websites (simeongriggs.dev)
314 points by adrian_mrd on Dec 20, 2021 | 324 comments



It's definitely a great time to build websites, but saying they've _NEVER_ been easier to make... I'm not 100% sure about that.

Yes, Tailwind might make CSS a lot easier, and GitHub Copilot might make coding a lot faster... but is this really easier than in the early '90s, when you could just type <html> into Notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic HTML tags?

When it comes to reading a blog or article or consuming any other form of textual information on the internet - I find myself increasingly enamored with "Reader View" in Firefox, which basically ditches all the crap and displays the web page in a default style, as if it were just the most basic HTML - just like what we used to write in the '90s.

Of course this doesn't hold true for any web apps that do more than just present textual information (and the occasional image) - but why do projects like ViewPure exist? There seems to be at least some demand for getting rid of all the clutter.

Is Tailwind really easier to work with than something like - let's say - pico.css? Of course if you want to do super-elaborate layouts, then probably yes - but what if you embraced a more minimalist approach?

Isn't Hacker News itself a really great example of a website that successfully uses only very basic html with very, very little styling and fancy extra features?


> Yes, Tailwind might make CSS a lot easier, and GitHub Copilot might make coding a lot faster... but is this really easier than in the early '90s, when you could just type <html> into Notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic HTML tags?

The thing is you can still do all of this. But we have since built tools and frameworks to let you still do this, while giving you superpowers. I think Svelte is the best example of this. You can create a dead simple site with it and have the result you are talking about, but the real power is now you can employ many of those advanced techniques that were reserved for large web apps in the same dead simple site.

> Is Tailwind really easier to work with than something like - let's say - pico.css? Of course if you want to do super-elaborate layouts, then probably yes - but what if you embraced a more minimalist approach?

I've used Tailwind quite a bit; its killer feature is that it's just intuitive. With most CSS frameworks, even minimal ones, I always refer back to the docs to see how they do a specific thing. With Tailwind you refer to the guide a few times but very quickly pick up how it does things. I think that's why it's so popular. I've typed out classes thinking "is this a thing?", and almost every single time, it was.
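For example, this is the kind of markup where I just guess the class names and they turn out to exist (rough sketch using the Play CDN; nothing about these particular utilities is special):

  <script src="https://cdn.tailwindcss.com"></script>

  <!-- every class here was an "is this a thing?" guess: it was -->
  <button class="rounded-lg bg-blue-600 px-4 py-2 font-bold text-white hover:bg-blue-700">
    Sign up
  </button>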


The problem is that expectations changed, so now you're "required" to do more in certain contexts.

I absolutely loathe frontend development in companies with more than 5 engineers. In the past you may have found some spaghetti code and various abuses of jQuery but you could probably grasp the codebase in an afternoon. Nowadays every mid-sized company codebase is a spaceship. For sure, you'll find a pseudo-technical VP of engineering who greenlights whatever cool technology, and resume-driven developers happy to pile on the latest Big Tech framework - which is likely to be an over-engineered, overcomplicated exercise in engineer retention and engineering-role marketing.

Then you have Pieter Levels pulling a mil per year with PHP and jQuery.


> In the past you may have found some spaghetti code and various abuses of jQuery but you could probably grasp the codebase in an afternoon.

My first job out of college involved a Rails codebase littered with conditionally rendered jQuery snippets. It was a nightmare to figure out what code was even loaded on any given page. Give me a modern JS codebase over that any day.


Yeah, as someone who started out in JS with procedural jQuery, moved onto prototype overloading shenanigans with classes, and now React with functional components - the JS ecosystem and tooling is still a huge mess, but I wouldn't want to go back to jQuery or prototype class hacks. At least a modern React codebase has a whiff of engineering to it. No matter how carefully we did stuff the old way at the companies I worked for, it sooner or later became an unmaintainable mess of hacks and overrides.


One big question: these "expectations" you are talking about - whose expectations are those?

Might it be possible that those expectations, and those requirements for doing "more", are not coming from the actual users/visitors to the websites - but rather from the companies owning the websites?

Because I, as a visitor of HN, come here for the IT news and the nerd stuff. I happily put up with the layout - because the site actually meets my expectations for content. At the same time, the best, most modern, snazziest website ever... why would I go there (more than once) if there's no content of interest to me?

And that - in my experience - holds true for almost ALL visitors. They come for the content, and they put up with the layout. And this table-based layout with spacer gifs right here is, imho, a lot easier to put up with than some design-heavy website that's perfectly optimized for conversion, engagement and retention rates and whatnot.

I think the actual problem is that when we talk about "expectations" and "requirements", we talk about what the HOST/SELLER wants - not what is best for the visitor.


> The thing is you can still do all of this. But we have since built tools and frameworks to let you still do this, while giving you superpowers. I think Svelte is the best example of this. You can create a dead simple site with it and have the result you are talking about

I don't know. I just tried to use Netlify to upload a simple static HTML/JavaScript page I made. Now I'm trying to figure out CORS errors. Pretty sure that didn't use to be a thing. For better and worse.


If you're trying to figure out CORS errors, it means you're either loading fonts or making XMLHttpRequests across domains. Neither of those were available in the 90s era OP is rhapsodizing about. You don't have to use the additional features/complexity of the past 15–20 years… but if you do, it's not surprising that things get a little more complex!
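To make it concrete, this is roughly the situation that triggers it (sketch; the URLs here are made up):

  <script>
    // Cross-origin request: the browser only lets this page read the
    // response if api.example.org answers with a header like
    //   Access-Control-Allow-Origin: https://my-site.netlify.app   (or *)
    fetch('https://api.example.org/data')
      .then(r => r.json())
      .then(console.log)
      .catch(err => console.error('CORS or network error:', err));
  </script>

A page served straight off your own domain, requesting only its own resources, never hits this.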


You actually do have to use HTTPS nowadays, and that's a non-trivial amount of complexity compared to just serving up a static page over HTTP.


Eh, most static site hosting providers offer it out of the box nowadays.


What if I want to run a simple but non-static website? It was a lot easier to just do CGI with a small perl script in the 2000s.


Pretty much every web provider supports PHP, so it would be as simple as changing the extension from .html to .php


No it’s not. Certs are pretty dumbed down. So much so that there is a thing called certbot that just does it for you.


Yeah, we run certbot. It fails for some reason at least once a year and we have to do something to fix it. At the very least you have to find a way to run it on the regular and you can't forget about it or else your simple static website starts looking like malware. Tell me again how trivial it is for a 15-year-old kid using their parents' Windows 10 machine to get certbot running reliably.


IE had ActiveX since 1996, which allowed for things similar to Ajax.


I used https://neocities.org/ recently to make a simple static html website with no issues and no configuration. And I'm someone who usually uses the more "advanced" stuff. It's nice having the freedom to chase a little nostalgia sometimes.


Is that like modern geocities?


Exactly. Same in spirit.


Welcome to CORS errors, so sorry you made the trip. :)

CORS will eventually click for you and then it will always make sense. Until then, sorry you’re going through the muck.


I honestly feel like CORS was invented just to get people to use reverse proxies more.


CORS enables site operators to prevent third-party sites from offloading requests for them onto clients, and conveniently reduces XSS exposure as a side-effect.


Question: are people using Tailwind for layout? Even when CSS grid is available? If so, what is the benefit of Tailwind layouts over modern CSS, beyond what Tailwind usually brings?


I am personally, for a couple of reasons.

(1) Tailwind uses utility classes that feel like syntactic sugar over full CSS, so I don't feel like I'm "using Tailwind for layouts" as much as "using useful CSS grid presets"

(2) Much like with margins and paddings, text sizes, and colors, Tailwind helps me "pare down" the number of different values available to me. Much like having preset "m-1", "m-2", etc. values helps me be more consistent, having preset grid columns and gap spacing helps me stay consistent and not go too crazy.

(3) Because I'm using something closer to CSS-wrapped-in-classes rather than a set of components built by someone else, and because Tailwind gives both a naming convention and an easy mechanism for adding my own classes that fit right in with Tailwind's, if Tailwind doesn't give me the exact values I want for grid spacing, I can modify theirs. If I need to add a couple of new presets in addition to Tailwind's, I can do so easily (see the sketch at the end of this comment).

(4) Early on when CSS Grid was new, I got bit by some bad layout bugs that were very hard to find, and impossible to fix (they were browser bugs). So, while I'm sure the bugs I encountered have since been fixed, I'm a bit gun shy on grids. I occasionally use Grid when it makes sense (via Tailwind), but more often than not I find it easier, more flexible, and safer to just use nested flexbox cols and rows. For which I also use Tailwind, if the project is of suitable size.

To sum it all up better, I view Tailwind more as syntactic sugar over CSS that helps me build my own design system. So, I would totally use vanilla CSS for small projects that don't need a design system. But for anything more than that, I like Tailwind for the same reason I like using web frameworks on the back end: it helps me keep things structured and consistent and more maintainable, even if I could do it all on my own without the framework.
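To make that concrete, here's a hedged sketch of what "adding my own presets" looks like with the Play CDN (the names and values are made-up examples, not anything from a real project):

  <script src="https://cdn.tailwindcss.com"></script>
  <script>
    // extend Tailwind's presets so my own utilities (gap-18, grid-cols-cards)
    // sit right next to the built-in ones
    tailwind.config = {
      theme: {
        extend: {
          spacing: { 18: '4.5rem' },
          gridTemplateColumns: { cards: 'repeat(auto-fill, minmax(16rem, 1fr))' },
        },
      },
    };
  </script>

  <div class="grid grid-cols-cards gap-18">...</div>

In a real project the same block would live in tailwind.config.js, but the idea is identical.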


So basically, because you're using Tailwind anyway for styling, using it for layout is nice for consistency's sake. I get that.

For (2) however I usually use CSS custom properties to reach for preset values, e.g. setting "column-gap" to "var(--margin-inline-wide)" and "row-gap" to "var(--margin-block-short)", etc.
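i.e., roughly this pattern (sketch; the values here are just placeholders):

  <style>
    :root {
      --margin-inline-wide: 2rem;
      --margin-block-short: 0.5rem;
    }
    .cards {
      display: grid;
      column-gap: var(--margin-inline-wide);
      row-gap: var(--margin-block-short);
    }
  </style>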


That's fair. If the question is "would I use Tailwind primarily for layouts and little else" then the answer would be "no". I would definitely use CSS variables along with grid or flexbox, because it's essentially the same end with less complexity.

I would also point out that Tailwind only makes sense to me when I'm also breaking up each individual UI element like cards or buttons into reusable components, either using a reactive framework like React or Vue, or a server-side template system. I'm not sure the maintainability of Tailwind survives with more monolithic HTML pages.


You still use grid or flexbox, but instead of writing a class that sets the flex/grid properties (e.g. flex: 0 1) and referencing the class name, you use a shortcut with predefined values.

Easier to remember and quicker to type. Centering content on a screen is class="place-content-center", which makes things easier.
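e.g. (sketch):

  <!-- grid + place-content-center: the child block ends up dead centre -->
  <div class="grid h-screen place-content-center">
    <p>Hello</p>
  </div>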


This question doesn’t really make sense. Tailwind is just modern CSS, it’s just a composable shorthand for applying styles. There’s plenty of CSS grid utilities in Tailwind: https://tailwindcss.com/docs/grid-template-columns


I rarely use CSS grid like that when I'm doing layout. The only times I use grid-template-{columns,rows} directly is when I'm using the auto layout (e.g. a stack of cards). Whenever I do an overall layout I use named grid areas with different-sized cells (i.e. not repeat()). A layout scheme could be something like:

    .container {
      display: grid;
      grid-template:
        "nav    .      header header header" auto
        "nav    .      .      .      .     " 1ex
        "nav    .      main   .      aside " 1fr
        ".      .      .      .      .     " 1em
        "footer footer footer footer footer" auto
        / 40ch  1em    1fr    1ex    15ch;
    }
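The markup side is then just elements assigned to those named areas (rough sketch, inline styles only to keep it short):

    <div class="container">
      <nav    style="grid-area: nav">...</nav>
      <header style="grid-area: header">...</header>
      <main   style="grid-area: main">...</main>
      <aside  style="grid-area: aside">...</aside>
      <footer style="grid-area: footer">...</footer>
    </div>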
You could achieve something like this in Tailwind using nested flexboxes, but that is not the same as doing layout in CSS as you have to work with the shortcomings of the framework. Now I’m not saying it is not worth it, but saying it is the same is missing a huge aspect of how we do layout in modern CSS.


Tailwind is more of a method of writing CSS than a framework. Many people are writing modern CSS layouts IN tailwind, which you can consider a dialect of inline styles.


C. 2000 I was able to match state-of-the-art web designs, solo, without much difficulty, writing raw HTML & JS. And get paid for it.

I was in high school.

That was the best time to build websites.

(Though at least "flat" trends and terrible UX out of several major companies mean my shitty designs are back to looking about as good and working about as well as "pro" designers, so that's, kind of, an improvement over ~2005-2014)


Hacker News still uses tables - for layout!

But it always loads super fast, and it just works.

We figured out how to display text on a web page decades ago. It's a shame more people aren't just doing it the direct way anymore.

(Applications are a different matter obviously)


I would have been horrified if I saw what websites would look like in 2010+ when I first started using the web. The amount of unnecessary garbage and JS added to sites goes well beyond the adtech part. Half or more of sites using frameworks probably should never have used them.

A good contrast is Reddit vs HN. Reddit's current site is basically unusable and sluggish on modern hardware that you could train ML models with.


Just look at all the overhead of the article's website… When I first clicked the link the "website" crashed, so instead of the hot take, all I could read was "addEventListener is not a function". That says a lot about the current state of affairs.

(Yes I get that webdevs' personal sites are their playground)


That website also loads 184 KB of Javascript to display a 13 KB static document


It could sit on a shit ton of interpreter too.


My favourite recent incarnation of this is when YouTube's JavaScript goes sideways and then it blames the internet being out instead of doing anything productive.


I don't think any of this is "good" web-design.

But my argument never was about the layout or design of Hacker News being "better".

What I meant to argue was that content is king, and that everything else should serve the actual content - not distract from it.


"What I meant to argue was, that content is king, and that everything else should serve the actual content - not distract from it."

Thank you! So many sites have utterly garbage contrast, making content unreadable - it's so annoying.

I spam this excellent discussion of color and usability as often as I can: https://designsystem.digital.gov/design-tokens/color/overvie...

but getting stupid 20 something designers with perfect vision to pay attention is almost impossible :p


And if you try it on mobile, it's mostly crap.

Let's not treat HTML tables as being some super special ability.

If HN were implemented exactly the same but with flexbox or grid for layout, it would be objectively better with no drawbacks I can think of.


(Semi) honest question to the commenters below parent, do you all use styluses, or do you just have small fingers, or is there some predictive touch system in the default browser for your devices that isn't present in Firefox? How big are your screens, are you running phablets?

Trying to understand how so many people are reaching the conclusion that HN works well on mobile. Maybe 25% of the time I try to upvote or downvote a comment on mobile, I fat-finger the wrong arrow. That shouldn't happen in a mobile interface.

I feel like just for the sheer number of comments I'm seeing saying that they don't know what the problem is, there must be something I'm missing. To me it's pretty straightforward, you have to zoom the screen in 20-40% to press any of the buttons or accurately target on any of the links, and when you do that you have to interrupt reading flow because the text doesn't wrap during that zoom.

When I turn off the mobile site and request the desktop version it's even worse, so I don't think it's a browser setting. I don't think I have particularly large hands, but the links on the top of the site are still only about 1/3 to 1/4 the size of my pointer finger.

What are you all seeing that I'm not? Is there a font-size setting you have checked? This is a pain to use without a precise pointer.


This button clicking issue certainly exists in Android Chrome. It's just that almost every designed-for-mobile website is missing functionality or behaves in unpredictable and terrible ways. Browsing not-designed-for-mobile web sites on mobile and zooming whenever I need to click on something is a big upgrade.


This is an interesting/illuminating comment, and it makes me wonder if some of this might be a personal bubble thing. I feel like I generally tend to have a better experience on mobile than other people describe, but I'm also running mobile adblockers, and more importantly, I also might just not be visiting all of the same sites as other people?

People complain about Reddit on mobile, and I totally agree, Reddit mobile is awful, but it's also a really small part of my life, I generally don't visit Reddit on a phone that often -- so it's easier for me to think about Reddit's mobile site(s) as being some kind of outlier?

My experience has been I read a lot of blogs on my phone, I do searches (that go through duckduckgo, which I don't really have a ton of complaints about as a mobile site), I look up quick pieces of information on the fly from random sites, I look at MDN documentation, trying to think what else...

I also spent a long time with Javascript turned off entirely in my phone browser, and I've only very recently started changing that practice (largely because of Gorhill deprecating uMatrix), which probably biases things even more, because a lot of the browsing I do on my phone works without Javascript, and that gets rid of a nontrivial number of annoying behaviors from more aggressive sites. So I wonder if I'm just not giving HN the same amount of "credit" for not doing the Reddit bullcrap where it pops up a notification asking me to install an app, and that makes it easier to stare at the flaws.

If I was doing a lot of Reddit browsing on my phone, I would appreciate HN more on mobile, I will give people that HN is much better on mobile than Reddit.


One workaround would be making a new account whenever you get enough hacker news points to unlock downvoting.


I dunno, it looks great on Firefox mobile.


HN is fantastic on mobile. Much better than Reddit, even old.reddit and i.reddit.


Yeah it's easily one of THE best mobile sites I use.


The touch targets are way too small, and I don’t even have fat fingers.


Anecdotally, I've found that iOS handles small touch targets much better than Android. I had an HTC Raider, a Moto G, and a Moto X Play, and I felt very fat-fingered on all of them. Then I switched to an iPhone 6s and now an iPhone 12 Mini and with both of them I've had zero problems with small touch targets, even ones that are close to each other like with Nonograms.


What problems on mobile (small screens) do you see?

> no drawbacks

Would it be slower or use more CPU or memory to render?


> Would it be slower or use more CPU or memory to render?

With flexbox layout, probably (?), but likely imperceptibly (don't quote me on that, I haven't actually done the math to check if the benefits from decreased DOM elements would outweigh the increased cost of flexbox, maybe it would get faster just by virtue of shipping less HTML).

That being said, GP is being kind of extravagant, there is arguably nothing on HN that requires tables or flexbox. I always felt like inline spans and maybe a few floats/margin:autos for stuff like headers/menus would probably handle the majority of the layout.

This is part of my criticism of "HN picked the simple answer" takes; even if you go back over a decade and even if you take flexbox off the table, tables weren't really the simplest answer after CSS `margin:auto` was invented. This really isn't a website with columns of data, it almost doesn't need any layout tool at all.

I think the, "now flexbox exists so we can do it correctly" takes are also wrong in their own way; HN is a single-column reading experience with very simple menus, why bring flexbox into this? I always try to temper this claim because I haven't technically ever sat down and built a pixel-perfect replica using normal HTML, I don't technically know there's nothing that wouldn't require modern CSS. But I have poked around at different pages of the site and messed around with resolutions, and I've never seen a situation on the site that I felt required all of that extra HTML and complexity.

I almost wonder if the point of the embedded font tags and image spacers and crud is that the site is trying to make it sort of render the same even with CSS turned off. But if so, that's bad practice and the site should stop doing that. Anyone who's turning off CSS is doing that for a reason, and the site should just respect that and ship them the pure content.
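(To be concrete about "almost no layout tool at all", I mean something in the ballpark of this, a rough sketch, not a pixel-perfect HN clone; the class names are made up, not HN's actual markup:)

  <style>
    /* a centered reading column plus a floated header link row */
    body    { max-width: 85ch; margin: 0 auto; padding: 0 1em; }
    .topnav { float: right; }
    .reply  { margin-left: 2em; }  /* indentation with plain margins */
  </style>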


Are we really worried about the CPU/memory impact of flexbox vs HTML tables?


Well, maybe we should! The site should be super smooth on my Amiga! :-p


Definitely works fine on mobile for me


You never miss the upvote/downvote buttons?

Quotes don't sidescroll to infinity for you?

I could go on and on...

It's okish, but it's no pinnacle for usability, that's for sure.


Eh, yeah I guess that happens, I guess I'm so used to the rest of the internet being so awful on mobile that it doesn't faze me. Which is funny, because I'm always super irritated by the rest of the internet.


Because of your comment I just looked at the source - yup tables. And a spacer gif too https://news.ycombinator.com/s.gif


I think it's old enough to be a retro design choice now.


> it just works.

Going to push back on this line specifically: HN has a number of issues. They're not dealbreakers, but they're also not particularly hard to fix with modern HTML. HN has generally kind of bad accessibility/semantics, it's pretty frustrating to use on mobile, and it degrades kind of poorly without Javascript (the collapse-thread buttons still appear even though they don't work without JS).

HN is (unironically) a great example of how kind of hacky you can make something and how little you can iterate on it while people still are mostly able to use it for its intended purpose. And (with the exception of maybe its blind accessibility problems) it should be held up as a great example of that.

But it's not a good example of "do things simply and they'll just work well." If anything, HN is a great example of why stuff like tables were abandoned. And the HTML isn't even that simple, this must have been a royal pain to build, everything everywhere is another embedded table. It's a weird dig at how bad some major websites have become that people don't see HN's HTML as bloated or convoluted.

Ever really dug into how HN threading works? Everything is a top-level comment, and it inserts transparent images to create the illusion of indentation. It's a wildly out-of-left-field solution that makes parsing out and styling threads way harder than it needs to be. Seriously, I've spent way too much time trying to figure out how to make CSS selectors for custom user-styling work on child comments/replies, on a site that is supposed to be displaying a comment tree in the DOM tree. There's a one-to-one mapping there; you don't really need a complicated visual sleight-of-hand to display this information, just put the comments in the tree.

Again, not to get mad at HN; but I think people use it as a positive example in the wrong situations. HN has bad HTML with obvious downsides that would be pretty easy to fix, but it turns out that creating an elegant website and filing off the rough edges is actually a really small part of running a community, and doesn't matter that much in the long run when compared to other things you could be doing to foster that community (like moderation/curation), and that is a very good lesson for tech people to learn from HN. But nobody should praise the HTML on this site, there are much better examples out there on the web of sites that use simple HTML to great effect.

----

> (Applications are a different matter obviously)

Much more minor push-back but I actually would love to see more applications embrace the interactive document model even when fully native and fully offline. Not all applications, but a bunch of them. Stuff like calculators, calendars, even bigger applications like database software/image viewers/file browsers, etc...

User-accessible stylesheets for applications, user-accessible scraping tools for applications, etc... I think there's a lot of potential for user computing hidden behind a willingness to say, "no, many applications are just text displayed in tree/table form when you think about it, and the app/document divide was always sort of nonsense."


> Ever really dug into how HN threading works? Everything is a top-level comment, and it inserts transparent images to create the illusion of indentation. It's a wildly out-of-left-field solution...

Having worked on an implementation of comment indentation before, I think it's a technical design choice rather than wildly out of left field. From the backend perspective, it's more performant (and simpler as far as code maintainability goes) to have a flat db table with a number representing indentation level, rather than have each comment point to its parent and then have to recursively build the HTML.

When you're getting as much traffic as HN, such design tradeoffs can make a huge difference in site responsiveness.


I might be misunderstanding what you're referring to. To be clear, I'm not really talking about recursively building anything about the HTML or fetching any additional data about other comments, or storing it in the database as a tree, a flat structure and iterative builds are fine for all of this. I'm talking about, purely off the top of my head, moving away from:

  // flat: one run of HTML per comment, appended in order
  const result = comments.reduce(
    (html, comment) => html + build_comment(comment), '');
to:

  // nested: open a <ul> per level going deeper, close one per level coming back out
  let indent_level = 0;
  const result = comments.reduce((html, comment) => {
    const diff = comment.indent_level - indent_level;
    const fix = diff > 0 ? '<ul>'.repeat(diff) : '</ul>'.repeat(-diff);
    indent_level = comment.indent_level;
    return html + fix + build_comment(comment);
  }, '');
I assume you're right and there's some kind of extra complexity somewhere, but HN isn't just storing the comments with no context other than indentation as far as I can tell. It is maintaining parent-child relationships at least well enough that the buttons to minimize/maximize threads work, so I am not sure what process is being skipped here by shipping flat HTML lists.

I guess I haven't ever read through HN's clientside Javascript, maybe it's calculating how to minimize threads on the fly completely clientside by iterating over the DOM, and maybe that's why the buttons don't work with JS disabled. But.. oof, I wouldn't hold that up as an example of good, simple clientside code if that's the case.

----

But again, I'm not going to argue with your conclusion, I assume there's something I've missed, I assume you're right.

This is still a bad example to use for "see, simple HTML is better". What you're describing is a good example of why it makes sense to say, "see, convoluted table setups that are worse on the client are faster overall than clean, simple output."

Which is not a bad lesson to learn from HN. It's a great lesson, sometimes performance requires us to do hacky things. But it is a very different lesson from where this thread started. You still would never point at that and say, "this is a good example of what HTML should look like", you would say, "these are the kinds of messy sacrifices that might be necessary to maintain a performant backend."


> maybe it's calculating how to minimize threads on the fly completely clientside by iterating over the DOM, and maybe that's why the buttons don't work with JS disabled

Holy crud, I just opened the JS up and this is actually what it's doing. Heck me and my little 'not going to contradict you' statements, you might just be completely entirely right about what's going on I guess.

  function kidvis (tr, hide) {
    var n0 = ind(tr), n = ind(kid1(tr)), coll = false;
    if (n > n0) {
      while (tr = kid1(tr)) {
        if (ind(tr) <= n0) {
I'm still going to reassert that any architecture that leads to someone doing this kind of logic every time a button is pressed is neither simple nor elegant, and at best this is an example of making architectural sacrifices and introducing complexity for the sake of performance; there's no way that this kind of page logic is easier to maintain or to understand than something built around a more semantic structure.

It's also definitely not easier on the client. I've seen some comments on here argue that HN might use all this weird stuff to save client battery life, and I feel more confident now saying that's not the reason, because if someone is somehow, someway, legitimately in some incredible situation where they're actually worried about the battery drain of a browser rendering a table vs. some extra CSS, then this is not the code you would want to run every single time you press a button on the page.

But yeah, the "we just base everything off of an indentation integer and comment order" theory does seem a lot more plausible to me now, because I'm having a somewhat difficult time thinking why else collapsing comments would work this way.


> Yes, Tailwind might make CSS a lot easier, and GitHub Copilot might make coding a lot faster... but is this really easier than in the early '90s, when you could just type <html> into Notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic HTML tags?

I mean, you still can, it's just that nobody's going to hire you to do that and nobody is going to be impressed by it. Even back in the '90s when it was that easy, it was all hobbyists. By the time the dot-com boom started taking off, it was complicated: CSS + JS, no responsive design (remember designing 3 different sites to deal with the 640x480, 800x600, and 1280x1024 resolutions? Frames and no frames? Different sites for each browser [or ones that only worked in one because things were even less standard than now - 'this site only works in IE']? etc.)

Now it's the tools adding the complexity, but back then it was hardware, browsers, etc.


“no responsive design (remember designing 3 different sites to deal with the 640x480, 800x600, and 1280x1024 resolutions?”

Using media queries with breakpoints, and doing exactly what you describe, was the original “responsive design”. What you have in mind (I think) is what might be called “fluid design”—responsive design without media queries. You could always do this to some extent with such things as inline blocks, auto margins, and other ancient CSS technology (and an unstyled site also reflows responsively), but flex and grid allow more elaborate fluid designs.
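To make the distinction concrete (rough sketch; the class names and breakpoint are arbitrary):

  <style>
    /* breakpoint-based, the original "responsive design" */
    .sidebar { width: 300px; }
    @media (max-width: 800px) { .sidebar { width: 100%; } }

    /* fluid: no media queries, flex negotiates the space */
    .fluid       { display: flex; flex-wrap: wrap; gap: 1em; }
    .fluid main  { flex: 3 1 30rem; }
    .fluid aside { flex: 1 1 15rem; }
  </style>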

There is also some advantage to making separate designs carefully crafted for a variety of screen shapes, that you give up with fluid design techniques.


Thanks for the correction/deep dive here. I was a kid + don't have an official education in cs, so I mix up terminology and what things were called sometimes.

I remember when media queries came out. It was so nice. Amazing. 10/10 for the time. You can do a lot with auto-margins and other ancient CSS, but there are definitely quite a lot of limits. (Sighs in someone who does a lot of email design and academic CMS design... let's party like it's 1999...)

> There is also some advantage to making separate designs carefully crafted for a variety of screen shapes, that you give up with fluid design techniques.

I agree strongly with this from a UX/UI point of view and as an educator.


> is this really easier than in the early '90s, when you could just type <html> into Notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic HTML tags?

Nothing is stopping someone from doing that today, but that's no longer the only option, thus it seems accurate that it is easier today than ever before.


In a way, it's more difficult today because there are so many options for web development that it's paralysing even to figure out where to start. Still, you can't go wrong starting with the most basic option.


That's like saying "it's harder than ever to cut wood because there are 100 different types of saws in Home Depot". I don't really buy this line of reasoning, different tools exist to tackle different types of problems, if someone has no idea what they're doing then it's their responsibility to get educated, that's as true today as it ever was, except today it's much easier to get educated than ever before.


I'd like to reframe your argument around expectations. Regardless of the difficulty of building websites, the bar is dramatically higher than it used to be. Look at what Amazon was able to get away with at one point [0]. The need for good security practices is also the highest its ever been (and will continue to matter more in the future).

So, in some ways, it's never been more difficult to build a website. Thankfully FOSS lets us outsource a lot of the work.

[0] https://i.imgur.com/OAyZWnZ.png


You can still hack in HTML (and CSS) in a basic editor. I do occasionally. No design or js frameworks, no elaborate build systems, just me and my editor.

I don't think I'd have been so cocky five or ten years ago. Bootstrap solved real problems with cross browser layouts, jQuery fixed serious holes in some browsers' standard libraries. But the things they did then are things I'd happily freehand now.

In short, it's easier because 99% of browsers in use aren't awful and flex and grid make the float mess of the mid-Naughties irrelevant.


I am positively confident that I could teach a bunch of art students (so definitively non-tech people) how to create their basic website using only HTML and CSS (without any framework/library etc.) in 2 to 3 days, including the domain and hosting part.


Weird flex, but okay.


What I wanted to hint at with this was that good old HTML and CSS are both enough and probably trivially simple to get started with for the HN crowd, if they don't know how to use them yet. A lot of simple websites don't need more than that. And if you sprinkle a bit of JS on top you can also dig into the land of not-so-simple websites.


Agree it's false that it's never been easier, and I'd like to add a few small data points.

20 or so years ago you had Microsoft FrontPage - anyone could build a site with little learning. Heck, Microsoft Word could spit out a doc as HTML, I dunno how long ago.

Some 15 years or so ago, I ran into a 'junk hauler' I found via Google and asked who did his site. He said he did. I was like, wow, you're #1 in Google organic, that's amazing - he said he opened Microsoft Publisher, put in the text, added a picture of his truck and clicked publish.

A lot has changed since then of course - Google demands responsive layouts to rank well, and most people use a phone to surf... which for a while made frameworks the magic, since CSS grid was not baked into Word/Publisher/IE...

Anyhow, as far as 'easy' goes, there are many easier tools than Tailwind for making a site, and there have been for more than a decade.

Now that browsers can auto-fit via flex and grid, Tailwind and similar are the bloat, not the easy button.


> Isn't Hacker News itself a really great example of a website that successfully uses only very basic html with very, very little styling and fancy extra features?

Hacker News is an aesthetic that would be unacceptable to the majority of businesses in the world. The login page alone would cripple sales on any e-commerce site.


why do you think that's the case? is it because people will feel 'sketched out' by the bareness and leave?


Yes exactly. Stripe built a whole business on this concept by creating an easy to implement, trustable experience for otherwise dodgy looking small shops. Trust is huge, and a sketchy looking form destroys trust.


So... are you trying to argue that it's easier today to make an acceptable company website than it was in the '90s... or that it was easier in the '90s than it is now?


Neither. I'm arguing that Hacker News is not a good example to use when discussing web development. It's easy to make a site simple when you refuse to implement any features, but that refusal is a luxury most businesses do not have.


The reader view (from any of the browsers really) has become my base performance review tool.

If a webpage is meant for presenting textual content and is actually _improved_ by a reader view, that means the design has failed.

There actually are websites out there that don't benefit much from that view, HN being a prime example.


In the 90s we didn't have the developer console or sophisticated debugging tools.

The only way to debug was "alert("foo")" placed in strategic locations!

We also had to handle significantly varied browser differences.

You can do a 90s style site today and vastly benefit from modern tech.


I remember putting border: 1px solid red around my layout "blocks" to help me build layouts (of course the extra pixels could mess with the layout). Then the dev console came along, and you could inspect and mouse over those blocks and the red lines would appear - it was just amazing.
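(These days I get the same effect without disturbing anything, since outline, unlike border, takes up no space:)

  <style>
    /* draw a box around every element without shifting the layout */
    * { outline: 1px solid red; }
  </style>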


In terms of accessibility I’m not sure we’ve done better than GeoCities and AngelFire. WYSIWYG page design tools have gotten better, but those are only “necessary” because page designs have become more complicated and simple, less “professional” styling has fallen out of popularity outside of technical circles.

Those old page hosting services also served to get non-technical users to dip their toes into writing HTML, setting them on a path of learning, whereas newer page hosting services are either skewed no-code (e.g SquareSpace) or technical (e.g. GitHub Pages and Netlify) with little in-between, which keeps non-technical users locked into the WYSIWYG tools to a greater extent.


But all the things you mentioned are optional. The web is amazingly backwards compatible. Best example: one of Germany's most read blogs: https://blog.fefe.de


I understand the point you are making, but is HN really the best example of a modern web app? For better or worse, web apps have replaced desktop apps. Not everything on the web is a simple list of articles and comments. Some things require extremely complex interactions and workflows. I don't think minimalism is the cure for everything; sometimes you need _some_ complexity, and yes, sometimes things are easier to use with a little bit of interaction instead of just text, forms, and plain HTML.


No, it's most definitely not the best example of a modern web app.

But everyone reading this knows it, and it does serve as proof that you can build a successful website with extremely little design & features. Therefore it seemed a good choice to illustrate my point.


I seem to recall Dreamweaver was also good at pretty websites. And it is hard to deny that flash enabled some massively interactive sites back in the day. Of a nature that I don't see much nowadays.

I suspect if you're wanting to create interactive things, Scratch is a gem.


Is tailwind much better than tachyons?


Tailwind seems to be more mature and better documented. Apart from that, they're pretty similar, so if you're used to one you probably won't get much of a benefit from switching to the other.

I see that the example websites from Tailwind look much better than those from Tachyons, though I'm not sure whether that's just a matter of taste.


No, but the Tachyons devs are a bit "opinionated" as to which directions the thing should go (and which ones it shouldn't) and seem to have moved on for the most part. That's one of the reasons why, even though Tachyons came before Tailwind, the latter has much wider adoption and more active development.


You can still build a simple website in 1 minute with Next.js, and with greater speed.

Pico.css is for zero-customization websites. Once the website requires customization, it doesn't work. Tailwind is easier.


I've also been developing for 25 years, and I also really, really like websites and coding, but not for the reasons the OP sums up. For me, it's not about all kinds of solved or unsolved technical questions.

Today is a very good time to build websites, because a good website is the only way to push back against Big Tech and its practices. Your website can be built on techniques invented in a time when the dream of the internet had not yet been shattered by Big Tech.

Of course you can choose to put your website on AWS and svck on the bolls of Jeff a little. But you don't have to! And that's what all kinds of young devs just miss.

Oh, and RSS is not dead. Not at all (thanks WordPress, for putting a feed on every instance). RSS is the only workable way to make the web social again.


"because a good website is the only way to push back to Big Tech and it's practices"

I agree with this. But what really spins my beanie is the amount of power a website gives a single person, small business or non-profit. It's really amazing.


So true! I hear small businesses and freelancers complain about being kicked off Twitter, OnlyFans, YouTube, Facebook, etc.

One might not immediately make money with a website (although you sure can!), but the moment you are kicked off some platform, you have a safe haven to fall back on. A place where you can share the same textual or visual media with your visitors(/followers/friends/fans/connections). But instead of organizing a party and renting some space, you make it a house party, where you are the host.

The only moderators on your website are you, your webhosting provider, and the government (I call it GovMod).

Thanks for your reply!


My biggest concern is for the people relying on the big platforms you mentioned but not using them to build their own lists and lines of communication with their clients.


I recently heard a podcast by the comedian Kevin Hart. In the early 2000s, when he was an unknown and social media didn't exist, he would have a sign-up sheet at his shows, where people could enter their names and email addresses. This way, he'd maintain an email list for each city.

Each time he scheduled a return appearance in that city, he'd send out an email letting people know the date and time of the show and a link to buy tickets.

Today, people build their followings on social networks that barely let you even post links to your own site (Instagram). Had Twitter/IG/FB existed back then, people would have had no way to maintain independent contact lists.


Nowadays, I would probably never see his email because it would be filtered out or lost among the thousands of other promotions.

Spam is a huge problem.


Agree. The big platforms make it all way too comfortable for everybody.

And I get it completely. Over the course of years, Google and the like probably have more of my contacts' phone numbers than I do. Only three weeks ago I made a text file with every name and number in it, so I have a safe copy for myself.


This is the type of thing people mean when they bring up regulations for big tech companies.

You shouldn't have to worry about suddenly losing your list of contacts. Google should be required by law to provide your data to you in cases such as account termination.


> One might not immediately make money with a website (although you sure can!)

Not if you get banned from payment processors. Which is a thing that certainly happens


True. This might be less problematic for freelancers or small businesses in Europe though. Paying by regular bank transfer is easily possible between all member states (and even more countries!). All you need is - and everybody has - an IBAN bank account. Not even your administration has to be adapted for international business (as long as you don't grow that big). The only downside is waiting one meager business day for your money, so it's not really instant payment.

I don't know about the US, South America, Asia or Russia, but I guess people have good old bank accounts, have an internet banking app installed on their smartphone, and sometimes have to pay money to a neighboring country? It might not be as easy as it is here, or just completely different, but I guess there are ways to skip payment processors as the fee-taking middleman that a lot of people take for granted.


> Oh, and RSS is not dead. Not at all (thanks WordPress, for putting a feed on every instance). RSS is the only workable way to make the web social again.

What we need, IMO, is more experimental protocols to enrich the web, which maverick developers can use to good effect, the same way we did originally with RSS. More browsers forking Chromium or Servo to add support for these new features. Hell, maybe even something that doesn't resemble the Web at all. David didn't beat Goliath and nor did Heracles beat the Hydra by using 'the same old weapon but better'. The only thing that wins is a paradigm shift.


> is more experimental protocols to enrich the web,

New protocols are DOA until we significantly nerf the incentives to "own" the user.

IOW it's not gonna happen until we outlaw anything that resembles spying on users. And maybe also ads generally.


> And maybe also ads generally.

This is all well and good, but then we need a serious alternative funding model for websites.

Right now, a lot of people with adblockers are benefiting from a false position where they free-ride on the ad-click revenue generated by others. I think in some people's minds this leads to an impossible expectation that they can continue to enjoy freely provided services, provided at considerable expense to the service provider, without paying anything or even the inconvenience of having to see some ads.

I'm all for abolishing ads, but it needs a serious proposal, not just "what we have right now, but no ads". I'd also be all for an online payment mechanism embedded into browsers through a new protocol - something like https://www.w3.org/TR/payment-request/ but designed with more of a view towards paywalls - but I'm under no illusion: most people's revealed preference is consistently for ads over paying anything, as many startups in that space have discovered.


> This is all well and good, but then we need a serious alternative funding model for websites.

Why?

Without competition from free-but-funded-with-$billions ad-supported services, most of the valuable stuff would probably be replaced by volunteer and non-profit efforts.

Others would survive by charging (more) money.

Some would be replaced by protocols (several social networks would be among those replaced). Clients & hosting may be paid, or not. It'd work out fine.

Most of the rest isn't valuable.


> Without competition from free-but-funded-with-$billions ad-supported services, most of the valuable stuff would probably be replaced by volunteer and non-profit efforts.

It wouldn't just be 'non profit', it would be 'considerable loss'. You can't provide a service like YouTube or Google without incurring enormous expense, even if you're only counting the infrastructure costs.

> It'd work out fine.

You have no idea whether it would work out fine. Neither do I. I'm intensely sceptical of anyone who issues hand-waving proclamations about how a dramatic change would affect an almost indescribably complex system.

You may have your own wishes and preferences, but it's not a good idea to let those invade the rational, evaluative part of your mind.

> Most of the rest isn't valuable.

Anything that's used by someone is valuable to someone. I don't like paella, but I don't propose to eradicate all paella restaurants for that reason. Again, this feels like a hand-wavey and not very wise answer to dismiss problems with your idea.


> It wouldn't just be 'non profit', it would be 'considerable loss'. You can't provide a service like YouTube or Google without incurring enormous expense, even if you're only counting the infrastructure costs.

I'm not a bit worried we'd go without capable search engines, without ads. Very likely there'd be donation-supported ones that are at least as good, and maybe better for some purposes (IMO Google's utility peaked around '08).

The free side of Youtube is a UX problem to be solved by something like torrent clients (maybe plus some RSS). Or probably a dozen other ways. It's far from insurmountable, there's just no motivation to fix that now (because there's no demand for it). That's the story for most of the services that could be replaced by [two or three existing protocols] + [some not-exactly-rocket-science UX effort]. The commercial side of it is solved by... hosting videos. Yourself, or paying a service to do it for you (these services already exist, despite YouTube's dominance, all the way from simple video-hosting to full white-label video streaming services).

> Anything that's used by someone is valuable to someone. I don't like paella, but I don't propose to eradicate all paella restaurants for that reason. Again, this feels like a hand-wavey and not very wise answer to dismiss problems with your idea.

It's plain that a huge percentage of online content could be replaced with Snake Game on an old Nokia with ~0 loss of enjoyment for the consumer. A perfect replacement for them is a book of Sudoku puzzles. People look at the stuff but the value is extremely close to zero, in that nearly any other time-wasting activity is just as good. And that's after dismissing the ~75% of the Web that's spammy garbage of negative value (because it drowns out better material covering the same thing).

> You may have your own wishes and preferences, but it's not a good idea to let those invade the rational, evaluative part of your mind.

Beats accepting the wishes and preferences that created the bad situation that exists now, right? Why should that be privileged over what I'd prefer? Has zip to do with a lack of rationality on my part, though it's easier to dismiss ideas if one first paints them as irrational.

We can have useful, widely-used open protocols or we can have spying (ads may or may not also be on the table, but take away the spying and there goes much of the advantage of the huge tech companies, anyway). The two very clearly cannot co-exist. I'd prefer the former.


> I'm not a bit worried we'd go without capable search engines, without ads. Very likely there'd be donation-supported ones that are at least as good, and maybe better for some purposes (IMO Google's utility peaked around '08).

This isn't necessarily wrong. I personally use Gigablast, which is excellent and entirely independent (unlike many 'alternative' search engines it isn't backed by Google or, more often, Bing).

However, pace the problem of other minds, I am not the only person in the world, and many people enjoy and rely on Google. I think this conversation is continually falling into the trap of muddling up what you personally prefer vs what would most satisfy the majority of people, and thus achieve adoption.

It's not a good solution if most people consider it worse for their needs, irrespective of your own personal preferences, or your feelings about what other people should like.

> The free side of Youtube is a UX problem to be solved by something like torrent clients (maybe plus some RSS).

Come on. This is as near as possible to an objectively worse solution. Again, I think you're struggling to see beyond your own preferences and abilities, to how most people in the world interact with technology.

> It's plain that a huge percentage of online content could be replaced with Snake Game on an old Nokia with ~0 loss of enjoyment for the consumer.

I refer back to my previous sentence. [Also, both Snake and old Nokias are exactly as available today as they ever were, and I see no sign whatsoever of this happening, despite the clear advantages in price, battery, uptime, etc.]

> People look at the stuff but the value is extremely close to zero, in that nearly any other time-wasting activity is just as good.

I refer back to my penultimate sentence.

> Why should that be privileged over what I'd prefer?

I refer back to my antepenultimate sentence. The answer is: because you are one person in a world of seven billion, and your solution is not going to go anywhere if the mass of people don't like it.

---

Look, in summary, this is not a useful conversation if all you have to contribute is moralising about the worth of other people's preferences. I don't care if you think most people should spend their time knitting or listening to Brahms. I'm trying to come up with a solution that satisfies people, and, therefore, can actually compete.


You seem to be assuming I don't consume a bunch of content that could be replaced with Snake Game or Solitaire at ~0 loss of enjoyment, because it's incredibly low-value entertainment, so am somehow looking down on others. What do you think this is? That I'm doing right now? The value, in every sense, of nearly all online activities can be found next to "marginal" in the dictionary.

[EDIT]

> if all you have to contribute is moralising about the worth of other people's preferences

Definitely a complete characterization of my views on this, and of these posts. You've looked carefully, considered thoughtfully, and discovered the entire thing. Very good.


You make a very good point about adblockers having that negative second-order effect where they continue to let people have the expectation of getting things that are intrinsically expensive (storage, bandwidth, sysadmins) for free - I didn't think about that before.

As for alternative funding models - why not microtransactions? Attaching an explicit price tag onto website access (subscription model) or individual media/document objects (standard "pay for what you use" model) would have some other beneficial effects, such as reducing extraneous media consumption (mindlessly scrolling for hours suddenly starts costing you money, better to buy a book and get value out of it) - most advertisements are a mental cancer that we should try to get rid of anyway.


Thanks for the kind reply, I appreciate it. I do think that's the kind of mindset that adblockers are inculcating in people - they don't quite realise the extent of all the costs that are borne by everyone else. Perhaps especially so because the sort of person who uses an adblocker is probably the sort of person who can't imagine himself clicking an ad, and so underestimates the amount of revenue made from ads. And thus also likely underestimates all the costs which that revenue pays for. (And then you end up in a predicament like the very-self-aware fellow in the other subthread, insisting that Google and YouTube and Facebook could be run by non-profits, and 'it would all work out fine'.)

As for alternative funding models - which is definitely a much more interesting conversation - I actually considered starting a company in exactly that space. I have some experience in fintech ("very credentialised" according to my former Anglo-German lead investor, haha) and so I thought I could pull it off. I couldn't, and it didn't get past the MVP stage ... luckily. The trouble is that people aren't willing to pay even the $0.01 to access an article. There's something deep in people's brains which is averse to spending money, no matter how small the amount.

I believe - and this is more second-hand evidence from other founders rather than first-hand - that the approaches which typically see the most success are those where people 'top up' a certain amount and then spend it gradually. That doesn't set off the same psychological alarm that directly spending money does. However, that kind of approach would be much harder to implement - especially as something like a browser protocol - because it would require holding probably-vast sums of money in escrow[0], which is an extremely burdensome legal and regulatory position to be in.

Personally I think Brave - much as it's a stupid company started by a stupid clever man - might be onto the right big idea here (despite getting a million little things wrong, and alienating virtually all of its users and most of its non-users too). The core idea of buying attention tokens which are paid out to websites to which you pay attention is a brilliant one. However, it needs a lot more refining, since the crude version of that model is not particularly well-equipped to deal with the difference between e.g. a movie-streaming site, on the one hand, and a short-form news site, or even a site like Twitter, on the other hand. I may well watch a movie for 180 minutes but get less value from it than I do a tweet. So attention != value, or at least the concept of 'attention' needs refining to be more than simply 'time I spend on a website', but there's a promising kernel there, I think.

[0] Compare it to Starbucks's gift card program. Starbucks is one of the largest commercial debtors in the world just by virtue of the vast number of Starbucks gift cards in people's drawers. These things add up quickly and bigly.


If you're not already aware of it, Gemini is an interesting project that aims to do exactly that. https://gemini.circumlunar.space/


Interesting, thanks for the pointer! I was aware of Gopher, but not Gemini. It seems to address what would be one of my main concerns about any proposed alternative, which is - at least at the beginning - interoperability with the web.

I'm sure people have other bold ideas. I personally think there's a lot of room for something which builds on the 'progressive web app' paradigm: i.e. a web for websites which are more like apps, downloaded once and then exchanging data ad hoc with a backend server, with potentially much richer and more performant experiences. WebAssembly (WASM + WASI) would be a great foundation for something like that. But that's just one among countless compossible paths.


In a bygone era, everything was built of forms, tables and bullet lists. It would be interesting to experiment with a browser that drops JS support in favor of a more robust list of native components.

Alternatively, the modern browser has basically evolved into a virtual machine anyway. You seem to be suggesting that we could do exciting things with a more intentional version of that idea, and I don't disagree.


I've often thought about this. People nowadays build their websites predominantly on React component kits which resemble more sophisticated HTML elements. If HTML were extended to include that more sophisticated functionality -- and come on, it's been 20 years and people are certainly sufficiently aligned on this approach to now include it in the spec -- then I wonder whether JavaScript would be necessary, or at least as necessary as it is now.

I think either direction would be interesting. Or both. The browser was an experiment in the first place, but people seem to have stopped experimenting and are just doing the equivalent of what building on an IBM mainframe would have been in those days. It's disappointing. (It's not unlike people who quote Martin Luther King today, not appreciating that the essence of his radicalism was the direction of travel towards justice, not the political compromises themselves which are now firmly established.)

I hope at least that the renewed interest in systems programming that's come with the Rust fandom might spur some actually innovative developments to replace the antiquated model we're all stuck using. But sadly the tech community seems to be split into two communities, one of which has no interest in innovating, and the other of which seems to be set on innovations so absurdly impractical (¡world wide web on the blockchain!) that they could never take off.


I agree with your sentiment, but RSS and the like are not 'the same old weapon but better'. It was never 'weaponized' in the first place. It never really came to fruition for the masses. To me, RSS is like two sticks and a string, and it needs some people who can see that they can make a bow and arrow out of it (to stay in the realm of 'weapons').

I don't like talking about tech and business in a sense of 'weapons', 'winning', 'smashing the competition', etc. Talking about it in this sense makes it a struggle, because one will look at tech from the pov of a stockholder, capitalist, or just a narcissist. The wording is very important, and the moment you make that wording your own, you can't see it in a different light anymore.

To be honest, it's only recently that I've started taking RSS seriously myself. Before that, I developed a sh1tload of websites without even really knowing what RSS is capable of. Ignorant me.

Thanks for your reply!


Interesting point! What would you say the full realisation of RSS would look like?

Also, the weapons metaphor was mostly just responding in the frame of the original language about 'pushing back on big tech', 'shattered by big tech', etc. And I do agree with that framing: I think there is a tussle over the direction of the internet – tussle, battle, tug of war, whatever you want to call it – and I'm not sure it helps to avoid the martial metaphor out of distaste when you are in a battle.


RSS is pub/sub, right? Doesn't social media like Facebook, LinkedIn or Twitter - where you post stuff and other people post stuff - resemble that? Replace 'your' account, profile or page with your own website.

With the right website software it's easy to have an RSS feed nowadays, so the only thing that's missing is website software that also works like an RSS reader. In the most basic sense you only have to read XML with your program.

So now there's your own feed and other people's feed, mixed up in a nice and honest timeline (however you program that). To me this sounds a lot like the first principles behind social media, but without the obscure algorithms to fvck your timeline and without the intruding ads (I have nothing against ads per se).
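
To make the 'just read XML' part concrete, here's a rough sketch of the reader half (TypeScript in the browser; it hand-waves CORS, Atom feeds and error handling):

    // Rough sketch: pull a few RSS 2.0 feeds and merge them into one timeline.
    interface Entry { title: string; link: string; date: Date }

    async function readFeed(url: string): Promise<Entry[]> {
      const xml = await (await fetch(url)).text();
      const doc = new DOMParser().parseFromString(xml, "application/xml");
      return Array.from(doc.querySelectorAll("item")).map((item) => {
        const pub = item.querySelector("pubDate")?.textContent;
        return {
          title: item.querySelector("title")?.textContent ?? "(untitled)",
          link: item.querySelector("link")?.textContent ?? "",
          date: pub ? new Date(pub) : new Date(0),
        };
      });
    }

    // Newest-first "honest timeline" across all the feeds you follow.
    async function timeline(feedUrls: string[]): Promise<Entry[]> {
      const perFeed = await Promise.all(feedUrls.map(readFeed));
      return perFeed.flat().sort((a, b) => b.date.getTime() - a.date.getTime());
    }

That's basically the whole 'demand side' - the rest is presentation.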

P.S. Sorry for ranting about the weapons metaphor. I triggered on it and felt the need to tell what I think about language.

Edit: I forgot to say that RSS feeds have been there all the time for a lot of websites based on Wordpress. That has big value, because it means there's no need for technical adoption. These are all websites with feeds up and running right now. I've been hating on Wordpress for other reasons, but this was a good move for the open web from them.


https://micro.blog for example does exactly this - a sort of RSS-based Twitter (i.e. the idea is that your feed contains mostly shorter posts, but that's not a hard restriction). Give it the RSS of your website (they can also host one for you), "follow" other people (= subscribe to their RSS feed) and boom, there is your "social media".


Nice! I really like their idea, and very friendly UI. I get a social vibe from it.

Your explanation is exactly what I was thinking of as the ideal social media, but I don't see that explained on their website. No mention of RSS. As long as the feeds from my software can work with their feeds (and the other way around) I'm good.


Have you heard of Scuttlebutt (https://en.wikipedia.org/wiki/Secure_Scuttlebutt)? The entire protocol revolves around 'pulling' other people's feeds - unlike almost all modern social media, it's a pull-based rather than push-based model. It sounds quite similar to what you're looking for, at least based on the way you describe it in this particular comment.


Yeah, I've seen it come across here on HN. For my intended web application I already chose RSS as the way to go, especially because there exist so many feeds and because I'm familiar with XML.

If you have a blog and/or RSS feed somewhere online, let me know. I'll add it to my 'to follow' list.


> the only thing that's missing is website software that also works like an RSS reader. In the most basic sense you only have to read XML with your program

This is a fascinating idea. I often think that Twitter's success, unlike Facebook's or Google's, is fundamentally as a protocol - and one which shouldn't have been centralised under the control of one company. And incidentally it seems Jack Dorsey thinks the same way, since he's suggested the possibility of having one core protocol for tweets, on top of which people could build their own frontends, and users could choose from a marketplace of both (a) frontends and (b) algorithms for filtering and ordering what they see.

I do agree with you: what's missing from RSS is not the existence of the protocol, nor even necessarily the 'supply side' of websites providing it (like you say, largely courtesy of Wordpress), but the 'demand side' which really needs a well-designed interface to consume that kind of content. I absolutely agree with you that this feels like a huge area of potential.

And thinking on a more second-order level: I wonder if one thing that's preventing these innovations is the lack of a suitable, easy 'base' for people to build this software on. For example, take `create-react-app` for the web. Countless things have been made because people know they have that simple base to start with. For building a web browser alternative, there's no equivalent for most people: they don't know where to start. If we had a simple, bundled toolkit such that people only had to write some business logic, I wonder how much more would be done.

> P.S. Sorry for ranting about the weapons metaphor. I triggered on it and felt the need to tell what I think about language.

No prob at all! Susan Sontag wrote a really interesting essay 'AIDS And Its Metaphors' in the same vein, specifically about the use of war metaphors about AIDS and cancer: "fighting", "losing the battle", &c. (It's the culmination of a series of essays on the same topic, but this is the most thought-provoking of them, IMO.) You might enjoy it. I particularly liked:

> The metaphor implements the way particularly dreaded diseases are envisaged as an alien 'other', as enemies are in modern war; and the move from the demonisation of the illness to the attribution of fault to the patient is an inevitable one, no matter if patients are thought of as victims. Victims suggest innocence. And innocence, by the inexorable logic that governs all relational terms, suggests guilt.

Wikipedia has a great summary: https://en.wikipedia.org/wiki/AIDS_and_Its_Metaphors#Militar... https://en.wikipedia.org/wiki/AIDS_and_Its_Metaphors


> I don't like talking about tech and business in a sense of 'weapons', 'winning', 'smashing the competition', etc. Talking about it in this sense makes it a struggle, because one will look at tech from the pov of a stockholder, capitalist, or just a narcissist. The wording is very important, and the moment you make that wording your own, you can't see it in a different light anymore.

I like this paragraph a lot. In my perception this is why most cryptocurrency initiatives, and the recently hijacked and massacred `web3` concept, have been fruitless so far - too much focus on how to get rich quickly.


Thanks! Getting rid of notions like 'winning' and 'competition' in areas where it's not necessary was a mind changer for me. Even personal stuff like friendship becomes a competition this way (and I see it everywhere around me). Nowadays, I prefer to think about it as 'challenges' instead of competitions.


It is against the TOS of the largest ISP in my country to even host a server of any sort. Colocation isn't that fun.


There are tens of thousands of hosting providers around the world. You don't have to pick AWS, GCP, or Azure.


That's a shame in itself, I see self hosting as the holy grail of the internet. There are a lot of good webhosts though. Worked in the 90's, and still works today.


I see lots of love for RSS from the "innovators" and "experts" of the HN community. Despite that, RSS only keeps declining, and I don't see anything being invented other than RSS readers/aggregators.


I think RSS is still in heavy use by podcasts.


Stay tuned. ;)


> Of course you can choose to put your website on AWS and svck on the bolls of Jeff a little. But you don't have to! And that's what all kinds of young devs just miss.

Where are you hosting your websites?


Not OP, but Linode. Used to be on their $10 a month VPS, downgraded when they added the $5 a month one. I think they have some fancy Kube stuff now but I just have a makefile that rsyncs a directory to my VPS and nginx picks it up. Honestly even the $5 VPS is more power than I need for static files, but it's nice to have the option to throw up a flask app or run some web-scraping scripts overnight or something.

Admittedly it's behind Cloudflare because of DDoS skiddies, but I think the CF lock-in isn't much if you're not using Workers or anything else proprietary to them. It does suck for people with really privacy-customized browsers or using Tor, but idk a better solution, unfortunately.


But personal-website-tier static file hosting is free these days!


It's called webhosting at a hosting provider. ;) It works for me like this since the 90's. Here in Europe there are literally hundreds or thousands of webhosts. Some pricier, some faster, some cheaper.

I host some websites at Antagonist in The Netherlands. But I'm strongly considering moving, although they are by far the fastest and most reliable party I've seen.

Why? Because they - just like a lot of other European webhosts the last years - are now part of some vague sh1tshow called Group.one or something. I don't know what this organisation is up to, but I don't trust it for a second. I guess they are trying to become some European version of AWS or something. Data grabbers, probably.

Every sincere webhost that became part of their 'network' (they sell it like this, but it's just a merger/acquisition) uses stupid wording in their press releases, customers are not kept up to date about these important changes (only after the fact, of course) and they all of a sudden are now offering 'cloud storage' like it's Dropbox.

This was a bigger reply than I intended. But thanks for yours!


Smaller web hosts tend to use AWS on the backend. Sometimes they use Azure or OVH, but they rarely have their own datacentre.


Can't speak for OP, but mine are on vultr. There's a whole industry of web hosting, virtual private servers, or colocation facilities to choose from.


Contabo and DigitalOcean, just to name two.


Scaleway


Hear, hear. Keep fighting the good fight.


> Static was a fun diversion, but we're back to what works.

I won't be moving away from static websites. It's the perfect solution for a blog/CV type of thing.

>you might think that minutes-long build processes are normal

If compiling your static site takes minutes there is something fundamentally wrong with your site in my opinion.


I laid down my first lines of JavaScript (JScript, actually) in 2001, have been doing this professionally for around a decade now, and I judge this blog post as generally ignorant - but that first quote is especially so.

90%+ of the Web is just static sites with JS sprinkled on top. That has always been the case and will stay that way for the foreseeable future, because it's the most simple, pragmatic and accessible solution.


I'm amazed by the amount of shade I've seen lately being thrown at SSGs and building websites with static HTML. Though it seems to be coming mostly from folks who have a vested interest in having you run code on a (or their) SSR platform.

There are some interesting properties of server-rendered websites, and it can be the best option in some scenarios, but in others it adds complexity and burden for the end user. To suggest that static was a "fun diversion" detracts from its value, and I feel we'll be going full circle once again after we've realized that "what works" isn't always server-rendered HTML.


If a framework needs compiling to view changes instead of pressing F5 in a browser like god intended, that's gonna be a no from me dawg.


My static site is built by a generator - one that literally runs after each save and takes milliseconds to compile my site. I also have an autorefresh plugin for development, so as soon as I save a file, whatever page I had open in my browser "instantly" gets refreshed.


When I make a change to any file in dev, Eleventy rebuilds the site (takes a few milliseconds), and then automatically reloads the browser. That's a yah from me, dawg.


> > Static was a fun diversion, but we're back to what works.

> I won't be moving away from static website. It is perfect solution for a blog/CV type thing.

Yeah, I'm keeping an eye on these new "MPA"/"transitional" frameworks, but the thought of taking a static marketing page - which costs fractions of a cent to serve and is easy to put on a CDN - and making it require a backend server that costs orders of magnitude more seems foolish.


Assuming you don't mind Microsoft or Github you can host a blog or whatever on github pages for free. You can even use a custom domain. That's how I host my site.


From reading this I get the sense the problems are lower level and there is no impetus to fix them.

I'm embarrassed to say that, having programmed for over a decade, and run Linux for half of that, I still have no idea how to set up my computer to serve a webpage (or even a file).

I probably need to `apt-get install apache` and then deal with some magic config files and incantations and hope I don't mess something up and expose my whole computer to the open web. Then there is the whole mess of NAT (wasn't IPv6 supposed to kill off NATs?).

I need to then figure out the endless (and impenetrable) configurations in my OpenWRT/LuCI router to open a port and have it forwarded to my computer. Or maybe that needs to be done "upstream" in my landlord's internet cabinet..? I'd have no idea how to even figure that one out b/c my router doesn't make it obvious in any way.

Then I need to find some DDNS service and figure out how to get that to route traffic from a URL to my IP (and then it's somehow supposed to reconfigure when my IP dynamically changes? Is that some cron job I need to write?)

Hosting webpages from home is still complicated and no progress has been made. And if you do it wrong someone will hack you and steal all your files :) The incumbents are probably happy it's so painful and you still need to do technical gymnastics to punch through NATs and whatnot. This text further confirms it by just telling people to host in the cloud. And understandably.. you'd have to be a total nut to serve a file from your home instead of dropping it on Google Drive.

When I was studying this stuff in college I just figured tech was in an awkward teenage phase and this would all get worked out and streamlined - but I think it's not going anywhere. This isn't the tech future of 80s sci-fi.

I don't mean to be wholly negative, and I'd actually appreciate it if someone pointed me to a good step-by-step guide to set everything up. At the moment I'm a sellout :) and I just use GitHub Pages and git push HTML/CSS files there. Ideally it really should be just as easy to git push to your own home computer, but till then I guess I'll do that.


If opening ports 80/443 is such a nuisance in your setup, I would not bother with hosting from home and I'd just drop the website folder in Netlify Drop (https://app.netlify.com/drop)

That's what I did for the first few iterations of https://lunar.fyi and it really helped with giving people the right information fast while I could keep spending time on the real work (developing the Lunar app)

But if hosting from home is what matters the most, there is an easier way nowadays using Caddy (https://caddyserver.com) and ngrok (https://ngrok.com).

For example, I just hosted this website (https://af62-2a02-2f0e-d00f-e100-f513-b43-fbc1-cf5d.ngrok.io) using the following commands:

    caddy file-server -listen 0.0.0.0:6001
    ngrok http localhost:6001
If you want to go the extra mile and have a nice custom domain, Freenom provides you with a 1-year free domain for the following TLDs:

    .gq .tk .ml .cf .ga
For example, I just registered geokon.gq for 1 month and forwarded it to the ngrok endpoint: http://geokon.gq/


Thank you for all that! It does look like it simplifies stuff in that there are no more config files! So that's really cool.

I will need to read the docs a bit later b/c from the landing page it's all still confusing. This is probably a function of the tech "debt". It has example commands like

   caddy file-server --domain example.com
- If it's a file server.. shouldn't it be something like FTP/FTPS? Everything on the webpage keeps saying HTTPS .. which is Hyper Text Transfer Protocol.. ie HTML webpages. So is it serving files or webpages? (I guess webpages are a type of file.. but in practice the two aren't the same)

- Do I need to stick this into my .profile? Or I need to configure a systemd service?

- What is it even serving out..? Is it just serving out everything in the immediate directory I'm in where I run the command?

- How is it "hooking into" example.com? How does the registrar know to point at my IP after I run this command? (or if that's done separately - say on the registrar website, why do you need to specify the URL locally at all?)

In any case, these are just immediate questions that come up. It's all stuff that probably makes sense if you know it already :)

And again.. i need to rtfm - so I'm not complaining or shooting the messenger here haha. Thanks for the info


File server in this context only means a HTTP(S) server that serves static files (which can be webpages like index.html, but not limited to that).

When you run `caddy file-server` in a folder it just starts serving all the files in that folder. You can serve MP3 files if you want, or .txt files, it doesn’t have to be webpages. It just happens to serve them over the HTTP protocol because that’s what the browser speaks.

Keep in mind that caddy doesn’t allow a user to get out of the folder you ran the command in, and it also doesn’t allow the user to list the files in your folder if you don’t explicitly allow that using:

    caddy file-server -browse
For example, I use the `-browse` option to allow users to download any previous release of my Lunar app here: https://releases.lunar.fyi

The domain option is not for pointing that domain to your IP address. That can only be done from your DNS provider (e.g. Cloudflare, or even Freenom, which I mentioned in the last comment).

What the domain option does is allow you to serve multiple websites on different domains from the same computer, and it also automatically generates SSL certificates for them so that you have encrypted `https://` support by default.

A few years ago, you would have had to buy SSL certificates from someone like Verisign, download those certificates, figure out where to put them securely and configure Apache or Nginx to use them for each domain. Caddy does all that automatically now.

Also keep in mind that if you run a file server like that: `caddy file-server -domain example.com`

… then caddy will only serve those files if you access them using that domain. If you try using your IP address directly, or any other domain instead of example.com, it won’t respond with your files.

In this way you could have multiple domains serving different files from the same computer.


Woah - thank you for taking the time to explain everything. It's challenging to find all the info so concisely in one spot. This has been really useful and educational :)


Hey - awesome work on that lunar page, i know it's a landing page and the goal is conversion but it was genuinely enjoyable to read, fun even. Fantastic.


Well thanks! Lately I’ve been afraid of the page becoming too information-dense because there are so many features and edge-cases I need to let people know about.

I’m really glad to hear that from you!


> Hosting webpages from home is still complicated and no progress has been made.

Now this would be a real improvement, not some new framework of the week.


Glad to see it isn't just me. I've been doing networking tasks on and off for over a decade and I still fear the nginx config file.


Most of the things you've mentioned are caused by the delayed IPv6 rollout; self-hosting just isn't a priority for residential ISPs. I'm sure there are plenty of good tutorials on setting up Apache or some other web server, but those tutorials can't exist for actually exposing it to the Internet - there are just too many routers and ISPs.


If you have Python, here is a one-liner:

    python3 -m http.server

The home networking / dynamic IP stuff will probably not go away until IPv6 becomes more commonplace, but honestly that is for the better. Ask any business whose UPnP-enabled receipt printer started outputting antiwork propaganda over the past few weeks how that is going.


Spot on. Some of us are working on it. IMO the best solution currently (ie until ipv6 takes over and assuming we get rid of NATs when that happens) is tunneling. I maintain a list of options here:

https://github.com/anderspitman/awesome-tunneling

If you wanted to self-host a website from your home computer today I would recommend buying a domain from Cloudflare, and using Cloudflare Tunnel.

6 months from now I hope to be suggesting some variation of my open source alternative, https://boringproxy.io. It's not quite ready yet.


I found yunohost.org really simple to set up for all my homeserver needs.


The technology is far more advanced and bordering on magical in some cases, but most front-end web-development is far more painful and less fun to me than it once was.

Say about 15 years ago or so, if you had a creative inspiration for a personal site, you could whip out a text editor and start building top-notch stuff very easily. About the most complicated thing you had to do to start building something was adding jQuery to a project.

Today you have to research a pile of unnecessarily complicated (for most personal projects) frontend frameworks, libraries, and build systems, figure out the compiling and development environments, figure out a complex mess of containers and virtualization, etc, etc, etc. This stuff sucks the joy out of building anything, even if the actual coding itself is far more structured and convenient.

By the way as genuinely nice as Tailwind can be, calling anything a solved problem is kind of arrogant. There will probably be other design approaches and ideas invented over time that are even better.


I am not sure why you say that - nothing stops a developer from opening Notepad and building a website. One of the wonderful things about the web is its backwards compatibility.

It's just that there's better tooling available now to let you build it faster and with more features.


Exactly. So many comments here are acting as if they're forced to use npm+react+tailwind+etc., but you're absolutely not. Web standards (querySelector, fetch, display:flex, ...) are so far along compared to a decade ago that you could use zero libraries/frameworks and be significantly more productive than we were back then.
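
For a sense of scale, something like this (made-up endpoint and element id, written as TypeScript in an ES module so top-level await works) would have needed jQuery plus a pile of compatibility shims a decade ago:

    // Sketch: zero-dependency fetch-and-render. The /api/posts endpoint and
    // the #posts element are assumptions for illustration.
    const list = document.querySelector<HTMLUListElement>("#posts")!;
    const posts: { title: string }[] = await (await fetch("/api/posts")).json();
    for (const post of posts) {
      const li = document.createElement("li");
      li.textContent = post.title;
      list.append(li);
    }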

Web standards still have a long way to go, but it's comical to claim that things have gotten harder.


> Today you have to research a pile of unnecessarily complicated (for most personal projects) frontend frameworks, libraries, and build systems, figure out the compiling and development environments, figure out a complex mess of containers and virtualization, etc, etc, etc.

Well yeah, but once you've learned the modern paradigms, you can get right back into hacking away in your text editor. I've been doing web development stuff for longer than 15 years or so, and it's no more difficult to start a project now than it was back then. I can run a single command to create a template Next.js project and be hacking away in no time at all. Yeah, there's a learning curve initially, and for certain use cases you may not want or need these modern tools, but it's no different from when, all that time ago, after much foot-dragging and moaning, I finally learned CSS and transitioned away from the "tables for everything" HTML I'd learned on GeoCities, which seemed like the pinnacle of productivity at the time.


The way you described building a simple website (text editor only) works just as well today.

You need the frontend frameworks, build processes and million NPM dependencies only when working inside of a developer team for an enterprise software of some sort.


Do you really have to? I'm still building sites front-end with mostly HTML, CSS and jQuery or even vanilla JS.


The title is pretty much the only part of the article I agree with.

While the industry was largely distracted by various NPM packages, the foundation we are building sites on got really nice. And I believe in some situations there's value in trying to use this foundation directly, not via a plethora of abstractions. Browsers are great, the DOM API is great (well, it certainly provides some really nice utils we previously had to look for in external libraries like jQuery). CSS is great. I will write "display: flex;" multiple times just because it feels nice to type, compared to the dance we had to do a couple of decades ago to achieve the same layouts.

Static sites are great. They are fast to build, and they build fast. They are really easy to deploy as well. And they also work really well on end user devices. Not a single line of JS required - but if you want some liveliness on the page which can't easily be achieved by CSS animations, very few lines of JS are actually required nowadays. And 0 NPM packages.
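
For example - just a sketch with made-up class names - the sort of 'liveliness' I mean, a reveal-on-scroll effect paired with a CSS transition, is about ten lines of plain browser API (written here as TypeScript, but it's really just JS):

    // Sketch: reveal elements marked with a (hypothetical) .reveal class as
    // they scroll into view; the .visible class triggers a CSS transition.
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          entry.target.classList.add("visible");
          obs.unobserve(entry.target); // only animate the first time
        }
      }
    });
    document.querySelectorAll(".reveal").forEach((el) => observer.observe(el));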

Modern JS web frameworks do solve problems some people actually have. But they're best suited for big corporate style webdev. For personal projects I prefer something more… artisanal? Dunno, it's like building a piece of furniture yourself instead of buying one from IKEA. Probably less practical, but feels nice. And may actually fit better in this one weird corner you have.


I'm honestly not sure if this is satire or not.

> Static was a fun diversion, but we're back to what works.

Static works really well for many use cases. And you are never going to beat the performance of static in a cache close to your user. I agree that there are many cases where server-rendered is the best option but static with a good sprinkling of JS and server-generated embeds definitely works.

> Tailwind CSS is the best thing to ever happen to CSS

This is obviously controversial. Maybe for non-static websites, where every `<p>` in your site is generated from the same line of code and I can reuse the same component in multiple places, it is trivial to add `class="m-4 text-gray-900"`. But otherwise, repeating those classes gets boring really fast.

> GitHub Copilot

Maybe if you weren't writing `m-2` on every p this wouldn't be as helpful /s

Maybe I haven't seen the light yet, but in most weakly-typed languages Copilot felt like a huge loaded footgun. It also generally worked on the simple cases that didn't require much work anyways.

---

I think it is clear that this person found an approach that works for them, but it seems like there is still a lot of room for improvement here.


I have a feeling that this is the next religious war in dev.

I read this article and shudder in horror at, well, all of it. I like static sites (which wouldn't take minutes to build if they were written in a decent language). Adding lots and lots of JS dependencies and frameworks gives me the screaming ab-dabs - it's just adding complexity and dependency. I like writing code, not plumbing together bits of other people's code with bizarre config files. I object to Copilot for all sorts of reasons, but fundamentally because I enjoy writing good code.

But I know experienced devs who have exactly the reverse opinions. Like OP, they see all this as going in the right direction and making their lives easier.


Same.

I started making websites in '93, and to me, websites are about communication and sharing. They exist to facilitate communication. The current system of crazy dependencies and frameworks just doesn't do this; it values speed and ease over every other aspect of communicating. Imagine if TV had developed in a way where they tested how much they could speed up a show and still be understood, and made that the standard so shows could be 'watched' more quickly: that experience would SUCK.

Or cars that can go from 0 to 100 in .5 seconds but have no seat belts, no horns, no airbags, no turn signals...


I started in '00, and I still favour server-side rendered websites, with a smattering of JavaScript where needed.

I've worked with Angular and React, and they have their place, but IMO 95% of websites just don't need them.


Oh, actually, I started building websites earlier than '00, maybe something like '95, but started doing it for a job in '00.


> I object to Copilot for all sorts of reasons

I've been testing Copilot for about a month now, and one thing I realized very quickly is that it's not as useful as you might think. Copilot is great for writing repetitive code, so if you just wrote a function to select an item, Copilot will correctly guess what the deselect function implementation should be.
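
To give a concrete (made-up) flavour of the kind of symmetry it's good at:

    // Hypothetical illustration, not actual Copilot output.
    interface Item { id: string; selected: boolean }

    // You write this one yourself...
    function selectItem(items: Item[], id: string): Item[] {
      return items.map((it) => (it.id === id ? { ...it, selected: true } : it));
    }

    // ...and the suggested deselectItem is usually the obvious mirror image.
    function deselectItem(items: Item[], id: string): Item[] {
      return items.map((it) => (it.id === id ? { ...it, selected: false } : it));
    }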

Outside of that, though, I find that it gets in the way far more often than it helps. By far the most annoying part is that it sometimes interferes with IntelliSense - picking values from an enum in TypeScript is now a PIA if you have Copilot enabled.


I feel the same way as the author, but I work with PHP. Recently it's all been TDD, statically typed, and JavaScript gets webpacked into one or two files, so instead of juggling 15 script tags I only have one or two, plus a style tag for CSS. With CI this all happens in the background on every commit.

What you pay for when you hire a dev like me with 17 years of experience in the field is the ability to know which libraries are the best to use and which techniques are worth using in the process.


Is that not something you can learn by picking up any number of good books in the area in much less time?

I am just getting into TypeScript programming, and I am really having fun. I learnt the details of how JavaScript works, and it is quite cool to have a static layer over a dynamic one. For learning JavaScript, I like "The Good Parts" and the "You Don't Know JS" series. It took me a while to learn how to create a datatype that can defend its invariants not only statically but also at runtime, though that seems quite feasible as well, especially with decorators. It is interesting, though, that nobody seems to be particularly interested in doing that; there is not much direct information about it available.
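
For the curious, the flavour of what I mean - sketched here without decorators, as a branded type plus a validating constructor, so the static type and the runtime check travel together:

    // Sketch: an Email type whose invariant is enforced by the compiler (via
    // branding) and at runtime (via the validating constructor).
    type Email = string & { readonly __brand: "Email" };

    function makeEmail(raw: string): Email {
      if (!/^[^@\s]+@[^@\s]+$/.test(raw)) throw new Error(`not an email: ${raw}`);
      return raw as Email;
    }

    function sendWelcome(to: Email): void {
      // ...can assume `to` passed the runtime check above
    }

    sendWelcome(makeEmail("ada@example.com")); // ok
    // sendWelcome("not-an-email");            // rejected at compile time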


> Is that not something you can learn by picking up any number of good books in the area in much less time?

I'm shocked that you believe this. Work at a moderate or higher sized tech company. The juniors, mid levels and seniors all make the same mistakes and slowly grow out of making them. If it were the case that everyone could stop making the same mistakes and choosing the wrong abstractions simply by reading a book then professional programmers would be good on day one and make none of those mistakes and college students who've read dozens of books would come in at the top of the rankings.

> I am just getting into TypeScript programming, and I am really having fun. I learnt about the details of how Javascript works, and it is quite cool to have a static layer over a dynamic one. For learning JavaScript, I like "The Good Parts" and the "You don't know JS" series. It took me a while to learn how to create a datatype that can defend its invariants not only statically, but also at runtime, but that seems quite feasible as well, especially with decorators. It is interesting though that nobody seems to be particularly interested in doing that, there is not much direct information about this available.

This is learning to code, it's not learning to engineer. Assuming you finish your book(s) you'll start on the path of making lots and lots of mistakes until you grow into someone with more experience who stops making them.


> you grow into someone with more experience who stops making them.

or at least fewer, and sometimes more complicated mistakes.


Of course you need experience as an engineer. But knowing which Javascript libraries to pick is the easy part. I was not saying that you don't need experience, but you don't need much Javascript / Typescript experience. This can be done in a few months.


It will take a few months just to bring yourself up to speed on mouse/keyboard event bindings and how they interact in one specific browser on one operating system, let alone all browsers, operating systems and mobile - and that's without even thinking about rendering, templating, network operation wrappers, local storage, or camera and sound APIs. The list goes on.

I've been doing this since literally IE 5.5 and I'm still learning new stuff all the time. The fact that you think you can learn all this in a few months is hilarious.


I already learnt most of what I need in one week. I was being pessimistic with 3 months.

If I need to know a particular event, I look it up in the documentation. The stuff you cite is just APIs; I can look that up too. I've used APIs before, you know. Right now I am piping data from Swift to Apple Metal and back, and writing GPU code to estimate measurements from the Face ID camera in realtime.

Real programmers are coming to the Web, babe.


Furthermore, from what I see, a lot has changed in JS land. You can basically assume ES2018, with a few exceptions in the standard library; module systems are part of the standard now, and package managers seem to be catching up with that. A lot of what you learnt about JS and its accompanying APIs more than 5 years ago, you can basically forget now. Arguably, it might even be better if you had never learnt it in the first place. Because, let's face it, what was going on back then was a hot mess of steaming shit.
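
Concretely - and just as a sketch - the module system alone removes a whole class of old ceremony (IIFEs, AMD/CommonJS wrapper soup):

    // feed.ts - a native ES module, no bundler or loader library required
    export function titleOf(item: { title?: string }): string {
      return item.title ?? "(untitled)";
    }

    // main.ts
    import { titleOf } from "./feed";
    console.log(titleOf({ title: "Hello" }));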


Not really. You can't replicate experience. Everything you just wrote about data types, invariants, and decorators is just flavour, which you only have control over in your own personal code. As soon as you interact with web browser APIs, Node.js, and 3rd-party libraries, all that goes out the window and you end up conforming to other people's patterns.

In my honest opinion Typescript is a fad. You're still just coding JavaScript with an alternative syntax. Since you're borrowing dependencies from JS you need to know both and context switching between the two slows you down. From a CS perspective I get why TS syntax is better but from a functional perspective it's harder to find engineers that actually want to work with it because every TS project I've ever seen is really a mix of TS and JS.


Web browser APIs usually do keep their invariants. I wouldn't use libraries which don't. Libraries which do not support TypeScript will vanish. See, all your experience (in a limited field) is telling you the wrong thing.

And it is quite obvious to me that you need to know BOTH TypeScript and Javascript, of course, you cannot just learn TypeScript, because it is just a thin layer on top of Javascript without many runtime guarantees otherwise.


> See, all your experience (in a limited field) is telling you the wrong thing.

You seem to be conflating objective wrong/right with your own personal stance. Do you have an objective argument in favor of your assertion about the longevity of non-TS libs?

> And it is quite obvious to me that you need to know BOTH TypeScript and Javascript

That's a reasonable strategy on the assumption that TypeScript will be used, but the very fact of its being so is actually an argument in favor of the point made by gp which has been left unaddressed (RE context-switching).


Obviously, there is no objective wrong/right here, as only the future can tell. My subjective experience tells me, it will be that way.

There is no need to context-switch. Just accept Javascript as part of TypeScript, because it really is. The argument is clear: TypeScript drastically reduces the amount of errors you will be making when coding, and the amount of time you need to think about stuff that is really trivial.

The rule of thumb on how to do this is also simple: Keep your types simple, use them to make your life easier, not more complicated. If you cannot model something simply using types, don't try to do so, just use the dynamic typing escape hatch that Javascript provides.
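
A rough sketch of what I mean, using a made-up webhook payload - type the two fields the code actually relies on and leave the rest loose:

    // Sketch: model only the fields the code uses; leave the rest dynamic.
    interface WebhookEvent {
      id: string;
      type: string;
      [extra: string]: unknown; // the escape hatch for everything else
    }

    function handle(event: WebhookEvent): void {
      console.log(`got ${event.type} (${event.id})`);
    }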


I'm just explaining to you how it plays out when a business is deciding whether or not to use TS at all. Mistakes and errors are checked for by linters and unit tests automatically in the background so that's not a benefit anyone cares about. The liability of having to find coders that want to work on your TS project decreases potential candidates and makes hiring more difficult. Large organizations can deal with it cause when you have 500 engineers on staff you can find 50 that will use TS for you. For smaller orgs it's a pain.


These are concerns that will arise in some organisations, for sure. Personally, I don't have a need to work with people who can't be bothered to learn TypeScript. A nice type system is something you really want, and it improves code quality. That's no reason not to also use linters and unit tests - these things do not oppose each other, they work together nicely.


Some things just have to be experienced 50 times before you learn them


> a dev like me with 17 years of experience in the field

That's not the flex I think you think it is ;) I've been coding for over 40 years now, professionally for over 25. The attitude you have is one I very much associate with younger devs with limited experience.


Pretty sure they aren’t flexing and it’s just you who is treating this like a competition.


They're providing context, not flexing.


> Imagine if TV developed in a way where they sat and tested how much they could speed up the show and be understood and made that the standard so shows could be 'watched' more quickly: That experience would SUCK.

Honestly, a lot of times, I’m left wondering if this isn’t the case.


Well, the trend seems to be going completely the other way - towards making the experience as slow as possible.


> I like writing code, not plumbing together bits of other people's code with bizarre config files

I don't like reinventing the wheel every time I build something


Meanwhile I do like reinventing the wheel. Or, more accurately, designing a wheel that fits my needs well.

Sure, I also use libraries for things that are just tedious, hard to get right or just some standard feature that's always used the same way. But you can overdo it.

Debugging my own code is annoying but doable. Debugging other people's code is a lot more tedious. Troubleshooting a mess of 1000 transitive dependencies because one package had version 10.0.6 instead of 10.0.5 and that caused an avalanche of breakage makes me reconsider my life choices.


There's a balance to be struck between your opposing opinions I imagine. I would just suggest solving the core business problems of your software in your own code, so that you have full control over them, can learn about your product from the building process, and can provide value through specializing your codebase.


> I don't like reinventing the wheel every time I build something

This is the crux of the difference, I think. I see open-source libraries as (mostly) mediocre code, usually massively bloated with features I don't need. Using them is like using an 18-wheeler to nip to the shops. I have found in my experience that it saves time in the long run to just write the minimal amount of code I need for the job, rather than dealing with the added complexity and dependency of adding a 3rd-party library.


I think that's an over-generalisation: for every "world changing, do everything" trendy OSS library there's also a much simpler alternative maintained by a fellow grognard who just wants to write the bare minimum so you don't have to.

The alternative is adding my own mediocre code, with all the maintenance costs that incurs.


Ideally, I like to build on top of a batteries-included base. Failing that, I'd rather reinvent wheels than spend all my time impedance-matching with glue.

The problem with bizarre config files is that you repeatedly have to learn enough to make them work but then do not need to use that knowledge again until after you've long since forgotten and you have to start the learning process from scratch.


Exactly. Badly written libraries are just bad libraries; it doesn't mean that it's a bad idea to build reusable components.


So you need a wheel, but you end up getting a whole car, and the wheels turn out to be octagons, which don't work great for your purpose (though they sort of work on the various terrains other users had problems with) ... and in half the time and at one-hundredth the price you could have just made the round wheel you need.


“If you wish to make an apple pie from scratch, you must first invent the universe.”

-Carl Sagan


> but fundamentally because I enjoy writing good code.

Using Copilot speeds you up, so why not take advantage of it?


Because it's a massive footgun, no, footcannon. It also most likely violates the GPL.

It's copy-pasting from stackoverflow except a little faster.


I am still reluctant to use JS/CSS frameworks. What I learned in 10 years is the fact that I hate learning useless abstractions that will eventually fade within the span of 18/36 months. The joy in web dev for me is creating stuff not learning how to use frameworks.


Me too. The frameworks also bring a lot of bloat and dependencies, which can come back like a boomerang. For example, I did some pages with Bootstrap 3... for that I had to use a module which modifies the HTML templates of the CMS. Then the CMS got updated several times, but the module has not been updated because Bootstrap 3 is obsolete. But hey, there is a new module for the new cool guy in town, BS 4. So you can't update the CMS, or you have to rebuild a large part of the website for the sake of updates. That's totally senseless, since it has no advantage for my customers. Yes, I can sell it with "you must update because of security, but hey, nothing changes..." - something I would absolutely hate if it were done to me.

I have my own framework - smaller, and it fits the way I work. And yes, I understand that this would be a different situation with a team (but then you have a guy without soul, morals and honor, called "the sales guy", who sells everything).


The frameworks are really stable these days. There has been like one major change in React since its beginning - the introduction of hooks. But if you got in 8 years ago, you can just sit down and write a good SPA with that knowledge.


Vue.js is the only frontend framework that I have some clue about, and the first blog post when googling 'Vue 2 to 3 migration' [0] says it took them 4 man-weeks to migrate their product.

From my limited experience, the main issue is that it's really hard to ensure frontend actually works as before, after upgrading frontend libraries/frameworks, outside of having really extensive end-to-end test suite.

[0] https://crisp.chat/blog/vuejs-migration/


> Remix.run gives us the best of both worlds. Still writing your site predominately with JavaScript. [...]

This point is stated as if it's a widely accepted truth. Is that really the case?

I still labour under the misapprehension that one should aim to use as little javascript as possible.


Also, TFA praises Tailwind CSS as "solving CSS", and as great as it may be, I guess that statement lacks a bit of nuance.


I hear so many people recommend Tailwind but every time I look at examples it reminds me of an entry for the 17th Annual Esoteric Programming Language Competition.

Maybe you get used to it but hell it looks hard to parse visually.


Tailwind is definitely a Marmite framework (either you love it or you hate it). Yes, it can be difficult to read. And it can be difficult to debug and fix something where you make a small change and it cascades 'upwards' somehow. But the problems I have with Tailwind are no different to the ones I had with CSS and all the various magic methodologies that were supposed to "fix" CSS (things like BEM and SMACSS).

The big difference is that I can solve those problems in a fraction of the time with Tailwind. As a last resort you can just strip out all the classes of the elements in question and start over. If I had to do that with normal CSS, it would almost definitely break something somewhere else. With Tailwind, you are working in a highly localised area and can see everything you need on one screen. That, to me, is the power of Tailwind.


Yes. The true test of a CSS framework is how easily you can change it without breaking shit. I haven’t found a better solution than Tailwind in that regard. Emotion comes close but it also encourages you to componentize everything, which can cause change issues. In Tailwind I found my components disappear because I didn’t need them.


There's something about Tailwind (which I use, love, and evangelize to all who will listen and many who won't) that makes me think of the phrase "You can't fire me, I quit!"


All CSS is hard to parse visually.

Unless you follow very strict development guidelines, you do not really know anything unless you look at the computed styles in a debugging tool.

I think Tailwind does help wrangle some of that cognitive overhead with its naming system.


I kind of agree with the author that Tailwind is the best thing to happen to CSS. But yeah he definitely oversteps when he says that CSS is solved.

It most certainly is not a solved problem lol. UI still feels way fucking harder than it should, in all kinds of ways.


The advantages given in the CSS section are really around:

- utility classes

- theme system

Neither of which are unique to Tailwind (which is great in itself), yet the article paints it like it's the only thing out there solving for it.


TFA ?



>I still labour under the misapprehension that one should aim to use as little javascript as possible.

Well, this is so 2005.


You say that like it's a bad thing?


I'm saying it in jest, but also as an observation. The TFA writer takes it for granted, but then again, so do many (most?) today.


Maybe that makes it even more important to repeatedly raise the possibility that this assumption is wrong.


Yeah, some JavaScript can be nice, but if your website stops working when JavaScript is blocked, you should probably make a native program instead.

This is particularly important so that web scrapers and other automated tools that the Web relies on can work. Also accessibility tools.


On the remix.run site it says

>Remix is a seamless server and browser runtime that provides snappy page loads and instant transitions by leveraging distributed systems and native browser features instead of clunky static builds. Built on the Web Fetch API (instead of Node) it can run anywhere.

I found no mention of the blockchain here or anywhere else. Why are they tooting their own horn so much if they are clearly not Web3 ready?


This article strikes me as hopelessly naive. Filled with the same kind of 'new is better and solves everything' mentality I've seen in the JS community for years. There's no desire for maturity because nothing has ever matured. Even frameworks from huge companies go through radical internal paradigm shifts every 18 months. JS runs natively in the browser. The author implies compilation is required to deploy client-side rendered websites. Sass, Less, css grid, all claimed to 'solve' CSS for whatever the problem in the mind of the author happened to be. ... more of the same.

This article brings no hope that something has fundamentally changed in the community.


As somebody who only dabbled in creating a personal website in the late 90s, using FrontPage and hosting on a free hoster, the current web seems like a completely different, and way more complex, beast to build websites for.

Back then all I needed was to get a bit familiar with FrontPage and HTML, plus some Photoshop to create/edit image elements, and I already had a functional website.

I knew if I wanted to "really" do it, I should probably learn proper HTML, but 15-year-old me was already plenty proud of having a pretty good-looking, well-working website at all. It even had a visitor counter and a guest book - all the jazz that defined 90s websites - and that was enough for me.

Now, over 2 decades later, just the prospect of getting back into it seems so much more daunting and complex. We are now at HTML5? At "Web 3.0"? I don't really know anymore; it's all just become so "vast", with so many options and so much complexity, that I wouldn't even know where or how to start anymore.

Is it still viable to just learn plain html and do something with that? Have WYSIWYG editors matured so much that they can be used at least semi-professionally, or is that still "looked down upon"?


I started in the 90s and have dipped in and out of web dev ever since, but my impression is that a lot of the complexity stems from organizations and teams getting bigger and needing to work with one another. Frameworks and tools proliferate in order to make it easier to spread the work out across devs, communicate with one another, deal with version control, etc. The other main reason is to deal with the fact that a lot of web 'pages' are actually apps now and have to talk to the server/there's a bunch of backend stuff going on, so there has to be reconciliation between front-end and back-end.

If you're working on your own and don't need a back-end, there are options for putting up a personal site. Webflow[0] and Squarespace[1] come to mind. They'd be taken seriously semi-professionally in non-tech spaces. Can't speak to tech ones.

[0]: https://webflow.com/?r=0

[1]: https://www.squarespace.com/


> Is it still viable to just learn plain html and do something with that? Have WYSIWYG editors matured so much that they can be used at least semi-professionally, or is that still "looked down upon"?

Are you trying to make a website, or a web application? That really is the biggest question. For a static website that's read-only (meaning no user login, or user-submitted data, no online store), the old ways work fine.

With modern CMSes, you can create ecommerce stores and other dynamic functionality, but it comes at the cost of flexibility. On Squarespace, for example, you can only use the pre-built templates to create the site.

And even if you are building a website by hand where you have full visual control, you can add a lot of interactivity by embedding iframes for things like JotForm or a PayPal button.


You can still do plain HTML; people will just notice that. Personally I don't care. I'm not a webdev and I don't pretend to be. I have set up a pretty basic, sorta retro personal website where people can find my resume.

There's a lot more you can do with static sites, it just depends on how much time you want to sink into it. I just made something basic. Maybe when I'm bored I'll try to build on it.


No it's not, because your website will not get traction unless you SEO the hell out of it, pay for Google ads, pay for FB ads, pay some blogger to write about it, etc. It's impossible for a website to be found when the top spots in every search are locked up by a few known players. It's easy to make a private web page, but that's not a website.

In fact the best time was about 2005, when most current day's media sites started, because it was still possible to be found.

The tech you use does not matter. The web is backwards compatible

> The first time you see your website instantly update because you made an change in your CMS, it'll shatter your whole Static Site Generated world view.

Jesus, we've had that thing since 1995. What happened?


> "...your website will not get traction..."

And?

Not to be glib, but you need exactly zero readers on your website to have a whale of a time making it. In terms of building a website, the tech is exactly what matters because that is where the fun is.

I fully agree with your last sentence though. It's ridiculous to think that only dynamic CMSes allow instant changes. A small static website reloads instantly. Even better, one doesn't even need the internet, since you can see the updates on localhost!


As I get older I remember more of what the greybeards I learned from would say: "Everything in tech is a cycle. What we have now will be replaced with something new, which will in turn be replaced with something old reimagined."

If you stay in IT long enough you may see this cycle repeat 3 or 4 times.


> Jesus, we've had that thing since 1995. What happened?

I thought the same thing. I have built several custom WordPress sites, and changing the PHP code and hitting F5 shows me the changes immediately. What world is this where a site needs to re-compile each time a change is made?


Beautiful how I can't read the article on a slightly outdated browser (the state-of-the-art SPA crashes), because in the best of all times to build websites with the internet-of-javascript-bs, actually reading them is a nightmare lol.


It's a strange article where I start out vigorously nodding my head "yes! yes!" and end up "WTF" when the bulk of the post is about how awesome Remix is (huh?!) and Tailwind has "solved" CSS (lol). I mean, if you like Remix and Tailwind, cool cool—but that really has nothing to do with the overall trajectory of what's generally-speaking possible, easy, or commendable about web development today.

The real success story we should be celebrating is how awesome the specs are in modern web browsers. Vanilla JS, CSS, and even HTML itself have gotten incredibly good. Just having Flexbox and Grid working as native CSS layout engines is tremendous, and soon we'll have container queries (!!). The latest ES versions are lightyears ahead of the JavaScript of yesteryear. HTML now has superpowers when you consider what custom elements/web components are capable of. Developments at the level of HTTP itself, along with in-browser imports, are affording new opportunities to leverage the browser directly to handle dependency graphs rather than requiring everyone to bundle/transpile everything all the time for all seasons.

I wish the article had gone much more into those exciting, web-spec developments—rather than tout an unproven and controversial JS framework (Remix) and a somewhat-proven yet still-controversial CSS framework.
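
To make the "HTML now has superpowers" point concrete, here's what a vanilla custom element can look like - a minimal sketch with an invented element name and behaviour, nothing from the article:

```js
// A complete custom element, defined with nothing but built-in browser APIs.
// Drop it in a <script type="module"> tag - no framework, no build step.
class CopyButton extends HTMLElement {
  connectedCallback() {
    const button = document.createElement('button');
    button.textContent = this.getAttribute('label') ?? 'Copy';
    button.addEventListener('click', () => {
      // Clipboard API: copies whatever the `text` attribute holds.
      navigator.clipboard.writeText(this.getAttribute('text') ?? '');
    });
    this.appendChild(button);
  }
}

customElements.define('copy-button', CopyButton);

// Usage in plain HTML:
// <copy-button label="Copy link" text="https://example.com"></copy-button>
```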


I have a question.

Would someone please give me some form of enlightenment about the practice of product placement in blog posts?

Is this the new normal?

>I'm Simeon, a […] and Solution Engineer @ Sanity.io

> Some major bias at play here but when I came across Sanity I finally found "the CMS I'd always hoped existed".

It is cool that he acknowledges the "bias", but this is more in the "conflict of interest" category, in my humble opinion.

What do you think? I am curious.

Edit: Please, don't just downvote. I don't care about "karma". Give me some explanation.


The OP used to work for me. He was a massive fan of Sanity before he went to work for them. I think that is in part why he went to work for them. Knowing him, I am sure that he was not consciously doing any product placement, rather showing what he genuinely feels about the product and heading off any conflict of interest concerns others might have.

As to the practice of product placement in blog posts. This is HUGE and the way that many bloggers make their living (although less so in dev). Affiliate marketing (getting paid for links to products & services) makes up around 10% of e-commerce transactions. It's been around for a long time and is likely to continue to grow as it's one of the few directly attributable sales channels (not without its issues however).

I'm fine with people monetizing their content, especially if it's useful and ad free. What is less comfortable is where people don't make it clear that they get paid for it.


> I am sure that he was not consciously doing any product placement

But it reads like product placement. You said yourself that product placement is huge.

From the article...

> "And if there's anything you feel it can't yet do, it's only because you haven't spent enough time building it yourself yet. There's no stopping you."

There's no stopping me? That reads like marketing spin.

> "it happens to be the best CMS."

Says the guy who works there.

> "incredibly generous free tier"

So not just regular generosity, "incredible" generosity!


Thanks for the answer.


> at some point we all accepted that if every time we need to fix a typo, it's okay to wait ~10 minutes to completely destroy and rebuild the website again as static files

No, a lot of developers never accepted it. It’s insane.

I have the impression that people who started in the field in the past half decade have a kind of twisted view of what is “normal” in web development. We really walked backwards in many senses in the past ten years, despite all the underlying evolution in the tech stack.


> CSS is effectively solved. Tailwind CSS is the best thing to ever happen to CSS. I cannot imagine ever writing CSS in a separate file and having to think of names for elements. It's also an excellent resource for beginners.

I work with developers who still have difficulty with concepts like specificity and cascading, even though they have been working with CSS for many years.

I don't think statements like this are true or encouraging.


Tailwind kind of makes specificity and the cascade a non-issue, though. Sounds like it might be just the thing for these developers.


Yes, and I would argue that many people using frameworks like these never learn those fundamental concepts.


I'd argue that never learning is the primary reason those frameworks exist.


Plenty of web devs understand how specificity and the cascade work. That doesn't mean that they work well. CSS is full of examples of good-sounding ideas that didn't pan out.

Frankly the whole thing works better when you avoid touching the cascade as much as possible. Which is why these atomic frameworks are modeled this way.


I thought that was a weird statement. Tailwind solves (kind of, to me) the problem of organizing CSS. I mean, you still have to pick the right rules to write.


Isn't Tailwind the one where you write inline styles as class names instead of actual inline styles? How's that good?


Yes. There is nothing good or bad about it, nor is it anything new - we already experienced it in the form of `Atomic CSS` and similar approaches - it's just another swing of the pendulum: for some use cases it's good and will bring joy and a fast development pace, for others it's not and will bring pain once the honeymoon is over.


It seems like a reimagining of the style HTML attribute to me.


I started "developing" websites with my AOL account around 1995. I registered my first domain in 1996.

Now that most of the "web standards wars" have been fought and most of the new kids haven't even heard of Blue Beanie Day[1], to this day my best advice for single (!) amateurish developers is: a website is what a browser interprets as such. Simply put or hack in your code and enjoy.

There is no right or wrong, only creative geniuses who put a lot of creative thinking into how to design browsers. Kudos to Zeldman for inspiring me. And kudos to all the people who simply hack in code and do not care about conventions.

[1] https://en.wikipedia.org/wiki/Blue_Beanie_Day


Is the "exciting" part that these new meta-frameworks save you from actually having to learn and implement proper front-end development concepts? Remix is an "inflection point" because now you don't have to learn how a server-side rendered website works. Tailwind is an "inflection point" because now you don't have to learn how a design system works, let alone how to implement one. Copilot is an "inflection point" because you no longer have to alt-tab between your code and google, or remember how regex works. It goes on and on.


I’m fine with that. Programmers these days don’t have to understand how a linker works either, and I don’t blame them.


There has never been a better time to build websites. Or web apps. Or SaaS apps. etc.

There has never been a harder time to get the word out about them.

Which makes sense. The more you make it easy to build them, the more there will be. But the world population increases slowly and only has a finite attention span. More and more websites are competing for a pie growing much more slowly.


There are so many technologies to do things now that just getting started is a task. There are a 1000 front end tools, and a 1000 backend ones. Every job description lists 15 technologies. For someone who has been developing for a while, keeping up with new tech can be a 15-hour-a-week unpaid side job. I built a SaaS as a solo dev for a Fortune 1000 company that managed processes for a good portion of their advertising, and built it with CSS, JS, PHP, and HTML. No frameworks at all, and it ran without issue for 8? years. I feel like sometimes all these new ways of doing things just make things more complicated. Of course I am probably just an old man shaking my fist at the sky :) Node packages are pretty neat though.


> So at some point we all accepted that if every time we need to fix a typo, it's okay to wait ~10 minutes to completely destroy and rebuild the website again as static files on cheap hosting, it'd be worth it.

This is the stupidest strawman argument I've read all year.


There's never been a better time to pick yet another framework/language/stack to over-engineer your brochure-ware site.


something just in case you need to over-engineer it quickly: http://postmodernize.telnet.asia


I expected the section titled 'CSS is a solved problem' to talk about modern CSS features like CSS Grid and variables. Instead, the article declares the popular framework Tailwind CSS to be the 'solved' solution.

CSS will never be nice (tolerable at best), but modern CSS makes it much easier to create layouts in modern browsers - rather than having to rely on horrible CSS layout hacks from the past.

In short, CSS is not 'solved', but modern CSS is more powerful and capable than it has ever been. If you want to take advantage of it, you simply need to put aside some time to learn it.


The problem with web is that you have 3 levels of dependency:

1. OS (Microsoft has 90%*)

2. Programming language (C++ and Javascript)

3. Browser (Google has 90%*)

* of the market on devices with keyboard for production, mobile is consume only and completely meaningless for anything but reading SMTP over radio (unless you consider writing mails on a touch interface productive).

You do not want two huge companies to be the gatekeepers of your work, one is enough (Microsoft), zero is the goal: RISC-V with open GPU and linux.

Applets/WASM are too old/new to be practical now; they are also slow because of the VM (which on the client does not really profit from the benefits of non-crashability, at least not in comparison to the server).

We need to rethink everything and go back to what works; for my part, vanilla JavaSE (HTTP/JSON) is still the best option for the server side, as it has the unique combination of a VM with GC that is open source, and no-crashes are important on the server.

On the client I have gone back to C (compiled with ++ for compatibility and comfort) with OpenGL (ES) 3. Audio/visuals both need 3D to be compelling, we have 2 eyes/ears; it's time to level up, whether you call it metaverse or not.

I now exclusively use the web for advertising, real-time communication and distribution (video and forums), as everything else can be a command line.


You seem to be mixing up monopolizing and gatekeeping. While MS/Google may own 90%+ market share of OS/browser, Linux and Firefox are very usable, effective, productive alternatives. You're not getting gatekept by MS/Google if you want to be in the modern web.


It becomes bad when your customers are combining those pieces of software in a recursive majority.

The only reason incumbents are kept alive is so we can pretend we have a choice.

My excuse is it's hard to use an adjective as a verb.


> * of the market on devices with keyboard for production, mobile is consume only and completely meaningless for anything but reading SMTP over radio (unless you consider writing mails on a touch interface productive).

I don't generally say this lightly, but I used to be a bit like you not long ago. The future is now, old man!

You can do a lot on mobile devices. You can't do as much as on desktops, but entire generations of people are being born, educated, and living their lives using nothing but mobile devices. This will only become more the case over time, not less.

And those are a lot more gatekept than the desktop ever was.

The battle was brief and it was lost. And only ever got a chance not because of open source, which was trojaned by corporations almost from the start, but because IBM was dumb and made the only truly successful, mass market, open architecture.

I'll believe that RISC-V and open GPUs and whatever are the future when I can use them for all my daily computing needs at roughly the same performance as the latest cutting edge proprietary architecture.

That's very unlikely to happen. Do you know why? Because there's no money in that. Money moves the world. Money makes things happen. Even volunteers have to eat, pay rent, pay tuition, take their partners on dates, etc.

And if those open architectures do win, you'd better be careful. It will most likely mean that the battle has moved at an entirely different level and those open architectures on their own are useless, the average person won't be able to do anything meaningful with them because the gatekeeping is now done at a higher level.


I found this article to be mostly about the tooling for web apps, not "websites". The "website problem" was solved a long time ago with semantic HTML, maturation of CMSes and responsive CSS frameworks. This isn't something that's greatly improved by writing primarily Javascript, as is the author's preference.

This bit made me chuckle:

> Not so long ago, I talked myself out of using JavaScript, mostly because I didn't know where to start learning it. Now you're a google or YouTube search away from learning just about anything – often for free.

If I were a beginner today, YouTube would be the last place I would go to learn JS, because the top page of results for generic terms a beginner would use, like "javascript tutorial", will all be from mega-channels providing thin content and an upsell to paid courses. They have the SEO on lock, because this is how they acquire leads.

Maybe it's because the HN audience skews to technical people, but I seem to rarely see discussion of websites in terms of design patterns, information architecture and usability considerations. That's what makes websites worth working on. The tooling is the most boring part.


Nah, the golden era is long behind us. Every site looks exactly the same using all the same bloated libraries, bootstrap CSS and hosted on the same 3 cloud providers.

Gone are the days of instant-load 20kb sites or quirky Flash marvels hosted on anything from a home laptop to a VPS.

In the mobile first mess where you have to debug your crap not just on multiple browsers but devices there's never been a more mundane time. I won't even get into plumbing.


sanity.io sounds and looks awesome! I didn't know about it until now.

I'm wondering if Sanity would be a good fit for creating static websites for other non-technical people but still give those people the power to update the content themselves.

I've been trying to accomplish that with Airtable but I quickly ran into annoying limitations.

For example, I'd like to create a gallery website for my artist brother (something similar to this: https://brooksburgan.com) and give him the possibility to re-arrange, add and remove images without going into HTML or JS.

Can anyone who used Sanity let me know if this is a valid use case?


Yes, it's actually the main use case for these "headless" CMS systems like Sanity, Contentful, Strapi, etc. Even Wordpress can be used as an editor only with content pulled via API into some other system or build process so that the website/frontend is completely separated from the content.

Search for "headless cms" and you'll find countless articles about this.
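
As a rough sketch of what that separation looks like in practice (the endpoint URL and field names below are placeholders, not any particular CMS's real API), the frontend side is often just a build script that pulls JSON over HTTP and renders it however it likes:

```js
// Hypothetical build-time script (an ESM file on Node 18+, where fetch is built in):
// pull content from a headless CMS's HTTP API and emit plain HTML files.
import { writeFile } from 'node:fs/promises';

const res = await fetch('https://cms.example.com/api/posts'); // placeholder endpoint
const posts = await res.json();

for (const post of posts) {
  const page = `<!doctype html>
<html><head><title>${post.title}</title></head>
<body><h1>${post.title}</h1>${post.bodyHtml}</body></html>`;
  // Assumes a dist/ directory exists; post.slug and post.bodyHtml are made-up fields.
  await writeFile(`dist/${post.slug}.html`, page);
}
```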


Pardon me, but I seem to be suffering from terminological confusion.

I'm puzzled by this business of "building" static sites being a process that takes minutes. Perhaps my understanding of the term "static site" is awry; I parse it as a site consisting of HTML markup, JS and CSS, all that code being handmade, rather than generated. If that's right, then a "static" site doesn't need to be built.

I presume it's supposed to be contrasted with a "dynamic site", which I parse as a site in which the code delivered to the browser is not the code that was written by the developer; instead, the developer's code generates the browser's code at runtime (but it's still not obvious to me why that should need a build step).

Or are we talking about writing sites in meta-languages, that have to be subjected to a translation process to yield code suitable for browsers? That would explain the build step. I can certainly see the sense in layering a meta-language over CSS, but generated HTML is just annoying to my mind (and generated Javascript seems like a recipe for madness).


> Or are we talking about writing sites in meta-languages, that have to be subjected to a translation process to yield code suitable for browsers?

It's too laborious to handcraft each page, so people would use template engines and things like Markdown -> HTML converters so they would only edit the meaningful part of the page manually.

Some also choose to compile a dynamic site into a static site. WordPress plugins allow for that, for instance. Doing so, you get all the advantages of a dynamic CMS, while keeping the ability to serve the site as a bunch of plain HTML files. The process of compiling would be the 'build' step you are wondering about.
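
For a sense of scale, that "build" step can be tiny. A minimal sketch using Node and the `marked` package (the content/ and dist/ folder layout is assumed; a real generator adds templating, assets, and so on):

```js
// Minimal static-site "build": turn every Markdown file in content/ into an
// HTML page in dist/. Assumes `npm install marked` and that both folders exist.
import { readdir, readFile, writeFile } from 'node:fs/promises';
import { marked } from 'marked';

const files = await readdir('content');

for (const file of files.filter((name) => name.endsWith('.md'))) {
  const markdown = await readFile(`content/${file}`, 'utf8');
  const body = marked.parse(markdown); // Markdown -> HTML string
  const page = `<!doctype html><html><body>${body}</body></html>`;
  await writeFile(`dist/${file.replace(/\.md$/, '.html')}`, page);
}
```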


Thanks for the reply.

> all the advantages of a dynamic CMS

I spent ten years making websites using Drupal, which I take it from your context counts as a "dynamic" CMS. FTR, my work was coding; I never got into the trade of making sites by glueing other people's modules together.

Well, that kind of site development didn't involve a build step. You could modify the PHP code for a module, load the page in a browser, and immediately see the result of the change. The only things I can think of that required a build step were:

* SASS

* Some weird Google technology that I avoided like the plague (and that I believe is now considered obsolete)

SASS was quite good for making it easier to understand CSS. The need for a build step was annoying, though. My colleagues were also working with some technology with a name beginning with 'R', that generated HTML on the fly, in the browser, using JavaScript - not just using templates. I never engaged with it, and I couldn't read the generated code, because you couldn't view the page source.

If "dynamic" means "the content comes from a database", as in Drupal or Wordpress, then I think my ignorance is excusable; a Drupal site would behave exactly the same if it consisted entirely of static HTML, JS and CSS.

I once worked with a CMS that used a database to hold the content; but which required a "generate" step to turn the database content into static pages, which is what the site actually served. But that wasn't a "build" step, because it ran on the server in the background, every ten minutes or so. The submission page was a Java servlet, but the Java code only ran when you were submitting new content. The generated static pages could be dropped into the document-root of any HTTP server, including ones with no JRE.

Of course, the "generate" step here was a convenience, but inessential; the servlet could have converted new content to static markup at the time of submission, rather than on a timer. It was just more efficient to do the whole site at once, rather than rendering each page at the time of submission. This was a large site, with tens of thousands of pages, and many, many comments per page. We considered it a static site.


> If "dynamic" means "the content comes from a database", as in Drupal or Wordpress

"Dynamic" simply means that you aren't serving a static file from your server, but - in case with Wordpress and Drupal - pulling all the requests through a script that processes them, and returns generated plaintext data (HTML/XML/whatever) every time.

If you move the response-generating script/software from your public web server, and only upload plain HTML files there, you get what's called a "static" site.

If you don't get why people add an additional "build" step to turn their "dynamic" sites into "static" ones, here are a couple of common selling points:

- You get to maintain less moving parts on your "production" server and reduce the load

- You can get rid of your public server altogether, and host your "static" site on GitHub pages or a similar 3rd party service. In this scenario, content management and build process would take place on some other (remote or local, doesn't matter), "dynamic" server with the interpreter installed

Hope that clears things up a bit.

Edit: I failed to mention that most static site generators (that I've heard of) build the entire site at once. A snapshot of the site (an artifact) is built, hence the "build step" naming.

Edit2: Removed some things that are beside the point.


Thanks for the clear explanation.

I understand why one would render the entire database to HTML; that's what we used to do on our servlet-based CMS. What I was unclear about was the static/dynamic terminology. Now it's clear.


static: HTML, CSS, images, JS, all served from the server's filesystem to the site visitor over the network. no server-side code execution to generate the page content.

dynamic: non-HTML server-side code (PHP, Perl, Ruby etc.) is evaluated/executed to generate site content which is then sent over the network to the site visitor. this may involve hitting databases to get content for the requested page, or even make network queries for integrations with other services (social media feeds, payment processor sessions, etc.). the server then returns the generated HTML to the visitor.

I just realized these two Wikipedia articles summarize the two different approaches quite well:

https://en.wikipedia.org/wiki/Static_web_page

https://en.wikipedia.org/wiki/Dynamic_web_page
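
A toy illustration of the two definitions side by side, using Node's built-in http module (a sketch only - no path handling, caching or error handling):

```js
import { createServer } from 'node:http';
import { readFile } from 'node:fs/promises';

createServer(async (req, res) => {
  if (req.url === '/about') {
    // "Static": the HTML already exists on disk; the server only sends bytes.
    res.end(await readFile('./public/about.html'));
  } else if (req.url === '/now') {
    // "Dynamic": the HTML is generated at request time - here from the clock,
    // in a real app from a database, templates, other services, etc.
    res.end(`<html><body><p>It is ${new Date().toISOString()}</p></body></html>`);
  } else {
    res.statusCode = 404;
    res.end('Not found');
  }
}).listen(8080);
```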


> Perhaps my understanding of the term "static site" is awry;

My understanding of it is that it's a site that serves simple HTML pages, so it's quick since the server doesn't need to build it at each pageload (most pages basically don't change, yet there's something running on the server that builds it every single time a visitor comes to look at it, which is a massive waste)

But those HTML pages absolutely are "built" (once!) by some process.

A static site could also be serving HTML pages that someone crafted by hand but then there wouldn't be anything dynamic like data from a DB involved (because that requires a server and a layer that queries the DB and prepares it for display).


I can't disagree more.

Static websites are valid and incredibly relevant, no need to prop up the friends at Remix (which is a nice product, just not the end all, be all - we were doing this 20 years ago as well).

You can very well show changes immediately with different strategies - but having a single artifact of what's online is invaluable. Caching dynamic content is fine, but having a log of what changed is nice as well.

A setup I like particularly is to have a real-time CMS available privately or locally - and a big publish button which builds the static website and serves it.

That said, the only positive of today's web is that crossplatform compatibility is mostly a solved problem.

Performance went down the drain to the point that you need a beefy computer to browse multiple websites at once. Data usage is at an all-time high. Browsing on a pay-as-you-go phone is pretty expensive.

The development world went batshit crazy (likely driven by resume-driven development and cargo culting), which means your average codebase today is massively more needlessly complicated compared to your average codebase 20 years ago.

Development experience in your average codebase is also way slower with all that transpiling (running on a scripting language with bad performance). My Rust feedback loop (not the fastest compiler among backend languages) is faster than my TypeScript one.

20 years ago I loved creating websites, despite the challenges. Today, I try to avoid it as much as I can and do backend.

I still create frontend for my own websites and it's overall as great as 20 years ago - but I don't use the slow mainstream tools which I'm forced to use when working with clients.


Tailwind doesn't "solve CSS". It destroys it. Tailwind is nothing more than a layer of obfuscation on inline styles spelled directly. There's no semantic component.


Tailwind is a design system. It's not the same as inline styles because the options are constrained and are picked to fit on a coherent scale. I consider the inability to do this natively to be a longstanding gap in CSS itself and have been doing my own variations of the tailwind utilities with sass mixins since 2009.

I'll point out that CSS has no semantic component in itself. It's just a way to apply styles. If you want to keep semantic class names then you can apply the tailwind design language using the postCSS `@apply` in the body. There's a tradeoff between the value of semantics against the value of greater code locality and elimination of specificity as a concern. In practice most people seem to prefer the latter.
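
A small illustration of that trade-off (the utility classes are standard Tailwind; the `btn-primary` name is made up, and the `@apply` rule would normally live in a CSS file processed by Tailwind/PostCSS at build time):

```html
<!-- Utility-first: the constrained design tokens sit right in the markup. -->
<button class="px-4 py-2 rounded bg-blue-600 text-white hover:bg-blue-700">
  Save
</button>

<!-- Semantic alternative: keep a class name and pull in the same tokens
     with @apply at build time. -->
<button class="btn-primary">Save</button>

<style>
  .btn-primary {
    @apply px-4 py-2 rounded bg-blue-600 text-white hover:bg-blue-700;
  }
</style>
```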


> Tailwind CSS is a design system token generator

This exact quote beautifully sums up Tailwind and should explain part of the "Why?" of Tailwind to those that do not understand why you don't just write plain old CSS instead.

Yes, you'll have to learn some new tokens in lieu of simply writing CSS statements. But what you get with Tailwind is a utility-oriented design system that allows you to put constraints on your design. These constraints add consistency to your design. With Tailwind (and similar frameworks/libraries) you get these constraints, and the consistency that follows, cheaply. With plain CSS you need to actively work to put constraints on your design (e.g. building out a design system using CSS variables). There, constraints are opt-in, not opt-out.

That utility based CSS frameworks like Tailwind also tend to solve specificity issues is simply a bonus.


I hate every new technology listed in this article. I recently removed all of my SQL-in-programming-language code (ORM, strings of SQL, etc.) and just wrote actual SQL in files ending in .sql that I load at run-time. I am now, when my finger heals, going to do the same for React. I hate React. I have been using it for years and I have never enjoyed it. I feel like everything made after the boomer technology just sucks. I am going to write HTML, CSS and JavaScript as intended - not fake JavaScript (React), not mangled CSS, etc. I noticed when setting out on this journey that vscode (and I am assuming others) has little support for these things, which is hilarious. vscode can't even detect unclosed meta tags in HTML, yet can detect a million React syntax errors.


> just wrote actual SQL in files with .sql at the end and load them at run-time

I like this idea, but I'm interested to learn how you are handling dynamic queries -- e.g., when you want to change the order-by field? You can use CASE in some situations, but it has limitations. Another option is to construct the dynamic query in SQL rather than in your outside-the-db application code. Just curious how you approached and solved this?


Sorry if this reply is shorter than I would like due to my hand, but the dynamic aspects I have encountered have been solved by CASE WHEN as mentioned, boolean expressions on parameters (where $1 is true or id = $2), or in the worst case just not caring about DRY in this scenario. Also I use json_agg etc. for ORM-like mapping to arrays and objects. I do some other tricks too, like bypassing a bunch of redundant and slow parsing steps by sending the result of certain queries back to the app server as a single text column containing JSON (this is via wrapping queries, not manually) so it can send that to the client. No db driver parsing, converting to objects, back to JSON, etc.
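
For anyone curious, that optional-filter trick in a standalone .sql file looks roughly like this (table and column names invented for the example; the file is read from disk at runtime and handed to the driver unchanged):

```sql
-- posts_list.sql
-- $1 = true means "no author filter", $2 = author id, $3 = row limit.
-- json_agg collapses the rows into one JSON value, so the app server can pass
-- the resulting text column straight to the client without re-parsing it.
select json_agg(p order by p.published_at desc) as payload
from (
  select id, title, published_at
  from posts
  where ($1::boolean is true or author_id = $2)
  order by published_at desc
  limit $3
) p;
```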


Thanks for writing what you have. These all sound like reasonable approaches. Just to add some extra options/thoughts because I’ve been thinking about this lately, for cases where CASE doesn’t work well on order by, you can use dynamic queries to construct the final query in SQL. Another option might be to include a splash of templating to your .sql files, and parse them at run time, or use them to generate the final .sql files beforehand so that they can be tested directly via something like pgtap (repeat yourself via templates instead of by hand). The main reason I like the sound of including the queries in separate .sql files is precisely so that they can be easily and directly included in tests. The downside is the query isn’t included next to the code that uses it, so a little hunting needs to be done to find it (always tradeoffs!).


Check out https://www.hugsql.org/. It's a Clojure library but I'm sure I have seen Python and JS versions around.


As I've gotten older (about a decade's worth of professional experience at this point), I've started feeling the same way. It's so tempting to build up these abstractions and pipelines and introduce new tools but at the end of the day, what is it we're trying to achieve? It's often the case that the abstractions chosen were the wrong ones for the job and staying at a lower "layer" of abstraction results in more benefits in the longer term (i.e. lower layers move at a slower pace). If a project's lifetime is longer than 2 hype cycles, it may be worth considering digging down a layer and doing things there.

Whenever a subject like this comes up, I'm always reminded of Stewart Brand's "pace layering" [0].

[0] https://imgur.com/V5oL5WZ


Wouldn't it be better to put SQL statements in stored procedures rather than dynamically loaded text files?


There's plenty of queries you may find yourself scattering throughout your application that you don't want to have in the database as a stored procedure, if for no other reason than it being overkill and extra work.

If it's business logic in the query that might get re-used across independent applications (e.g., cancelling an order), then I would think a stored procedure is better. But if it's specific to that application (e.g., fetching title+description+publication date of the five most recent blog entries for a side panel), I wouldn't care to put that in the db as a stored procedure.


I don't understand the upsides of having your queries in separate sql files. May I ask you to elaborate a bit on this point? Thank you very much.


Old man yells at cloud


Yup that is me. I do what I like doing though, I don't yell at people or care what they do. I use lots of new things too like JSON web tokens, JSON SQL functions, vscode, new JavaScript/html/css features, etc.


I used to be excited about building websites ~10 years ago when it was about building web pages (you know, the thing HTML and friends were designed for). I "rage quit" after people started (ab)using web development to build approximations of apps. The reality that in the end the majority of people don't care about "use the right tool for the job" hit me hard. I would have never imagined that big companies with good enough budgets would still build applications with HTML just because "everyone knows JavaScript". It is all so demotivating -.-

These days I just try to find niches where people still care about using the right tool for the job and writing efficient GUI software. Let's see how long these continue to exist.


Without people shoehorning applications into the Web, the web itself would be dead. Nobody would have a need for an electronic document system that only supported static text and images.

Apple and Google would be celebrating because the world would self-corral into their walled gardens because they alone would control the runtime environments for mobile applications.


10-12 years ago, I had to make a CRUD / booking website - I had zero experience with webdev. I got a working example up in about 3 days, using MySQL, ASP.NET, HTML/CSS - that's about it. I had mainly experience with native windows development in .NET, so the back-end part was pretty easy to learn.

10 years later, I had to do something similar...I think I spent a good day just setting up things, before writing a line of code. And then a couple of weeks for the rest, learning by doing.

I don't know - websites today are much more complex...in fact, I'd say that most such websites are basically web apps, which have replaced the native software we wrote 10-15-20 years ago. But I didn't end up with a good feeling after I was done.


There may never have been a better time to build websites, but there's been a better time to design them.

All this over-engineered, developer-centric stuff while powerful and handy in some regards has destroyed the art of web design.


Preach! I long to read more about the substance of web design, rather than the tooling. Art direction, style guide, content strategy, information architecture, usability testing.

If I wanted to read another paean to JS-first development, I'd go to dev.to.


I think that it is important to take into consideration all of the possible additional overhead and costs. Just because something can be done doesn't mean that it has to be done. The main question is: why do you have a website? If it is to learn new things and play around, then frequent changes are OK. But if you need something that will be stable and easy to maintain over a long period of time, you should be more careful with your choices.


It's never been a worse time to browse the web.


Last weekend I tried to get my first website online, and even though lots of information is out there, I was still surprised by how unintuitive the products are.

The learning curve is still steep and I'm not sure if the end product will come close to what I have in mind (an informational knowledge base).

All I want is a simple website, that runs on multiple devices without being convoluted by plug-ins.


Remix uses Server Side Rendering (SSR) done at run-time. As any other technology, it has not only advantages but also drawbacks: https://github.com/winwiz1/crisp-react/blob/master/docs/benc...

Tailwind is powerful, consistent and comprehensive but again the advantages come not without a drawback: In order to use it effectively one needs to learn/memorise yet another CSS. I have better things to do and think it's more efficient to use a set of CSS management approaches: https://github.com/winwiz1/crisp-react#css

Again, each approach has its advantages and drawbacks so one can try to minimise the latter and utilise the former. While leveraging the knowledge of only one CSS: W3C CSS.


This article struck a lot of chords for me. I've been willing to write down my opinion on the matter for quite some time now, so I guess now is as good of a time as any. My sincere apologies for a lengthy meditation:

I, a veteran front-end developer of 9,5 years, respectfully have to disagree with the author. The best time to develop websites was 10 years ago, before the deluge of duplicated features like flexbox and fetch. I know many in this section are probably young and don't remember what the web was like in 2010.

Allow me to sketch a picture in 3 acts, "Parler comme une vache espagnole" ("to speak like a Spanish cow"), as the Congolese like to say.

Firstly: We finally were getting serious about semantic html. Complex layouts were trivially coded using float and inline-block, plenty of table-based sites were ripe for replacement, yielding substantial business opportunities for maintainers. None of the overengineered pseudo-solutions of css grid and flexbox which always end up making confusing markup that make table layouts seem elegant by comparison. The web is a text medium, why pretend it isn't?

Secondly: I also very much dislike this trend of writing inline-styles. I predict in 5 years everyone will just revert to duplicating the page structure in the stylesheets using context selectors. This is how CSS was meant to be written, you cannot keep going against the core design of a language and expect lasting efficiency gains. The Cascade Is Your Friend.

Thirdly: We were still reeling from the financial crash the U.S. government had created, but webdev was almost unaffected. In fact everyone's nanny and her grand-uncle needed a page. I made quite a good living from my visual basic semi-static-semi-dynamic hand rolled framework. I wish I had learned PHP earlier, I could have made even more. C'est la vie.

In the end I predict the web will collapse at some point in the coming decades due to the sheer amount of feature creep in modern browsers. That deal was probably sealed the moment webassembly was introduced. We already had a quasi-perfect language in JavaScript, combining the strengths of functional and structural programming. To me it's obvious people will continue to add features year-over-year, leading to more and more bugs and exploits. Part of me fears this is by design: Google has a quasi-monopoly on browser tech and they are an advertising giant controlled by ruling class interests. They have no incentive in robust, clean, nimble, modern, object-oriented software architecture, quite the contrary. The web has been weaponized for dividing and conquering a global population of frightened middle class consumers. And history has shown large middle classes never last.


Well, this guy's site needs a lot more work. I tried to view it and every time I attempted to scroll down a giant black bar obscured the text. Bad CSS. And with CSS disabled, between every single paragraph this guy has GIANT "comment" buttons that fill the screen. It's unusable as styled and unreadable unstyled. That's quite an accomplishment.

The best time to build websites is now. Everyone has the upload bandwidth that enables them to host from home. Every computer has the resources to host a website without breaking a sweat. You can still just <html><head><title>my site</title></head><body><h1>My site</h1></body></html> and throw some images in a directory and your site will be unhackable, last forever, and extremely fast rendering.

All you have to do is ignore everything this guy says. He seems to mostly be talking about commercial jobs for corporate persons and not human people.


Lots of negative comments, all well grounded.

I totally agree with the title, for totally different reasons. But I don't agree with the article content.

However, what's sure is that I won't use Sanity after this article. I was undecided until now, but now I'm pretty sure the product wouldn't be so great if the author contributes to it.


This weekend I taught people who had never coded before how to write HTML. I had already prepared the CSS for the basic elements, so everything would look OK. They were scared by the "code"/editor at first, but once I explained <tags> = elements, <p> for paragraph, <br> for a line break, they quickly grasped it, so after a few more minutes I had also introduced the img element/tag and lists...

Building a web site is still quite hard, e.g. setting up a server, writing the CSS, setting up the site generator, etc. But editing a website has always been easy. It's just that people are too scared when they see code. And screw all those frameworks. You do not need a web framework in order to put text (and maybe some media elements) on the web!
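
For reference, the entire "lesson" fits in a file you can open straight in a browser (the image file name is just a placeholder):

```html
<!doctype html>
<html>
  <body>
    <h1>My first page</h1>
    <p>A paragraph of text.<br>A second line after a break.</p>
    <img src="cat.jpg" alt="A photo of my cat">
    <ul>
      <li>a list item</li>
      <li>another list item</li>
    </ul>
  </body>
</html>
```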


There has never been a better time NOT to build websites... and just use something like Notion/Super or Squarespace.

The learning aspect aside, building a website from a framework is for big corps with insane budgets or those who don’t have enough chargeable work to fill their day.


For my money, the most exciting thing to happen to front-end development in a long time is the rise of Elm ( https://elm-lang.org/ ) I'm surprised nobody's mentioned it yet.


I can share my (unsolicited) thoughts FWIW.

I have never been a UI or front-end developer (although I am a professional programmer), but I do enjoy dabbling in making websites and keeping up with mainstream technologies for this area.

I do find that for amateurs like me, building websites is more enjoyable than a couple of decades ago. Deploying static sites was never easier and the tooling was never better. I can make a quasi-professional site with just some Markdown, sprinkling on some off-the-shelf CSS and minimal JavaScript, and we're done.

Now, regarding professional developers, I look at the pace at which all tech stacks get obsolete and it's quite scary personally. I'm not sure it's fun.


For a while my personal website https://old.habet.dev was the result of some free HTML template that I reused. But I got bored of it and wanted to start a blog. So over the weekend I downloaded WordPress, set it up and voilà, a few hours later I had a fully functional site with a blog (thank g-d for open source). It was a thrill to get it going. https://habet.dev/2021/12/self-host/ I'd love to hear your thoughts on my site.


I have some reservations about Tailwind. It seems to me that the only use case it makes easier is directly copying and pasting HTML code that works, but in general I think you are right.

One missing piece of the puzzle in these kinds of articles is the lack of references to the fourth leg of web development, template systems (the other three being HTML/CSS/JS). There have not been many improvements on that front for years. That's the main reason why we built https://stack55.com.


This feels like "There's never been a better time to build buildings..."

But then the rest of the article is about 4 story mixed use apartments and not necessarily any other type of building.


The uncaught error upon loading the site is kind of ironic.


There was only one good time to build websites, and it was in the early days of the Internet.

Now the market is overcrowded, with little opportunity to differentiate yourself.


I can always predict that when the article is optimistic, the most upvoted comment will be equally cynical.


No need to name components? Good luck with the designer-developer communication, design system etc.


I remember when I was 17 (in 1999) and printing off an HTML tutorial website as a giant stack of A4, so I could read it offline. Anyone remember hasLayout?

Now it all seems relatively simple to learn things, although picking what to learn is probably harder.


Ironic as it seems the site is down even though it's behind cloudflare.


"It has never been easier to sweep things under the carpet."


And… deployment was easy. There is a video on YouTube teaching how to deploy a simple Django CRUD app. The video length is 45 minutes. We are going backward.


Deployment has no doubt gotten more complex. However, video length isn't a great gauge of relative complexity - nor is video a great medium if you want to be succinct, for that matter. I'm willing to bet there are articles covering the same topic which you could read in less than a third of the time.


> CSS is effectively solved

It never ceases to amaze me how many smart people are ready to spend 25 years "fixing" a thing that shouldn't exist.


The more I learn about web developers, the more I like Gemini.


Eh, it was pretty fun in 1996.


The title intrigued me because it _is_ something I agree with. Speaking as a mostly frontend dev, it's been easy to experience a breadth of innovation and new programming/deployment paradigms. Lots of interesting rabbit holes to go down. However, I have trouble agreeing with some of the content from the author.

> Learning materials are almost unlimited

This is something that's hard to argue against. We do indeed have an astounding number of paid/free resources. However, I feel we have some serious challenges ahead of us. Trusting that a resource is relevant and up to date is harder than ever. It's easier to navigate if you're an experienced dev, but sifting through the plethora of resources can cause more friction than not. To me this point begs the question -- is endless "free" information really a feature of the new web development frontier? To me it seems it's both a blessing and a curse; furthermore, it's not exclusive to web development. Many fields have a glut of information, but the challenge is navigating the good and bad content.

> Frameworks are lifting each other up

I don't find any evidence cited in the article that frameworks are lifting each other up. To me, it's an arms race and it has become more cutthroat as framework developers have realized that they can build businesses on top of them. It does create competition to attract devs concerned about UX/DX and innovation.

Some of the best frameworks in the space IMO (Astro and Redwood for example) buck the trend of framework/vendor lock-in, but the author only mentioned frameworks clearly focused on platform adoption.

> CSS is a solved problem

Hard disagree. CSS, the language itself, has gotten much better over the years. Tailwind is not the winner, and you should expect that once a new hot CSS framework becomes available, all of the Tailwind hype-crew will disappear and you'll be stuck maintaining/refactoring it away.

Further, Tailwind today is not for everyone. It's not a silver bullet. There is still innovation and there are trustworthy solutions in the space (CSS-in-JS, CSS Modules), but IMO CSS is an evolving language and not simply a solved problem. Avoiding writing CSS does not solve the problem.

> GitHub Copilot

I think it goes without saying that YMMV on this one. There are still too many unknowns and mixed reviews to say if it's a net benefit to building websites. It certainly created some conversation/controversy, but it's not really fair to say that GitHub Copilot makes building websites any better. I'm still on hold as to the benefits of this one.

> Content management is limitless

No doubt there are more players in the space, but the author clearly stated bias. I'm glad that these options exist; however, I don't have enough knowledge to say what the limits are. Headless CMSs solve certain problems and may not help or be useful in building many different kinds of websites.


Just what i needed boss


WebGL for the win.


Truth



