I'm a fucking webmaster (2016) (justinjackson.ca)
332 points by memorable on May 9, 2022 | 222 comments



Back in the early 2000s, I taught "computers" to a bunch of kids in DC. I first started off with how a computer works: I/O, CPU, storage. Their eyes glazed over.

I quickly ditched that topic and went right to building a web page. I taught them a few HTML tags and they instantly became "webmasters". They were thrilled: they could create blinking text and make all the fonts pink, all just by moving some simple text around in Notepad.
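For anyone who never saw that era: the pages those kids wrote looked roughly like this (a reconstruction, not the actual lesson material; `<blink>` and `<font>` are long deprecated, but they worked in the browsers of the day):

```html
<!-- A 1990s-style page: deprecated tags, but instant gratification -->
<html>
  <body>
    <font color="pink" size="7">My first web page!</font>
    <blink>Welcome to my site</blink>
  </body>
</html>
```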

Most importantly, they were creating things they had thought only "scientists" could create. I loved how it boosted their self-worth.

I stopped teaching a long time ago. Not sure if there's still tools out there where a 13 year old kid with no technical experience can immediately get gratification with something so simple on a computer. Maybe it is a time gone by.


I tutor kids in my free time, from age 4 up to college. I find that Processing/p5.js is a great tool for teaching younger kids to use their imagination through the vessel of built-in functions plus tweaked arguments, instantly seeing their input on the canvas in the form of bright, flashy colors/shapes. It requires a much more hands-on approach and can really only be done well in one-on-one settings.

While I wish there were a way for me to be less of a navigator in terms of the amount of code that goes on the screen, I have come to find that kids are generally exceptional at pattern recognition. Even my student who is 4 (although he's a smart kid) is capable of writing his own scripts now without much guidance in his logic/syntax.

I had my own start with MicroWorlds as a kid, and played with NetLogo in a college class. That made me a staunch supporter of kids getting into code through agent/turtle-based environments.


I agree, Processing/p5.js is great for beginner programming.


> Not sure if there's still tools out there where a 13 year old kid with no technical experience can immediately get gratification with something so simple on a computer.

Lego MindStorms! There's a lab at my local university filled with Lego and basic robot sensors/actuators. Schools and other community clubs can book one-time or regular workshops.

What fascinates me the most is the adaptability to the students' abilities. Young students can do "block-based programming", telling the robot what to do with very simple command blocks that connect to each other. As proficiency rises, more advanced, more powerful blocks can be made available, resembling software control structures. New sensors can be used, and the students build the robots themselves instead of using prebuilt ones. At some point, you can ditch the graphical block-based programming, show the generated Java code, and introduce the students to basic software development.


My first experience of programming was walking up to an Apple II in grade school (this was in the 2000s; those were some old machines) and punching in code listings from a book. The machines didn't do anything when you powered them on, just waited for you to enter code or commands. It was so simple, and it just begged you to start writing programs.

Now people talk about getting their kids started with programming and end up talking about Raspberry Pis and Linux distros and whatnot. Granted, the tools now are so much more powerful, and a determined kid can write a best-selling iPhone app, but there's something wonderful about a machine that presents itself to you as a blank slate, awaiting instruction.


Same way I got interested in programming, except that instead of an Apple II (which was prohibitively expensive and rare in Brazil under import substitution in the 90s) I had access to my dad's Gradiente MSX and manuals for BASIC.

When you booted up an MSX it was the same: a bare command line waiting for input. Learning to command that was quite magical for 8-year-old me, and it was my intro to programming in general.


> The machines didn't do anything when you powered them on, just waited for you to enter code or commands.

I am reminded of the old Kids React videos (which themselves are now nearly a decade old)... Kids React to Technology (an old Apple ][+) https://youtu.be/PF7EpEnglgk


Similar story, but newer hardware. I was given a machine built from leftover parts from the IT company my dad did sales for. It booted right to the CLI. I was given a giant Red Hat manual and told to have fun. It was awesome.


At some point in 2013 I watched Lecture 1 of Harvard CS50 because everybody said that was the best starting point for people interested in web dev, product development, tech in general.

I just wanted to make a website to start off with, just a bunch of linked pages. I already knew a tiny bit of HTML from college, but that was years ago and I wanted to know what had changed. So you can imagine my confusion watching that CS50 lecture as it was seemingly covering everything EXCEPT the stuff non-technical end users are familiar with: websites, apps and games.

Yes, yes, I now know that CS isn't about coding but about underlying fundamentals. But that doesn't change the reality that CS50 has been watched online by many, many more people than have ever sat in a CS class, and those people likely had goals as simple as mine.

Long story short, I took a video course on webdev and learned far more in terms of immediately applicable skills. Again, I wasn't trying to get hired at FAANG. I just wanted to get to the level of 'webmaster', where I knew just enough to handle markup, hosting, SSL, and CRUD functions. Learning by doing has been far more rewarding than following online courses.


> I watched Lecture 1 of Harvard CS50 because everybody said that was the best starting point for people interested in web dev, product development, tech in general.

That was not good advice. CS50 is an excellent, introductory, survey CS course but it's still a CS course.


Most advice aimed at absolute beginners is bad, because the questions are bad. E.g. "what's the best programming language?"

You and I may know that the question is a bad one, but beginners don't know that. And when they ask, they aren't told to rephrase the question as "What would I like to build?" Someone will just say "well, I know JavaScript and..." and it goes from there.


> Not sure if there's still tools out there where a 13 year old kid with no technical experience can immediately get gratification with something so simple on a computer.

I can't answer that for a 13 year old, but I can for an 8 year old. Minecraft. The elaborate stuff my daughter built at that age is way more gratifying than anything I could have built as an 8 year old.


My then-5-year-old built rollercoasters single-handedly. I helped embellish them with trigger-driven fireworks and other stuff for some combined quality time and screen time. Minecraft is such a perfect game for little kids, and it works "offline" pretty well.


Glitch (https://glitch.com) is the closest thing I can think of right now.


You can still just use HTML.
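To make the point concrete, a complete, valid page today is still just one text file. A minimal sketch: save this as index.html and open it in any browser, no server required:

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Hello</title>
  </head>
  <body>
    <h1>I'm a webmaster</h1>
  </body>
</html>
```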


This does indeed work fine. The major problem is still finding somewhere to host it, and the risks of doing so are still non-trivial: among the many things students want to do with free web space, hosting warez and other things that will get you in trouble quickly is still pretty far up the list. Plus someone has to pay for a domain name or something.

If you can get a modern text editor that can use SSH or something to mount a remote directory you can easily have an HTML party like it's 1999. That's easier than ever. But the surrounding issues are worse than they were in 1999, unfortunately.


It's never been easier to host a webpage than it is today. You can host hundreds of student sites on a $5/month VPS. The free tiers of AWS/GCP/etc. are extremely capable. There are free services like Wix and Glitch that let you build a webpage and then mess with the HTML, no domain name needed.

The problem of warez is certainly no worse than it was 30 years ago, and there are much better tools to address it. A simple storage quota probably goes a long way.


Signing up for and using AWS or GCP is orders of magnitude more complex than those old free hosting services where you got an FTP username and a password. In fact you could upload directly from FrontPage, IIRC, but I could never get it to work.

It's simpler for us software engineers, less simple for the snot-nosed webmaster wannabes we were.


Thank goodness you don't need to do any of that then.

If all you want is to host static HTML, then any free hosting service will suffice. For example, let them create a GitHub account and show them git add / git commit / git push, and you're done.
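To make "three git commands and you're done" concrete, here's a hedged sketch of the GitHub Pages flow. The site content, repo name, and remote URL are placeholders, and the push step (commented out) assumes you've already created an empty repository named <username>.github.io:

```shell
# Sketch of the "GitHub account + three git commands" workflow.
mkdir mysite && cd mysite
echo '<h1>Hello, web!</h1>' > index.html
git init
git add index.html
# Identity set inline so the example works on a fresh machine
git -c user.name="Student" -c user.email="student@example.com" \
    commit -m "My first page"
# With an empty GitHub repo named <username>.github.io created first:
#   git remote add origin git@github.com:<username>/<username>.github.io.git
#   git push -u origin main
# ...the page then appears at https://<username>.github.io
```

The commented remote/push lines are the only part that touches the network; everything before them is plain local git.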


If you can use git, you can run a webserver. It's all just right there waiting for you.


That's because someone did the work of setting up a server for you, configuring the services, and issuing FTP credentials. That part is now a whole lot cheaper and easier and more secure.

It doesn't get much easier than signing up for e.g. Glitch.


Setting up a server is so unbelievably simple that anyone capable of writing software shouldn't blink at the task. The fact that there's a lot of blinking strikes me as profoundly funny.


I mean, it isn't?

Even the simple matter of port forwarding assumes you have access to the router you're using, and that's a major barrier for a lot of people.

I run the programming club at my high school (I'm a high schooler), and we use GitHub Pages to host all of our stuff for this very reason.

I've used some old laptops to run servers within the network, but my school doesn't want us hosting anything public, and running everything through a free tunneling service like localtunnel is possible, but it scales so much better when students can just follow some friendly instructions to host a static site themselves.

It's not hard for people with the privileges of (1) having a budget > $0 and (2) not having the adults tell you 'no' every time you want to try to learn something.

Am I missing something? Even hosting static content on a server seems like a layer of complexity that is frequently unnecessary for basic use-cases.


You just, as you say, follow friendly instructions. It's as easy as reading a page of a manual.


In 1999? I mean, I did it, but I didn't think it was easy. Every single step was harder then than it is now.


Here's what we do with beginner developers who have never coded before:

- Build pages in CodePen so they can see everything build as they go, fork from templates, and share/riff off each other
- Export from CodePen to a folder so they can see that static sites are just text files in a folder and nothing mysterious (at this point I also introduce an editor like VS Code)
- Drag and drop that folder onto GitHub and deploy to Netlify

You can do this in an hour with brand new coders. https://syllabus.codeyourfuture.io/fundamentals/week-1/sessi...


Huh? Just use neocities.org. Even has a built-in editor on the site.


HN knows Neocities, and everybody knew GeoCities once. How is anyone supposed to sign up there if it's so niche it's not known outside of tech circles?


Tell people outside of tech circles.


Maybe teach using local html and files and save hosting for the end.


You can even just run a full server right on your home computer and share the webpages over the LAN. It's far more valuable to learn how Google/Amazon/Microsoft do it than to learn how Google/Amazon/Microsoft want you to do it, anyway.
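A minimal sketch of that idea, assuming Python 3 and curl are installed: the built-in http.server module is enough to serve a folder of HTML to the whole LAN. The folder and page here are placeholders.

```shell
mkdir lanparty && cd lanparty
echo '<h1>Served from my own machine</h1>' > index.html
# --bind 0.0.0.0 makes the server reachable from other devices on the
# network; classmates then browse to http://<your-local-ip>:8000/
python3 -m http.server 8000 --bind 0.0.0.0 &
SERVER_PID=$!
sleep 1
# Fetch the page back locally to confirm it's being served
PAGE=$(curl -s http://127.0.0.1:8000/index.html)
echo "$PAGE"
kill $SERVER_PID
```

Anything beyond the LAN still needs port forwarding or a tunnel, which is exactly the barrier discussed above.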


I think there’s quite a lot of options for static webpages now though. Would GitHub pages not be perfect for this?


Yeah, GitHub Pages is basically GeoCities with a little extra data farming by Microsoft. Works fine, if you don't mind that.


Netlify has drag and drop deploys now. Can't get simpler than dragging a folder onto a webpage


Netlify for free static hosting. Dirt simple, no credit card required, decent free tier.


At least the doctype is simple nowadays. The XHTML one was horrendous.
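For anyone who never had to type it, here are the two side by side: the XHTML 1.0 Strict doctype versus the HTML5 one:

```html
<!-- XHTML 1.0 Strict: nobody typed this from memory -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

<!-- HTML5: the whole thing -->
<!doctype html>
```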


Horrendous, yes, but it was necessary: without it we wouldn't have the nice HTML5 doctype today. It addressed a lot of problems found in early HTML and paved the way for standardization and a path forward.


When I was in primary school, at around 10 or 11 years old, there was a popular website that gave you a homepage and had challenges for doing things like making the header a certain size or adding images and so on. I remember thinking it was really cool.

I think it was called Grid<something> or <Something>grid, I just had a look but couldn't find it

Think it was somehow linked to schools, this was in the UK around 2005


Man, I loved the whole process of learning about IRQs, address spaces, I/O, the north and south bridge, DMA...

I didn't know a lick of programming, but just knowing how to trace the 0's and 1's through the hardware greatly boosted my confidence in interacting with the computer.

No matter what program or language got put in front of me, I knew that something on a disk or in memory, or in some storage somewhere was telling the computer to do it.

Then I set out to understand the people who wrote those things and what they set out to do.

Then I realized I probably hated most of you, because you went out of your way to make having my computer do what I wanted harder.

I'm somewhat reluctant to encourage youngsters down the path. I'm not sure it's the healthiest occupation.


Checkout Firefox Developer Edition, https://www.mozilla.org/en-US/firefox/developer/

Create an HTML file and open it up in the browser without a web server (though some advanced stuff will need a web server running). The developer tools built into the browser can be used to interact with the page and see how changing HTML, CSS, or JavaScript affects it.

Web console can be used to experiment with JavaScript, https://firefox-source-docs.mozilla.org/devtools-user/web_co...


It doesn't even need to be developer edition. That is more geared towards extension development rather than web page development.


I took a summer class in C++ in middle school and ditched programming for probably 5 years, before coming back when I learned I could make games with my TI calculator :)


My 1996 self feels very seen.

I think a lot about the way the web was back in the mid 90s, compared to the bleeding-edge technologies of today, particularly blockchain and cryptocurrency.

And one major aspect I keep coming back to is that web technologies in the 90s, and even their descendent technologies now, are simple enough for most technology-literate people to understand, and that carried with it a lot of comfort and trust.

For anyone who had been using desktop computers and office software in the early 90s, which was basically everyone who was a student or an office worker, the web was just files on a computer, served over a network, which we already had plenty of experience using. And I think that's why my father, who was only a little older than I am now when I first showed him the web, grasped it straight away and was supportive of me building a website for his business.

These days I'm not spending much time writing HTML files in a text editor and uploading them via FTP; rather, I'm writing programmatic code that talks to a database (a database! Just like MS Access or FileMaker Pro), does some processing, and displays the result on the screen with some elaborate JavaScript to make it look and function how I want (I'm happily a heavy user of React for building rich web apps). But it's still fundamentally the same concepts: files on servers, accessed over a network, delivering useful information to my browser or app. The beautiful simplicity is still there, and it still doesn't feel like a big leap from the kind of computing we were doing in 1993.

The world of blockchains and cryptocurrencies and "web3" feels vastly more opaque and esoteric by comparison and therefore untrustworthy to many of us, which I think is a significant answer to the question that was asked here in the past couple of days: "Why is Hacker News so anti-crypto?" [1].

I can't quite accept that it's just that I'm "old": firstly because I first heard about Bitcoin soon after it appeared and was excited by its potential, but also because I remember how people older than me reacted when the web arrived, and there was much more rapid acceptance and embrace than I currently see for cryptocurrency, outside of those who are just chasing quick riches.

[1] https://news.ycombinator.com/item?id=31302494


> The world of blockchains and cryptocurrencies and "web3" feels vastly more opaque and esoteric by comparison and therefore untrustworthy to many of us

For me it's more that we can already do everything web3 does, without the blockchain sprinkled all over it.

The only thing web3 adds to what we are already capable of is a participation fee, which sounds like someone was tasked with implementing the worst possible version of the internet.


I think the difference is cryptocurrency, and the fact that sites are made by programmers, and programmers have a high tolerance for hassles.

When there was less distinction between programmers and just everyday people, convenience was a high priority in the authoring tools.

Web3 could be amazing. There's no reason it couldn't just be like BitTorrent, but for web pages, with even better performance on mobile.

JS frameworks are just files. Nothing really troublesome about that, besides the fact that modern expectations demand a backend and CMS, unless you really want the full retro experience.

Even then, most of what backends do could be done by some hypothetical extension to WebDAV for search. Or just, like, add a SQLite query verb to HTTP with a standard permissions file or something. There are ways this could all be easy.

SSL is another problem. There used to be one protocol and one thing to set up. HTTP. Now you need certs, and it takes a bit more effort, and it's possible to mess up in a way that takes a half hour to fix. Not quite frictionless.

Facebook is free, but personal sites aren't.

Web3 could be exactly what saves us, giving us a standard platform to make sites for free, that does the security for us, doesn't need a domain name, and is generally really easy while still following modern standards.

We could literally have a browser with a "Make a site" button that works like a file manager, with a "Link to seedbox" button.

IPFS and the like are trying, but none quite reach BitTorrent's level of practicality, and they're all weighed down by cryptocurrency.

I think we really do need a Web4.


> And one major aspect I keep coming back to is that web technologies in the 90s, and even their descendent technologies now, are simple enough for most technology-literate people to understand, and that carried with it a lot of comfort and trust.

I was a 9-year-old building websites in the 90s, and while I could write HTML and upload files via FTP, I had no clue about the TCP/IP stack, socket implementations, how the Windows 95 and Windows NT cores worked, how my Pentium processor worked, or about a thousand other technologies that I used in the process. I relied on abstractions.

> The world of blockchains and cryptocurrencies and "web3" feels vastly more opaque and esoteric by comparison

Nobody's stopping you from just as blindly trusting underlying web3 abstractions as you did in the 90s with the web. But we're professionals who learned not to do that and have a drastically different approach to the technologies we use. We're not 9-year-old kids anymore. We had to learn these things after waking up to an outage, or after our site struggled with 10 requests per second (because we didn't know that database indexes existed), or for any other number of perfectly valid reasons. Now we don't trust tech; we read the whole documentation, and we want to dig in.

The world hasn't changed as much as that. We're the ones who have changed.


> Nobody's stopping you from just as blindly trusting underlying web3 abstractions as you did in the 90s with the web

Except that most web3 technology is property-related. There is no easy undo and try again. And getting off the ground as a minor is probably extra hard with such things.


Erm, no, many understand very well what a blockchain is, and opaqueness is definitely not what they accuse it of. But everybody can choose their own strawman to fight, can't they?


This sort of opaqueness: many people hail it as the best thing since sliced bread, but when you actually spend the effort digging in, you realize it's barely good for anything except money laundering and tax evasion.

Give me some bitcoin and I would not know what to do with it except change it back into money.


Cryptocurrency is also good for evading national capital controls. Wealthy Chinese can purchase mining hardware and electricity using the local "funny money" currency, generate coins, and then sell those in another country for real money.


So you want to avoid the fiscal regulations of an oppressive regime?

Sounds like tax evasion/money laundering with extra steps to me.

You could call it justified tax evasion, maybe. If I'm being generous.


It's not something I particularly want because I don't live in a country with currency controls, and I have no opinion on whether it's justified or not. But evading currency controls is a distinct and separate use case from tax evasion. Wealthy Chinese are unable to transfer large amounts of that wealth out of the country without government permission, even if they have paid all their taxes.


Bitcoin is fundamentally anti-government. The fact that many westerners currently consider their governments benign enough to not understand the need for such a thing is truly a blessing. Even so... it's always good to have a backup ;)


> it's barely good for anything except money laundering and tax evasion

You can’t think of any situation where you would want to share access to and control of data with multiple untrusted third parties, where you would need to guarantee not only that data modifications are only executed in an authorized fashion, but also that no previous modifications have been reversed?

The list of use cases for that capability is not limited to “money laundering and tax evasion.”


> You can’t think of any situation where you would want to share access to and control of data with multiple untrusted third parties, where you would need to guarantee not only that data modifications are only executed in an authorized fashion, but also that no previous modifications have been reversed?

Do you mean a cryptographic signature? Been doing those for decades back in email chains and newsgroups. Each message could contain the prior messages and their signatures. What’s new was assigning a currency to it.


What you are describing is something like an informal blockchain. Including prior messages ensures that new messages are presented in the context that their authors want them to be presented in. This approach resembles the back references that bind each block to its predecessors in a blockchain.

But you can't rely on this approach to ensure that surreptitious deletion of old messages or threads is impossible. Not every old message in a newsgroup will be duplicated within new messages, and threads will typically not reference each other.

If an old message or a separate thread provides necessary context for interpreting a new message, the maintainer of the newsgroup or the email server has the ability to change the meaning of a conversation simply by deleting messages or threads. Or the owner of a relay server can block certain messages from propagating. You still have to trust these people.

Email is itself very decentralized, at least in concept, with every new email being copied into multiple accounts on multiple servers, so censorship like this is more difficult. But you can imagine how in a more centralized system (like Facebook or Twitter or HN itself) history can be rewritten by the entity hosting the conversation simply by their deleting messages. If all references to the unwanted messages are also removed, then for all intents and purposes, those messages would never have existed. Gmail is dominant enough that you can imagine Google being able to assert this power over much of the email traffic that currently exists.

A blockchain ensures that every modification to the shared state follows rules that all participants in the system agree upon. No modification to that global state can occur unless those rules are strictly followed, and because all participants receive all details of every state modification, the entire history of the global state can be replayed at any time by any participant such that surreptitious deletion is not possible. There is no other technology that I am aware of that is able to systematically prevent surreptitious deletions.

Cryptocurrencies are not the only thing that is new, and they are not even the most important feature of blockchains. Cryptocurrencies function as an incentive mechanism to ensure that there are a healthy number of participants validating the state transitions. They can improve the reliability and security of a blockchain system, but there are blockchain systems that do not depend upon them.


I too see many use cases for the capabilities you describe. I don't see how a blockchain solves them better than "regular" solutions, though. Can you give me a concrete example where such data control is best implemented with a blockchain?

Every single concrete example I can think of would work better with a normal database and some kind of auth scheme that does not involve a blockchain, but maybe I'm just thinking of the wrong use cases?


What you are describing is a signature, and I use gpg --sign for that.


A digital signature can validate that data has been added by an authorized participant, but it cannot tell you if data has been deleted.


It can, if you sign a hash of the previous node(s) with the contents of the new message. git with signed commits is like this.
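A minimal sketch of that idea using plain sha256sum (the signature step is omitted here; the messages are placeholders): each entry hashes the previous entry's hash together with its own text, so altering or deleting any earlier message invalidates every later hash.

```shell
# Message 1: hash its text
h1=$(printf '%s' 'first message' | sha256sum | cut -d' ' -f1)
# Message 2: hash (previous hash + its own text), chaining it to message 1
h2=$(printf '%s%s' "$h1" 'second message' | sha256sum | cut -d' ' -f1)

# A verifier who holds both messages recomputes the chain from scratch:
v1=$(printf '%s' 'first message' | sha256sum | cut -d' ' -f1)
v2=$(printf '%s%s' "$v1" 'second message' | sha256sum | cut -d' ' -f1)
[ "$h2" = "$v2" ] && echo "chain verifies"   # prints "chain verifies"
```

If 'first message' were edited or silently dropped, recomputing v2 would no longer match h2, which is exactly the property signed git commits give you.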


I didn’t/don’t dispute that “many understand very well what blockchain is”.

My assertion is quite different: that people familiar with basic computing technologies have a natural intuition for web technologies, which brings with it comfort and trust, whereas this is not true of blockchain and cryptocurrency.

As I said I was excited by the potential of blockchain and crypto, and would welcome it being widely adopted, but I suspect this factor is a major barrier. I’d welcome evidence to the contrary, not sneering.


Blockchains are fairly simple technologically. Securing them is substantially more difficult, so creating one is difficult, but to simply understand what they do you don't have to understand all those fiddly details.

What is opaque are all the various bizarre schemes to try to turn them into something useful and profitable, because they really aren't good for that much. (Non-zero. Definitely non-zero. But not really that much.) You either have to stretch them into a space they aren't really good for, or all but create a problem they can solve, but essentially doesn't otherwise exist as a problem. This is where a lot of the complication lies. NFTs, for instance. Drop-dead simple technologically, really. The confusing part has always been, why would anyone spend that much on a fairly crappy picture of a monkey? There's multiple layers of "wtf" to unpack in that question. But the technology itself permitting it isn't that complicated.


How do I add a headline to a webpage? add <h1> Some big text goes here </h1>

How do I add some data to a blockchain? What is a blockchain?


> How do I add a headline to a webpage?

I vividly remember that time in 1996 when I had to explain to my schoolmates what a _webpage_ was. (Without really having internet access yet.)


This, I believe, is the essence of the whole point.


I'm an educator and right now, I'm evaluating the crypto space.

A huge problem here is that it's very fast moving.

Take the fast-moving Web of the 90s and multiply it by how many more people use the Web today.

At Eth Amsterdam I talked to many engineers from that space and didn't get a satisfying answer to many questions devs usually ask about new tech.

"Where should you store your data?" - "It depends!"

Back in the day, and even today in Web2, you would spin up a relational DB and it would be good enough for 99% of use cases.

I know, that changed with the rise of NoSQL, but in Web3 it's even harder.


The reason the "engineers" at that conference couldn't give you straight answers is because blockchain doesn't have good answers.


No cryptocurrency or Web3 platforms actually store their data on the blockchain. That would be ludicrously expensive; it'd cost tens of thousands per gig at least. Relational databases, cloud object buckets, and CDNs are pretty much all you should need, storage-wise, unless you're solving an especially interesting problem.


Isn't the data technically stored on-chain with proof-of-space chains?


Arguably, perhaps, but nobody actually uses those.


So... what's the Web3 part for then?


Branding.

I'm not even joking; everything they do with blockchain can be done cheaper, easier, and just as well without it. They do the minimum they need to to hype themselves up as being "Web3" and that's it.


Web3 is cryptocurrencies? Are we sure about this? Just because it was on someone's blog?


You could just as usefully rephrase that as:

"Web3 is <anything at all>? Are we sure about this? Just because it was on someone's blog?"

It's an empty marketing phrase that some people have latched onto.


Web3 as it is known today is a term coined by Gavin Wood, cofounder of Ethereum, and all of his notable work is related to blockchain technology, and that includes web3.

Despite all the vague talk about freedom, really, web3 is all about cryptocurrencies. That is, web3 needs cryptocurrencies to work and it has no reason to exist without cryptocurrencies.


What do you believe web3 is?


I believe it's what the spider drawn by David Thorne produces: https://27bslash6.com/overdue.html

Ironic that payment via random drawings became a "reality" in this crazy world after all.


Jane Gilles should mint an NFT of that spider to get back those $233.95, with interest.


I miss the days when Web 3.0 was semantic web!


> Why is Hacker News so anti-crypto?

I have missed the discussion you have mentioned. But I remember 10 years ago HN was opposite to anti-crypto. And the anticryptoness of nowadays for me seems like stupid ppl who don't understand some basic principles and downvote me if attempting to criticize their misunderstanding.


> And the anticryptoness of nowadays for me seems like stupid ppl who don't understand some basic principles and downvote me if attempting to criticize their misunderstanding.

Name-calling and assuming that everyone that "doesn't get it" is stupid is non-conducive to discussions. Worse, it makes your point much weaker because now there are only the "true believers that have seen the truth" and the "stupid sheeple" as opposite groups, which is a very stupid take in itself.


I have no point now, I just trying to figure out of who the hell might be anti-crypto.


You lost your thread and descended into irony quite quickly there.


If my English would better I have better quality of discussion, for example better ability to detect not interesting people before talking to them.


You get downvoted for assuming that people who disagree with you are stupid. Attack the points being made, not the person making them.


I do not talk like that with the ppl I have mentioned. Here is another thing because I do not discuss anything except some ppl's attitude. And the ppl did not disagree, they just do not understand some principles.


I miss those times. I could spend hours complaining about the depressing current state of the web. I imagine being a newcomer to such a world, having to handle all the complexity of the current scenario without knowing that things could be simpler.

I remember seeing a team member migrating a project from grunt to webpack; the project was just a 404 page. "You need to compile your assets," they said. Just a cat *.css > site.css would have done it. Or even better, just inline it in the HTML and that's it.

People who need a JS library to check if a number is odd or even went way too far, I'm not exaggerating - https://www.npmjs.com/package/is-odd


>hours complaining about the depressive current state of the web

That's pretty much most HN content about the web / front end development. I find HN all but useless for that topic now.

I've had this thought in the back of my head that a lot of the content / comments on HN are from folks who do a little, not a lot, and are really annoyed that they have to do any front end development. So then we get rants about this and that... they're not fundamentally wrong, but as a web dev I find them a little skewed and completely useless / just negativity.

Like, how many "I made this rando personal blog without a framework" and "there's too much javascript" (page includes lots of javascript) posts get boiled down so far that they're pointless?

We're at the point where a loading icon gets upvoted on HN to bitch about SPAs... and I don't know about you, but a hell of a lot of pages that aren't SPAs can sit at a loading icon. How disconnected is that?

Front end development discussion is fertile land for bikeshedding, and just about every topic can dip into it now: movies, music, all media really... questions about who pays for the news, censorship, etc. Somehow it all gets pulled into the tech discussion as well.

Sometimes I just want to find a place to discuss it where we find neat things to do and less campaigning about every other topic.


There’s also is-even[1]. It has one dependency: is-odd

[1]: https://www.npmjs.com/package/is-even


183,453 weekly downloads — wth?


Now the only right thing to do would be for the maintainer of is-even to rewrite it to depend on is-odd.


That would be even odder.


This has to be a joke right?


isOdd and isEven pop up often but they and all those other packages were explicitly created for the sole purpose of boosting the author's profile.

https://news.ycombinator.com/item?id=29241943


To be honest, someone's comment doesn't make it true. It could be that he has actual use cases for these repositories. Just because someone thinks it shouldn't be used like this doesn't mean it's true.

It seems like a lot of people think they know how npm should work, while forgetting to see how npm actually works.

There's even an example of a well-respected npm author who has repositories with just a few lines.

Y'all just love to hate on npm


The "joke" is that you can determine whether a number is odd or even with a modulo in half the code and a fraction of the storage space of a library. There is no use case for it because it's a built-in feature of the language, and in the age of inane web bloat, supply chain attacks and Javascript cargo-culting, I think laughing at stuff like this is perfectly warranted.


> you can determime whether a number is odd or even with a modulo

Or you could check the lowest bit. Better to abstract away from that implementation decision. ;)


That too, but I get the feeling those kinds of operations aren't in Javascript's standard library...


`(my_number & 1) === 0` works. (The parentheses matter: in JavaScript, == binds tighter than &, so without them the expression parses as my_number & (1 == 0).)


JavaScript does have bitwise operators. Something I was surprised by was that numbers are stored as 64-bit while all of the bitwise operators operate on 32-bit numbers. This means that every operation results in a number conversion first.


Not just 64 bit. 64 bit floating point. You can't just get the "low bit". You need to take the exponent into account to figure out which bit of the mantissa to grab. It's intrinsically more computationally difficult than is-odd on a real integer.
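To make the conversion being described here concrete, a quick sketch in plain JavaScript (no libraries, nothing beyond the language itself):

```javascript
// Bitwise operators apply ToInt32: the 64-bit float is truncated to a
// 32-bit signed integer before the operation runs.
console.log((2 ** 32) | 0);      // 0  (bit 32 and above are discarded)
console.log((2 ** 31) | 0);      // -2147483648  (wraps into the signed range)

// The low bit survives the truncation, so `& 1` still reports parity
// correctly for any safe integer, despite the 32-bit conversion.
console.log((2 ** 40 + 3) & 1);  // 1 (odd)
console.log((2 ** 40 + 2) & 1);  // 0 (even)
```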


> It could be he has actual use cases for these repositories.

If that were true, it would be useful for the author to actually explain that in his repo documentation. As it is, concluding that this is a joke is by far the most reasonable response.


Yes, JavaScript is a joke for having such a barren standard library that the community has to do it for them.


Ok, which programming languages actually have an isOdd or an isEven function in their standard library?

Anyway, Javascript, like virtually every programming language, has "& 1".

Yes, the standard library is lacking and yes, that's a big issue, partly responsible for the huge mess that is NPM. No, it does not matter one bit here so this is off topic.


Gosh, never look at C. You'd break ribs.


That's what it looks like. This is one of the dependents on `is-odd`: https://www.npmjs.com/package/is-104 https://github.com/acappato/is-104/blob/main/index.js


I don't get the issue. You write a function to check if a number is odd. You seem to have this function in every project so you make a package for it so you, and others, can easily include it. This is exactly what npm is made for?


Testing whether a number is odd is done in 3 characters:

    n % 2
if you don't like the % you can also do it using a bitwise AND, this also handles negative numbers:

    n & 1
this is equal to 1 if the number is odd, 0 if it is even, and will be converted to true or false in an if statement. You don't need a library for this.

And you should not need a library to handle string conversions; you should be aware of the types you manipulate. If you do need to handle strings, that's still easy:

    Number(n) & 1
And Number(n) is a no-op if n is already a number. But wait, no, actually, you don't even need this because % and & will auto-cast the string to a number anyway. You can use parseInt(n) if there's a chance that your number is going to be non-integer, but meh.

So n & 1 will cover everything.

All this is readable and idiomatic; there's no need to write a function for that, let alone a full-fledged library with unit tests, package.json and all this crap that bloats the node_modules folders of everybody.

isOdd = require('isOdd') and your entry in the dependencies of your package.json is going to be more verbose than isOdd = n => n & 1. Even "isOdd(n)" is longer than "n & 1". Nothing beats the "& 1".

If people are going to write libraries for all kind of similarly small and trivial things, soon we will need more than the Earth to handle our code.
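For completeness, a hedged sketch of the inline alternative this comment describes. These are local one-liners, not the npm packages of the same name; the Number(n) wrapper is only there for explicit string coercion:

```javascript
// Local one-liners: no package.json entry, no node_modules bloat.
const isOdd = n => (Number(n) & 1) === 1;
const isEven = n => (Number(n) & 1) === 0;

console.log(isOdd(3));    // true
console.log(isOdd(-3));   // true  (& 1 handles negatives via two's complement)
console.log(isOdd('7'));  // true  (string coerced to a number)
console.log(isEven(10));  // true
```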


Before reading your comment, I probably would have agreed with you that isOdd was overkill.

After reading your comment, I'm starting to think that isOdd is not only more readable but safer. I don't think I ever thought about n % 2 not handling negative numbers, and I'm not sure I would immediately recognize n & 1 as equivalent to isOdd. In languages without truthiness, n % 2 != 0 is longer than isOdd(n). Likewise, even with truthiness, n % 2 == 0 is longer than isEven(n).

I suspect that the real problem is the cost of adding dependencies, in terms of package size, performance, and security. All these problems could be solved using linking, inlining, and auditing.


But would you be sure that the isOdd implementation you are using is safe? To be sure you end up needing to try a few values, or to read the implementation, at which point you might as well write isOdd = n => n % 2 !== 0.

That does not hold true for more complex stuff, but to test whether a number is odd, come on.

You'll test whether your code works anyway, and if your odd test is wrong it will show.

But yes, you need to make sure of how '%' works in your programming language, especially on negative numbers (and on non-integers), because that can differ.

If you are testing whether a number is odd and that number can be negative, you'll probably think about it anyway.

That's a trap you should encounter once and then you are good.
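To make that trap concrete in JavaScript specifically (a quick sketch; JS's % is a remainder that takes the sign of the dividend, unlike Python's mathematical modulo):

```javascript
// JavaScript's % is "remainder": the result takes the sign of the left operand.
console.log(-3 % 2);            // -1, not 1
console.log(-3 % 2 === 1);      // false: the naive "=== 1" odd test fails

// Robust alternatives that handle negatives:
console.log(-3 % 2 !== 0);      // true
console.log((-3 & 1) === 1);    // true (two's-complement low bit)
```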


You don't have to tell me how to calculate if a number is odd or even, that's not my point.

The thing is, with n % 2 I need to do mental gymnastics to comprehend what's going on. Even if your function is just one line, I'd rather read isOdd or isEven instead of n % 1 or n % 2; remembering whether 1 or 2 stands for even or odd is something I could have trouble with.


Telling you how to do it was not actually my point either; sorry if it sounded patronizing, that was not the intention. I was writing this for all the readers here, starting from the beginning for completeness' sake, at the risk of stating the obvious.

n % 2 or n & 1 is something you likely get used to if you encounter it frequently enough, and at least something you should recognize after the first time you encounter it. And most people should have encountered it at least once, because that's one of the things we learn to do when learning programming (but I guess Ten Thousand [1] can apply). If you don't encounter it enough to be used to it, then the rare gymnastics should not be a big issue either. You need to be able to recognize it because it's in the wild.

If having an isOdd or isEven function around makes the code more readable for you anyway it's fine, especially if the function can be inlined, but pulling an external dependency for this is overkill.

[1] https://xkcd.com/1053/


> if you encounter it frequently enough

That's a huge if!

So abstracting a one-liner into a function is actually good. Putting it up on npm is debatable.


those poor gymnasts do some truly impressive things, and then every little leap of basic thought is conflated with their work


well said :) sad but true


The issue is the idea of introducing an external dependency for a one line function that anyone with a rudimentary understanding of programming should be able to write in their sleep. The idea of sharing code isn't flawed here, but the risk / reward in these cases is very much out of whack.


The risk / reward of using npm is always there. It doesn't matter how big the package is; any dependency is a risk. You're free to not use the dependency. Nobody is forcing you.

If someone wants a 1 line dependency, I say let them. I have zero issues with that.

Again, if you think something is not how it's supposed to be, maybe YOUR view on what it's supposed to do is whack instead?


How far do you take this: an `inc()` function? If we get to a stage where every piece of software in the world depends on one random person's ridiculous npm module, we'll be in a pretty precarious state.


My first job title in 2007 was "webmaster". 15 years later and I feel more like a web dinosaur than a web master. Sick of it all.


Computers are still exciting, the web is not. Now the web is just business, advertising and Javascript. It wears a suit, a fancy haircut and deck shoes. It looks like a crypto billionaire.


More like a web orchestrator these days, so maybe “web maestro”. ;)


It's funny, the package is-odd has dependencies itself. It depends on a package called `is-number`. Go figure.

I finally understand how `node_modules/` directories grow to 100s of megabytes for the smallest apps

Turtles all the way down.


It is a trade off.

In most other ecosystems, such as Java, Python or PHP, if you have a dependency A that depends on B 1.0, and a dependency C that depends on B 2.0, you cannot have both of them installed. So you either pick one or the other, or some intermediate version which works in all cases, but many times you end up not being able to use one of them or, worse, not able to upgrade either A or C because you cannot resolve all the dependencies anymore due to constraints.

In node you can do this, you can use both A and C even if they use their own, incompatible B versions. If both A and C depend on the same (or compatible) B version, then you only have one as in the other ecosystems. And you can always enforce and override what final version you want if necessary.

This, plus usually the inclusion of documentation, type definitions, source and transpiled code is why node_modules is usually pretty big.

But you don't ship node_modules "as is" to the browser (a 200MB node_modules does not mean a 200MB bundle). And usually this is not a problem for any modern laptop or server.

Is this better? Is this worse? Can't say, it is just a different tool with a different trade off.

But a bare "node_modules is 10GB, bad" to me only tells the bad side of it, and usually comes from a lack of understanding of how this works and how it compares to other systems.
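A sketch of the layout being described, with hypothetical packages a and c that pin incompatible versions of b (all names invented for illustration):

```
node_modules/
├── a/
│   └── node_modules/
│       └── b/        <- b@1.0.0, private to a
└── c/
    └── node_modules/
        └── b/        <- b@2.0.0, private to c
```

With modern npm, compatible versions are hoisted to the top level and deduplicated; only a conflicting version stays nested under the package that needs it, which is why both A and C can work at once.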


Anyone not committing the entire node_modules to their repo is in for a really bad time when npm goes down or gets compromised.


No, that's fixing the problem in the wrong way (by using your version control tool as an FTP server, more or less), complicating code reviews and unnecessarily growing your code repository.

If you want to be safe if NPM goes down, what you need is to keep an archive of built "binary" releases, either zip files, docker images, or whatever.

You can also have internal NPM proxies or caches too.

Committing external dependencies in the codebase is a terrible solution. And no, Google doing it doesn't mean it is good for everyone else; it's probably more likely to be the opposite of what you need.


No one said committing to the code repo, deployment repos are a thing and if you're not revisioning your deployments then you are screwed anyway.


> No one said committing to the code repo

Yes, you did. Quoting: "...Anyone not committing the entire node_modules to their repo..."

> you're not revisioning your deployments

That's what I'm saying, quoting: "...If you want to be safe if NPM goes down, what you need is to keep an archive of built "binary" releases, either zip files, docker images, or whatever..."

Using Git/SVN/etc repository for this (which is what I kind of get from your responses) is just using the wrong tool.


> "You need to compile your assets" they say, just a cat *.css > site.css would do it.

The sassc CLI app is pretty good for this. It's a single binary, has a watch mode, and just compiles SASS to CSS. Although for a 404 page that probably is unnecessary (unless it's pulling in shared styles from somewhere else of course).


I for one have had enough and hopefully, in future projects, I can opt to just write basic HTML / CSS, maybe some JS, and not pull in everything else.

That said, I've been in the Go ecosystem for a while and while the language itself is fine, it's the associated mindset and catchy oneliners that have been more valuable to me as a developer.


"WordPress is too complicated! Let's do everything in React" is my current favourite developer anti-pattern :-)


Enlightened centrism take: both are equally shit. Go back to ~~monke~~ regular html.


Really, that's what fatigues me about a lot of these "I miss websites." posts.

I think if more people would at least try to build and maintain their own website, starting with the barest of basic HTML, we'd have less of this tech wistfulness around the old web. They'd do one or more of a few things:

1) Find a personal outlet in the hand-rolled site.

2) Find that outlet, but get tired of wading through angle brackets and set up/write a static site generator.

3) Find more they want the site to do and start coding.

4) Realize they've exhausted everything they want to say to the world and abandon it.

But what else would happen in any case would be a demystification of it. Old hands would lose their nostalgia goggles, and young folks would recognize how much of the irritations of social media derived from people's dissatisfaction with the time and effort it took to shout into the void on the old web.


"React is too complicated! Let's use inferior tools instead" is my current favorite developer anti-pattern


"One tool is the right answer for every question" is mine!


It's even worse; it went from static sites (html) to dynamic sites (Wordpress, PHP) to webapps, and now back to static sites but built using the tools to make webapps with. Apple wanted to make web technology the one for iOS but quickly realized it wasn't fast enough, so they spent time developing a really good native platform, but now it's boomeranging back to webapps, or adding layers on top of the native layers to stay in that web app comfort zone (react native, flutter, etc). It's refusing to use the path of least resistance because of perceived benefits, or something. Same with desktop apps that now all have to be web technology for some reason.


I never changed from being a webmaster to being something else. For the last 15 years I still manage my own servers, I still design my own websites, I still do most things myself when it comes to building/managing a website.

And the fact that everything is SaaS now, well, that just gives me motivation to keep it that way.

Easy for me to say, of course, since I grew up back then and was fortunate to have the experience of the pre-modernization of the Internet. Good times.


I’ve been and still am a webmaster for 24 years. It’s always difficult when people ask me what I do these days, I literally tell them I’m a webmaster.


I feel that as raw and unpolished as the web of those vanished days was, there was still significantly less fucking than Justin Jackson remembers.


Hahahaha. This is my favorite comment. ;)


His description of what web dev felt like in those early days captures how I feel about VR now.

Back then it was obvious to me that was the coolest thing you could possibly learn. Still to me it is outrageously cool that I can think something up and then in not that much time it can be a website.

So what's the new magic? To me it's the idea of being able to think something up and conjure it into the virtual world too.


I’m curious if you could elaborate on that some. VR just seems like another type of game design to me, as someone who isn’t active in the space.

Yours is the first perspective I’ve seen that actually made me curious.


* More innovations with passthrough (combination of a VR headset and AR). Currently you can get "X-Ray Vision" to see where your controllers are in a room. In the future, this could be expanded to other peripherals such as keyboards. It allows you to have a cooking video playing while cooking (hands-free control) [1] or place/design furniture in your room [2]

* Once the resolution is a little better - virtual offices[3] and cinemas[4] allow for significantly more screen real estate than a traditional setup, requiring less space and capital investment.

* Live/pre-recorded virtual concerts and sports games[5] are already occurring (where you can invite friends to watch in real time). The current 'experiences' (wingsuit flying, virtual tourism, etc..) are far more immersive than watching the equivalent 360° Video

* VR fitness is a reasonable method of cardio, burning ~9 calories a minute. It can also augment other training methods, such as a stationary bike[6]

* Haptic suits[7], gloves and treadmills[8] - not really my thing, but allowing 360° running and feeling the interactions with your environment.

[1] https://www.youtube.com/watch?v=keufFXiO1_M&t=3m45s

[2] https://www.youtube.com/watch?v=_ZFiaILqQng

[3] https://www.youtube.com/watch?v=5_bVkbG1ZCo

[4] https://www.youtube.com/watch?v=Sh9n0bZcprk&t=1m7s

[5] https://www.youtube.com/shorts/duMowUK6tk4

[6] https://www.holodia.com/vr-fitness-blog/holofit-vr-works-wit...

[7] https://www.youtube.com/watch?v=q4UdbYrqa2A

[8] https://omni.virtuix.com/


If you get a chance to try it, or maybe you already have - you get this odd sense of presence when using VR. It's not quite the same as playing a normal game. Even just perspectives of things like size seem much more real in VR. Even with today's early hardware it's pretty good at tricking your brain.

This combination of presence and multiplayer create these crazy (to me) experiences where it's like you're stepping into the internet to meet people and explore it with them.

VRChat is one of the more popular examples, but most of it stuff still needs to be built in the first place.


I know this is kind of negative: but I don't like nostalgia much and I think I like tech-nostalgia least.

Don't get me wrong I like old tech, but just for what it is. Text based, simple, fast etc...but I don't like wallowing in a warm, fuzzy feeling about how it was.

I mean I personally do recall 5.25" disks. I liked their look and feel as a kid. Fine. But making a big deal out of it feels retrograde...


> I know this is kind of negative: but I don't like nostalgia much and I think I like tech-nostalgia least.

It's not negative. It's an interesting sentiment, but one I think is based on a misperception or mischaracterisation, if you'll permit me;

Nostalgia is about experience and feelings. It often has idealising elements, attempting to restore something bound to a specific past time or culture that can never actually be revisited. "Wallowing in a warm, fuzzy feeling", as you describe it, is depressive.

That's not what's going on in tech.

Tech people are experiencing disgust, frustration, anger, hopelessness and a whole range of sentiments hostile to current tech which is objectively broken in a number of ways. And justifiably so. We have wandered off the path, not just the path of "excitement and fun" but right off the path of utility into the long grass of bloated, brittle, precarious, unfathomable and unmaintainable complexity under the yoke of exploitation and false efficiency.

The desire to move "backwards" is a desire to fix things.

Many of the qualities of earlier tech can, and should be revisited.

We're in a cul-de-sac and unable to progress precisely because we label earlier technologies as "necessarily inferior". It is an arrogance and parochialism of false progress to do so. That mind-set always sees any sort of back-tracking as negative regression, or "anti-progressive". But sometimes the way forward is back. Remembering where you've been, is central to any successful algorithm that's navigating a maze (which is what technology is).

Projects like Gemini and distributed "New Web" based on overlay networks are not regressive. They are very contemporary attempts to distil the best elements of the 90s internet culture and re-orient development from there, but with the hindsight of knowing how not to do things.

> Don't get me wrong I like old tech, but just for what it is. Text based, simple, fast etc...

All of those qualities are also possible with new tech. And they make the new tech even better. And as energy and environment bite, we'll start to see them as positive qualities to move toward... in other words our definition (direction) of progress will evolve.


You write well. Very inspiring.


Exactly. The problem with the 1993 web was that it was usable by and appealed to only a very small part of the population. When people say stuff like this, it sounds to me like they're saying "I wish the web could go back to being a more exclusive club of which I was a member".


Maybe it wasn't that it was exclusive, but that it felt manageable and comprehensible. It felt like you could be aware of the breadth of it all (even if some was already out of reach). Now, that feels beyond impossible - we have every niche and every underworld.

I started as a web designer as images were being added to browsers and you could change the default grey background colour. Things are a bit different now...


They're right though. The Internet was better when it didn't have tons of people giving their unsolicited opinions and corporations coming in and having every single one of their actions be antithetical to the esprit of the WWW. It was better when people had to put in effort to find something and not take 30 seconds to select the first Google-approved search result.


Was it better for those who couldn't use it?


Considering for how many the internet has a negative overall value by turning them into conspiracy nuts and mouth pieces, yes. Yes it was.


5.25" disks? I can remember waiting ~30 minutes for a game to load on my Commodore 64 from tape, only to have it crash at the last minute. Disks were amazing.


The first program I ever got I typed in from a book. It made a spinning pyramid on a VIC-20. It took hours and hours only for it to disappear when I shut off the computer. Tapes were amazing!


Yes, I recall borrowing books from the library in order to type in the programs. Can't say I recall punch cards, but I'm sure someone around here does.


It amazes me when I look back on my web career. My first website was hosted on an IP address - I'd literally tell people to visit h t t p : / / eight one dot ... - because I'd not come across DNS or how to use domains yet, never mind afford one when I figured them out.

I distinctly remember I could never seem to manage to make a website fast. It was always quick (total load time <10s) but it was never fast (<3s). I spent so long learning underlying systems and various caching options.

20 years later I can throw together anycast DNS, geo-routing, web/SQL cluster, in/out MX services, memcached cluster, load balancer, whatever you need. No bother.

I'm glad that first site gave me so much grief!


I really wish I could experience the web back in 1993 (I'm 25). I imagine much faster, more intuitive websites than nowadays. Beautiful UX is one thing, but making a website feel natural - that's what I think the internet is missing today.


You had to wait 30 minutes to wait for all those unicorn gifs to load, so I'd say it wasn't as fast as it could've been :D

The amazing part was that everyone had a homepage at some point, where they put content online about topics that they thought was worth sharing.

Even girls in school had a website with sparkling rainbow gifs and a guestbook, and some pages about topics they liked... like smallville series character details, some cake recipes or games that they played. Oh boi have I read way too many things about dragons or vampires in historic literature.

Those websites were usually hosted at all those ad driven freehosters like geocities or funpic, so they sometimes injected their own ads to fund the hosting costs.

But honestly, I'd take the old version of the internet any time again over the dumpster fire that is social media propaganda and coordinated/incited defamatory shitstorms these days.

The internet was a welcoming place for everyone, where people shared what they loved with others. The internet was a nice place. And then, the facebook and the chans happened, and everything kinda went to shit.


Eh, 30 seconds maybe if it was a lot of images?

I remember it took about 20-30 minutes to download an mp3 in the dialup days. Most websites were in the kilobytes because everyone was on dialup. Also keep in mind most screens would have been 640x480, so it's not like you were getting high res images.


I remember waiting like a whole night to download an MP3. When I tried to play it the next day, my laptop was so old and slow that it couldn't play the MP3 without stuttering :D


Small reminder that the PS1 was so weak it couldn't even play 128kbit (i.e. quite poor quality) MP3s in real time.


> Eh, 30 seconds maybe if it was a lot of images?

Depends on the amount of gifs people were placing on their website :D We only had a 56k modem at the time. The Browser was provided by the ISP at first due to lack of dial-up software that supported the German ISPs, so it was loading all kinds of injected stuff from the ISP, too.

But yeah, I agree with the general sentiment that websites usually loaded way faster if you were using e.g. IE3 on purpose because it didn't load all the stylesheets and images.


I think it depends on when we are speaking. 56k became increasingly miserable as higher speeds became the norm.



Interesting there's a section on Borat at the bottom there - I'm pretty sure I've never heard of this (let alone the lawsuit) and I was reading about it, seeing the photos, thinking he was surely an influence on the character. Not sure what to make of that.


i remember sending out the "i kiss you" XOOM page to my entire AOL address book. that's how things went "viral" back then.


Yeah but to be fair you knew when a website was under construction. Now it's a health and safety nightmare, with no warning gifs anywhere


I'm only 32, so 1993 was a bit early, from what I remember "the internet" didn't become truly mainstream until 1996~.

Now that I think back the computers "acted" really differently from what we use today.

Things felt much more "immediate" (as in the computer responded to you very quickly) but also much slower; imagine that your keystrokes occurred immediately but opening a window took 50ms.

If the CPU was being hit with any kind of load though, the input would buffer and your keystrokes would stop being presented until the GUI got some CPU time again and all your keystrokes just poured into the active window.

The web itself was a lot of slowly rendering webpages, the sites themselves would actually start loading much quicker than modern sites, and the content would progressively load over some amount of time, there was usually a progress bar at the bottom of the Web Browser.

I know I'm only a few years older than you, but I wonder if you remember all this.

From what I recall the most annoying thing was slowly rendering images.


Nowadays browsers intentionally delay painting to load stuff.


It was quite the opposite. Transfer speed was slow; it sometimes took minutes to load a page, and I only had access to the internet at school or conferences. The machines were slow too, and browsers were taxing on the hardware in the old times as well.

Pages were anything but intuitive, there were no design standards. Most marketing pages looked like their art director imagined a futuristic movie prop.

There's still a ton of interesting pages out there, check out this collection: https://www.hoverstat.es/


Faster? Not really. Even with ISDN, stuff was slow af. 8000 bytes per second just isn't a lot. Latency was often high as well.


Page loads were slow, but you don't get that weird thing you get today when a page is slow even after it's loaded, or stuff keeps loading in and changing the layout so you need to sit and wait for an unknown period before you can interact with it.

It feels like websites are a lot jankier today. The old web was slow, but it was predictable. I know a site where, if you don't wait for it to finish loading (and it's javascript loading, so you get no browser indication that it's actually loading), then all the links will be wrong. That is, you get the wrong content if you click a link. It's quietly kind of amazing how they've managed to break hyperlinks, given they're a part of HTML itself and work well out of the box.


This is true, but it's because developers, designers and marketeers add loads of stuff onto it. Pages that show animated ads, CSS animations and (sometimes multiple) autoplaying videos are slow.

But there's plenty of websites out there still - including HN - that do without all of those extras. And it's up to web developers of today to resist using the fanciest technologies to build websites.

At some point there was the concept of (iirc) progressive enhancement, where the basis was all HTML - fully functional, you could turn off JS and CSS and it'd still work - and then CSS and JS would add functionality on top, but that would be purely embellishment.


Hahaha, just no. I remember my crappy hand-me-down computer was so slow it would chug and struggle to display a nice looking full screen image. You see the old web on your modern lap-sized supercomputer. Go to the thrift store and pick up a beige Windows 95 computer setup and try navigating the web. It was slow, very very slow.


I was around back then. You don't need to tell me what it was like.

Full screen bing.com-backgrounds weren't really a thing in web design, though. Except in porn, I guess.


Can you imagine how fast those websites would be today, though, compared to modern websites?


One can easily test this. This is the performance of two semi-randomly picked websites:

- Peter Ferrie's homepage (http://pferrie.epizy.com/?i=1), old school: approx. 900ms

- Netflix landing page (https://www.netflix.com/de-en), modern: approx. 1.8s

Measured via Firefox "load time". Peter Ferrie is a famous security researcher.

The Netflix page is clearly 2x as slow, but the load time is far from proportional to the overall data transfer.

The HN front page, right now, takes approx. 1.7s. :)
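If you want to reproduce this kind of comparison from the command line rather than eyeballing Firefox's load time, curl's `-w` write-out variables give a rough breakdown. A sketch (the URL is just an example; note this times only the HTML document itself, not the images/JS/CSS fetched afterwards, so it will flatter heavy modern pages):

```shell
# Print DNS, connect, time-to-first-byte and total time for a page fetch.
curl -o /dev/null -s -w \
  'dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n' \
  https://news.ycombinator.com/
```

For a fairer comparison with a modern site you'd need something like a headless browser that executes scripts and fetches subresources, which is exactly the gap the parent's numbers hint at.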


Netflix is a bad example of modern websites, as they’ve hired really good developers to optimize the snot out of it.

I’d say, compare it to Reddit or Pinterest or something like that.


Images would load forever. Good websites included each image twice: a low-resolution variant that loaded fast, to be replaced by the high-resolution one once it eventually arrived. Other websites had images that loaded interlaced, separated into 4 strata that you could watch being built up.

I remember when some supergenius sent the first email as a Word document with a 2 MB company logo embedded, and loading it took some 20 minutes, each minute being metered individually, only for it to contain two lines of text that ought to have been the mail body...

The internet was better back in the day - but it was not faster.


> Other websites had images that loaded interlaced - separated in 4 strata that you could watch being built up.

That's called progressive JPEG, and it's still around.


> I imagine much faster, and intuitive websites than nowadays.

Oh no; they weren't faster, because computers and browsers weren't nearly as fast as they are today (iirc it was Chrome that emphasized performance, especially JS performance, because it saw how much more JS was going to be used (for ads lmao)), and they weren't intuitive because there weren't as many guidelines or as much awareness of UX as there are today.

There should be plenty of websites from the 90's on archive.org


My dad got an internet connection when I was 15; 1999 or something. One summer I got bored, got on my ISP's "portal" website, and found a chatroom feature (powered by IRC). I entered a channel about Buffy the Vampire Slayer. I didn't care for the show, but the ratio of guys to girls was great. All were teenagers like me; I made some friends and chatted with my first girlfriend, whom I met in person some time later.

It was an easier time back then.


I remember you could often watch images being slowly rendered line-by-line in the browser due to slow connections or slow servers back in 1994/1995.

In the meantime computers and the internet became so fast that websites could be very fast by default, but the front-end tech stack and adtech still makes it slow most of the time.


Reading articles from that time, the backbone connections were also heavily taxed back then. So you had slow connections, connecting over congested links on top of that. Not exactly the greatest user experience.


It was fun and wild. In some ways simpler. But it was neither fast nor intuitive.


I was in the last year of junior high school in 2000-2001. We had a computer lab running Windows 3.1 (each student had their own floppy disks to save their work before logging out).

We learnt HTML basics: displaying paragraphs, images, blinking/colored text, marquees, etc. It was definitely an eye-opening moment for me: the computer was not a device only for playing games or typing documents, but something that would run whatever instructions you gave it.

Then during high school days, I learnt more serious programming languages: Basic, C, and Pascal.

These days, web frontend is definitely more complicated: HTML5, JS, CSS, etc. Nope, not interested to touch them anymore :)


I remember telling people with a serious expression that I was the #2 on my huge army base after the Commanding General because as the Webmaster & Intranet Master, I controlled what everyone saw on their browser when their browser opened (intranet site), and our public web presence (web site), and over 100 tenant and subordinate organizations received our hierarchical navigation headers and footers via SSIs.
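For readers who never ran into them: SSI (Server Side Includes) directives are specially formatted HTML comments that the web server expands before serving the page, which is how one office could push shared navigation to a hundred sub-sites. A hypothetical page might look like this (the file names are made up; on Apache such pages are typically served as `.shtml` with `Options +Includes` enabled via mod_include):

```html
<!-- Shared navigation, expanded server-side before the page is sent -->
<!--#include virtual="/includes/nav-header.shtml" -->

<h1>Local page title</h1>
<p>Local page content goes here.</p>

<!--#include virtual="/includes/nav-footer.shtml" -->
```

Updating the one included file updates every page that references it, with no build step and no JavaScript, which was the whole appeal.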


I must be very old. I still put my name in the footer of all my personal websites. Which reminds me: must update the copyright year in those footers!


Back around 1996 or so, I made my first webpage. I didn't have Internet at that time :) So I built this site, complete with articles about comics, animated gifs, frames, etc., just for myself. I think I never put it online. Later, when I finally got access to the internet and some webspace, I had already built another thing, and webspace was precious (I got around 1-2 MB if I remember right).


I like this article as I'm pretty nostalgic about those things too.

>Our websites were pretty damned ugly. Instead of worrying about window dressing, we focused on words, hierarchy, and structure.

Naw, the sites were ugly because we were bad at window dressing and the tools were wonky ... and it was glorious that we were bad at it. I loved the ugly web. It reeked of "trying things".


I built my first site in ~1997, and my primary email address is still webmaster@. Nevertheless, I do feel that these early-Internet nostalgia posts mostly forget how crappy many webpages were back in the day. Blinking text, marquees, animated gifs, frames, inline fonts, ActiveX, "Best viewed in Internet Explorer 3", popup windows, tables for layout, terrible search, etc. Today's sites are often bloated, but those of the past had their own issues.

For me personally, the appeal of the web back then was not that it was good, rather it was new.


Plus, it all had to be done (typically) on a dial-up modem which made all of those "web effects" absolutely horrible.

Even today's web sucks without some heavy ad-blocking and script controls.


That is because there are "fucking websites" and "fucking web applications". These two require different skill sets, the second one being a programmer's.

I do not think it is a smart idea to make everything into a web app. My daughter, for example, is a Web Designer / Webmaster and makes a very healthy living designing and building front ends for various businesses. She does not see less need for her skillset over the years. Just the opposite.


Not to mention that there's now a generation of users that, thanks to siloed apps and ecosystems, can probably go days without ever hitting the actual Web.


I’m working on a musical project and the website alone fucking rocks.

atticshapes.com

I reviewed before adding comment, and I feel like changing info@ to webmaster@ now, in homage to the internet.


> I reviewed before adding comment, and I feel like changing info@ to webmaster@ now, in homage to the internet.

RFC 2142 [1] defines both of those (among others), for different purposes. I tend to set all of them for the services in use (and one for XMPP, per XEP-0157 [2]) as aliases: usually it's not hard and costs nothing, but it allows adjusting the aliases in the future (in case somebody else ends up maintaining some of the services), and it provides standard ways for inquiries regarding those services.

[1] https://datatracker.ietf.org/doc/html/rfc2142

[2] https://xmpp.org/extensions/xep-0157.html
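In practice this usually just means a few lines in the mail server's alias map. A sketch in sendmail/Postfix `/etc/aliases` style (the role names are from RFC 2142; the `admin` target mailbox is made up):

```
# RFC 2142 well-known role addresses, all delivered to one local mailbox
postmaster:  admin
hostmaster:  admin
webmaster:   admin
abuse:       admin
ftp:         admin
```

After editing the file, run `newaliases` so the server picks up the new map.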


Clean and simple.


Like everyone else here in the comments, I got major nostalgic vibes from this post. I was one of the snot-nosed kids back in the late 90s/early 2000s that made sure to add myself on my own staff page as "Webmaster".

Does anyone remember the early anime community of those days, by chance? It was such a vibrant community of builders and "webmasters".


I think the statute of limitations is up at this point so I am free to share this with you. It was me all those years ago who downloaded and edited the html for google.com adding my name to the page. I showed my mother the result of this pure vandalism before spending the next desperate few minutes putting it back how it was before. I regret nothing


what now?


The “webmaster” designation probably came from the “webmaster@domain” functional email addresses, which in turn were modeled after the existing “postmaster@domain” convention for contacting the email admin(s) of a domain. Does anyone know if there were other *masters?


There aren't too many other *masters, but check out RFC 2142.


hostmaster, ftpmaster, listmaster, gophermaster, fingermaster, timemaster (NTP)


Right, at least hostmaster and listmaster are what I’m familiar with and should have thought of.


I created podcastbunker.com in 2004. In 2005 it was chosen by Time Magazine as one of the 50 coolest web sites. You can look it up if you want. I was a f*king webmaster:-)


Almost 40, I'm getting older I guess but lately I feel put off with the unnecessary cursing on things that I find interesting.


It's perfectly relevant stylistically - he's talking about a time when things were scrappier and there weren't rules or established standards in his domain (pun intended). This isn't a manual, it's a paean to a time and an attitude that he misses, and cursing is certainly representative of that attitude.


"Back in my day..."

Always amazes me how rose-tinted people who supposedly deal with cutting edge technology are.


snot-nosed 90s kids? I resemble that remark!

I do feel like HN still gives off better vibes from the better days when the netizens had netiquette, when the information superhighway beckoned to the explorer, before the undying flame wars…


Took me forever to realize that's what I actually was, when I wasn't.


TODO: Be a fucking webmaster


_Bring_ _back_ _Geocities_


They did [0] and it's really good!

0: https://neocities.org/


1993 seems very unlikely?


I started on the web in late 1993, so yes. This stuff was around then.

There was actually a competing protocol in those days: gopher. It still exists and has a small, near-fanatical community these days.


It was definitely early-days, but people were writing HTML and putting up web pages back in 1993. Mosaic was released in 93.


It's all JS's fault.


Don't know about 'all' - centralisation of content for me is the most depressing part of the modern web.


centralisation dialed to 11 by JS / AJAX mostly.


Brings to mind applets.


(2016)



I'm fucking a webmaster



