Making a Website Under 1kB (tdarb.org)
180 points by iloverss on Sept 9, 2022 | 106 comments



By today's standards I consider anything below 500kB and with fewer than 6 server requests already pretty minimal, yet those are numbers that can still achieve a lot and look so "modern"/"normal" that you'd have to look into the dev console to really appreciate the effort that has been put in. Whenever I see websites that use 5MB PNGs for photos and make over 30 requests spread over multiple seconds, I just question the general state of web development these days.

I just recently stumbled upon a site that had it all: webp/avif everywhere, minified CSS, even ditching unused classes from the frameworks it used, data:-embedded and subsetted fonts in the CSS (I think it even used a recent version of FontAwesome 6, yet it still managed to keep the WOFF2 at only 2kB because they only used about two dozen logos), only one request each for CSS and JavaScript (everything concatenated, with nice cache policies), and the site was still usable/viewable without either of them if you wanted. Everything was even automated in their deployment pipeline. It only came to my notice because they wrote an article about it. I can't find it in my history, but those things stick in your head for a while.


> By today's standards I consider anything below 500kB and with fewer than 6 server requests already pretty minimal, yet those are numbers that can still achieve a lot and look so "modern"/"normal" that you'd have to look into the dev console to really appreciate the effort that has been put in

500kB honestly doesn't require much effort at all. My WordPress blog with a popular theme and a few plugins has almost every article below 500kB, despite looking like any modern blog and having at least 1 image per post.

Actually, if I remove the Facebook share button, they drop to almost 250kB. Time to remove it I guess.


Honest question—do actual real people use Facebook share buttons? Have they ever?

To me it looks like it has always been a fairytale made up by Facebook to spread their analytics scripts all over the internet.


I don't know, my blog is small, but I don't think so. Especially since it's a blog about programming, I'm sure it's not the kind of stuff you'd share on Facebook...

I removed it today after seeing the impact on page size.

I guess it works for some niches like news, online quizzes, etc.


Coincidentally, I used the fb share button this morning but it is a very rare occurrence.


Sure they do, but do a meaningful number of people use it for niche geek blogs? No. Major news sites? Sure.


You could replace it with something more vanilla. I used to maintain a list of common share URLs, but this project is more useful: https://github.com/bradvin/social-share-urls


From time to time I visit websites that my i7 16GB Dell XPS struggles to process. Bloody hell, 20 years ago that kind of power would have driven a small supercomputer, and now it can't render a news website.


"i7" isn't a particularly useful measure of computer performance. It can refer to over 12 years worth of processors, including the ultra-low power version of the first model, a 1.07GHz processor. That thing probably has trouble loading Emacs.


> That thing probably has trouble loading Emacs.

And with those historic words, the vi vs emacs debate was finally won.


> That thing probably has trouble loading Emacs.

I don't know about that, because my RISC-V SBC with a 1-core in-order 1GHz CPU and no GPU can load Emacs GUI just fine.


So can my Amiga 500, even in unexpanded configuration.

But it's microEmacs. Still, it is of course GUI (like nearly everything on the Amiga), and it works fine.

It shipped on the AmigaOS 1.3 Extras floppy. Probably also bundled with other versions.


As long as it has more than 8MB, Emacs will do just fine.


I thought it was 80MB and even then you're constantly swapping.

(I had 32MB of RAM on the computer I learned Emacs on and it was totally fine.)


When Emacs came to be, you'd be lucky to have an 80MB hard disk, let alone that much main memory. :)


It's a 3 year old laptop that can cope with pretty much anything I throw at it.


For small websites I wrote an observables microlibrary in one afternoon. It's called Trkl, and minified & gzipped the code is about 400 bytes: https://github.com/jbreckmckye/trkl


Shameless plug

I made an entire organ synthesizer in under 1024 bytes of html/js (use your keyboard):

https://js1024.fun/demos/2022/23

source / instructions / background:

https://github.com/ThomasBrierley/js1024-mini-b3-organ-synth

The javascript was significantly reduced in size by using regpack, which is a regex-based dictionary compression algorithm targeting very small self-decompressing javascript. Writing for regpack takes some thinking, because you have to try to make the code as naturally self-similar as possible, which often means consciously avoiding more immediate space-saving hacks in order to make longer sequences of characters identical. For example, it would usually make sense to store a repeated expression in a variable rather than duplicate it, but the character cost of defining the variable is often larger than simply duplicating the expression and letting regpack deduplicate it (even if it's used a lot).
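
A toy example of the tradeoff (simplified, not code from the actual entry):

  // Toy illustration: with regpack, repeating an identical expression can
  // compress better than factoring it into a short-named variable.

  // "Normal" golfing: factor the repeated call into a variable.
  let s = x => Math.sin(x * 6.28), a = s(1) + s(2) + s(3);

  // regpack-friendly: keep the long identical substrings so the packer can
  // replace "Math.sin(" (and similar runs) with a single token.
  let b = Math.sin(1 * 6.28) + Math.sin(2 * 6.28) + Math.sin(3 * 6.28);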

This might be worse than the LZ-style compression used in HTTP (I haven't tested), but this was written for a code-golfing competition where source size counts, not transmitted size... Then again, conceptually this should lend itself to any dictionary-based compression, so zipping the original source (minus the whitespace) may also work out very well.

Here's a web interface that includes terser and regpack: https://xem.github.io/terser-online/

Note that it's usually best to hand-minify for regpack; terser is only convenient for removing whitespace, and advanced automated js compression libraries usually cause the size to inflate when regpack is the final stage.

[edit]

zipping the source comes in at 952 bytes - so it appears this technique applies when targeting dictionary compression in general, including the compression commonly used in HTTP.


I just wanted to say this is really impressive :)

The latency is almost zero. I've always thought of "browser applications" as these heavyweight mammoths where each keystroke needs at least a couple hundred milliseconds to process.

Good to know these kinds of things are still possible. Browser synthesizers have become my new point of interest :)


Thanks :) I was pleasantly surprised by the latency too. Admittedly the browser's web audio engine is doing all the heavy lifting - I noticed that it seems to sacrifice audio quality for speed if you push it too far. E.g. my struggle was with keeping the number of simultaneous oscillators low enough; if you ask it to simulate too many, clipping will start happening all over the place, since there is going to be some limit based on the CPU... I don't have much experience with audio programming, but for this reason I expect additive synthesis is probably not efficient enough for anything that isn't as simple as an organ.
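
For anyone curious, bare-bones additive synthesis with the Web Audio API looks roughly like this (a simplified sketch, not the actual entry's code; note that browsers require a user gesture before an AudioContext will produce sound, and the partials and gain values here are arbitrary):

  // Minimal additive synthesis: one sine oscillator per harmonic partial.
  const ctx = new AudioContext();
  function playNote(freq, partials = [1, 2, 3], duration = 1) {
    const gain = ctx.createGain();
    gain.gain.value = 0.2 / partials.length;   // scale down so the sum doesn't clip
    gain.connect(ctx.destination);
    for (const n of partials) {
      const osc = ctx.createOscillator();      // sine by default
      osc.frequency.value = freq * n;          // n-th harmonic
      osc.connect(gain);
      osc.start();
      osc.stop(ctx.currentTime + duration);
    }
  }
  playNote(220);  // A3 with three partials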


Nice username!


haha, snap.


Just wow! And you implemented working drawbars too, really impressive!


I managed under 1.5KB for a page that has an actual function, not just a test:

http://www.captiveportal.co.uk

And yes, that’s supposed to be a non-https link.

I think the entire site, including favicon, might be under 5KB. You can check here:

https://github.com/josharcheruk/CaptivePortal


Were you aware of http://neverssl.com before making this? I guess your page is slightly smaller (687 bytes vs 1900 bytes compressed). http://captive.apple.com is even smaller, though.


I didn’t know about neverssl.com. I think I would have still made captiveportal.co.uk even if I did because it was fun :-)


https://www.captiveportal.co.uk works and doesn't redirect to a non-https link. At this point it's hard for normal users to go to an http:// link without their browser overriding them.

neverssl.com solves this by redirecting to a random subdomain (for some reason that isn't clear to me near midnight).

A .co.uk equivalent is a great idea though, if it can be made accessible to users with hostile browsers.


  SSL/TLS/https (the padlock symbol) prevents this.

  This site will never use those technologies.
Yet it has https? Strange, haha.


The site is hosted on Fastmail - I think they must have enabled https, or perhaps I missed that it was active when I put the site there.


Huh! I didn't think Fastmail hosted webpages, so I thought this was an autocorrect mistake, but I see that they do indeed offer static hosting:

https://www.fastmail.help/hc/en-us/articles/1500000280141-Ho...


Most browsers still don't have forced HTTPS enabled by default, so without HSTS there is nothing preventing plain HTTP.


Both Firefox and Chrome do for me, trying https first even when I type http://, particularly if https was ever used for a domain in the past, which drives me nuts for local network domains. The only way I can make them stop is to clear history.


The purpose of the random subdomain is to ensure that the browser doesn't just show a cached version of the page.


I’ll add it to the to-do list. The current hosting won’t be around after next year, so it’s getting moved in about 6 months to somewhere I have more control and can probably implement the changes needed.


Well, the function is not in the content so your website could even be completely empty. I would say https://cv.tdarb.org/ has more function.


Interestingly, I surprised myself browsing the website you posted and others: clicking on every link, reading each new page before going back to finish the page I was coming from, jumping from link to link just as I remember doing 20 years ago. That is something I don't do anymore. Sure, I sometimes click on links when I'm reading something, but I usually do it with a middle click (opens the link in a new tab) and continue reading the first article before closing it and looking at the new tab. And by that point I've usually lost interest in the content I opened a few minutes before and just close the tab without reading it. I was wondering why I usually do this and why I didn't this time, and I realise the reasons I open links in new tabs and don't consume them directly are:

  - opening a link in a second tab gives it enough time to load completely, since everything is so bloated;

  - a website messing with your tab history or redirecting you 6 times before letting you reach the content you were waiting for means that going back to my previous article will be a pain in the ass, and I'd rather juggle two tabs than quintuple-click the back button just to find my previous article.

Anyway, that didn't happen here, because I subconsciously knew that every link would load before I could even think about it, and that none would make going back a step a pain in the ass. That was refreshing and maybe even made me nostalgic. But more than anything else, it allowed me to read with more focus than I remember having in the last few years. So yeah, I love that "bare bones" design.

(PS: I also realise my comment is so long it could have been its own blog post. Maybe I should start one...)


Fair critique.


Is it really needed when browsers (and Linux distros!) have their own captive portal detection pages? (e.g. http://detectportal.firefox.com/ or http://connectivity-check.ubuntu.com)


On desktop the automatic detection is pretty reliable. On mobile I find myself opening http://example.com with some regularity (the most common edge case is if a network with captive portal is configured to auto-connect, and I don't have a browser open).


example.com also works for that


1kB is charming, but the site shows the limitation: it's not even very much text; there's limited room in the medium to leave one's mark.

Not a thing wrong with haiku, but for a whole art movement, we'll get more mileage out of a one-trip website.

Ignoring a bunch of caveats I won't get into, a normal TCP packet is no larger than 15kB, for easy transit across Ethernet: the header is 40 bytes, leaving 1460 for data. Allowing for a reasonable HTTP response header, we're in the 12-13kB range for the file itself.

That's enough to get a real point across, do complex/fun stuff with CSS, SVG, and Javascript, and it isn't arbitrary: in principle, at least, the whole website fits in a single TCP packet.


Ethernet MTU is 1500 bytes (not 15000 bytes or 15 Kbyte), assuming non-jumbo frames. TCP MSS tends to be 1460 bytes. But then, opening a TCP connection requires 3 packets anyway (so 4380 bytes.) TLS connections usually take another 2-3 packets (depending on your version of TLS/TLS parameters) so now your total payload just to establish the connection is sending 5840-7300 bytes over the wire. If we take the handshake as 50% overhead (which is quite sad mind you), then we can transmit a 7300 byte or 7.12 KB website, which can definitely make a pretty decent website.

Optimizing for 1 kB can be a fun creative exercise, but I think it's practically a bit meaningless. It's better to target something like a 28.8 Kbps connection and try to get the page to load under a second (including connection handshake < 20.1 KB), which is more than enough to have a rich web experience.


You forgot the HTTP headers... which are variable in size, but unfortunately usually quite large.

They can easily be in the 500 to 1000 byte range, taking up most of the first TCP packet; e.g. this HN page has a 741-byte header. I suppose if you control the web server you could feasibly trim this down to the bare minimum for a simple static page - not sure what that would be.


Think you got your math off there. MTU of 1500 bytes = 1.5kB, not 15kB.


I believe the parent comment was thinking of 1 round trip time.

Typically the TCP initial congestion window size is set to 10 packets (RFC 6928), hence ten packets can be sent by the server before waiting for an ACK from the client.

So a website under 15kB or so (minus TLS certs and the like) loads with the minimum latency possible, given the other network factors.
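
The arithmetic, assuming the standard 1500-byte Ethernet MTU and 40 bytes of TCP/IP headers (a quick sketch):

  const mss = 1500 - 40;        // 1460 bytes of payload per segment
  const initcwnd = 10;          // RFC 6928 initial congestion window
  console.log(mss * initcwnd);  // 14600 bytes, ~14.3 KiB in the first round trip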


TLS certs won't count against your initial congestion window, although there's still a bit of overhead.

If there's a session/ticket resumption, the server won't send a certificate, but it does still need to send a negotiation finished message, and it may likely want to send new tickets (I'm not sure if you can delay that though). In TLS 1.3, the client may send the request as early data, if not the request will come as the beginning of the second round trip, so the congestion window will have opened more for the response.

If it's a full handshake, the certificate is part of the first round trip, and the content is in the second round trip; the cert won't count against the congestion window, because it must have been received before the client sent the http request.


Even if the initial congestion window is set to 10 segments, all that does is send out the packets if they're available to send. It still takes less time for the receiver to receive 3 segments than 10 segments, having the congestion window be that wide just means that the sender doesn't need to wait to receive ACKs for those 10 segments (which, to be fair, is faster than sending each packet and waiting for an ACK.)


I think they might be referring to the initial TCP "window size", which is how much data can be sent before the recipient acknowledges it (in multiple packets but without round trips).


That's pretty hacky, but still clever.

Out of curiosity, I recently wrote a little Elisp function to compute the "markup overhead" on a typical NY Times article, i.e. the number of characters in the main HTML page versus the number in a text rendering of it. It turns out that the page is 98.5% overhead. That doesn't even count the pointless images, ads, and tracking scripts that would also get pulled in by a normal browser. Including those, loading a simple 1000-word article probably incurs well over 99% overhead. Wow!
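
The same measurement can be approximated from a browser dev console (a rough JS sketch of the idea, not the Elisp I used; innerText only approximates a proper text rendering):

  // Compare the size of the raw HTML with the size of the rendered text.
  const html = document.documentElement.outerHTML;
  const text = document.body.innerText;
  console.log('markup overhead: ' + (100 * (1 - text.length / html.length)).toFixed(1) + '%');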


If 1kB websites interest you, check out https://js1k.com, which has awesome JavaScript demos within 1kB!



Or chess (including computer opponent): https://js1k.com/2010-first/demo/750


OMG, finally a computer chess program I can beat!

five minutes later

Nope.


Reminds me of the demoscene days


Also in the same vein: https://www.dwitter.net/ which is JS animations in under 140 characters.


Or its spiritual successor JS1024 [1].

[1] https://js1024.fun/


I had a look at https://cv.tdarb.org/. You could reduce it even further.

By:

  - removing quotes around attribute values

  - replacing href values with the most frequent characters

  - sorting content alphabetically

  - foregoing paragraph and line-break tags for a preformatted tag
I was able to bring it from 730 bytes (330 if you had compression enabled) down to 650 bytes (313 bytes after compressing with Brotli). Rewording the text might get you even more savings. Of course, I wouldn't use this in production.

Here it is: https://jsbin.com/cefuliqadi/edit?html,output


Why not eliminate quotes in production, if you know the value doesn't need quotes? That's still valid, it's optional HTML: https://meiert.com/en/blog/optional-html/

Sorting content alphabetically and that sort of thing to improve compression may be silly code golfing and impractical for page content, but on the other hand I don't see that it costs you anything (aside from time experimenting with it) when applied to the <head> / metadata. https://www.ctrl.blog/entry/html-meta-order-compression.html

I think that both of these methods could be used in production, and I intend to do so when possible.


I tried that too but decided that saving a few bytes is not worth the parser restarting, so I adhered to strict XHTML for fast page load times.


Not worth the… what? I’m not sure what you’re talking about or thinking of, but I think you’re wrong. Parser restarting is purely when speculative parsing fails, and there’s nothing here that can trigger speculative parsing, or failures in it.

If you’re using the HTML parser (e.g. served with content-type text/html), activities like including the html/head/body start and end tags and quoting attribute values will have a negligible effect. It takes you down slightly different branches in the state machines, but there’s very little to distinguish between them, one way or the other. For example, consider quoting or not quoting attribute values: start at https://html.spec.whatwg.org/multipage/parsing.html#before-a..., and see that the difference is very slight; depending on how it’s implemented, double-quoted may have simpler branching than unquoted, or may be identical; and if it happens to be identical, then omitting the quotes will probably be faster because there are two fewer characters being lugged around. But I would be mildly surprised if even a synthetic benchmark could distinguish a difference on browsers’ parser implementations. Doing things the XHTML way will not speed your document parse up.

As for the difference achieved by using the XML parser (serve with content-type application/xhtml+xml), I haven’t seen any benchmarks and don’t care to speculate about which would be faster.


So far that's your theory, but I measured it in practice. Developer tools give easy access to parser events and different page load timings, such as the parsing stage.


FYI: with some super basic benchmarking that repeats an HTML snippet 1–100000 times and times how long setting innerHTML takes, I can report that in Firefox, it’s a little faster to omit </p>, a little faster to omit trailing slashes on void elements, quoting attributes is probably a bit faster, and that the XML parser is much slower (mostly 2–4×). In Chromium, parser performance is way noisier, and I mostly can’t easily see a difference by numbers (without plotting), though it’s probably faster to omit </p>. As for the effects of using the XML parser, my benchmark crashes the tab (SIGILL) in Chromium and I don’t care enough to figure out why.
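
Roughly the kind of thing I mean, as a simplified sketch (the snippet here is just a placeholder for the variants being compared):

  // Time how long setting innerHTML takes for n copies of a snippet.
  const snippet = '<p>hello</p>';   // placeholder; swap in the variants to compare
  const el = document.createElement('div');
  for (const n of [1, 1000, 100000]) {
    const html = snippet.repeat(n);
    const t0 = performance.now();
    el.innerHTML = html;
    console.log(n, (performance.now() - t0).toFixed(2) + ' ms');
  }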


I might as well also mention that we are talking about differences of a few dozen nanoseconds at most per instance, and that even then the noise threshold is normally well above that, and that it’s difficult to show significant results in benchmarking at all because you require preposterous amounts of serialised HTML to get a measurable result.


The tools you’re talking about are useless for measuring this kind of thing. We’re talking about potential differences well below microseconds, and you’re proposing using tools that (presuming I correctly understand which you mean) report answers in milliseconds, with noise rates of milliseconds (and a lot more if you try scaling it up with things like a million elements in a row). It is possible to benchmark this stuff, but the way you describe is utterly unsound.

Unless presented with concrete steps to reproduce what you’re talking about, I refuse to believe you.

(Mind you, I’m not denying in this that there are differences, just that they’re even measurable this way on even vaguely plausible documents.)


I’m not talking, like you, about “A parses faster than B”. I’m talking about “A causes the parser to start over, B doesn’t, so B is faster”. Resets do make a difference that does not require microbenchmarks and is in the realm of milliseconds. This way I was able to load pages in a single frame at 60Hz, which was the threshold I wanted to hit, because it made my webdev friend not realize he was already on the next page when he clicked the link. Feel free to refuse to believe me.


Yes, I don't think it's worth it other than as an exercise in byteshedding.


Nice, always refreshing to see these small web page projects. Shameless plug, but I recently started a search engine [0] with the goal of generating search result pages that are only a few KB in size, backward compatible (HTML 4), and take only 1 HTTP request per page (no images, inlined CSS, base64-encoded favicon). It's surprising how big the page sizes are for the popular search engines; you would think these pages would be small, but a Google search result page can be over 1 MB in size and take over 70 requests.

[0] https://simplesearch.org


Let's go right to the 100-byte club. Here's a Sierpinski triangle in 32 bytes: https://www.pouet.net/topic.php?which=12091&page=1#c568712

Edit: another version, 59 bytes: https://twitter.com/jonsneyers/status/1375828696846721031


Some comments here mentioned that these pages were essentially just text. I wanted to create something that would be useful, showcasing some basic HTML, CSS and JS.

Here's an entry I hacked together: https://coffeespace.org.uk/colour.htm

It comes in at 1015 bytes, converts HTML colours into their shortened form (e.g. #00F for blue), and displays the colour visually.
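
The core shortening rule is simple; a minimal sketch (not the page's exact source):

  // Shorten #RRGGBB to #RGB when each channel's two hex digits are identical.
  function shorten(hex) {
    const m = hex.match(/^#(\w)\1(\w)\2(\w)\3$/);
    return m ? '#' + m[1] + m[2] + m[3] : hex;
  }
  console.log(shorten('#0000FF'));  // "#00F"
  console.log(shorten('#123456'));  // unchanged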


Switched the directory to https://coffeespace.org.uk/tools/colour.htm

Also added https://coffeespace.org.uk/tools/avatar.htm for generating avatars in the same spirit.


One of the linked examples, zenofpython.org, reliably crashes Chrome on my phone, although not when rehosting the same content on my own server. Can anyone reproduce that?


I got that too. And on desktop it just never loads, pretty strange.


Funny, it crashed for me too on selection. Thanks Pixel 6 prefetch.

Tried again and it works on Firefox.


Confirmed on android. Instant crash.


You don't need the quotes in many cases.

Instead of:

<link rel="icon" href="data:,">

Try

<link rel=icon href="data:,">


<link href=data: rel=icon> will work just fine ;)

Another fun trick is using <!doctypehtml> since the spec says to pretend a space is there if not present for whatever reason (https://html.spec.whatwg.org/multipage/parsing.html#parse-er...)


Cute, but I just ran both your suggestions through the HTML validator at https://validator.w3.org/nu/ and neither of them validated.

The first error read "Bad value data: for attribute href on element link: Premature end of URI."

The 2nd error read "Missing space before doctype name."

Depending on the context these hacks may still be useful, but I personally think that both production sites and code golfing should require valid HTML.


You can't use the error handling part of the spec without invoking an error. Good minifiers will do the same, fwiw.


I run this in prod. Never had an issue with it.


Well damn. That is an oddity.


Anticlimactically boils down to 100 words on a page.


I think not being able to see where a link is going is pretty bad. I would remove that "hack" even if it means the site is slightly over 1kb.


AWS also likes to play this game: https://docs.aws.amazon.com/elasticloadbalancing/latest/APIR... : " MessageBody: ... Maximum length of 1024."

I have implemented some fixed error pages for my company, including its logo in SVG, in under 1KiB.


I love the "ackshually..." replies. The point of the exercise is to get a marginally useful web page to fit in 1kB. A landing page that fits into 1kB is pretty impressive. You could fit even more e.g. some bare minimum CSS, into 1kB with compression.

The whole point of the 1MB club and other such efforts is to show what can be done without the equivalent of multiple copies of Doom[0] worth of JavaScript to display what is just a static landing page.

There are completely legitimate uses for "web apps", things that are actual useful applications that happen to be built in a browser. No one is saying that web apps aren't a totally valid means of delivering software.

The problem is that every website is written using the same frameworks Facebook and Google use for their web apps, to build sites that could easily just be some static HTML[1]. We have devices in our pockets that would have been considered supercomputers a few decades ago. I have more RAM on my phone than I had hard drive storage on my first PC. My cellular connection is orders of magnitude faster, with orders of magnitude better latency, than my first dial-up Internet connection.

Despite those things, the average web page loads slow as shit on my phone or laptop. Sites are making dozens to hundreds of requests, loading unnecessarily huge images, tons of JavaScript, autoplay videos, and troves of advertising scripts that only waste power and bandwidth. I find the web almost unusable without an ad blocker, and even then I still find it ridiculous how poorly most sites perform.

I love waiting at a blank page while it tries to load a pointless custom font or refuses to draw if there's heavy load on some third party API server. I also absolutely adore trying to use some bloated page when I'm on shitty 4G in between some buildings or on the outskirts of town.

It would be nice if I didn't need the latest and greatest phone or laptop to browse the web. It would be nice if web pages rediscovered progressive enhancement: add JavaScript to improve some default useful version of a page. You don't need to load an HTML skeleton that only loads a bunch of JavaScript just to load a JSON document that contains the actual content of the page.

[0] https://www.wired.com/2016/04/average-webpage-now-size-origi...

[1] https://idlewords.com/talks/website_obesity.htm


Easy Peasy, Lemon Squeezy:

<html> <head> <title>My Wonderful Website</title> </head> <body> <p>What a wonderful page</p> </body> </html>

Ha! There! 110 bytes. A whopping 914 bytes left for user content.

PS: (I think HN hates code blocks)


HTML is just text and it isn't inherently hard to stay under 1k. What is this other than a weird way to flex?

The linked article is bigger than 1kb, and the <1kb site is just a list of links...


Why not shave a few more bytes removing double quotations around single token attributes?

Before: <link rel="icon" href="data:,">

After: <link rel=icon href="data:,">


Staying under 1KiB is pretty hard when plain text alone easily takes more than 1KiB.

Checking my own site with `find . -name '*.md' -exec wc -c {} + | sort -h`, I find that 30 Markdown files are under 1KiB and 40 are over 1KiB. The largest file is an 18,266-byte post, which is still quite small compared to most other blogs AFAICT. It also excludes the possibility of including images with more than a dozen pixels.


More impressive is to build a website that can be sent in a single TCP packet. This means a site can be no more than 14kb compressed and should serve everything in just a single request. Would have to base 64 encode images as well. This would probably end up as a meatier website but still blazing fast.


I assume you mean 1.4kb? Squeezing graphics into that is possible but very difficult for anything larger than an icon. If the browser supports vector graphics of some flavor you could do a lot so long as the shapes aren't overly complex.


Ah, 1.4kB for a single packet.

But the initial round trip can be 14kB - 10 packets. So under that, the user won't need any more trips.


If you enable gzip or brotli compression you can get even smaller.

With brotli level 11 I’m down to 330 BYTES.

https://tools.paulcalvano.com/compression.php
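
You can also check locally with Node's built-in zlib (a quick sketch; `index.html` stands in for whatever file you're testing):

  // Measure brotli-compressed size at maximum quality (level 11).
  const fs = require('fs');
  const zlib = require('zlib');
  const input = fs.readFileSync('index.html');   // hypothetical input file
  const out = zlib.brotliCompressSync(input, {
    params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
  });
  console.log(input.length + ' bytes -> ' + out.length + ' bytes compressed');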


1kb for the entire site would be impressive! Would probably just do a single page of ASCII text written in shorthand for that.

They actually mean 1kb per page, which is pretty slick and decent even on dialup.


This is confusing me about the 1MB club. On the site under "submit" it says:

    The two rules for a web page to qualify as a member:

    Total website size (not just transferred data) must not exceed 1 megabyte
    The website must contain a reasonable amount of content / usefulness in order to be added - no sites with a simple line of text, etc.
The github repo just says:

    An exclusive members-only club for web pages weighing less than 1 megabyte

So which is it? Are sites with multiple pages under 1MB (but then the total for all pages exceeds 1MB) allowed, or must the entire site weigh in less than 1MB?


It seems really silly, as this is an extremely low bar to clear; you can get there by accident if you "just" use plain HTML/CSS.

Then again, in the age of JS frameworks, maybe it is an achievement for the new developer who was gaslit into thinking 500MB of deps to make a simple site is normal.


You can remove the quotes around attribute values, they’re optional


The last website in the club crashes my mobile Chrome.


Same! Interestingly only crashing Chrome on Android, but Firefox is handling it fine.

https://zenofpython.org/


Same. I wonder what causes it.


Works for me. JIT disabled.


Can you please add me to the club?

My site is 0.306 kB.

http://superimac.com


On what kind of connection would you feel the difference between 1kB and 1MB? One of my projects for 2022 is a proxy that strips any website down to 1MB in one of a few standard layouts.


It’s… just text on a page. Weird.


> Building a website that actually serves useful content while squeezing its page size under 1,024 bytes is no easy feat.

Narrator: It was an extremely easy feat.

Just make a spec-invalid webpage and skip all the heads, bodies, htmls and the rest of it.


Omitting many kinds of tags is perfectly valid in HTML5. Most of my websites feature no explicit body or head tags. The html open tag is “required” because of the lang attribute.

You also don’t need to quote attribute values, and you don’t need closing tags for many HTML elements.


It is a valid page. Check it out with the W3C validator. It only gets a warning for the missing lang attribute on the html tag.




