How a new HTML element will make the Web faster (arstechnica.com)
311 points by wfjackson on Sept 2, 2014 | 145 comments



This element reminds me of the ill-fated FIG element (https://www.cmrr.umn.edu/~strupp/elements.html#FIG) which was proposed in HTML 3.x, but never made it in. (I think it was replaced by EMBED which then transmogrified into OBJECT).

FIG was intended to be an alternative to IMG, and unlike IMG it wasn't self-closing. It could have children, and the way it was supposed to work was that the outermost one the browser thought was "good enough" would get rendered. One possible usage at the time was to have a png in your outer FIG, a gif on the next one in (png was new at the time, so not well supported), then an IMG for browsers that didn't understand FIG. Once FIG was well supported then you'd leave out the IMG, and instead just have the "alt" text -- except it could have real markup instead of just the plain text of the alt attribute.


For the technical side, instead of the historical one: http://responsiveimages.org/

An example from the homepage:

  <picture>
    <source media="(min-width: 40em)" srcset="big.jpg 1x, big-hd.jpg 2x">
    <source srcset="small.jpg 1x, small-hd.jpg 2x">
    <img src="fallback.jpg" alt="">
  </picture>


I understand from the article that the img srcset was somehow horrible, but the following (presumably the WHATWG proposal) looks more intuitive to me:

  <img src="small.jpg" srcset="large.jpg 1024w, medium.jpg 640w">
Can someone explain the drawbacks?


The syntax was confusing, but still didn't cover all use cases.

To authors it wasn't clear whether "w" declared the width of the image, or a min-width or max-width media query.

It's a media query, but doesn't look like one. Values look like CSS units, but aren't.

On top of that, the interaction between "w", "h" and "x" was arbitrary with many gotchas, and everybody assumed it worked differently.

With <picture> we have full power of media queries using proper media query syntax.

srcset is still there in a simplified form with just 1x/2x, and that's great, because it is orthogonal to media queries (which handle the "art direction" case) and processed differently (the UA must obey MQ to avoid breaking layouts, but can override srcset to save bandwidth).


How does the browser know to grab the 1x version or the 2x version?


I assumed 2x was for Retina-like screens. The browser already knows it and it's exposed via devicePixelRatio.

If I understood your question correctly.
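For example, a script can read that ratio directly; a rough sketch (the file names and the 2x cutoff are just illustrative):

    <script>
      // devicePixelRatio is 1 on ordinary screens, 2 or more on high-DPI ones
      var dpr = window.devicePixelRatio || 1;
      var img = document.querySelector('img');
      // only fetch the 2x asset when the display can actually use it
      img.src = dpr >= 2 ? 'photo-hd.jpg' : 'photo.jpg';
    </script>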


The browser knows about the device it's running on, and specifically its display density.


File names can have spaces, and a JPEG URL can legitimately end with "640w". It's really weird to see a tiny DSL invented inside a DOM attribute.


This all seems horrific. Why can't HTML be properly extended to support attribute arrays or dictionaries as values? Having a value encode a DSL is so messed up. This is yet more to parse...

HTML keeps getting pulled in so many directions. I wish XHTML had won. It was modular, pluggable, extensible, and semantic. The last bit might have eventually made entering the search space easy for new competitors, too.


Fully agree. XHTML was a sane way to have a proper application development model, instead of this documents-as-applications hack.

But the browser vendors failed to make a stand against bad HTML.


Handling 'bad HTML' could easily have just been an ego clash and pissing contest between developers of competing browsers. It was arguably more difficult to implement than just enforcing well-structured syntax.


this is why attributes are really a stupid ass way to do things

  <img>
    <srcset>
      <source><width>1024</width><src>large-image.jpg</source>
      <source><width>512</width><src>small-image.jpg</source>
    </srcset>
    <src>image.jpg</src> <*>fallback</*>
    <alt>My image</alt>
  </img>


That isn't well formed, you're missing two </src>.

I dislike XML, the confusion between attributes and sub elements is one of the worst bits.


"1024 large-image.jpg 512 small-image.jpg image.jpg fallback My Image"

That is what your code would look like to browsers that didn't know about the new elements. HTML is defined such that browsers can ignore unknown elements for compatibility and still display the text. Using contents for the metadata means that browsers need to know about the elements to at least hide the text.


Holy crap that's verbose.

This is why *ML is a stupid-ass way to do things. "the problem it solves is not hard, and it does not solve the problem well."


"Attributes are stupid" is also Maven's approach, but this results in unnecessarily verbose XML files.


<imgset w1024="large.jpg" w640="medium.jpg" />


Not practical, since you'd have to define attributes for every conceivable size in the spec, and that's just asking for trouble, e.g. w2048, h1024, w320, w240, h320, wPleaseShootMe :)


But now it's a PITA to properly handle and escape for any toolset that doesn't have good XML support. Imagine people starting to put <![CDATA[ ]]> blocks into this.


I meant if I had designed it from the start. Then everything is a tag, no attributes, no quotes, equal signs, etc.


How about JsonML; i.e. XHTML but in JSON format to make it less verbose / further improve integration with javascript?


JsonML is pretty efficient when auto-generated from HTML source. I use it as an intermediate form for client side templates ( http://duelengine.org ) but I don't write it by hand. Its regularity makes it a perfect candidate for output from a parser/compiler.


You'd want to kill yourself pretty quickly.

JSON is great as an interchange format, but there are many reasons editing it by hand is painful, lack of comments and lack of newlines in strings not being the least of them.


There's no syntactic difference between an attribute, an object and an array.


you can't nest tags into attributes


I really dislike your approach.


XML Parsing Failed


I really hate when my code doesn't compile. If my code is wrong, the compiler should just figure out what to do.


You hit the nail on the head.

HTML5 got one thing right though: standardization of the DOM failure behavior. As an implementation detail of their design, they went with "sensible recovery" for failures over stricter failure modes.

In going with the WHATWG over the W3C, we ultimately chose an "easy-to-author (slow-to-evolve) living standard" over "strictly typed yet developer-extensible". I was disappointed, but it's good for some parties I suppose. (It certainly keeps the browser vendors in charge of the core tech...)

The W3C over-engineered to a fault. They had a lot of the right ideas, but were too enamored by XML and RDF.


It wasn't really a choice in favor of "easier to author." It was a choice in favor of "will this actually get implemented, or just be fancy theorycraft?"

No browser vendor was going to ship new features only in XML parsing mode, because that was author-hostile enough that it would lose them authors, and thus users. (Browser game theory.) The choice of HTML over XML syntax was purely practical, in this sense.


> HTML5 got one thing right though: standardization of the DOM failure behavior. As an implementation detail of their design, they went with "sensible recovery" for failures over stricter failure modes.

It was browsers that did that in the first place. HTML5 just standardized the exact behavior on failures.


Incorrect. HTML5 synthesized the exact behavior that was closest to the majority of browsers. But not all browsers agreed (e.g. Mozilla would change its HTML parsing behavior depending on network packet boundaries), so there was still effort aligning with the newly-specced common parsing algorithm. At the time there was much skepticism that such alignment was even possible.


> Mozilla would change its HTML parsing behavior depending on network packet boundaries

I want to know more...



Which is what I said, right?

HTML 4 - vendors implemented the spec inconsistently and failed in their own special ways. XHTML Strict - standard parsing rules with a strict failure mode. HTML 5 - standard parsing rules, suggested (but not required) rendering behavior for browser uniformity, and well-defined failure behavior.


> I really hate when my code doesn't compile. If my code is wrong, the compiler should just figure out what to do.

There's something you're overlooking in the above. If a compiler were smart enough to know what to do with your erroneous code and compile it in spite of the errors, that would be the end of programming and programmers.


I'm pretty sure that comment was sarcasm. It's a complaint about how HTML5 isn't just specified to fail on bad input, but instead gives rules on how to recover.


And that would be a good thing!


Well .... now that you mention it ... yes, it would. :)


Sarcasm? I can't tell anymore :/

I love it when my code doesn't compile (i.e. if I've made a mistake). Much worse is when something tries to be "intelligent" and makes my code do something I never asked for - then I spend hours trying to figure out what the issue is (assuming I've noticed) rather than seeing that I made a mistake and fixing it.


Yes, I was being sarcastic. Web designers should stop whining and write proper markup code.


The problem has nothing to do with web devs but rather that no one wants to use a browser that spits out "error 5" on malformed HTML, which is necessarily what you're implying. The other option is to do your best with the bad HTML, and now we're right back where we started, regardless of how "strict" you make the rules.


Here, don't confuse XHTML1 with XHTML2.


I'm speaking more in terms of the goals the markup dialects had, irrespective of the ultimate implementation. I think we can all agree that those suffered from misguided engineering choices (bloaty XML culture).

Responsive images could have been an XHTML module with a javascript implementation. The browser vendors could catch up and provide native implementations in their own time, but that would not postpone immediate usage.

If it were done right, anyone could have defined a markup module/schema with parsing rules and scripting. The evolution of those extensions would have been pretty damned fast due to forking, quick vetting/optimization, etc. It would have been well timed with the recent javascript renaissance, if it had happened. It might have meant browser vendor independence at the level of the developer.

HTML should really have been modular with an efficient, lightweight core spec. It should have also paid lots of attention to being semantic so that others could compete with Google on search. I am still curious if that's why Google got involved in the WHATWG. I'm rambling about things I don't know about though...


> Responsive images could have been an XHTML module with a javascript implementation. The browser vendors could catch up and provide native implementations in their own time, but that would not postpone immediate usage.

This is exactly what happened, except without the XHTML nonsense. JavaScript polyfills of the picture element were created and in use before native implementations eventually caught up. (And native implementations are very necessary, in this case, because they need to hook in to the preload scanner, which is not JS-exposed.)

More generally, custom elements and extensible web principles in general enable all of this. Again, without XML being involved.


Spaces can be escaped as %20 in URLs. I do agree that the domain specific language is weird though, and would even require new DOM APIs to manipulate it directly (like the style attribute does).


Surely the way one would represent this in XML, rather than `srcset="foo 1x, bar 2x"`, which strikes me as odd, would be:

   <picture>
      <srcset media="(min-width: 40em)">
         <source size="1x" src="big.jpg" />
         <source size="2x" src="big-hd.jpg" />
      </srcset>
      ...another srcset...
      <img src="fallback.jpg" />
   </picture>
Fractionally more verbose, but really a lot less fiddly.


The article states that the final problem with the Boston Globe redesign (meant as a proof of concept for responsiveness) was the image-prefetching feature that speeds up rendering and runs before HTML parsing. Thus they needed a way for browsers to parse that information separately, ahead of time.

I guess it should be possible, though, for the browser to parse an HTML fragment rooted at the picture tag, and then plug that tree back into the full document tree once it is constructed. Or is it simpler to search for picture/img attributes? Oh, there's this whole implicit tag-closing business in HTML though... How do we know where to stop parsing a fragment? At least attribute values stop at the end of a string literal, or at a tag end. Perhaps that's the reason why they went for a DSL in attributes.

I agree with you though, your way is cleaner, and perhaps XHTML could use that approach in the future?


Quite, and it has the enormous advantage that I can extract all data using nothing more than an XML parser, rather than having a two stage [parse XML -> parse embedded DSL] parser for special cases. Even the media aspects of the srcset could probably be better expressed (with more verbosity though) as a standard XML structure.

I really wish it was - though I'm far from a fan of XML for most cases, it does work rather well for this when used as intended...


Verbosity was a huge argument against <picture>. People were ridiculing it with complex use cases that required awful amounts of markup.

Hixie was against using elements, as it's harder to spec (attribute change is atomic). Eventually <picture> got a simplified algorithm that avoids tricky cases of elements, but at that point srcset was a done deal.

At least we've got separate media, sizes and srcset instead of one massive "micro"syntax.
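For reference, the syntax that shipped keeps those three as separate attributes; roughly like this (file names are just examples):

  <picture>
    <source media="(orientation: landscape)"
            srcset="wide-small.jpg 640w, wide-large.jpg 1280w"
            sizes="100vw">
    <img src="tall-small.jpg"
         srcset="tall-small.jpg 640w, tall-large.jpg 1280w"
         sizes="100vw" alt="">
  </picture>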


Why, it's just the web being hack upon hack, in this story of bending a document format into an application framework.


Would this be the first element that actually varies based on media size? Seems like a strange precedent.


This is a bit of a linkbait. There are maybe two lines on how the new "picture" element makes the web faster.

The rest is the story of how the "picture" element came to be, which is a very interesting story but has nothing to do with how it'll make the web faster.


And, ironically, ArsTechnica isn't responsive and uses stock images that contribute nothing to the article (blurry hands on a laptop keyboard!)


Agreed. Quite annoying. The top-rated comment here was just 5 lines or so and told me more than 4 pages of that clickbait article.


I used to love Ars, but they've started this kind of nonsense and I no longer bother to read them :/


What do you read nowadays?


HN.

Even when it publishes stuff from Ars or wherever, I often skip the article and just read the comments here because they're much more insightful and if there's anything good, the commentators here know better.

The demand at news sites that they keep churning out news no matter what is what lowers their quality. Sometimes there just isn't anything worthwhile to read at the moment about a topic.


How is it ironic? The article claims a new element will improve things. They could say they're not responsive because such improvements are needed and they were waiting for them.


It is ironic because the article claims images slow the web; before extending HTML with special elements and DSLs it would seem a simpler step to get rid of useless and meaningless stock images.


You are not thinking far enough: some of those stock images could easily benefit from a unified CDN. And why stop there: a number of "office girl on phone" images might even be included in the browser distribution. Now _that_ would be an HTML tag to speed up the web!


You're right; although it may slow down the installation of a new browser... ;-)


Perhaps because "how the web will be made faster" by the new element is obvious: by downloading less and more targeted image content. What more is there to explain?

It's the rest of the story that needs to be told.


I'm wondering how factually accurate the story is, considering that it says "Apple's Tab Atkins" while both Tab Atkins' homepage and Google+ profile say he works at Google.


It's not just about webpages loading faster, it's about making the web standards development process faster.


Recently, an HN job post brought me to a careers page. It served up a 1MB CSS file, a 650K [mostly red] PNG image, and a 300K black-and-white PNG.

I don't know whether it's incompetence or indifference, but for most sites, slow loads are a developer issue, not a tool or design issue.


I'd say it is a bit of a design issue too - I remember when the majority of images on sites were actually "contentful" (for lack of a better term), and not just something like a gradient or glittering button. The practice of providing a thumbnail/preview that links to the full-sized version was common, so you wouldn't be unnecessarily loading images you didn't want to see at full resolution.

In some ways, this almost feels like spacer.gif all over again.


I agree to a point - I think it's maybe thought about a bit more by those who were building sites when a 1MB page meant a 5-minute download. For those who grew up taking high-speed Internet for granted, it's thought about a lot less.

That said, when the only way to get the design to work is using alpha transparency, and you know you need a massive 24bit PNG you're caught between a rock and a hard place. Especially when you then have to think about creating the same image at double size to serve to retina displays (because the client asks "why is it fuzzy on my ipad/mac/phone?") - page sizes start to get out of control.


Your point is valid, but using ImageAlpha allows you to have an alpha channel in 8-bit PNGs: www.pngmini.com


That's fantastic, thank you. I've been using TinyPNG.com - will give this a go :-)


If you want people to do more of something, make it easier for them to do.

Making it easier to optimize web resources with a `<picture>` tag may help some developers take the steps to actually optimize where they didn't before (even with existing tools).


A simpler solution might be

    <img src="image.jpg" sizes="640,800,1024"/>
Then the browser can choose the most appropriate size based on the screen size. The filenames would simply follow a convention: image-640.jpg, image-800.jpg, etc. Older browsers would simply use the original src.


To my eye, it's a little odd to read an article about multiple groups of experts who spent months or years hashing out a difficult problem, and then to see a comment like this one that proposes its own much simpler solution.

Is your thought here that not one of the dozens of people involved considered this sort of syntax during those months or years or work? (That seems unlikely.) Or do you think their reasons for rejecting it were unsound? (You don't seem to have said why.) I'm just not understanding what you're aiming for here.


Their proposed solution offers more flexibility in that you can use media queries to specify the upper/lower bounds and also specify the filenames. This feels over-engineered though, firstly because the above solution provides enough information to switch between images (we're not changing the layout here), and secondly because a filename convention is adequate for referencing them. The third reason I dislike their solution is that it introduces presentation logic (media queries) into HTML.


Arguably, <img> was already a form of presentation logic present in HTML. This just makes it more capable.


Another option would be to allow the source to be specified in css (stylesheets are loaded before images anyway). Then you could do

    <img id="cats" src="fallback_for_old_browser.png" />

    @media (max-width: 600px) {
      #cats {
        src: url('http://cats.com/cats-600.png');
      }
    }


No please. CSS is meant for presentation, not content.


We'll need a solution in CSS for properties like background-image and border-image. (Indeed, there have been multiple proposed solutions already.) It would be nice to use the same mechanism for both CSS and content images instead of introducing yet another case in the web platform where we have multiple slightly-incompatible mechanisms that do the same thing.
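One of those proposals, image-set(), looks roughly like this (it was shipping prefixed in WebKit/Blink around then; the selector and file names are illustrative):

    .hero {
      background-image: url(hero.png); /* fallback */
      /* let the browser pick the density-appropriate asset */
      background-image: -webkit-image-set(url(hero.png) 1x, url(hero@2x.png) 2x);
      background-image: image-set(url(hero.png) 1x, url(hero@2x.png) 2x);
    }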


Can we replace the W3C with you? Then maybe they could finally get up to speed.


tl;dr: <picture> tag. It contains an <img> tag inside for backwards compatibility, and allows you to define multiple <source> tags for different sizes.


Which is kinda lame IMHO. Why can't we use progressive JPEG, with offsets? The first rough scan for small size image, a second scan for normal image, a third scan for retina sized image?

One file, one URL, different offset for different dimensions, done.


You may want to show a different icon with different viewport sizes. For example, compare the different sizes of the Google Calendar icon here:

http://setup.googleapps.com/Home/user-resources/google-icons...

You'll note the 16px icon uses square corners, while the other sizes have rounded corners.

There are other considerations as well, such as causing unintentional Moiré effects, and scaling DPIs in not-quite-powers-of-two (basically all of Android).


Still doesn't solve the design-problem of showing e.g. a differently-cropped (or entirely different) image to different viewports.


Doesn't that mean I would be downloading all of the image data and only using part of it?


No. Part of the HTTP spec is the ability to download only a specific range of bytes from a file.
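For what it's worth, a client can request such a range itself; a minimal sketch with XMLHttpRequest (the URL and byte count are just illustrative):

    <script>
      // ask the server for only the first ~50 KB of the image
      var xhr = new XMLHttpRequest();
      xhr.open('GET', 'photo.jpg');
      xhr.setRequestHeader('Range', 'bytes=0-49999');
      xhr.onload = function () {
        // a server that supports ranges answers with "206 Partial Content"
        console.log(xhr.status, xhr.getResponseHeader('Content-Range'));
      };
      xhr.send();
    </script>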


I think <picture> will (hopefully) ultimately win out because it quite nicely makes all the main three forms of embedded media (pictures, video and audio) work pretty much the same way. Plus, if only use of <figure> and <figcaption> was a bit more widespread...

Either way, the article makes it pretty clear that the current method for drafting and implementing standards for the web is not working brilliantly (having both W3C and WHATWG around exemplifies this).


There are many browsers out there, not just Chrome and Firefox, esp. on mobile. Android Chrome is fairly out of sync with desktop Chrome. Kindle devices use their own browser and won't let you install another one, etc. Plus, mobile users update their apps or their OS rarely, if ever.

Shouldn't the solution come from the server side? You can serve different image sizes to different devices, whereas if you need the browser to do the work you'll wait forever.

There is even a simpler solution, which is to use just one image of average-to-small size, and size it in the page dynamically. If the image is of good quality in the first place (noise free), most users won't notice.


The article mentioned the problem with this approach: Your browser loads the HTML, CSS, JS simultaneously with the images on the start of the page load - so when you find out what size you need at runtime (via JS), and then put in the right img-src, it will be slower than loading a higher-res version right from the start.

Another option would be to "guess" the right size to serve based on the User Agent String (maybe you meant this and I misunderstood?). This could work, although the server may very well guess wrong.


Yes, I meant "guessing" the right size based on the user-agent string as the best approach (since it doesn't rely on the browser doing the work, it will work with any browser, including old ones).

But I also maintain that a single image of relatively small size can be used for all form factors if its quality is good enough (resized by the browser based on CSS instructions); you can at least double the size of a "good" image before you begin to see problems.


The user-agent is the biggest cluster fuck in the history of the web and has nothing to do with HTML. Once you understand this, you'll learn to ignore it.


Noise has nothing to do with good quality in this case. Pixels are the enemy, and they show up all too quickly when you scale a small image to a large size.


And conversely, you don't want to load a huge high-res image on mobile devices with bad connections that won't even use them.


http://caniuse.com/#feat=picture

This probably won't be used on any major sites for the time being, considering the devices that the element has been designed for don't support it.


There's always a polyfill: https://github.com/scottjehl/picturefill

Considering many HTML5 features and related JavaScript APIs can easily be made to work in older browsers using polyfills, many relatively larger sites have been doing this for lots of things (HTML5 block elements in IE<9, for example).
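For instance, the usual IE<9 shim for those block elements just creates them up front so the old parser and CSS engine will treat them as normal elements; a minimal sketch of the technique (the tag list is abbreviated):

    <!--[if lt IE 9]>
    <script>
      // calling createElement with the new tag names is enough for old IE
      // to parse and style <header>, <section>, etc.
      var tags = ['header', 'nav', 'section', 'article', 'aside', 'footer', 'figure'];
      for (var i = 0; i < tags.length; i++) {
        document.createElement(tags[i]);
      }
    </script>
    <![endif]-->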


While <picture> and <img srcset="..."> are a step towards responsive images, I personally see them as too complicated for developers who just want to get things done fast. The complexity of the new standards will slow down their adoption even more than the browser support will.

For example, we solve the adaptive-images problem server-side with our SaaS image compression service http://www.slender.io/ using smart recompression and a few content-negotiation tricks. Some of our customers would like to use <picture> and related polyfills for their sites, but their designers struggle to define the target image sizes relative to viewport dimensions rather than to the size at which the image is (or would be) laid out. As a result, adoption of both smart-browser and server-side solutions is slowed down.

The article mentioned element queries, which will hopefully solve this problem but make the browser implementation much more complex. While the browser can already resolve normal media queries when preparsing (it knows the viewport dimensions all the time), I understand it would know the element queries only after layout, partially defeating the whole purpose of preparsers.

It seems web standards are making things as simple as layout insanely complex. While I am sad about all that artificial complexity, I am happy that no WYSIWYG editor will automate my job any time soon. :)


I was neck deep in this very issue for most of today. It is surprising that there is no usable solution to this issue without resorting to what seem like pretty awful methods. If I am wrong about this please do let me know.

Based on what I found today, there are a couple of ways to handle the problem of variable sized images. If anyone knows others please do tell.

1. Use picture and srcset with a polyfill (Picturefill). With this you end up with verbose markup, as well as needing stuff like "<!--[if IE 9]><video style="display: none;"><![endif]-->" to make it work (see the sketch after this list). It also results in requests for multiple images in browsers that support srcset but not picture, meaning twice as many images are downloaded. Many browsers are in this group with their current or next versions.

2. Use javascript. This is the method employed by various saas solutions that I looked at, and there are of course libraries that you can use yourself. Waiting for javascript to execute before the images can start being pulled down has obvious problems.

3. User agent sniffing. This method requires server side logic to implement, and relies on data that in many cases will not result in an appropriately sized image being rendered.
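For concreteness, option 1 ends up looking roughly like this with Picturefill's documented markup pattern (file names and breakpoints are placeholders):

  <script src="picturefill.js" async></script>

  <picture>
    <!--[if IE 9]><video style="display: none;"><![endif]-->
    <source srcset="large.jpg" media="(min-width: 800px)">
    <source srcset="medium.jpg" media="(min-width: 400px)">
    <!--[if IE 9]></video><![endif]-->
    <img srcset="small.jpg" alt="Fallback for everything else">
  </picture>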

Is there another way? Has anyone got a workable solution to this and could give a recommendation?


The work to get the picture element implemented in Firefox is going on in the bugs in the "Depends on" field in this bug, if people are interested:

https://bugzilla.mozilla.org/show_bug.cgi?id=1017875


For people interested in on-the-fly image processing, there's a nice article here: http://abhishek-tiwari.com/post/responsive-image-as-service-...



Then there is the self-hosted https://github.com/jonnor/imgflo-server based on GEGL


Why stop at images only? A more general solution using CSS-like media queries would be much preferable; with a general solution it would be possible to serve all sorts of assets (CSS, JS, Images, Video, etc) tailored to the display device and network connection.


<picture> is responsive; when you change the viewport's width, the browser can load a differently-sized version of the image. How would JS be made responsive like that? Once one version of a script is loaded, you can't magically replace it with a different version.

Of course, you could limit the media queries that could be used with JavaScript, but then your general solution isn't really so general anymore. And there's already a way to dynamically load scripts, so we don't really need another way to do that:

    <script>
    if (blah) {
        document.write('<script src="a.js"><\/script>');
    } else {
        document.write('<script src="b.js"><\/script>');
    }
    </script>
And there's also already a way to dynamically load CSS (using media queries, even):

    <style>
    @import url(a.css) (max-width: 800px);
    @import url(b.css) (min-width: 801px);
    </style>


Good point about JS.

But there are potentially more types of media (audio, video, html via frames, etc).

Even if a general solution isn't applicable to JS, it can have lots of other uses.


>html via frames

That has the same problem as JS, especially when you realize that framed pages on the same domain can interact with the parent's DOM.


The <picture> tag is set up very similarly to the audio and video tags, so I think that even if this specific solution isn't general, the pattern can be applied generally to other types of media.
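For comparison, the long-standing <video> markup already uses the same pattern of ordered <source> children plus a fallback (file names are illustrative):

  <video controls>
    <source src="clip.webm" type="video/webm">
    <source src="clip.mp4" type="video/mp4">
    Sorry, your browser doesn't support HTML5 video.
  </video>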



Or we could finally stop futzing around with scaling raster graphics and find a way to make vector formats not terrible.

Seriously, fast vector graphics were a solved problem back in the late '90s. How is this still a problem today on the Web?


Not that I'm against improving vector graphics, but the majority of images on the web today (photos) are not suited to vector formats.


I would disagree that the majority of images on the web are photos. Yes, there are many photos on the web, but I'd wager the vast majority of images (or at least the vast majority of image requests) are tiny bits of connective tissue of web layout - little rounded corners, gradient backgrounds, icons, etc.


Rounded corners and gradients can both be handled without images these days, but even if you use images for things like that, the bandwidth (certainly) and number (probably) of photos dwarfs them.


Vector formats can replace simple images, sure. They can't replace photos. And vector art complicated enough to be interesting can be a huge CPU load to draw... and can often be a LARGER file than a bitmap made at a resolution appropriate for the screen. It's not a more efficient representation for complex images until you start getting to 300+ dpi images that fill a whole printed page.

Source: I'm an artist who works primarily in Illustrator and checks to see how horrible the svg performance on her more complicated images is every few years.


Every game company ever disagrees that vector graphics are inherently slow. They just need to be designed to work fast.

SVG just happens to be a very slow implementation of vector graphics.

And my point isn't that they're better than a 300 dpi image, it's that they're better than an infinite list of high-DPI images for every twisted combination of DPI and screen-size.


Rendering my art takes noticeable amounts of time whether it's a PDF renderer, an SVG renderer, or Illustrator's internal renderers. Even after a recent Illustrator update's main bullet point was "we rebuilt a significant chunk of AI's internals to make it tons faster". It may be theoretically possible to render vector images much faster than this, but ain't nobody stepping up to do this.

Moreover: a lot of the images people would like to serve in multiple sizes are presumably photos. Photos and vectors don't go together well; vectors are a lot better at simple images.

And finally. [Here](http://egypt.urnash.com/rita/chapter/01/) is the graphic novel I'm drawing entirely in Illustrator. The 72dpi bitmaps range from 15 to 212 kb. Printable 300dpi bitmaps range from 1.75 to 2.3 mb. They're CMYK rather than RGB, let's guesstimate they'd be 3/4 of the size -- 1.3 to 1.7 mb. Illustrator files? 948-34 MB. And that's with 'create PDF compatible file' turned off, which generally halves the size of an .AI file.

I'm going to pause here to let that sink in. Even distributing a CD full of 300DPI images of a nearly 200 page graphic novel is less data than the source Illustrator files. And converting them to SVG/SVGZ/PDF makes little in the way of size gain, either. A modest list of high-DPI images sounds like a hell of a lot less stuff to upload to your server, never mind that you'd only be sending the single bitmap that best fits the display at hand.

3D games have a lot of tooling to make things Render Fast. There are a ton of ways people have figured out to fake more complex images - LOD, bump maps, baked luminosity, lots more stuff I don't know offhand because I'm not a 3D person. Nobody has put anywhere near the same kind of effort into getting 2D vector images to render fast.

I would love to see someone build tools for doing the 2D equivalent of things like "turning a super hi res mesh into a simpler one plus bump maps" and rendering them quickly. Maybe I could finally stop rendering my stuff into bitmaps for the web. But I don't see anyone doing that any time soon.

anyway.

TL;DR: 2D vector rendering is sluggish and I don't see that changing any time soon, and the file sizes for interestingly complex images is a couple orders of magnitude larger than a 300dpi bitmap. Only the most trivial images are more efficiently served as vector files than as multiple bitmaps.


What's terrible about svg?


Performance.


I found this article on the subject to be more informative and useful:

http://ericportis.com/posts/2014/srcset-sizes/


Seriously, I thought this was what request headers were for. Have the goddamned browser request the most appropriate size, don't hardcode it into the markup.


The problem is if you want different images for different screen sizes, not just different sizes of the same image.


This article is pretty naive.

1. m dot sites are not a thing of the past. Many sites benefit from a pure mobile experience.

2. The Boston Globe (while impressive) does not show that 'responsive design worked for more than developer portfolios and blogs'. The Globe is largely text/image based, and that does not translate to a site like Amazon or Facebook.


I love the idea of the <picture> element, which hopefully most browsers will adopt soon, but how will this be supported on the older iPhones that cannot upgrade to iOS 7 (iOS 8 soon)? Especially with Apple not supporting those devices anymore.


HTTP Client Hints - a new draft implementation of Client Hints for <img> and <picture>, for Ruby. https://github.com/igrigorik/http-client-hints


Someone from the HTML standard body is going to make the web better? Riiiiiiight...


To resize images, I create a cookie with JavaScript that gives the browser's current width and height (mostly for full-image front pages; the function for choosing the image width is based on Bootstrap).
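A minimal sketch of that cookie approach (the cookie name is just illustrative); the server reads it on the next request and picks an image width:

    <script>
      // store the current viewport size so the server can choose an image size
      document.cookie = 'viewport=' + window.innerWidth + 'x' + window.innerHeight + '; path=/';
    </script>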

When I read the title, I thought media queries would get the ability to load external stylesheets, which seems like a better option to me (especially if CSS could fill the img src; then stylesheets would shrink in size, but so would images). The only catch is that this option uses too much back-and-forth communication. Perhaps a default naming convention would be appropriate (e.g. img-1024.jpg for browsers with a max-width of 1024px; the same could be used with stylesheets). Even a syntax like <img src="small.jpg" srcset="large.jpg 1024w, medium.jpg 640w"> could be used.

PS. If you downvote me, at least have the decency to give arguments...


0. This requires a full server round trip before it can send the right size images.

1. This breaks caching, which is a big deal for most sites.

2. This also has the problem of needing a separate fallback to handle window resizing.

3. This requires a separate approach to handle format detection, although you could set a cookie for that as well.


It's also horrible for anyone trying to archive your page.


Then the archiving functionality saves all additional stylesheets and images, so they can be used appropriately later on. But not many people "save" a page as HTML; they bookmark/favorite it if it's interesting, or print it to PDF (or your favorite document type), e.g. on a checkout.

Then again, what's your preference?


Polluting the HTML with extraneous information not intended as markup is never a solution.

As someone already said, use headers, css, etc.


TL;DR: <picture> element.

arstechnica: you can do better than linkbait titles.


I think this should just be a server-side issue: query for images with size information and have the server spit out lower-res images on the fly. This way, the web design guys have less to do when creating graphics assets for a site.


I don't understand why there's so much debate, as this problem should be easily solved with wavelet-type images (JPEG 2000, for instance).

Low-resolution devices could load the first bytes of the image, and high-resolution ones the full image.


This doesn't seem too useful in the first world anymore now that we have 4G connections and higher resolution screens on our phones than our desktops.


I don't agree with your 4G comment, as its coverage is poor, performance is patchy and smaller files will still download quicker regardless of your max speed.

But your point about high res screens is definitely true. Hell, we first had 'retina' hacks and bigger images specifically for our phones! Why is a site going to want to serve up smaller files for screens with more pixels?


I would presume that the higher resolutions of mobile devices over laptops and desktops is just a transient thing.

Laptops and desktop monitors are slowly catching up. We're just living in this part of history where it was easier to manufacture small high-DPI screens than large ones. It's not going to last. Heck, I'm typing this on a 13" Yoga Pro laptop with a near 4K resolution.

So the sizes needed for larger displays will bump up over the next few years and we're still going to have the same issue of having to serve massively larger images to large screens than small ones.


The first world would still benefit because web pages would download faster on mobile.

Also, the first world is just a subset of internet users.


Ah, it might make the web faster on mobile. Not very interesting.


Mobile is getting worse, IMO.

Maybe it's because I have an older iOS device running 6.1, but the insane size of some of these sites combined with all the side-loaded javascript and CSS rendering (and re-rendering) is just making sites completely unusable.

How come I can load Metafilter in 3 seconds, but Wired takes over a minute, if it works at all? And Medium? Even worse.


OMG this. It's insanely frustrating that 90% of the time we're just reading text and it's so painful on older mobile devices that still should run circles around computers that happily consumed internet content ~10 years ago.


The web used to provide text, and allow the user-agent to render it as appropriate for that display/user.

Now the web is a (bad) application platform, where the content designers demand full control of interactivity.

As a user you are not well-served by the modern web. It's not the devices per se it's also the content that is different than ~10 years ago.


As the border between mobile/portable devices and desktops/workstations keeps getting blurrier, this "on mobile" concept will probably lose any meaning in the near future.


Probably not. In my experience, images are usually higher-res (and have bigger file sizes) than on mobile devices.


If mobile browsers struggle to download images, change the mobile browsers. Let the browser wait for some javascript to manipulate the src's ...


I think the current mobile browser is long overdue for disruption. In 10 years, I can see us looking back and chuckling at the fact that we had such tiny spaces for all the information we interact with.

Virtual reality should bring inexpensive, full-peripheral "monitors" that we can interact with naturally, anywhere. No more having to bend over backwards to fit all our info on mobile devices.


>Virtual reality should bring inexpensive, full-peripheral "monitors" that we can interact with naturally, anywhere.

Yeah, I don't see that happening any time soon, because ergonomics.


I can see it evolve into a normal pair of glasses in 10 to 15 years. We need another Steve Jobs-type character to speed things up a bit, but I think we can get there.


I knew about this tag two months ago, when I was implementing a responsive website. I came across the picture element, but unfortunately most modern browsers don't support it yet, so I decided to do it via JavaScript. So I don't see the "news" here; I thought the article was about a compression algorithm or something, but nothing special.



