(b) is not correct. The time complexity of the total-fit algorithm (as described in Knuth and Plass's paper) is almost the same as that of the simple (first-/best-fit) method. They use a technique called dynamic programming to divide the problem into subproblems, so that an optimal set of breaks for an initial part of the paragraph can be reused when breaking the whole paragraph optimally.
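Roughly, the dynamic programming looks like this (a minimal Python sketch; it uses squared leftover space as the badness measure instead of the full box/glue/penalty model, assumes no single word is wider than the line, and the function and parameter names are just illustrative):

    def total_fit_breaks(word_lengths, line_width):
        # Minimal dynamic-programming line breaker. Simplified badness:
        # squared leftover space per line; no boxes/glue/penalties, no hyphenation.
        # Assumes no single word is wider than line_width.
        n = len(word_lengths)
        INF = float("inf")
        best = [0.0] + [INF] * n     # best[i] = minimal badness for the first i words
        prev = [0] * (n + 1)         # prev[i] = start index of the line ending at word i
        for i in range(1, n + 1):
            width = -1               # cancels the space added before the first word
            for j in range(i, 0, -1):            # candidate line holds words j..i
                width += word_lengths[j - 1] + 1
                if width > line_width:
                    break
                slack = line_width - width
                badness = 0 if i == n else slack * slack   # last line is free
                if best[j - 1] + badness < best[i]:
                    best[i] = best[j - 1] + badness
                    prev[i] = j - 1
        lines, i = [], n             # walk back to recover the chosen breaks
        while i > 0:
            lines.append((prev[i], i))
            i = prev[i]
        return lines[::-1]           # list of (start, end) word-index ranges

The inner loop only considers break points that still fit on one line, so the work per word is bounded by the number of words that fit on a line.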
It is definitely slower. The computational complexity of the Knuth-Plass algorithm might be of a similar (or maybe even the same) order as just fitting as much as possible on each line, but it at least has a different constant out front. I’m not exactly sure what the speed difference is, but it’s non-negligible... though on modern hardware either way is absurdly fast, and there’s no reason the Knuth method couldn’t be done in real time even for quite large documents.
If you don't know the constant, how can you say that it is non-negligible? I doubt that one could measure the difference in speed when rendering an average web page with either algorithm.
And your remark about real time: even the old (and slow!) TeX can do line breaking for several hundred pages of text in a second, including writing the PDF. So you are probably right about real-time h&j even for quite large documents.
Knuth-Plass takes O(kn), where n is the number of words and k is the number of words per line. That’s definitely slower than the naive line-by-line method, which is O(n).
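For comparison, the naive method is just a single greedy pass (again only a sketch, using a simplified word-width model with illustrative names):

    def first_fit_breaks(word_lengths, line_width):
        # Naive greedy breaker: put as many words as fit on each line. O(n).
        lines, start, width = [], 0, -1
        for i, w in enumerate(word_lengths):
            if width + w + 1 > line_width and i > start:
                lines.append((start, i))     # close the line before word i
                start, width = i, w
            else:
                width += w + 1               # word plus the preceding space
        lines.append((start, len(word_lengths)))
        return lines                         # (start, end) word-index ranges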
But the better algorithm can supposedly be improved with the method in this paper (I haven’t read it and don’t have time just now, so I’m not sure how much slower it would then be than the naive way).
I haven't really wrapped my head around the paper yet, but I think it only holds if you take out some of the elements of the original Knuth & Plass algorithm (the box/glue/penalty model). Nevertheless, linear time seems possible for a total-fit algorithm if you're willing to give up that model.
Nobody denied that it is slower in theory, but I still claim that you won't be able to measure the difference on an average web page. So time is not an excuse for not implementing this (and this was the original question).
Yeah, I never would have disagreed with that. Which is why I said “Both seem like pretty weak excuses to me.”
By “non-negligible” I meant something like 2–10 times slower (you’d have to test to be sure... I tried searching around but couldn’t find any direct speed comparisons). For typical pages it’s not going to matter, but if you have a page like the HTML5 spec, where it takes half a second to resize a window now, it’s going to be noticeably slower using a more sophisticated paragraph composer.
Keep in mind, Safari doesn’t even do kerning, because they’re afraid it’d be too slow. (Also a ludicrous excuse for desktop hardware and typical web pages, IMO.)
And the Safari guys also use the same excuse (“it’s too slow”) for not doing real color management with CSS/HTML colors.
Before we make fun of the webkit folks for not doing the proper work because "it's too slow," remember that these are the same folks that we praise for making the fastest dang HTML rendering engine on the planet. (Or near enough.) Their culture of performance has significantly raised the standard of web browsing performance, and that's not something to take lightly.
I imagine it would have to make a fairly significant difference in rendering quality in order to make any speed losses worthwhile. Death by a thousand cuts and all that.
I would like to have two builds of the same browser, one with higher-quality typography and one as it is now, and then compare the speed. That would be interesting.
Oh, sorry. I think maybe these are ones that have full text available if you arrive via a google search? I don’t think I’m currently logged into anything special.