
I have been using Firefox since version 1.0. I don't understand the desire to use Google's browser. More to the point, why would you even try to view secure data in your web browser, and not just in Chrome? Things may get cached, etc. That said, Mozilla has been doing things that I find annoying at times, like adding Pocket and so on.

Little rant: I have looked at some of the other forks. What I find more depressing is how few up-to-date browser engines exist. It's a sign that web standards are getting too complicated. We're already going to have a third version of HTTP as well. Both HTTP/2 and the prospective HTTP/3 are based on work from Google, and those protocols are a lot more complicated than HTTP/1.x, so it's much harder for a small group to implement them. And that's just the protocol layer, let alone JS, HTML, CSS, and all the other little things. It's as if the big companies keep bloating the standards. The result is that the browser is probably one of the more complicated pieces of software we regularly use.

Whatever happened to "KISS"?




Maybe over-complicating things is Google's way of eliminating competition. If it weren't so complicated, someone could easily offer a competing, privacy-oriented browser, which Google would not like -- so make it so hideously complex that no one can do it without $50 million? There would be more innovation if things were simple, because anyone with a good idea could contribute.


This sort of happened when the WHATWG effectively wrested control of HTML away from the W3C (although Google was not a founding member, they are one of the Steering Members now). https://thehistoryoftheweb.com/when-standards-divide/

The membership of the W3C supported XHTML, to improve interoperability among other reasons. Apple, Mozilla and Opera had a different vision and broke away and formed the WHATWG which Google and Microsoft later joined. Those companies (minus Opera) now have near total control over HTML and the W3C just rubber stamps whatever they decide.

(Note: I don't believe the participants in WHATWG were doing what they did for anti-competitive purposes, but in hindsight it had that effect.)


XHTML actually decreased interoperability, since hardly anyone was able to produce conformant strict XHTML. XHTML was a huge mistake; the W3C obsoleted itself with this one.


Precisely that has been observed across many markets, teachers' unions being an example where adding requirements for entry entrenches current members.


Google also has additional power by simply not implementing things introduced by WHATWG participants. Case in point: the menu/menuitem elements, which would have provided scriptless interaction in a limited way (removed from W3C HTML 5.2). Any small attempt to make the web more declarative by extending HTML is doomed as "not essential" because it can be implemented using JS.

WHATWG's specification process is broken, and has been for a long time: it puts the world's main communication medium into the hands of browser vendors with an interest in eliminating competition and defining entirely new Turing-complete runtimes (WASM), and of advertisers who turn around and create competing mechanisms (AMP), and it never actually delivers a standard (the "living standard" nonsense).


I've seen a number of comments about http/2 and http/3 being driven by Google. The ideas originated there (SPDY and QUIC respectively) but in both cases many different entities backed the ideas and formulated specifications in IETF settings. I'm not sure I buy that somehow Google managed to hoodwink the people that toiled on these specifications in a non-Google environment and managed to influence them in such a way that the output was beneficial to their nefarious goals.

There are already quite a number of http/3 implementations from non-Google companies and projects [0]. Cloudflare seem to be big backers of http/3 [1]. There were some other articles today that are generally positive on the http/3 approach. One was from Tim Bray at AWS [2] and the other from @ErrataRob [3].

0. https://github.com/quicwg/base-drafts/wiki/Implementations

1. https://cloudflare-quic.com/

2. https://www.tbray.org/ongoing/When/201x/2018/11/18/Post-REST...

3. https://blog.erratasec.com/2018/11/some-notes-about-http3.ht...


It's just an example of how much weight Google is able to throw around. That's just one part of what would be needed for a browser. The more parts you add, the harder it becomes to build a browser.

Also, I have read about QUIC; there are some things about it that are interesting. However, there are things that I don't like.

Moreover, this was something I read from IETF mail archive: "That QUIC isn't yet proven. That's true, but the name won't be formalised or used on the wire until the RFC is published, so we have a good amount of time to back away. Even then, if it fails in the market, we can always skip to HTTP/4 one day, if we need to."[1]

I find that pretty concerning. If it does not pan out, we can just skip over it. That's still something someone has to implement even if it's not used much. I would only consider things that people in general are eager to use, not just a few big companies.

[1] https://mailarchive.ietf.org/arch/msg/quic/RLRs4nB1lwFCZ_7k0...


> That's still something someone has to implement even if it's not used much.

Not true. This is why ALPN and the Upgrade header exist. You do not need to implement any of the new protocols, and you can certainly skip a version if you don't think it's worth the effort.
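
For what it's worth, here is a minimal sketch (in Go, using crypto/tls, purely illustrative) of how ALPN makes skipping a protocol painless: the server advertises only the protocols it actually implements, and a client that would prefer "h2" just negotiates down instead of failing.

    // Illustrative only: a TLS listener that advertises just HTTP/1.1 via ALPN.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem") // assumed file names
        if err != nil {
            log.Fatal(err)
        }
        cfg := &tls.Config{
            Certificates: []tls.Certificate{cert},
            NextProtos:   []string{"http/1.1"}, // only what we actually implement; no "h2"
        }
        ln, err := tls.Listen("tcp", ":8443", cfg)
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            tconn := conn.(*tls.Conn)
            if err := tconn.Handshake(); err == nil {
                // Whatever both sides share gets picked during the handshake.
                fmt.Println("negotiated:", tconn.ConnectionState().NegotiatedProtocol)
            }
            conn.Close()
        }
    }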


> Cloudflare seem to be big backers of http/3 [1].

Having the second-largest traffic analyzer on board would seem like more of a cautionary negative than a positive to me.


Tinfoil hat off for a second: it makes more sense that Cloudflare and Google are backing these protocols because they're more efficient, which means lower infrastructure costs. They both terminate traffic already, so they can already see everything regardless of the protocol used.


I am not the person you responded to. However, I would only consider things that people in general are eager to use, not just a few big companies. Most users of HTTP have never been too concerned with its overhead, except maybe the way cookies have been designed. It definitely has problems, but most people's problems are not Google's or Cloudflare's.


So we should be against something that makes all sites faster... because big companies care more about their sites being fast? That just seems like spite to me.

If anything, smaller sites have more to gain from HTTP/2 and HTTP/3 than the likes of Google. For example:

- Both HTTP/2 and HTTP/3 seek to reduce the number of round trips, mitigating latency between the user and the server. Now, from Google's perspective, the "server" is the nearest load balancer in a globally distributed network, which is probably geographically close to wherever the user is. Thus, users with good Internet connections typically have low enough latency for the improvements not to matter much. But Google still cares about latency because of users with poor internet connections – such as anyone on a cell network in a spotty coverage area. Well, poor connections affect all sites equally. But small sites tend to not be fully distributed; they probably only have a single origin server for application logic, and perhaps a single server period, if they're not using a CDN. That means a fixed geographic location, which will have higher latency to users farther away even if they have a good connection – thus more benefit from latency mitigation.

- QUIC can send stream data in the first packet sent to the server, without having to go through a SYN/ACK handshake first. TCP Fast Open lets plain old TCP do the same thing – but only when connecting to a server you've seen in the recent past (and retrieved an authentication tag from). Thus, QUIC is faster when connecting to a server for the first time – which affects smaller sites a lot more than Google (rough numbers sketched below).
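
To put rough numbers on the round-trip argument above (illustrative, assuming a 150 ms round trip to a single distant origin and TLS 1.3; these are not measurements):

    New connection, HTTP/1.1 or HTTP/2 over TCP + TLS 1.3:
        TCP handshake        1 RTT
        TLS 1.3 handshake    1 RTT
        request/response     1 RTT
        ~3 RTT before the first response byte = ~450 ms

    New connection, HTTP/3 over QUIC:
        combined transport + crypto handshake   1 RTT
        request/response                        1 RTT
        ~2 RTT = ~300 ms, or ~1 RTT = ~150 ms with 0-RTT
        resumption to a previously seen server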


> Most users of HTTP have never been too concerned with its overhead

End users complain all the time about latency. And that includes the latency to your small website hosted on a single server hundreds of milliseconds from your visitor... certainly more than it includes Google's websites.

What you really mean is that small website operators generally don't care that their visitors are irritated by how slow their website is... and just brush it off and ignore it because they have no solution to the problem.

Maybe you should consider h2 as being for the benefit of visitors across the internet, and a benefit for those who care about performance.

It says it all that even though h2 is not required, small websites have adopted it across the globe... now at a third of all websites, and growing.


I don't think Cloudflare really does traffic analysis. At least nowhere near the level that Google does. It is not their core business.


Why, then, do they offer a free, fully functional CDN-like service and free SSL? Data is the new oil, and CF has all the data in plaintext, your logins/passwords included.


Because...

a. It's really cheap for us to offer that service

b. Lots of those free customers end up upgrading, paying for extras, etc.

Between a and b, offering the free service makes sense. We make money from the customers who pay us for our service (https://www.cloudflare.com/plans/), not from doing something nefarious with data. We'd be shooting ourselves in the foot if we did, because that data is our customers' data. We need to be very careful with that or we'd lose trust and not be in business.

Also, free means anybody can try the service and kick the tires. Often those people turn out to be the CIO, CSO, CISO, CTO, ... of a big corp.


The plaintext thing is just too sensitive, and your free service offer makes the reach too wide. Could you be compelled, by warrant, to provide all plaintext traffic from a single user IP?


> I don't understand the desire to use google's browser.

It was the only browser with a decent Javascript sandbox, at least until recently. Wikipedia claims Firefox got a sandbox this month, but I think I've seen earlier claims:

> Until November 2018, Firefox was the last widely used browser not to use a browser sandbox to isolate Web content in each tab from each other and from the rest of the system.[120][121]


Also it was the only browser where every tab ran in its own process so a crash would only take down that tab.


Microsoft's browsers got this functionality pretty early as well (I believe around the IE9/10 timeframe), though they of course had and still have numerous other issues that would make them undesirable for regular usage.


> It was the only browser with a decent Javascript sandbox

What about Safari? IMHO it has strong sandboxing. Another interesting thing I found is cookie access sharing between private tabs: Safari does not share it, Chrome does.


Could be. I don't know much about the Apple ecosystem.


> Those protocols are a lot more complicated than HTTP. So it's much harder for a small group to implement them.

Why does a small group need to reimplement HTTP/2 and HTTP/3? It's important that we have more than 1 or 2 implementations, but we don't need more than a small handful, and we definitely don't need every independent group reimplementing them. We just need enough that anyone who needs it has access to an implementation that's usable for them, whether it's bundled with the OS (such as Apple's Foundation framework including a network stack that supports HTTP/2), or available as a library (such as Hyper for Rust, or I assume libcurl has HTTP/2 support).
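
As a small, hedged illustration of that point: in Go, the standard net/http library negotiates HTTP/2 over TLS on its own, so application code never touches the framing layer at all.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // The default client offers "h2" via ALPN and falls back to HTTP/1.1
        // if the server doesn't speak it; the calling code is identical either way.
        resp, err := http.Get("https://example.com/")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("spoke:", resp.Proto) // "HTTP/2.0" or "HTTP/1.1"
    }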


Because then you get more parts in your stack whose workings you don't really understand and that you are unable to audit.

We are basically doing that with TLS. Which went fine, until people realized that one of the major go-to implementations of TLS contained years-old unfixed bugs that could be remotely exploited.


I am not sure TLS would have been better if instead everyone rolled their own TLS implementation.

Nor do I think that a more diverse world of TLS implementations would've led to better auditing of openSSL. We had barely enough eyeballs to audit openSSL, let alone to audit more stuff.

The issue with openSSL was that the protocol was sufficiently complicated and sufficiently critical that people just picked the available option. Perhaps those who did look into the code they were running concluded it was bad, but weren't willing to create a new library. Besides, any new library would have the stigma of 'they are using a non-standard and new crypto library'.

In that case, the solution would've been louder complaints about the code quality of openSSL.


Better for everyone to be using a small handful of battle-tested implementations written by experts than for everyone to roll their own implementation. The latter may mean that people have a better understanding of the component, but it's also pretty much guaranteed to mean the various implementations are buggy. Even very simple protocols are easy to introduce bugs into.

For example, it's pretty easy to write an HTTP/1.0 implementation, but it's also easy to open yourself up to DoS attacks if you do so. If you're writing a server, did you remember to put a limit on how large a request body can be before you shut down the request? Great! Did you remember to do that for the headers too? Limiting request bodies is an obvious thing to do. Limiting the size of headers, not so much. But maybe you thought of that anyway. What about dealing with clients that open lots of connections and veeery sloowly feed chunks of a request? The sockets are still active, but the connections are so slow you can easily exhaust all your resources just tracking sockets (or even run out of file descriptors). And this is just plain HTTP, without even considering interacting with TLS.
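
For a sense of what those limits look like in practice, here is a sketch using Go's net/http (the specific numbers are arbitrary): header size, header read time (the slow-drip case), and body size each get their own cap.

    package main

    import (
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Cap the request body; reads past the limit abort the request.
            r.Body = http.MaxBytesReader(w, r.Body, 1<<20) // 1 MiB, arbitrary
            if _, err := io.ReadAll(r.Body); err != nil {
                http.Error(w, "request too large", http.StatusRequestEntityTooLarge)
                return
            }
            w.Write([]byte("ok"))
        })

        srv := &http.Server{
            Addr:              ":8080",
            Handler:           handler,
            MaxHeaderBytes:    16 << 10,        // headers get a limit too, not just bodies
            ReadHeaderTimeout: 5 * time.Second, // drop clients that drip headers slowly
            ReadTimeout:       30 * time.Second,
            IdleTimeout:       60 * time.Second,
        }
        log.Fatal(srv.ListenAndServe())
    }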


"It's a sign that web standards are getting too complicated."

Is there precedent for standards significantly simplifying over time, or do they always tend to get more and more complex?


What frequently happens is that a simplified alternative appears.

HTML5 rather than XHTML, Markdown vs. HTML or LaTeX, HTML, originally, vs. SGML or Sun's ... proprietary hypertext system (Vue?).

Arguably, replacement of much office suite software with Web technologies.

Multics -> Unix.


This is true, but a web browser can't really make those choices without breaking a lot of existing stuff. The big problem is that we keep piling onto HTML, CSS, and JS. For instance, if we wanted web apps, it would have been better to make something separate. Instead we have taken HTML, which was originally just a way of formatting rich text, and have made it into the beast that it is today.


This may be a nitpick, but hopefully it's also an interesting rabbit-hole:

HTML was originally contemplated as more than a method of rich text formatting. It was created as a way to describe and link arbitrary media and applications. I'd recommend reading the first published proposal for (what later became known as) the World Wide Web, written by Tim Berners-Lee [1]. In my reading, I see it as intending applications as powerful as the kind we build today - at least as far as could be contemplated and described in 1989, and given the degree of abstraction with which the document was written:

> "Hypertext" is a term coined in the 1950s by Ted Nelson [...], which has become popular for these systems, although it is used to embrace two different ideas. One idea[] is the concept: "Hypertext": Human-readable information linked together in an unconstrained way. The other idea [...], is of multimedia documents which include graphics, speech and video. I will not discuss this latter aspect further here, although I will use the word "Hypermedia" to indicate that one is not bound to text.

An example of anticipated usage:

> The data to which a link (or a hot spot) refers may be very static, or it may be temporary. In many cases at CERN information about the state of systems is changing all the time. Hypertext allows documents to be linked into "live" data so that every time the link is followed, the information is retrieved. If one sacrifices portability, it is possible so make following a link fire up a special application, so that diagnostic programs, for example, could be linked directly into the maintenance guide.

Another category of use-case was web crawling, link-based document search, and other data analysis.

These and other anticipated use-cases envision more than text formatting; the primary purposes of the proposal were, in my opinion, the inter-linking of information and the formal modeling of information, especially for the purpose of combining different programs or facilities into a single user experience.

[1] https://www.w3.org/History/1989/proposal.html


I wish Google Search would create an HTML5 subset for documents that would boost rankings if used.

A good majority of search results I am looking for should be simple single page HTML documents that don't use complex HTML5 features that are needed for web apps.

Change ranking, and you give websites the incentive to avoid JavaScript or CSS features that are against the reader's interests.


I'm 80% sure you're joking, but just in case, this is essentially what AMP does.


The last thing we need is Google dictating more about the internet.


My understanding was that this was the original plan for XHTML. Keep HTML 4.x around as a "legacy standard" for old content, make new developments in a new language with an architecture more suited for modern use cases.

Of course this would have required browser vendors to support two languages at the same time for a sufficiently long transition period, which was apparently too much to demand.


But they did support both languages, and support them to this day.

It's the sites that didn't adopt XHTML. Everybody on the infrastructure side loved it.


> ...without breaking a lot of existing stuff...

That's specifically why and how new standards appear. They accomplish most (though not all) of the earlier capabilities, with a massive reduction of complexity. It's a form of risk mitigation and debt reduction.

Compare browsers generally: Netscape -> MSIE -> Mozilla -> Firefox -> Chrome -> Firefox. Each predecessor reached a point of complexity at which, even with massive infusions of IPO, software monopoly, or advertising monopoly cash, it was unsustainable.

The old, dedicated dependencies (frames, ActiveX, RealPlayer, Flash, ...) broke. Simpler designs continued to function.


>For instance if we wanted web apps it would have been better to make something separate

But then we need to make another app + browser version? Which defeats the purpose...


Moreover, we have gone from Microsoft pushing complexity to Google doing it.

The latest two HTTP protocols, for instance, are both based on tech that Google had already made, and the IETF's response amounts to "that sounds good." It has its advantages, but there is very little pushback saying, "well, that makes things more complicated."

For instance, HTTP/2 has support for pushing files to the client. Most back-end web stacks are still trying to think of good ways to make that easy to use, mainly since what files to send depends on what the page contains. So either you have to specify a custom list, or the web server now needs to understand HTML to get a list of required resources. This also gets more complicated since a push will be useless if the resource is already cached, meaning your web server has to have some kind of awareness of how clients will cache data. Again, this starts to mean your web server needs more client knowledge.
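
To illustrate how manual this is, here is a rough Go sketch (the handler and file names are made up): the application has to carry its own list of sub-resources to push, and has no visibility into what the client already has cached.

    package main

    import (
        "log"
        "net/http"
    )

    func handleIndex(w http.ResponseWriter, r *http.Request) {
        if pusher, ok := w.(http.Pusher); ok { // only available on HTTP/2 connections
            // Hand-maintained list; nothing here knows whether the browser
            // already has /app.css in its cache.
            if err := pusher.Push("/app.css", nil); err != nil {
                log.Printf("push failed: %v", err)
            }
        }
        http.ServeFile(w, r, "index.html")
    }

    func main() {
        http.HandleFunc("/", handleIndex)
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }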

This does not even take into account how the browser should handle these things.

Additionally, while cryptography is a good thing, the standard for HTTP/2 does not require it. However, pretty much all the browsers ignore the fact that unencrypted HTTP/2 is allowed, so if you want to run HTTP/2 without TLS, the browsers act like the site does not exist. This gets into the problem that, since there are so few browsers, they can basically make de facto standards. So even if you went through the effort and followed the standards, what you encounter may not follow those standards at all.
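
For completeness, cleartext HTTP/2 ("h2c") is easy to serve; it's the browsers that won't use it. A minimal sketch using the golang.org/x/net/http2/h2c helper (illustrative):

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("served over " + r.Proto + "\n"))
        })
        // Accept cleartext HTTP/2 alongside HTTP/1.1. Tools like
        // `curl --http2-prior-knowledge http://localhost:8080/` will use it;
        // mainstream browsers will still only speak HTTP/2 over TLS.
        log.Fatal(http.ListenAndServe(":8080", h2c.NewHandler(mux, &http2.Server{})))
    }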


The standard for h2 may not have required it, but practically it was required. There are middleboxes on the internet that assume any traffic over port 80 is HTTP/1.1, and will destroy/interfere with/break non-1.1 traffic. There are also servers that will respond with a 400 error if they see an unrecognized protocol in the Upgrade header. This is why actual data shows h2 has a higher success rate when sent over TLS.

IIRC MS/IE wanted to implement it, but they backed off because of these issues.

Asking browsers to implement h2c is asking them to make their browsers flakier... their users would see a higher connection error rate... which the user WOULD attribute to their browser, especially if they open the same URL in another browser without h2c and it works.

Using the Upgrade header instead of ALPN is slower anyway.


> HTML5 rather than XHTML

Huh? Parsing HTML5 is much more complicated than XHTML, and everything else is about the same.


The issue with XHTML is not parsing, it's generating a valid document. The internet got years to try, and failed; time to switch to something else...

Because parsing invalid XHTML, which all browsers ended up doing, is more complicated than parsing HTML5...


It's pretty easy to generate a valid XHTML doc. The issues come when someone is editing by hand and doesn't care.

> Because parsing invalid XHTML, which all browsers ended doing, is more complicated than parsing HTML5...

I don't understand what you mean. Isn't the non-strict parser for XHTML just the normal HTML parser? The complication levels should be equal.


> It's pretty easy to generate a valid XHTML doc.

In the face of arbitrary user-content, like comments? Are you checking they don't include a U+FFFF byte sequence in there? (Ten years ago almost none of the biggest XHTML advocates had websites that would keep outputting well-formed XML in the face of a malicious user, sometimes bringing their whole site down.)

It's absolutely possible to write a toolchain that ensures this, just essentially nobody does.
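
A sketch of the kind of guard such a toolchain needs (Go, illustrative): strip the code points XML 1.0 forbids, U+FFFF among them, then escape markup before it ever reaches the XHTML output.

    package main

    import (
        "encoding/xml"
        "fmt"
        "strings"
    )

    // validXMLChar reports whether r may appear in XML 1.0 character data.
    // U+FFFE/U+FFFF and most control characters may not.
    func validXMLChar(r rune) bool {
        return r == 0x09 || r == 0x0A || r == 0x0D ||
            (r >= 0x20 && r <= 0xD7FF) ||
            (r >= 0xE000 && r <= 0xFFFD) ||
            (r >= 0x10000 && r <= 0x10FFFF)
    }

    func sanitize(userInput string) string {
        cleaned := strings.Map(func(r rune) rune {
            if validXMLChar(r) {
                return r
            }
            return -1 // drop forbidden code points
        }, userInput)
        var b strings.Builder
        _ = xml.EscapeText(&b, []byte(cleaned)) // escape <, >, &, quotes
        return b.String()
    }

    func main() {
        fmt.Println(sanitize("<b>hi</b>\uFFFF"))
    }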

> Isn't the non-strict parser for XHTML just the normal HTML parser?

Yes. It's literally the same parser; browsers fork simply based on the Content-Type (text/html v. application/xhtml+xml), with no regard for the content.

The bigger problem with XML parsers is handling DOCTYPEs (and even if you don't handle external entities, you still have the internal ones), and DOCTYPEs really make XML parsers as complex as HTML ones. Sure, an XML parser without DOCTYPE support is simpler than an HTML parser, but then you aren't parsing XML.


The problem is that, with the glut of documents declaring strict conformance but failing to achieve it, fallback mechanisms had to be implemented, making it like a two-pass parser: if strict parsing fails, you reparse in non-strict mode. In the end, slightly more complex, and definitely slower.

Anything more would be paraphrasing http://www.webdevout.net/articles/beware-of-xhtml


In the particular case of web standards, my impression is that some companies that develop browsers (1) tie individual performance evaluations (e.g. bonuses) to whether the engineer has added stuff to standards and (2) _really_ like over-engineering things. The effect on web standards has not been good.



Firefox performs badly, especially on my 2-core MacBook.

Quantum is still not fast enough on many pages I use. I bet most devs do not test on Firefox anymore, and I've found FF unusable unless you use a 4-core machine; otherwise you get many random pauses here and there.

So my choice is Chrome or Safari. Safari is not customizable enough for me, so Chrome it is.


I use Firefox as my daily driver and I am consistently amazed by how slow Chrome is whenever I load it up for a debug session or to access a work related site (it's the new IE, sites only support it).

Most Google sites are faster in Chrome than in Firefox (big surprise /s), but most everything else is the same or slower. I thought Chrome was supposed to be fast; it feels like a turd.

I have a Yoga 2 (4 years old) and my laptop fan revs up like a harrier jump jet whenever I load Chrome. Firefox only manages to make it purr loudly.





