Hacker News
Chrome browser for businesses (enterprise.google.com)
317 points by vikiomega9 on Dec 29, 2016 | 274 comments



Is anyone else concerned that this means IT can choose to hold back the version of Chrome in their organizations? Auto-updating Chrome has low-key been one of the best solutions to the pain of backwards compatibility with older browsers. In the past we not only had to worry about compatibility between browsers, we had to worry about compatibility between browser versions. Further, auto-updating Chrome has dramatically reduced the time from new web feature implementation to widespread deployment, and thus usability. I take it that turning off auto-updating will not be widespread, but I'd rather not risk it.


Companies have far too much internal plumbing to permit browsers to auto-update themselves. Being feature-compatible with the latest SV-poster-child website is insignificant compared to maintaining stability for the tax-management team's internal web app.

I was once involved in an IE6 -> IE8 upgrade for a Fortune 100 corporation. It took nine months to analyze all the possible impacts and implement mitigations before the first internal production release. And that was fast, on account of teams being forced to participate by C-officer mandate.

By the time the deployment was finished, which required several more months, IE10 was mainstream in the Real World.


I think this is a weak argument. The problem here is that companies think that software can exist in some sort of "done" state, where no further updates or development is required.

That has never been true.

The hardware and software changes. The infrastructure it connects to changes. But most importantly, your workflows must evolve if you are to remain efficient.

If you roll your own software, you either commit to constantly updating it, or you commit to throwing a ton of money down the drain when it rots away to the point where it is no longer fit for your workflow or for the world it operates in.

Software is never finished, only abandoned.


I question if you've worked with large enterprises. They freeze everything for a decade or more. Think about a bank with 500 offices, a dozen tellers at each office. Having their internal web app that everyone uses go down because of an auto upgrade could cost them hundreds of millions. They have a very different risk appetite.


That doesn't mean this is a good practice. In fact, I would argue that this type of thinking has created far more harm than good for both the IT industry as well as the companies that adopt this harmful practice.

Case in point: many of these same companies that refuse to upgrade systems for a decade or more are still running Windows XP and Server 2003. Due to the negligence of their IT leadership, they are now exposed to all manner of security vulnerabilities that will never be patched, and have put both their business and their customers' private data at risk.

This is no different than a state or local government refusing to maintain critical infrastructure like bridges and tunnels. Negligence might be viewed as conservatism for a few years, but when the bridge collapses because routine maintenance wasn't performed, it becomes clear that the administration who neglected maintenance was at fault.

The sooner we kill the cancerous idea that IT software can be frozen in some golden state, perpetually providing business value with zero maintenance required, the better.

Modern, thoughtful IT leadership realizes the value of a continuously updated, secure browser. Chrome is already winning the browser war in the enterprise for this reason alone. This just provides additional, much-needed controls, like whitelisted extensions that are known to be safe.


> That doesn't mean this is a good practice. In fact, I would argue that this type of thinking has created far more harm than good for both the IT industry as well as the companies that adopt this harmful practice.

> Case in point: many of these same companies that refuse to upgrade systems for a decade or more are still running Windows XP and Server 2003. Due to the negligence of their IT leadership, they are now exposed to all manner of security vulnerabilities that will never be patched, and have put both their business and their customers' private data at risk.

There are legitimate reasons for this, though. The way I understand it, a big chunk of the problem is that many software vendors refuse to properly separate security-related updates from regular updates. Firefox at least has an ESR version; AFAIK there is no equivalent for Chrome. See also attitudes like the one mentioned in: https://news.ycombinator.com/item?id=9164251

Similar examples can be given for MATLAB: I know many professors who ask students to run one old, fixed version of MATLAB for years on end, because they do not want to waste their time dealing with the breakage that inevitably happens with newer releases.

If all vendors actually put in effort into maintaining LTS versions, the situation could be different.

I do agree that the idea of perpetual business value with zero maintenance in a frozen state doesn't make sense - but as a business, I would always be interested in minimizing my maintenance burden.


As another commenter mentioned, LTS for Windows XP and Server 2003 is very good - where else can you expect 15+ years of support?

> but as a business, I would always be interested in minimizing my maintenance burden.

The argument that I'm trying to make is that by deferring maintenance until it is too late, you're actually dramatically increasing your maintenance burden in the long term. Testing and maintenance fixes for your app to support Chrome updates, which rarely break anything, will be much lower over a 10+ year (or probably even 5+ year) timeframe than just sticking your head in the sand and making someone 10 years from now pay an astronomical cost to refactor the app completely.


Not that I'm arguing for or against auto-updating, but I'd say Windows XP fits the bill for a vendor maintaining an LTS and yet people are still on that. LTS versions need to die at some point too.


You make an interesting analogy, but I'm not sure it truly supports the point you were hoping to make.

If maintaining critical infrastructure like bridges and tunnels worked like deploying software updates, every few weeks the bridge or tunnel would close itself to traffic in one direction for a while, without warning.

Every now and then when the tunnel reopened, it would feature a loop-the-loop hidden inside the mountain. While it looked the same from outside, it would therefore become completely impassable by large amounts of the traffic that used to rely on it, including ambulances, food supplies, and the CEO's limo.

Every couple of years, your world famous suspension bridge, used as a widely recognised landmark by millions, would be redesigned more like a Roman aqueduct. With traffic driving on the opposite side of the road. And a high speed railway line running right down one of the traffic lanes. With warning signals that are supposed to alert motorists to the danger, but in practice don't because none of the motorists know what the funny symbols mean.

When software engineering is at least on the same planet as civil engineering, and software updates are subject to the same standards of oversight and approval before being deployed, we can talk. Until then, organisations can and will see software as a tool that is there to do a job, and view updates to working software as a high risk that isn't worth taking if it might stop that software from doing its job. It might be inconvenient for the software developers, who naturally would prefer everyone to bend to their will and never to have to maintain anything older than five minutes, but those software developers are going to have to realise at some point that the world doesn't revolve around them.


> If maintaining critical infrastructure like bridges and tunnels worked like deploying software updates, every few weeks the bridge or tunnel would close itself to traffic in one direction for a while, without warning.

They already do this by closing lanes, and even closing entire bridges at night during maintenance when required. The "without warning" part is only if you're doing it wrong: you should be giving your users advance notice of any maintenance activity.

> Every now and then when the tunnel reopened, it would feature a loop-the-loop hidden inside the mountain. While it looked the same from outside, it would therefore become completely impassable by large amounts of the traffic that used to rely on it, including ambulances, food supplies, and the CEO's limo.

I'm not sure I buy this argument. The failure modes of bridges are well understood and usually life threatening. The failure modes of software are not as well understood, but are rarely life threatening. If you're updating pacemaker software, yes, take extreme precautions and don't introduce defects, but the amount of testing and preparation should depend on the impact of a potential failure.

> Every couple of years, your world famous suspension bridge, used as a widely recognised landmark by millions, would be redesigned more like a Roman aqueduct.

This seems like a false equivalency. Suspension bridges take years or decades to design and build. Software can be built in weeks or even days. The frequency of major infrastructure changes is directly correlated with the time and effort involved in deploying them, in both civil and software engineering.


Sorry, but we seem to be talking completely at cross-purposes here.

My previous comment was not really about the realities of civil engineering. It was about how absurd civil engineering would be, if the routine maintenance done in that context failed as often and as spectacularly as software updates do.

To be more blunt about it: Organisations don't want software that updates itself frequently and fallibly, because they simply can't afford to have basic functionality going out of service every few weeks, complete and possibly permanent loss of compatibility with critical services from time to time, and the design and UI changing at random in ways that are confusing to users.


I agree that we're probably talking past each other, re: civil/software engineering.

The entire reason to do frequent, small updates, instead of large, spectacular updates is to avoid all the problems you mention in the last paragraph.

In that case, it actually does become quite a bit like civil engineering: if they keep up relatively minor work, like repaving a lane every year without disrupting traffic completely, they can avoid the entire road failing spectacularly and having to be blocked off for a rebuild. Of course, I don't really know much about civil engineering...

Cheers, anyway.


> The entire reason to do frequent, small updates, instead of large, spectacular updates is to avoid all the problems you mention in the last paragraph.

So the theory goes, but I'm not sure there's really any such thing as a small update when you're talking about software used by hundreds or thousands of staff that provides the platform on which tens or even hundreds of important business applications are built.

If you're talking strictly about security updates, which are intended to address an identified vulnerability without making any other change, then I would agree there's an argument for making those more frequently.

Unfortunately, many of the key players, including those producing the evergreen browsers, make little or no distinction between essential security fixes and other changes, and at that point the risk/reward ratio of accepting frequent updates can change rather sharply.


>The failure modes of software are not as well understood, but are rarely life threatening.

Well, it's not life threatening, but what do you think will happen if every month your credit card stops working for a few days because of a Windows/Linux regression?


I appreciate the idea, but in that case, it's simply a cost/benefit analysis. The benefit of not losing a few days worth of fees on credit card transactions a month is worth way more than the relatively minimal cost of good QA, testing, change control, and rollback capabilities.


I've actually been working mainly in banks, but on the business side.

At the upper echelons there is a real appetite for change. Banks realise that they have become software companies, but that their businesses are groaning under the weight of antiquated systems and processes and it is holding them back.

But even the board can only affect so much.

The only reason an auto-update would take down an app is if all development has stopped. As part of the process you should be testing against upcoming releases of Chrome, so you spot these problems well in advance.

But if dev has stopped, then yes, you will have a problem.


Dev stops. That's reality. If you hand-wave it away, your tellers go down, and you, the CIO, lose your job.

Not all software is under your direct control. Once business processes achieve a stable working state, it is imperative that they maintain uptime, sometimes in excess of six-sigma uptime.

Imagine your business is NASA. Are you going to let mission systems auto-update 18 secs before launch?


The dirty truth is that every piece of software or infrastructure you deploy immediately becomes "technical debt," and you know the thing about debt is that it charges interest.

You can choose to neglect paying on your debt for some period of time, but it always comes back, more expensive, later. Case in point: the IT leaders that refused to upgrade from Windows XP and Server 2003 are now paying astronomical prices for "extended support" and "migration/rehosting." If they had simply invested in maintenance to ensure compatibility with newer OS/browser combinations, they would not have this problem.

Pay now, or pay much much more later. Tech debt is a huge problem in IT, and ignoring it won't make it go away.


The prices charged for extended support are high but not astronomical, and I don't think you answered the question about NASA.


I think the point here is that there is no such thing as a stable state in software. Maintaining old infrastructure to support old applications is not a steady-state, it is a degrading state. As bugs and exploits are discovered and patched in new versions of windows/browsers/databases/frameworks/libraries/etc the old systems degrade to a vulnerable, un-maintainable, unsupported and often un-upgradeable state.


I think we can all agree that NASA is a special case.

Also, is it 100% true that they don't update software on active systems mid-mission?

What if they discover a critical bug?


Of course NASA updates software mid-mission; they're famous for sending software updates out to deep-space probes and rovers to fix things as critical as communication issues, which is quite a neat trick.

What they don't do is allow updates to happen automatically. They're carefully pre-tested, and the update is carefully planned and scheduled. Their goal, like most enterprises, is to minimize risk, and bugs that you know about and understand the impact of are much less risky than updated software you haven't been able to test yet.


In this example though, your app is the mars rover and Chrome is the rest of the Universe.

The Universe auto updates. So you need to update your rover when that happens.


NASA is not a special case. What about medical equipment, weapons, the oil industry, stock trading? I bet other people here can think of more examples.


Not only does Dev stop, but the people that built the app, agreed on the specs and really understood it, all left too. Auto update is unfortunately something that's just not worth the risk in a lot of cases.


With web apps there should be very little risk if you followed standards. The problem is most of these places chose to follow IE6 instead of the standards. They will put this off until it becomes a major issue or someone else takes their business. By now I've left banks over terrible client-facing systems, but the internal inefficiencies are even more obvious. It took almost 8 weeks, multiple in-person visits, and a freaking typewriter to get my HELOC. At another bank it might take one visit and a home appraisal, and you get a check.

The "risk averse" nature (I can't believe I just said that about a bank) will become a problem.


> With web apps there should be very little risk if you followed standards.

There should be very little risk, but the fact is, there is actually a great deal of risk. With my professional web developer hat on, the "evergreen" browsers are the best advertisement in the history of computing for why organisations that need reliability in their IT systems should be wary of automated updates controlled by outside parties.

Browser updates break stuff all the time. And not just obscure things, though there are plenty of those. On the small scale, I've seen Chrome updates break basic page layout and rendering of styles as simple as rounded corners or shadow effects. On the larger scale, Chrome sometimes removes entire chunks of functionality, like support for important plugins. New features are often the worst, and it's particularly insulting to be told we should all use HTML5 feature X or JS feature Y instead of plugins, when the reality is that the new version still isn't up to doing what the plugin did a decade ago.

Businesses don't care that it's more convenient for the browser developers if everyone rearranges their entire work schedules to keep up with the latest "living standards". Businesses just want software that works, and having found it, they will go to extraordinary lengths to continue using it rather than playing the upgrade lottery. And given the track records of most of the upgraders in this game, frankly, it's hard to blame them.


> With web apps there should be very little risk if you followed standards

You might be surprised by the number of subtle bugs lurking in corners left out as "implementation detail" by standards.

I've learnt to treat browsers as capricious genies that stick to the letter of the standards, but inevitably screw you over by inconsistently doing the opposite of what a sane person would have regarded as implicit.


That's called bad management.


It is the reality of the situation in a vast amount of companies.


Only because we let them. I'm very much against auto-updating software, but I still don't see this as a good argument against them. It's not that companies can't keep up, it's that companies choose not to keep up. In this aspect, taking away their choice is the winning move, imo


The only thing you do by "taking away their choice" is to force them to use alternatives that are not Google Chrome.


I would argue that's a good thing


I'd say that this enterprise version of Chrome will finally fit in a financial institution's baseline target for their patch management system.

Take a look at page 45 (page 27 of the PDF itself): https://www.ffiec.gov/pdf/cybersecurity/FFIEC_CAT_CS_Maturit...


They used to, but this is no longer the norm. The norm is moving toward patching systems regularly.


> That has never been true.

Sure it has. When software existed on physical media and there was a real, material cost to distributing software, there was absolutely a done state. Waterfall software development and actual requirement control would clearly define done states. That's not to say that there weren't bug fixes and updates, but it was (and is) possible to define what an application should do, code it to do so, test that it works, and ship it. But the world has changed, software has become easier to develop and distribute, and software development methodologies have shifted to where we don't have to have a done state in the same sense that we used to. I don't think it's a coincidence that we got agile development after the Internet.


Definitely true; now video games on discs are manufactured completely broken or even mostly empty, and they make you download 50 GB on launch day.


That is not true. Games released physically for video game consoles must pass platform certification requirements before the publisher can manufacture, distribute, and release the game. The lead time required for this is, in my experience, on the order of 2-3 months, during which time the development team cannot touch the "gold master" build of the game.

The build that ends up on the disc is not "completely broken or even mostly empty"; it is what the development team and publisher were satisfied shipping as the finished game 2-3 months prior. Any software, and games more so, can be improved with more time, so the intervening time to release day is often taken up with development of a launch day patch to fix bugs that were triaged as OK to ship with, and/or to add features/systems/content that just missed the cut.

Launch day patch size is more a symptom of the build pipeline (maybe some non-determinism is introducing larger diffs than necessary) or of late optimizations (to build packing, asset management, etc.) that can force you to download the majority of the game again.


>The build that ends up on the disk is not "completely broken

...Well, then. Would you care to explain Assassin's Creed Unity, and Tony Hawk's Pro Skater 5? I suppose you classify them as "not completely broken"?


Assassin's Creed Unity suffered from a lot of bugs for its PC release, namely graphical glitches on some (not all) hardware. Additionally, it had a number of bugs that could be broadly described as "open world jank": funny physics interactions, NPC spawning issues, etc. The game should have been delayed more on PC to allow for better hardware compatibility testing, but no, I wouldn't classify it on release as completely broken.

Tony Hawk's Pro Skater 5 was a rushed project and a generally poor game. It was buggy, but that was the least of its problems. Many people considered it the worst, or at least one of the worst, major releases of that year. I don't think it is representative of contemporary games, as your comment would imply.


The point is that AC Unity had very serious problems, which would no doubt have been fixed pre-launch in a pre-patch world. Also, if testing is as thorough as you claim, and you actually do have to have a fairly playable game to release without obvious, terrible bugs (Unity was unplayable for some on launch, and difficult to play for many others because of just how buggy it was), then Unity should never have seen the light of day.


Are you talking about Final Fantasy XV? That case is ridiculous; they are even modifying the story after launch day. It would be like releasing a movie and changing the ending one week after release because of bad reviews. Hopefully the sales of FFXV are relatively bad.


Sorry but why is that a bad thing?

I can understand that it's "bad" that their story was lacking at release, but isn't it awesome that they'd go to such lengths to serve fans and buyers and have a good product in the end, if not in the beginning?


Pretty soon we will end up with nothing but the worst graphics and story at launch, because publishers can just patch the game and fix it later.


> It would be like releasing a movie and changing the end one week after the release because of the bad reviews.

To me this is not that uncommon. Let me tell you a story (I cannot provide quotable evidence for the central point of the story, except for multiple friends who independently had the same impression after seeing Harry Potter and the Chamber of Secrets twice within a few weeks of its German cinema release).

At the time this movie came out in Germany, there was a law that "FSK 12" movies could only be watched by people who are at least 12 years old. The problem was: Harry Potter and the Chamber of Secrets was rated "FSK 12", while Warner would have liked it to be "FSK 6". So they cut some scene parts out of the movie to get the lower rating for the German cinema release (BTW: this incident led to a change in the law, so that now children under 12 may also see "FSK 12" movies as long as they are accompanied by a parent). So far this is a story where for every claim I could provide strong evidence.

Now the strange part: (#) People who saw this movie shortly after release day and again a few weeks later (in Germany) could all point to scenes (the same ones) that they clearly remembered from the first viewing but that were missing the second time. So it seems the cut was altered after release and there were actually two different versions shown in German cinemas. I personally assume this had something to do with the age restrictions mentioned above (they forgot to cut some scene parts in the initial cinema release, so they hastily changed things afterwards and sent a new version to the cinemas).

As I said: I can provide no independently verifiable evidence that (#) happened - but I deeply trust the independent persons that claimed so and I deeply trust their visual memory (I would only trust people who have a really good visual memory for such strong claims).

TLDR: changing a movie shortly after release is nothing that deeply surprises me.


It seems like many movies these days get released in two versions once the theater run is complete: one that matches what was shown in theaters, and one Director's Cut. The latter often includes scenes that were cut to get a specific rating, but it also often changes things that caused bad reviews. Batman v Superman is a good example; the theatrical release got bad reviews for being confusing, and the Director's Cut includes a lot of backstory scenes that help the movie make more sense. (Can't help the overall plot holes, though.)


It's not just engineering/R&D that makes this decision. IT departments generally have the last word on what applications (browsers, for example) will be supported. A real life example:

I worked on a team that maintained a data portal website used by major pharma companies and clinical research organizations. It had to be IE7-proof for a very long time because many of our largest clients were still running very old browsers internally because of IT rules. Our R&D team would have loved to drop support for these antiquated browsers, but, shockingly, our sales and operations teams were not willing to tell multi-million dollar clients that they better upgrade their infrastructure or they couldn't use our software.


I had a similar experience. I was developing a customer portal website for one of my company's clients. Well into the project I found out that everyone at the company, including everyone who would be doing QA, had monitors that only went up to 800x600, they were all locked down to an old version of IE (8 or 9 I think), and they had a default-blocked web proxy in place so they needed special permission to even see the test portal I had running on my server.

Happily we did convince them to get some hardware and software upgrades, so by the end of the project they were able to see their website the way most of their customers would. The proxy continues to be a nuisance; their developer can't access the cloud server where their production website is deployed; he has to rely on me to make any changes that are needed there.


>The problem here is that companies think that software can exist in some sort of "done" state, where no further updates or development is required.

>That has never been true.

That opinion is the exact reason we constantly have junior devs re-inventing the wheel with their great new ideas. Nothing frustrates me more than someone thinking they should rewrite an application just for the sake of rewriting it. If the software is feature-complete and lacks bugs, WHY WOULD YOU REWRITE IT?? To frustrate end users with a new interface they have to learn? Because you're bored and want to deploy the language of the month?

Your two options are ridiculous and show a very naive view of the world of software. There are literally pieces of code at likely every one of the Fortune 100 that were written in the '80s for a specific task, still do that task flawlessly, and have absolutely no reason for anyone to touch them. That's a GOOD THING.


> The problem here is that companies think that software can exist in some sort of "done" state, where no further updates or development is required

Counter-argument: software does exist in a 'done' state, or rather snapshots of it via releases. When version 1.2.1 is done, it is done.

Version numbers are a sane way to manage things for sys-admins and software engineers (think compatibility matrices and SemVer). I'm going to guess you are closer to the engineer side of that spectrum: can you imagine the insanity of having to maintain compatibility with a library that is self-updating and has an 'evergreen' API? The ability to pin versions is a God-send, falling too many versions behind is abusing that ability.


I wish I could paste a screenshot of my desktop with Oracle EBS (Enterprise Business Suite) windows open; it looks like something straight out of 1997. Horrifically bad interfaces that burn through users' time and patience with no end in sight, and surly, non-responsive staff running it. Efficiency be damned, and yes, people are perfectly happy to throw money down the drain to keep it running.


It could be worse. I worked for a company that still had VB6 apps in active use, and wanted to write new apps in VB6!


>That has never been true.

Some software does actually reach a "done" state, where there is nothing left to do, assuming no major platform changes.

wc will probably work as intended until UNIX itself shuffles off this mortal coil.



BSD wc hasn't been touched in a year. GNU wc's recent patches were either the addition of features, or bugfixes for what seem to be relatively new features. Or optimization.

But you could pull in wc from V7 on modern computers, and it would still work. (assuming GCC can still compile K&R C).


Yup. Likewise, the version of `wc` from Kernighan & Plauger's «Software Tools in Pascal» still compiles and runs with any ol' ISO Pascal compiler, and still does exactly what the manpage says it does.

As an aside, I'm pretty sure K&R C is a subset of ANSI C, so any ANSI C compiler should be able to compile K&R C.


>pretty sure K&R C is a subset of ANSI C,

Not so, AFAICT. For one thing, K&R function declarations looked like this:

  int main(argc, argv)
  int argc;
  char **argv;
  {
  ...
  }
Ever wonder where 1TBS came from?

Also, K&R C had no function prototypes; function declarations didn't list argument types.


Yes, function declarations did look like that. And GCC accepts it totally silently in pedantic ANSI C mode, not even warnings:

    11:44:57 cory@tizona /home/cory/Workspace/test
    $ cat knr.c
    #include <stdio.h>
    
    main(argc, argv)
            int argc;
            char **argv;
    {
            int z = foo(3, 4);
            printf("foo is %d\n", z);
    }
    
    foo(x, y)
    {
            return x + y;
    }
    
    
    11:44:59 cory@tizona /home/cory/Workspace/test
    $ gcc -std=c89 -pedantic -o knr knr.c
    
    11:45:07 cory@tizona /home/cory/Workspace/test
    $ ./knr
    foo is 7
    
    11:45:11 cory@tizona /home/cory/Workspace/test
    $ gcc -v
    Reading specs from /usr/lib64/gcc/x86_64-slackware-linux/5.3.0/specs
    COLLECT_GCC=gcc
    COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-slackware-linux/5.3.0/lto-wrapper
    Target: x86_64-slackware-linux
    Configured with: ../gcc-5.3.0/configure [...]
    Thread model: posix
    gcc version 5.3.0 (GCC) 
I seem to recall the compatibility was deliberate on the part of the ANSI C WG, for hopefully obvious reasons.


Huh. Didn't know that. Okay...


> The hardware and software changes. The infrastructure it connects to changes. But most importantly, your workflows must evolve if you are to remain efficient.

All true, but in a larger organisation with all the overheads and co-ordination that entails, you're looking for a pace of change measured in years, not weeks. You have better things to do than constantly redoing work you already did because software changes, and IT systems that aren't set up accordingly are simply failing to meet one of the most basic business requirements for being useful.


The problem is that Chrome remove features without providing any viable replacement.

For example Flash: currently the Chrome audio APIs do not cover all of the functionality provided by Flash. You cannot, for example, pause the recording of audio.

If you have a product which requires that functionality then you're stuck. You can't upgrade.

In the recent past I was in just that position: we had no choice but to block Chrome upgrades, at least until such a time as we could implement the functionality ourselves.

It would be much better if Chrome would just leave the functionality in there; or maybe provide a special "Enterprise" build which includes all of the features behind feature flags. All they need to do is make that configurable by Group Policies and you've made Enterprises happy (and we can keep upgrading the core browser).


> You cannot, for example, pause the recording of audio.

You can use the Web Audio API (getUserMedia) and set an audio pipeline filter that stops sending data while a boolean flag is set.

I don't know if you mean a different API entirely, and I don't mean to disagree---just FYI :)
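
A minimal sketch of that flag-gate idea (hypothetical names; the getUserMedia wiring is left in comments because it only runs in a browser):

```javascript
// Gate that drops audio chunks while paused, so the recorder only ever
// sees the audio captured while recording is "live".
function createRecordingGate(onChunk) {
  let paused = false;
  return {
    pause() { paused = true; },
    resume() { paused = false; },
    // Call this for every buffer coming out of the audio pipeline.
    process(chunk) {
      if (!paused) onChunk(chunk);
    },
  };
}

// In the browser you would feed it from the Web Audio pipeline, roughly:
//   navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
//     const ctx = new AudioContext();
//     const node = ctx.createScriptProcessor(4096, 1, 1);
//     node.onaudioprocess = (e) =>
//       gate.process(e.inputBuffer.getChannelData(0).slice());
//     ctx.createMediaStreamSource(stream).connect(node);
//     node.connect(ctx.destination);
//   });
```

Pausing then amounts to flipping the flag; the stream keeps running but nothing reaches the recorder.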


Thanks, that is indeed so. I must say that the Chrome team have been really fantastic about it (they reached out to me after I asked on Twitter).

It's just the age old resources problem: not only do you have to change the frontend, you also need to make changes on the server. That more than doubles the work (quadruples in our case because the server requires far more QA effort).

That comes at the cost of other features, features which are more valuable.


I've had discussions internally, and the IT staff basically said they will not consider actual support for Chrome. However, they let us install any browser, and support reps just instruct people to use certain browsers for antiquated software.


Chrome can't kill Flash fast enough. I'm stuck with AppD, and the sooner that Chrome highlights what a festering pile of garbage their web portal is, the better.


It's amusing how we see web apps as the solution to deployment woes. I miss proper custom desktop apps with horrible looking cluttered interfaces. Soo productive compared to any web based interface for almost all tasks.


Eh, yes, and the yells of "Over budget, we can not afford new machines for the whole team, just because your app is not 386 backwards compatible."

And the partial rollouts, with teams being split in two and stuck, demanding you hack together tooling on the fly to allow work to continue anyway.

And the horrible cross-application databases, which would go inconsistent if you didn't build basically a reduced form of data version control into every one of your applications.

And those prototyped tools, made by some intern in obscure languages, that would in secrecy fester into the "Main Tool" of a department. And all hell would break loose if some virtual environment update broke those env dependencies.

You know what: it was bad back then. Real bad. And today is better. And even nostalgia can't save it.


You wouldn't have to use antiquated tools and practices just to use desktop apps. I'm not arguing to use the old apps, I'm arguing that a lot of new development should be desktop when it's often made as web apps without thought.

We threw all the good stuff with desktop out at the same time as we moved to intranets. You can still have centralized deployment, thin clients etc, but have the niceties of proper native apps (good multi-screen/multi-window support, good support for complex interactions like shortcuts, ctrl-clicks etc).

Even on mobile where native has fewer benefits than on desktop we can't make web based apps that are better than native ones.

A clean understandable web UI is good for a seldom used app, but a lot of the intranet apps are "constant use" and were faster to use in their legacy dos implementation than the newer web version. Browser hell just makes the problem worse.


> A clean understandable web UI is good for a seldom used app, but a lot of the intranet apps are "constant use" and were faster to use in their legacy dos implementation than the newer web version. Browser hell just makes the problem worse.

This. Click, wait, click, wait, click, wait is in no way better than how quickly you can fly with a well memorized set of keyboard shortcuts in a native app.


This is not a web vs. native problem. You can implement keyboard shortcuts in a webapp. You can neglect to implement keyboard shortcuts in a native app.

This is all about attention to detail, which internal corporate apps almost always lack.


You can add keyboard shortcuts to a webapp. What you can't do is control or override the keyboard shortcuts that the browser adds, and you can't make use of the full suite of possible keyboard shortcuts. I've looked at adding shortcuts to my webapps many times over the years, and it's never been worth the effort because the result is just confusing to the user.


I use keyboard shortcuts extensively in gmail, jira, stash, and a bunch of other apps.

There's no problem implementing it. The designer should just think about it.
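
As a sketch of how little it takes (hypothetical action names; the one real limitation is that browser-reserved combos like Ctrl+T can't be intercepted):

```javascript
// Single-key shortcut dispatcher, gmail-style (j/k navigation etc.).
function makeShortcutHandler(actions) {
  return function onKeydown(e) {
    // Don't steal keystrokes the user is typing into a form field.
    if (e.target && /^(INPUT|TEXTAREA|SELECT)$/.test(e.target.tagName)) return;
    const action = actions[e.key];
    if (action) {
      e.preventDefault(); // suppresses the page default, not browser-reserved combos
      action();
    }
  };
}

// Wiring it up in a page:
//   document.addEventListener('keydown', makeShortcutHandler({
//     j: () => selectNextItem(),
//     k: () => selectPreviousItem(),
//   }));
```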


> (good multi screen/multi window support, better, good support for complex interactions like shortcuts, ctrl-clicks etc).

Totally do-able on the web. It's often the web app's architecture that prevents a good UX, not the browser's capabilities (which are really good now).

> Even on mobile […] we can't make web based apps that are better than native ones.

That's an issue of the developer, not the browser itself. I do admit that Safari/iOS is really hurting here but you can work around that if you really want to (it will still be a sub-par UX, though).


I think it became harder to write true desktop apps once VB6 was killed off in favor of .Net with its higher learning curve.

As bad as VB6 and its GUI builder were as a real language, they did allow even amateurs to write a decent working CRUD app in minimal time.


I still think VB6 was the pinnacle for writing line of business CRUD apps. I've been told there are still some VB6 apps I wrote ~15 years ago running at a big enterprise I once worked for.


I find C# and WinForms at least as simple as VB6. They did make LightSwitch to really dumb down one-off CRUD app development, but I'm not sure if that still lives.


Delphi was the pinnacle. But VB had definitely set the trend.


>I miss proper custom desktop apps with horrible looking cluttered interfaces. Soo productive compared to any web based interface for almost all tasks.

You mean where everything runs on the UI thread? Gotta pass on that. :)


Not necessarily - although that's usually fine. I prefer a non responsive UI when waiting for io/db, to a poorly implemented background execution where you aren't sure anything is actually happening.


> I prefer a non responsive UI when waiting for io/db, to a poorly implemented background execution where you aren't sure anything is actually happening.

You can't be sure anything is happening with a locked-up desktop app either, though; at least if the work is on another thread you can close it gracefully.

IME customers close stuff if Windows shows "Not Responding" but not if you just have a fake progress bar.


But (most) browser apps are single threaded and you can block the UI if you do anything intensive on the client-side.


There are quite a few ways around that, with async APIs and WebWorkers, even ServiceWorkers now. A lot of devs just don't use them properly.
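
One of the simplest of those tricks needs no worker at all: slice the job and yield to the event loop between slices (a generic sketch, not tied to any particular framework):

```javascript
// Process a large array without ever blocking the main thread for long:
// do a slice of work, then await a macrotask so pending UI events can run.
async function processInChunks(items, work, chunkSize = 1000) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(work(item));
    }
    // Yield back to the event loop before the next slice.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```

With WebWorkers you would instead postMessage the data to a worker script, but this pattern alone is often enough to keep a page responsive during CPU-heavy loops.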

Edit: Did you mean the browser itself could freeze? That's a thing of the past with Firefox being the last browser to separate the content and chrome process. So the web page may freeze, the browser window itself shouldn't.


Those are relatively new features, and aren't used a whole lot.


WebWorkers are not a new "feature". The last desktop browser to gain support was IE, in v10 (4 years ago); on mobile, Android 4.4 (3 years ago). Though that doesn't really matter since you can polyfill it, which obviously does not give you multithreading in those older browsers, but it still forces you to think async.

Regarding other APIs, a great number have been available for many years, and any API introduced in at least the last 5 years is either non-blocking or async.


In my experience being forced to use various SPAs from various vendors, more often than not async just means I need to behave single-threaded rather than the app, lest I wish to trigger race conditions that desync the local model, and I end up having to hard refresh.


Heh, the MS Windows file/folder permissions tab comes to mind when you mention that. Been single-threaded for 20+ years now. Clicking on the Add... button always freezes the UI while it communicates with the domain controller or whatever it does.

http://imgur.com/a/YbGbU


True, that's why I enable running explorer as a secondary process, so the desktop doesn't completely freeze.


At my previous company we used IE8 for the internal crap, and up to date Chrome for everything else.

Basically all the issues the internal junk had stemmed from JavaScript; they were JUST forms! No legitimate need for JS.

If you just need a form, just build a simple POSTing <form> without any JS. It will stand the test of time. A simple form from 1996 will still work today. A LiveScript-enabled form from 1996, however, almost certainly will not.


As an engineer that worked on IE, I can totally sympathize. Your story is one repeated throughout almost every enterprise of sufficient size.

That said, there's a middle ground we should strike here. At some point, being out of date (sometimes by as much as half a decade or more!) on your browser trumps the cost of keeping internal software up to date. This is not to mention the costs imposed on third parties attempting to support your users browsing the public web, and on browser vendors that take investment away from their latest versions to continue to support legacy software.

Allowing indefinite suspension of upgrades by IT is definitively a mistake. It adds insult to injury that Google is repeating a mistake which we all learned so much about via IE and older versions of Firefox. We will all collectively pay for this if they don't course correct.

The middle ground here is providing extended support channels like what's being done for Firefox and Edge. Chrome already has Canary, Dev/Beta, and Stable channels. A slower moving, more stable channel for businesses would be a natural solution here.

This gives businesses time to test and adapt, limits the total number of versions in the wild that must be supported by browser vendors and web developers, and provides just enough paternalistic motivation to keep your internal software in a good state (upgrading to support latest browsers is a forcing function for testing, performance tuning, security tightening, etc.).


>Companies have far too much internal plumbing to permit browsers to auto-update themselves. Being feature-compatible with the latest SV-poster-child website is insignificant compared to maintaining stability for the tax-management team's internal web app.

So why don't they have two browsers - one for internal use only (IE6), which is firewalled to only internal network shares and another browser (Chrome, whatever) which can only go to the real internet?

At least that way you'll be browsing the wild internet safely.


> So why don't they have two browsers

I definitely do not claim they were right, but one of my previous employers' (not a software company) IT department insisted that additional applications on employees' computers were an extra workload for them to maintain. (Occasionally security was also used as an excuse.) So the only text editor allowed was Notepad, and the browser was IE.


I hope Google are going to do continuing security updates for every enterprise release of Chrome for the next 10 years, LOL! At least limit the company browser to only internal apps (still not safe!). Perhaps they could de-feature the enterprise browser for better support&security? You would hope that using standards compliant web apps would make upgrading/testing less of an issue for corps.

Using ancient versions of IE to get work done was awful and in the current security environment, I don't think it's safe for a browser that's not in its own VM. Heck, you could use Chrome Remote Desktop!


It's not just "internal" plumbing. We've had to stop Chrome updates in our Customer sites, from time to time, because "cloud hosted" applications were rendered unusable by updated Chrome versions. Eventually the third-parties got their acts together, but we would have had user-facing outages if we hadn't been cautious with allowing Chrome updates to deploy. I'm glad that the ability to "freeze" the Chrome version exists.


IE6 was an exception though... In its day, websites were specifically made to run on IE6, which was highly non-standard, and nothing else. Nothing is made to run on only Chrome 38 now. We aren't in the world of <blink> and <marquee> anymore.

Most companies I know of (customers) let browsers auto upgrade now or they vet upgrades and they get pushed to client machines in a few days.


IE6 upgrades were much different than other browser updates because it was the last IE to support ActiveX applications. There were many internal enterprise apps written in it that kept IE6 alive for years until they could be rewritten. There is no equivalent in modern browsers.


Holy moly, that is so toxic and inefficient! Big companies have a big problem.


Yeah the problem is they hired big software companies to develop sub-par software (in terms of being future proof, platform agnostic, and standards-compliant – if there were any standards to begin with) that worked for their needs at that point in time. Now they have to deal with the legacy.


> Companies have far too much internal plumbing to permit browsers to auto-update themselves.

What plumbing might prevent a browser update? I can't think of a good use case.


Poorly written intranet web apps that break between browser versions. Especially likely if those apps were only tested in one specific browser, and especially if it was an older version of MSIE.


Even the most perfect web apps won't save you from breakage when there is a botched browser release- such as the recent Chrome update that blocked all sites using certs from a specific CA.


My first thoughts exactly.

My work has increasingly become creating software for enterprise clients. They all want it to be delivered through the browser. Over time, Chrome has become the primary target because it's simply unfeasible to target IE. Organisations have bizarre and arcane rules for who gets which versions of IE, so it inevitably leads to developing for the lowest common denominator, which in turn leads to disgruntled clients who see some fancy feature on the web and can't understand why we can't give them the same thing in the time and budget they've allocated. Now, we generally say we'll target Chrome and provide "some" support for IE.

But I absolutely guarantee that an org's IT dept will seize this opportunity to convolute and complicate who gets which versions of Chrome.

And it's all for nought. In my experience, IT departments cite vague security requirements, but when you scratch the surface there's typically no security at all. For example, at one company which really does need strong security policies, it was common knowledge that you could circumvent their stupid auth process by opening the task manager and quitting the process. You could also open up the developer tools console and send whatever requests you wanted to any service, because features were enabled and disabled in the UI only.

My point is, IT departments in my experience only talk about security and use it as an excuse to do bizarre stuff that looks smart, but is generally the opposite.


In another comment I mentioned a client company that had a locked down default-block web proxy in place. During a site visit with the two developers I was working with I discovered two surprising things:

1. The conference room where I was working had two ethernet ports; one went through the proxy, and one had a clear line to the internet. Apparently this was a common setup in the offices.

2. Everyone was required to use IE, and to use the proxy-managed ethernet port. But I develop using Firefox and I didn't get blocked by the proxy, regardless of which port I used. It turned out that the proxy had to be set up on the browser. The developers were granted admin rights on their machines so they could manage the proxy settings for IE but no one else could. However most people had the ability to install Firefox or Chrome, which would give them proxy-free access to the internet. They just had to know they could do it, and then not get caught doing it.

Turns out this was all about control rather than security; they had a real problem with people spending time on websites they shouldn't have been on, during time they should have been working.


For many organisations there is a real risk that a browser update will unexpectedly break a key internal application, which could have a catastrophic impact on operations.

The vast majority of large companies will control and test key software updates, balancing the various risks (security patches, obsolescence, operational incidents..).

Not allowing auto-updating to be controlled basically means that the software is not intended to be deployed in enterprises.


I prefer Firefox Extended Support Release (ESR)¹ for this though. Every year (currently in March) the current normal Firefox version (that does auto-update) is forked for a new ESR support cycle and made available for acceptance. Three months later if all is stable, it replaces the previous ESR release series, and for a year corporate IT gets a browser that will only be updated with critical (security) updates; no new features, no removal of features. At the end of that year the cycle repeats itself, and you can roll out the new ESR series when you've confirmed everything works as it should.

1: https://www.mozilla.org/en-US/firefox/organizations/


Looking at all these "enterprise" horror stories, I don't think ESR's 1 year is nearly enough.

3 or 5 years might be more suitable.


Most of those horror stories won't happen if you go through the upgrade cycle once a year. A year's worth of changes is manageable for whoever is maintaining the software; three or five years' worth of browser features will make the heart of any developer sink.

It is a fallacy to believe that you can stave off updating a modern internet-connected browser indefinitely; the IE6/7/8 hell has driven that lesson home in the industry, and security updates are a necessity. If you do need to stay with one particular browser version, then you use a virtual machine or some other properly sandboxed environment to offer it. You can keep running IE6 on Windows XP for as long as you like with a VM that thinks it's the year 2003 and which for some reason can only reach http://oldunmaintainedapp.legacy.intranet.example.com. That's fine too (although I would hate to maintain that solution).


My personal take on it: the more you break it, the more money you may get to fix the shit.

So, on the contrary: guerrilla IT. You push for auto-update so that they get a motivation to clean shit up.


Couldn't orgs just use multiple browsers to reduce risk? With some monitoring of which clients are used, one could remove e.g. IE8 support once it's established that every internal product has been used long enough with IE10.


This works reasonably well for applications that are used daily or weekly (deprecation notifications help). But then, after you phased out IE8 after months of no problems with IE10, during the holiday period the accountant tries to close the yearly accounts, finds that the form does not work with IE10, and hell breaks loose.


Can't you just build on HTML and JavaScript APIs that are widely utilized and accepted? I know before I call something I'm unfamiliar with, I'll look up which browsers support it. Most of the problems with IE versions have to do with plugins, be it ActiveX or a Java applet. In my mind that isn't a web app but an application packaged in a web browser (often so the consultants can call it a web app: "you go to a website to launch it, see?").


> Can't you just build on HTML and JavaScript api's that are widely utilized and accepted?

Absolutely. I still have websites that I designed with ie6 in mind that work just fine. But if modern design trends are any indication, it just won't happen. Designers want too many of the new flashy features.


I'd just be happy to be able to use it at all at work. Our organisation blocks it because, apparently, 'it sends a lot of data back to Google.' I have no idea if this is true or not.


Why would they change their reasoning with this new enterprise version?


Well, they wouldn't, unfortunately. But I guess if having an enterprise version that doesn't auto-update means that more people are going to get to use it, then that's a Good Thing.


It's true, but you can turn that stuff off.

You want to turn off things under Privacy like "Use a web service to help resolve navigation errors", "Use a prediction service to help complete searches and URLs typed in the address bar", "Automatically report details of possible security incidents to Google", "Use a web service to help resolve spelling errors", and "Automatically send usage statistics and crash reports to Google." And not login into Chrome.

The problem is that even if you tell all your employees what to disable, some of them will probably not comply. So a truly paranoid IT team might want to make their own fork of Chromium with those features removed.


Are you sure those can't be configured via group policy? Maintaining a fork seems like an absurdly overcomplicated and expensive approach.


Isn't that true for all browsers (and OSes)?


Not for Linux and Firefox afaik.

At the very least, they are modifiable to not do that.


It's already possible to disable Chrome auto-update, and enterprises that wish to already do it. I've worked in an environment that did, years ago. My personal preference remains to let software that auto-updates silently, like Adobe Reader or Google Chrome, continue to do so... as long as it has yet to break the environment. (And it saves on management headache for a piece of software we don't really support heavily in our environment.)


On the other hand this will probably help deploying Chrome inside companies with more restricting policies. Some of these companies still use IE for everything because it's the only browser allowed, because you know, it's corporate.


My company has already disabled the auto-updates in our installed versions of Chrome :/


What does "low key" mean? Is "low key" a good or a bad thing? Does it mean we should conceal this information?


This isn't a new thing at all, but more awareness to the sysadmins shouldn't hurt.

Also, let me point you to the Legacy Browser Support extension.

https://chrome.google.com/webstore/detail/legacy-browser-sup...

With proper GPOs you can force a domain/subdomain to open in IE directly from any links.


For Firefox there is (was?) IETab which lets you open websites in IE tabs inside Firefox. I think there was even an option to specify that certain websites (e.g. the Intranet, certain banks) should always open in IETab.

Haven't used it for years but back when I was a Windows sysadmin it was the final proof that FF was better than IE: in addition to being Firefox it could also be IE : )


IETab - now that's a blast from the past o.O


I've got a medical application that only works in Firefox running IETab. Interestingly, it doesn't work in any version of IE. It's an odd product.


How the hell did this app manage to get developed?


By somebody running IETab, duh


After a quick search, it looks like this may still be a thing for Chrome? https://chrome.google.com/webstore/detail/ie-tab/hehijbfgiek...


I use it daily and it works great.


I understand the utility of this, and the (unfortunate) reasons why this is needed, but good lord, it seems almost like a parody.


What I really want is the ability (recipe ?) to chroot a browser.

I would like to run 3-4 browsers at all times, each with totally sandboxed identity and IP address.

However I do not necessarily want to run a full blown VM for each of them.

This is very simple and efficient for sandboxing server daemons - I do it all the time and there is almost zero overhead involved in chroot/jail. However I have never chrooted a GUI application and cannot find any instances/recipes of anyone chrooting a browser ...


Are you aware of https://firejail.wordpress.com/ ? Sounds like it's what you are looking for (or is there a reason it does not satisfy your requirements?)


Thank you - I was not aware of firejail.

It appears that this does the things I am looking for. However, I am suspicious: why do we need a new project like this rather than a simple recipe for the existing jail or chroot system calls?

What is it that makes something like firejail necessary?


I've run it only once, and I don't really know, but I was under the impression it just took care of setting up BPF syscall filters and namespaces to provide least privilege, which, given that neither X nor e.g. Firefox was designed to be sandboxed, is more complicated than one would expect.


One notable thing about the dominance of Chrome (and the decline of Firefox) is that WebKit is now a de facto standard. The desktop and mobile versions of Chrome and Safari are based on it, and according to Microsoft, "any Edge-WebKit differences are bugs that we’re interested in fixing." (https://en.wikipedia.org/wiki/Microsoft_Edge). That covers every major desktop and smartphone browser except Firefox.

At some point, developers are going to target their stylesheets for WebKit only, because Firefox rendering differences are going to be seen as nuisances that aren't worth overcoming in order to reach a tiny minority of users. Firefox will have to work toward WebKit compatibility as Microsoft Edge does.

WebKit is doing pretty well for something that was originally part of KDE.


It was true a few years ago, but Chrome no longer uses WebKit, they forked it to their own Blink engine, so now there are differences between them and you have to write stylesheets for both (example: -apple-system font vs BlinkMacSystemFont).

Although some code is still being ported between them (e.g. Safari recently "Enabled support for a modern CSS parser, ported from Blink, that improves performance, specification compliance, and compatibility with other browsers, while also adding support for scientific notation in all CSS numbers" https://webkit.org/blog/7120/release-notes-for-safari-techno...).


I manage 134 branded sites for a university and a handful of other sites. We target IE11, Edge, Safari, Firefox and Chrome compatibility with desktop and mobile (iOS, Android). Once Microsoft announced sunsetting support for earlier browser versions our job got a lot easier. We all said "thanks MS, took you long enough".

In the last 30 days for the branded sites we have roughly these percentages of sessions by browser:

  chrome  46%
  safari  32% 
  ie      11%
  firefox 8%
  edge    2%
This for about 370,000 sessions.

We will continue to support the same browsers for the next two to three years and then adjust as needed.


Thank you, I'm always interested in seeing browser metrics in other industries. I'm assuming safari is so high due to your higher-ed audience.


We have a lot of Mac users in the student body. More than 60%. Also iOS users.


I bet the majority of your Safari numbers come from iOS users. A lot of Mac users ignore Safari and use Chrome.


Yes, in abstract Webkit only support sounds nice.

In reality, each mobile device and consumer computer has a different Webkit version.


Except Google and the Chrome team have forked WebKit as Blink, and are running quite a different development path.

There's now Blink, WebKit, Gecko, Trident in mainstream and a few others lurking.


Firefox already does that, started long ago. They even added support for -webkit prefixes for the same reasons.


As I can tell you from experience, they don't work all of the time, and it is a source of frustration for front-end developers I know who need to work with it.


You're not supposed to use the webkit prefixes. The only reason firefox has them is because some older websites will stop working.

Most of them have equivalent standardized features, or are in a state where you really shouldn't be using them. Don't expect every webkit prefix to work in Firefox.


I'm not sure we're supposed to think about prefixes at all, but instead just let https://github.com/postcss/autoprefixer do its thing.


So yeah, using prefixes to make props work on older browsers is fine. Using only the prefixed version of a property or a property that doesn't have an unprefixed version at all is not.


For me Chrome is already the new IE: good enough to saturate the market to the point where devs stop caring about standards compliance.

Personally I have tried to like it, multiple times but I always get annoyed and go back to FF. But then again I prefer Linux over Mac and Netbeans over IntelliJ so maybe it's just me.


I used to switch between Chrome and Firefox all the time before finally Chrome won me over with the most convenient cloud sync for bookmarks, passwords, etc.


What did you like about Chrome's sync that Firefox sync didn't do (well)?


Same for me. Boggles my mind there still isn't a p2p (IPFS-like) system that allows you to take control of your bookmarks and use them with any browser. Then again, Chrome doesn't allow you to specify your own bookmarks server... :(


Password sync in the browser is a piss-poor password manager solution. You should just use a password manager.


Why is password sync in the browser bad? Care to explain? I hate having another password application to Dropbox-sync everywhere. Remember, Firefox Sync encrypts everything before upload. So does Chrome.


At rest the data isn't necessarily encrypted. Also browser hacks are a dime a dozen. It is the most exposed interface on your computer.

You are freely executing untrusted code from unknown parties, coming over insecure and unencrypted channels. You really can't be sure who is sending HTTP.

And don't talk about sandboxes. There is a sandbox escape fix in every version of Chrome. This isn't on Google; there are way more attackers than fixers.

Basically, web browsers are under constant, concerted attack by every single bad actor out there. And you trust them to sync and secure your passwords?

You have more faith in humanity than I.


Well, for one, for most of the life of your browser, it took no security for anyone staring at your desktop screen to read plaintext versions of all your passwords. (Yes, physical access is complete access and all, but there used to not even be a casual attempt to prevent someone from stopping by your PC when you stepped away having a look at your passwords quickly.)

Later versions of Chrome, IIRC, will trigger a UAC prompt on Windows before displaying passwords, or something similar.

It's also generally been trivial for software to mine saved passwords from all browsers. I'm not fond of password managers personally (I prefer outright memorization), but password managers generally at least try to keep their contents secure.


What standards do they not follow?

The biggest issue is the experimental features webmasters use, which then force the rest of us to use Chrome. I'm not so sure Chrome itself doesn't care about standards.


I meant web devs stop caring about standard compliance ("already works in Chrome, moving on to next interesting feature").

Does this clarify?


That clarifies it a bit, yes. I read your sentence as being about the devs of Chrome.

But it's true. Interestingly, Safari is often holding them back, because so many devs use MacBooks and Safari.


Chrome doesn't have enough market share for any sensible developer to behave like this.


I thought Chrome had over 50% of the market share? https://www.netmarketshare.com/browser-market-share.aspx


That's true but in part because Safari isn't available on Windows anymore (and never was on Linux).

Chrome is still dominant but way less so when you filter down to only OS X users. This is especially true if you filter down to only the US where Macs are more popular than worldwide.

Unfortunately it seems like both that site and StatCounter require a payment to view browser percentage by platform. This info is out there for free somewhere.



It's not really about standards compliance; it's more an issue of how the browser behaves in non-standard situations and other fun edge cases. If devs only test with Chrome, then it's very easy to become dependent on behavior specific to Chrome.

For example, yesterday I was implementing a server that uses HTTP multipart to stream images to a browser. Multipart uses two methods to frame messages: a length prefix and a boundary token. So when you send a message, you prefix it with something like

    Content-Type: image/jpeg
    Content-Length: 12345
And you end it with something like

    --MagicArbitraryBoundaryTokenPreviouslySpecifiedInHead
In principle these are redundant; you could frame the content with either one, but both are provided. And that's where the problem started. I misspelled a property name, and because I'm using JavaScript it helpfully inserted 'undefined' into the "Content-Length" header. Firefox refused to display the images, but Chrome happily displayed them as if there were no problem.

If I had only tested in chrome I probably wouldn't have noticed that my code had a problem that was breaking other browsers.
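To make the failure mode concrete, here's a minimal sketch of that framing in Node.js (`buildPart` and the boundary value are hypothetical names, not the actual code from the comment):

```javascript
// Hypothetical sketch (not the commenter's code) of framing one part
// of a multipart/x-mixed-replace stream in Node.js.
const BOUNDARY = 'MagicArbitraryBoundaryToken';

function buildPart(jpegBuffer) {
  const header =
    `--${BOUNDARY}\r\n` +
    'Content-Type: image/jpeg\r\n' +
    // The bug described above: misspell the property (e.g. `jpegBuffer.lenght`)
    // and JavaScript silently interpolates `undefined`, producing the header
    // "Content-Length: undefined". Chrome falls back to boundary framing and
    // renders the image anyway; Firefox rejects the frame.
    `Content-Length: ${jpegBuffer.length}\r\n\r\n`;
  return Buffer.concat([Buffer.from(header), jpegBuffer, Buffer.from('\r\n')]);
}
```

With the typo, the two framing mechanisms disagree, and only browsers that silently prefer the boundary keep rendering, which is exactly how Chrome-only testing hides the bug.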


I actually meant web devs stop caring, not Chrome devs.

Apologies for creating the misunderstanding.


You mean.. it's not just me? There's someone else who likes Netbeans?? I thought this day would never come!


Is this about replacing one monopoly with another?

It is slowly starting to look like the early '00s: most developers are starting to support only Chrome, with no testing in Firefox or the latest IE.

On Android it is even worse, no one is testing with Firefox Mobile (Fennec) - this is WinXP-IE6 all over again.


They seem pretty good about using the open specification.

Also they're starting to phase out -webkit tags for experimental chrome only features. They're hidden behind flags so you basically can't use them in production unless it's endorsed as part of the spec.

Not to derail but safari mobile is the real IE6, plus lock in so users can't even get away from it.


From a vendor strategy perspective, that's true. From a web browser market share perspective, that's not true at all.


I completely agree. It's reinforced by Chrome's supplied dev tools, which are really, really nice (I always found them far better than Firefox's built-in tools or Firebug).

It's going to be harder to unseat Chrome. IE6 wasn't so difficult thanks to an increase in competition offering features that IE wasn't even working on (standards, tabs, etc.). How would you even unseat Chrome?


Safari and nowadays Chrome DevTools are great.

I miss the "DOM" tab that Firebug for Firefox had, which I haven't found anywhere else. You can browse and edit the DOM tree state (similar to Smalltalk and the Windows Registry); very useful & powerful. Sadly, they just killed Firebug, broke the addon, deactivated it with an update, and replaced it with a still not feature-complete (at the moment sub-par) solution.


> I miss the "DOM" tab that Firebug for Firefox had, which I haven't found anywhere else.

It's in the Fx DevTools since v48 [1]. You may need to enable it in the DevTools settings though.

> Sadly, they just killed Firebug, broke the addon, deactivated it with an update and replaced it with a still not feature-complete (at the moment sub-par) solution.

Who's "they"? The Firebug team has worked with the DevTools team since forever to port all functionality to the internal tools [2]. The Firebug team decided it was not worth maintaining the standalone version beyond v2 (also related to architectural changes to Fx; see project e10s), since they have almost achieved feature parity by now and are working on the last remaining pieces. Firebug v3 is Fx DevTools plus a lot of extra features that Firebug never had. If you miss a feature in the DevTools, try Fx DevEdition [3].

[1] https://developer.mozilla.org/en-US/docs/Tools/DOM_Property_...

[2] https://hacks.mozilla.org/2016/12/firebug-lives-on-in-firefo...

[3] https://www.mozilla.org/firefox/developer/


I think what you're asking for is already doable in the "Elements" tab of Chrome DevTools, no? Or maybe I didn't get what you really meant by "You can browse the DOM tree state and edit"?


I think they're talking about the DOM tree, not the HTML element tree. Similar to https://addons.mozilla.org/en-US/firefox/addon/dom-inspector...


I have strong hopes for Servo performance wise.

But my guess is that Google will become more and more monopolistic, more and more closed in disguise. And closer to big companies' interests, with DRM and such.

And if Mozilla keeps being Mozilla, they will be a natural alternative.


Don't use 8GB of memory?

Nah, people have lots of memory.

I honestly don't know (but...then that's not such a bad thing, right?).


Firefox dev tools are sometimes painful to use and lack some important features that Chrome's have.

Some examples:

- debugging with source maps: it doesn't work properly (I can set breakpoints and inspect variable contents in TypeScript files in Chrome, but not in FF)

- JS exceptions: clicking on the filename inside the stack opens a new window with the source of the JS file, not the debugger (ugh)

- doesn't support TypeScript highlighting


Did you try the Fx DevEdition? I know of some recent Source Map related changes. If the issue persists (same with the other issues you mention), would you mind filing a bug [1]?

[1] https://bugzilla.mozilla.org/enter_bug.cgi?product=Firefox&c...


Yup, I tried FF Dev Edition a month ago; same thing. Maybe I will try again after updating.


On the other hand, the network tab in the dev tools actually knows how to parse incoming JSON, unlike Chrome which only renders a string.


>"How would you even unseat Chrome?"

I won't directly mention the project that's best placed to do that, as it's not yet ready for primetime, but let's just say Chrome may not have the performance crown in the next 5 years.

Also, aside from the Mac, what's holding back Firefox? I accept that the Mac version seems to have more problems than the Windows or Linux versions (I have no technical explanation for this; it's just what I've heard anecdotally), but on Windows it seems to be a solid choice. Is it just about momentum for Chrome rather than superiority at this point?


As another poster in this thread mentioned: bookmarks and other personal-data syncing. Allowing 3rd-party sync servers would go a long way toward Chrome projecting the message that they don't want to lock you in, but that's probably not the case, if we judge by their team's [lack of] actions.

(And don't get me started on services like XMarks. They're miles behind.)


> let's just say Chrome may not have the performance crown in the next 5 years.

Servo?


Yeah, no; most times when you test in Firefox it works just fine with no changes, especially when you already tried on mobile, because if it works on mobile it works on desktop Firefox.

The other thing is that major libraries (CSS, JS: jQuery, bootstrap, React) make sure they work on all browsers so you don't have to worry about that part.

The WinXP-IE6 era was hell because that never happened; you always had to make changes to support IE6, and some other changes to support IE7, and so on.


Most times, but not always, unfortunately. There were a couple of times I did not check a small frontend change in Firefox because I assumed it would be fine, yet got different results in Chrome than in Firefox.

I did not check on mobile though.


I use Firefox on Android and often run into web sites that don't work right because they are dependent on some Chrome or Webkit behaviour, and I need to switch the browser to desktop mode to resolve the problem.


It's rare for me to find one, although the highest-profile annoyance for me is that textboxes break on m.facebook.com. The caret refuses to move from the current line without deleting or adding a character.

Known issue, and I've yet to see a fix (other than "mbasic", which you have to use for Messenger anyway).

Almost everything else seems to be fixed with uBlock Origin and/or Reading Mode.


I experienced this myself and was pretty sure this is on purpose. If you request desktop on Chrome for Android, you get a stretched-out version of the mobile site, and your keyboard's autocomplete makes the message textbox impossible to use. I think they want to funnel people to the app. I just don't go on the site anymore.


I often run into Google web sites that render and behave terribly in Android Chrome (usually floating windows with the close button hidden or extending outside the screen).

Most web problems are now bugs either by the site developer or in a specific browser.

That's a long way from the IE6 problems, where you either needed a unique site configuration for every browser or supported only IE6.


"It is slowly starting to look like the early '00s: most developers are starting to support only Chrome, with no testing in Firefox or the latest IE."

Certainly you realize this kind of cycle has been going on for longer than that. In browsers, yeah, but that's only because browsers have only existed that long.

This is pretty much the way of all software cycles since the dawn of computers. No area of software that i'm aware of has really broken out of this kind of cyclic behavior.

Sorry :(


>On Android it is even worse, no one is testing with Firefox Mobile (Fennec) - this is WinXP-IE6 all over again.

I'm always surprised that this is the case. Firefox Mobile + uBlock Origin works really well and is only very slightly slower than Chrome, while also blocking ads.


Firefox Mobile works because it has a huge set of fixes they use for exactly that – changing UA on some sites, patching the code of other sites, implementing all the -webkit-* things, etc.

It’s not that sites support Firefox, it’s that Firefox tries to implement 1:1 what Chrome does. At far less than a tenth of the budget.


Chrome is more convenient in that it syncs everything once you sign in, and Firefox's sync system just doesn't feel as refined.

Probably doesn't help that Google plasters "Get Google Chrome" over all of their services.


The latest ads are interesting in their "Official browser from Google" angle.

Read: Google is the internet. Google is whom you should trust.


What is Firefox's sync lacking?


For me it just never works. It never did during the last 3 years I tried it. The tabs just don't sync, and I don't have the time or will to play the debug game.


I was a long-time Firefox user until the start of this year. I think I started using Phoenix as my main browser around 0.5 or 0.6. However, over the years Firefox got heavier and slower and took way too long to evolve. It sucked when I realised I hadn't opened Firefox for a whole month. Now I just stay in Chrome as it is, for me, the best browsing experience. It is fast and looks good. It might be a large program in terms of install size, but it doesn't feel bloated. The UI is snappy, scrolling is instant, etc. I do miss a few things about Firefox, but not enough to slow down my whole browsing experience. I have moved on from the huge UI-changing extensions. Now I just keep things minimal with a content blocker and one or two other things for convenience.

What I don't understand about Firefox is that Mozilla have a lot of money but progress seems very slow. Sure they get there in the end (or most of the way) but it just seems to take forever. Why?


Firefox slowness is often related to some badly behaving extension (though that improved over time), some AV software hijacking the process (this happens far too often and is way harder to detect than it should be), or a busted profile with a lot of cruft that failed to be cleaned up automatically (re: badly behaving extensions). I advise you to try the Firefox DevEdition (which uses a separate profile by default) and check if it works for you. You can also get rid of your old profile or try to reset it (preserving some data).

Money isn't necessarily the issue for Mozilla; they managed to create a decent browser with a fraction of the big players' budgets. As always, it's the triangle of complexity, time, and manpower; they just can't fight on all fronts at once. At the moment, they are trying to get rid of a lot of legacy that prevented them from improving some aspects of the system (see project e10s, WebExtensions, etc.), all while preserving as much of the ecosystem as possible, which is very hard given the current extension system that was created more than 10 years ago and just doesn't fit the bill anymore. E.g. e10s has already been going on for 5 years or so, and it just started rolling out a few months ago.

Good news is that with these projects completing (improving overall responsiveness), there's a huge new project starting (Project Quantum) that aims to overhaul major parts of the rendering engine. This will integrate a lot of the research and investment poured into Rust and Servo, which will probably outrun current browsers by an order of magnitude (at least in some aspects that have already been demonstrated; see WebRender).


> Mozilla have a lot of money but progress seems very slow

As opposed to Google, Apple and Microsoft?

Consider what happened to Opera. I think you vastly underestimate the complexity of modern browsers. By orders of magnitude.


You are not answering his concern at all, just talking about your preferences.


Assuming we're talking on the desktop, have you tried Firefox multithreaded? It's a relatively new change.


While I agree with you (I use Firefox on my MacBook and on my Nexus 6), the situation is not even close to the one with IE6. Everybody that was doing web development in those days knows what I'm talking about. Nowadays it's usually some little quirk; back then you had to rewrite parts of your code or have a big if(IE6) that did something completely different.

While I also hate that Chrome has such a big influence, if all of them implement the common specs, I would be OK with that.


Bingo. One might say the economic system in whole really pushes monopolies or oligopolies.


One might say that if one were completely ignorant of economics and economic history. There are no examples of long-running monopolies unsupported by state power.


Define "supported" and "long"...


It would be nice for the people who are down voting you to provide a counter example.


The banking sector, Alcoa, big pharma, automotive, oil/gas energy, telecom, and even groceries. Anywhere you have rents being sought and reaped, you have monopolistic power. I'm excluding the remark about government support because that's more a matter of corporations forcing the government's hand through threats.

When someone calls me ignorant I don't necessarily feel like replying.


Yet somehow many thought that the cynical ones among us were wrong and Google would be different.

Our opinions are based on life experience, watching the same cycles unfold time and time again.

The next one will be the demise of open source ideals after everyone has migrated away from GPL for everything but the Linux kernel.

I will give it around 10 years for it to happen.


Open source ideals defined by whom? Richard Stallman? True open source is code that you can do whatever you want with. When Google releases their code, it's under what I consider the true ideals of open source software, and not some license that threatens legal action if you don't release your modifications.


How does it feel following your true ideals of open source software, installing AOSP on your mobile device, with the ability to use apps from the Play Store?

What about updating the BSD distribution on the PS4?


Even though it is mostly under the radar, UC Browser is an extremely popular browser for Android in India


It's WebKit based so it doesn't actually change anything.


Indeed. It's highly customizable.


As a rule, we test on browsers where we see more than 1% usage. This includes a swath of versions and platforms of IE, Chrome, Firefox, and Edge, as well as some foreign ones I know little about, like UC Browser.

Firefox Mobile does not meet that rule for us. I know personally a single person who uses it, and that's only because he's fiercely anti-Google.

There's just little sense testing against something no one is using, unless we also want to test NetPositive [1].

[1] https://en.m.wikipedia.org/wiki/NetPositive


Well, if nobody tests it, the browser doesn't stand a chance because it does not work on many sites. It's a vicious circle …


> On Android it is even worse, no one is testing with Firefox Mobile (Fennec)

Maybe because (almost) no one uses it? It's definitely a vicious cycle (If there are no users, there won't be any testing on that platform. If there is no testing on that platform, there are no users.).

Developer time/resources are not unlimited. It's not worth their while to target an obscure platform.


> this is WinXP-IE6 all over again.

This is logical. If you have limited developer resources, it makes sense to target the most popular combos first. Until we have true "write once run anywhere" it will always be this way.


The difference is that Chrome is probably the most standards-compliant browser around. IE was just a clusterfuck from a monopolistic company whose intent was to lock you into their self-appointed standards.


One day every webpage will render the same in every browser. One day...


Doubtful, because someone will come up with their own proprietary rendering engine in a free browser, with some minor built-in feature people want; and then we're back where we started.

PS - I miss Presto-based Opera.


Stick to <h> <a> <img> and <hr> and you can probably have that now.


I, For One, Welcome Our New Google Overlords


It will be so much better for developers if there is another browser installed apart from IE in all enterprises.

I still encounter companies downgrading to older versions of IE to support their legacy applications.


I thought this had been around forever? (I mean some form of installer and deployment kit/admin tool)

I see other pages talking about it going back to 2010/2011


This appears to go far beyond that, though. The policy and maintenance pieces in IE/Edge were one of the biggest reasons some of the larger companies I've worked with stayed off other browsers. This looks to fix that.


It'll be interesting to see if this causes some "no open source, even if a support contract can be purchased" policies to get changed.


Chrome never was open source, and never will be.


But it includes open-source elements.


So does IE, and always has (zlib and libpng, for example, and a few others). So?


Wow, this is news to me. If it started in 2010/2011, then why are enterprises still using IE?


Enterprises can't upgrade until all the software they depend on supports Chrome. And those vendors aren't in any particular rush.


Because they don't know how to manage it properly. We've been doing this for several years, alongside the "Legacy Browser Support" extension for those internal systems that still depend on IE, until we decommission them later on. Everything else uses Chrome.


Oh, I never knew about this.


It's good to see that Chrome's auto-updates can be turned off and updates pushed manually to users at a time of the IT administration team's choosing (I had to dig down into a few links to find this information). But it seems like there's nothing equivalent to Firefox ESR [1] in place, and Chrome would continue to update with the same frequency as the general public release (of the consumer focused version). Does anyone know if a longer term security-only-updates model is available with this like Firefox ESR (just for the sake of curiosity)? When I searched online I found a two year old reddit thread that indicated there wasn't one.

[1]: https://www.mozilla.org/en-US/firefox/organizations/
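(For the curious: on Windows, the update behavior described above is controlled through Google Update's group policies, which land under a registry key roughly like the sketch below. The value names and semantics are my recollection of the documented ADMX templates — 0 for "updates disabled", with updates then pushed manually — so verify against Google's current enterprise documentation before relying on them.)

```reg
Windows Registry Editor Version 5.00

; Sketch of disabling automatic update checks for Google products
; so IT can push updates on its own schedule (values from memory
; of the Google Update policy templates; confirm before use).
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Update]
"AutoUpdateCheckPeriodMinutes"=dword:00000000
"UpdateDefault"=dword:00000000
```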


Oh cool, a version for corporate rule fetishism admins who love controlling their MSI files. Happy I don't work in such an environment.


When the MSI file defaults to sending crash dumps, which may contain confidential financial information, to a server located in the developer's broom closet, yes, we have to control the MSI files.

Because admins have to answer to the auditors (and our conscience) when they ask the question "How do you ensure that personal and confidential information is not leaked from your environment?"

Which is why Windows 10 is my worst nightmare.


That is a legit concern, but if that is the case, Chrome, Win10 and the like should be completely off the table.


I thought it can be switched off in the enterprise version of Windows 10? AFAIK that's the only way Microsoft was able to start convincing companies to switch from Windows 7.


They do provide a GPO object to turn off the "Telemetry" for Enterprise versions.

But it still installs the entire Windows store with Candy Crush Saga, Facebook, Minecraft, XBOX, etc. and they make it nearly impossible to remove. Why on earth they would force that crap down the throat of their ENTERPRISE customers is beyond me.

I fought with it for months and finally gave up and cancelled the deployment project until MS offers a better way for enterprises to control the Window Store.

Update: Added more below

----

I.e.: when trying to build a secure environment, you must eliminate all unnecessary attack surfaces. The customer did not need "XboxIdentityProvider" installed in their Windows 10 Enterprise environment, but this is something MS feels must be installed on all Windows 10 PCs, and thus they made it nearly impossible to remove.


If you can't see how even a single one of these 200 policies could be useful for a corporate environment, then you are just plain ignorant and/or trolling, or you've only been exposed to tiny startup environments.

And I'm not talking about controlling employees here, but simply enforcing settings for compatibility with legacy systems, legal compliance, etc, the list is endless.


No, I'm not trolling. And I also don't agree with you where software developers are concerned. It's more a cultural thing (does the company want to control everything or not?). Just one famous example: Google. You can be big and still enforce and live a startup culture. I never heard from Google web developers that their Chrome is controlled by corporate policies. There is also a term called BYOD (Bring Your Own Device) or BYOT (Bring Your Own Technology) https://en.wikipedia.org/wiki/Bring_your_own_device with all its advantages and disadvantages, and many even not-so-hip / old companies live and work that way.


So much better for the web application world. Most support tickets / issues are related to the browser being used. This is one step closer to removing IE.


This is a big negative for security. The best thing a sysadmin can do is blindly click OK on every update.

Out-of-date web browser bugs are pretty much THE mainstream hacking route. It is the path of least resistance.


Chrome is the new IE. Enterprise apps are being developed with the assumption that they will only ever be run on Chrome, and they become fragile because of that assumption. A recent example: someone set up a race condition between timers that only worked (i.e., resolved in the specific order the app needs) in WebKit-based browsers. No one cared to fix it, because it did work in Chrome, and that's all that was needed.


Windows 10 is the new IE


I guess this is more oriented toward devices like meeting-room hardware (can it be plugged into a VoIP server / Avaya switch?) that preys on the Polycom and Cisco markets... using Hangouts.

I guess the TCO for digital signage devices is cheaper than the incumbents' (cost of an Android signage app/web app vs. a Windows app/web app plus remote management and control).


I don't see a Linux version on there. Does anyone know when/if Google will provide one?


They will, if and when there is demand for a Linux version of Enterprise Chrome from large Enterprises with deep enough pockets. Which might or might not ever happen.


This is a silly off-topic request, but any chance someone knows of a theme (ideally Wordpress) that looks like this page? I actually really like it and could use it for my next project.


Are there any real differences besides support?

