So their example use case for `position: sticky` is putting a distracting header onto a space-constrained mobile device __where the browser's own address bar has the good sense to disappear__ when scrolling down, not rise from the page like some pixel-craving zombie element.
And let's keep in mind that in the real world, it's going to be at least 3 times bigger than that, because it's not a sticky bar for ants after all. And it'll be rocking its best friend Mr Hamburger Menu, some social-tracking-sharing buttons, and a banner ad for whatever was misclassified from the background of the weighted average of the last 10 Facebook pictures you appeared in.
> distracting header onto a space constrained mobile device
This is my pet hate with Google's AMP mode which most sites seem to be switching to.
On my iPhone 5S it breaks hiding the browser's address bar when you scroll; then you have the AMP 'address bar' of the same size, and then you finally get to the site's sticky header and footer. On Huffington Post the content gets around 50% of my screen. Oh, and it breaks Safari's Reader mode, which I assume also means accessibility features.
Well, I do not, because I am on a limited connection (~1.5 GB per month). I genuinely appreciate that sites load faster when I am on a mobile connection too.
While I understand the pain of (sometimes) having broken sites when AMP is on, this really helps when someone is in a country where mobile internet is expensive.
I agree I hate sites that do that, but changing it to a CSS property instead of some custom half-assed JavaScript is an improvement because now there's at least a hope of being able to block it with some extension.
If you write a mobile SPA, you may need a quick way to change context. Gestures are limited. If at some point you need buttons/links to be always available for the context switch, then a sticky header and/or footer is a logical solution and is what is used in native apps anyway. It also has the additional benefit of giving you a place to put something that helps identify said context, such as a label, an icon, or some shape/color.
On a regular desktop you would not have this problem: you can have a menu far away from the content. You can have toolbars. You have the name of the tab and the URL giving you context, and the page needs less context switching because you can just display more instead of having to juggle what fits in the viewport. But on mobile, context switching is very hard to do right, and it must be accessible from your thumb.
I wrote that demo. I was a little behind schedule and needed a quick example that highlighted the point. I'm not disagreeing with you, but there are plenty of good use cases for this, and in the case of my blog it's easy to lose track of the current section in the article.
> I needed "sticky" today for a table header that I want to keep visible as the user scrolls on the page...
Note that what you want is different from what the user wants. I am not alone in thinking that stickies never add value, no matter how convenient the designer thinks they are: https://alisdair.mcdiarmid.org/kill-sticky-headers .
I generally agree with you, but I'd argue there's good use-cases for it. I hate it when sites feel the need to add a top and bottom navbar on my already small mobile device. But on the other hand, when I'm on my desktop using a web app that displays large tables (which is pretty common for me), it's really easy to confuse the column you're looking at. In cases like that, I like having the table headers visible at all time.
I have never, mobile or desktop, seen any use case for it that I like, although I also don't do much mobile browsing; but it is fair to say that my personal preference shouldn't be what determines the future, or even the present, of web design! Despite that, I do think that it is fair to say that designers are much more enamoured of them than users are.
(It's always refreshing how civil disagreements on HN can be. Thank you for your thoughtful reply!)
I agree absolutely that this is exceedingly annoying. That said, it's popular among the people who control design decisions so it's here to stay, for now.
So do we use a javascript solution that also performs like crap, or accept that this is a common use case and provide a standard way to do it?
But plenty of sites already have floating elements that reposition based on scroll listeners. The sticky property just allows it to be done in a performant way.
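The performant version is purely declarative; a minimal sketch (the selector and offset are made up for illustration):

```css
/* A header that scrolls normally until it reaches the top of the
   viewport, then stays pinned there; no JavaScript involved. */
.site-header {
  position: sticky;
  top: 0; /* distance from the viewport edge at which it sticks */
}
```

Because the browser handles the repositioning itself, this avoids the jank of firing a JavaScript handler on every scroll event.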
"""
When content changes above the viewport, Chrome now automatically adjusts the scroll position to keep content in the viewport fixed unless the CSS overflow-anchor property is set.
"""
Not only that, but "Showing and hiding the URL bar on mobile no longer resizes the initial containing block or elements sized with viewport units such as vh." Thank God! I thought they were never going to reverse this horrible decision. These two changes should substantially reduce scroll jumping on Android which is one of my biggest annoyances on the web these days.
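The quoted `overflow-anchor` escape hatch is a single declaration; a sketch for a container that manages its own scroll position (the selector is hypothetical):

```css
/* Opt this scroller out of Chrome's automatic scroll anchoring,
   e.g. because script already compensates for content inserted above. */
.log-viewer {
  overflow-anchor: none;
}
```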
So if I just redirect all http requests to https, what unexpected consequences might I run into?
Not-for-profit hobby site/community with a couple hundred regular users. Have been running https with Let's Encrypt certs for a year but not advertising or defaulting to it. No legacy systems or other entanglements.
There is a forum that allows users to post inline images and media from other domains.
If you have absolute URLs (http://..../foo.jpg) saved in documents or a database or something like that, you would need to rewrite them to be protocol-relative or just https:// .
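A minimal sketch of that rewrite, assuming the URLs live in stored HTML fragments (the function name and regex are illustrative, not from any particular forum engine):

```javascript
// Sketch: rewrite absolute http:// references in stored HTML to
// protocol-relative URLs so they follow the page's own scheme.
function rewriteToProtocolRelative(html) {
  // Matches http:// at the start of src/href attribute values;
  // https:// values are left untouched.
  return html.replace(/(src|href)=(["'])http:\/\//gi, '$1=$2//');
}
```

Run once over the stored documents (or on render), `http://example.com/foo.jpg` becomes `//example.com/foo.jpg` and inherits the page's scheme.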
Hmm, but they said SSL is already enabled if you visit the site with https. Shouldn't redirecting http > https be completely risk free then or am I missing something?
If you visit a secure site (https), embedding insecure items (http) will fail because they could do just as much damage as loading the entire page over http.
That's current behavior, but I wouldn't guarantee it will remain that way. An http image compromises the integrity of the site: it could be manipulated to change the meaning. Imagine reading a news site using an unencrypted CDN while whoever runs the WiFi (or the country you're in) is replacing the images with similar yet different ones, giving a different impression than what the creator intended.
You could do this quite subtly as a way to influence people by making certain figures appear more sinister.
These warnings tend to get tougher over time, so what's grey now may be red next year.
I think the reply was suggesting that "not advertising or defaulting to [HTTPS]" implies that a quick check for absolute HTTP URLs would be Good Thing.
Appears it should be OK unless some inline embedded images are hard-coded to "http://", in which case the browser will, I believe, refuse to even try to load them, so you may need to fix that. If they're all relative, you're good to go :)
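For what it's worth, the redirect itself is usually a one-liner at the web server level; an nginx sketch (the domain is a placeholder, and other servers have equivalents):

```nginx
# Catch-all for plain-HTTP requests: answer every one with a
# permanent redirect to the same path over HTTPS.
server {
    listen 80;
    server_name example.org;  # hypothetical domain
    return 301 https://$host$request_uri;
}
```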
IME, mixed-content warnings. Tracking down those can be a nuisance. If users can add inline images, you'll basically not be able to avoid these.
Let's Encrypt certs work on pretty much every device. (Some Comodo certs don't work on older Android, for instance - we hit this at work and switched those sites to Let's Encrypt, problem solved.)
You mentioned community. If you have a forum or message board where people can select their own avatars or post image links, then you'll enjoy the land of mixed content warnings.
Add the header "Content-Security-Policy: upgrade-insecure-requests" and the mixed content errors will go away, but you'll experience a new problem.
About 30% of images/avatars will break, and of those, roughly 10% of them will cause the page loading animation to keep going for 2-3 minutes before finally timing out. I presume this is due to some sort of server misconfiguration on their part, as most non-HTTPS sites fail fast.
Give it a few months and users will slowly update their avatars and stick to sources that support HTTPS (like imgur), but don't expect the problem to ever go away completely.
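For reference, the `upgrade-insecure-requests` header mentioned above is a one-line change in most servers; an nginx sketch:

```nginx
# Ask supporting browsers to fetch http:// subresources over https://
# before the mixed-content blocker ever sees them.
add_header Content-Security-Policy "upgrade-insecure-requests";
```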
Don't do this on your domain. Have a separate domain for user content (and, if the content isn't public, better have many separate domains and distribute the content between them; maybe even have a per-user domain). Browsers can be convinced that a particular resource is an html page even if the content-type is wrong.
Just did this this week. The only unexpected effect I found was that all our Facebook likes were reset to zero. FB must consider the two versions of each page as being separate and different.
Oh I agree it's no excuse, OP just asked what the tradeoff/cost was. There is still some but as you say it's negligible and especially so for a very small site like that.
>the Web Bluetooth API [...] enables web developers to connect to bluetooth devices such as printers
Feeling nostalgic for FAX spam? Let's update it for the modern era!
(Kidding. A little bit. The Web Bluetooth spec addresses these concerns at length -- https://webbluetoothcg.github.io/web-bluetooth/#security-and... -- but mostly as a "here are the many, many ways this could be used maliciously", with not nearly enough "and here's how we plan to prevent that" for my taste.
I gotta say this is an attack surface I'd prefer we'd left buried deep underground; I can't think of any device I'd feel safe allowing a website to pair with. Keyboard? Keylogger. Printer? Physical spam. Headphones? Audio spam. Who wants this?)
I can think of a couple of things, at least in the prehospital medical field. Bluetooth EKG machines are used to do documentation and need to have their data sent to the hospital as fast as possible. Also, Bluetooth scanners are used to scan drivers' licenses and triage tags.
The less manual input a provider has to do, the better. Then they can spend their time making sure the patient is being taken care of.
Most EMS documentation is done in web applications instead of desktop apps so it can work on anything from an iPad to a Surface to a Toughbook.
You don't want to know how these things are hooked up today.
With location based services making use of beacons, you could easily help a customer navigate your store without asking them to install an app - they just need to visit your website. This is exactly what my startup is working on.
This is too soon if Google goes forward with the red exclamation point "insecure" flag before there is a free solution for wildcard certificates.
What are free sites that give every user a subdomain for their profile/site/etc supposed to do? There are many open source multi-user site platforms that follow this pattern, and all of these platforms now effectively have a huge monetary subscription fee associated with them due to the fact that you now need to purchase a wildcard certificate in order to not scare away users.
There is no free wildcard SSL certificate service, so Google is essentially forcing you to pay money to not scare users away from your site.
I am all for this change eventually happening, but not until all DV SSL use cases are covered by free services. Until then, this change is inappropriate.
If Google wants to punish sites that can't afford or won't pay for expensive wildcard certificates, then they should offer a solution of their own (like their own ACME CA that supports provisioning wildcard certs).
The only problem is if those subdomains have a password input field "somewhere" on their sites, AFAICT. Or are you referring to google's "later" plans to add a red exclamation point "insecure" flag to the url bar for all HTTP pages?
I'll admit when I first heard of them flagging HTTP password fields, my instinct was to write a little javascript to "mimic" password input field behavior (and store the real password away somewhere else, then at submit time, it sends in the correct data). But if it's just a tiny warning on the url bar, meh, not sure if I care...
Sephr was talking about wildcard certificates, which Let's Encrypt doesn't offer. But dynamically requested certificates can sometimes be used as a replacement for wildcard certificates.
I doubt most will even go that far. You can pretty much expect those that won't go HTTPS (for whatever reason, and there are many) to change their input type="password" fields to input type="text" fields.
That is probably what I am going to do for the internal site I maintain at work, since I can't get an SSL certificate for such a thing.
All this change is going to do is make password eavesdropping in person easier.
If it's hosted in local IP space and therefore you can't get a certificate, you can set up a CA and push that CA certificate through Group Policy. I had to do it myself and it took 3-4 hours (mostly because I'm bad at Group Policy).
The problem is that I'm a developer on a team of six. And my site is used by another five or six teams. It's a little tools site that does various SQL queries and such against databases other teams don't have access to. They're not going to allow me to push my signing certificate onto everyone's computers. I'm very low on the org chart.
Well, I assume in a company with that many teams they would have already come across a need to manage their own simple, internal CA. Maybe you can be the person to set it up; trust me, it looks scarier than it is.
Oh yeah, a text field, that's an interesting option and people might even use it as an easy workaround [LOL]. I guess you could make the viewable size like 1 char, then it won't be much worse than inputting it from a smartphone. Except for the large screen people can see from behind you LOL.
I guess people could do self signed certs that expire in "100 years" but you're right, even installing those can be super painful, and people may not go that far.
Of course, initially what people will do is "nothing" and just let the insecure message appear, since it doesn't actually block any functionality seemingly...
I would. That's the same price as a year of hosting an entire VPS instance with 1TB bandwidth a month, 40GB SSD storage, and 512MB of RAM. And for what? A completely automated process that loads a page on your site, verifies a META tag is present, runs an openssl command, and prints the output to a webpage for you.
So for that $50, you can double all the specs on your server, or get a little green lock in your URL bar. And don't forget, we're not all in the US. I know a guy in Venezuela that doesn't have $50USD to spend on an SSL certificate.
I understand it costs money to run a CA. But if Let's Encrypt can run one for free, then surely so can Google, if they're so gung-ho on 100% of the web being over HTTPS. If you want that to be a reality too, you need to back free wildcard certificates. Otherwise you'll never be able to deprecate HTTP.
You're absolutely right. I'm surprised nobody here seems to care. I guess HN is popular with techies who have root access on their servers, or people who run commercial sites.
What happened to the hobby / non profit apps, blogs and forums that fill 90% of the web and my bookmarks? All of them have a login form somewhere. Tons of people already use secondary emails or throwaway emails, but these site owners now have to pay for SSL or have to upgrade their shared hosting to "business" or the like to get SSL included, or pay extra fees to install SSL.
If all they do is look for a password field, that means pretty much any phpBB or WordPress blog out there is de facto "insecure", and the owners of those sites now need to find a way to get SSL on their (most likely) shared hosting. Mine is not free at all; it requires "installation" fees plus a "dedicated IP", so in the end it costs nearly as much as just buying their SSL package.
At this point, though, it's very unclear how Google will find those "insecure" password fields. Does it honor nofollow? Because last time I heard, it's not useful to get a login form page indexed anyway.
I'm going to get skewered alive for saying this, and I do believe their intentions were noble, but I honestly think Let's Encrypt has done a lot more harm than good.
It's completely deflated the opposition to the CA racket, but not given a comprehensive alternative.
There are servers that can't run certbot for a variety of reasons (including administrator policies that people running the webservers can't control.) The 90-day expiration is the most arduous of any CAs out there by far. Not only are there no wildcards, but there are strict limits on the number of subdomains you can register per week.
And if you don't like it ... too bad. There are zero free alternatives.
Most of us are hackers and can work around these limitations, but not everyone can. You can't expect most people to set up certbot, let alone write their own version. You can't expect them to build some system that automatically batches and registers new subdomains and maintains all those certificates.
> I guess HN is popular with techies who have root access on their servers
That's a lot of it as well. I'd go so far as to say the majority of internet sites out there (by number, not by traffic volume) are little commodity hosting firms that give you a web GUI version of an FTP client, and maybe if you're lucky charge you $10-20 a month for SSL as a checkbox feature in the payment options.
> At this point though it's very unclear how Google will find those "insecure" password fields.
It will probably just look for <input type=password> and if that exists, show the "Not Secure" message.
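If it really is that simple, the heuristic fits in a few lines; a guess at the kind of markup check involved (not Chrome's actual implementation):

```javascript
// Naive sketch: flag a page as password-collecting if its markup
// contains an <input> whose type attribute is "password".
function collectsPasswords(html) {
  return /<input\b[^>]*\btype\s*=\s*["']?password["']?/i.test(html);
}
```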
If you're running a commercial entity and $50/year is a problem for you, then I'd suggest that your business may have its challenges. Obviously it is (in most cases) possible to automate the cert creation process using let's encrypt for free, so it's a trade-off on whether the time it takes to do that is worth more or less than the $50 it'd cost to purchase a wildcard cert.
As to the compute power you can buy for that, it could be that says more about the cheapness of compute power than it does about the expense of SSL certs.
As to Let's encrypt running for free, well the post I was referring to indicated they didn't want to go that route, also you do know they're actively soliciting donations at the moment because, guess what, you can't run a CA for free...
Making the web open only to commercial entities is a huge problem by itself. I definitely would not want to make that a premise or assumption to participate in the Internet as an equal player.
I'm not. I'm running byuu.org without any ads, and without selling any products. I used to spend about 60 hours a week coding, nowadays it's more like 20 hours a week, and recoup only the occasional donation or licensing agreement (they reach out to me) for my software. If you were to weigh it against the hours I've put into things, I'm earning about $0.60 an hour for my work. And if you consider my own personal expenses on my projects, I'm at about -$30,000 in total. But I have fun doing it (occasional trolling aside), so it's worth the cost.
I wasn't willing to accept the Let's Encrypt limitations, and so I paid for a three-year AlphaSSL wildcard certificate for ... I believe $132 or so.
> As to the compute power you can buy for that, it could be that says more about the cheapness of compute power than it does about the expense of SSL certs.
Not really. There is absolutely no reason it should cost $5 to run openssl with a SAN of "www.byuu.org" and $50 to run openssl with a SAN of "*.byuu.org" -- the verification process is 100% identical for both.
That is 100% pure, unadulterated greed. They charge that rate because they can.
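That claim is easy to check against the tooling: the wildcard and single-host cases differ only in the name string. A sketch (illustrative domain; `-addext` requires OpenSSL 1.1.1+):

```shell
# Generate a key and a CSR for a wildcard name; swapping the SAN for
# "DNS:www.example.com" is the only change a single-host CSR needs.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout wildcard.key -out wildcard.csr \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName=DNS:*.example.com"
```

The CA's domain-validation work is the same either way; only the price differs.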
> because, guess what, you can't run a CA for free...
I know, the auditing fees are obscenely expensive and required every six months.
Google could afford it if they want the web to be 100% HTTPS. It wouldn't even be a drop in the bucket for them. It'd be a fraction of a sliver of a drop.
>Chrome will mark HTTP pages that collect passwords or credit cards as non-secure
Will probably get hated on, but you should shell out for your own domain if you're collecting this info - there are quite a few BYO domain, free hosts.
Zendesk, for example, has a Let's Encrypt-based system for all those <account>.zendesk.com pages. While it wouldn't be trivial to set it up, it is free and the information is out there if the admins of those sites want to make it possible.
Unless you're creating [more than 20 new subdomains a week][1], you're not gonna hit that limit anyway. You can increase that figure to around 2,000 new subdomains a week if you're willing to include multiple subdomains per certificate; that should be more than enough for all but the largest of sites.
And even if you do need more:
> If you are a large hosting provider or organization working on a Let’s Encrypt integration, we have a rate limiting form that can be used to request a higher rate limit.
Just start them off with HTTP and transparently upgrade to HTTPS after 9 hours. And if you're only retaining a small percentage of users who sign up, you can stretch that limit even further by only upgrading the users who you retain.
> What are free sites that give every user a subdomain for their profile/site/etc supposed to do?
Couldn't you include an ACME request for a new subdomain certificate into the "create account" script?
This would certainly add some complexity (especially with automatically fulfilling the "proof-of-ownership" challenges and properly renewing all the subdomain certificates), but it seems doable with less running cost than a wildcard cert subscription.
Edit: On second thought, this does leak some interesting data to the CA - such as, how many users you have, what their usernames are and how your rate of new signups is.
>Couldn't you include an ACME request for a new subdomain certificate into the "create account" script?
If you're using Let's Encrypt, that'd also limit you to 20 new users per week, so I think you'd need a more complicated approach than that (or a different CA). Batching into groups might work, but then user provisioning time increases. Wildcard certs do seem like the right way to go here, and the cost is pretty minimal.
I mean, those sites are insecure. Google has a responsibility to the user of the browser, not the site owner. If you want people to trust your site, you have to provide them the security to do so.
20 certificates per week, each cert can be for up to 100 subdomains via SANs; and that limit is only for new certs, renewals don't count towards it - so if you can batch your requests (eg, with an "it may take up to 12 hours for your site to become available over HTTPS" notice, then twice a day make one big cert request for all your new domains), you can get up to 2000 per week; and there's no hard limit to the number you can accumulate over time.
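The batching idea is easy to sketch: accumulate new subdomains and split them into groups of up to 100 SANs per certificate request (100 names per cert is Let's Encrypt's documented cap; the function below is illustrative):

```javascript
// Split pending subdomains into batches of up to 100 names,
// one batch per certificate request.
function batchForCerts(subdomains, maxSans = 100) {
  const batches = [];
  for (let i = 0; i < subdomains.length; i += maxSans) {
    batches.push(subdomains.slice(i, i + maxSans));
  }
  return batches;
}
```

With one such batched request a couple of times a day, the 20-certificates-per-week limit stretches to the ~2000 names mentioned above.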
No site should require a day's delay on an automated software thing that can be done in seconds. The limit remains at 20 per week if you want to provide a good experience. And that's not big.
Sure, it's not ideal... but does it really matter? Just register your domain with this hypothetical service while you're still preparing the content for it, so there's time for the cert to go through before you need it. Or they could let you provide your own cert if you're in that much of a hurry.
We're talking about edge-cases within edge-cases at this point: a service that (1) offers HTTPS subdomains (2) hasn't been able to get a rate-limit increase from Let's Encrypt (3) is unwilling to buy a wildcard cert (4) either is growing by hundreds of subdomains a day, or has a significant amount of clients who can't wait a few hours for a cert but won't buy their own. That's so oddly specific, I'm not surprised people aren't going out of their way to cater to it.
So, how do I manage a website where I allow users to embed arbitrary URLs as images, so I don't even know which protocol they'll use, but where I want HTTPS for the page itself?
Will I just have to deal with the yellow "mixed content! insecure!" warning? Will I have to proxy requests?
You could (and in fact probably should) do what GitHub does. Fetch the images, store a copy and serve them yourself. That way even if the external URLs stop working, the images on your forum/site will remain functional.
This is scary to me so I haven't tried it yet. If I'm allowing users to upload arbitrary files, I might have to deal with things like copyright laws. It doesn't sound fun. Does anyone have relevant information for such a site in the United States?
I'm not a lawyer (nor even American) but I believe the most relevant legislation would be the Digital Millennium Copyright Act. In which case, it might be a good idea to formulate a policy similar to the one linked to in the footer of this very website.
It's not a trivial problem, I'd be more worried about users uploading illegal content maliciously. Would definitely require a moderation team depending on the site size...
In the short term, as long as you don't have any password/CC prompts, you can continue using HTTP.
In the long term you'd have to "hope" the whole web transitions to https and just rewrite the url's to https, or cache them locally or proxy serve them locally. HTH.
A single HTTP request in an otherwise HTTPS page can compromise the privacy and integrity of a user's connection. SSL/TLS isn't just about checking all the boxes so Chrome will give you the prettier icon, there are real stakes for users.
Imagine you have a website with a payment form on an HTTPS page, but you embed some external analytics script over HTTP. If your visitor's request is MITMed (e.g. they're in a coffee shop or some other insecure WiFi access point, etc), then an attacker can modify the script (that's over HTTP) to send these credit card details to the attacker's server. Once there's a single HTTP resource on the page, HTTPS means a whole lot less, hence the scary warnings.
Hmm, I don't understand the Bluetooth protocol well. What are some security issues I should be aware of if a rando site asks for Bluetooth permissions (assuming this feature is gated) and I accidentally click yes?
Will it be able to take over Bluetooth speakers or robot toys? They must be in discovery mode for it to connect, so I am safe, right? Then it will just be privacy issues, which I can live with.
Every bluetooth device has a unique MAC-like address, which could be used to track you (or your device, at least).
There are also obvious privacy/security issues with the fact that device-stored information (full name, location, biostatistics, etc) can be accessed by any JS loaded onto the same page (read: analytics companies).
The whole Web Bluetooth spec is online[0], if you'd like to read it for yourself.
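For the curious, the pairing flow in the spec is gated behind a user gesture and a chooser dialog; a browser-only sketch ('heart_rate' is a standard GATT service name; the function itself is illustrative):

```javascript
// Must be called from a user gesture (e.g. a click handler).
// `bluetooth` defaults to the real browser API; a stub can be
// injected when running outside a browser.
async function pairHeartRateMonitor(bluetooth = navigator.bluetooth) {
  // The chooser UI only lists devices matching the filters, so a page
  // cannot silently enumerate every BLE device in range.
  const device = await bluetooth.requestDevice({
    filters: [{ services: ['heart_rate'] }],
  });
  return device.name; // resolves only after the user picks a device
}
```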
I have already been playing with this as a manufacturer of BLE power switches.
It does allow for cross-platform BLE access leveraging JavaScript in the browser. The experience, once someone gives permanent access, is similar to running an app on a phone.
This means you can do a scan request from the browser and all BLE devices respond. Or you can silently listen to advertisements. I don't think you can restrict access to particular devices and, for example, hide the fact that you have a Fitbit or Apple Watch.
For smart home applications it is cool for configuration but not for daily use. You don't want anything to rely on the user opening up a browser. Apps much more naturally run in the background, so the downtime of the browser is a serious hurdle.
Every day I'm shocked by new BLE devices coming in that leak information or are improperly protected. I would only give permissions to sites that you would definitely trust with this.
Something that can come out of this is better attention to multi-device access to BLE devices. Keys that are set up for bonding cannot be shared easily with other users. Apple brings its usual Apple-only solution in the form of HomeKit, but it would be better to have keys that can be shared across platforms. If the browser also becomes a player, this is going to be important.
I'm most concerned that this will make it feasible to perform rogue firmware upgrades to vulnerable devices that previously were protected by not being connected to the Internet, turning them into various kinds of surveillance and denial-of-service bots…
Web Bluetooth?! Am I the only one who just learned this was a thing?
Recently I have grown an interest in progressive web apps. I wonder if there is a list of the sensors/interfaces that can now be accessed from a web browser; it would certainly be helpful.
I wish they implemented text reflow on mobile devices, it'd make the web so much more readable for end-users and websites which are not mobile optimised.
And usually, having control over the zoom level and getting reflowed text makes for a vastly superior reading experience compared to TEN LINES IN HUGE LETTERS PER SCREEN like many mobile-optimised sites sadly do.
>Chrome no longer allows opening of pop-ups during inputs which represent a touch scroll, such as touchstart and touchmove.
Wow. I remember people complaining that Google was the only major ad network that did not forbid ontouch* code in ads, and one that had taken a principled stance on allowing it.
There’s too much potential for abuse in web technologies; it’s past time for web browsers to clearly divide their capabilities into higher-level buckets that have restrictions.
Two buckets that come to mind: maybe you should have to specify that either you are developing an “app” or you are developing a “page”. And no page should be able to do things like create sticky-elements (not to mention a zillion other capabilities) because we all know some Obnoxious Clueless Ad Company will take about 13 seconds to start ruining your browsing with it. Anything identified as an “app” should be impossible to load in a normal browser, accessible only under a dedicated home-screen icon with a special sandbox, etc.
I agree, and it seemed like Chrome was aiming for something like that model for a while, but then Google seems to have backed away from it. Chrome apps are supposed to be phased out by 2018, except on ChromeOS.
Personally, I am really unhappy with the direction of the web right now. Ads have gotten more and more intrusive (particularly on mobile), and it seems like every site is doing some sort of shitty "sign up for our newsletter!" or "but wait there's more!" overlay. Blocking and display-control mechanisms have seemingly fallen behind (again, especially on mobile, where for the typical user there's basically no viable ad blocking solution).
I've heard web developers defend all of this from a business perspective (we need the ad revenue), but it seems really shortsighted. All it's going to do is drive users to walled-garden apps, probably dominated by one or two big players in each market category, and the friction to get your content or product in front of customers will increase tremendously.
You are forgetting half of the benefit of HTTPS. Not only is content kept secret, but it is also kept unmodified.
Even if you don't care about secrecy, you should use HTTPS to ensure your content doesn't have ads or malware injected into it by your ISP (which happens) or a WiFi hijacker (which happens). If a user of yours sees an ad or gets a virus from someone injecting it into your website, they are going to blame you.
HTTPS protects the site owner just as much as the site user.
Hmm, good point. The ad injection thing is quite relevant to developing markets, where many ISPs routinely inject ads onto sites.
But it's certainly more overhead particularly for mom and pop businesses. I suppose it will drum up a lot of business for web developers though, a Chrome stimulus of sorts.
the other thing to note is that a lot of new APIs (and older ones which are being updated) require HTTPS - geolocation, webBT, webUSB, service workers, etc, all require HTTPS (or will, soon).
Yeah, can't wait till my static Jekyll site with my CV and project descriptions (projects not hosted there) is somehow insecure and dangerous.
[edit] I'll guess I'll have to find a new host then, since Github pages doesn't support https on custom domains. Perhaps Netlify, seems to support Github hooks.
If you're serving them over plaintext http, then they're already insecure. How dangerous that is might be debatable, but there's no doubt that it's insecure.
Well China intercepts http traffic and inserts javascript malware joining users into a botnet that DDOSes github pages of human rights organizations. It depends on what you mean by insecure and dangerous.
The comment I was responding to was talking about GitLab having SSL on their <username>.gitlab.io pages. While correct, that is beside the point OP was making, which was about custom domains. GitLab doesn't offer that either.
Opening up a secure HTTPS iframe (for example from another domain) on a non-secure HTTP website labels the entire site as "Not Secure". This will generate an unnecessary warning to the user when the login form or the payment form is really secured by HTTPS.
If the Chrome development team implements this new security feature, they need to check whether the window containing the form is served over HTTPS, not only whether the top window is.
A warning which is completely applicable if someone modifies the HTTP transmitted site and replaces the secure iframe with a phishing version of it. And since the browser can't tell when that happens, it displays the warning always.
I think that the main problem with iframes (and frames in general) is that it's impossible for the user to check in an easy way where they originate from.
If the URL bar automatically changed to the iframe's URL when the user hovers over the iframe, it would at least give the user a way to check where the iframe originates from, and there would be no need to label the entire site as "Not Secure".
> I think that the main problem with iframes (and frames in general) is that it's impossible for the user to check in an easy way where they originate from.
Exactly, just like all other resources (assets, media).
> If the URL bar would automatically change to the iframe's URL when the user hovers the iframe
Not only is this extremely confusing for anyone who isn't an expert, it also breaks when you don't use a mouse. It can also be circumvented really easily by simply hiding an iframe.
-----
Why exactly don't you like the current behaviour? If you load a page over HTTPS that loads a bunch of insecure pages inside iframes, the page _is_ insecure.
In my example it was the other way around. You load an insecure page over HTTP without triggering any visible warning, and then you load a secure iframe over HTTPS with a login form, and this triggers the "Not secure" warning.
The only reason to do this that I can think of is because of MITM attacks like detaro wrote about.
position:sticky first appeared in Chrome Canary in 2012, somewhere around Chrome 30, but then they pulled it because of the issues.
A coworker was looking for a JavaScript sticky solution for an internal tool, and since that team could dictate what browsers the users should use, she started building it around position:sticky. Soon enough the property disappeared and she had to go with JS anyway.
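Nowadays the usual approach is to feature-detect sticky support and only wire up a JS scroll handler as a fallback. A minimal sketch, assuming a `supportsSticky` helper name of my own invention; the detection function is injected as a parameter so the logic can run outside a browser, but in a page you would pass `(p, v) => CSS.supports(p, v)`:

```javascript
// Hypothetical sketch: detect position: sticky support, including the
// older -webkit-sticky form that Safari shipped first.
function supportsSticky(cssSupports) {
  return ['sticky', '-webkit-sticky'].some(v => cssSupports('position', v));
}

// In a page:
//   if (!supportsSticky((p, v) => CSS.supports(p, v))) {
//     attachScrollHandlerFallback();  // hypothetical JS fallback
//   }
```

Had `CSS.supports` been available back then, her code could at least have switched between the two implementations automatically instead of being broken when the property disappeared.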
The company got bought by their largest competitor a few months ago. I never would have guessed that the company would be gone before this finally got released.
Bah, it's not free on HostGator even with a free certificate:
_Do you authorize the $2 / month or $24 / year dedicated IP charge? (Dedicated IP required for SSL) *_
_Do you authorize the $10.00 SSL Installation fee?_
EDIT3:
Re: having to buy an SSL cert on a hobby site / forum ...
Ok, I see HostGator allows third-party SSL. It's not super straightforward, but at least it seems possible to use a free certificate from Let's Encrypt and then get it installed...
I have shell access too, but it's not "administrative" access so afaik, the Let's Encrypt "Certbot" would not work?
Would https://gethttpsforfree.com/ work on HostGator with shared shell access (not administrative)? Does anybody know?
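For what it's worth, gethttpsforfree.com is a manual ACME client that runs in your browser, so it shouldn't need administrative shell access at all: it asks you to paste in an account public key and a CSR, which you can generate anywhere you have openssl (even on your own laptop). A rough sketch, with example.com standing in for your actual domain:

```shell
# Generate an account key pair (the site asks for the public half)
openssl genrsa -out account.key 4096
openssl rsa -in account.key -pubout -out account.pub

# Generate a separate key and CSR for the domain itself
openssl genrsa -out domain.key 2048
openssl req -new -sha256 -key domain.key -subj "/CN=example.com" -out domain.csr
```

The remaining step is proving control of the domain, which for shared hosting usually means uploading a challenge file under `/.well-known/acme-challenge/` via FTP or your non-root shell; no root is required for that either. You'd then install the issued certificate through HostGator's third-party SSL process.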
Just the text please! Viva le reader mode!