Google's insistence on using security as an excuse to remove functionality that they want to keep exclusive to their products is so blatant it either is or should be illegal.
No local access kills Google Cast competitors as well as things like this (a Nearby Share competitor); Android SMS limitations nearly killed most Android messenger competitors; removing clipboard access forced people to use one of the two decent Android keyboards to get clipboard sync back (one of which, conveniently enough, is developed by Google)...
I have not seen a single instance of Google killing functionality in the name of security that couldn't have been handled just as easily with a permission, or that wasn't clearly designed to stomp out competition.
Local access itself isn't killed, there are still some ways to do it. This Chrome post lists a number of options. I think the WebTransport and Reverse embedding options would both work for Portal.
Remember when all the Google products had a beta label on them (except search)? ... when you're designing software that interacts with Google, always imagine it has a beta tag and plan on what you'd do if the beta was abandoned. I personally miss what Wave could have been (better than Slack) and think the trajectory of GWT could have been dramatically better if Google had remained the primary developer. I'm now devoid of Google dependencies with one exception - the Go UUID library. Other implementations exist and it would be trivial to wrap one.
What's the other good keyboard? Since Swype went off the app store I have not been happy with Google keyboard at all. I want to be able to Ctrl+C and Ctrl+V like I could with Swype.
https://github.com/florisboard/florisboard is currently the leading FOSS keyboard, even with swipe input (swipe is English-only for now and not as good as the proprietary options, but it's a first step)
I don't really know a good solution to the problem of "if platform vendor does something it doesn't naturally cross a security boundary but if 3rd party app or website does the same thing it would."
Because we deal with this kind of stuff all the time in totally benign ways -- people aren't too upset that the kernel can do things that userspace processes can't and that certain privileged functionality has to be built-in.
Chromecast isn't that magic. If you want to build a CC competitor, you have to provide a browser extension or an app that uses native messaging, which seems fine; it's the same as what Google themselves do. Google's advantage is that they get to be bundled with Chrome (heyooo Pocket). I think you could make a case that maybe it shouldn't be necessary and that a sandboxed webpage with local network access is better than an extension or native app that isn't sandboxed at all, but tomato potato.
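For the curious, the native-messaging route looks roughly like this from the extension side. This is only a sketch: the host name "com.example.cast_helper" and the message shapes are made up for illustration.

```typescript
// Sketch of a cast-style sender talking to a native helper via Chrome's
// native messaging API (chrome.runtime.connectNative). The host name and
// message shapes below are hypothetical.
const port = chrome.runtime.connectNative("com.example.cast_helper");

port.onMessage.addListener((msg: { status?: string }) => {
  console.log("native helper says:", msg);
});

port.onDisconnect.addListener(() => {
  console.log("helper went away:", chrome.runtime.lastError?.message);
});

// Ask the helper (which has normal local-network access) to stream a URL
// to a receiver it discovered on the LAN.
port.postMessage({ action: "cast", url: "https://example.com/video.mp4" });
```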
> Google's insistence on using security as an excuse to remove functionality that they want to keep exclusive to their products is so blatant it either is or should be illegal.
The Google Chrome change looks like the more significant of the two factors that caused Portal to shut down, although it's a shame that they didn't continue running the service for users of other browsers. (Android file transfer apps that implement scoped storage offer similar functionality to before if users grant them access to the entire "sdcard" directory.)
Here are some FOSS alternatives to Portal that don't depend on Chrome:
- Syncthing (https://syncthing.net): Continuous P2P file syncing for Android, Linux, macOS, Windows, and BSD
- Wormhole (https://wormhole.app): File sharing from the browser through encrypted uploads that expire after a set amount of time or number of downloads
I use syncthing, but not peer-to-peer; I use it client-server-client.
My phone takes images so large - 108MP - that my normal service (an instance of the Up1 pastebin) hangs on complex scenes while encrypting in Android Chrome. I installed syncthing on a VM I already had with a public-facing httpd, and now I can just hit "share to syncthing". The linking part isn't automated; I type it by hand, then copy and paste it to recipients.
I looked around for another way to do this, including ssh, and syncthing fit the bill and has been running fine for about 2 years.
I have no idea what it's doing on the backend, though, and now I am wondering if my "node" is passing traffic for people who are not me...
I still prefer my pastebin service, even if it won't accept the largest images from my phone, because you can zoom forever, for some reason, whereas the Windows viewer, syncthing's direct JPEG MIME type, etc., will not actually let you zoom to 1:1 or beyond on 108-megapixel images. This also ignores the fact that my pastebin only has 200MB of tmpfs storage available, so perhaps 2 crazy images before I have to reboot it... Still worth the zooming, though.
I've taken pictures, standing in the middle of a road, of cars at least a quarter of a mile away, and you can zoom in so far that you can crop out the front end of a car and see what type it is. It looks like a very high-ISO, low-resolution image, but it's enough. The crop area is like 4 pixels on a 1080p screen when viewed fullscreen.
In my opinion, on a smaller screen, and especially OLED, the unzoomed and uncropped pictures always look better than those from the phone's other cameras, but it takes longer to take and save the shot, there's no "pro" mode, and the files, as mentioned, are massive, embarrassingly so.
Snapdrop has the edge over ShareDrop since its server can optionally be self-hosted using Docker and it doesn't depend on Firebase. I confuse Snapdrop with ShareDrop sometimes.
It's not clear to me that these changes make an app like this impossible. There would be a lot of implementation churn, and maybe some user visible differences, but I think the main issue here is they don't want to spend the time to rewrite the app, and maybe there's some fear of future changes breaking it again. Which is totally fair, churn sucks. But on the other hand there are real, serious security issues motivating these changes that should not be minimized.
Google, Apple and Microsoft have been making a lot of security-related changes in the OSes in the last few years. The result is that a lot of apps need updating or just no longer work.
It's not an issue for apps that are actively developed -- but it's devastating for side projects or open source apps that rely on volunteers.
Some changes (like new code signing requirements) might be possible to implement in a few days, but if you need to make bigger changes you are quickly looking at weeks of development time -- which is kind of a hard sell for people who develop software in their free time.
It's really killing my enthusiasm for mobile phones that all my old apps are stopping working. My sleep alarm and Tasker screen blocker fell prey to permissions/battery saving issues, and Swype and a few others shut down.
That's the impression I got from their post. It was a passion project (meaning goodwill; it costs more than it makes) that Google has forced them to rewrite, and that's not going to happen, because passion projects can only take up so much of the budget.
> Starting with Chrome 94, Chrome no longer allows HTTP requests to local IP addresses. Since Portal worked by accessing a local server running on your phone to do all file transfers locally over WiFi, this means Portal no longer works for anyone using Chrome. Since Chrome and Chromium variants have overwhelming browser dominance, this means Portal will soon no longer work for the vast majority of people.
The linked post (https://developer.chrome.com/blog/private-network-access-upd...) says "If your website needs to issue requests to localhost, then you just need to upgrade your website to HTTPS". Also, I just tested a Chrome extension on Chrome 94 and it has no problem sending a fetch API request to http://localhost so maybe that's an option too?
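Roughly what that kind of test looks like from a Manifest V3 extension; the port and path here are placeholders, and the manifest is assumed to grant host permission for localhost.

```typescript
// Runs in a Manifest V3 extension's background service worker. Assumes the
// manifest grants  "host_permissions": ["http://localhost/*"].
// The port and endpoint below are placeholders.
async function pingLocalServer(): Promise<void> {
  const res = await fetch("http://localhost:8080/status");
  console.log("local server replied:", res.status, await res.text());
}

pingLocalServer().catch((err) => console.error("request failed or was blocked:", err));
```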
"local ip address", not localhost. They're doing file sharing on the local network. This also means that the common 'install a custom root for localhost' https trick won't work because it's machine-to-machine communication - https everywhere specifically stops machines from communicating locally.
You could probably hack it with a relay server but at that point, why bother?
These are reasonable security measures, but it does bother me that there's no way around them. I don't use Android (or Chrome) so I don't know, but could anyone more knowledgeable about these moves chime in about whether it's possible to circumvent them with user permission?
HTTP is vulnerable to MITM attacks. For example your ISP can modify any page you visit, steal passwords, cookies, oauth tokens, etc. If the site is forced to switch to HTTPS, your ISP can no longer pull off this attack.
Full disclosure I work at Google, but not on Chrome.
Well maybe I want to grant the site access to a particular internal IP without the ability to send requests everywhere... at what point would Chrome just have its own separate firewall?
The modern browser is already an operating system by all definitions I know so yes, it might as well have a firewall.
But that isn't very elegant, so I propose version 2.0:
"This site has requested communication with a local device. Please select it from this list:"
[the user is presented with a list of devices discovered by zeroconf/dns-sd/mdns/upnp matching the service descriptor given by the page]
Turns out, this was an actual spec [0] pushed by Opera of all companies. Here's a quote from the spec:
> window . navigator . getNetworkServices ( type , successCallback [, errorCallback ] )
>
> Prompts the user to select discovered network services that have advertised support for the requested service type.
>
> The type argument contains one or more valid service type tokens that the web page would like to interact with.
>
> If the user accepts, the successCallback is invoked, with zero or more NetworkService objects as its argument.
>
> If the user declines, the errorCallback (if any) is invoked.
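Going by the quoted text, usage would presumably have looked something like this. To be clear, the API never shipped, and the UPnP service type string below is just an example:

```typescript
// Hypothetical use of the never-shipped Network Service Discovery API,
// following the spec text quoted above. The service type string is only an
// example; navigator.getNetworkServices does not exist in current browsers.
(navigator as any).getNetworkServices(
  "upnp:urn:schemas-upnp-org:service:AVTransport:1",
  (services: Array<{ name: string; url: string }>) => {
    // The user picked zero or more devices from the browser's chooser.
    for (const service of services) {
      console.log(`granted access to ${service.name} at ${service.url}`);
      // The page may now issue requests to service.url.
    }
  },
  () => console.log("user declined the device prompt")
);
```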
The problem with that is the site the user is on is http:// . Security at that point is pretty iffy. You kind of need to say "foo.com or some MITM attacker pretending to be foo.com wants to scan your local network. Allow Once/Deny/Block".
Full disclosure I work at Google, but not on Chrome.
Of course that would be a problem, but that simply isn't the case. It certainly wasn't the case for Portal or any other site I've seen that used this feature. Limiting this to only secure contexts would solve your problem and is what Google is technically doing. The issue is that requests from HTTPS sites to HTTP ones aren't allowed either ("Mixed Content"), and so effectively Chrome is disallowing ALL public->local requests, since HTTPS on a local site is broken by design.
> If your website needs to issue requests to a target server on a private IP address, then simply upgrading the initiator website to HTTPS does not work. Mixed Content prevents secure contexts from making requests over plaintext HTTP, so the newly-secured website will still find itself unable to make the requests.
The solution, of course, is clear to the Chrome devs: certificate pinning. Unfortunately, they implemented it in an absolutely ridiculous way that limits you to WebTransport, for no reason other than wanting to force developers onto it as soon as possible, before the other browser vendors (*vendor) implement it. As I mentioned in a sibling comment, there is no reason not to simply add a "CACert" option to XHR, fetch, WebSocket... and let developers use the existing protocols.
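For concreteness, the WebTransport route looks roughly like this. The address, port, and hash bytes are placeholders, and the local server would have to speak HTTP/3; the point is that the client pins the device's self-signed certificate by hash instead of needing a CA-issued cert.

```typescript
// Sketch of the WebTransport + certificate-pinning route: the client pins the
// local device's self-signed certificate by its SHA-256 hash. The address,
// port, and hash bytes below are placeholders.
const certHash = Uint8Array.from([
  /* the 32 bytes of the local server certificate's SHA-256 digest */
]);

const transport = new WebTransport("https://192.168.1.20:4433/transfer", {
  serverCertificateHashes: [{ algorithm: "sha-256", value: certHash }],
});
await transport.ready;

// Push some bytes over a unidirectional stream.
const stream = await transport.createUnidirectionalStream();
const writer = stream.getWriter();
await writer.write(new TextEncoder().encode("file chunk goes here"));
await writer.close();
```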
>It certainly wasn't the case for Portal or any other site I've seen that used this feature.
What do you mean? The only change Chrome is making is limiting it from http:// sites. If Portal is saying Chrome is breaking them, then they must have been making connections from http:// sites. The mixed content limitations have been there for a long time already, which must be why they were using http:// . I've never used Portal myself so I don't have direct evidence they were using http:// , but this is the only logical explanation I can come up with for why they would say Chrome is breaking them.
Wat? Won't Chrome allow this at all (e.g. when entered via the URL bar, assuming there is one), or just when linked from a non-same-domain page?
I believe the change is that http:// pages on public websites won't be able to talk (such as via XHR/fetch, or embedded images/scripts/CSS) to private IP addresses or to sites with domains that resolve to private IP addresses.
One obvious workaround is to change the requesting page to https:// instead of http:// . This is somewhat difficult because then there's the problem of mixed content, since it's often difficult to get an https:// certificate for the server that runs on the private IP address.
There are 2 other workarounds listed on the Chrome blog post: WebTransport, and Reverse embedding (which means running the site on a private IP address instead of a public IP address). It looks to me like those could work in the Portal case.
Full disclosure I work at Google, but not on Chrome.