Unfortunately it's not that simple. On my system with bridges for hyper-v and wireguard tunnels, the NCSI service happily ignores my default route and tries to establish connectivity through one of the other devices when resuming from sleep.
It wouldn't be that bad if it was just the status indicator, but several apps refuse to work properly if "internet" isn't detected. The workaround is to disable every other adapter in the system until NCSI is happy. I ended up binary patching the connection test service in memory to get it to always return true.
Why not point NCSI to a host that's always reachable, like a local web server? There's a lot of configuration you can do to the connection check service that won't get your antivirus all anxious (e.g. https://www.ghacks.net/2014/02/07/disable-customize-windows-...)
> but several apps refuse to work properly if "internet" isn't detected
For me it's a very bad code smell when developers check some sort of global flag ("has internet") instead of just performing the action (e.g. sending an HTTP request) and failing gracefully
The OP isn't asking those developers to duplicate the functionality but to remove it. It's entirely pointless for most applications and only breaks things when there is something wrong with the connection check itself - you still need to handle connection errors in either case.
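To make it concrete, here's a minimal sketch of the pattern being argued for (PowerShell; the URL is a placeholder): just attempt the request and degrade gracefully on failure, with no global "has internet" pre-check:

    try {
        # Just perform the action; no connectivity pre-check.
        $response = Invoke-WebRequest -Uri 'https://example.com/api/data' -TimeoutSec 5 -UseBasicParsing
        $response.Content
    } catch {
        # Degrade gracefully: show cached data, queue a retry, etc.
        Write-Warning "Request failed: $($_.Exception.Message)"
    }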
Although there's no source code, Microsoft's public debug symbol information makes it pretty easy to determine where certain functionality lies in most of their binaries. Then you overwrite the opcodes to get the desired functionality - the APIs are pretty simple: OpenProcess, WriteProcessMemory and you're done. The harder part is finding something to signature-match or similar, so that your patch doesn't break each time the DLL is recompiled and the offsets change.
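A rough sketch of that API sequence from PowerShell via P/Invoke; the process ID, patch address, and patch bytes are hypothetical placeholders you'd derive from symbols or signature matching:

    # Hypothetical sketch -- $targetPid, $patchAddress and $patchBytes are
    # placeholders from symbol/signature matching, not real offsets.
    $sig = '[DllImport("kernel32.dll")] public static extern IntPtr OpenProcess(uint access, bool inherit, int pid);' +
           '[DllImport("kernel32.dll")] public static extern bool WriteProcessMemory(IntPtr hProc, IntPtr addr, byte[] buf, int len, out IntPtr written);'
    $k32 = Add-Type -MemberDefinition $sig -Name NativePatch -Namespace Win32 -PassThru
    # 0x28 = PROCESS_VM_OPERATION (0x08) | PROCESS_VM_WRITE (0x20)
    $hProc = $k32::OpenProcess(0x28, $false, $targetPid)
    [IntPtr]$written = [IntPtr]::Zero
    $k32::WriteProcessMemory($hProc, $patchAddress, $patchBytes, $patchBytes.Length, [ref]$written) | Out-Null
    # Note: patching code pages for real also requires VirtualProtectEx first,
    # since executable sections aren't writable by default.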
He's been around a long time and likely built up a lot of good will. I strongly suspect that he does these blog posts as a side project and is encouraged by management because they are popular and people like them. It would be foolish of a manager to kill something that people like.
> WSL is extremely frustrating because it has so many bugs and gotchas, but I'd say it's pretty popular with its intended audience.
I understand why WSL2 is now just a HyperV'd Linux, but WSL1 is amazing despite its limitations. I'm honestly hoping that they re-consider deploying a properly-developed Unix personality again, but that ship has sailed.
> I'm honestly hoping that they re-consider deploying a properly-developed Unix personality again, but that ship has sailed.
Agreed. WSL1 was incredibly ambitious, and that alone made it exciting. And what it does manage to do, even in its unfinished state, is also impressive in its own right.
(FWIW, I actually had WSL2 in mind when I was talking about bugs. Switching to a VM-based approach solved some compatibility issues, but WSL2 definitely still has problems.)
On the Visual C++ side of things, STL (Stephan T. Lavavej) is also well-liked for working at Microsoft while being allowed to sound like a human. He helps moderate the r/cpp subreddit, and some other MS-ers post there too.
Doing this during a Windows 11 install will result in a roadblock until you fix said connection issues. You literally can't proceed without finding a hidden terminal, disabling the process, and restarting. Insane.
It was becoming common knowledge that you could disconnect your machine from networks during installation to use Windows with a local account (i.e. the normal kind of user account that people have used for decades). Microsoft wants people to register online during installation and to use the online account on all their computers, in order to increase the market share of the Windows app store and other Microsoft online services such as OneDrive and Office 365. So it's not malicious, Microsoft is just implementing their strategy to create more monetization opportunities.
I mean, sorry, but I consider that malicious. The OS can create a local account but doesn't give you that option, specifically to make you create a Windows Live account. That is almost the textbook definition of malicious for me.
What may be more surprising than their malice is your surprise at their malice. These 'dark patterns' are so numerous now it's exhausting to remain duly outraged. I hope there's a future (or alternate universe) where dark patterns, such as this, result in economic loss rather than economic gain.
They might be a Linux desktop user or a free software person in general. If you stick to F/OSS you might still live in a world where that stuff is pretty much entirely absent, experientially.
Then the outrage comes in full force, as a kind of culture shock, whenever some external situation requires you to do something like set up a proprietary desktop operating system.
> I hope there's a future (or alternate universe) where dark patterns, such as this, result in economic loss rather than economic gain.
I agree. I wonder whether that can actually be achieved through end-user savviness alone.
I bought my first Apple device (an iPad) two weeks ago out of necessity and even created an Apple account in advance, assuming Apple will force me to use it. I was very surprised to find a very obvious "Skip" button that let me complete the process with no account.
And just because other manufacturers do it, isn't a reason not to blame MS for what MS did. Nobody forced them to do this and it's not even common practice. The OS wasn't "built with online accounts in mind". It's something nobody asked for and everyone lived happily without before.
From a Windows perspective, Apple might seem somehow generous, but if you're not interested in Apple's cloud services or you're actually concerned about privacy, the Mac setup wizard is still a minefield of shit to opt out of. :-\
Are you able to e.g. install apps without an Apple account? I seem to recall simple things like that weren't possible unless you had signed into an Apple account.
Considering that on these Apple devices the Apple store is literally the only entry point for running actual programs on these devices, I would say it is indeed pretty equivalent to "not being able to setup the computer".
You might have bad information. Very little of the software I run on my Mac comes from the App store. Even if your assertion were accurate... You'd still be able to use Safari, and web browsing accounts for the lion's share of most people's computing needs. So the score is:
Windows 11: lay-users can do literally nothing of value with their computer without signing in to an account they probably don't want
MacOS: lay-users can use the internet and download software provided outside of the app store without signing into an account that they probably don't want
Calling these equivalent would be quite an exaggeration.
That is true. I guess I missed that we were comparing the operating systems completely across device categories. However, half of what I said still applies since I explicitly mentioned Safari.
I don't know, almost everyone who has an iPad in my family probably never used an app store. They just use it for browsing, facetime and calendar management - and all these apps are preinstalled. I'm trying to think what I have installed on mine that isn't standard, and it's pretty much.....YouTube? I could browse that through the browser if I wanted to.
Like, my point is that these devices are fully functional out of the box even without an account (but yes, it sucks that you need one to unlock it fully)
> Microsoft wants people to register online [...] in order to increase the market share of the Windows app store and other Microsoft online services
> So it's not malicious
I would have to firmly disagree there. Microsoft are pushing the notion that an online account with some identity provider is a necessity for home computing by hiding the (perfectly functional) option to create a local account.
- This is my procedure (with LAN) to obtain a local account on W10.
  Install Windows 10 and go through the OOBE:
  - Select Region
  - Select Keyboard Layout
  - SKIP Secondary keyboard layout
  - Network connection
    - ENABLE Allow PC to be discoverable
  - Setup = For personal use
  - Account = Offline account
    - like old times - a completely standalone PC
  - Sign in = Limited experience
    - ignore the nudging to make an online account
  - User account = <my initials>
    - as this gets used for the name of the home folder and I don't want my full name for that
  - No password
    - to avoid the nonsense so-called "security questions". A password will be set later *after* completing the installation
  - Location usage = No
  - Find my device = No
  - Diagnostic data = Required only
  - Improve inking = No
  - Tailored experience = No
  - Advertising ID = No
  - Customise experience = Skip
  - Set a password for the account
People are so quick to assume malice whenever a multinational conglomerate practices deception in pursuit of profit. Whatever happened to engaging in good faith?
I think most people don't realize this given how normalized it is. Wealth is a mechanism of distributing limited resources, and thus profit seeking without concern for externalities or creating value makes everyone else worse off.
If one's sole driving concern is making money, a whole lot of evil becomes possible. The other side of profit is power - those with more capital become more powerful. That's the reason Microsoft made this decision. They don't want the average person being able to easily make a local account, they'd much rather force everyone to register an account with them in order to even use their computer. It's just one further notch down the slippery slope of the end of private ownership and personal rights. A small notch, certainly. But it's a small notch in their overall endgame of putting general-purpose computing "back in the bottle" so to speak.
Certain powers in this world want there to be less regulation for the moneymakers in this world. Centralized wealth has been a cancer on humanity for millennia, and we're nowhere near putting the proper amount of shackles on capitalism.
It is absolutely malicious though, because Windows works perfectly fine with only a local account. And tbh it would still be malicious, actually even more malicious, if they managed to change Windows to make it not work anymore with just a local account. I suspect they won't, and that the bypassnro trick or something equivalent will remain for the foreseeable future. Maybe a regulatory authority will even force them to provide the offline option out of the box again.
That explanation is exactly what I meant by malice. Microsoft is trying to force people to do something to their own computers that they don't want to do and that isn't actually necessary to do.
It's worse than that. The account it creates is somehow "special". I have not yet figured out a way to connect to a computer with an MS account via RDP (yes, RDP is on and allowed through the firewall, the user is a local admin (the default)). Ditto for accessing that PC's shares.
FWIW, I just did this two days ago with a Win11 machine that I set up solely for remote access.
Created an MS account (because I want this machine to be as normal-user as possible), set up a PIN. Signed in with the PIN to the desktop, ran 'Remote desktop settings', flipped the 'Remote Desktop' toggle to on, and affirmed the prompt that asks if you really want to do this.
After that no issue RDPing to the machine by IP or hostname from another machine on the same LAN. Username and password is the same as the MS account I first signed in with.
(For reference, Windows 11 22H2 running on an HP Prodesk 600 G5, RDPing from macOS using Microsoft Remote Desktop 10.7.10 installed via App Store.)
You have to delete the PIN it forced you to create during OOBE. This forces the system to apply your MSA's password to the actual account. Without this step, the account has no password. You can still recreate the PIN afterwards and it won't delete the password.
PIN is only used for local logins because it's part of Windows Hello, meaning it literally is the PIN to unlock the password credentials where they are stored in the TPM.
You can also try to join a domain, enter bad credentials, wait for the error to show up, and then select a local account I believe. That may not be available in the home edition of Windows 11, though.
IIRC you don't need bad credentials. It just offers to create a local account, without even asking to connect to the domain.
But yeah, I'm pretty sure the domain join is only an option on the pro and enterprise editions.
I've also found out that the domain join is only offered if it can contact the internet. I installed this on a brand-new laptop the other day, and it didn't detect the wifi card and it had no wired network. It absolutely refused to go past the "let me connect to the internet" phase until I went through the "hidden terminal" dance.
The "domain join" is misleading. It does not really join the domain, it just creates local account. Joining domain has to be done manually after installation. (Otherwise, network connection is a logical requirement for for domain join, you need to contact DC after all.)
This is in stark contrast with current Linux desktop distributions, which do allow domain join straight from their OOBE.
The Windows experience is getting worse and worse. On my company-provided Windows 10 PC I found games installed remotely. On a professional PC!
A friend of mine found Spotify installed [1]. That's insane.
One of the companies I was in actually blocked the entire MS Store through policy, but this results in most of Windows 10 going batshit insane (e.g. the Settings app will not even allow you to change keyboard layouts, because to retrieve language files it must go to the MS Store).
That reminds me of when they made it impossible to get rid of IE by making it a hard dependency of most of the rest of the OS. Why won't they get in as much trouble for this as they did for that?
I’m not saying it makes it okay, but the “Professional for Workstations” edition does not have any sort of sponsored start menu items or auto-installs.
Not that I know of. But it won't install on "normal" PCs, it requires "workstation-level" CPUs. If you install it on a regular PC, it will just revert to regular win 11 pro.
I don't know if this changed with the 22h2 upgrade, but it's the behavior I'd noticed before.
That’s not the case at all, and never was. I remember when Microsoft gave out free upgrade licenses for Windows 10 Workstation when it first came out, and it’s just an upgrade.
The weird CPU spec thing is that Microsoft doesn't let you sell a high-end desktop computer with "regular" Windows Pro, and requires instead that it comes with Workstation. But you can run either version on any grade of CPU, as long as it runs Windows in the first place.
Is your company big enough to be able to afford Windows 10 Enterprise instead of only Windows 10 Pro? Because on the latter, you can't stop all of that from being installed.
I am similarly bewildered by everyone going along with it. I am not in on whatever cruel joke is being played on us.
My best guess after turning my head 270 degrees, closing one eye, and squinting the other is that the noun is Microsoft/Windows, and the verb is running the ncsi daemon and having it fail the check.
Yes, can confirm. I was installing Win11 on my brother-in-law's new PC that I built last week, and there were no default drivers for the USB wifi adapter I had attempted to reuse from his old machine. The entire process was roadblocked wanting me to plug into Ethernet until I researched the secret keyboard combination that allows you to shell out and then restart with the ability to do a network-free install.
I was using a Win 11 Pro installer stick on a Surface 4 the other day paired with a USB Ethernet interface. Aside from the fact that keyboard and mouse don’t work, because Windows doesn’t have drivers for a Microsoft Surface during install, I was able to convince Microsoft that I have no connection by unplugging the Ethernet cable right before the Microsoft account login prompt. Maybe I was lucky? Or a really old installer? (Got it from Amazon Japan)
It’s definitely not an OS I would like to make my daily driver. I just hope Apple won’t go down that road with macOS.
I ran into this a few weeks ago with an Intel 12th Gen NUC. The Windows 11 install media does not have drivers for the LAN nor the WiFi so it just sits there saying you need to connect to a network to complete setup.
How on earth did Microsoft okay releasing a Professional version of their OS that offers no suggestion on how to finish the install when no network devices are detected?
A simple Google search found me the answer, but it is piss-poor UX to offer zero options when it knows there is no network interface to enable.
People like to joke that you still need the terminal in Linux, and yet I couldn't even install the brand new Windows 11 on a computer without needing to open a command prompt using a keyboard shortcut and enter some cryptic command which rebooted my machine and enabled some hidden option.
Whenever I access a public Wi-Fi with login / captive portal, and the portal doesn’t show up immediately, I enter captive.apple.com in Safari to trigger it! Works every time (iPhone)
Same for Android, which always calls out to Google every time the device connects to WiFi. At least you can change the captive portal URL via ADB; I'm not sure if iOS lets you do that too.
I suppose you could spoof having full Internet access by hosting ncsi.txt or connecttest.txt locally and editing your hosts file to direct www.msftncsi.com or www.msftconnecttest.com to 127.0.0.1? Conversely, if those Microsoft websites ever failed, countless Windows machines would determine that they have limited or no Internet access.
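Something like the following would cover the HTTP half of that spoof (a sketch; run elevated, and note NCSI also performs a DNS probe against dns.msftncsi.com, so the hosts-file trick alone may not fully convince it):

    # Point the probe hostname at localhost (requires admin):
    Add-Content "$env:SystemRoot\System32\drivers\etc\hosts" "127.0.0.1 www.msftconnecttest.com"
    # Serve the exact body NCSI expects from connecttest.txt:
    $listener = New-Object System.Net.HttpListener
    $listener.Prefixes.Add('http://+:80/')
    $listener.Start()
    while ($true) {
        $ctx  = $listener.GetContext()
        $body = [System.Text.Encoding]::ASCII.GetBytes('Microsoft Connect Test')
        $ctx.Response.OutputStream.Write($body, 0, $body.Length)
        $ctx.Response.Close()
    }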
In the linked comment section, Raymond Chen replied somewhat abrasively to a comment similar to this (the comment was "wouldn't this be easy to spoof?").
Yep, exactly. Another part of this is that NCSI is used for captive portal detection, so Windows can/will notify the user that they need to do something more to keep using the network.
Android and Chrome do the same sort of thing to detect internet access; this is how Android pops the notification to sign in to the network.
Then, at least on Windows, the results of NCSI flow down into WinHTTP and a ton of other things so apps can know the status of the network.
It's also possible, via Group Policy, to configure a different URL for NCSI. This is useful in enterprises which may not have the NCSI URL available to unauthenticated things (eg: the OS) but still has internet access via proxies.
It's also possible to disable NCSI, captive portal detection, etc, which is useful on some closed network boxes (eg: some enterprises) but this will cause problems if the machines are ever used on public/walled garden/captive portal networks.
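Both of those knobs ultimately land in the registry; a hedged sketch using the documented NCSI value names (the internal hostname is a placeholder):

    $ncsi = 'HKLM:\SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet'
    # Point the active web probe at an internal endpoint instead:
    Set-ItemProperty $ncsi -Name ActiveWebProbeHost    -Value 'ncsi.corp.example.com'
    Set-ItemProperty $ncsi -Name ActiveWebProbePath    -Value 'probe.txt'
    Set-ItemProperty $ncsi -Name ActiveWebProbeContent -Value 'OK'
    # Or disable active probing entirely (breaks captive portal detection):
    Set-ItemProperty $ncsi -Name EnableActiveProbing   -Value 0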
The biggest problem I've seen with this comes about where captive portal detection is disabled, a user ends up on a captive portal, tries to hit a website to satisfy the portal, but due to most sites that normal users will try being https these days can't get their session redirected in order to display the portal, so they think "the internet is broken". The NCSI/captive portal detection makes a point of using HTTP so captive portal redirection can work properly.
It's interesting: when I open http://www.msftncsi.com/ncsi.txt directly in Chrome, it triggers a "Translate this Page" URL bar icon and prompts to translate between Hungarian and English. All that's in the content of the page is `Microsoft NCSI`.
Google (ChromeOS, Android) does a similar thing by checking http://connectivitycheck.gstatic.com/generate_204 and expecting an HTTP response status of 204; otherwise it assumes there's a captive portal or something blocking Internet access.
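You can replicate that probe by hand; a quick sketch:

    $r = Invoke-WebRequest 'http://connectivitycheck.gstatic.com/generate_204' -UseBasicParsing
    # A captive portal typically intercepts this and answers 200 with a login page.
    if ($r.StatusCode -eq 204) { 'direct internet access' } else { 'likely captive portal' }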
Particularly for unimportant things like authentication. There must be at least a dozen redirects when you log in with your MS account online, none of which is a microsoft.com domain.
Given the sheer volume of connections to the test servers, wouldn't it make sense to have the content of the text file as minimal as possible? Such as "1" or "OK".
You're correct, at global scale it's still peanuts (and of course it's not a single server but redundant clusters of servers).
I have a problem with your math though: I'm guessing that with the overhead and the pretend-IE headers it could be a whole Ethernet packet of up to 1,500 bytes (which can add up to 1,500TB/day, but realistically it could be around 500TB).
My math's fine; I was comparing it to the comment's suggestion of sending "1" instead of "Microsoft Connect Test" as the response body. The headers should be the same.
But I guess you could say the Content-Length header would be 1 byte longer when the content length is in the double digits (which it is).
Two things I’ve always wanted to do are figure out how to cache Windows updates on my local network without an enterprise Windows install, and how to block Windows updates by sinkholing domains and/or IP addresses (I work in IT security).
Set up a SOCKS5 proxy (e.g. github.com/rofl0r/microsocks) on the nearest router and configure the router's firewall to drop all outgoing packets whose TTL is near 128 (Windows). Then configure FoxyProxy in Firefox or Chrome to use your SOCKS5 proxy. Windows will think it's offline; the browser and other apps which are aware of your proxy will work fine.
The nuke-it-from-orbit approach works for me but ymmv: a default-deny firewall for the Windows IP on the default gateway with external squid proxy for Firefox. netstat -on | grep $PID to add rules to allow access per process for things that just have to get through.
I'm surprised "in-home router-level network caching" hasn't become a thing, really. Lets say you have a family of 4, all with iphones that need updating, windows updates, downloading same games from steam, app store, etc. It could be significantly sped up for whole house to download file 1 time instead of 4.
I believe Microsoft uses BitTorrent to distribute updates. It took me a while to realise that many Linux distros use unencrypted HTTP to enable caching, using signature checks to verify file integrity.
Sometimes Windows installs an upgrade that insists I must connect my user account to a Microsoft account. It will not let me boot the OS if I don't. Only hell if I know what my Microsoft account is. I never use it. I need to use my web browser to find out. But I can't, because I need to set up my Microsoft account first. So I have to use another computer which will let me use it even without a Microsoft account, and then try to figure out my Microsoft account password. Then boot into Windows, let it connect the accounts, go into account options and try to find the hidden dialog to separate them again because hell fucking no I don't want Microsoft to associate my user account with my email address.
Being shafted like this every now and then has eroded my trust for Windows' updates.
Remember that security vulnerabilities in Windows are discovered all the time, so it's dangerous to use Windows without installing the updates. If you (rightfully) don't want to install the updates, then you should switch to an OS that actually respects your freedom instead, like Linux.
Because, working in security, I sometimes want to test malware against outdated AV. Blocking the full internet causes command-and-control failures, creating a weird spot for analysing traffic. Disabling Defender is not persistent (it seems to switch itself back on, etc.).
Would you like to describe your standard practice? I am interested in implementing this after windows updates have killed our workstations multiple times.
Is there a nice description / workflow / tutorial / script / community where I can learn how to do that?
I did not find any recommended workflow for this from Microsoft itself, but maybe I was searching for the wrong things - Windows updates are generally a bad topic to research anything about. I expected to find some standard workflow description plus tools on some MS website, but no success. Does that exist?
You are looking for WSUS (Windows Server Update Services). If you have Windows Server somewhere, you can add WSUS role to it and use group policies to point your clients to it for updates.
Then, in WSUS console, you set up approvals for updates and then the updates will be offered to clients only once you approve them. You can divide the clients into groups and manage the approvals for these groups individually, so you can have a separate testing group.
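For reference, the group policy that points clients at WSUS boils down to a handful of registry values; a hedged sketch (the server URL is a placeholder; 8530 is the WSUS default port):

    $wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
    New-Item $wu -Force | Out-Null
    Set-ItemProperty $wu -Name WUServer       -Value 'http://wsus.corp.example.com:8530'
    Set-ItemProperty $wu -Name WUStatusServer -Value 'http://wsus.corp.example.com:8530'
    New-Item "$wu\AU" -Force | Out-Null
    Set-ItemProperty "$wu\AU" -Name UseWUServer -Value 1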
Too bad it almost never works correctly. Even now it says I have no Internet while I do and can access those txt files.
I've used thousands of Windows machines over the last decade; this is typical. You can ignore this "feature" almost entirely.
If you introduce a proxy into your system, you can be certain that it will not work, including Windows updates. You have to massage your system with netsh commands and learn about the WinHTTP proxy (which nobody has heard of) for it to sometimes work.
To deal with this and other nuisances I made 2 functions in PowerShell:
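Presumably something along these lines, wrapping the real netsh winhttp commands (a hypothetical reconstruction, since the originals aren't shown):

    # Hypothetical reconstruction -- the original two functions aren't shown.
    function Set-WinHttpProxy ([string]$Server) {
        netsh winhttp set proxy proxy-server="$Server"
    }
    function Reset-WinHttpProxy {
        netsh winhttp reset proxy
    }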
It works perfectly well for 99.9999% of the users. The few exceptions are people with proxy servers, virtual machines and other exotic network configs.
That is not my experience, and you can't just imagine a number like that.
I am currently on a basic OS install without anything in between and it doesn't work. I just switched from my home router to my phone's hotspot and it's the same.
It seems it's not only a Windows problem. Viber desktop has an exclamation icon that persists until it is restarted; Mattermost works, Signal works, etc.
If they don't load for you, you probably modified your install with ShutUp10, block Microsoft domains using your firewall or /etc/hosts, or something else, and you're out of support.
I've seen this problem pop up randomly: with three identically configured VMs, two work fine and the third causes trouble. Good luck finding out why.
Mate, you live in a bubble (as everyone else does). There are over a billion Windows PCs out there; how many of them do you think have Hyper-V enabled? Keep in mind that even among the small minority of users who have a hypervisor installed, VirtualBox, VMware and QEMU are a lot more popular than Hyper-V.
> Keep in mind that even among the small minority of users who have a hypervisor installed, VirtualBox, VMware and QEMU are a lot more popular than Hyper-V.
Keep in mind that these do require Hyper-V nowadays. Especially if your Windows 10/11 has virtualization-based security enabled (mandatory in 11); then using Hyper-V is the only way to virtualize anything.
Hyper-V is also a requirement for WSL2.
So given this, Hyper-V might be enabled on a good chunk of these billion PCs.
It's not that Hyper-V would be my first choice either. But it is not exotic configuration at all, and the fact that it is a first party product which breaks this makes it even weirder.
It might be a bubble compared to the entire planet, but it's still a pretty big bubble. Once you have a hundred thousand users, you have a responsibility to deliver.
Especially because people in that bubble are the ones moving the others forward.
I'd love to know how Apple decides if a Wifi network is usable or not. For some reason my home network is flagged as a "Mobile" network and people connected to it can't update to the latest version of iOS etc.
Is it because I've also created a Mobile Hotspot with the same SSID on a spare mobile phone I have, so my family can use their iPads out and about without having to connect to a new Wifi network? (i.e. it just works)
Is it because for some devices on my network, I DHCP them a different DNS server so they get adblocking via AdGuard home?
Who knows. It's so annoying. The fix if you want to update iOS on my home network is to connect to the Wifi Network called "F*kApple" which is exactly the same network as normal, but with a different SSID. Because that works just fine.
Also, F*k Apple.
PS: Also F Google because trying to search this problem just gives me the most infuriatingly childish "How to fix!" articles.
DHCP option 43 is often used to indicate if a mobile network is metered. (May be other ways, this is the one I'm most familiar with.) I'm guessing your "spare mobile phone" is Android?
Apple caches a lot of info about networks it connects to. So it's probably caching that it received this option and "knows" that network to be metered.
Best solution for this is to have either more control over your mobile network so it's not sending that option, or more easily, name the mobile network something else from your home one.
Android uses a vendor DHCP "option 43" set to ANDROID_METERED or something like that. If you Google for something like "iOS set wifi to metered", it looks like there's a user accessible setting.
And what about the origin behind the CDN? Seems like it's still rather critical and would have migrated across multiple physical hosts over the past 30 years.
This sort of thing is fascinating, and I do something similar at a much smaller scale at my day job, keeping a simple service running that enables the rest of the stack to survive.
No clue on QPS, but it can't be terribly high. It is a single request, compared to the 200+ requests served when someone loads something like CNN.com.
I doubt there is any origin at all. Since the response is so small and never changes it is likely hard coded into the config. Maybe it is a file on a shared object storage.
You can manually configure the network adapters from the Hyper-V UI. I don't know if you can set a priority order, but you can definitely configure the interface Hyper-V uses for VM network connectivity.
In my experience it is a selectable option when creating a new virtual switch but is not modifiable for the default.
I have my machine directly connected to my underpowered Remote Desktop client via ethernet, and it always picks that connection instead of the WiFi that actually has internet until I disable ethernet (or maybe the virtual adapter created on top of ethernet, I forget every month) and reboot.
I don't know if it solves your entire use case, but in PowerShell you can change the interface metric number to rearrange connection priorities (https://learn.microsoft.com/en-US/windows-server/networking/...). You have to watch out to pick the right interface (Hyper-V takes over packet routing from your real network adapter in many cases) but it may be worth looking into. Higher metric number means lower priority, so setting the bad connection to a metric of 100 and the real uplink to a metric of 5 may resolve your problem.
In Windows 2000-8.1 the control panel GUI was the standard way of accomplishing this, but in modern Windows 10/11 I doubt Microsoft has the setting still accessible. There are still guides out there with screenshots, though: https://www.windowscentral.com/how-change-priority-order-net...
The automatic metric detection system bases its decision on network speed (https://learn.microsoft.com/en-us/troubleshoot/windows-serve...) so virtual 10gbps adapters can cause problems if you use a common network adapter and the custom settings for Hyper-V and such get messed up.
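A sketch of the commands involved (the interface aliases are examples; take yours from the first command's output):

    # List interfaces and their current metrics:
    Get-NetIPInterface | Sort-Object InterfaceMetric |
        Format-Table InterfaceAlias, AddressFamily, InterfaceMetric
    # Prefer the real uplink (low metric), demote the virtual adapter (high metric):
    Set-NetIPInterface -InterfaceAlias 'Wi-Fi' -InterfaceMetric 5
    Set-NetIPInterface -InterfaceAlias 'vEthernet (Default Switch)' -InterfaceMetric 100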
This is the kind of thing that comes up whenever I've worked on an app and the client asks us to report server connectivity to the user. Despite what the OS tells you, you basically don't know anything until you actually try to use the network, so I'm not surprised at all that this is what Windows does.
The annoying thing is that some MS programs detect internet connectivity in their own ways (or do networking via some custom network stack) that do not work properly. Each time I had a proxy like Fiddler running, Outlook would think I was offline. Everything else worked fine.
Why did they change the domain after Windows 8? The new domain name is more verbose, but that doesn't seem like a valid reason to me to invest time in this. Anyone know?
One reason for that is explained well in a comment on the article.
Note that as with the Windows version, the protocol is HTTP, not HTTPS – because captive portals completely break TLS, but plaintext HTTP will result in a clean redirect to the portal, allowing the network service to detect the presence of the portal and to bring up a browser window to let the user authenticate.
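That behaviour is easy to observe by hand: do the probe over plain HTTP without following redirects, and a captive portal shows up as a clean 3xx instead of a broken TLS handshake. A sketch:

    $req = [System.Net.WebRequest]::Create('http://www.msftconnecttest.com/connecttest.txt')
    $req.AllowAutoRedirect = $false
    $resp = $req.GetResponse()
    $body = (New-Object System.IO.StreamReader($resp.GetResponseStream())).ReadToEnd()
    if ($body -eq 'Microsoft Connect Test') {
        'full internet access'
    } elseif ([int]$resp.StatusCode -ge 300 -and [int]$resp.StatusCode -lt 400) {
        "captive portal, log in at: $($resp.Headers['Location'])"
    }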
Yeah, and there's also no guarantee the computer's certificates are even up to date (e.g. the first time you connect a PC after a fresh install off older media)
Also no guarantee the computer's clock is set ballpark accurately (which TLS requires), which can be relevant if Windows is checking for Internet connectivity before (for example) using NTP to update the computer's clock.
This is why (until very recently) Windows updates were distributed over HTTP - the only benefit of TLS there is real-time error checking (and only because there are stateful HTTP proxies that can mangle files).
The whole point is to determine if you have full internet access, so you want to make sure that an HTTP request returns the data you're expecting. You may be able to get DNS responses but not have full internet access, like when on a public wifi that redirects all requests to a login page.
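In other words, name resolution succeeding is a much weaker signal than the body matching; a small sketch of the difference:

    # Behind a login-walled network, DNS usually still resolves fine:
    Resolve-DnsName www.msftconnecttest.com
    # ...so the probe only counts as success when the fetched body matches the
    # expected string exactly; anything else means interception or rewriting.
    $body = (Invoke-WebRequest 'http://www.msftconnecttest.com/connecttest.txt' -UseBasicParsing).Content
    if ($body -ne 'Microsoft Connect Test') { 'no full internet access' }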
I mean really, what does it mean to you, or to Windows, that I have "full" access to the Internet?
For me personally, I only visit a few walled gardens, so as long as I had my Google, my Wikipedia, and my work-related sites, I wouldn't miss 99.99% of the Internet anyway.
But what if your ISP blocks a whole bunch of ports? What if Adware has taken over 33% of your DNS space? What if you're behind a Great Firewall of <Dictatorship>? What if there's some sort of Balkanization or segmentation of your side of the 'net and you can't reach a lot of stuff? What if Cloudflare's down again?
Yes, "full" internet access is a hard to define term and this simple test doesn't cover all possible cases. But it will work in probably 99.99% of cases and honestly I think that's good enough. If it doesn't work you end up with a little "no internet connection" icon in your taskbar and that's about it. You can still use the connection, so I don't think it's a big deal that this test isn't 100% accurate. It's still more useful to the majority of users to have a slightly inaccurate test than to have no test at all.
Unless you know what the DNS request is "supposed to return", you can't know that just getting any DNS response indicates that you have full Internet connectivity.
Sorry, you were imagining the use of a TXT record (I assume) and that would work better than an A record inquiry that I was considering.
It still wouldn’t catch HTTP filtering, but it would work better than I initially gave it credit for. (I still doubt it would give a contextually correct answer for an airplane wifi connection [where DNS may very well work but few other services do if not paid for].)
Mindless routers (of which there are frighteningly many) cache the results and won't show the true state of the upstream connection, which is an important thing. There are far fewer transparent HTTP proxies that would disrespect no-store than there are mindless routers trying their best to cache results.
You've got it exactly. This is also part of Windows captive portal detection, which makes what you say even more important. HTTPS would actually be a step backwards here.
I realized a while back that msftconnecttest.com is the domain it uses to check online status, and the domain whose background check gets redirected by wifi captive portals to pull up the login page. Any time I have an issue with a captive portal, I use that domain and the redirect works, because I know any other URL will end up with a certificate issue and I won’t be able to get to the portal.
What if your ISP hijacks your DNS (pretty common) and someone were to poison it so that it serves malware instead? That would mean thousands of Windows PCs downloading this malware just by connecting to the internet.
Then the string compare would fail and the little icon in the bottom right would show an exclamation mark.
This can be a problem if there's some kind of critical vulnerability in the Microsoft HTTP stack, but I don't think this attack vector is all that relevant.
Same with other captive portal detection endpoints, there's very little actual parsing going on with these requests.
Windows Updates are downloaded via HTTP but signed in the packages themselves. This is why Delivery Optimization (peer-to-peer distribution) can be used. HTTP downloads for WU are also good because they allow upstream proxies to cache the content, reducing overall network load.
Thus: Hijacking WU to download malicious content takes far, far more than just DNS hijacking. You'd also need to subvert the WU signing system. (This is more nation-state level stuff.)
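A quick way to see that trust anchor in action is checking a downloaded package's signature rather than its transport (the path is an example):

    # The Authenticode signature, not HTTP/TLS, is what establishes trust:
    Get-AuthenticodeSignature 'C:\Temp\windows-update-package.msu' |
        Select-Object Status, SignerCertificate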
Really unfortunate terminology that molds people's ideas of the internet; being able to access a website is pretty far from having full internet access. Calculated or not, it was also in MS's interest to lead people to think this way.
It should check if you are behind NAT and then say "Your computer doesn't have full Internet access but can reach some services via a gateway"
Even without NAT I don't necessarily want any old program to be immediately reachable from the outside, so I still want a default-deny inbound firewall, and as long as it happens under your control, there's not much of a difference between having to configure my router's NAT and having to configure its firewall (in the case of my home router it's literally the same settings page). I.e. no big deal for me, and still a bit of a struggle for non-techies (but with sufficient motivation some will still manage it).
(And if you want something like UPnP to let programs automatically punch holes themselves anyway, again it doesn't matter much whether we're talking about NAT or "just" a plain firewall.)
The true evilness of NAT only really comes in when it's done by some third party outside of your control (CGNAT and friends), but I think that compared to home routers doing NAT the latter is a slightly more recent phenomenon that only got widespread traction when the IPv4 shortage became more acute.
Are you referring to port forwarding? This can work around only a small part of the stuff NAT breaks, and even for the cases it covers it's a major barrier to application adoption. A new application relying on network effects needs to work for the vast majority of users to be able to take off. If you prevent 30-50% of users from adopting it, it's not going to take off, for example in gaming or communications/sharing apps.
For example, port forwarding doesn't help the evolution of new internet protocols. It prevents replacing TCP with SCTP, or deploying end-to-end IP-level encryption (like IPsec attempted). Or a myriad of other decentralized or security-enhancing inventions that depended on the end-to-end nature of the internet architecture and have never gotten off the drawing board because they are not NAT-compatible.
(And of course the majority of users behind NAT are in fact behind third party controlled NATs)
Well yes, NAT might pose some additional constraints, but my main line of argument is that even in an alternative timeline where we never had the IPv4 address shortage - and therefore no pressure to develop NAT, because every device could be assigned its own address, just as is possible now with IPv6 - we might still have ended up with default-deny-inbound firewalls for home networks anyway, because it might have turned out that letting random programs run world-accessible servers on random computers without any special user authorisation isn't such a good idea.
IPv6 doesn't require NAT, but my bog standard home router still firewalls it, and I need to manually allow inbound connections (or give up and just use UPnP).
Yes. Default-deny-inbound firewalls are much better than NAT because they are meant to provide security, can be managed (the "default"), and don't prevent deployment of new protocols. Also, we might have had a standard way to manage firewall policy in this alternative future (UPnP was done outside the IETF, and is a tire fire, because the IETF rejected NAT).
But also the whole posture of a home network and consumer os might have been different without NAT, maybe the host based firewall would have won out, who knows. In the alternative universes we can't assume other things remain the same.
I do not want my home computer to be exposed to the Internet. I do not want your fancy new Internet apps, the existing ones with explicit user-initiated connectivity are more than enough for 99% of people.
And even if you somehow have a non-NAT, non-CGNAT, no-ISP-filtering home connection, do you have full Internet access if the server behind NowhereNews.com refuses all your connections because you’re in Europe?
You probably know this but NAT is not the same thing as a firewall. You can have one without the other or both. Just because your machine is addressable doesn’t mean it is accessible. You can have machines on your home LAN that have public IP addresses but are not publicly accessible. NAT exists because historically ISPs didn’t give out blocks of public IP addresses, and now that they are running out of them, they are expensive. It’s not really a security measure.
Yeah, I know, but NAT’s side effect of preventing all sorts of remote access is quite convenient, I don’t have to trust the cheap router or cheap internet of shit device to do the right thing firewall-wise.
This is a non sequitur. Your home computer being "exposed to the internet" is orthogonal. And of course this is now enough for 99% of people because said new apps are prevented from coming into existence.
I was referring to when this was built, ages ago, and the reality this helped come about.
Today most home networks have NAT for v4, and then NATless IPv6 (or no IPv6, as the case may be).
Trivia: NAT is not routing, the normative router requirements RFC actually specifically forbids tampering with the IP source or destination address fields.
This ship has sailed now. A huge portion of internet users are behind double NAT these days. When you deploy a service on production, you'll have to assume your users are behind double NAT or CGNAT and add additional supporting services to mitigate them like STUN/TURN servers.
Not sure when Micro$oft will stop such abusive moves. Why does the user have to be able to connect to your endpoint to prove they are connected? Why keep nagging me to set active hours when I regularly use my computer 8am to 6pm?
You can disable the checks if you want. Any application that uses the operating system API/DBus to evaluate network connectivity rather than building its own bespoke online check will probably break if you do, though.
Because all of these have to use HTTP, you can also easily override the standard network addresses in your hosts file and pick your own server if that's what you prefer.
I don't want that sh*t to be there in the first place.
No one knows better than the end user. And besides, the user not being able to reach some M$ server does not mean the user's network was not connected to the internet.
Anyway, won't be a trouble for me much longer. Retiring all my Windows instances, had enough.
It's crazy; everyone else is just letting M$ control the narrative and falling for it.
Micro$oft is trying to define what is and is not 'internet' access; e.g. they turn the definition of 'internet' into unfettered access to the Micro$oft servers and your ability to send personal information over to them.
This really gets on my nerves! Some twit of a programmer (probably via a committee) "knows" whether a system has connectivity.
For some people Facebook is the internet and for others it is Outlook. Somehow a twiddler at MS has decided that a file on a web server at some wanky location is the internet and that's the final answer.
"I cn haz a file" is fine lovely internets! No it bloody well isn't. You should be constructing a response to a challenge from the other end, not a response to a simple GET like it's 1999.
I run quite a few systems/sites that have multiple internet connections - deciding whether the internet is available is quite a nuanced thing and that Windows internet detector is bloody stupid, naive and a fucking hindrance.
Define "internet" and then have a crack at defining "internet accessibility". Those things are quite specific to individuals. orgs and so on. Connectivity is way too complicated for a simple, naive check.
> Define "internet" and then have a crack at defining "internet accessibility". Those things are quite specific to individuals. orgs and so on. Connectivity is way too complicated for a simple, naive check.
This is asking for the true Scotsman. The check is designed for the 99% of users for whom internet access is a ternary "yes, no, needs credentials". The "naive" check is enough for them.
Also, as far as I know only Windows has defined settings to disable this check and assume that there's a connection (https://learn.microsoft.com/en-us/troubleshoot/windows-clien...). At least you can disable this if you need to, because Android doesn't have a similar setting.
But it works for 99.999% of use-cases as a business decision. What do you suggest Windows should do to make it a better user experience for their customers?
> You should be constructing a response to a challenge from the other end, not a response to a simple GET like it's 1999.
Really? Isn't simple better here? Why are you making this sound so crazy? Also, you can turn it off.
The author also had a good response to the spoofing question imo:
"So what if somebody spoofs it? Congratulations, you tricked Windows into showing a “full internet access” icon, and then when the user tries to go to a web site, they get an error."
You sound like a typical engineer who cannot see the bigger picture of business decisions.
> Somehow a twiddler at MS has decided that a file on a web server at some wanky location is the internet and that's the final answer.
Seems like it's the same in many linux distros as well as android and chrome os. It's right there in the article. So it's not really MS specific at all.
> I run quite a few systems/sites that have multiple internet connections - deciding whether the internet is available is quite a nuanced thing and that Windows internet detector is bloody stupid, naive and a fucking hindrance.
I'm not following this, where is the nuance? What problems does this cause and why are they so difficult to solve? Even if behind a strict firewall that allows only a few IPs or ranges (arguable that would still be the "internet" as commonly understood), couldn't you just override DNS to return the same file from your server?
I've been annoyed by iOS disconnecting from wifi when no access is detected and I'm just trying to stream something from the local network, but not by Windows.
When you have a router with multiple WANs, LANs, VPNs etc., routing can get a bit complicated.
For example, how do you tell traffic to go via WAN2 (or 3 or whatever) instead of WAN1 if WAN1 is really down (define "really down")? So you create a rule that says that all inbound on LAN is routed via a failover thing. That's fine, but now you've broken RFC1918 routing. You try to connect to a remote site via 192.168.lol and it's fucked.
So you now create a rule that forces 192.168.0.0/16, 172.16.0.0/12 and 10.0.0.0/8 to be routed via the usual routing table, and after that you have a rule that worries about internets and multi-WAN. Simples.
No of course it isn't that simple but it is quite close and good enough mostly!
There are several problems in search of a solution here. Is a WAN down? Usually you ping something. What do you do if the thing being pinged is down but the link is actually available, and how do you deal with that? It gets into charts of risk/reward at this point.
Huh? If you have multiple WANs, presumably you’re running an actual routing protocol (BGP if WAN really means WAN) and it’s solving your routing question based on its configuration and the routes announced by its peers.
Also, I hope you’re not relying on ICMP to tell you meaningful things about your relationship with the internet. It lies.
The beauty of Windows is that everything is configurable through the registry if you look up the documentation. Usually you don't want to configure everything so in these cases Windows comes with sane defaults, but Microsoft has a way to override the URL that's being checked. See this article from eight years ago: https://www.ghacks.net/2014/02/07/disable-customize-windows-...
If you need some complicated algorithm, write a quick simple web server that runs on 127.123.45.67 and does all of these checks for you when the magical portal URL is requested. Then update your registry to point to that IP (or use hacks like editing your hosts file) and you've just added your special logic to every WinHTTP application on your computer. You can even point Windows to an endpoint only reachable over VPN if you want so the Internet check becomes "is my VPN operational", though that may break the VPN software itself.
Microsoft did a good enough job for all normal use cases of the Internet. Bespoke use cases need bespoke solutions, and they provide the ability to set that up without hacks if you want to change the standard behaviour.