Twitter preconnects to the wrong domains (ctrl.blog)
221 points by weinzierl on Oct 27, 2020 | 149 comments



Twitter is so full of weird connectivity bugs.

For the last few weeks, until a few days ago, I was unable to access Twitter without doing a hard refresh on every click, because they installed a faulty service worker of some kind that would break most requests. (Both on desktop and Android.)

And on Android, every time I follow a link to Twitter in an app that opens in a web view, it gives me a faulty page that I have to refresh a few times before I'm able to view it. It loads the page fine, but some REST call or whatever that fetches the tweet fails.

Edit: I hadn't heard many others complain about this before, so I mainly thought it was something about my setup. But the huge number of upvotes this got suggests some Twitter engineers had better look into it.


> And on Android, every time I follow a link to Twitter in an app that opens in a web view, it gives me a faulty page that I have to refresh a few times before I'm able to view it. It loads the page fine, but some REST call or whatever that fetches the tweet fails.

I get this issue too, I figured it was part of some trick to make me want to use the app.


It does that on iOS too, every single time.


Insane that one of the most used websites on the internet doesn't work most of the time.


Maybe it works when navigated to directly, but the only time I ever go to Twitter is if someone links it from Reddit. And without fail that looks like this: https://imgur.com/fQJ27MF

The button to “Try Again” doesn’t work either, you have to manually reload the page.

This is in Apollo on iOS, but it sounds like the same broken behavior happens in web views on Android.


I thought this was intended: any page load that does not finish within x seconds is aborted. It will normally succeed for me on the reload with the cached assets. I understood this to be a (bad) optimization for rejecting slow clients.


It does that for me even _in_ the app, so no, it's just broken...


Thank you for mentioning this. I've been getting frustrated with my Firefox install for a while because everything is so slow and it eats so much memory. After deleting the 200 service workers [1] I didn't even realize were installed, it is taking wildly less memory and seems generally snappier. I wouldn't call it "night and day", but it's faster.

I had been under the impression that service workers required approval from the user to be installed. I had service workers from websites that I haven't visited in years and don't even exist anymore, sitting there chewing on my RAM every time I started Firefox.

Twitter has also been broken for me this way for months.

Service workers have some nice features, but I submit that they need to be something users explicitly whitelist, not something websites can just toss in there whenever they feel like it. The odds of one of them screwing up and eating far more resources than it has any right to approach 1 as the number of them increases. I don't know exactly how they managed to collectively eat 2GB of RAM and I don't care; there is no way they were bringing that much value to me, especially as I permit no website to simply push me notifications.

[1]: A suspiciously round number; it was exactly 200. Is that a limit? I don't see it in about:config.
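For bulk cleanup, here's a minimal sketch using the standard Service Worker API (the function name is my own). Run it from the devtools console on the offending site; it only affects that one origin, so it has to be repeated per site, and it doesn't touch the Cache Storage data itself.

    // Run in the devtools console of the site whose workers you want gone
    // (e.g. twitter.com); it only affects that origin.
    async function purgeServiceWorkers() {
      const registrations = await navigator.serviceWorker.getRegistrations();
      for (const registration of registrations) {
        console.log('Unregistering service worker for scope:', registration.scope);
        await registration.unregister(); // resolves to true if it was removed
      }
      console.log(`Removed ${registrations.length} registration(s).`);
      // Note: this does not clear Cache Storage; use "Clear site data" for that.
    }

    purgeServiceWorkers();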


As a data point I've got 299 service workers according to about:debugging#/runtime/this-firefox

Time to remove these. Thanks!


And thank you for the link! I was not going to find that one on my own...


Thank you for the hint about the service workers. I quickly checked my settings and figured out that thankfully Cookie AutoDelete [0] can take care of those too. Down from ~200+ service workers to a few with my existing configuration.

[0]: https://addons.mozilla.org/en-US/firefox/addon/cookie-autode...


I'm not a fan of service workers myself. Another team at a previous job added a service worker to a site that served many separate backend apps. It caused endless loading, caching, and crashing problems for my team, for no clear benefit.

If you want a cache, use HTTP caching, not custom JS code.
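For example, a minimal Node.js sketch (paths and max-age values are just illustrative assumptions) that gets the same caching behaviour purely from response headers, no service worker involved:

    // Caching driven entirely by plain HTTP headers.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.url.startsWith('/static/')) {
        // Fingerprinted assets: cache aggressively, never revalidate.
        res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
      } else {
        // HTML: may be stored, but must be revalidated before each reuse.
        res.setHeader('Cache-Control', 'no-cache');
      }
      res.setHeader('Content-Type', 'text/plain');
      res.end('hello\n');
    }).listen(8080);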


Not just RAM, I noticed on Android 11 that with the broken Twitter service worker installed, Firefox was gobbling up battery as well.

I ignored Twitter being broken for a bit since I didn't mind not using it. Then I realized my Firefox battery usage had essentially doubled and was draining my battery faster than normal. Once I cleared all Twitter site data including the service worker, back to normal.


So it wasn't just my browser or my internet connection. I used Ctrl+F5 in Firefox and it always worked, but on a normal refresh it would either display an error or the page wouldn't load at all.

I thought my IP had landed on some list where Twitter degraded the connection, and I wasn't even logged in.


Yeah, I got loads of these: "this tweet is not available to you", etc.

Recently, it appears that I can't load Twitter URLs with query parameters. The Guardian uses some really weird ones, but when I delete them the URL loads. Super weird.


I am relieved that I am not the only one. I thought it was something to do with my phone, the browser I am using or something else.

Happy to know that Twitter is just being Twitter.


In Safari it will just drop you to an error page 70% of the time you navigate to it through a link, I assume because it doesn’t get its tasty tracking data. You have to reload the page to get anything to show up.


On FF on Linux and Android it shows me that error message 100% of the time. Refreshing gets it to load.


Same here, now refreshing typically doesn't work.


Yes I get this too, for a long time. I’m constantly shocked that a major platform could be so fundamentally broken for so long lol


This has been my experience on mobile twitter for years, and in the last year on the desktop one as well. I thought it was a passive aggressive push for users to log in or install the official apps.


I have just given up on clicking on any Twitter link. I guessed a login was required now.


+1. I figured it was anti-abuse.


This has been happening for at least a year for me in Firefox. 100% of the time. It's rather odd.


Me too. On the first follow from an external website to a previously unvisited tweet, there's an error. If I open it again it works, and keeps working from then on. It also happens 100% of the time on new tweets.

Kinda funny that Twitter can't figure out such basics as opening a page correctly when following a link. This has been happening for months.

Web development is hard.


Loading a tweet really isn't that hard; it's just that Twitter's engineering team is incompetent. Even tiny little websites that custom-roll their own forums have less trouble loading their content than Twitter.


Yep, same. I used to get a message saying I'd been rate-limited, now I just get a generic error.


I have a theory that they do this on purpose so more people opt to install their mobile apps, which I strongly refuse to do.


I don't use Twitter except when I'm linked to it and for occasional searches, and it's been a huge step backwards with the new UI that was forced on everyone a few months ago. It even has a loading screen because it's much slower, and somehow also manages to show less content than the previous design.

If I remember correctly there was a brief period a while ago when they did use a JS-only "web app", but then switched back to simple HTML with JS enhancements (and I seem to remember they announced that change with much pride).

Of course, there is always mobile.twitter.com that is still static-only and quite usable without JS, but IMHO Twitter is the perfect example of how the "modern web" is doing less with more.


Go to about:serviceworkers and ctrl-f twitter and remove every instance. That will fix it.


TIL about service workers.

So there's a persistent cache of javascript that can't be cleared except one at a time? My cache is full of fly-by-night websites. Surely this is a shocking privacy hole?

Web browsers need to be more transparent about what state they are storing, and more accommodating of attempts to clear it.


> Surely this is a shocking privacy hole?

Also a performance/battery suck. Apple refuses to implement it in Safari, and it's one of the big reasons web developers deride Safari as "the new IE".


> Also a performance/battery suck

That's far from obvious. Good cache control and offline usage are great for battery usage.

> Apple refuses to implement it in Safari

Wrong, Safari supports them. What Apple doesn't do well is enabling webapps that you pin to your homescreen ("PWAs") to work (they allow it, but from what I hear it's buggy and has weird restrictions), because they'd rather force you to make a native app.


You can have good cache control with HTTP headers, no need to run scripts on the client for that.


As I understand the docs, they can be killed at any time, e.g. when you leave their site; then they aren't any different from page scripts. Well, it doesn't look like they solve any problem either.


The most crucial thing they solve is giving websites access to their cached pages: they can show cached content while also making a request for new data in the background, still provide cached functionality while the device isn't connected to the internet, and so on.

In the wider concept of "PWA" (Progressive Web Apps), they are also used to enable webapps to act a bit like local apps if the user opts in: a user can add them to the homescreen, and they are then launched just like normal apps (but still sandboxed in the browser). An example that does this quite well is the web IRC client "The Lounge": you can open and use it in the browser, but also pin it as a standalone app, including support for push notifications etc. if you allow that.
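As a rough sketch of that cached-content-plus-background-refresh idea, here's a generic stale-while-revalidate fetch handler (not anything Twitter or The Lounge specifically ships; the cache name is made up):

    // sw.js: serve from cache immediately when possible, refresh in the background.
    const CACHE = 'pages-v1'; // illustrative cache name

    self.addEventListener('fetch', (event) => {
      if (event.request.method !== 'GET') return;
      event.respondWith(
        caches.open(CACHE).then(async (cache) => {
          const cached = await cache.match(event.request);
          const network = fetch(event.request)
            .then((response) => {
              cache.put(event.request, response.clone()); // update for next time
              return response;
            })
            .catch(() => cached); // offline: fall back to the cached copy, if any
          // Cached copy wins if we have one; otherwise wait for the network.
          return cached || network;
        })
      );
    });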


Well, except there's nothing stopping a site from handing out a unique service worker with a hard-coded ID to every visitor. It's persistent state with behaviour, like a cookie that can do things.


Me as well; I had to look up what service workers are.

I agree about the transparency; I was not aware of service workers until this post. In the meantime, there are Firefox/Chrome extensions that manage service workers. Those extensions can block service workers from installing without the user's consent, which is nice.


The Cookie AutoDelete extension has an option to delete them. I'd highly recommend having that extension anyway!


>Surely this is a shocking privacy hole?

AFAIK it's fine because they get cleared when you delete cookies/site data, so it's no worse than a site using localstorage, for instance.


It doesn't answer how Twitter can be so bad at this, though. It looks like a lot of us have experienced this for a long time; I'm mystified why they wouldn't fix the bug.


Yup, that fixed it for a few days at least last time, at least the issue with the whole page being inaccessible. The issue with the page loading but not the tweet is still ever-present.


Wow, I guess this is common. This has happened to me for months using Chrome on Ubuntu with UBlock & Privacy Badger. I always assumed it was the extensions but I guess not.

Twitter is literally so buggy. Most of the time I have to refresh a page several times. Maybe it's because I don't have an account and refuse to use their apps?


I've seen similar issues. Who knew it was so hard to display 280 characters?


I've been experiencing something similar: every time I click on a Twitter link from outside of Twitter, Firefox errors with a "There has been a protocol violation" message, and a hard refresh fixes it.

I'm not sure whether it's because I've set up FF containers to open all twitter.com links in a separate container.


You can stop the error by clearing Twitter cookies.

Or as @miffe said, "Go to about:serviceworkers and ctrl-f twitter and remove every instance. That will fix it."


Service workers are hard to get right, and if they've, for example, updated the service worker, it might be damn near impossible to unregister the old one and install the newer one before the old worker is set to be cleared from the cache.
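For reference, the standard escape hatch for that situation is to ship a replacement worker that refuses to wait; a minimal sketch (assuming you can get a new worker deployed at all):

    // A replacement worker that takes over immediately instead of sitting in
    // the "waiting" state until every tab controlled by the old worker closes.
    self.addEventListener('install', (event) => {
      event.waitUntil(self.skipWaiting());
    });

    self.addEventListener('activate', (event) => {
      // Start controlling already-open pages without waiting for a reload.
      event.waitUntil(self.clients.claim());
    });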


Reminds me of an awesome talk at jsconf a while ago.

Outbreak: index-sw-9a4c43b4b4778e7d1ca619eaaf5ac1db.js https://youtu.be/CPP9ew4Co0M


I switched to using nitter.net for all links to Twitter. At least that trick works.


No freakin way... this whole month I thought twitter was broken... I figured it was some censorship related filter. Every visit (for me at least) requires a reload to work properly.


I've noticed something weird moving between Hacker News and Twitter in Safari on my iPhone 7. If I first visit Twitter and then go to Hacker News in the same tab, the favicon is the Twitter one but the tab says Hacker News. This doesn't happen with other sites like Google that I tested. It's quite odd; it's like the favicon for Twitter overrides the Hacker News one for the first page load.


My iPad does this except the icon is TechCrunch's. Not sure why.


Oh, so it’s not my browser being weird, thanks :) I have to refresh every tweet I load unless it’s opened within a few seconds of the first.


I see this a lot in Safari on iOS. I don't think it's happened in Firefox yet, which is weird because Firefox on iOS uses Safari's engine.


It's been like that for me for the past full year on iOS Safari. Surprised to hear it's reached Android!


This also happens to me. I just assumed that it was a dark pattern because I don’t have an account.


Been having this exact same experience for weeks, maybe months. Desktop on two OS’s and mobile.


I had seen it a lot with the Android client (I'm on the beta channel, so I've probably been experiencing it longer than most). Glad it wasn't just me.

Kudos to the OP for investigating and documenting their findings.

@Twitter: please fix! kthxbai


There are some websites that I just accept only work 60-70% of the time for me; Twitter is one.


It used to be the case for Reddit too, but they appear to have sorted out most of the issues.


I have the same issue with Firefox on MacOS. 3rd party cookies disabled, privacy protection on, uBlock Origin, and PiHole. I assume it's a poorly handled call to a tracking or ad domain that fails, blowing it up.


Oh, I assumed that was intentional, to make you sign up for an account or something. I get it almost every time; opening in incognito helps.


The issue on Android also happens for me. I honestly assumed it was a deliberate dark pattern to encourage installing the app


I get the same thing on my android. I always wonder how an issue like that can go on for years with no fix.


This, for me, was due to an expired cookie. You need to clear the cookies and local storage and log back in to fix it.

Not ideal


I have to refresh the page once or twice when attempting to load a tweet in Safari or Chrome on iOS, otherwise I usually get a cryptic "this tweet is not available to you" message or something similar. I also just assumed it was a dark pattern to get me to use the app.


> It strips out the www. prefix to make a ”display version“ of the URL. I have no problem with this, as the prefix is entirely meaningless to humans. It does serve important technical functions, however.

I know most people don’t know the difference, and it would generally be a bad idea to have your www not redirect to the bare domain (or vice versa), but personally I prefer when we don’t hide these things. Just a bit of pedantic correctness, I guess.

> I can’t look at older versions of Twitter, as its pages don’t work well in the Internet Archive’s Wayback Machine.

Now this really gets to me.


All of this gets to me. A www subdomain is _not_ necessarily equivalent and interchangeable with the apex domain; treating them as equivalent is presumptuous in the extreme.


I think for most common domains it's far too late to make a distinction; we already lost http:// and now www is pretty much gone everywhere. I wonder if we'll get rid of .com anytime soon.

While we’re on the topic, I wonder why some websites chose to redirect to www and some keep the apex domain as their homepage. I use the latter because it’s cleaner and, well, my domain is primarily used for my website, but I wonder if there is a technical reason behind choosing one or the other.


Besides the security questions people have already talked about, there's a sizeable number of non-technical people for whom "www" signals "type this into a browser address bar", whereas the plain apex domain doesn't give that context because it's also in your email addresses, other subdomains etc.

I don't think many people get seriously confused by it, but accessibility is for everyone and it just makes it that much easier for people to know what to do.


One reason is that you "can't" CNAME your apex domain, so redirecting foo.com to www.foo.com might give you a little more deployment flexibility. The servers hosting the redirect don't need to be super beefy like your main site, so just an A/AAAA record might not lose you much.

Just a guess.


Of course, if WWW browsers were like a lot of other software, there would be an SRV lookup of the domain, plus the protocol and transport, and the deployment flexibility would be in that mechanism. (-:

* http://jdebp.uk./FGA/dns-srv-record-use-by-clients.html

Problems are more along the lines of conflicts with internal Active Directory deployment and "split horizon" DNS service.

* http://jdebp.uk./FGA/web-allowing-omission-of-www.html

* http://jdebp.uk./FGA/dns-split-horizon-common-server-names.h...


There should be no scare quotes around can’t. CNAMEs on the apex don’t exist because CNAMEs are not allowed to coexist with other record types, and you can’t have an apex without NS records or SOA records or whatever else it may be.

You can have a wacky DNS server that allows you to do weird things like ‘alias’ records, but those are sidestepping the DNS RFC: They’re answering with A records, but in a dynamic fashion.


If you use www.example.com and then add forum.example.com, the (probably externally hosted) forum.example.com can't reach your cookies. If you put everything on example.com and then later add forum.example.com, you have to think about security/privacy.


Whoa! I had no idea this had changed. It used to be that cookies for .example.com and example.com were different: one was accessible by foo.example.com and the other wasn't. It seems RFC 6265 changed the behavior [0].

I assume the RFC explains the reasoning, but prima facie, this seems like a bad change to me.

[0] https://stackoverflow.com/posts/23086139/revisions
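To spell out the current (RFC 6265) behaviour with a small sketch (all names and values are illustrative):

    // Run on https://example.com.

    // Host-only cookie (no Domain attribute): sent back to example.com only,
    // NOT to forum.example.com or any other subdomain.
    document.cookie = 'session=abc123; Path=/; Secure';

    // Domain cookie: under RFC 6265 a leading dot is irrelevant, and the
    // cookie IS sent to every subdomain, forum.example.com included. This is
    // the sharing/"leakage" the parent comments are talking about.
    document.cookie = 'session=abc123; Domain=example.com; Path=/; Secure';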


I would hope one would always more than think about security or privacy when implementing a public web site.


Thinking one can take care of every aspect of security or privacy when implementing a public website, especially one that publishes UGC, is similar to believing in ability to deliver bug-free software: very likely presumptuous. However, a good way of achieving reasonable security is by reducing the scope of things you have to think about in the first place, preferably by offloading them to trusted implementations someone else (e.g., browser vendors) took care of where possible. Scoping cookies to subdomains, for example, comes in very handy.


I think the point the GP is trying to make is that if one has thought about security and privacy then one is more likely to use www.example.com instead of example.com for one's website for this very reason.


The suggestion is that using basename.tld instead of www.basename.tld adds to the security matters you need to think about, if not now then later, if/when you add features on a subdomain that you (and/or your users) want to keep separate in terms of cookie sharing.

In that sense, using www.basename.tld is thinking about (or at least automatically mitigating, by way of scope limiting) those potential security/privacy issues.


> [..] and now www is pretty much gone everywhere.

I don't think this is true. There was a time about 15 years ago when everyone fancy went without www, but the pendulum has swung back completely in the meantime. Most sites nowadays redirect to the www version. Twitter is the only site I am aware of that was no-www from the very beginning and has stuck to it consistently until today.


Ah, that was poor wording on my part. What I meant was "users type in the apex domain and either stay there or get redirected to www", instead of "users still type www".


Got it, I'm certainly like that, I never type the dub dub dub.


Netscape Navigator used to automatically prefix "www" and suffix "com", and this carried through to Mozilla as "domain guessing". What you are describing is the state of the art regressing to that. (-:


I am waiting for Chrome to automatically “I’m feeling lucky” single word queries so we slide further into URL ambiguity…


No doubt directing to an AMP page, as well.


Using a CNAME is more resilient against DDoS attacks. Using apex introduces potential problems with subdomains and cookie "leakage".


It stems from the old days when there was more to the internet than HTTP: `www` was simply the web server machine, and `ftp` would go to the FTP server...

Now everything is HTTPS (or tunneled over HTTPS), so it makes little sense to specify mundane things such as the machine you want to connect to and the protocol to use... there is only one machine (the apex domain) and only one protocol.


Even if everything is HTTPS as you claim (spoiler: it is not; there are still plenty of other protocols in use today), subdomains are still relevant information, used in SNI to serve virtual hosts.

There is no reason that requires www.example.net to serve the same content as example.net. Whether or not it does is left for the website operator to decide.


You're right that it isn't everything, but the non-HTTPS stuff out there is super buggy. I had a Canon 5D Mark IV and it had the option to (s)ftp up raws so I wouldn't have to take out the card. So I went to the trouble of setting up my own secure FTP server. Guess what: it didn't work. I tried everything from a raw IP, to a naked domain, to an ftp subdomain; whatever it was, there was absolutely no way of getting the Canon to make the connection securely.

That's why everything is moving to HTTPS endpoints. It's simple. You just give a url and it works.


> (s)ftp

Technical point: FTPS is FTP with TLS, but SFTP is a completely different protocol based on SSH.


Specifically, SFTP is an SSH subsystem: when your client connects to an SSH server, it gets to specify what subsystem it intends to talk to, and so a remote SSH server can choose to offer the "traditional" shell service, an SFTP server, or any arbitrary thing. In this way SSH subsystems are rather like ALPN in TLS (except years earlier), which is how HTTP/2 works, among other things.

The SFTP protocol itself, the thing spoken over SSH to the remote SFTP subsystem, is pretty simple, although I don't think it was ever formally standardised, https://tools.ietf.org/html/draft-ietf-secsh-filexfer-13

You could in principle talk SFTP over some transport other than SSH (e.g. you could use ALPN to select it over TLS), but nobody does.


It’s not necessarily equivalent and interchangeable, but for public websites, one should redirect to the other, and you’re a troublemaker if you do otherwise, because it’s a well-established and understood convention, and different people will start by guessing one or the other.


Unless you have a very good reason, you just shouldn't change what a user has entered; trust the user's input where there's no security or integrity reason to do otherwise. Twitter messing with user input and breaking things is a good example of why you don't do that.


Remember that the problem here is purely a performance optimisation having become a pessimisation; it’s not actually breaking anything. I tend to agree with you, but only as far as functionality is concerned: that you shouldn’t act otherwise than the user specified. For display, I’m quite happy with dropping https://, and not too distressed about dropping www.. But for my comment, I wasn’t talking about changing what the user entered, but rather being willing to accept either of the two most common forms that users may enter.


Entering www.example.com and getting example.com is a breaking change in the sense that you wouldn't expect the user's machine to make a request to a different domain.


I repeat: I am not talking about changing the domain that the user typed. Only that I’m happy with things showing www.example.com as example.com.


So to reveal someone's IP address do I just send a DM with URL to my server and log connections? Is it that easy? No clicking needed?


It also bypasses browser preloading settings and messes with people who have limited bandwidth or transfer quota. It should be optional.


Yeah, I'm currently looking for a way to disable this and it seems like it may be controlled by:

  network.predictor.preconnect-min-confidence
in Firefox.


Looking at caniuse.com, preconnect hasn't been supported by Firefox since v71. I assume it's for this very reason.

https://caniuse.com/link-rel-preconnect


I suspect that's not for explicit `<link rel=preconnect>`.

More likely it's for when Firefox makes a connection because it predicts that the page is going to use a connection to another origin.


"network.predictor.enabled" is there. Defaults is still "true".


It just connects to the domain without making a request to it, so how does it use even a remotely noticeable amount of bandwidth?


To answer that question: https://megous.com/dl/tmp/5a6729bcaa62d382.png

From wireshark capture of `openssl s_client localhost:443`

2 KB can add up if done often enough. And this is a fairly ideal case: I just have two simple certs using secp384r1.

If the certificate chain is longer, and when the certs use RSA, it will consume more bandwidth.


You'd need to use a unique domain for it though.

That might be why they don't add the subdomain, because adding a unique subdomain to track a user. is free and a domain isn't.


You mean they don't know about freenom.com, where one can register free .tk/.ml/.ga/.cf/.gq domains?


I didn't know about that until just now. Thanks!


In the case of software.codidact.com, they seem to have included the subdomain:

https://twitter.com/CodidactQA/status/1321237358776373248

https://i.imgur.com/cdh8rKZ.png

Edit: prepending software.* to a domain prevents the subdomain from being removed: https://twitter.com/lukerehmann/status/1321310972468973568 I've also tried messaging this same format of unique link, and the preconnect slides right into the DMs.


Right, I wonder if using an IPv6 link would be enough. It would have a lot of space for differentiating the user, and they may not do much pre-processing on it...


I think at least a link hover is required.

"Technically, it only preconnects when you hover over the link."


Due to browser design, one might be tempted to hover over links to see where they lead (the small popup in the bottom-left corner) before clicking. It's kind of a habit for me.


Yes, a feature that originally was helping security (allowing you to "look before you step"), has been hijacked to actually harm security and/or privacy. Makes one laugh, bitterly.


You hover over every link as you scroll down the feed, though. Each link is the width of the feed, so it's hard to avoid.


Is there much of a downside to just disabling all hover events in the browser? I've also considered disabling the clipboard API after hearing about clipboard hijacking attacks recently


Some websites provide useful information on hover, including Twitter. Reading the bio of someone without leaving the page is convenient.


Perhaps an even better example is Wikipedia's article previews - enormously helpful when reading through a long or technical section.


Is there an easy way to monitor preconnect / DNS prefetch and the like in Chrome / Firefox?


WebKit-based browsers (e.g. Safari, Epiphany) will show a console message. That's it.


I guess you might also overload a server if someone popular links something.


Indeed it is. I believe /var/log/nginx/access.log is where you'll find the info for an nginx server.


nginx will only log actual HTTP requests there, but you can use `error_log /path debug;` to log connection attempts without any HTTP content.


Ah, I see. Thanks.


> Twitter redirects links through its t.co link-shortening service. It was once a useful addition to its service as it helped people stay underneath the strict character limits. The link shortener reduced all links to 23 characters. Twitter gains some more data-insights about its users in the form of click-stream data and insight into popular links.

The t.co link also helps them block URLs that they deem problematic on their platform - in the event of spam, attack, or abuse, the redirect can instead be a black hole.


They don't need `t.co` to censor messages. `t.co` was never anything but click tracking, since the character limit is completely arbitrary.


It wasn't arbitrary. Twitter started out with an SMS interface, and kept it for several years. An SMS is 160 characters max [1], and some characters are needed for the protocol, so the payload was limited to 140 chars.

[1] Note: Modern SMS apps support a protocol that sends larger messages as a stream of multiple interlinked SMS. This is transparent to the user, but at the time of Twitter's SMS interface this was not common.


Are you sure about that note? I have one phone that is some years older than Twitter and it supports long SMS transparently.


I didn't say "was not available at the time", I said "was not common at the time".


Well I learned something that's not in the article.

I thought browsers requested a domain lookup (gethostbyname()) and basically got back a "zone file" which would have the CNAMEs in it. So I was confused when people complained about Twitter forcing a domain lookup "on the wrong domain", as I was assuming this would at least cache the domain lookup: it's the right domain, of course, but the lookup for an address on a subdomain includes the subdomain and then gets the CNAME directing wherever.

It always confused me that dig/nslookup didn't seem to provide all the info. They can, using nslookup 'ls example.com' or 'dig example.com -t AXFR', but the server in general refuses to serve the zone file (seemingly for security-by-obscurity reasons).

So, for example, if the browser looks up example.com it doesn't learn that there is a CNAME from www to example.com. It only gets that relationship from looking up "www.example.com".

So, TIL, and now results provided by dig/nslookup on the command line make more sense!
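A quick way to see this from Node's resolver API (domain names are illustrative; the CNAME only shows up when you query the www name itself):

    const dns = require('dns').promises;

    async function main() {
      // Looking up the apex tells you nothing about www...
      console.log('apex A records:', await dns.resolve4('example.com'));

      // ...the CNAME (if any) only appears when you resolve the www name.
      try {
        console.log('www CNAME:', await dns.resolveCname('www.example.com'));
      } catch (err) {
        // ENODATA means www is a plain A/AAAA record rather than a CNAME.
        console.log('www has no CNAME:', err.code);
      }
    }

    main();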


Almost everyone uses a "recursive DNS resolver" provided by their ISP, or one of the big ones from Google/Cloudflare/Cisco. The recursive resolver does all the hard work of resolving the root, the top-level domain, the apex domain, the subdomain, and any CNAMEs, and finally finds the right IP addresses to return to the DNS client. Recursive resolvers benefit a lot from caching responses at each stage of the chain, the same way your browser/OS/router DNS client benefits from caching the final responses from the recursive resolvers. If you run a full DNS resolver, you have to do all of these steps locally.


How does the preconnect work with the t.co redirect in between? t.co will return a 301, right? Only then do we see the real domain, and only then can the browser preconnect to the server, not earlier. Or can it?


Twitter runs t.co, so they know where the redirect goes without actually asking t.co. So they can preconnect to the target domain as well. And that's where they goofed up (I mean, apart from preconnecting to all these domains in the first place).


Ah I get it. I thought the preconnect was an attribute on the anchor tag, but it's not. Confusion resolved, thanks.


So it's not just me that has to reload basically every time I try to view a Twitter page?


Now we have at least three tech giants that don't know www.example.com is not example.com:

* Twitter

* Google (maybe deliberately)

* "The almighty WHATWG" who accepts Google's revision on whatwg/url about this


Twitter delenda est


Perfect! Please hire more qualified, diligent software developers, Twitter.


Or improve your QA process.


I don't know why that gets downvoted; it's really a test to see what kind of devs are on Hacker News nowadays.


It’s not really a test of anything but how well commenters react to shallow dismissals. As you may have guessed, the answer is not well.


That is just a matter of opinion. The quality of developers has gone down over the years; that's all I'm saying. Provoking a knee-jerk reaction is expected.


Comments that try to provoke a knee-jerk reaction are generally not welcomed.


Knee-jerk reactions are hiding something dark and dirty in the tech world. It needs to be provoked and addressed!


Not here, at least.


Seems like Hacker News doesn't like the comment about how developers have gone down in quality. That's a shame.


It's not a comment that adds anything to the discussion. That's why you've been downvoted.


> It strips out the www. prefix to make a ”display version“ of the URL. I have no problem with this, as the prefix is entirely meaningless to humans.

Hah, what a dumb comment. Let's just go back to AOL keywords... but we can call it Google keywords.

I propose a new URL scheme: "web:nytimes/some/article/". Sadly I don't work for the Chrome team, so I can't just force it down the web's throat.


> I propose a new URL scheme: "web:nytimes/some/article/"

This is already what we have, except that "web" is "http" and we have TLDs to namespace domains.


> and we have TLDs to namespace domains

We do have them, but as everyone has already realized, namespacing domains isn't really a good idea.


> We do have them, but as everyone has already realized, namespacing domains isn't really a good idea.

Could you expand on this?



