Prevent Google from mangling search result links when click/copying on Firefox (gist.github.com)
576 points by calmingsolitude on Sept 27, 2021 | 199 comments



Google does this so they have click tracking data. But they don't need to mangle URLs in Chrome because it supports the `ping` attribute on <a> tags [0].

The ping attribute basically adds click tracking as a native browser feature, so you don't need URL redirects. It also makes these analytics much easier for the site and more opaque to the user. Looks like every major vendor besides Firefox supports it. (Mozilla was pretty opposed, as I recall.)
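A minimal sketch of the markup (hypothetical URLs):

  <a href="https://example.com/result" ping="https://tracker.example/click-log">result</a>

When the link is followed, the browser fires a small background POST (body "PING") at each URL listed in the attribute; no redirect is involved.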

If you're a Chrome user, there are some extensions that disable ping requests/link auditing [1]. (EDIT: a commenter noted that uBlock Origin already blocks these! So I recommend that over this obscure extension.)

[0] https://caniuse.com/ping

[1] https://chrome.google.com/webstore/detail/ping-blocker/jkpoc...


Firefox has a browser.send_pings setting to control this, not sending any pings when it is set to false. This is explicitly a valid browser implementation of the ping attribute:

https://html.spec.whatwg.org/multipage/links.html#hyperlink-...

> 2. Optionally, return. (For example, the user agent might wish to ignore any or all ping URLs in accordance with the user's expressed preferences.)

The problem isn't that Firefox doesn't support the ping attribute, the problem is that Google fails to respect user requests not to track.
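For reference, that pref can be flipped in about:config or pinned in a user.js file (standard Firefox pref syntax):

  // user.js: refuse to send hyperlink auditing pings (the Firefox default)
  user_pref("browser.send_pings", false);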


I noticed this part of the spec too:

> When the `ping` attribute is present, user agents should clearly indicate to the user that following the hyperlink will also cause secondary requests to be sent in the background, possibly including listing the actual target URLs.

> For example, a visual user agent could include the hostnames of the target ping URLs along with the hyperlink's actual URL in a status bar or tooltip.

Does any browser supporting pings actually do that??

Also the "Note" in that section provides a decent argument for supporting `ping`. Basically, users will have their clicks tracked anyway, but the `ping` attribute provides more transparency and a better user experience. Though the transparency part is debatable given browser implementations.


> Also the "Note" in that section provides a decent argument for supporting `ping`. Basically, users will have their clicks tracked anyway, but the `ping` attribute provides more transparency and a better user experience.

I see this argument used a lot for including user-hostile features in browsers. I don't think it's a good argument, since having a browser implementation a) makes this privacy abuse easier and b) legitimizes the practice. Meanwhile, as this post shows, the user preference to disable the feature is simply ignored and worked around by websites.


Third party links won't be "tracked anyway" if you're blocking JS. That's the only reason to have this feature since a site can track activity from its own links via logs.


This is wrong: what Google currently does on Firefox is make you go through a link that logs that you clicked on it, then redirects you to your destination. This doesn't need any JavaScript; it just needs to log your HTTP request, so it works on any browser (even, say, elinks).
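Server-side, the whole mechanism is roughly this (a hypothetical Node sketch, obviously not Google's actual code):

  const http = require("http");
  http.createServer((req, res) => {
    const dest = new URL(req.url, "https://search.example").searchParams.get("url");
    console.log(Date.now(), req.headers["user-agent"], "->", dest); // the "tracking"
    res.writeHead(302, { Location: dest }); // then bounce the browser onward
    res.end();
  }).listen(8080);

Any client that follows HTTP redirects gets logged, JavaScript or not.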


Exactly. That's why I solved this problem by using DDG.

Although I admit some searches I have to send to Google to get the result I'm looking for.


Seems to me the old Emacs (or vi) saying has changed. Today it is "How do you know someone uses DDG? He'll tell you!"


How do you know someone uses Google? Well, virtually everyone does, so they probably do.


Same for me, but all Google searches are done via DDG bangs like 'g!'


Using bangs defeats the purpose of using DDG in the first place.

And DDG tracks your clicks just like Google does.


Absolutely not at all. It's the number one reason I use it, because very often I know which page I want to search for a query. And having DDG as the default search engine in the browser lets me use the URL bar to directly query Youtube with !yt or Wikipedia with !w, and a lot more.


>use the URL bar to directly query Youtube with !yt or Wikipedia with !w and a lot more.

My understanding is both Chrome and Firefox can do this natively, i.e. without DDG


Yes, you can add these; in Firefox they're called Search Shortcuts and a few are there by default. But there are a few reasons why I prefer DDG:

1. I don't have to add those things in my browser

2. When I search DDG for something and don't find good results, I can just go to the search bar on the DDG page and add any !bang to query a search engine I like (!y for Yahoo, !g for Google, …)

3. No matter which browser I use on which device, the only thing I have to change is the default search engine, and it's set up for anything I want to search. This is especially nice for me since I use Firefox on my Mac, Brave and Safari on my iPhone, and Brave and Chromium on my Windows device


Yeah, me too. Love the bang system. Don't forget to put the exclamation point in front! "!g"


> Don't forget to put the exclamation point in front! "!g"

At least some of the bang keywords work with the "!" on either side. I tried "g!" and "w!" and they work OK.


For all the bangs I use, it works either way, but yes, the official way is with the exclamation in front


Seems like it's set to false by default. So it's not really a "user request" not to track so much as a "browser request" not to. Reminds me of the situation with the "Do Not Track" header where browsers sending it by default caused the signal to lose all meaning.
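For reference, the entire DNT signal was a single request header:

  DNT: 1

There was no way for a site to tell a deliberate opt-out from a browser default, which is what made the default matter so much.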


That wasn't what caused DNT to fail. What caused it to fail was that websites could decide whether or not to honor it, and honoring it would have meant a reduced ability to spy on people, impacting their income.


That was part of it. Obviously if sites had no choice in the matter then it wouldn't have mattered whether browsers enabled it by default or not.

Since it was a voluntary thing though, browsers sending it by default pretty much destroyed what chance there was of mainstream sites deciding to implement support for it. It's one thing to give up on tracking a small portion of users who explicitly opt-out, and another thing entirely to give up on tracking everyone except for a tiny minority who choose to opt-in.


If browsers weren't sending it by default, it wouldn't have any support, because nobody saw enough traffic with it to implement it.

The design is the problem, since websites that don't feel like it don't have to honor it, whether because honoring it for everyone who sets it would hurt their income, or because they can't be bothered to support their unprofitable users.


It's reasonable to assume that users choose a more privacy-focused browser intentionally, meaning that the "default" setting is intended by the user, not a decision made for them without their knowledge.


No, I don't think it's reasonable to assume that when the browser that broke DNT was INTERNET EXPLORER. Internet Explorer is perhaps the most notoriously unchosen browser to ever exist.


Well, in the relation between user and browser vendor it is quite a reasonable default. I can always change my mind if I want to give up my privacy.


No tracking by default means it's Opt-In - as it should be.


Arguably, the user requested it by intentionally choosing to use a browser with that default behavior.


Note the caniuse link says "While still in the WHATWG specification, this feature was removed from the W3C HTML5 specification in 2010." Another reason I prefer Firefox. Why implement a rejected non-standard feature whose primary purpose is to enable surveillance?


Seems like it's not meant to enable tracking so much as to improve its performance and UX, as the article demonstrates. (Google tracks clicks from Firefox users just fine without ping, it just does it in a more annoying way.)

Also probably worth noting that the W3C doesn't maintain an HTML standard anymore[1]; the WHATWG standard is the definitive one.

[1]: https://www.w3.org/html/


The fact that Google made some aspect of their website behave better in Chrome than in Firefox is not really an argument that Firefox is doing it wrong, so much as yet another example of browser wars 2.0.


Improving performance for tracking is enabling it. We should fight to get rid of tracking, not make it more performant.


> Why implement a rejected non-standard feature whose primary purpose is to enable surveillance?

Something that's in the spec that matters (WHATWG) but not the one that desperately pretends to still have relevance for HTML though it hasn't since it tried to push XHTML 2 (W3C) isn't “rejected” or “nonstandard” in any meaningful sense.


WHATWG, also known as We Have Aligned Totally With Google...

The company that has an effectively complete control over the "standard" and churns it frequently to discourage competition...


How that came to be is an interesting study in company PR. MSFT arguably should've had a much more prominent advisory position in WHATWG than Google, but WHATWG ended up solidifying in large part to counteract all the non-standard behavior folks experienced trying to develop cross-browser pages, in the days when mentioning IE6 would cause a terrified silence to fall on any web dev department.


Why do you call this surveillance? You just voluntarily entered your search terms. Why shouldn't Google know which link you clicked? The main purpose of this information is to improve the service that you are using.


> Why shouldn't Google know which link you clicked?

Why should they? I asked them for info. They provided a list of links. Why are they entitled to know which of the links I visited?


Because you have an option of not using Google at all. I think if you are providing a free service, you are at least entitled to know how the users use it.


Because it is surveillance. There are plenty of surveillance examples with multiple uses, and "improving the service" doesn't wholesale discount their other uses.

More importantly they make it a pain in the ass to copy the actual URL of a link without actually clicking it. If you right click a search result link then their JS edits the href to the Google tracking link. So you can’t actually examine the entire URL without risking opening it up and being tracked. At best you get whatever preview your browser shows on hover.


Definition of surveillance: close watch kept over someone or something (as by a detective)

Counting the clicks on the search results page is nowhere near. A cashier in the supermarket knows which items you are buying, it doesn't mean that you are under surveillance.

And most of all, if you are worried about surveillance by Google, why use them at all?


> whose primary purpose is to enable surveillance

Google search is good because it tracks what links people click and knows when they come back to go to a different URL on the page. If 99% of people visit the top result for a query, return, then hit the second one, chances are that the top result never answers what the search query asks.


People here may not like to read "Google" and "good" together, but yours is a description of how things actually work.

Of all the tracking Google does, this is by far the most justified, and least concerning to me. I'd rather log out and search anonymously, if my concern was being put in a bubble, rather than block this kind of feedback for search result quality. Then, again, I primarily use DuckDuckGo and I wonder if they do anything similar.


Yes, DDG likewise sends out beacons to improving.duckduckgo.com when you click, to do this.

https://i.judge.sh/ragged/Derpy/chrome_5j8fWLGX6J.png

They have an info page on it: https://improving.duckduckgo.com.



> Google search is good because

Google search might have been good over a decade ago but today it's trash.


> Google search might have been good over a decade ago but today it's trash.

IME, it's still consistently far and away better than the alternatives. Part of the difference in perception of quality may be that over time it has come to use more personal signals to zero in on relevant results, and the people that complain about how bad it is overlap considerably with those who actively seek to deny those signals to Google.


> over time it has come to use more personal signals to zero in on relevant results

I am specifically disinterested in existing within an echo chamber.

When I search for a topic, I am looking for information that is most faithful to objective reality. A detailed explanation of the limits of our current understanding, or why my understanding / model is inadequate is orders of magnitude more valuable to me than something that will affirm that I am a smart, special person. Google used to be exceptionally capable of delivering those kinds of results, even if it took some work refining search terms. Over the preceding decade, their effectiveness in this regard has significantly diminished.


Good point, that seems plausible. It's an unenviable choice though: good results from incessant surveillance (and nasty link rewriting, per the topic), or poor results if you use it infrequently.

[I wouldn't know. I've been using DDG for so long now, I can't remember the last time I used Google search. Maybe I've forgotten how much better G is. Truth is, though, DDG does what I need of it. Rarely come away without the answer I want. So no temptation to use Google. At. All.]


I still have the habit of adding !g into my queries when the results are bad.

It has been years since the last time I remember Google actually giving me better results than DDG (except when I only want product sellers; in that case it has been around a year).

Yes, maybe if I let Google see even more of my life, they would be able to get me better results. But they have access to much more than I'm comfortable with already, and the results aren't there.


10 years ago it was probably way worse than it is today, but people have this fantasy idea of the good old Google that magically found everything.


The first page of "search results" is usually filled with ads and Google scraping and special casing dozens of sites. It is almost at a point where I have to skip to the "second" page just to get to the actual search results.

Also Google broke a lot of search qualifiers for stupid reasons, like the '+' when they created Google+.


Occasionally I accidentally use Google search when I've got an automated Chrome window open, and I find it astounding how unhelpful the results are. Everything it returns is clearly optimized for what Google thinks readers want in a webpage rather than what actually matches my query. DDG isn't perfect but I find it better respects my queries.


Google is objectively a better search engine than it was 10 or 15 years ago.


10 or 15 years ago, I could actually find what I wanted without it second-guessing my queries and rewriting them into irrelevance.

Now it's absolutely useless for the hard-to-find information that you most need a search engine for. That's not "objectively better" at all.


I wish you could give us a single example.


IC part numbers. Service manuals for various equipment (now all you get are sites which may or may not have one, but are willing to collect your $$$). Error codes (filled with pages of results for a different code).


A solid example? I search for some of these all the time and don't remember any particular issue compared to the past.


I use Google search only for one thing: shopping.

Is this the intention behind everything at Google - advertising/shopping/consumption? If so, congrats to Google I guess.


The ping attribute is blocked by uBlock Origin. It's called hyperlink auditing in the dashboard.


It’s also a notable DDoS amplification vector[1].

[1]: https://securityaffairs.co/wordpress/83890/hacking/ddos-html...


I'm confused as to why ping was used in that situation at all, rather than, for example, just a normal POST request.


It doesn't require running JavaScript, so presumably the devices could be more efficient sending them versus XHR/fetch.


The article specifically says the offending pages used JavaScript to add the ping attribute to the <a> tags, so the attack wouldn't have worked against users with JS disabled anyway.
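For the curious, the injected script amounts to something like this (a hedged reconstruction; the target URL is hypothetical):

  // Every <a> on the page gets a ping target, so each tap on any link
  // fires an extra background POST at the victim
  const victim = "https://victim.example/"; // hypothetical
  for (const a of document.querySelectorAll("a")) {
    a.setAttribute("ping", victim);
  }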


It doesn't use the JS rewrite on Chrome browsers. For example, this is what the a tag looks like on Chromium:

<a href="https://news.ycombinator.com/" data-ved="2ahUKEwiIxrz0jKDzAhUWHcAKHQnnArkQFnoECAcQAx" ping="/url?sa=t&amp;source=web&amp;rct=j&amp;url=https://news.ycombinator.com/&amp;ved=2ahUKEwiIxrz0jKDzAhUWH...">

and this is what it looks like in Firefox:

<a href="https://news.ycombinator.com/" data-ved="2ahUKEwj9i67MjKDzAhXUfMAKHWJcCYsQFnoECA0QAx" onmousedown="return rwt(this,'','','','','AOvVaw3F-2xUE22tTvOxNDwVufx-','','2ahUKEwj9i67MjKDzAhXUfMAKHWJcCYsQFnoECA0QAx','','',event)">

You can see that Chromium based browsers call a ping endpoint whereas Firefox browsers use a mousedown event. This device detection uses the user agent; changing it on Firefox to look like Chrome results in a ping attribute instead of mousedown.


My understanding of the actual amplification vector is that the JS is just obfuscation on top: they could have just as easily deployed static HTML with those attributes.


Wow, I had no idea about this.

At least with the mangled link approach it’s easier to tell that tracking is going on, but that ping attribute seems extraordinarily sneaky to me. I get that it enables “clean” links but the opaque tracking is way worse in my eyes.

Sigh.

Edit: when I think about it, I guess it’s not that dissimilar to what you can do with JS based tracking anyway, so perhaps it’s not really any worse than what already exists. But it still feels wrong for some reason.


For Firefox, there's a relatively popular extension called ClearURLs that sanitizes most URLs to remove tracking, including Google's and Amazon's.


I did notice a bug with this... I had a magic link for authentication in Gmail that used a `+` symbol in a URL, e.g. `http://example.com/token/abcd123+3cf==`, and ClearURLs ended up converting the `+` to a `%20`, which caused the server to fail to find the token.

Otherwise I love ClearURLs.
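That class of bug is easy to hit: `+` only decodes to a space in form-encoded query strings, not in URL paths, so a sanitizer that round-trips the whole URL through form decoding corrupts path tokens. A quick illustration:

  // "+" means space only in application/x-www-form-urlencoded data
  new URLSearchParams("token=abcd123+3cf==").get("token"); // "abcd123 3cf=="
  decodeURIComponent("abcd123+3cf==");                     // "abcd123+3cf==", "+" intact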


Or you can add https://raw.githubusercontent.com/DandelionSprout/adfilt/mas... to your uBlock Origin filters


I've tried other uBlock Origin filters for URLs and they weren't as good. Does this work flawlessly with Google, Amazon and other major players?


I've not had any trouble with it.


I see no cases where I, as a user, would want my browser to support the "ping" attribute. It's basically shady as fuck.


It would allow Google or other companies to track which result the user clicked without resorting to more complex JavaScript tracking or redirects.

I bet anyone would prefer the ping method rather than the redirects we see on Firefox that mangle copied urls.

You seem to be of the opinion that no tracking would be better. And that's fine and a popular opinion around here. But that's not an option as Google relies heavily on the clicks as an input for ranking.

So in a context where you consider tracking HAS to happen ping does offer advantages for the user.


This might hold up better if Google's rankings were actually good for all of the spyware they add. Here's a search I performed earlier today:

https://l.sr.ht/u1vK.png

The desired result is indicated in red, well below the page break. All of the other results are blogspam, SEO hacking, and mostly useless "featured snippets" and "people also ask".


Because then you could at least see the URL you're going to instead of the redirect the site is going to use to track you anyway.

This attribute doesn't do anything shady, and would do the opposite if it were actually used. The whole idea is to be able to provide the tracking data the site will get one way or the other, but with the ping attribute, you can do it without mangling the URLs.


If everybody was using "ping" instead of their various other tracking solutions (Javascript, redirecting) then you could just disable it in browser settings.


That's the whole point: to give link tracking a simple consistent interface, which makes it much easier to implement (no JavaScript libraries or proxy URLs) and to disable (a user agent can very easily choose whether to respect the ping).


> which makes it much easier to implement (no JavaScript libraries or proxy URLs)

I don't want it to be easy to implement.

> and to disable (a user agent can very easily choose whether to respect the ping).

This post is about Google working around that and just falling back to the old redirect-based click tracking for browsers that do not enable pings by default.

So having the attribute increased the browser complexity, brought no value to the user and only helped the tracking industry.


I don’t want my clicks to be tracked!


They are tracked regardless. Either through ping or redirect urls.


Then call your ISP! They are tracking them, too!


Okay? Does that mean my search engine should do that too? Two wrongs don't make a right.


You just told your free search engine all the different variations on the search strings you're looking for, basically everything that's on your mind, and you've chosen your search engine based on the quality of the search it provides, and now you don't want it to know which result you chose?

I'd argue that it's the one piece of information they are most entitled to in the tit-for-tat, you help me and I'll help you arrangement you two have.


Sorry, my wording insinuates it's an either/or. I meant to point out this is a war that needs to be fought on multiple fronts, and arguably your ISP has the better data (with worse IT security, to boot).


Even through HTTPS? They'll have the domain and IP address and transfer size, but not the URL or contents of the traffic. (Unless they managed to MITM a trusted certificate somehow.)


HTTPS encrypts the host? Thought you had to know where to go to open that secure transmission. It's enough for your ISP to know you went to "pornhub.com" for example.


Yeah, despite ESNI and DNS-over-HTTPS it's likely an ISP could still effectively track usage of certain large-ish sites by IP address alone. Compare against the anonymity inherent in accessing s3.amazonaws.com/some-bucket/some-path.


They won't know when you use an alt DNS or DoH.


Your ISP needs to know the IP address of the site to route your TCP packets there, and they can easily do a reverse DNS lookup[1] on it. So hiding your DNS query from them won't prevent them from knowing what site you visited.

[1] https://en.wikipedia.org/wiki/Reverse_DNS_lookup
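e.g. with Node's standard dns module:

  // Reverse-resolve an IP back to hostnames, as an ISP trivially can
  const { reverse } = require("dns").promises;
  reverse("8.8.8.8").then(console.log); // [ 'dns.google' ]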


Exactly. At the end of the day, computers need a public address to find each other. And if you can find it, so can they.


You'll also need ECH, to avoid leaking your TLS handshake's SNI.


So that's what is going on https://news.ycombinator.com/item?id=21427341

I thought I was the only one.


Even without ping, it still doesn't make sense why they can't just use a JS event hook on click to fire a tracking/log request in the background instead of having to do a redirect. They already do this on link copy.
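Something like this sketch would do it; navigator.sendBeacon exists precisely for fire-and-forget logging that doesn't delay navigation (the endpoint here is hypothetical):

  document.addEventListener("click", (e) => {
    const a = e.target.closest("a");
    if (a) navigator.sendBeacon("/click-log", a.href); // hypothetical endpoint
  });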


Why can't they just add a click-listener to the links?


This can be blocked easily in a variety of ways with a forward proxy as well.

Google generally does not allow the POST method for user-initiated queries, e.g., from HTML forms. However, the POST method is commonly used for tracking.

Using the web today with a "modern" browser and trying to exercise the slightest amount of control is like being in a Spy-vs-Spy MAD magazine comic strip.


Is this turned on by default in Chromium too?


Not to be facetious but Wow - I'm not sure how we let that "feature" slide past the "privacy" folk.


It didn't slide past. I recall a lot of discussions, but those discussing and expressing outrage are not the ones implementing and pushing forward with sheer momentum of technological dominance and money. It's the same for every privacy issue.


This is actually well-known and gets blocked by uBlock Origin and others.


The Google monopoly doesn't give a fuck.


Support for the ping attribute is one of the several reasons why I don't use Chrome/Chromium.


I find link mangling to be a great test for when a service becomes too powerful relative to its users, able to add friction to its product that serves no purpose for its users[1]. Google started doing this around 2004[2]. Facebook didn't do it until a few years ago. Slack and Discord don't do it. Yet.

[1] Example: a search result that's a PDF. How do I share this link? If I click on it, it downloads to my disk. If I right-click on it, I can copy a crappy URL.

[2] http://akkartik.name/firefox.html


It's worse with Google, because they add the friction but then hide it in their own browser. There's no issue copy/pasting links from Chrome.

This is a pattern of behavior. For example, when reading an AMP article hosted by google.com, an iPhone will correctly show `google.com` in its URL bar. Whereas an Android will conveniently rewrite `google.com` to show the URL of the article source, falsely implying the browser connected to a hostname that it did not.

In any other context, it would be called phishing.



This works via Google blackmailing publishers with the threat of SEO downranking without AMP.


"While AMP itself isn't a ranking factor, speed is a ranking factor for Google Search. Google Search applies the same standard to all pages, regardless of the technology used to build the page."

https://developers.google.com/search/docs/advanced/experienc...


That only happened after it failed.


This is a recent development, and a questionably positive one. They don't get to be morally exonerated by recruiting other MITM like Cloudflare into their cartel.


> able to add friction to its product that serves no purpose for its users

A good moment to remind everyone: the goal of the attention economy is to make things as inefficient as possible, because money is made on the friction.


How’s that? Reducing friction for many sites means that users will spend more time browsing on the site. E.g. video auto-play on YouTube.

In this case, though, Google doesn’t mind frustrating users because there is no competitive alternative search engine.


In general: the attention of any person is limited. To monetize attention, it is necessary to steal it from the things the user wants to pay it to.

> Reducing friction for many sites means that users will spend more time browsing on the site.

The users usually want to spend as little time as possible on any given site. They're using it to accomplish some goal - like find a piece of information, or pick and buy an item, or be entertained. Even with YouTube: autoplay may be something the users want, and in this case it reduces friction, but money is made on the users being exposed to ads - ads before, after, during and surrounding the video. All these are friction, taking both time and attention away from watching the video itself. This is where YouTube makes money.

Showing ads is an obvious case of introducing friction to monetize attention. There are subtler approaches too. They're insidious and so pervasive that some became examples of "good design" on the web. There's too many to list them all. For example, think of all the cases where you thought, "this UI is dumb" or "this design is inefficient". Chances are, it's because it ensures you have to stay on the page longer, click around more, possibly get confused.

I'll give one specific family of examples: a UI/UX pattern in e-commerce, where each item is featured as a large tile or "media item" on a list. A big picture, a name, a price, perhaps one or two pieces of detail. Only 4-6 results fit on the screen at any single time - where a better design could make the site fit 20+ items instead[0]. You start clicking on them individually, and notice each item has a different set of details, possibly using different units - making it impossible to compare options. That's not accidental. That's designed to frustrate user's ability to compare items and make a good choice, in hopes they'll just stick to whatever the store surfaces at the top.

--

[0] - I know because I did that. Last time I was assembling my PC, I spent an hour or two writing custom CSS rules for a major Polish electronics retailer, and was able to turn their 4-5 items/screen into some 30 items/screen. Without this, trying to pick among many similar options was too much cognitive burden for people to bother. As intended.


Hear, hear.

My alma mater switched its mail accounts to Outlook365. Now all links in email messages, including text emails, are mangled to go through Microsoft's servers. And they're humongously long too!


Answer to #1:

In the google search results, click the three vertical dots above the link and to the right of the domain. If using mobile, you'll need to switch to desktop mode to see the three dots. After clicking, an "About this result" pane will pop up to the right, probably[1]. In that pane you'll see the true link, and you can Right Click > Copy Link.

[1]: On my computer, the "About this result" pane says "BETA", so not sure if everyone can use it. It works for me in a private window, though.


> …serves no purpose for its users

It might be that one of the reasons they track link clicking is to determine “bounce rate” [0] to infer how useful the result is. That is something I would want to know if I were building a search engine and wanted to verify ranking accuracy. Though I would have thought there would be better ways of tracking this than url redirection if JS is enabled.

0: #5 on https://www.spyfu.com/blog/improve-google-rankings/ (I tried to find a more authoritative source but didn’t have much luck. If some one can find a better one, please share.)


Following up (since I can no longer edit):

- It’s not clear to me if google actually uses bounce rate to rank results aside from generic mention of identifying “signals that can help determine which pages demonstrate expertise, authoritativeness, and trustworthiness on a given topic.” [0]

- google does track sites you visit and URL redirects may be a way to achieve this.

> My Activity is a central place to view and manage activity such as searches you've done, websites you've visited, and videos you've watched. [1]

0: https://www.google.com/search/howsearchworks/algorithms/

1: https://support.google.com/accounts/answer/7028918?visit_id=...


Their search results used to be great before they started doing this. In fact, they've started to suck more in the past 10 years, though probably for unrelated reasons. As the Conchords say, "what are your overheads?"


> Their search results used to be great before they started doing this

Google is in an arms race with SEO --- one Google is gradually losing. Removing relevance signals will make search results worse. That Google was able to deliver excellent results ten years ago is irrelevant today: the environment is different and if they went back to what they were doing then, results would be far worse now.


It is really worth it to switch to DuckDuckGo. You can throw a g! at the end of your search if you don't like the results DDG gave and it will redirect you to Google. That was the feature that gave me the confidence to switch over, it's painless to get different results, even on a mobile keyboard.


DDG user here, I’m not even sure most people would need g! anymore. I recently came across a Google search results page, and was surprised how unrecognizable it has become. Instead of giving me a page full of search results, it was presenting me with a page full of Google-curated content. At this point, Google is basically unusable for me as a search engine.


I’m a full time DDG user, but I still use !g sometimes. For some reason DDG is not great with discussion boards like any forum, stackoverflow, reddit etc., especially if the posts are too recent.


Mostly if DDG can't find something, chances are that neither can Google. We've also reached a point where the Google result pages are now ads the first half page. Fair enough that they want/need to make money, but it seems a little excessive.


I think it's been two years since I've used the g!


DuckDuckGo also tracks what links you click on. Try it: when you click on a link you'll see an immediate ping to https://improving.duckduckgo.com/t/...

(Disclosure: I work for Google, speaking only for myself)


I don't think most folks would have a problem with Google doing it the same way; it makes sense that you'd want to know which search results were clicked on.

The problem is Google has implemented the tracking in such a way that it's hostile to users, preventing them from copying the target link. There are various ways that Google could allow copying the link while also enabling tracking when it's explicitly clicked, but Google chose an anti-user option because it almost guarantees people click the link.


> There are various ways that Google could allow copying the link while also enabling tracking when it's explicitly clicked, but Google chose an anti-user option because it almost guarantees people click the link.

On browsers that support it (all modern browsers except Firefox) it uses <a href=... ping=...> which is exactly what you're looking for.


Firefox supports it fine but disables the setting by default.


Thanks, I didn't know about this. It's blocked by my ad blocker but I just whitelisted it.

I stopped using Hangouts a long time ago because every link that was sent was wrapped in a redirect through Google. Sometimes that tracking service would be slow and I'd have to copy the links manually. Really infuriating.

Anyways, there is a big difference here. If I copy a link from Google I get [1] and if I copy a link from DDG I get [2].

1. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...

2. https://github.com/


They are only posting the domain in my tests.


As well as other bangs too. I frequently use !gh to search for software repos on GitHub, and !a to search for a product on Amazon.

Also, DuckDuckGo has taken keyboard-centric users into account, whereas Google has not. I rarely have to touch my mouse when searching on DDG. Up and down arrows to select the results, enter to go, / to return to the search bar, left and right arrows to go between images and maps and whatnot.

I almost use DDG solely for the UX. The added privacy is just a very nice bonus.

(Tip: you can also type "figlet ______" and it will give you ASCII art for the text you typed. That's neat.)


My biggest hangup is that Google is pretty good at making suggestions based on my past history. You end up losing that when you go through DDG instead.


That might actually be a good thing, to break free from the information bubble Google has created for you.


I find the Safari search bar to be good at this sort of thing.


Try using !s for Startpage.com results. It's proxied Google results.


Or use startpage.com which has Google search results.


( !s via ddg)


I prefer to avoid US services if I have the choice, due to bad privacy laws there.


Isn't it "!g" or do both work?


Both work! Which is really nice on a mobile keyboard since it will do an auto space after the !.


(If you have one that can be and is configured to do so. I don't, because I find that annoying whether I'm writing a DDG query or not! My computer doesn't do that, it's jarring if my 'phone does.)


I have no mangling issue—thanks to uBlock annoyances filter: https://github.com/uBlockOrigin/uAssets/blob/02d16a221c276fe...


Thanks for posting this, this filter indeed fixes the issue. I was recently hit by this, when I was trying to share a link to PDF file. In the end, I worked around it by opening Downloads in Firefox and copying source link from there.


There's already an add-on which does this, and more. https://addons.mozilla.org/en-CA/firefox/addon/clearurls/


I actually prefer short userscripts like this over addons, because I can immediately see it's not doing anything nefarious and I only need to trust the addon that I install the user script with.


> I only need to trust the addon that I install the user script with

I wonder why this is even necessary. User scripting should be a standard feature of browsers. We should have direct access to a complete Javascript environment every time we launch a browser. Just like Emacs gives users a Lisp environment.


It is, sort of, in Firefox. It's just janky and unfriendly and underdocumented.

But you can have user styles, user chrome (styling of the FF frame itself), and user scripts per profile, as files wherever all your profile stuff is stored, e.g. $XDG_CONFIG_HOME/mozilla/firefox/profiles/blah.


For now. userChrome.css is already disabled by default, requiring you to enable toolkit.legacyUserProfileCustomizations.stylesheets first.


You do, it's just in the devtools menu. The missing piece is triggering javascript based on the loaded page.


Browser devtools are amazing and do have a Javascript REPL. Do people use it for scripting though? It's always been more of a debugger than a Javascript environment. People install node.js for local scripting even though it's the same Javascript engine.

It's also not available on mobile.


It’s great if you’re on an unfamiliar machine and need a calculator with loops. It’s also great to press all the buttons on a website.


It was, in Opera up to 12.x. You just dropped .js files in a designated directory and they _always_ ran before the website's JS executed.


Same here.


I recently noticed addons.mozilla.org also tracks links using this method. If you scroll down to "Add-on Links" the links to "Homepage" and "Support site" go to

  https://outgoing.prod.mozaws.net/v1/8a4c4de845953bc85d10c6465c5c0f11210b5ca1c195b70d7ddfcf8b74592477/https%3A//clearurls.xyz/
and

  https://outgoing.prod.mozaws.net/v1/8a4c4de845953bc85d10c6465c5c0f11210b5ca1c195b70d7ddfcf8b74592477/https%3A//wiki.clearurls.xyz/
respectively, instead of going to the page directly.

Why, Mozilla? Why are you tracking us? Of all the pages...


This is the "privacy friendly" company that adds google analytics to their sites while enrolling users in experiments and collecting telemetry without informed consents. Why are you surprised?


Sadly it seems to be unmaintained, or at least under-maintained. I was going to open an issue for missing Firefox mobile support, but issues seem to be going unanswered.


To add a little color and for clarity:

Some google links (notably shopping links for products) don't just point at a google-owned redirect (presumably for ad tracking/payment calculation?), they also change the link target on click (?!?evil!?!). There are redirect-removal addons which re-write the original URL correctly, but the on-click handlers mangle the target of the link if the event is not blocked.
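For anyone who hasn't watched it happen in devtools, the rewrite is conceptually just this (a simplified reconstruction, not Google's actual rwt() code):

  // Hover shows the clean href; mousedown swaps in the tracking redirect
  // before the click (or the copy) can read it
  for (const link of document.querySelectorAll("a")) {
    link.addEventListener("mousedown", function () {
      this.href = "/url?q=" + encodeURIComponent(this.href);
    });
  }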


>> There are redirect-removal addons which re-write the original URL correctly, but the on-click handlers mangle the target of the link if the event is not blocked.

On-click event handlers should never have been allowed. Hijacking the browser UI is never in the users interest.


I was going to upvote you, but unfortunately it required an on-click event handler.


Except that's not true at all. HN works with javascript entirely disabled, and the upvote buttons become actual links.


Just tried it and that is true. I take it back.


While this might solve the problem for the Google search engine, it is but a patch over a bigger problem. Instead of applying this patch on each and every device you happen to use, it is much more effective to refrain from using these search engines directly by using a meta-search engine like Searx [1]. This not only stops these obnoxious attempts at leeching a bit more data from you, it has an even bigger advantage: it shows search results from multiple engines, ranked in the way those engines present the results to an anonymous user. This often reveals interesting patterns by showing just how those who run these search engines either promote or demote relevant results for a given search. Google clearly prefers to show results from corporate media and established actors (e.g. Wikipedia) above those from non-affiliated sites; DuckDuckGo gives far more 'organic' results.

[1] https://searx.me


Searx is very cool. It would be nice if I could configure my browser to rotate through different searx instances rather than configuring one as the default search engine.

Btw the list of public nodes is here: https://searx.space/

What is the difference between SearXNG [0] ("next generation," i.e. the one you just linked) vs. SearX [1]? NG claims to be a fork, but it's not clear why? The main SearX has recent development activity.

[0] https://github.com/searxng/searxng

[1] https://github.com/searx/searx


> It would be nice if I could configure my browser to rotate through different searx instances rather than configuring one as the default search engine.

That can be achieved using the Privacy Redirect [1] extension; set it to redirect search engine calls and it will use a random engine. The list contains more than just instances of Searx and by default cannot be edited by users, so you might have to get the source [2] and build a version with only those search engines you want to use. It can redirect many other corporate entities like Youtube, Twitter, Instagram (which does not really seem to work, but since I never go there anyway I don't really know), Reddit, Maps (Google etc) and others. I have it redirect to private instances of Invidious (for Youtube), Nitter (for Twitter) and LibReddit. I do not use the search engine redirect since I run a custom Searx instance which doubles as an intranet search engine and as such offers more than any public instance.

[1] https://addons.mozilla.org/en-US/firefox/addon/privacy-redir...

[2] https://github.com/SimonBrazell/privacy-redirect


My solution is simple: I don't ever use Google Search. DDG results are good enough for me.


I've been using neeva.com lately, and I'm warming up to it. I like the idea of the search engine's revenue coming from aligning with its searchers, rather than its advertisers.

One thing I dislike, however: making you agree to terms when sharing an invitation link with friends. You'd think they would want that to be zero mental friction?


Twitter also uses a redirect (t.co) and it's very annoying


Is there any known way to bypass those?



I have noticed this, but I have never understood what is happening. Still don't, but nice to see a fix.


Tracking is happening. It's the goog after all. As for Chromium-based ones, perhaps the goog thinks that it already knows enough about them?


Chromium browsers support and enable by default the "ping" attribute. This helpfully makes link tracking a standard bit of functionality, rather than needing ugly workarounds to accomplish the same thing.


Mozilla has refused to fix the stealth URL rewriting on mousedown/onclick behavior for years[1]... it's super toxic and harmful to user privacy.

Why do they bother not implementing pings when they allow an equivalent privacy invasion to continue? Either way the user's privacy is invaded, but at least URL copy/paste still works correctly with the ping functionality.

[1] e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=229050 though I'm sure there have been many other bugs filed on it.


I used to use “Google/Yandex Search Link Fix” but it died along with XUL https://github.com/palant/searchlinkfix


Works fine in practice. I've had it installed for years and didn't even know it's not being maintained anymore until I saw your comment.


There's also "Don't Track Me Google": https://github.com/Rob--W/dont-track-me-google which seems to work pretty well, including on Firefox for Android.


Google is a cancer on the web that has metastasized and become malignant.


It is also possible to use user scripts and Violentmonkey on mobile: Kiwi browser [1], based on Chromium, supports browser extensions as well as full Developer Tools. :)

[1] https://kiwibrowser.com/


I wrote a simple addon that avoids just that https://github.com/dandanua/copy-true-link

The code doesn't prevent event propagation; instead, it copies the link before propagation happens. I guess this way is more reliable. It works on other sites too, like FB.
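The trick, roughly (a sketch of the approach, not the addon's exact code): a capture-phase listener on the document runs before the page's own handlers, so the clean href can be snapshotted before anything rewrites it.

  document.addEventListener("mousedown", (e) => {
    const a = e.target.closest("a");
    if (a) a.dataset.trueHref = a.href; // stash the URL before the page mangles it
  }, true); // true = capture phase: runs ahead of the page's bubbling handlers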


While it doesn't modify the behavior of any sites, Intercept Redirect automatically skips most redirect services. It's a dead simple implementation that attempts to require the bare minimum permissions to do the job.

https://intercept-redirect.bjornstar.com


Anyone on Safari who wants to run this script - UserScript (https://apps.apple.com/us/app/userscripts/id1463298887) seems to work well


It's certainly a bad user experience to not be able to cut and paste links.

It's also kind of a bad web search experience to have actual web search results hidden below ads and Google properties.

On the other hand, "Google search" is very good for searching Youtube and other Google properties.


Speaking of Google web products not working with Firefox: Google Meet backgrounds. What's most frustrating there is that it used to work until they disabled it to just throw up an ad for Chrome (i.e. use a "supported" browser).


Easier way:

Use DDG (or some other search engine)


Could this also solve the problem on Facebook Messenger? It does similar mangling.


That would be awesome. It currently filters everything, which sometimes blocks websites for no good reason! (My websites are blocked, and FB pays no heed to my requests to unblock them.)


I wonder about the level of detail at which the W3C specifies the behavior of copying a link. Is it outside of the web spec, and thus potentially non-portable?


I use https://startpage.com/ instead

Good search results, with privacy


I have encountered a number of occasions when the Startpage results are frustratingly shallow, but the direct Google results are not. It was as though Startpage was not being provided a complete set of results.


If only Startpage didn't have an ad tracking company as an investor.


Bookmarklet version

javascript:(function(){window.addEventListener("mousedown",(event)=>{event.stopImmediatePropagation();},true);})()


Why are you still using google search anyway?


Thank you for this. I thought something was wrong when I tried to hover over links, or I was going crazy. This explains a lot.


psst, don't tell google, but sometimes I like to click the triple dots next to a search result and copy the url in the popup box to the clipboard to get the url without any of the tracking crap.

(I mostly do this when google's redirect page lags for some reason)


Don't worry, they probably also track that [...] click :)


Can't you just prevent most scripts on google.com from running, so this mangling doesn't happen?


they've been doing this for a very long time. didn't know about this ping attribute for anchors though.

i always just assumed it was for improving the index. the more a result gets clicked, the more relevant it must be.

it's kind of a zeroth-order optimization.


>the more a result gets clicked, the more relevant it must be

how? more clickbaity, yes. But how do you judge quality by the actions of the uninformed (clicking before viewing the content)?


google was originally based on pagerank, which was based on the idea that if you analyze the link structure of the web, you can assign quality scores to pages based on number of inbound links, and then use that quality score to propagate a high quality score to other pages that are linked to by pages with high quality scores. in other words: find the pages with reputations you trust, use their opinions to boost the reputations of other pages in the graph.

you could do the same for people. first off, a user looking at a search results page isn't uninformed, there's lots of signal in the results page for a search query: domain name, familiarity/recognition of domain name, abstract text quality (grammar/spelling), abstract text, spamminess, etc. for the trained eye, that's a good amount of signal, but who has a trained eye?

you could, say, have some ground truth rated webpages that you have human raters rate in house, and then you could use this to score actual users on the website in terms of who frequently picks the known best result. now you have a cohort of users who you trust in terms of clicking on quality search results.

now you just pay attention to what this cohort pays attention to and let their clicks materially boost the ranking of results.

this is just one over simplified way, i'm sure they do tons of stuff like this (with tons of other stuff to avoid abuse/seo/etc).
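the core pagerank idea fits in a few lines of power iteration (a toy sketch: uniform damping, no dangling-node handling):

  // scores flow along links each round until they stabilize
  function pagerank(links, d = 0.85, iters = 50) {
    const n = links.length;
    let r = new Array(n).fill(1 / n);
    for (let k = 0; k < iters; k++) {
      const next = new Array(n).fill((1 - d) / n);
      links.forEach((outs, i) => {
        for (const j of outs) next[j] += (d * r[i]) / outs.length;
      });
      r = next;
    }
    return r;
  }
  // pagerank([[1], [0, 2], [0]]) -> page 0, linked by both others, scores highest

the same trust-propagation shape works over the user/click graph described above.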


Google charges for ads per click, possibly mainly for that. Also, it's preferable to asynchronously ping the log on click rather than use a synchronous redirect hack.


i'd guess the mechanisms for ad clicks and search engine refinement are largely separate (with maybe some overlap for sponsored results? does google do that?).

the asynchronous ping attribute is new[1]. i'm pretty sure it was the mid-00s when i first noticed that search result links bounced through a redirect via google. (and i'm guessing it was added to reduce confusion when hovering and ameliorate copy/paste issues for search result links, but i don't know for sure)

[1] https://github.com/mdn/browser-compat-data/pull/9470 (april 2021)


Shoutout to whoogle


funny, i solved this problem by not using google



