
As a quick workaround, you can set a CSS filter on the whole page: either use dev tools to put a rule `filter: hue-rotate(60deg);` on the `body` element, or simply run `javascript:void(document.body.style.filter='hue-rotate(60deg)')` from the URL bar.


Nice hack, thank you! :-)


You can also use Chrome extensions like Colorblindly to change all the colors. I just tested it on the page (I am not colorblind) and the colors do change: https://chromewebstore.google.com/detail/colorblindly/flonia...


You can also use uBlock Origin; it has a section for your own filters:

https://gist.github.com/aclarknexient/c39c83f2f97c3c6b1c307c...


  benjdd.com##html:style(filter:hue-rotate(45deg))

Tested with uBlock Origin on Firefox Mobile.


In case you are not aware, you can put this sort of thing in a bookmark on the bookmark bar (both FF and Chrom{e|ium}, I assume other browsers too) for easy access. If you don't have the bookmark bar visible hit [ctrl][shift][B] to flip it on (and the same to flip it back off later if you don't want to keep it).
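
For example, a variant of the snippet above that toggles the filter on and off (just a sketch; the 60deg value is arbitrary):

  javascript:void(document.body.style.filter = document.body.style.filter ? '' : 'hue-rotate(60deg)')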


  let i = 0; setInterval(() => document.body.style.filter=`hue-rotate(${i++}deg)`, 16);

Disco mode!

(better to use requestAnimationFrame but I'm lazy atm)


There you go:

  let i = 0;
  function bump() {
      document.body.style.filter = `hue-rotate(${i += 2}deg)`;
      requestAnimationFrame(bump);
  }
  requestAnimationFrame(bump);


> True end-user programming and product manager programming are coming, probably pretty soon.

I'd rather place my bets on this new object-oriented programming thing. It will make programming jobs obsolete any day now...


I'm no expert and a bit tired, but: is the problem with hashing password + salt for a key just that it can be brute-forced with enough resources, or did I miss something?


Basically, yeah. The problem is that the difficulty of cracking it scales linearly with the attacker's compute, and a ton of work has gone into figuring out how to crack such hashes efficiently. Modern KDFs are memory-hard, and memory is a resource that's a lot harder/more expensive to scale than computing power.
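
For illustration, here's a minimal sketch of a memory-hard KDF using Node's built-in scrypt (parameters are examples only, not a recommendation):

  const crypto = require('crypto');

  const salt = crypto.randomBytes(16);
  // N=16384, r=8 forces about 128*N*r bytes (~16 MiB) of memory per guess;
  // that per-guess memory cost is what makes massively parallel cracking expensive
  const key = crypto.scryptSync('some user password', salt, 32, { N: 16384, r: 8, p: 1 });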


But as Thomas points out, this is almost certainly not a problem you should actually care about.

If the user of your new Shiny Goat service used the password "ShinyGoat", then all the memory-hard KDF shenanigans in the world won't help: attackers will guess "ShinyGoat", the guess is correct, and they're in.

If another user chose 32 random alphanumerics, then it doesn't matter if you just dropped in PBKDF2 with whatever default settings, because the attackers couldn't guess 32 random alphanumerics no matter what.

The KDF comes into the picture only for users who've chosen aggressively mediocre passwords. Not so easy attackers will definitely guess them, not so hard that it's impossible. Users who insist their "password" must be a single English word, or who insist on memorizing their passwords and so nothing longer than six characters is acceptable. That sort of thing. The attackers can guess these passwords, but they need a lot of guesses so the KDF can make it impractical.

That's just not a plausible scenario for a real-world attack, and therefore it should not be a focus for your attention. You should use a real KDF, but PBKDF2 is fine for this purpose; any time you spend arguing about which KDF to use, or implementing a different one, rather than fixing actual defects in your system's security, is a bad trade.


PBKDF2 is at least better than YoloPBKDF (which looks rather like PBKDF1). Besides brute-forcing, YoloPBKDF/PBKDF1 has a maximum key length (the length of the hash function output) whereas PBKDF2 can construct longer keys. PBKDF2 also uses a pseudorandom function like HMAC-SHA-1 instead of just a hash function, and I'm assuming that change was done because it strengthens the security in some fashion.
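
(A minimal Node sketch of that longer-key property, with example parameters: SHA-256 outputs 32 bytes, but PBKDF2 will happily derive more by running its PRF once per output block.)

  const crypto = require('crypto');

  const salt = crypto.randomBytes(16);
  // 64-byte key, twice the SHA-256 output size
  const key = crypto.pbkdf2Sync('some user password', salt, 600000, 64, 'sha256');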

In any case, if you have the choice of making "aggressively mediocre" passwords harder to crack, is there a reason not to do so?


> In any case, if you have the choice of making "aggressively mediocre" passwords harder to crack, is there a reason not to do so?

"All features start out with minus 100 points" (Eric Gunnerson, popularized via Raymond Chen)


If you can't stitch polygons together seamlessly, how can you be sure the background doesn't bleed through with sampling? Isn't computing the exact coverage the same as having infinitely many point samples? The bleed-through of the background would then also be proportional to the gap between polygons, so if that one's small, the bleeding would be minor as well.


No, in animated models, there is no gap between polygons. And if you only compute single-polygon coverage, you can’t determine whether for two polygons that each cover 50% of a pixel, they both cover the same 50%, or complementary 50%, or anything in between. In practice, systems like that tend to show something like 25% background, 25% polygon A and 50% polygon B for the seam pixels, depending on draw order. That is, you get 25% background bleed.
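
(Spelling out those numbers, assuming coverage is used as alpha with standard "over" compositing and B drawn after A: result = 0.5*B + 0.5*(0.5*A + 0.5*bg) = 0.5*B + 0.25*A + 0.25*bg, i.e. 25% background bleed.)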


But as I understand it, the article is about rasterization, so if we filter after rasterization, the sampling has already happened, no? In other words: isn't this about using the intersection of polygon × pixel square instead of single-sample-per-pixel rasterization?


This is about taking an analytic sample of the scene with an expression that includes and accounts for the choice of filter, instead of integrating some number of point samples of the scene within a pixel.

In this case, the filtering and the sampling of the scene are both wrapped into the operation of intersection of the square with polygons. The filtering and the sampling are happening during rasterization, not before or after.

Keep in mind a pixel is an image sample, which is different from taking one or many point-samples of the scene in order to compute the pixel color.


It is applying the filter before rasterization, and then taking a single sample of the filtered signal per pixel.


The problem is determining the coverage, the contribution of the polygon to a pixel's final color, weighted by a filter. This is relevant at polygon edges, where a pixel straddles one or more edges, and some sort of anti-aliasing is required to prevent jaggies[1] and similar aliasing artifacts, such as moiré, which would result from naive discretization (where each pixel is either 100% or 0% covered by a polygon, typically based on whether the polygon covers the pixel center).

[1] https://en.wikipedia.org/wiki/Jaggies
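
As a concrete illustration (my own sketch, not from the article): for the simplest filter, a box filter, a pixel's filtered coverage is just the area of the polygon clipped to the pixel square, which can be computed exactly with Sutherland-Hodgman clipping plus the shoelace formula. The function names here are made up for the example:

  // Exact box-filter coverage of one polygon over the pixel
  // [px, px+1] x [py, py+1]: clip the polygon to the pixel
  // square, then take the area of whatever is left.
  function clipHalfPlane(poly, inside, cross) {
    const out = [];
    for (let i = 0; i < poly.length; i++) {
      const a = poly[i], b = poly[(i + 1) % poly.length];
      if (inside(a)) {
        out.push(a);
        if (!inside(b)) out.push(cross(a, b)); // leaving the half-plane
      } else if (inside(b)) {
        out.push(cross(a, b)); // entering the half-plane
      }
    }
    return out;
  }

  const lerp = (a, b, t) => [a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])];

  function pixelCoverage(poly, px, py) {
    let p = poly;
    p = clipHalfPlane(p, v => v[0] >= px,     (a, b) => lerp(a, b, (px - a[0]) / (b[0] - a[0])));
    p = clipHalfPlane(p, v => v[0] <= px + 1, (a, b) => lerp(a, b, (px + 1 - a[0]) / (b[0] - a[0])));
    p = clipHalfPlane(p, v => v[1] >= py,     (a, b) => lerp(a, b, (py - a[1]) / (b[1] - a[1])));
    p = clipHalfPlane(p, v => v[1] <= py + 1, (a, b) => lerp(a, b, (py + 1 - a[1]) / (b[1] - a[1])));
    // shoelace formula; the pixel has unit area, so area == coverage
    let area = 0;
    for (let i = 0; i < p.length; i++) {
      const a = p[i], b = p[(i + 1) % p.length];
      area += a[0] * b[1] - b[0] * a[1];
    }
    return Math.abs(area) / 2;
  }

For example, pixelCoverage([[0,0],[1,0],[0,1]], 0, 0) returns 0.5, a triangle covering half of pixel (0,0). A non-box filter would replace the plain area with an integral of the filter weight over the clipped region.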


This kind of makes sense from a mathematical point of view, but how would this look implementation-wise, in a scenario where you need to render a polygon scene? The article states that box filters are "the simplest form of filtering", but it sounds quite non-trivial for that use case.


By that logic, any product/organization that generates revenue mainly through ads would be an ad company. Search is the product that (probably) carries most of the ad revenue, so that's the main product, and ads are the means of generating revenue around it.


> By that logic, any product/organization that generates revenue mainly through ads would be an ad company.

What logic? The person you’re replying to didn’t explain their reasoning, so any logic you’re seeing is being constructed in your own head and projected onto someone else. In other words, you’re likely responding to an argument you’ve seen (and disagreed with) before, instead of to what that poster had in mind (which may or may not jibe with what is in your head).

Google doesn’t just show ads; they track you and have the infrastructure to sell your information to people who buy ad space. That is fundamentally different from a website that makes money by showing ads, many of which they don’t pick themselves. So yes, Google is an ad company. And they’re an ad company “first” because that’s where their efforts are, not because of the revenue. YouTube, Chrome, their web proposals: it all serves the same goal of ads, ads, ads, and keeping Google’s dominance in the space.


Last I checked, Google doesn't 'sell your information to people', but it does offer hyper-targeted ads; the ethics of both scenarios are deeply rotten to me.


> Last I checked Google doesn't 'sell your information to people', but it does offer hyper targeted ads

So you understand what I’m talking about. I didn’t mean selling the information directly. Why would they? They make more money by keeping the data to themselves and selling you out indirectly, over and over, using the information they’ve gathered.


It's digital pimping. Google are fully-automated, mass-scale digital pimps. They pimp your eyeballs out to Johns who pay for the privilege of mindfucking you, with the help of an extremely sophisticated matchmaking and realtime auction system. In return, you get nice handbags (YouTube) and get your hair did (GMail).

Calling it "ads" and Google an "advertising company" is just making a vague allusion to what's really going on and does not carry the proper connotation of exploitation.


Yes, because they get a competitive advantage when they hold all that information for themselves.

They have your email (gmail), location history (google maps), search history (google), viewing history (youtube) and know pretty much every site you've visited (chrome + ad network).

This is the data they aggregate and sell access to, so that advertisers' ads can target highly specific groups of people. They don't want the advertisers getting the dataset directly; that would be competition.


Reposting this to counter the narrative that they don't sell your information: https://news.ycombinator.com/item?id=40636844#40642672

First-hand account from me that this is not factual at all.

I worked in advanced analytics at a major “big 5” media-buying agency; we were a team of 5-10 data scientists. On behalf of our client, a major movie studio, we got a firehose from “G” of searches for their titles, broken down by zip code.

On top of that, we had clean-roomed audience data from “F” on viewers of the ads/trailers who also viewed ads on their set-top boxes.

I can go on and on. And yeah, we didn’t see “Joe Smith” levels of granularity, it was at the zip-code level, but to say FAANG doesn’t sell user data is naive at best.


Yes. I would characterize a company that mainly generates revenue through ads as an ad company.


Sure, I'm fine with that. It just makes sense. Newspapers have been thought of as ad companies for centuries.


I feel personally attacked


Do I understand correctly that a low swappiness basically also means higher priority of anon pages over disk caching, simply because the former uses more space and the latter "runs out" eventually? I.e. low swappiness would achieve what you want, sort of?


Although this author seems pretty reasonable with regards to actual privacy (preferring self-hosted software), I find the popular echo chamber of "avoid Google, they steal your data" quite misguided.

Yes, Google accumulates data and does stuff with it. But Google also has rigorous processes to lock down data and access to it, unlike virtually any small-to-medium cloud software provider. I've heard crazy stories, like people at one such company looking up their friends' health insurance details for fun, just because almost everyone on the engineering side had access to the production database.

Plus, Google is so large that it constantly receives attention from public institutions, which makes it harder to pull off shady stuff without getting caught. If <random SME> sells your data to the highest bidder, who will spam you with cold calls, no one's gonna bat an eye.


Don't forget any large hoard of data is ripe for government abuse. It might not be a cold caller but a police department parallel constructing you into a crime conviction using faulty GPS data.


Never thought of it that way: with enough data, something that looks suspicious (for some definition thereof) can likely be found, even if just due to software errors. Looked at in isolation, that could be as convincing to people as real evidence.

This would basically be the same as p-hacking.

