For our dating site, which of course has to deal with many princes, Nigerian or otherwise: when we manually verify an account to be a scammer, we reject logins with a message stating that the IP address has been blocked. Scammers will usually go through all of their VPNs/bots in order to try to log in, allowing our system to flag them all.
We'll manually review all accounts that use (more than one of) those IP addresses. Works like a charm! :-)
Yes, although I would add an attention threshold too, as it's not entirely unknown for hired manual reviewers to just spam the "guilty" button so they can get to lunch. In any case: your false positive rate needs to be massively low if you want to be a massive asshole to the people it flags -- or else you are just an asshole.
If you can afford to get the FPR down, sure, have fun, but if not, please have the decency to not pretend.
You can implement a jury-trial system: have a pool of moderators, select a few at random, and have them look at the account, only flagging it if there is a consensus that it's a scam account.
Admittedly, there is the occasional false positive. For such cases, we display an email address right underneath the error message. Scammers rarely dare to complain, and when they do, they are usually not very convincing.
So, the problem I see here is when spammers abuse someone else's machine to conduct activity like this, and all those random people get their IP addresses blocked by your system.
And how would the legitimate owner of that IP address ever know how to contact you to get removed from your blacklist?
You can just anonymize stuff, right? dbo.naughty_ips does not need to be linked to any real people and I do not need to keep records of why any IP address got placed on that table.
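As a sketch of that idea (the table name comes from the comment; the pepper and function names are made up, and whether a keyed digest of an IPv4 address counts as anonymized under GDPR is a question for lawyers, not this snippet):

```python
import hashlib
import hmac

# Hypothetical server-side secret. Without it, an attacker who steals only
# the table can't brute-force the small IPv4 space back to addresses.
PEPPER = b"rotate-me-regularly"

def ip_digest(ip: str) -> str:
    """One-way keyed digest so dbo.naughty_ips never stores raw addresses."""
    return hmac.new(PEPPER, ip.encode(), hashlib.sha256).hexdigest()

naughty_ips = set()  # stands in for the dbo.naughty_ips table

def flag(ip: str) -> None:
    naughty_ips.add(ip_digest(ip))

def is_naughty(ip: str) -> bool:
    return ip_digest(ip) in naughty_ips
```

Membership checks still work, but the stored values are opaque digests rather than addresses tied to people.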
Indeed. Dating sites have a legitimate (and I'd say moral) need to protect their customers from all kinds of nasty business. If the only way to do that is through the use of PII, and that use is well-documented in the privacy statement, and the data is not being used for unrelated purposes, this should be well within the bounds of GDPR.
In the past three and a half years I have witnessed four cases in which this exact method (cross-linking remote IP addresses to detect spammers/attackers/bots/etc.) has been an issue with GDPR. But I am sure those downvotes and the general tech-centered HN'y wave-off as misinformation have a better standing in EU courts these days, since the fear-mongering GDPR hype seems mostly over.
People can claim it all day long but it was determined that IP addresses are only PII in the hands of an entity who can actually associate it with a person, like an ISP.
Yes, you are right. No way for a dating site for example (as stated by the original comment) to make a relation between an IP address and the person behind it. It's all fake profiles or some other strawman argument anyway, right? Like who uses his real name, address or even picture for something like that?! That'd be just ridiculous ...
It sounds like that's what they're doing, in order to find other spam accounts:
> We'll manually review all accounts that use (more than one of) those IP addresses.
Obviously only vanviegen knows what they're doing, but here is what I'd do (IANAL!):
1. Identify offender (scammer/spammer) using other methods like manual review
2. Block offender as described, and only now start logging the IPs for them (claim: at that point it's legitimate interest)
3. If another user now uses one of the IPs, assume they're also offenders and log their IPs as well to weed out false positives (claim: they use the known offender IPs, so there is a good chance they're also offenders -> leg. int.)
4. Ban all actual offenders and delete associated IPs for false positives.
It's possible they're doing this flow and just simplified it for posting here.
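A minimal Python sketch of steps 1-3 above (all names hypothetical; a real system would persist this and involve manual review at every step):

```python
from collections import defaultdict

offender_ips = defaultdict(set)   # flagged account -> IPs logged for it
ip_accounts = defaultdict(set)    # ip -> accounts seen using it
confirmed = set()                 # manually confirmed offenders

def flag_offender(account):
    """Steps 1/2: manual review confirmed a scammer; start logging their IPs."""
    confirmed.add(account)

def record_login(account, ip):
    """Steps 2/3: log IPs for known offenders, and surface any other
    account sharing an offender IP as a candidate for manual review."""
    ip_accounts[ip].add(account)
    if account in confirmed:
        offender_ips[account].add(ip)
    known_bad = set().union(*offender_ips.values()) if offender_ips else set()
    if ip in known_bad:
        return sorted(ip_accounts[ip] - confirmed)  # review these, don't auto-ban
    return []
```

Step 4 (banning confirmed offenders and deleting IPs logged for false positives) would then operate on these structures after review.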
Saving the IP/geolocation could also be legitimate interest to identify altered locations. E.g. say you're US-based and suddenly log in from $abroad; they could send you a 2FA mail to secure your account.
Even with all that, the IP address itself still doesn't represent a person in the hands of that dating site.
An ISP can identify which IP address has been assigned to your phone, at what time, on what tower, and exactly when that IP address changed. It can also associate the device itself with the IP address.
An IP address on a cable modem can be associated with a particular account for a house or a business office, but even it can't positively identify the person in the house or at the business who was using it to connect to a particular website.
And yes, as you said, anybody can create a fake profile. A coworker could create a fake profile of you on a dating site if they wanted to, and that IP address still doesn't positively identify you.
The name, address, photo...all of that is absolutely PII and covered by GDPR.
The IP address isn't and is also used for legitimate security purposes. People trying to get them scrubbed under GDPR are overreaching on a piece of data they have no right to have scrubbed.
Ok, I have no issue with tactics like these when they're wasting spammers' time. But sometimes it seems like real users get caught up in these honeypots for scammers and hackers.
A lot of the crap real sites make people go through (e.g. when they lose access to their account, log in from a VPN, or the site just "can't verify their identity" for some reason) really does seem set up to intentionally waste people's time: you go through a bunch of hoops and captchas, only to have some step fail or reach a dead end.
For example, Steam has a system where if you enter too many invalid passwords, it will present you with a captcha which you can never actually solve. It's a lot more annoying than just saying "you have been locked out of trying to log in for X hours".
But this, this is fine. It's pretty clear that the person you're targeting is a spammer, and it's pretty clear to the user after about 60 seconds that your password system is a joke.
> For example, Steam has a system where if you enter too many invalid passwords, it will present you with a captcha which you can never actually solve.
I call this "login gaslighting" and it's evil. Pioneered by the "don't be evil" company.
ReCaptcha does a similar tactic but rather than unsolvable it's a stream of the most annoying captcha -- "select all of image until none are left". Fail one and you're back at the start. You do have the option to cycle captcha, but 9/10 times it'll be this one. Eventually you'll get locked out of captcha entirely. Anyone who has used Tor on Google has probably experienced this.
> ReCaptcha does a similar tactic but rather than unsolvable it's a stream of the most annoying captcha -- "select all of image until none are left".
In such a situation, I often think: isn't the fact that one makes "stupid mistakes" when attempting to solve a ReCaptcha rather a sign that the entity that is attempting to solve it is a human?
The worst captcha has got to be Rockstar games support. You have to click all the images of rolled dice that sum up to 13, 5 times in a row. Then another 5 for some reason. If you make one mistake, you go back to 0.
So that's what that was... Was trying to do something legit, MS gave me a puzzle to solve, it was unsolvable in the time given, it wasted maybe 20 mins. Can't remember what it was, I think create an account for visual studio (you had to sign in to an MS account to keep using free VS, the wankers).
Blocking people leads to them searching for ways around your block really quickly. Making them waste time without realizing they have been blocked, such as with endless retries or shadow bans, is much more effective at making them stop bothering you for a while longer. Time spent doing this is time they can't spend being malicious on your platform.
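A tarpit version of that idea can be very small. This is a sketch with made-up names; the flag list, delay, and `real_check` callback are all placeholders:

```python
import time

FLAGGED = {"scammer@example.com"}  # hypothetical shadow-banned accounts

def check_login(user, password, real_check):
    """Flagged users always fail slowly, regardless of credentials, so they
    keep retrying instead of realizing they've been blocked."""
    if user in FLAGGED:
        time.sleep(0.1)  # in production, a larger or growing delay
        return False
    return real_check(user, password)
```

The key property is that the response for a flagged user is indistinguishable from an ordinary wrong-password failure.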
It's unfortunate when a non-malicious user gets caught in one of these traps...
"It's easier to just punish everybody than to single out the person actually deserving of punishment" is actually a common defense of collective punishment.
The equal and opposite response would be "Sounds like you never had to actually deal with such a usability problem yourself", but I'm not interested in trying to devolve this discussion into one about you and me instead of the topic.
Not being able to access your mail archive going back a decade could be a huge problem for some people.
Sure, it's not as bad as getting sent to the gulag. But the problem with most of these tech companies is far bigger than not being able to post a meme.
I do believe you're misapplying Blackstone's Formulation. I think it's perfectly fine to presume guilt in some cases when operating a public service. This can be reformulated as making users go through extra steps designed to screen bad actors. We in fact do this at places like airports (and increasingly schools) all the time.
For instance if your user happens to be an unwitting botnet member (or even if he has a newly assigned IP previously belonging to one) - his IP will be suspicious and you can "punish" him accordingly. Also Blackstone's ratio was 1/10, not 1/1000.
I tried to sign up for Steam and my long, complex password seemed to trigger a never-ending stream of captchas. Also, just today Ticketmaster decided my Firefox browser was a bot and blocked me. Fun times.
You are lucky. I haven't been able to use Ticketmaster for 2 years because all IPs from my ISP are blocked as bots. I contacted their support on Twitter and they told me the only way to use their site is to change my ISP, as even the VPNs I tried are blocked. Looks like they have enough money to afford the luxury of blocking one of the biggest ISPs where I live.
I understand the initial idea to block this known neo-Nazi short handle (8 for the letter H and 88 as HH standing for the 'Heil Hitler' salute in these circles).
But how many people do I know who were born in '88, or on the 8th of August?
I understand that given the login is your public visible name on steam they just don't want clear neo-Nazi signifiers.
I bet for every one neo-Nazi they block, there are hundreds of birthdays or Chinese users [0]. It just seems overly sensitive to me, when you wouldn't even know a person is a neo-Nazi unless they start saying neo-Nazi shit. Saying neo-Nazi shit is in itself grounds for a ban, so why block the number 88 at all? How could anyone possibly be offended by the number 88 alone?
If Steam never saw your real password, the hash of the password would itself become the password, and Steam would be storing your password in plaintext.
In order for password security to work, you have to send Steam your actual password, which they then check against the hash themselves. So at some point, Steam will have your password in plaintext.
No, not really. I'll give you an example that's not really secure, but should illustrate one possible method:
1. You produce an "authentication hash" X = hash(normalize(your_username) + your_password) and send X to the server.
2. The server computes Y = hash(X) and checks Y against the stored hash.
Now you're not sending the plaintext password to the service (e.g. Steam), and Steam is also not storing the "raw" authentication hash X on the server either. Yes, a manipulated client can send a stolen X instead of a stolen password (but in reality it's a reused password that's been stolen, not X). The advantage is that a compromised server will then not be able to log the plaintext password for credential stuffing.
In case anyone thinks about using the above scheme: Don't. It's merely an illustration for one specific property. Other than that it is PAINFULLY flawed in many ways.
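To make the two steps concrete, here is a literal Python rendering of the scheme described above. Same caveat as the comment: it is deliberately simplistic and only illustrates that the server never sees the plaintext password; function names are made up:

```python
import hashlib

def client_auth_hash(username, password):
    """Client side: X = hash(normalize(username) + password).
    The plaintext password never leaves the user's machine."""
    normalized = username.lower().strip()
    return hashlib.sha256((normalized + password).encode()).hexdigest()

def server_store(x):
    """Server side at registration: store Y = hash(X), never X itself."""
    return hashlib.sha256(x.encode()).hexdigest()

def server_verify(x, stored_y):
    """Server side at login: recompute hash(X) and compare."""
    return hashlib.sha256(x.encode()).hexdigest() == stored_y
```

A compromised server here could still capture X and replay it, which is one of the flaws the comment alludes to; real protocols with this property (e.g. asymmetric PAKEs) are discussed further down the thread.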
You're just adding a step for generating the hash-as-password from some other information. Just because you label something in that generation step the "password" doesn't mean you're avoiding the hash being the password. From your and Steam's perspectives, "X" is still a secure value that Steam has to see in plaintext and not store.
I simplified my other post for better readability and just realized I made a stupid mistake doing so; I replaced a public per-user salt that's to be queried during login with "normalize(username)". In the given variant it would actually be possible to perform credential stuffing if both use exactly the same hash function and if I use the exact same username. However, if a user-specific (or at least site-specific) public salt is added, this doesn't work anymore. That's what I deserve for writing this by heart instead of putting it into ProVerif first. Maybe that bad simplification caused the misunderstanding?
----- edit, original post. feel free to answer if you still disagree :)
Now we're talking labelling. To me, the password is what I, as a human, enter in some login form. What's sent to the server is derived from that. In one case the derivation function is just the identity; in the other case it's a trapdoor permutation (with a public salt). For the authentication flow it's quite similar, yes, and for many kinds of attack I wouldn't care what I have (e.g. PtH on Windows) -- but for the user there is a huge difference between memorizing and entering "7110eda4d09e062aa5e4a390b0a572ac0d2c0220" versus "1234".
Let me pose a scenario and ask you a question: Assume I'm dumb and my login on HN as well as Steam is archi42 with password s3cr3t. Now the simple "sent password in the clear to the server" allows GabeN [president of Valve/Steam] to log my credentials and post spam on HN. With a trapdoor derivation function that's not possible anymore [this is where I realized my bad simplification].
So if the two are exactly the same thing, why does that attack work in one instance, but not the other?
(edit2: if you answer this, assume that there is a user-specific, public "salt" A that the client queries from the server prior to computing X = hash(A + password))
To me, you're overthinking the original comment. The particular line
>Also, steam should never even see the password, they should only ever see the hash.
is, if you interpret it generously, trivially true. There are 101 authentication mechanisms where Steam doesn't "need" to see a password (i.e. some secret information that is remembered by the user.) As you point out, the password can be hashed and even salted before transmission.
Alternatively, Steam could authenticate with e.g. a public/private keypair, in a way that would be immune to replays of the authentication protocol, while never seeing or storing any sensitive info.
But I find it hard to believe that the original commenter's objection to Steam seeing your password is based on any of these alternatives. The comment didn't say "Steam should be using a protocol where they don't see your password", and I don't think many hackers have such an opinion about any service, given how prolific just basic username+password authentication is on the web.
My original reply was based on the interpretation that the commenter had misunderstood how username+password authentication interacted with password hashing, both of which are technologies used in 99.99% of web services - rather than a more esoteric approach which somehow justifies the idea that "Steam shouldn't see your password".
No. In order for password authentication to be something a five year old can do by pasting PHP code they found in a Stack Overflow search, that is how it works.
But algorithmically, even if you want passwords (you don't in most cases; use WebAuthn, for example, for web site authentication), you can use an asymmetric PAKE such as OPAQUE.
This is quite a bit more complicated than the one line PHP password stuff you pasted from Stack Overflow, but the user's password never leaves their machine, and so the Relying Party doesn't know the password, and yet they can verify that the user does know the password which they originally chose for the site.
When people downvote something I wrote because it's sharing an opinion they don't like, I kinda get it; that's not really what HN downvotes are for, but sure.
However in cases like this what I wrote was just a fact about a world which they weren't aware of, I'm not sure what they hope to achieve by downvoting.
ivanbakel wrote "In order for password security to work, you have to send Steam your actual password" and that's not true. It's not going to become more true if you can just delete my comment explaining why it's not true, that's not how our universe works.
It seems pretty obvious that you were downvoted primarily for the entirely unnecessary "5 year olds copy-pasting from Stack Overflow" bit. "Actually, this isn't true anymore; a few years ago (afaik OPAQUE is from only 2018?) they found a protocol that solves this:" would've been a much more productive start to that comment.
Somehow I thought OPAQUE had some important change that made it practical compared to previous variants, but now I can't find what that would be or why I thought that, so yeah, the reference to the date is indeed irrelevant I think.
>Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
Troy, watch out you don't open yourself up for an attack from the bad guys: They'll start sending you solicitations with ReplyTo addresses of industry honeypots, and before you know it, you'll become a known spammer and your regular outgoing emails will be routed to recipient's spam folders or maybe even dropped entirely.
He covers this in the article. He manually moves an email to a different folder, maybe about 2 seconds worth of effort, which then triggers everything else and it is hands-off from there.
But how does the palindrome rule work with “password must start with ‘cat’” and “password must end with ‘dog’”? It seems impossible to satisfy these three.
Having the conditions contradict each other serves as a proof that it’s impossible to create a password; I thought this information shouldn’t be revealed to the user.
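The contradiction is easy to see: a palindrome ending in "dog" has to begin with "god" (the reversal of "dog"), so it can never also begin with "cat". A tiny sketch of the three rules:

```python
def meets_all(pw):
    """The three rules: palindrome, starts with 'cat', ends with 'dog'."""
    return pw == pw[::-1] and pw.startswith("cat") and pw.endswith("dog")

# No string can pass: ending with "dog" forces a palindrome to start
# with "dog"[::-1] == "god", contradicting the "cat" prefix rule.
```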
The emoji rule is particularly annoying on iOS; there, the password keyboard (i.e. the virtual keyboard used on "password" form input fields) is different and doesn't support entering emoji.
This reads a bit much like an ad. I sure have to scroll through a lot about Microsoft, Cloudflare, etc. before the funny password requirements I came for, at the verrrry end.
This is a Microsoft sponsored tech talk. Advertising for MS is one of Troy's businesses, as he discloses in his bio on the page. And the banner at the top says this particular post is sponsored by Cloudflare.
I once attended an MS workshop at my school on Azure. The speaker kept saying "open x page in Edge", until there was a page that didn't render properly in Edge. He was very hesitant to say Chrome.
This is a cool project, but I would be more concerned that replying to spammers confirms you are real, and that could result in much more spam. So is the net increase in pain your own?
There are simpler and more effective ways to waste spammers' time. First of all, I can't remember the last time I've gotten email spam that expected a response. On the other hand, phone spam, which is much more disruptive, is usually trying to screen me briefly then funnel me to a scammer.
So I pick up spam calls, press 1 immediately, then put the phone back in my pocket. This usually connects it to a real person who hears ambient noise, thinking I'm nearby. Usually I waste like 60sec of their time for 2sec of my time. It's hard for them to protect against this because no matter what, they need some victims to talk to the real person, unless they develop a very smart AI. But a relatively simple bot with a list of likely scam numbers could automate the fake victim's side.
A colleague was dealing with more advanced scammers who had already made some progress with his unaware mother. Their scam was unique in that it required calling them back. He managed to collect all the phone numbers they were using, then he put up fake Craigslist ads for free couches... and you can guess the rest.
I tried a similar approach but it resulted in me being spammed way more frequently. I assume pressing 1 flags your number as "likely to respond" and the database is then sold to other scammers. Also, collecting numbers doesn't work because all the spam calls I receive are from spoofed mobile numbers which change each time. The craigslist trick just punishes some unsuspecting person, not the scammer.
Yes, it probably adds you to their lists. Somehow, after doing this for years, I'm probably only getting one spam call each day, maybe because my carrier did something to cut down on them.
The Craigslist trick worked because of a unique situation. They were using real numbers because their scam relied on being called back. He did call manually to make sure.
I assume your starting password rules deliberately set the bar low to encourage PRs to improve it, since I can think of much more believable, infuriating, tedious ways to drag this out longer, keeping the user thinking they're always one step away from a valid password without being obviously silly.
Believable but stupid requirements I've seen in the wild, from the bad early days of complexity requirements:
- your password contains a common word
- your password contains one or more repeating characters
- your password contains a forbidden character
- your password needs at least one additional uppercase letter
- your password needs at least one more distinct special character
- your password cannot end with a special character
- your password contains an escalating series of numbers
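A sketch of how a checker built from rules like these always seems one fix away from success: it returns only the first complaint, so each fix surfaces the next one. The rule wording is taken from the list above; the code itself is illustrative, not any real site's logic:

```python
import re

RULES = [
    (lambda p: len(p) >= 8,
     "your password is too short"),
    (lambda p: re.search(r"[A-Z].*[A-Z]", p),
     "your password needs at least one additional uppercase letter"),
    (lambda p: not re.search(r"(.)\1", p),
     "your password contains one or more repeating characters"),
    (lambda p: len(set(re.findall(r"[^a-zA-Z0-9]", p))) >= 2,
     "your password needs at least one more distinct special character"),
    (lambda p: not re.search(r"[^a-zA-Z0-9]$", p),
     "your password cannot end with a special character"),
    (lambda p: not re.search(r"012|123|234|345|456|567|678|789", p),
     "your password contains an escalating series of numbers"),
]

def complain(password):
    """Return the first failing rule's message, or None if all pass."""
    for ok, message in RULES:
        if not ok(password):
            return message
    return None
```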
I've seen a real site where the minimum password length was more than the maximum password length. Of course, if you know that you'll stop wasting your time. But if the error is just "your password is too short" or "your password is too long" it might take several tries to figure out it's impossible to satisfy the requirement.
Twitch complained that my password longer than 16 characters exceeded the 40 character limit.
But the worst I've seen was a registration form that truncates long passwords to the (hidden) maximum length of ~10 without telling you, so anyone choosing a safe password cannot log in and won't know why.
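A sketch of the bug being described, assuming registration silently truncates but login hashes the full input (names and the limit are made up):

```python
import hashlib

MAX_LEN = 10  # hidden limit nobody tells the user about
users = {}

def register(user, password):
    # Bug: silently truncate before hashing, without any warning.
    users[user] = hashlib.sha256(password[:MAX_LEN].encode()).hexdigest()

def login(user, password):
    # Login hashes the full password, so long passwords never match.
    return users.get(user) == hashlib.sha256(password.encode()).hexdigest()
```

The perverse result: the full password fails, while typing only its first ten characters would succeed, which nobody would ever guess to try.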
VNC does this too, with its 8-character limit. Stupid design decision.
Even more stupid, though, is their declaration that encryption is 'out of scope' and anyone wanting it should arrange it out of band (e.g. VPN or SSH forwarding). Seriously... :/
Well, given their track record, they are very correct in their recommendation to allow localhost connections only and tunnel any traffic through SSH. I mean, would you trust them to enforce the security of their server?
(It would be better if they only allowed pipeline connections and actually required that you run the data through ssh. But I bet they didn't notice people have all kinds of untrusted software running on localhost.)
I agree with that recommendation (it's absolutely not advisable to expose it to the internet even if it were encrypted) but that's where defense in depth comes in.
It's not supposed to be the only level of security but using unencrypted protocols in this day and age for something as sensitive as server control is unforgivable.
For example, even when tunneling through SSH, other people on either end can still sniff the traffic if they are on localhost. Port forwarding is not a very safe technique, since it doesn't let you limit which local user can use the forwarded port.
I do respect people who say "I don't know how and don't want to learn how to solve this hard problem, so I'm explicitly leaving it unsolved", as long as that "explicitly" part is real.
And yeah, I would probably use vnc if the protocol was over a pipeline, like scp or rsync. As it is now, it's a program to avoid.
Defense in depth is only useful for vulnerabilities that you can't solve to a satisfactory level. You should be able to publish a high-quality access server on the internet without any loss of security.
Paypal did the silent truncation to me at 20 characters once, what a nightmare. I can't even remember how I figured it out, probably some other poor soul left a breadcrumb for me.
You also need to only give the feedback on password quality after the user has entered it twice; until then, "passwords do not match" is the only piece of info.
> Because it would be rude not to respond, I'd like to send the spammer back an email and invite them to my very special registration form.
Don't do that. No, really, don't.
Okay, you didn't listen and did it anyway; please, at least don't automate it or semi-automate it where you're just doing it with one click.
> Spammer burned a total of 80 seconds in Password Purgatory
So you think, based on the belief that when you reply to the spam, it goes back to the spammer.
That may not be the case; when you engage spam, you are possibly generating "backscatter"; a person having nothing to do with the spammer may receive the e-mail.
Spam messages are not always relying on someone replying to them to hook in the victim. Sometimes there is no hook at all, or sometimes the hook is in the HTML links, and not in replying. (They additionally hope that if you reply, the person you are replying to will also get the spam e-mail, since it is quoted, and that person will click on the links.)
Back in the early 2000s, I'd written a simple ASP page that produced an infinite number of random email addresses, page by page. Had any crawler bot gotten caught up in it, it would keep filling its database with these nonsense addresses. I'd distributed its source code, too. Troy Hunt's project made me remember it.
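The same trap is a few lines in Python these days (a sketch, not the original ASP; seeding the RNG by page number keeps each page stable across requests while the supply of pages stays endless):

```python
import random
import string

def fake_emails(page, per_page=50):
    """Deterministically generate one page of nonsense addresses, so every
    'next page' link a harvester follows yields fresh poison."""
    rng = random.Random(page)  # same page number -> same addresses
    def word():
        return "".join(rng.choice(string.ascii_lowercase)
                       for _ in range(rng.randint(5, 10)))
    return [f"{word()}@{word()}.com" for _ in range(per_page)]
```

Serving `fake_emails(n)` for any requested page `n`, with a link to page `n + 1`, gives a crawler an infinite corridor to walk down.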
Nice! I like things that keep spammers and scammers busy.
My own low-effort method is to accept mail for any domain on my name servers. Spammers think they are relaying their scams, but it just goes to a flat text file. It isn't like I try to hide it; the banner even says it's a honeypot and not to use it.
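The core of such a sink, minus the actual SMTP plumbing, could be as small as this sketch (file and function names are hypothetical):

```python
from datetime import datetime, timezone
from pathlib import Path

SPAM_LOG = Path("honeypot_mail.txt")  # the flat text file

def accept_mail(sender, recipients, body):
    """Accept mail for ANY destination domain and just dump it to disk;
    the spammer believes the relay succeeded."""
    stamp = datetime.now(timezone.utc).isoformat()
    with SPAM_LOG.open("a", encoding="utf-8") as f:
        f.write(f"--- {stamp} {sender} -> {', '.join(recipients)}\n{body}\n")
    return "250 OK"  # always succeed, whatever the domain
```

Hooked into any SMTP server library's message handler, this records everything and relays nothing.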
The complexity requirements pretty quickly become unreasonable, to the point that I would have realized they weren’t serious after like the 2nd try.
To be really evil, Troy should play with the password field: make it not a text or password field, but rather some sort of custom input field that doesn't work with password managers and doesn't allow pasting.
Also maybe return errors sometimes that are themselves erroneous.
A better idea would be to use custom inputs that produce "typos" the user didn't make. E.g. you have a zip/postal code field and your input sneakily swaps 2 neighbouring characters at some point, resulting in the error "this zip code doesn't exist". Or change an 8 to a 9, etc.
Or you could make a "check your input one more time before confirming" step and display typos in e.g. names/emails there.
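A sketch of the character-swap sabotage described above (purely illustrative; please don't ship this):

```python
import random

def sabotage(text, rng=None):
    """Swap two neighbouring characters at a random position, so the
    resulting 'typo' looks like the user's own mistake."""
    rng = rng or random.Random()
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]
```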
I've seen worse password purgatories in the wild. One was the Princeton undergrad acceptance (or should I say rejection) portal, which for some reason required registration even though I was entering a key from an email. It was something like:
1. marcopollo – Password must contain at least two numbers.
2. marcopollo11 – Password must not begin or end with a number.
3. m1arcopoll1o – Password must not contain two of the same number.
4. m1arcopoll2o – Password must contain at least one special character (! ? & % $ # @).
5. m1arcopoll2o! – Password must not end with a special character.
6. m1arcopoll2!o – Password must not contain 3 or more of the same letter.
7. m1arcopoll2p! – Password must not contain 2 of the same consecutive character.
8. I forget, but it kept going.
At some point, I gave up and started generating random passwords. The first 3 attempts were still not accepted. In a way, those restrictions were actually reducing the entropy.
This will work on some spammers but not forever. This is an infinite cat and mouse game.
For better or for worse, "publicizing spammers pain for our pleasure" has a guaranteed effect of shortening the useful lifespan of this tool. Unless of course no spammers ever read that article, OR HN.
I like the idea, but upon pondering, I think it could be made better by imitating other dark patterns:
1. Ask for a username, only offer "OK"
2. Upon OK: Wait 2-3s while showing an ajax spinner, then add another box to the DOM, asking for an "e-mail"
3. Rinse and repeat with first name + last name; then company name; country (pick from a list of ~50 widely known country names, sorted by median age of the population - remove the IPs country of origin, so they have to pick "other" and enter it manually)
4. Tell the user the username doesn't match the expected format and offer to add a random "#1234" for them - upon doing so, back to square one (except their username is now "#1234" and not "scamoverlord#1234"). Make sure to flush the other info as well.
5.1. Tell them you're sending a verification mail (you don't). Offer to resend it after 30s.
5.2. Upon "try again", tell them first to check their mail address, and lock the "try again" for another 10s.
5.3. Now, after another 30s, tell them there must be an error with the mail gateway (there is no mail gateway) and offer them to continue; the verification mail is queued and will be sent later (-> you're sooo super userfriendly!).
6. Now the user/victim easily spent 90s to enter "valid" details and must be quite invested. Show a re-captcha style captcha before asking for password (after sending an email and possibly spamming someone? yeah, maybe put that before the fake mail verification, I came up with that in the wrong order).
7. The "checking if you're human" step should fail after 3-4s (spammers are used to that).
8. Then the first of the 9 captcha images should pop up after 1-2s of initial "loading time", the other ones after another .5 - 2s each.
9. Let the first one or two captchas fail no matter what (two if they're fast, one if they're already spending a lot of time there - plausible if you handpick terms + images whose exact meaning foreign speakers often don't know, like "barnacles", "melange", "cabin", "truck", or showing different styles of buses and asking for "tram").
10. Third time's the charm: Accept any answer, as long as the "user" spent more than 4s on it (use a simple term with obvious images to make it plausible, like "birds" or "cars").
11. Finally get started with the password. Let them do four or six levels.
12. What's that, the captcha timed out and/or too many bad password tries? Are you sure you're not a bot? Well, do it again! (Maybe let them only fail once, to keep them hooked.)
13. Oh no, the password field has been reset after the captcha was solved. At least you now know how to do a rule-abiding password. So let them do all the levels.
14. If they're really persistent, fake a "oh no, your tab crashed, reload?" screen for their browser.
Uuuuh, I think I'll put that on my infinite todo list.
PS: Have them write "a few words" about their business. Make sure to garble copy/paste (e.g. reverse the word order, or just reset the length counter to 0 and increase/decrease from there, doing a proper recount only on submit). On submit, verify the input for a few seconds and claim that it's either too short or too long (if they wrote >500 chars, say it should be 200-400; if they wrote <500 chars, ask for 600-800). And remember to keep the char counter broken (update only after not typing for 2s, making the input field unreadable for another second while "counting"). Bonus points if a WYSIWYG editor widget is used, which of course takes 5 to 10s to load; or have a "worker" at Amazon Mechanical Turk review it (only takes 30 to 90s).
PPS, for balls of steel: Add a second act by only enforcing the first few levels. Then, upon login, tell them they need to change their password. Maybe also tell them that if they install your "super special" security extension, they can use weaker password rules. If they're stupid enough to really install it, let it send an "X-Block-Me: I am a scammer" header along with every http/s request.
The site doesn't seem to check whether the conditions are met, though. When the password was supposed to end with "dog", the spammer used an invalid format and still got the next challenge.
This is wrong. You are logging their password attempts and then sharing them with the world. It doesn’t matter that you think you know they are scammers. What gives you the right to dispense vigilante justice by disclosing people’s passwords? Shame on you.
you are mistaking scammers with spammers, and also mistaking what the poster thinks with reality
the reality is they are spammers, because spamming the poster is the only way they can end up with a reply email containing a link with a valid key to interact with this API
if they didn't send unsolicited commercial emails, there's no way they can interact with this API and get their passwords logged
People reuse passwords and having your password appear in a list of known passwords, even without being associated to your email, is reason enough to change it.
It’s not the greatest but it’s also not awful. Ideally the 123 would be replaced by a less common sequence, and maybe add one more word, but as is, it’s fine.
I wouldn't be comfortable doing this, for one thing, we know people tend to re-use passwords. So any email/password info you collect should be treated with security like they just gave you their bank login, because some of them did. So then Troy has to report himself to his own service (haveibeenpwned).
Is punishing spammers for what they’ve done a helpful thing to do? Sure. Are spammers deserving of having their whole digital lives compromised? I don’t know.
> Spammer burned a total of 80 seconds in Password Purgatory
The ability to deal with a bad actor by wasting a minute and 20 seconds of his/her time isn't cause for fist-pumping or high-fiving. The internet needs a better way to verify user identity. The lack of online accountability isn't worth the cost anymore.