This looks like a much less polished version of Clef (https://getclef.com/). Clef is a really awesome app and they're already powering this type of integration for a few hundred websites. One of the founders is an HN regular, although I can't remember his username (Jesse, reply if you see this).
That's me, thanks! We're glad you think we have a little more polish, but to be honest, we're really excited about any replacement to passwords making waves in the tech world. Ultimately, no single group is going to be able to tackle this problem alone, so the more critical thought we have, the better off we all are.
If anyone has any questions about Clef, I'd be happy to answer them, but I also don't want to distract from the discussion going on around this proposal.
One question: What's keeping you from developing a desktop client, so I can log in without needing my phone around? From what I can see on your website, Clef simply uses a key stored on the phone to generate a password† that is then sent to the website behind the scenes to log you in. There shouldn't be anything stopping a user from using a program for this on any OS, as long as it has the ability to obtain the nonce‡ from the website and the user-unique key.
† "password" used here in the loose sense of a user identifier that includes both the identity of the user and a secret unique to the user
‡ which is even simpler than a QR code, as it's simply a barcode, albeit an animated one (the animation doesn't factor into the value at all, right?)
Nothing is stopping us from a technical standpoint. Multiple devices is something we plan on adding, but we're being really careful about it because it increases complexity and introduces more vectors of attack. We also think that a phone is tied to a user's identity more than a computer or a tablet, so that's what we're really focusing on. Give the flow a shot and let me know what you think.
As a counterpoint: it's far more likely I'll be mugged for the easy cash scored by a phone with a 500€ market price than to have my desktop stolen. ;)
One more question:
How about sites that don't have Clef implemented? Can I enter a URL into the Clef app (or use some kind of JS scriptlet to generate a QR code on the fly) and have it generate the password for that site, so I can type it in manually? Maybe even store usernames for such sites?
Right now Clef has zero use for me, as I've never even seen a website that implemented it. But something like that would at least add some use.
(Also, I'm sad that you didn't confirm or deny the footnotes.)
Right now, sites need to explicitly integrate with us. We've thought a lot about creating something that manages passwords to bridge the gap. We've actually been working on this with some community members (Joe is here somewhere) and are hoping to roll something out in the next few weeks.
Sorry for not addressing the footnotes!
1. exactly, we generate a digital signature similar to SQRL
2. right. from a technical perspective, the barcode is simpler than a QR code; however, it's actually proven really important from a usability standpoint. by animating the interaction, we have more control of the user's mental model of what's going on and can provide a much more intuitive user experience.
Thanks for the answers. I'll be looking forward to seeing it hit Hacker News.† :)
I've mentioned it elsewhere here, but I'd suggest you also look into https://github.com/habnabit/passacre , since its creators put a LOT of value on getting the crypto parts right and its main creator is very responsive online.
And thanks for answering the footnotes. It is an interesting thought that users can be helped by the wiggling animation of the barcode and its inherent suggestion. (That would be worth a trip report on how you got there.)
† I'd prefer to follow an RSS feed, but your blog seems to be 90% marketing and only 10% user-relevant posts, with no categories.
Mithaldu, just as a reference, I'm the guy working with Clef on enabling Clef on more sites. We've got a pretty cool system, and it's getting dang close to release. Hopefully I'll have it finished soon!
How does this work for mobile sites? This looks great for using your phone to log in on the desktop, but how do you use your phone to log in on the phone? I'd love to push this where I work, but I don't think it will fly if it doesn't work on the mobile site.
When you click the button, you'll just be redirected to the app where you can confirm or deny the login. If you confirm, you'll be redirected back and logged in.
We thought about building a mirror contraption that would allow you to scan the code on your own phone... (sarcasm)
send me an email, I'd love to help convince your workplace to integrate.
Clef is actually 2FA already because it relies on both possession (the device) and knowledge (the 4-digit PIN that protects the app). We're working right now to (optionally) replace the PIN with finger print scanning, when available. Either way, the knowledge (or biometric) portion is much more about asserting ownership of the device (if it gets lost or stolen, you can deactivate online) than as part of the actual authentication process.
Possession of the device and typing a PIN into the _same_ device does not qualify as 2FA.
It's not 2FA unless information flows between the user and the authenticator through two independent routes. For example, in Twitter's (and others') 2FA, information must flow between Twitter's servers and the user through the Twitter UI as well as through a GSM text message. That's 2FA.
I am pretty sure that possession of the device and typing a PIN into the same device qualifies as 2FA. A spy who watches you type your PIN can't log in without your device. At the same time, a thief who steals your device but doesn't know your PIN also can't log in. You need both; hence TWO FACTOR AUTHENTICATION.
The casual thief case is trivial. Surely, clef's goal includes protection against a somewhat more sophisticated adversary who is targeting you, specifically.
Someone gets some malware onto the phone and gets the run of it: records the PIN, later steals the phone, or is able to replicate the entire device.
This could be guarded against if the PIN changed every time and was delivered through an independent channel, which is what 2FA is all about. A complete, undetected compromise of a single device or a single information channel should not be able to defeat 2FA. That doesn't appear to be the case here.
But 2FA doesn't protect you from even a single compromised device. If the computer you use to access the service is compromised, an attacker can simply intercept your next login attempt. The only difference is that in the case of Clef the vulnerable part is your mobile, not your laptop.
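For concreteness, the channel-style second factor being contrasted here (an app- or SMS-delivered one-time code) is usually TOTP under the hood. A minimal RFC 6238-style sketch, using the RFC's well-known test secret:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time password (sketch)."""
    counter = int(for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation, RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server share `secret` out of band; both derive the same
# short-lived code for the current 30-second window, over independent paths.
print(totp(b"12345678901234567890", time.time()))
```

The point of the scheme is exactly the independent-channel property discussed above: the code travels user-to-server through a different path than the login form.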
Indeed, and the polish is what makes Clef awesome and a pleasure to use. Their Wordpress plugin is a great way to drive adoption, but I find myself wishing more sites supported it.
I have found 1Password to be pretty hellish, and I don't even consider not using a password manager an option anymore. So clef and SQRL sound great to me.
Fair enough; although, in our experience users have enjoyed the flow much more than passwords. I'd encourage you to give it a shot and let me know if it's as unbearable as it seems to you.
On mobile, when you click the button, you're just redirected to the app where you approve the authentication request and are automatically logged in.
> Steve Gibson is somewhat of a "fringe" charlatan. In some professional security circles, he is not considered a reputable security professional, rather more of a snake oil salesman peddling third-rate software with bold claims. While many of his claims are a bit outlandish or bold, few, if any, are demonstrably false. However, when asked to speak on security topics, Gibson is getting adept at putting his foot in his mouth. A single amusing quote may be laughable, but a series of them begin to paint a picture of someone who doesn't really understand security. Rather, he seems to know enough buzzwords and ideas to be dangerous to his clients.
Not to use a debate cliché, but isn't this a ridiculously shameless ad hominem? He's published the protocol and disavowed any intellectual property claim to it. Let's focus on critiquing the protocol.
Actually, it's still ad hominem. The fact that you completely ignored the original post and instead attacked Steve Gibson is indicative of an ad hominem attack. If you'd even said "this new idea is ridiculous because QR codes are inherently insecure" (which is false), you'd be fine.
A: "All rodents are mammals, but a weasel isn't a rodent, so it can't be a mammal."
B: "I'm sorry, but I'd prefer to trust the opinion of a trained zoologist on this one."
B's argument is ad hominem: he is attempting to counter A not by addressing his argument, but by casting doubt on A's credentials. Note that B is polite and not at all insulting.
It is never fallacious to point out the historical unreliability of a source. It is doubly never fallacious to point out the unreliability of a source, on a given topic, when discussing a new claim, on that topic, from that source, because that information is relevant to how we approach and evaluate the new claim (i.e., claims from historically-unreliable sources should be subjected to greater initial scrutiny).
Also, you've presented no actual rebuttal of whether Gibson's history is relevant to evaluating his present claims. Rather you've merely stated the name of a logical fallacy. Which is, itself...
It's still an ad hominem - the merits of his argument should stand independent to who he is or his history on any topic.
Doesn't mean it's not worth talking about, though. After all, science is entirely founded on a kind of inductive reasoning, so logical fallacies aren't crazy to consider.
Gibson's personality isn't the thing in question here, the quotation above is specifically about his history in security. If the comment was about how he's a major asshole (just an example, I'm not saying that) in conferences or something like that, it would be an ad hominem, as that sort of information would not be relevant.
I disagree. He doesn't address the actual topic here at all. All he is doing is saying that Steve Gibson is a charlatan.
His history as a security professional has no bearing on the actual content here. We are all talking about an idea, SQRL, not Steve Gibson. If you said, "SQRL isn't worth my time because I don't trust Steve Gibson," that's fine, but the author made no note on SQRL at all; he just attacked Steve Gibson and left it at that.
Sure, there may be precedent to say that SQRL isn't worth your time, but Steve's credentials don't affect this idea at all. For all you know he may have been given the idea by a team of security researchers who wanted to see if the top post on Hacker News would be some bullshit argument about Steve Gibson. Obviously not the case, but come on, let's talk about the freaking content here, not the man.
The saying "throwing the baby out with the bath water" comes to mind. Let's look at SQRL and see if it actually makes any sense before we throw it all away.
It may not be as strong an argument as, say, going through the crypto with a fine-tooth comb and finding flaws. However, I'm not qualified to do that, and most of the people commenting here aren't, either. Even so, we might have to make a decision about going forward with the information we have.
Bringing the quality of a person's previous work into the discussion is a necessary shortcut. We can't all be expected to have expert-level knowledge on everything.
Sure, if you're using an appeal to authority as part of the argument in favor of the protocol. Hopefully we're relying more on logical analysis of the protocol than we are on the proposer's authority, in any security context. Isn't that one of the points of open protocols?
Personally, even if the design is ok, I don't care to give this chucklehead any publicity. Maybe the blind squirrel found a nut (see what I did there? SQRL?) by getting a design right. Doesn't mean it's anything particularly clever, or that we should use it and give him something to base his incessant self-promotion on for the next 20 years.
There are a few types of ad hominem but they all involve using some property of a person that is only tangentially related. This note about how the majority of the security community sees Steve Gibson speaks directly to an assessment of his ability in this field.
Looks more like a simple character attack than anything. Steve is prolific, experimental, and has been around a while, so it's understandable he hasn't done everything right, but I haven't found any other instances where he is considered a "charlatan" by other security professionals.
Gibson is prone to hype and proclaiming that the sky is falling which is why I rarely listen to his show. However, a lot of these are nitpicks where he either gave the wrong meaning of an acronym or oversimplified something while trying to explain it.
I can't make him out to be a buzzword slinger. I've listened to quite a bit of his show (although admittedly very little from the past year or so) and he definitely demonstrates good knowledge. Listening to him talk, my impression is that he brings the security mindset[1] to the table, rare among snake-oil peddlers or charlatans.
I think one of the main problems is that people in a field are generally critical of people who translate that field to a wide audience. You see that play itself out over and over. And that's what Steve does with his show, he tries to explain security to, more or less, laymen. And he only has an audio medium, which adds some difficulty. So, yes, he simplifies some things and this no doubt troubles a lot of security gurus.
Of course he's made mistakes on the show. I can recall a few, but most of them were caught and corrected later. He doesn't script it, so I'm sure you can find many examples of poor word choices or incorrect acronyms over the 300+ shows he's done.
I do think he's over-played the practical usefulness of some security products that he advertises on the show. I have experience with none of them (to my knowledge), but some of them just sound, to the trained ear, minimally useful. But, sadly, that's audio content advertising for you.
From the link, we have statements like:
> For whatever reason, Gibson tries to explain the Metasploit project as a "malware exploitation framework"
OK, that's a bad description. But he was describing Metasploit in passing using a description of Metasploit as it pertained to the subject at hand. And, if you read the actual transcript that they linked to, it was being used for malware exploitation. Seems like a silly nit-pick.
> You can't simply raise the spectre of global spying and hidden rootkits planted by Microsoft without either proving or disproving the allegation
No. If you see something alarming, you totally can. He didn't panic either.
> Steve said SSL connections are not susceptible to man-in-the-middle (MiTM) attacks? This is absolutely false.
Please. SSL/TLS has had vulnerabilities that allowed MITM attacks. They're ad hoc and eventually get fixed. You can't just expect to MITM a random SSL connection. SSL is designed to be MITM-resistant, and saying "SSL prevents MITM attacks" is not in any way a bad description, especially when you're communicating to laymen.
> Further, having a switch does not absolutely prevent sniffing traffic. The popular Dsniff tool lets you do this.
Yep, he got that wrong.
> Close Steve, CSMA stands for Carrier Sense Multiple Access.
Yep, he got that acronym wrong. But I decided to check the next show[2]...
> [Steve] Also, I mangled an acronym, and I hate when I do that, especially acronyms that I know so well. I talked about CSMA, and I called it Collision Sense Multiple Access instead of Carrier Sense Multiple Access. And it has a CD on the end which stands for Collision Detection. [...] So the real acronym for Ethernet is CSMA/CD, which is Carrier Sense Multiple Access with Collision Detection
So he switched a word in an acronym, then corrected it next episode. But they complained anyway. I couldn't have scripted it any better to what I said above.
Those examples were skimmed from the first three links. The authors came across like they had a vendetta to nit-pick everything they could. They've blown their own credibility already as far as needless nit-picking, missing the forest for the trees, and not checking to see when he corrects his own mistakes (aka, doing their homework).
I'd hardly summarize all that as a "fringe charlatan". He may be a bit fringe-ish, but I don't see how he's a charlatan.
Here's why this is stupid: This is 1-factor authentication, but it's actually LESS secure than a username and password.
A password is something you know - it's in your head, so it can only be stolen if you save it somewhere, or if there's malware on your computer sniffing it.
The tokens in the phone are saved in the phone, so if you lose the phone, you've just lost your password/set of keys. On top of that, malware in the phone can extract the keys.
With malware on your phone, your accounts can be pilfered without your knowledge, at any time. Or if your phone is stolen. This is different from 2-factor, where you have to both know a secret AND have access to a device.
(If the prospect of malware on your phone doesn't faze you, consider that news articles over three years old reported hundreds of thousands of malware installs found by various security companies.)
> In other words, only the smartphone's owner can use the system to assert their identity, and nothing will prevent them from asserting their identity whenever they wish to.
I think they mean "only the person currently in possession of the smartphone" ...
except they go on to say that whoever has the phone has to identify themselves to the phone using a strong password.
> The SQRL system was specifically designed to eliminate username and password authentication to remote websites. But controlling access to SQRL authentication itself requires the smartphone's owner to prove their identity to their own phone.
> And to that end, “a secret only the user knows” is still the best technique for users to repeatedly, quickly, easily and privately prove their identity to their own smartphone.
> The cryptographic design of the SQRL system inherently provides identification security for every website it contacts. In that sense the system itself is fully secure without any password protection. We refer to the SQRL password as a “local password” because it is only used to prevent others from using the SQRL smartphone app to impersonate its owner.
If you look at the crypto page the scheme he proposes uses an 'identity password' in combination with a strong KDF (scrypt). The resulting hash is XOR'd against the masked master device key.
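That construction can be sketched in a few lines: derive a pad from the identity password with scrypt, then XOR it against the stored, masked key. Since XOR is self-inverse, the same function both masks and unmasks. The cost parameters here are illustrative, not SQRL's actual ones:

```python
import hashlib
import os
import secrets

def mask_master_key(master_or_masked: bytes, password: str, salt: bytes) -> bytes:
    """XOR the input against an scrypt-derived pad (illustrative parameters)."""
    pad = hashlib.scrypt(password.encode(), salt=salt,
                         n=2**14, r=8, p=1, maxmem=2**25,
                         dklen=len(master_or_masked))
    return bytes(a ^ b for a, b in zip(master_or_masked, pad))

master = secrets.token_bytes(32)          # the device's master identity key
salt = os.urandom(16)
masked = mask_master_key(master, "identity password", salt)   # stored on disk
# XOR is self-inverse, so the same call with the right password unmasks it:
assert mask_master_key(masked, "identity password", salt) == master
```

A wrong password yields a pad that decrypts to garbage rather than an explicit error, which is part of why a strong KDF matters here: the only way to test a guess is to pay the full scrypt cost.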
Do you enter the identity password each time you scan a QR code? If not, it's still one factor. Even if you enter it on every scan, it's still only on the phone, making it the same as, if not less secure than, doing it all on your desktop.
The user is vulnerable, not only to "malware" but also to the phone vendor (and anyone the vendor enables, notably government). The degree of control one can attain over a phone or tablet is much less than on a general-purpose PC, and the latter is hard enough to relatively secure. Smartphones are captive, remote-controllable devices and as such are not trustworthy for secure communications or any important secrets.
This fact, together with grc's implications that the scheme is novel, important or secure, supports the derisive view of the author in the sub-thread above. Also check out the stylistic buffoonery on his site.
drivebyacct2 - your account has been dead for about 100 days and 200 posts; basically no one can see your messages unless they have showdead on. Here's the "offending" post that you were banned for: https://news.ycombinator.com/item?id=5982741 (hint - the ban is completely unjustified)
I assume that in addition to the QR code there would be a link that would trigger an intent to open the authentication app with the necessary data. At least on Android that's how it could work.
A minor problem is that auth app has to return back to the browser with a new link which the browser may not be able to open within the original login tab.
I don't think that is the case. After the auth app communicates with the server you just need to click the login button on the original page. I don't think the auth app needs to load a specific url.
A front facing camera already flips the image so that the user sees what they usually see when they look in a mirror. So one mirror would end up being right. Except when you open the camera app, the mobile site is no longer displaying the QR code.
This is functionally the same thing as https://github.com/habnabit/passacre , only tied to a cellphone in exchange for sparing the user the burden of remembering their login. It would be interesting if the SQRL website code also displayed the nonce under the QR code, so the user could download a desktop program to paste the nonce into. Even if it weren't open source, as long as both Windows and Linux binaries were available, people could even set it up on a server of their own, so they'd just paste the nonce at a private URL.
I think you can MITM this by presenting legitimate (proxied) ycombinator.com QR codes on e.g. ycombigator.com.
The app would still authenticate to the legitimate site and then 'activate' the session, and ycombigator.com could have at it. HTTPS won't help either. The problem is there's no way for the phone app to transfer additional secure session cookies back to your desktop browser, so this has to be the case.
Sure, the app will display the legitimate domain or URL but the user probably thinks that's what they're viewing on their desktop anyway, so will likely accept.
This attack can be stopped by referer checking. A site that hosts a QR code should display a warning instead of the QR code if the referer doesn't match.
The malicious site in the middle can download the legit QR server side (e.g. using PHP) and simply spoof the referer, then present it to the visitor. Where will a referer check help?
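Right - Referer is just another client-supplied header, so the server-side proxy can set it to whatever it likes. A minimal illustration with Python's stdlib (the QR endpoint URL is made up, and the request is only built here, not actually sent):

```python
import urllib.request

# A site in the middle fetches the legitimate QR code server-side and can
# attach whatever Referer it likes; the header only proves what the client
# chose to send, not where the request really came from.
req = urllib.request.Request(
    "https://news.ycombinator.com/sqrl-qr.png",   # hypothetical QR endpoint
    headers={"Referer": "https://news.ycombinator.com/login"},
)
print(req.get_header("Referer"))   # the server cannot tell this was spoofed
```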
It's not - it's really just a password manager. The "something I know and something I have" is completely removed by only requiring you to have the phone. If it's a password manager, then that is what it is; if it's meant for security, then it comes back to the recent article on fingerprints not being a password.
It's not really a password manager--there's no shared secrets. The site identifies you by a public key. For authentication, it gives you a nonce, and you sign it with the corresponding private key. All the secrets are kept on your device.
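To make the shape of that concrete, here's a toy Schnorr-style challenge/response in pure Python. The parameters are deliberately tiny and insecure, and the real design uses a modern elliptic-curve signature, so this only illustrates the protocol flow: the site stores a public key, hands out a nonce, and verifies a signature, with no shared secret ever existing.

```python
import hashlib
import secrets

# Toy Schnorr-style signature over a tiny safe-prime group. INSECURE demo
# parameters, purely to show the challenge/response structure.
p = 10007              # safe prime: p = 2q + 1
q = 5003               # prime order of the subgroup we work in
g = 4                  # generator of that order-q subgroup

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1    # private key: never leaves the phone
    return x, pow(g, x, p)              # (private, public)

def sign(x: int, nonce: bytes):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = H(r.to_bytes(2, "big"), nonce)
    return e, (k + x * e) % q

def verify(y: int, nonce: bytes, sig) -> bool:
    e, s = sig
    r = (pow(g, s, p) * pow(y, q - e, p)) % p   # recovers g^k iff sig is valid
    return H(r.to_bytes(2, "big"), nonce) == e

x, y = keygen()                          # enrolment: the site stores only y
nonce = b"session-nonce-from-qr-code"    # the challenge shown in the QR code
assert verify(y, nonce, sign(x, nonce))
```

Note that a database breach at the site leaks only public keys, which is the "no shared secrets" advantage being described.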
I've wondered about the "something I know" dimension as well. Perhaps a passphrase could be used (it already is used to secure the master key). It'd still be a major improvement, as only your local device would need it, and you wouldn't have to have a separate password for each site.
Right; I guess password manager was a bit of an over simplification there - sorry about that, as was the fingerprint analogy - I guess it's more a concern of someone having my phone and thus instant access. An additional factor would help with that by bringing in the "something I know" dimension.
Yeah, that's a concern of mine too. Looking deeper at the description, looks like he describes a passphrase-like "local password" on the "The user's view of the application" page [1]. Hopefully that would address that issue (at least as much as passphrases do for SSH keys).
(And no problem. If we were forced to comment using only precise terms, with no simplifications, comments would either be ridiculously long or nonexistent.)
I dunno about the fingerprints analogy, that's still more boneheaded -- I mean, I can change digital secrets a lot easier than I can change my fingerprints ;-)
That's a question I have. One potential answer is that it moves authentication out-of-band, in case there's a keylogger or other malicious mechanism of the machine you're using. Also, it removes the need to remember a separate password for every site. Your master key is secured with a password, but that's it. It also removes the need to have a shared secret with a site, and doesn't require any third-party involvement.
He spends a good chunk of the latest episode of Security Now [1] describing its advantages over current schemes. The episode isn't up yet (I listened to it on the site's live stream), but it should be soon.
2-factor can be done in the post-password sense (require the second factor after this SQRL step), or alternatively, the algorithm the phone does could use a second factor at that point.
Well, theoretically you could make an implementation which reads the QR code then displays what would have been posted to the server. Via a web-browser extension, you could then type this into a form field and have that log you in. You could base<something> encode it to make it a little easier.
So, it seems feasible offline with online required only for convenience.
Beyond that there's no explicit second-factor here, if you don't have Internet, what are you proving your identity to? If local, then run a local server...
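That manual-entry idea can be sketched like this. An HMAC stands in for the real signature scheme purely to keep the snippet self-contained, and the truncation length is an arbitrary choice:

```python
import base64
import hashlib
import hmac

# Hypothetical offline flow: the phone computes its response to the scanned
# nonce, then shows it base32-encoded so the user can type it into a form
# field. (HMAC is a stand-in for the actual signature here.)
response = hmac.new(b"per-site key", b"nonce-from-qr", hashlib.sha256).digest()
typed = base64.b32encode(response[:10]).decode()   # 80 bits -> 16 characters
print(typed)
```

Base32 avoids ambiguous characters and case sensitivity, which matters if a human has to retype the value.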
The example in the article being login at the library - the library computer has Internet access, but your phone has to also connect to a wifi (if there is one), or the mobile network (if you are 3G subscriber and there's reception). Then factor in that you might go abroad, I don't know many places where they just give free internet access to anyone with a smartphone...
Well, it's not really 2-factor is it? It's just the phone part of a 2-factor login and no web form part. Presumably the screen shot that showed a login form was for people without the phone app.
Interesting - so you generate a key pair on your phone (this is your identity and can be backed up to a printable QR code). Then, to log in to any site, you scan a QR code displayed beside a login form and your phone can prove its identity (i.e. prove that it's the holder of the key pair). Perhaps on the first sign-in, the site will ask for an email address or some details to create an account for you.
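One detail worth adding: as I understand the docs, the per-site identity is derived from the master key and the site's domain name, so different sites see unrelated public keys and can't link you across accounts. A sketch of that derivation, assuming an HMAC-based construction:

```python
import hashlib
import hmac

def per_site_seed(master_key: bytes, domain: str) -> bytes:
    # Keyed hash of the (normalized) domain: deterministic per site, but
    # sites cannot link two seeds back to the same master key.
    return hmac.new(master_key, domain.lower().encode(), hashlib.sha256).digest()

master = b"\x01" * 32                    # illustrative master key
hn = per_site_seed(master, "news.ycombinator.com")
ex = per_site_seed(master, "example.org")
assert hn != ex                          # a different identity per site
assert hn == per_site_seed(master, "News.Ycombinator.Com")   # stable per site
```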
The thing this is most similar to is Twitter's recent 2FA implementation. Except, instead of the site automatically pinging the app to open on your phone (and then you ACKing or NAKing the ping), it requires you to open it yourself and scan a code on the screen (thus implicitly ACKing it.) Everything else happens the same.
It seems like the basic idea isn't unique. I was thinking of something similar a while back and found https://tiqr.org while looking for similar ideas.
This all sounds interesting, but I was just thinking this would be a great way to unlock a computer. Scan the 'lock' screen with your phone. Probably not good for an office environment (too easy to grab someone else's phone), but I would use it at home.
Thinking about the user experience for this. I want to authenticate, so I hunt down my phone, launch an app, scan a code, press a button in the app to submit. This seems to me like quite a long process versus the traditional typing in a username and password. What about the implications of a stolen phone?
With the traditional username/password system, you have to remember a unique, secure password for each site--or else you use a password manager, which involves more steps as well. He's also mentioned that this could be adapted into a browser plugin, so that would streamline use on a trusted machine.
As for losing the phone, he suggests a passphrase-like "local password" on "The user's view of the application" page [1].
I think this is pretty much the same thing as storing a secure cookie?
Except the 'cookie' is the private key on the phone, so it's tied to the phone not to a particular browser.
Other than that, I don't see a difference between the two?
I think this is basically a long-lived session cookie, stored out-of-band.
I also don't see how the public/private keypair changes anything. Why not just store the nonce on the phone? If the nonce has only ever gone over https to the user, no-one else will know it.
How does this compare to OpenID? Seems similar except with the premise that a user is their own identity provider, which is a QR-code-reading app on their smartphone.
It's completely different. You don't need to remember a URL, you don't need to remember a password, there is no third party involved in the authentication process, ... I could go on, but it's really a completely different authentication mechanism.
Have a look at http://tiqr.org for an existing implementation of this same idea. Complete with open source server implementation and android/ios apps (which are open sourced too).
I feel something is missing in that verification algorithm.
And... not all services are web services, not all web services are public (so the mobile app will not be able to open the URL), and I don't understand why bots wouldn't scan that QR code.
Correct me if I'm wrong, but couldn't this still be susceptible to DNS Spoofing? This goes under the assumption that the URL has not been tampered with, if I'm not mistaken?
- "But no two visitors will ever have the same ID." - How is that confirmed? Generating random 512-bit keys does not guarantee never having the same ID; it's just quite unlikely.
- SQRL lacks strong website identification, and that's a bad thing. A domain name isn't strong authentication. I would like strong site identity included within the QR key. (Evil-website attack, NS spoofing / MITM)
+ No shared secrets is a real bonus. Except, if the attacker already has full access to the site and steals passwords, they can steal everything else too. So if password hashes are stolen, it's a major breach of security and all passwords should be immediately reset. Of course nobody's using the same password for separate sites. So the site was breached, and that's it anyway.
/ Out-of-band authentication: well, in some cases yes, in some cases no. I wouldn't call this a true out-of-band solution, especially when using a mobile browser or a shared WLAN connection. It sounds quite likely that the authentication data is also routed over the same internet route at the server end. Of course this can be fixed by the service provider if they really want to do it.
+ No third party: that's the only way to go. In any case where there's a third party, the solution already sucks badly. That's one of the main reasons why I hate most SSO (Single Sign-On) solutions.
- Mobile (and desktop) devices aren't secure; having keys on a mobile device isn't considered secure, and they can be extracted. Default mobile device protection isn't good enough for password / identity protection. Most people aren't even using a simple 4-digit PIN. A real + would come from a completely separate authentication device.
- Using a long password on mobile is horrible. My GPG private key password is 20+ chars, including tons of special chars. Try typing that on mobile, repeatedly. If shorter passphrases / keys are used, not enough entropy is included.
- "A password lockout system". Eh? The same method should prevent any website password hacks too. ;) Anyway, with a proper password, guessing should still be futile; read my statement about passwords above. If the password has even 128+ bits of entropy, it's going to be a long guessing marathon. I don't really care if you try 1 or 10,000 pwd/second.
/ "such as a personal safe deposit box" - is not truly secure; what a joke of a statement.
- The document doesn't describe how the identity authentication is linked to the actual client logging in. Basically this means there has to be a cookie version of the cryptographic challenge, or some other data linked to it, which has to be stored for a while at the web server's end. Could this be used to create a resource-consumption DDoS attack on the service? That's one of the reasons why I don't like solutions that require the web server to maintain state for non-logged-in users.
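For reference, the 128-bit guessing-marathon arithmetic works out like this:

```python
# Expected brute-force time for a 128-bit-entropy password at 10,000
# guesses per second (on average half the keyspace must be searched).
guesses_per_second = 10_000
seconds = (2 ** 128 / 2) / guesses_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1e} years")   # on the order of 10^26 years
```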
You are incorrect on your first point. This is an example of the birthday paradox[1].
With 2^512 possible keys (~10^154), even if everyone on earth (~10^10 people) got a billion new IDs every second (~3×10^16 per person per year), it would be more than a hundred thousand billion billion billion billion billion years (~10^50 years) before there was an even chance of a single ID collision (~10^77 IDs => 50% chance of one collision). So I think we can safely assume there will not be a collision.
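That back-of-the-envelope figure can be reproduced with the standard birthday-bound approximation (a sketch; the population and generation rate are the same rough assumptions as above):

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def draws_for_even_odds(keyspace_bits: int) -> float:
    """Birthday bound: ~sqrt(2 ln 2 * N) draws give a 50% chance of
    at least one collision among N equally likely values."""
    return math.sqrt(2 * math.log(2) * 2.0 ** keyspace_bits)

n = draws_for_even_odds(512)
# ~10^10 people, each generating a billion IDs per second
rate = 1e10 * 1e9 * SECONDS_PER_YEAR
print(f"~10^{math.floor(math.log10(n))} IDs for even odds of a collision")
print(f"~10^{math.floor(math.log10(n / rate))} years at that generation rate")
```

This prints ~10^77 IDs and ~10^50 years, matching the orders of magnitude above.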
In fact it is almost always the case in cryptography that there will be some risk of failure, but as long as that risk is very small the system is considered secure. This is way, way beyond the normal standards of security (say, a failure chance of 1 in 10^30 or so).
Many of your other points are a matter of opinion. There will always be tradeoffs between usability and security. This system appears to offer similar, or perhaps even improved, usability compared to existing two-factor systems, with enhanced security in some respects but similar weaknesses in others. I would love to hear the opinions of cryptographers and security experts.
I'd be careful if I were you. Cryptography and cryptographers are an odd bunch: "never" literally means never, while in your comment "never" means not in our lifetimes, or a thousand, or a billion years. Given infinite time, 512-bit RNG outputs have to repeat at some point.
You may consider this pedantry, but it's this pedantry and conservatism that leads some to call AES broken today.
I think, from a practical standpoint, if "never" means "never before the heat death of the universe" then I'd be pretty satisfied. Pure math is truly wonderful (I love theoretical stuff) but at some point it does have to shift back to the practical realm.
I agree with you. I was just pointing out to the parent poster how differently cryptographers see the world compared to us normal people.
Although, in some respects, I do get where they're coming from. New algorithms and computing paradigms develop that render previous ones broken. I'm just guessing here, but it may be possible that probabilistic computers and clever algorithms a few decades down the line will be able to recover the state of a CSPRNG by applying statistical techniques. Obviously, I'm way over my head here, but it is a possibility.
If you say that the OP was talking about TRNGs, then all bets are off. It could so happen that two consecutive 512-bit outputs are exactly the same. The probability is extremely low (birthday low), but since it's a TRNG there's still a chance it will happen.
There is one thing that could never happen. Something that's mathematically perfect fits that criteria - the one-time pad. Even at actually infinite time frames, i.e. you travelling at the speed of light with a quantum computer on board your ship powered by an unobtainium drive, even with that sort of time frame, you shall never be able to recover the correct message.
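The information-theoretic property being claimed here is easy to demonstrate: for any ciphertext, every plaintext of the same length has some pad that produces it, so the ciphertext alone tells you nothing. A toy sketch (all message strings here are purely illustrative):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # random, used exactly once
ciphertext = xor_bytes(message, pad)

# The real pad recovers the real message...
assert xor_bytes(ciphertext, pad) == message

# ...but a forged pad makes the very same ciphertext "decrypt" to any
# equal-length decoy, so no amount of compute can pick the right one.
decoy = b"RETREAT AT SIX"
forged_pad = xor_bytes(ciphertext, decoy)
assert xor_bytes(ciphertext, forged_pad) == decoy
```

Of course, this only holds if the pad is truly random, kept secret, and never reused, which is exactly where the objections below come in.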
> Something that's mathematically perfect fits that criteria.
No, it doesn't. A one-time pad still needs a source of randomness, and a guarantee of no pad reuse. How certain are you that you don't have a backdoored RNG, or that the pad was never reused?
Or, for a fully general argument, how certain are you that we're not living in a computer simulation, where the Dark Lords of the Matrix can just look up their logs to see what random values were generated for your one-time pad? Or, if we go to that extreme, how can you be absolutely certain of the truth of anything - including mathematics - if said Dark Lords could be messing with our brains and memories?
Or, more mundanely, how about this: a flurry of cosmic rays strikes the RAM containing the message and by random chance flips the same bits that were set in the one-time pad. Tada, message decrypted.
I'm pretty damn certain that none of those hypotheticals is even remotely worth worrying about. I may even be more than P(1 - 2^-512) certain that they are false. But that's still merely a very high, finite probability. P(0) and P(1) don't exist - you can approach them, but you can't ever reach them.
Any idea, no matter how crazy or out of place with our understanding of the universe, could happen, at least in principle. Therefore, if we want “never” to have any meaning at all, we need to set a cutoff point where we stop caring. Obviously an appropriate value depends on the situation, but in this case I'm pretty sure an appropriate cutoff is somewhere above P(2^-512).