This is excellent and important work: it shows that these identity schemes are more about surveillance than security, since the security guarantees are limited and don't hold up over any long period of time. An additional avenue I'd recommend exploring is the "offline mode," where the app would have to re-use IVs and challenges over a short window when it can't validate against the back-end service. Other similar schemes I've seen implemented a single-use key as a re-used, limited-use key to enable that use case.
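If there is such a window, it matters because AES-GCM is a counter-mode cipher: two messages encrypted under the same key and nonce leak the XOR of their plaintexts (and weaken the authentication key). A quick illustration of just that generic property using Python's cryptography package, nothing specific to this app's keys or message format:

```python
# Illustration only: why nonce (IV) reuse under AES-GCM is catastrophic.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = b"\x00" * 12           # a re-used nonce, e.g. in a hypothetical "offline mode"
aead = AESGCM(key)

p1 = b"user=alice age_over_18"
p2 = b"user=bob   age_over_18"  # same length to keep the XOR demo simple

c1 = aead.encrypt(nonce, p1, None)  # ciphertext || 16-byte tag
c2 = aead.encrypt(nonce, p2, None)

# GCM is CTR mode underneath: same key+nonce => same keystream,
# so XORing the two ciphertext bodies yields the XOR of the plaintexts.
xored = bytes(a ^ b for a, b in zip(c1[:-16], c2[:-16]))
assert xored == bytes(a ^ b for a, b in zip(p1, p2))
```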
The card he tested was apparently live in production, but one of the main vulnerabilities in protocols like these is in the 'personalization' stage of the setup, where each card gets a set of default 'provisioning keys,' which are used to register the card and get unique user keys for it. A sample of unpersonalized blanks would yield those keys, and the cost of mitigating this with batch-specific provisioning keys is typically judged to be too much complexity.
There may be a DoS vulnerability in some card schemes where you can use 'torn' NFC connections to get the key and transaction counter on the card applet to increment and desynchronize from the counter recorded on the server, bricking the card - or potentially many en masse with some SDR equipment.
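Whether this scheme is exposed depends entirely on how the back end handles counter drift; the usual mitigation is to accept a bounded forward window and resynchronize rather than hard-fail. A toy sketch of that server-side logic (entirely hypothetical, not France Identité's actual back end):

```python
# Toy sketch of server-side counter resynchronization.
# A torn NFC session can increment the card's counter without the server
# ever seeing the transaction; a tolerance window keeps the card usable.

MAX_DRIFT = 32  # how far ahead of the server the card may legitimately be

class CardRecord:
    def __init__(self):
        self.server_counter = 0

    def accept(self, card_counter: int) -> bool:
        """Accept a transaction if the card's counter is plausibly ahead."""
        if card_counter <= self.server_counter:
            return False                       # replay or rollback attempt
        if card_counter - self.server_counter > MAX_DRIFT:
            return False                       # too far ahead: flag for manual resync
        self.server_counter = card_counter     # resynchronize
        return True

rec = CardRecord()
assert rec.accept(1)
assert rec.accept(5)           # a few torn sessions in between are tolerated
assert not rec.accept(5)       # replay rejected
assert not rec.accept(5 + 64)  # outside the window
```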
Given the physical user enrollment costs, there are some basic impossibilities in these protocols that will always reduce their security to a set of trade-offs that depend on economics and obscurity. Security research like this acts as a check on the efficacy of totalitarian controls like digital ID, and it is important work to continually demonstrate that there are risks and costs to the regimes that impose them. I am very grateful this researcher has done work to discredit this scheme.
> [...] one of the main vulnerabilities in protocols like these is in the 'personalization' stage of the setup, where each card gets a set of default 'provisioning keys,' which are used to register the card and get unique user keys for it. A sample of unpersonalized blanks would yield those keys, and the cost of mitigating this with batch-specific provisioning keys is typically judged to be too much complexity.
Is your point "some smartcard schemes don't use key diversification, so this one probably doesn't either", and more generally "some smartcard schemes are implemented insecurely, so surely this one must be too"?
> There may be a DoS vulnerability in some card schemes where you can use 'torn' NFC connections to get the key and transaction counter on the card applet to increment and desynchronize from the counter recorded on the server, bricking the card
Speculation again, or do you have concrete evidence for this protocol being susceptible to torn transactions? Some are, some aren't.
Also, you know what else you can do to a card when you have physical access to it? You can take a pair of scissors and cut it in half! A remote bricking capability would be concerning, but requiring an intentional and repeated transaction tear hardly seems noteworthy.
> Given the physical user enrollment costs, there are some basic impossibilities in these protocols [...] I am very grateful this researcher has done work to discredit this scheme.
It seems like you are generalizing from one concrete project to all smartcard systems, for what seem to be somewhat irrational motives.
These were presented as areas for further research, since there are precedents for these vulns and the economics of card distribution haven't changed much. This is a technical discussion about vulns in protocols, and having worked on them in the past, these were how I broke previously proposed protocols that were reimplementations of a variation on the EMV protocols. What's changed is that the protocols have been simplified to use ECC keys (they couldn't support PKI previously because RSA keys were too long to fit in the SE key slots), but the main issue of effectively tokenizing authentication transactions (using those AES-GCM keys) has the same limitations as the original protocol, albeit with an ECC keypair to do app/device attestation.
Is this scheme vulnerable to that? Unknown. Are these attack paths for additional vuln research that would be exceptionally valuable and have significant policy consequences? Absolutely.
EMV is a completely different protocol, though! You wouldn't use a vulnerability discovered in TLS (or its implementation in OpenSSL) as an example for why SSH/OpenSSH is an insecure protocol, would you?
A smartcard is in the end just a general purpose computer (with some physical hardening and a really weird and poorly layered protocol stack that costs hundreds of CHF to even read; thank you, ISO!) – it can be just as secure or insecure as any computer system.
> Is this scheme vulnerable to that? Unknown.
Yes, but you sprinkle your comment with just enough domain language to make it seem like you have a concrete security concern for the scheme presented in the article.
I'd consider that pretty misleading, especially in the context of trying to make a point on policy, as opposed to security ("significant policy consequences", as you say).
Security protocols have a short list of moving parts and trade offs and where to attack them. RNGs, key storage, key derivation, counters, and access modes, to name a few. They're only completely different to people who don't understand them.
If you are using symmetric secrets in smart cards, you're using something within a degree of EMV, because you need compatibility with existing reader infrastructure and alignment with existing standards to get anyone to accept it. The AES-GCM was the giveaway: I seem to remember a related vuln where, to facilitate compatibility in software instead of on the smart card, the designers used the counter as an additional secret. If the designers of this scheme were not constrained by existing reader infrastructure and standards schemes, they would have done this in a more modern and elegant way, likely using a Schnorr signature or a ratchet.
Protocol designers make trade-offs that they don't advertise because they represent a consensus of the risk between the solution parties. It appears in this case, there are some obvious opportunities for further research.
> Security protocols have a short list of moving parts and trade offs and where to attack them. RNGs, key storage, key derivation, counters, and access modes, to name a few. They're only completely different to people who don't understand them.
Funny, I would have said that they only look very similar to one another to people who aren't deep enough in the weeds of a particular one.
Yes, they're ultimately all built out of the same building blocks. But just with these few you've mentioned, together with possibly having more than two actors in your system and corresponding privileged keys, the complexity of the aggregate protocol hockey sticks very quickly, and you absolutely can't reduce all problems to one another anymore.
Humans are made of atoms, yet when you're sick you go to a physician, not to a physicist.
> If you are using symmetric secrets in smart cards, you're using something within a degree of EMV
Oh, absolutely not. Sorry, but with this statement you show that you aren't familiar with other protocols in this space. There are so many smartcard (and adjacent, e.g. stored-value cards like MIFARE) protocols that use symmetric keys, yet don't share any of EMV's historical problems. I mean, even GlobalPlatform itself uses symmetric cryptography!
That's like saying that SSH and TLS are very similar protocols, since they both use asymmetric key exchanges to secure and authenticate a symmetrically-encrypted application layer channel.
A two-party protocol that involves mutual authentication and key exchange has a short set of essential variations, with some features and even some theatre wrapped around it. Not sure if you're being obtuse or misleading, but yes, GlobalPlatform used symmetric cryptography because that's the literal compatibility constraint they impose on developing more modern smart-card-based protocols. There are also only a few main smart card vendors, and they have ecosystem constraints that favor compromises like the ones discussed. Yeah, I totally don't know what I'm talking about.
The protocols and proposals I did evaluations for implemented the trade offs I mentioned above. The reason it's important for hackers to focus on these technologies is because this is literally the bar institutions use to make decisions about infrastructure security. What the original post demonstrated was the protocol implemented in these cards had vulnerabilities that were consistent with the limitations of using symmetric keys that incorporate constraints from legacy protocols.
My follow-up was that there are some very obvious places to check for further vulnerabilities, and the research is important to do because it antagonizes just the sort of authoritarian personalities you want to keep in check in a free society. This is what hackers are for. I've made my contributions to ensuring privacy laws were upheld and that backdoored digital identity schemes could not survive, and I'm very glad a younger generation of hackers is taking up this most important work.
To anyone working on these problems, don't let the personalities discourage you, it means you're over the target.
Could you please expand on how any of the findings, or any attempts at digital ID, are about surveillance and totalitarianism? Your post, as it is now, is brandishing big and scary words based on flimsy assumptions and without any real backup.
Saying this does not make it true -- volumes of political science have been written on the topic of public identity. The perfect world where every instrument is applied fairly and in no other way, does not exist. A quick look at the modern investigation systems used by law and tax authorities is evidence of the alternate uses of this tech. Details on these topics could easily fill a book.
I don't have the knowledge of those volumes of political science and modern investigation systems where electronic identity can be applied to greater negative effect. That's why I was asking for more concrete examples instead of hand waving.
Really? Seems self-evident to me... The entire point of binding an employee or user to a certificate, a PIN, and an X.509 PKI system has always been authorization and tracking. No hand waving required. For instance, militaries around the world use these cards to grant access to internal systems and to track and monitor the activities of the insiders within these sensitive systems. Seeing this creeping into the general public is creeptastic.
I agree that the capability might be present, but you forget how strong a stance the EU countries take on PII. A government that abused an identity mechanism would immediately become public knowledge and receive universal opprobrium inside the union, and most likely outside of it too.
The parent comment does not talk about the political desirability of public identity, but rather makes (largely baseless as far as I can tell, unlike the article) claims about the technical security of one particular scheme.
> The perfect world where every instrument is applied fairly and in no other way, does not exist.
True – yet the absence of a government-provided system often means that inferior means of identity verification will have to be used by people and businesses. (Have you ever opened a bank account or credit card in the US?)
That's why we get absurd private initiatives like Worldcoin that seem every bit as dystopian as some of the more poorly implemented government-based ID schemes.
I worked on digital identity schemes that used similar protocols to these a decade ago, and they were technologies in search of a use case, because the vendors and project sponsors just wanted a mandate to implement controls nobody wanted. They're called domestic passports and Gesundheitspässe, and the 20th century was defined by the regimes that implemented them. Identity and cryptography are areas where technologists cannot avoid moral culpability for their design and implementation decisions. This area is as real as it gets.
Those big scary words are technical terms of art in political science (defined by Arendt), and people who actually work in privacy and identity to ensure these technologies are not exploited by governments to abuse citizens, are engaged in the real work of governance. Part of that is demonstrating how even governments are bound by the laws of math and accountable to reason. I also understand how smaller words could be more broadly persuasive.
Digital identity is the key for scaling democracy up, decreasing bureaucracy, and integrating the act of being a citizen into day-to-day life, instead of it being a chore that everyone complains about.
It could allow for abuse, sure, but what technology doesn't? Assuming bad faith ab initio and using those exact "terms of art" gives off very strong vibes of conspiracy theories and sovereign-citizen fetishes.
> Digital identity is the key for scaling democracy up
Unintended consequences are the topic at hand, right? Of all the world's countries, which one shows "democracy scaling up"? The single largest country, China, uses digital ID for scaling.
Many EU countries, and the EU itself (with eIDAS) are rolling out digital identity schemes as well.
I'm able to communicate with most government agencies of one European country digitally, as a result – definitely beats having to take a 10-hour flight or to at least find a printer and mail envelopes back and forth across the Atlantic, containing "physical signatures" that no company or government can realistically validate anyway.
No, the topic at hand is a cryptographer finding vulnerabilities in a communication protocol and the originating body incorporating the findings (hopefully) in their fixes.
And since it's about France, what does that have to do with China?
Good technology can be used badly by bad people; I don't know if China does that, but I imagine that's what you mean to imply. Vague assertions like yours are not exactly supporting evidence.
Is it not obvious that China is not a democracy? The point in the comment is that digital identity does not scale democracy in that case, the largest case in the world.
The idea is terrible even from the first lines: relying on hardware key attestation means restricting the ID card to Google- and Apple-approved devices, which is absolutely not what you want as a country.
You are immediately prevented from creating an implementation which works on Windows/macOS/Linux, and it is reinforcing the Google/Apple duopoly in the smartphone market.
Moreover, you are relying on "trusted execution" remaining unbroken in thousands of different dirt-cheap smartphone models. It's not a question of "if" it gets broken, but "when". And revocation isn't exactly an option either, because you risk leaving millions of people unable to use your government-mandated app.
I believe that applications like these should be explicitly designed around the possibility of the user device being compromised or a malicious third party interacting with it instead - even if that means a loss of functionality.
And while you are at it, make it fully open source to make it easier for the security community to review it for silly things like the self-rolled crypto we see here.
I generally agree, although I do understand the motivation for requiring device attestation here: Since the card neither has a PIN pad, nor a display, there is a lot of trust on the mobile device to be honest when it comes to signature operations.
Another solution would be an external, Bluetooth-connected terminal with both a PIN pad (or biometric authentication) and a display, but that would mostly defeat the purpose of using smartcards: The entire point of schemes like this is that users don't need to buy and carry yet another device beyond the smartphone and an ID card that they already own.
So don't use it to do any operations which require you to absolutely trust the smartphone?
When it comes to interacting with the government, security is often more important than convenience. Having people go to their local police station to scan their card on a secure terminal is definitely worth it for high-profile operations.
Maybe that could work for something like transferring property or other operations most people do only a few times in their lives, but there are tons of medium or low risk governmental things I would love to be able to do on my smartphone.
For that, I'd rather have a relatively secure scheme that people can actually use, rather than a hypothetical perfect one that nobody owns the hardware for.
As a concrete example: My government started out with very high ambitions. Every citizen got a smartcard holding a personal X.509 certificate (usable both for e-government, PDF signature, and S/MIME email signature/encryption!), and the government even gave USB CCID card readers away. The result? Absolutely nobody used it. A few years later, they replaced that with a solution that stores everybody's private keys on a central server, combined with SMS-OTP for signature approval...
Tapping my ID card on my phone and entering a local PIN to sign, even if the phone isn't particularly trustworthy and signature requests could be spoofed, would be much, much more secure than that.
What they probably want to know here, I guess, is "is this device secure?", and there's just no technical answer to that; the result of the key attestation doesn't help you one bit in deciding it.
The only way to do it is indeed to run some kind of computation on the device, usually those get plugged in to have more power.
The key attestation has a very small list of things it's actually useful for anyway; it's generally a bad idea to use it.
I mean, it can, to some extent: By only accepting attestations from an opt-in list of devices that the scheme operator has validated to be sufficiently secure.
That approach has a lot of downsides, obviously.
Unfortunately, what I'm often seeing is a "worst of both worlds" type of solution: There is a list of trusted (used as a proxy for secure) devices, but it's generated in a pretty arbitrary way.
My government actually requires FIDO attestation in such a way, but for the longest time, the only trusted hardware authenticator was by a company I've never even heard of in this space – Yubico was not considered trusted.
> I mean, it can, to some extent: By only accepting attestations from an opt-in list of devices that the scheme operator has validated to be sufficiently secure.
> That approach has a lot of downsides, obviously.
Yes, the main one being that it doesn't scale at all, especially at the level of an ID card used by a whole country.
> My government actually requires FIDO attestation in such a way, but for the longest time, the only trusted hardware authenticator was by a company I've never even heard of in this space – Yubico was not considered trusted.
I'm not surprised much, governments are usually big fans of the not invented here syndrome and reinvent the wheel with paid consultants.
Exactly! Ironically, Yubico is Swedish, yet the only supported authenticator is from a Taiwanese company, so something must have gone wrong in the usual scheme of "regional economic stimulus by expensive government contract" :)
What smells really bad about it is that the only domestic reseller of that FIDO authenticator apparently is the government contractor that built the e-ID platform...
I wonder why they need the whole secure channel thing instead of making the card hold a client certificate and using standard mutual TLS with their backend server.
As adev_ mentioned, TLS implicitly requires a lot of things that can be considered expensive for very low-power devices (dare I say "bloat"), X.509 certificates being one example.
Another thing to consider is that TLS supports multiple use cases and has multiple versions, so it has stuff like algorithm negotiation, version negotiation and some other things that are really unnecessary when you just want a secure channel between two parties with pre-shared keys[1]. If you strip all those extra things from TLS you'll basically end up making your own protocol anyway.
What I'd like to see is a standardized protocol for this exact use case: two parties with pre-shared public keys, no algo negotiation[2], and only sending what's really needed over the wire without extra metadata. TLS is not quite right for this use case IMO.
[1] Yes, TLS-PSK exists but it's largely an afterthought and not always implemented by libraries, because it's considered a niche use case for TLS. To some degree, same thing can be said about client certificates.
[2] I think the article does say that they have algo negotiation in their custom implementation anyway. Personally I'd hardcode something like ChaCha20Poly1305, mainly because it's reasonably fast in software and on low-power devices.
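To make [2] concrete, here is a minimal sketch of the kind of pre-shared-key channel I mean, using the Python cryptography package, with HKDF-derived per-direction keys and counter nonces. The helper names (seal/open_, the info string) are made up for illustration, and a real design would still need key confirmation, rekeying, and replay protection:

```python
# Sketch of a minimal two-party channel over a pre-shared key (PSK).
# Not a full protocol: no key confirmation, rekeying or replay protection.
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
import os, struct

psk = os.urandom(32)  # provisioned out of band, e.g. at card personalization

# Derive one key per direction so nonces can never collide across parties.
okm = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
           info=b"demo-psk-channel v1").derive(psk)
k_client, k_server = okm[:32], okm[32:]

def seal(key: bytes, counter: int, payload: bytes) -> bytes:
    nonce = struct.pack(">4xQ", counter)        # 12-byte counter-based nonce
    return ChaCha20Poly1305(key).encrypt(nonce, payload, None)

def open_(key: bytes, counter: int, box: bytes) -> bytes:
    nonce = struct.pack(">4xQ", counter)
    return ChaCha20Poly1305(key).decrypt(nonce, box, None)

msg = seal(k_client, 1, b"GET /identity/age_over_18")
assert open_(k_client, 1, msg) == b"GET /identity/age_over_18"
```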
I agree that runtime algorithm negotiation makes sense to avoid sometimes. So just make a spec that factors out the algorithm, and make the choice of algorithm a compile-time thing.
Then there's no need to hardcode anything in the spec, you can just say you're using the protocol RFC-12345-ChaCha20Poly1305 — with RFC-12345 being the spec that leaves out the algorithm.
Yes, that's pretty much what I meant by "standardized". You'd be able to pick AEAD cipher, KDF, signature algorithm, KEX algorithm etc., while the spec would only describe the protocol itself without enforcing specific algorithms. Noise Protocol pretty much does this already: you provide a string that describes the set of algorithms you will be using, e.g. "Noise_NN_25519_ChaChaPoly_BLAKE2s"
I've heard about Noise but didn't think to mention it for some reason. Yeah, Noise is basically what I was trying to describe. Such a protocol doesn't necessarily have to use Noise though, however Noise helps avoid some implementation pitfalls that people not very experienced with crypto can make.
Smartcards, including the contactless kind (often called NFC, but that’s technically a related but different set of standards), have been able to perform RSA signatures since the 90s.
A secure channel between a backend holding the keys (usually in an HSM) and a card read by a mobile device is pretty standard, actually. That's how remote card top-up usually works.
The relevant smartcard standards (ISO 7816 or GlobalPlatform, I can't remember) provide that type of secure channel protocol by default.
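For a rough feel of the shape of those secure channels (SCP02/SCP03): both sides hold a static key, exchange challenges, derive session keys from them, and then MAC (and optionally encrypt) every APDU. The sketch below shows only that general shape; the real GlobalPlatform specs use their own 3DES/AES-CMAC-based derivations and data formats, so don't read this as actual SCP03:

```python
# Conceptual sketch of a GlobalPlatform-style secure channel setup.
# Real SCP02/SCP03 use their own CMAC-based derivations; this only shows
# the shape: static key + host/card challenges -> per-session keys.
import os
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes, cmac
from cryptography.hazmat.primitives.ciphers import algorithms

static_key = os.urandom(16)        # shared between the card issuer and the card
host_challenge = os.urandom(8)
card_challenge = os.urandom(8)     # in GP, returned by INITIALIZE UPDATE

session_keys = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"demo " + host_challenge + card_challenge
                    ).derive(static_key)
s_enc, s_mac = session_keys[:16], session_keys[16:]

# Every subsequent command carries a MAC computed under the session MAC key.
apdu = bytes.fromhex("80E60200") + b"\x10" + os.urandom(16)  # arbitrary demo bytes
c = cmac.CMAC(algorithms.AES(s_mac))
c.update(apdu)
mac = c.finalize()[:8]             # an 8-byte C-MAC is appended in the real protocols
print(mac.hex())
```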
My Yubikey (which I imagine is a bit more sophisticated than an NFC tag can be) is said to be able to hold Ed25519 keys and use them for signing, but I was never able to actually do that. :)
Elliptic curve signatures are even easier since they're computationally less expensive, and they can be implemented in software on newer smartcards!
I'm using a GPG key on my Yubikey, and while I haven't tried using Ed25519 and Curve25519 in particular (GPG support for these is still not ubiquitous), the GPG smartcard application in general works quite well for both SSH and actual OpenPGP use.
> Your budget in terms of compute and memory is extremely low. I would be surprised if a proper RSA/ECDSA signature from an X.509 certificate could fit there.
The latest versions of the Java Card spec have VMs on them and have HTTP interfaces:
Not at all: NFC (or more accurately ISO 14443) is often used as just another interface for regular ISO 7816 smartcards.
There are other contactless standards (storage only, proprietary fixed-function logic etc.) too, but full-fledged Java Card smartcards are actually quite common: It’s what most contactless payment cards are.
> I'm currently analyzing the possibilities of using Yubikeys for TLS client certificate authentication over NFC on iOS devices as part of a single sign-on process, but from what I have been able to gather only OTP seems to been supported? Are there any plans for supporting PIV over NFC?
> This paper introduces a new online payment protocol called EMV-TLS, dealing with NFC enabled mobiles. EMV-TLS results from the merging of three technologies: EMV payment applications, SSL/TLS secure channels, and Near Field Communication radio interfaces. The main idea of this protocol is to remotely use an EMV-TLS chip, thanks to a secure TLS channel established with a server. The mobile acts as a passive modem that manages TCP/IP resources. Two classes of servers are defined; N1 class may only read the embedded information (card number, bearer name, validity date,), while N2 class has access to all chip resources and may generate cryptograms. A first experimental platform including an EMV-TLS chip, an Android mobile, and a TLS payment server has been realized as an early proof of concept.
> This paper introduces a new mobile service, delivering keys for hotel rooms equipped with RFID locks. It works with Android smartphones offering NFC facilities. Keys are made with dual interface contactless smartcards equipped with SSL/TLS stacks and compatible with legacy locks. Keys cards securely download keys value from dedicated WEB server, thanks to Internet and NFC connectivity offered by the Android system. We plan to deploy an experimental platform with industrial partners within the next months.
> This document describes the support of the TLS protocol over the NFC (Near Field Communication) LLCP (Logical Link Control Protocol) layer, which is referred as LLCPS. The NFC peer to peer (P2P) protocol may be used by any application that needs communication between two devices at very small distances (a few centimeters). LLCPS enforces a strong security in NFC P2P exchanges, and may be deployed for many services, in the Internet of Things (IoT) ecosystem, such as payments, access control or ticketing operations. Applications secured by LLCPS are identified by the service name "urn:nfc:sn:tls:service".
To be fair, that's the "connected edition" variant of Java Card 3 – to my knowledge, nobody actually uses that.
But even the "classic edition" usually supports regular secure channel communication. It's not TLS, but it achieves the same outcome (i.e. secure remote card access by a remote conceptual terminal).
That's what is done in Brazil for dealing with their IRS (it's optional for individuals, but mandatory for corporations); google for e-CPF and e-CNPJ (the Brazilian tax IDs for people and corporations, respectively). The Brazilian IRS started using digital certificates some 20 years ago, and especially for corporations it greatly improved the life of accountants.
> Why not do the same thing credit cards or access smartcards do?
I think you would be surprised how bad the security on these systems is.
Credit card security relies mainly on the bank's ability to roll back when "a shit happened", and on the payment terminal itself.
Probably not something you want to rely on to protect against identity theft nationwide. And you also cannot trust individuals' smartphones to do the right thing.
I think this is not true in most cases. The (security) technology behind debit/credit cards using the smartcard chip (IC) is pretty ubiquitous. It is the same as the security technology guarding the SIM cards in your phone, and even your eSIM. Basically the protocols and the interface specifications are the same. In the end, they are just smart cards. Imagine if this technology were not strong enough; I remember the days when the security of pre-paid public phone cards was quite rubbish and any kid with some skills and knowledge could forge a card with unlimited credit.
It just so happens that the father of smart card technology was a French guy [1], and the current biggest provider of this technology is the French aerospace/defense/security company Thales Group [2], followed by another French company called IDEMIA.
There is a very nice biography of the technology [3].
> Credit card security relies mainly on the bank's ability to roll back when "a shit happened", and on the payment terminal itself.
That's certainly not right: the electronic chip in a payment card relies on cryptography, and if you used that together with a PIN, the bank has a strong argument not to roll back anything. If you're using a debit card, like most of Europe, you're going to have a hard time convincing them.
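Concretely, what the chip contributes per transaction is an "application cryptogram": a MAC over transaction data and the card's own transaction counter, under a key only the card and the issuing bank hold; the terminal and merchant never see that key. A very simplified illustration (real EMV derives session keys and formats the data per the spec, so treat this as shape only):

```python
# Simplified illustration of an EMV-style application cryptogram:
# a MAC over transaction data + the card's transaction counter (ATC),
# under a key shared only by the card and the issuer.
import os
from cryptography.hazmat.primitives import cmac
from cryptography.hazmat.primitives.ciphers import algorithms

card_issuer_key = os.urandom(16)   # lives inside the chip and the issuer's HSM

def cryptogram(amount_cents: int, atc: int, unpredictable_number: bytes) -> bytes:
    data = amount_cents.to_bytes(6, "big") + atc.to_bytes(2, "big") + unpredictable_number
    c = cmac.CMAC(algorithms.AES(card_issuer_key))
    c.update(data)
    return c.finalize()[:8]

# The terminal supplies a random "unpredictable number"; the issuer recomputes
# the same MAC to authorize, so a tampered amount or a replayed ATC won't verify.
un = os.urandom(4)
print(cryptogram(1999, atc=42, unpredictable_number=un).hex())
```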
I believe they are hoping to make a system which adds some sophisticated capabilities - for example:
* Can't read any identity information from the card contactlessly without a PIN, so you need somewhere to enter the PIN.
* Ability to prove your age without disclosing your identity, so you need somewhere to choose how much info to share (a rough sketch of how that can work is below).
* To validate ID cards for people whose phones are dead, the ability to do the above securely even on a sketchy nightclub bouncer's phone.
* Making all this stuff accessible on normal people's phones, not some $$$$ dedicated reader hardware.
* Ability to provably give another person permission to collect a parcel for you (I can only assume French parcel services are a lot more fastidious than ones in my country) with four-way coordination between the two people, the parcel service and the government.
Between these factors, I assume they want the mobile app itself to also be pretty heavily locked down so that even if that nightclub bouncer wanted his phone to record your PIN or whatever it wouldn't work.
And no doubt it's also crossed some people's minds that if there was ever another pandemic, creating a sudden need for a new vaccine passport, it sure would be convenient to have an extensible national ID system....
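On the prove-your-age-without-identity point: the usual building block in schemes like ISO mobile driving licences is salted attribute commitments signed by the issuer, so the holder can reveal one attribute plus its salt and nothing else. A toy sketch of that idea (not what France Identité actually implements):

```python
# Toy selective disclosure: the issuer signs salted hashes of attributes,
# the holder reveals only the attribute(s) it chooses, and the verifier checks
# the hash against the signed list. Not the actual France Identité design.
import hashlib, json, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def commit(name: str, value: str, salt: bytes) -> str:
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()

# Issuance: the government signs only the commitments, not the raw attributes.
attrs = {"name": "Alice Martin", "birth_date": "1990-01-01", "age_over_18": "true"}
salts = {k: os.urandom(16) for k in attrs}
commitments = sorted(commit(k, v, salts[k]) for k, v in attrs.items())
issuer_key = Ed25519PrivateKey.generate()
signature = issuer_key.sign(json.dumps(commitments).encode())
issuer_pub = issuer_key.public_key()

# Presentation: reveal only age_over_18 (value + salt), nothing else.
disclosed = ("age_over_18", "true", salts["age_over_18"])

# Verification: the signature covers the full commitment list, and the disclosed
# attribute hashes to one of the signed commitments.
issuer_pub.verify(signature, json.dumps(commitments).encode())
assert commit(*disclosed) in commitments
```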
Noob question: why don't governments issue a private key to every citizen so that they can identify themselves "easily" in web forms and the like? The government would keep the corresponding public key.
You could go in person to any government building and request a new private key to override the previous one if needed.
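Concretely, I picture something like this minimal sketch (Ed25519 and the challenge flow are just examples, not any particular government's scheme):

```python
# The basic idea: citizen holds a private key, government stores the public key,
# and web services ask for a signature over a fresh challenge to authenticate.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At enrollment (in person, at a government office):
citizen_key = Ed25519PrivateKey.generate()
registered_public_key = citizen_key.public_key()    # stored by the government

# Later, on some web form:
challenge = os.urandom(32)                           # issued by the service
signature = citizen_key.sign(challenge)

# The service checks the signature against the registered public key.
registered_public_key.verify(signature, challenge)   # raises if it doesn't match
print("citizen authenticated")
```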
Because securely storing the keys is quite difficult, and the whole system doesn't work if the keys are routinely compromised ("You could go in person to any government building and request a new private key to override the previous one if needed." doesn't cut it) - so the only reasonable way of issuing such private keys would be on a smartcard where it's reasonably difficult to extract/copy, and you could know if the actual card was lost.
Estonia has been doing this for a long time now with their ID cards. It can be extended to sim-based (mobile ID) or device/app based (smart ID) key pairs as well.
It's not just for the citizens but all residency cards and e-residency works this way, too.
I cannot understand, seriously, how we could have built a system where you have to have French documents in order to identify yourself to various services.
A friend of mine's dad is Polish. He is retired and worked for years in France. Now he cannot access all of his retirement data, because some sites require France Connect and he does not have any French papers anymore.
When asked about that, France Connect's support basically replied "fuck you" (in French).
There must be thousands of people in his situation and yet, nobody cares.
There are similar hassles in Sweden with their BankID for people who can't get it but have connections to Sweden one way or another. Swedish bureaucracy works quite well if you fit the mold, but can be totally useless if you don't. I think the issue is that as long as enough of the "normal" population thinks it works, no one really cares about those it doesn't work for.
This is exactly the same in France. The system is fine when you fit the mold, as you say. It is at least much better than in the recent past.
Another aspect of French bureaucracy is that it becomes fuzzy when you are dealing at a more local level. Say you want to get a library card, for which you have to show proof that you live in the city. I did not have mine handy, and after a lengthy exchange of gazes and after everyone had done "pfffff", "hmmmmm", they got a form from a drawer on which you could state on your honor that you live there.
Lots of foreigners are rebuffed by an initial no, when it is sometimes just an invitation to a longer exchange.
Does anyone know why a private govtech business like Palantir doesn't take over all these use cases? Governments are notoriously bad at tech, so why isn't there a massive private corporation catering to all these use cases and ensuring state-of-the-art security, instead of governments hiring local clowns that release half-baked solutions like this?
In fact, they do. Barely any development FOR the French government is done internally; almost all of it is outsourced to "specialized" external private companies (a well-known example being Capgemini).
I wouldn't call them clowns, and they do have competent people, but they definitely focus more on extracting public money than on quality, like nearly all companies in a monopoly position. Results are sometimes good, sometimes not.
It's starting to change, though. In fact, it's very frustrating that the French government keeps delegating projects to SSII consultants long after it's built up its own very competent, open-source-friendly developer workforce.
Palantir is a particularly bad example, it being American. France cares a lot about sovereignty and would never allow (and for good reasons) an external entity to have that much control. It probably did ask for a contractor via an "appel d'offre" (call for tenders) to build this, so it's the same but with a French actor.
We French do actually have seriously competent people in charge of central government IT infrastructure. Local governments, not so much, and that is a euphemism, but the French central government doesn't cut corners on that. Also, ANSSI is among the world's best in security auditing, and they take their public service role of sticking their noses into your information system very diligently if you are legally classified as an "Opérateur d'Importance Vitale" (operator of vital importance).
Also, f*k Palantir. No one wants such vampire squids anywhere near the crown jewels.
Good to know, thanks for the information. It's just that I am French (living abroad), and when I use administrative websites, I often encounter forms and situations that have clearly been coded "avec les pieds" (i.e. sloppily). Like the forms on embassy websites to request passports, which, instead of providing options to get appointments even 6+ months in the future, force the user to refresh the page multiple times per day in the hope of finding a slot. Same thing with "RDVPermis", their new driver's license application platform, where people are using bots to refresh the page every 20 seconds.
That's not security-sensitive, but it's clearly not optimally designed, and I wonder who decided that it's better design to let people go mad refreshing the page instead of letting them book even 6 or 9 months out, which sucks, but at least offers a pleasant experience.
I also wonder more generally why so many procedures in this country are so slow (such as passport appointments for people living abroad), and why nobody is figuring out a way to make it all faster in this age of technology. Surely a private company with a startup mindset could automate and find ways to expedite all these things that have no reason to be so slow. I deal with bureaucracy in the US and UK, and everything is usually at least 5 times faster and easier.
One company doing everything would just incite them to charge whatever they want at the taxpayer's expense, and do the bare minimum, since there would be no competition to drive them out of business.
Because governments, like almost all entities, are beholden to budgets and accounting that incentivize low spending. So you get lowest-bid contractors. On top of that, Europe (and especially France, IMO) isn't really comfortable with foreign tech solutions at the moment.
I am far from understanding the technical details.
But it feels like they severely violated the rule against rolling your own cryptography. If they had used TLS, a MITM would have been much less likely, as long as the app does not accept user-defined certificates?
TLS is powerful, but not a fit for literally every scenario, especially if you have more than one, possibly variably trusted entities in your protocol. That's why there are still encryption and authentication layers above TLS.
One simple example are apt repositories: It's desirable to be able to host deb packages on many servers, often controlled by semi-trustworthy parties, and still allow clients to authenticate them. It would be possible to use TLS alone for package download, but that would require handing over your authentication keys to every single mirror server.
So arguably, Debian did "roll their own crypto" – but it was strictly for the better.
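To make the apt example concrete: the index is signed once by the distributor, and the client verifies it against a pinned key no matter which mirror served the bytes. Debian really uses OpenPGP signatures over the Release/InRelease files; the Ed25519 sketch below is only meant to show the shape of the idea:

```python
# The shape of the apt model: content is signed by the distributor,
# so TLS to the mirror (or the mirror's honesty) no longer matters for integrity.
# Debian actually uses OpenPGP over Release/InRelease; Ed25519 here is illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Done once, on the distributor's signing infrastructure:
archive_key = Ed25519PrivateKey.generate()
release_index = b"Packages: ...\nSHA256: ...\n"     # stand-in for the Release file
signature = archive_key.sign(release_index)

# Shipped with every client (like a keyring package):
pinned_public_key = archive_key.public_key()

# Client side, after downloading from ANY mirror:
def verify_release(index: bytes, sig: bytes) -> bool:
    try:
        pinned_public_key.verify(sig, index)
        return True
    except InvalidSignature:
        return False

assert verify_release(release_index, signature)
assert not verify_release(release_index + b"backdoored", signature)
```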