It's not worth your time or the reader's time trying to come up with a technicality to make it perfectly legal to do something we know little about, other than it's extremely dangerous.
Law isn't code; you'd have to violate some pretty bedrock principles to pull off something like this and get away with it.
Yes, if you were just a security researcher experimenting on GitHub, common sense says you should get away with it*, and yes, it's hard to define a logical test that ensnares this person and not the researcher.
* and yes, we can come up with another hypothetical where the security researcher shouldn't get away with it. Hypotheticals all the way down.
1. It should be legal to develop or host pen-testing/cracking/fuzzing/security software that can break other software or break into systems. It should be illegal to _use_ the software to gain _unauthorised_ access to others' systems. (e.g. it's legal to create or own lockpicks and use them on your own locks, or locks you've been given permission to pick. It's not legal to gain unauthorised access _using_ lockpicks)
2. It should be illegal to develop malware that _automatically_ gains unauthorised access to systems (trojans, viruses, etc.). However, it should be legal to maintain an archive of malware, limiting access to vetted researchers, so that it can be studied, reverse-engineered and combatted. (e.g. it's illegal to develop or spread a bioweapon, but it's ok for authorised people to maintain samples of a bioweapon in order to provide antidotes or discover what properties it has)
3. What happened today: It should be illegal to intentionally undermine the security of a project by making bad-faith contributions that misrepresent what they do... even if you're a security researcher. It could only be permissible if an agreement was reached in advance with the project leaders to allow such intentional weakness-probing, with a plan to reveal the deception afterwards.
Remember when university researchers tried to find out whether LKML submissions could be gamed? They didn't tell the Linux kernel maintainers they were doing it. When the maintainers found out, they banned the entire university from contributing and reverted everything it had submitted.
No, it's people being polite and avoiding the more direct answer that would make someone feel bad.
The rest of us understand intuitively that this is already the case, so pretending there was some need to work through it at best validates one individual's misconception.
Less important, since it's mere annoyance rather than infohazard: it's wildly off-topic. Legal hypotheticals where a security researcher released "rm -rf *" on GitHub and ended up in legal trouble are five steps downfield even in this situation, and they describe a completely different situation. Doubly so when everyone has to "IANAL" their way through the hypotheticals.
Best to leave it at that.