FWIW, N=1: I'm Dutch and don't recall the word "uitwaaien" being commonly used like this throughout society. It's only used in informal settings, e.g. between friends/family/colleagues (AFAIK), and has little to no meaning outside specific contexts. It is, for instance, also used when joshing a person (with either teasing or unpleasant intent) by proposing they leave (the room or building): the phrase "ga jij maar even uitwaaien" means "(you should) go outside". It is also used when a cyclist informs a partner/friend/colleague of their intent to go cycling outdoors by saying "ik ga even uitwaaien". Hence I too am a bit surprised by this article.
"We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training [such as Elastic, Fog, Gabor, and Snow]. Our method yields a new metric, UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks."
If anyone w/relevant expertise is willing to share thoughts on this: please do.
For further reading: this just in -> "Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies: Proceedings of a Workshop" (2019) an open-access publication from the National Academies of Sciences, Engineering, and Medicine: https://doi.org/10.17226/25534.
Yes. Large corporate networks often still have (some) production systems that allow password-based authentication. I don't know how widespread it is these days, but I still encounter it frequently at clients (which may be a skewed sample).
Just to be sure: PasswordAuthentication does not need to be enabled for the PoC to work, and username testing can also be used for software enumeration by testing for common/default non-SSH users, e.g. "_tor", "debian-tor", etc. (I apologize for repeating here what I also stated in other comments in this thread, but this aspect should not be overlooked.)
+1. But just to be sure: that does not prevent testing for usernames and hence enumerating software by testing for known/common service account usernames (e.g. "_tor" on OpenBSD and "debian-tor" on Debian-based OSs). (No claim was made to the contrary; just mentioning this to prevent anyone from thinking otherwise.)
Exactly - it also works for non-SSH accounts, thus allowing software enumeration by testing for default/common/known service users.
For instance, an OpenBSD box running Tor may have a user "_tor", a Debian-based box (e.g. Ubuntu) may have a user "debian-tor", and so on (depending on how Tor was installed, in my case via pkg_add & apt-get; usernames might vary for different OS/repo versions). I have tested this using the PoC against some of my own systems (the ones that have PasswordAuthentication still enabled) and it works for those.
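To illustrate the enumeration-to-fingerprinting step, a minimal sketch (not the PoC itself): probe() below is a hypothetical placeholder for whatever username-existence oracle you use (e.g. the PoC discussed above), and the username-to-software mapping is indicative only; as noted, names vary per OS and package version.

    # Illustrative only. `probe(host, user) -> bool` is a hypothetical stand-in
    # for a username-existence oracle such as the PoC discussed above.
    SERVICE_USERS = {
        "_tor": "Tor (OpenBSD package)",
        "debian-tor": "Tor (Debian/Ubuntu package)",
        "postgres": "PostgreSQL",
        "mysql": "MySQL/MariaDB",
    }

    def enumerate_software(host, probe):
        """Return {username: software hint} for every service-account name that
        probe() reports as present on `host`. Mapping is indicative, not exact."""
        return {u: hint for u, hint in SERVICE_USERS.items() if probe(host, u)}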
...and apparently the same issue exists in Dropbear up until current version (2018.76 / Feb 2018), which has an entirely different code base. A comment on /r/blackhat [0] led a colleague and me to look at Dropbear's sources, and it happens to have logic that is sufficiently similar [1] for the same PoC to work; tests against v2018.76 and a couple of earlier versions (e.g. v2013.58) are successful.
Shodan shows some 66k services identifying as SSH-2.0-dropbear [2], as opposed to some 15k identifying as SSH-2.0-OpenSSH [3].
The 10-part series 'How to get Smarter: A guide to critical thinking, cognitive biases, and logical fallacies', published Jan-Apr 2018 at Life Lessons, is also quite comprehensive. It covers 50 topics, 5 per post. I apologize for the length of what follows, but I believe this resource is worth it:
- Part 2: http://lifelessons.co/personal-development/howtogetsmarterpa...
45. Be a truth seeker
44. Be a realist
43. Open your mind
42. Don’t fall in love with your beliefs or your philosophy
41. Listen to your opponents and people who disagree with you
- Part 3: http://lifelessons.co/personal-development/howtogetsmarterpa...
40. Don’t dismiss things you don’t understand
39. Don’t assume you’re smarter than the stranger you’re speaking with
38. Don’t confuse your perspective for objective reality
37. Don’t confuse feelings with facts
36. Don’t believe every thought that passes through your head
- Part 4: http://lifelessons.co/personal-development/howtogetsmarterpa...
35. Beware of black and white thinking
34. Beware of the Dunning-Kruger effect
33. Uncertainty > The illusion of knowledge
32. Stand on the shoulders of giants
31. Have lots of gurus
- Part 5: http://lifelessons.co/personal-development/howtogetsmarterpa...
30. Don’t attack straw men (don’t misrepresent your opponent’s argument)
29. Beware of circular logic and reasoning
28. Watch out for red herrings
27. The genetic fallacy (examine the statement – not the speaker)
26. The fallacy fallacy (don’t confuse a bad argument with a false conclusion)
- Part 9: http://lifelessons.co/personal-development/howtogetsmarterpa...
10. Why you should try to prove yourself wrong – instead of right
9. Probability neglect & the relativity of wrong
8. Why you can’t trust statistics
7. Cognitive Dissonance
6. Sacred Cows
Alexandre Anzala-Yamajako posted interesting comments on this to [Cryptography] (@metzdowd.com):
> IMO a statistical approach based on taking a bunch of data and saying essentially "I don't see any signs that it's not random" is not a good approach for entropy seeding. The example is old, but I could give you the output of AES in counter mode with a null key and a null IV and no standard statistical test would ever show you any defects, while you have absolutely no entropy.
> Your case is particularly worrisome for several reasons:
> 1) you use a von Neumann-like extractor, but you have also shown that your data is not only biased but also correlated
> 2) you don't seem to have a model of your hardware source from which you could derive the output distribution
> 3) you do some wizardry to remove some correlation but nowhere show or prove that there isn't more correlation to be taken care of, or how
> 4) I didn't see in your document a justification of the fact that the manufacturer of the camera (soft and hardware) doesn't have more information than you and could therefore target defects in your entropy management procedure.
> You should have a look at the work of Viktor Fischer, David Lubicz, Florent Bernard and Patrick Haddad. They invested quite a bit of effort to give entropy guarantees when using very specific hardware devices.
Skibinsky subsequently responded:
> Alexandre, thanks for reading and suggestions! I will certainly check out your references.
> As is probably obvious from the essay-style narrative, this is not intended to be a tight scientific paper, just our research log of first-order ideas we coded up for a minimal working prototype. You are correct on #1 and #3 - the current codebase doesn't address these issues. #2 is interesting, because besides the wide variety of camera hardware that such a model should reflect, iOS camera parameters present us with an opportunity to create an optimal hardware source. This is far from our area of expertise, so I hope somebody in the open source community will pick it up from here and figure out both a formal model and what physical settings will optimize the source.
> Thanks again for the great suggestions; I will further emphasize the impact of correlations & VN sensitivity to non-IID input in the final section.
> The most likely practical direction, of course, is to simply use a universal hash extractor instead of VN, since it relaxes a lot of requirements.
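For readers who haven't seen the von Neumann ("VN") extractor being discussed: below is a minimal sketch of the classic construction, plus a toy illustration (my own, with assumed parameters) of why point 1 above matters. VN debiasing removes bias only under the assumption that the input bits are independent; feed it correlated bits and the output sequence is no longer guaranteed to be independent or uniform.

    import random

    def von_neumann_extract(bits):
        """Classic von Neumann debiasing: take non-overlapping pairs of input bits,
        emit the first bit of each unequal pair ((1,0) -> 1, (0,1) -> 0), and
        discard equal pairs. Unbiased output is only guaranteed for independent,
        identically biased input bits."""
        out = []
        for i in range(0, len(bits) - 1, 2):
            a, b = bits[i], bits[i + 1]
            if a != b:
                out.append(a)
        return out

    def correlated_bits(n, stickiness=0.9, seed=0):
        """Toy source: a Markov chain where each bit repeats the previous one with
        probability `stickiness`. Marginally unbiased, but not independent."""
        rng = random.Random(seed)
        bits, prev = [], rng.randint(0, 1)
        for _ in range(n):
            prev = prev if rng.random() < stickiness else 1 - prev
            bits.append(prev)
        return bits

    out = von_neumann_extract(correlated_bits(100_000))
    same = sum(1 for x, y in zip(out, out[1:]) if x == y) / (len(out) - 1)
    # Noticeably below 0.5: consecutive extracted bits are anti-correlated, so the
    # extractor did not yield an independent uniform stream from this source.
    print(f"P(consecutive output bits equal) ~= {same:.2f}")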
The BCP's scope is broader than state actors: "The motivation for PM can range from non-targeted nation-state surveillance, to legal but privacy-unfriendly purposes by commercial enterprises, to illegal actions by criminals".
Also, the BCP does not contend that a technological end-run around the law exists (or that it is desirable). The BCP is about mitigating, not entirely preventing, the threats described: "'Mitigation' is a technical term that does not imply an ability to completely prevent or thwart an attack. Protocols that mitigate PM will not prevent the attack but can significantly change the threat."
Surely, given commercial practices such as HTTP header injection by Verizon and the Pharma saga in the U.K., a BCP that promotes privacy/security thinking in the design of new protocols is a good thing. Which is not to say that attackers, commercial or otherwise, will not find other ways; but let's at least try to raise the bar by weeding out unnecessary attack surface and information leakage.
I didn’t mean to say efforts to improve privacy through technology are bad or pointless, just that it would be dangerous to do that and only that. The complete solution is technological and cultural/legal. It is not superior lock technology that prevents homes from being burglarized daily, but the threat of legal consequences, although it is a good thing to have better locks.