That's the best solution - improved privacy and sometimes even reduced gas prices when using cash.


That would imply that cash is preferred, but cash at a gas station - or any shop - attracts robberies. At least with a skimmer the damage is purely financial; with robberies, especially in the US, there's a big risk of gun violence on top of that.

Can't rob someone that doesn't have cash, is all I'm saying.


You can absolutely rob a person who does not have cash, though. Even ignoring the value of personal electronic devices, credit cards themselves can be a target. You might be able to eventually unwind the financial impact, but that doesn't deter robbery.


What you're describing sounds more like a mugging. He was talking about armed robbery of a convenience store. Is it possible to hold them up at gunpoint, forcibly take some of their card-processing equipment, and exfiltrate some money or credit card numbers that way? Probably, but that's not exactly the path of least resistance.


It's possible to steal a box of candy bars, or a case of beer, or the iPad cash register and on and on. Cash is not the only valuable a robber can target. Having large amounts of it on hand may make you a more appealing target but it is only one type of valuable.


> You might be able to eventually unwind the financial impact but that doesn't deter robbery.

Fewer and fewer people having anything on them to rob will deter robbery over time.


Until you find yourself escorted to an ATM at gunpoint.


This is why I don't carry a debit card tied to my main bank account. Just an "ATM only" account that I keep low.


This makes you the 0.000001% of the population.


Yay, guess I’m also part of that very rare club.


Cash is clearly preferred, as every gas station I can remember in the last 20 years offers a discount for paying cash.


What's the penalty? It ought to be equivalent to a DUI, given the impairment is similar if not worse from phone distraction at the wheel.


Driving drunk is safer than texting. With a DUI, at least your reaction time is just scaled by a certain amount, whereas when you're inputting something on a touchscreen, your attention to the road disappears completely for spurts of time you don't even perceive.


Sincere question - why is JavaScript required to sign up for Fastmail? Is it for browser fingerprinting? If so, what data is collected, how is it used and how long is it retained? No specific mention of it in the privacy policy. If I sign up in a virtual machine, can I later use Fastmail without running scripts?


You can use IMAP to access Fastmail without running Javascript (or, I guess, your own JMAP client if you wanted to write one - there isn't one that doesn't use Javascript yet). But no, you can't use our interface without running Javascript - the client is written entirely in Javascript.
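
For example, a no-Javascript sanity check from the command line looks roughly like this (hostname and credential details may differ for your account):

  # list mailboxes over IMAPS using curl's built-in IMAP support
  curl --url 'imaps://imap.fastmail.com/' --user 'you@example.com:app-password'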


That's too bad, looks like I'll have to stick with GSuite Gmail to have browser-based, non-Javascript access to my email.


Curious how you are managing disk encryption for all those devices, if you are at all.


Until recently my threat analysis concluded that it wasn't appropriate to run encryption on these systems (I use other systems that do use encryption). But times are changing, and now I'm starting to look at running FDE on everything. It will be a major exercise to convert, but I'm evaluating the various options.

I'd be interested to know what other people do. If anything.


Same here, but I have found wireless performance to be subpar. Ended up double-NAT'ing a second APU with Debian to use 802.11. Still plenty happy with OpenBSD though.


> For usability while balancing security, cache PIN for at most a day.

https://github.com/DataDog/yubikey/blob/master/gpg.sh#147
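
The gpg-agent.conf that line presumably generates would look something like this (a sketch - the exact option names and values used by the script are assumptions on my part):

  # ~/.gnupg/gpg-agent.conf
  # cache passphrases/PINs for at most one day (86400 seconds)
  default-cache-ttl 86400
  max-cache-ttl 86400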

This statement has no effect when using Yubikey - the PIN is cached by the key itself and it will remain unlocked indefinitely until it's physically unplugged. See https://dev.gnupg.org/T3362


Interesting. This hasn't been my experience, so not sure what's going on yet...


Anyone under the illusion that Apple values privacy would be well served by reading iOS, The Future Of macOS, Freedom, Security And Privacy In An Increasingly Hostile Global Environment - https://gist.github.com/iosecure/357e724811fe04167332ef54e73...

There is so much more to privacy than is made apparent to the user as a few OS knobs to "limit" ad tracking.


Saved this writeup for future reference, thanks. Agreed that privacy needs more analysis than trusting a few rather opaque OS knobs.

I am a little skeptical about some of the claims in that gist, though. One example is when they claim that APNS pushes require app access to a globally unique iOS activation identifier. That seems false. According to Apple’s dev docs at least, those tokens are device-and-app specific and have to be re-requested at app start time since they can be regenerated for a variety of reasons: https://developer.apple.com/library/archive/documentation/Ne...

Seems to have nothing to do with an activation UUID from a quick glance.

I appreciate a lot of the reference material in there, but this seeming mistake of conflating 2 different UUIDs makes me a little skeptical of some of the conclusions.

Edit for correction: I think I misread this part of the gist. They never directly say that the activation UUID is given directly to the app developer, just that Apple can track your social networking app pseudonym over APNS, "and possibly the social networking service" will be able to, as well.

This to me implied that the social networking service had the activation UUID, but the author never directly said that. If the notification has your pseudonym in it and Apple's storing that when a notification goes to APNS, it does seem like Apple would be able to tie that to your device if they're peeking inside the notification payload. The solution to this would be for the app developer to not include sensitive info in notifications or for the user to disable push notifications, but an E2E encrypted trustless notification solution provided by Apple would be much nicer.


> On iOS, there is no full-disk or full-volume encryption, only varying levels of file-based encryption, partially dependent on third-party developer choices, such that what is, and isn’t, encrypted (with encryption tied to the user passphrase) is not always clear to the end-user.

I'm not sure about this, either; all recent iOS devices have a DMA AES engine that performs encryption on anything that travels between storage and memory.


Yeah, that’s completely and obviously fucking wrong and makes me question this person’s skills to be honest.


Seems to be at least a few things wrong there. It’s completely false that iOS doesn’t have full-device encryption, for example.

Edit: I’m going to revise this and say that having read the whole thing there is very little of substance other than “Apple has a ton of metadata about your devices” at all, and the author doesn’t do a good job of quantifying the impact of that information exposure. On top of that, they cite iOS being closed source as a reason for its purported insecurity. Honestly the part about not having FDE is enough to make me question their competence more broadly.



Thank you for sharing this very helpful / valuable information. I’m always looking to go deeper down the rabbit hole of security.


Characterizing this software feature as an "attack" or "backdoor" is pretty hyperbolic. In order to abuse multiplexing, the adversary needs local code execution ability, by which point you've already lost.


I would classify it as "works as designed". That said, I have argued with the ssh developers at length about MaxSessions defaulting to 10. No syslog entries are created on the server for the additional multiplexed sessions, and phishing attacks become incredibly easy. A coworker and I were going to demo how getting a developer to run a python/ruby script would lead to root access in production, but they stopped the demo for fear they would have to mitigate the scenario.

Some would argue that getting someone to run a script is difficult, but we found that about 10% of developers want to be helpful and are not cynical enough to presume malice. They will run the script, which will happily drop an ssh key, fire up sshd as the user, create an outbound connection to a passwordless, shell-less VPS node, and then we are that developer and can piggyback on all their connections. Some developers are devops, so they also have prod access. Some places have passwordless sudo, too. In some places you don't even need sudo, as the posix permissions of applications are sub-optimal.

If you try this, the script should have an obvious problem that requires running it to see. The developer/engineer will feel good that they helped you solve a trivial problem, and you will have whatever access they have. Obviously, get written permission for this type of pen-test, with all the steps clearly documented and approved. Most importantly, ensure management agrees NOT to shame the victims of the test. Get them to participate in the re-engineering of your network to harden it properly without adding excessive friction.
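
For anyone unfamiliar with what the piggybacking step looks like in practice, a rough sketch (the socket path, user and hostname are made up - the real ControlPath depends on the victim's ssh_config):

  # as the phished user, look for an existing multiplexing socket
  ls ~/.ssh/controlmasters/
  # confirm a master connection is still alive for that socket
  ssh -O check -S ~/.ssh/controlmasters/deploy@prod.example.com:22 prod.example.com
  # open a new session over the already-authenticated connection:
  # no password, no passphrase, no MFA prompt, and no new auth entry in the remote syslog
  ssh -S ~/.ssh/controlmasters/deploy@prod.example.com:22 prod.example.com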


But then again, getting someone to run a script is "local code execution", so it really doesn't matter how SSH is configured; the compromise already happened once the user ran a malicious script. What comes after is not so interesting.


I mostly agree. This just makes the attack substantially easier and removes all remote logging of the access. As far as the investigators will see, the victim of the attack performed the malicious behavior. Hopefully the edge firewall in front of the developer logs all outbound connections (and who owned the IP at the time), and hopefully they are not working from home/remote - or, if they are, they have a corporate VPN that logs all outbound connections.

If I phish you and you run a script, but multiplexing is disabled, then I have to take a few extra steps on your machine to capture passwords, assuming you have passphrases set on your ssh keys. It also means I have to initiate a new connection rather than using your existing ssh channels. Depending on the environment and your laptop configuration, this may or may not increase my risk of being detected. This of course depends heavily on what level of logging and remote monitoring of your laptop is in place.


What comes after is the interesting part. Because that's where the attacker will try to gain access to production, and the clock starts ticking for the blue team to detect, respond, and evict them.

See the "Assume Breach" mindset that Microsoft developed, for instance, if you are interested in learning more. There is an entire domain/world of security engineering that starts when the initial compromise has happened. And it does not (and should not) mean the adversary won just because they have code execution on one host.


If a malicious user gained access to your machine, how SSH is configured isn't interesting. If you use that machine to connect to other machines, the attacker will be able to as well, regardless of how SSH was configured at the time.

Heck, if the attacker prefers a certain SSH config, they could just change it. Even if you disabled the feature at compile time, the attacker could just replace the SSH command in your shell with their preferred version.

This is just disabling useful features to maybe cause minor inconvenience. I find it about as interesting as telling someone to pull out the power cord of their monitor to increase security of their login prompt screen.


How machines are configured is very interesting, as adversaries make mistakes and can trigger detections for suspicious behavior. There is an entire security field concerned with what happens after a breach.

Coinbase recently had a very interesting article/blog post about something similar, how adversaries gained access to engineering hosts and how they detected it.

Of course, how much you lock something down depends on the criticality of the asset and so forth. E.g. in certain high-security facilities, slight variations of your monitor example are applicable.


That's a good point. If the attacker changes configuration or drops binaries, they make noise instead of living off the land carefree, which makes them easier to detect. I see.


I think this helps the attacker piggyback on the connection of a user who has the MFA device, and thereby get deeper into the network than the bastion allows without MFA?


First, the article doesn't characterize the feature as an attack or a backdoor at all. It describes how a perfectly valid feature can be exploited to achieve deeper network penetration. I believe this technique was actually used to target Coinbase a few months back, as I recall from a post on HN.

It's useful for pivoting from a foothold at the boundary (e.g. a Chrome zero-day) into the crown-jewel backend, which could be totally isolated to reduce the attack surface - but if you connect into it from a compromised host, this provides a convenient and hard-to-disable vector to piggyback onto the connection.

If there were no way to piggyback the session, even owning the developer's terminal wouldn't gain you access to a secure system that uses multi-factor authentication with a hardware token.


If the developer's terminal is owned the attacker can always find a way to piggyback the session, such as by attaching a debugger to ssh and injecting malicious commands as if the user had typed them (and hiding the echo so the user doesn't even know it is happening).


> by which point you've already lost.

Depending on how big you are and how much security is a core competency, even at this point it's important for your system to be architected in a way that can slow the attacker down in order to give your blue team time to respond.

Ideally you will have built your system to have multiple layers of defense. Reality is somewhat less ideal, but it's still valuable to discuss how to harden against amplification/persistence techniques after the initial breach.


Additionally, you would need to be able to execute code as the targeted user account. I think it's acceptable to call this kind of use an attack or backdoor, but not the feature itself. I use this feature daily while considering the risks.
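
For reference, the feature is enabled with a few lines of ssh_config - roughly the following (paths and timeouts are only illustrative; "autoask" is one way to keep the convenience while forcing a confirmation prompt before each session joins the master):

  # ~/.ssh/config - connection multiplexing
  # (mkdir -p ~/.ssh/controlmasters first)
  Host *
      ControlMaster autoask
      ControlPath ~/.ssh/controlmasters/%r@%h:%p
      ControlPersist 10m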


I would like to see Shaun Jones, the article's author, compare and contrast the dangers of "ControlMaster auto" with Russell Jones, one of the authors of Teleport.


Yeah, I was going to say: if you have access to the target user's `~/.ssh/config`, you have access to their keys.

Though I suppose if a key has a password on it, this attack might be useful.


Works fine here, even over Tor - Firefox 70.0.1 with javascript.enabled=false in about:config.


If I curl the article's URL, all I get is:

  <html>
  <head>
  <META NAME="robots" CONTENT="noindex,nofollow">
  <script src="/_Incapsula_Resource?SWJIYLWA=5074a744e2e3d891814e9a2dace20bd4,719d34d31c8e3a6e6fffd425f7e032f3">
  </script>
  <body>
  </body></html>
And then if I curl:

https://www.nccgroup.trust/_Incapsula_Resource?SWJIYLWA=5074...

Then I get an obfuscated Javascript blob:

http://dpaste.com/2H519EP

I can't understand how this page could work on any browser that doesn't enable Javascript.

The only possible explanation I can think of is that it must be sending different content based on user agent, or something, though messing around with sending different user agents via "wget -U" gets me more or less the same thing.


I tried again and now get the Incapsula crap - maybe related to IP address (which changes often on Tor)?

Edit: the page loads for the first time after assigning a new IP in Tor, but subsequent loads throw the captcha. Odd system.


I'm happy with LineageOS running on a Pixel, no Play Services and only open source applications installed over adb.
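
(Sideloading over adb is just the standard install command; the filename here is only an example.)

  # install an APK you downloaded, e.g. the F-Droid client
  adb install f-droid.apk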

