Keystroke timing obfuscation added to ssh(1) (undeadly.org)
586 points by zdw on Aug 29, 2023 | hide | past | favorite | 255 comments



Keystroke timing has been a concern for terminal I/O since the 1980s and folks were using primitive encryption with stelnet and kerberos.

Most terminal applications use buffered I/O for password entry, which is still an important security feature. In that mode, nothing is sent to the other end until the user presses return. A MiTM only "sees" one packet no matter what, and with padding they can't even infer the password length.
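The padding half of that is easy to sketch. This is a toy illustration of the idea only (a real protocol pads inside the encryption layer, with authentication); the 64-byte block size is arbitrary:

```python
import secrets

def pad_secret(pw: str, block: int = 64) -> bytes:
    """Pad a password to a fixed-size block so an eavesdropper can't
    infer its length from the size of the single transmitted packet."""
    raw = pw.encode()
    if len(raw) >= block:
        raise ValueError("password longer than padding block")
    # One length byte, the password itself, then random filler.
    return bytes([len(raw)]) + raw + secrets.token_bytes(block - 1 - len(raw))

def unpad_secret(padded: bytes) -> str:
    """The receiver strips the filler using the length byte."""
    return padded[1 : 1 + padded[0]].decode()
```

Every padded password comes out exactly `block` bytes, so a short password and a 40-character passphrase are indistinguishable on the wire.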

For a time, there were rich pickings in applications that accepted passwords in unbuffered mode. Many of them did it so that they could echo "*" symbols, character by character, as the user typed. That simple feature looks cool, and does give the user feedback ... but it leaks the keystroke rate, which is the last thing you want on password entry.

I hope we preserve buffered I/O for password entry, because it's still better than what ssh can do with some obfuscation. But it's great to see ssh add this, and will help protect content that can't be buffered, like shell and editor input.


As an aside, I've noticed that the current technique of rendering a fixed number of asterisks independent of the password length is quite confusing to users -- "that's wrong, it's the wrong length" -- resulting in attempts to type in the "correct" password and thus obviating the benefit of the stored password.

Not sure how to fix that. I recall a visible hash of some form being used in the past (e.g. take a 2-digit hash and pair it with a smiley; I must have entered it right, it's showing me the ROFL smiley), but that would aid shoulder-surfed password entry, at least.


I've seen a GUI password input field that mutated an abstract line drawing on every keypress. Think random cross-hatching over the whole input field where the lines are nudged a little on every press.

(Not that that's necessarily a good idea, it still gives away timing/length information to e.g. cameras.)


I remember seeing this in Lotus Notes. Never saw it before that, or since.


Oh yes, that could be it! Employer made me run Windows in a VM to read corporate email.

My memory doesn't match videos I can find online of it, but that could be version differences.


At IBM someone made something called fetchnotes that worked like fetchmail but handled both mail and calendar; it let me get away with minimal usage of Notes.


Yes. Wasn't it to make it harder to spoof a password prompt with a static image popping up?


I think it was to provide an indication that the password was correct at a glance. (IIRC the number of dots in the password field was also generated, so it didn't necessarily match the number of chars)

The image was essentially a simple checksum. Each user would eventually memorise which icon was "theirs".
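A visible checksum like that is just a tiny hash-to-icon mapping. A sketch (the icon names and set size here are made up; any small, visually distinct set works):

```python
import hashlib

# Hypothetical icon set -- stand-ins for the smiley images.
ICONS = ["ROFL", "WINK", "GRIN", "CAT", "GHOST", "FOX", "ROBOT", "SUN"]

def password_icon(typed: str) -> str:
    """Hash the typed text and pick one icon. The user learns which
    icon is 'theirs'; a mistyped password very likely shows another."""
    digest = hashlib.sha256(typed.encode()).digest()
    return ICONS[digest[0] % len(ICONS)]
```

With 8 icons a wrong password still matches about 1 time in 8, so it's a sanity check, not an authenticator.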


xsecurelock (https://github.com/google/xsecurelock) has a few variants on this.


I'm honestly seeing little value in asterisks with WFH and the move to passphrases. Feedback is important when you're typing a long phrase with complete precision. Plus shoulder surfing is simply not a thing when my physical security profile now involves a locked front door and a call to the police.


WFH also means Working From my backyard, the coffee shop around the corner, the library, a friend's house, a hotel room, etc.

Even for people who only work at home while working remotely, private homes can see a lot of traffic. I wouldn't assume all screens are kept and used in totally secure environments so we should probably still stick with masked passwords and telling users not to keep passwords written on a post-it note stuck to their monitor.


And now employees simply leave their laptop open with the SSH window up while getting their coffee because it's now so annoying to close the lid and correctly type the password.

>USB Rubber Ducky has entered the chat


If they can see the screen wouldn’t they be better off just looking at the keyboard to directly observe what’s being typed?


> the coffee shop around the corner

I would hope people in high leverage job roles would just avoid such behavior.


> I would hope people in high leverage job roles would just avoid such behavior.

I used to hope that as well. Then I met people and lost that hope. It's truly impressive how much stupid shit gets pulled by people that "should know better."


Plenty of value in confirming that you are hitting each key exactly once.


Why not just mutate a specific fixed-length line with every keypress?


You've never typed a password in while screen sharing?


I don't type passwords. My password manager fills them for me, or I paste them.


Unlocking the password manager means I need to type a master password in while in a public place. Feels higher risk when it is an unimportant website but potentially gives access to all websites. Still better than the passwords being accessible on disk but having individual passwords would reduce the impact of any password leak.


I have this InputStick USB doohickey [1] that I keep with my keys. It shows up as a generic USB keyboard when plugged in but is also an encrypted Bluetooth dongle (part of pairing allows you to configure a shared encryption key so that only devices that know the key can use the stick, and only sticks with the key are recognized by the client apps). There's a plugin for Keepass2Android [2] that I use to type passwords from my phone. I use that to unlock my password manager (using a giant untypable passphrase). So entering monstrous passphrases is very easy... but only if you can unlock my phone and use biometrics to open Keepass2Android.

It really is dumb that phones can't just generically play USB HID (without running custom kernels)

[1] http://inputstick.com/

[2] http://inputstick.com/kp2a-plugin/


1password uses biometrics on my 7 year old MacBook Pro, so even if I'm out and about I still don't need to type it.


1p works great on my mac but still asks for a password from time to time, I'm not sure of the exact mechanic.

OTOH even Chrome's password manager now integrates with the Mac fingerprint auth.


It's every two weeks. If your threat model involves being spied on over the shoulder for your master password while in a cafe you "just" need to ensure you enter your password in a safe location every two weeks.


Oh god no, absolutely not. Always stop sharing for the duration of the password entry.


What if you're demonstrating a problem with a login screen? And yes, I've had to do exactly that more than once. I wouldn't do it with a particularly sensitive password (online banking etc) but there are enough passwords I use regularly for work purposes where it wouldn't be a significant risk for others to watch me type it in, certainly if the characters aren't revealed at all while typing. Though it would be nice if password fields could automatically detect that the screen is being shared and obscure the relayed pixels.


Why use a good password while testing your login screen? I use "iamroot" and "password".


They're typically passwords that are only for testing accounts anyway, and that are known to the team members I'm sharing with. But...it's easy to slip up now and then and forget you're actually putting in a password while screen sharing that it's probably best not to have your co-workers know! Obviously the worst is your actual O/S password, as knowing that could potentially allow a co-worker access to other passwords that are quite sensitive, but I'm not sure it's even possible to screen share your O/S login screen - probably shouldn't be! It is a good argument for not re-using that password for any browser-based logins, but SSO policies tend to make that impossible unfortunately. Mind you I use a pin for my O/S login screen, whereas for browser-based logins you can't.


Sadly I think security systems will have to accommodate the possibility that someone else can see your screen. And hope that they can't see your keyboard.


I'm going to suggest that is correct and also unusual behavior.


Are you describing your experience or implying that the industry should change this because you can WFH?


The latter. They seemingly meant "I can WFH, so asterisks are meaningless to everyone. F@&# asterisks!"


> I'm honestly seeing little value in asterisks

They're essential! How else would we encourage the average user to use as short and as simple a password as they can get away with?


Lotus Notes used to have that. (Might still do?)

https://security.stackexchange.com/questions/41247/changing-...


We used Notes at work until a few years ago and it still had it IIRC. I never stopped to think about why the pictures changed, that's interesting. Another annoying decision is that they prevented pasting passwords, which is very inconvenient when using a password manager. I ended up having to use one that simulated keystrokes.


I’ve seen some programs render three asterisks per keystroke. Deters human shoulder surfers from seeing the length of your password.

I think the simplest and safest solution would be a shape that rotates by a random interval on each keystroke.

Depends what problem you want to solve. "Did the keyboard register my press?" vs "did I type the same thing as last time?" have some different constraints.
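Both display schemes are one-liners; a sketch of each (glyph choices are arbitrary):

```python
def masked_display(n_keystrokes: int, per_key: int = 3) -> str:
    """Render several mask characters per keystroke so the on-screen
    width no longer equals the password length."""
    return "*" * (n_keystrokes * per_key)

SHAPES = "◐◓◑◒"  # a single glyph that rotates on each keypress

def shape_after(n_keystrokes: int) -> str:
    """Alternative: one rotating glyph -- feedback that a key
    registered, with no length information at all."""
    return SHAPES[n_keystrokes % len(SHAPES)]
```

Note the multi-asterisk scheme still leaks length to anyone who can count (divide by three); the rotating glyph leaks only to someone watching the whole entry.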


The login fractal - a shape that is infinitely recurring, starts at a random place, and indicates entry with "zooming".


The browser could use a different rendering convention for autopopulated passwords. For instance, it could render a solid black bar (no characters for the user to count) or maybe the phrase "autofilled", perhaps with a strange background color / rendering convention.


What about : "You are typing..." like in IM apps


> Keystroke timing has been a concern for terminal I/O since the 1980s and folks were using primitive encryption with stelnet and kerberos.

I had a Visual Basic AI addon in the 1990s that could work out who was typing at the keyboard from their typing pattern within a few minutes of typing, which kind of rendered the logon process mute.

Today, that can be applied to touchscreen logons by tying finger-pressure patterns (i.e. the size and shape of finger contact with the touchscreen) to a user, and when incorporating swipes or mouse movements in the desktop OS context, it's possible to have a security app which can lock a system if someone is using a device and user account which is not theirs.
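The core of that kind of identification is just comparing inter-key timing profiles. A deliberately naive sketch (real systems model per-digraph statistics with proper classifiers, which is presumably what the addon did internally):

```python
from statistics import mean

def interkey_intervals(timestamps):
    """The raw feature for keystroke dynamics: time between key events."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def profile_distance(a, b):
    """Mean absolute difference between two typing rhythms.
    Small distance = probably the same typist."""
    n = min(len(a), len(b))
    return mean(abs(x - y) for x, y in zip(a[:n], b[:n]))
```

Two sessions from the same typist should score closer to each other than to a stranger's rhythm.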

At the very least you can log every time one's GF/missus has gone through your phone.


> mute

moot?


Like a cow’s opinion.


That's the 4chan guy


Nah it's a brand of synthesizer.


No, the meaning of "moot" is clear. It simply means a question that at the current time has lost its relevance, or that at the current time has just become the only question that is relevant. Easy.


Whose username is derived from "moot".


Spiffy! Do you remember what were you using as your model?

(Btw, 'moot' not 'mute' :) )


I don't know; I don't think we even had that much access to tune it, so to speak. This was VB4 (1991) back then. I don't think it was a VB extension but an OCX (1994), which is OLE2/ActiveX technology.

I think I got it from a 3.5" disc on the front of a computer mag in the UK, from memory, but it was a US company that wrote it. So there might be a copy of it in the Wayback Machine.

It was quite simple to use, so a lot of their AI decisions or tuning was probably already made for us in order for it to be put out there as an addon.

But I have never seen anything like it since as an addon, and it seemed such a good idea in the scheme of things when it comes to computer security, with all the hacking that is in the press today.


Password based ssh authentication should be used approximately never.


That is not the only time you use passwords over ssh, e.g. I don't use a password to remote into my desktop from my laptop, but I do use one when using sudo on the desktop.


Actually this is something that is relevant to my interests.

I prefer to have sudo ask for a password when I'm physically in front of the machine, but not if it's a remote session (e.g. SSH from my laptop to my desktop).

Maybe the SSH agent on the client can re-authenticate to the server when requested?


> Maybe the SSH agent on the client can re-authenticate to the server when requested?

There is a PAM module that does this: https://github.com/jbeverly/pam_ssh_agent_auth

Note that this is a bad idea from the security standpoint, as it requires SSH agent forwarding. Which means that, if the remote server is compromised, the attacker can use your SSH agent to log into other servers as you.
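For reference, the wiring on the remote host looks roughly like this (a sketch only; verify paths and module options against the pam_ssh_agent_auth documentation before relying on it):

```
# /etc/pam.d/sudo -- try agent authentication before falling back
auth sufficient pam_ssh_agent_auth.so file=%h/.ssh/authorized_keys

# /etc/sudoers -- sudo must be allowed to see the forwarded agent socket
Defaults env_keep += "SSH_AUTH_SOCK"
```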


The local agent can ask the user to approve/deny signing requests.


Is there no way to forward fido tokens? Or the GPG agent with a Yubikey.

Under Windows, you can forward your smartcard over remote desktop. It's one of the few things Windows has I miss on Linux.


Forwarding the ssh agent (-A) is considered insecure. Instead, man ssh recommends using a jump host (-J).
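The jump-host setup can also live in the client config, so a plain `ssh` to the target goes through the bastion with no agent forwarding at all (host names here are hypothetical):

```
# ~/.ssh/config
Host internal.example.com
    # Equivalent to: ssh -J bastion.example.com internal.example.com
    ProxyJump bastion.example.com
```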


I was talking about the GPG agent, so that the key on the smart card can be used to for sudo elevation on the remote host. This usually requires user interaction with the key, so just having access to the agent wouldn't do much. I don't think the ssh agent would help with this.

To your point, I wonder whether that consideration holds when the private key is held on an external device, like is the case with a YubiKey. I use that setup, and I can't add the key to the ssh agent.

    $ ssh-add .ssh/id_yubikey_gpg.pub
    Error loading key ".ssh/id_yubikey_gpg.pub": error in libcrypto
SSH connections work fine with that key.


I attempt to use this, and some programs recognize it and many just don't.


Don't these apps just use PAM? Since the initial complaint was about sudo, I'd figure pam / polkit would handle this, and apps would call those to obtain privilege elevation.


FWIW, you can probably configure sudo to use something other than passwords. On a Mac you can use the fingerprint reader for example, it's just disabled by default.

And your terminal may come with a password manager too, which would be unlocked with whatever means.

Again, on a Mac with iTerm you can do this with a fingerprint.


That's not what the parent is talking about.

They're specifically refering to password authentication to make the ssh connection.


we're not necessarily talking about ssh authentication. Wouldn't that send the entire password as a single packet, anyway?


correct - this is for the post-auth session and not the authentication phase


How would you log in for the first time into a headless device?


Same way you'd get the password? It's either a physical or virtual server you more or less control, in which case the siblings' answers apply. Otherwise, it's probably some kind of image or something someone else controls, in which case bake in or send them your public key or certificate (if you've got colleagues in the same situation as yourself).


Getting a password does not require modifying the system. Injecting a public key does.


The password needs to be generated somehow, right? Assuming you don't use a pre-baked password that repeats across machines, you could replace the password generation and retrieval with deploying a public key instead.


The remote system must generate its own SSH private key; you could use that opportunity to deploy the authorized keys before sealing the system as read-only.


You can commonly deploy the device/server with the client's public key.
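Baking the key into the image tree is mechanical. A sketch where a temp dir stands in for the mounted image root, and the key line is a placeholder for your real public key:

```python
import os
import tempfile

# Stand-in for the mounted image root (e.g. /mnt/image in real use).
image_root = tempfile.mkdtemp()

# Deploy the client's public key instead of shipping a shared password.
ssh_dir = os.path.join(image_root, "root", ".ssh")
os.makedirs(ssh_dir, mode=0o700)
auth_keys = os.path.join(ssh_dir, "authorized_keys")
with open(auth_keys, "w") as f:
    f.write("ssh-ed25519 AAAA... user@laptop\n")  # placeholder key

# sshd refuses keys with lax permissions, so tighten them.
os.chmod(auth_keys, 0o600)
```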


What if it's mass produced and sold in a store?


That's assuming the device runs GNU/Linux with / mounted rw. But not everything is a laptop or a desktop.


No, it's assuming a device running a ssh daemon with something mounted rw or user-modifiable[0] that can hold an authorized_keys file. A NetBSD embedded board that configures sshd with `AuthorizedKeysFile /sdcard/config/authorized_keys` would be fine, for instance.

[0] For example, you could let the user write their key to an SD card and then mount it ro on the device.


So what do you do when the device has no long-term storage like an SD card?


Such a device is then simply not suitable for situations where the issues with SSH password authentication become relevant.


What kind of device runs sshd but has no persistent storage?


"One time, on first use, where absolutely necessary, and changing password immediately afterwards" seems a reasonable interpretation of "approximately never".


I don't know. I come across old AP/routers where I've forgotten the login credentials and find myself hard-resetting them with some regularity, one that's above "approximately never" anyway.


I'm presuming the hard reset is to a factory-assigned password.

Is that uniform across all devices, or device-specific?

Practice I've seen for some years now is to have a label on the device with admin/root password, which is presumably neither uniform across devices nor trivially-determinable from device characteristics (e.g., MAC address, sequential serial numbers, etc.).

I'd still consider that practice reasonably tolerable, though you should be keeping better tabs on assets and credentials.


It could be totally fine if you disable WiFi and connect physically. At least the first time for setup.


I'd use a base image with a baked-in SSH certificate allowed.

Fairly trivial to make, at least with NixOS.


This hinges on this being either a VM or some hardware you've set up yourself.


what other situation would you be in?


Any device where you don't control the initial firmware, and the firmware doesn't support ssh keys. AP/Routers (consumer and commercial and industrial grade), Shared hosting with ssh but limited features (eg GoDaddy)...


For physical devices, you can usually connect them via a dedicated Ethernet cable right to your laptop, and set the initial password. They likely don't have the right network settings anyway to drop them right into the bigger LAN.

Otherwise I think you just prepare a certificate ahead of time, and scp it during the first connection, then immediately disable password-based access, or at least change the password. Any passive eavesdropping still needs to defeat the encryption somehow (no feasible ways are known now), even having seen the initial exchange.

If you have an active MITM attack, all bets are off, because the attacker could even grab the image with the pre-baked key you're sending, and copy or change the key. If this is not possible, then the pre-baked key would help. If your security is really important, don't use the cheap GoDaddy offerings with limited SSH.


Let's say you bought a router and now you want to log into it.


Then you connect physically and do whatever is necessary to prepare that router for your intended use.


Connect to what? It only has an ethernet jack.


Are you being intentionally difficult or have you just never bought and set up a router?


Your laptop, for instance.


This is naive in the extreme. There are many scenarios where passwords are needed, for bootstrapping, a disgruntled admin leaving, etc.

There is a role for a common secret in a secure ecosystem (password, passkey)


That common secret is usually an ssh key which is held somewhere secure hopefully with auditable access.

For bootstrapping you can bake a bootstrapping key into your installer which is removed after the system is configured.


This is completely irrelevant to password based SSH authentication. The timing obfuscation is for the session _after_ authentication.


Does anyone know any SSH clients that support line-buffering of input?

I.e. where what you type doesn't get transmitted until you hit or click return/send?

I had one of these clients (but for telnet) back in more active MUD gaming days but haven't seen it with the few SSH clients I've used since... but always thought that would be a good defense to SSH keystroke timing data leakage, and potentially superior to this 20ms delay approach mentioned in this article, at least for some usage scenarios.

(Although now that I think about it, ideally you might want it to also transmit when someone hits tab so you could still have linux shell autocomplete...)


That would only work if the ssh client could know exactly what was going on in the user session. Like, how would that work if I were editing a file with vim? Or even just typing a command into the shell (where I might need to backtrack and edit the command)?

This doesn't seem very feasible or useful to me.


That's what the TTY settings are for. The program being displayed controls line buffering by setting the TTY mode as it wants.

https://www.linusakesson.net/programming/tty/


That's more a choice a current-day shell etc does for you, wanting to control the editing experience. Run `cat` and it'll switch to line buffered mode, note how your arrow keys just input line noise, and watch the cat process with ptrace if you want to confirm it really receives the whole line in one read syscall.
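You can watch that flag directly with Python's pty/termios modules (POSIX only). A rough sketch: a freshly opened TTY starts in canonical (line-buffered) mode, and clearing ICANON is exactly what vim and shells do to receive each keystroke immediately:

```python
import pty
import termios

# Create a pseudo-terminal pair; the slave side behaves like a real TTY.
master_fd, slave_fd = pty.openpty()
attrs = termios.tcgetattr(slave_fd)
lflag = attrs[3]  # local-mode flags: ICANON, ECHO, ...

print("canonical:", bool(lflag & termios.ICANON))
print("echo:", bool(lflag & termios.ECHO))

# Clear ICANON the way a full-screen editor would; `cat` would leave
# it set and receive whole lines in a single read().
attrs[3] = lflag & ~termios.ICANON
termios.tcsetattr(slave_fd, termios.TCSANOW, attrs)
raw = termios.tcgetattr(slave_fd)[3]
print("canonical after clearing ICANON:", bool(raw & termios.ICANON))
```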


mosh is quite smart with this


Not exactly what I was looking for in terms of the security side of things but perhaps more sophisticated in terms of the editing handling. Cool, thanks for the reply!

Money quote from https://mosh.org/#techinfo :

Remote-shell protocols traditionally work by conveying a byte-stream from the server to the client, to be interpreted by the client's terminal. (This includes TELNET, RLOGIN, and SSH.) Mosh works differently and at a different layer. With Mosh, the server and client both maintain a snapshot of the current screen state. The problem becomes one of state-synchronization: getting the client to the most recent server-side screen as efficiently as possible.

This is accomplished using a new protocol called the State Synchronization Protocol, for which Mosh is the first application. SSP runs over UDP, synchronizing the state of any object from one host to another. Datagrams are encrypted and authenticated using AES-128 in OCB3 mode. ...

Roaming with SSP becomes easy: the client sends datagrams to the server with increasing sequence numbers, including a "heartbeat" at least once every three seconds. ...

Instant local echo and line editing

The other major benefit of working at the terminal-emulation layer is that the Mosh client is free to scribble on the local screen without lasting consequence. We use this to implement intelligent local echo. The client runs a predictive model in the background of the server's behavior, hypothesizing that each keystroke will be echoed at the cursor location and that the backspace and left- and right-arrow keys will have their traditional effect. But only when a prediction is confirmed by the server are these effects actually shown to the user. (In addition, by default predictions are only displayed on high-delay connections or during a network “glitch.”) Predictions are done in epochs: when the user does something that might alter the echo behavior — like hit ESC or carriage return or an up- or down-arrow — Mosh goes back into making background predictions until a prediction from the new batch can be confirmed as correct.

Thus, unlike previous attempts at local echo with TELNET and RLOGIN, Mosh's local echo can be used everywhere, even in full-screen programs like emacs and vi.


I would assume that doesn't work well for text editing.


> Most terminal applications use buffered I/O for password entry

does the existence of this patch imply that openssh does not behave that way?


i don't really type passwords into remote connected terminals anymore and haven't for some time. shrug

while shoring up the existing holes is worthwhile, people should really be using keys or difficult to type passwords in 2023.


so that's why my terminal doesn't show "*" as i type my password?

tripped me up the first time i used a shell. interesting to see why


This reminds me of professional Bridge. They split the teams with a wall and pass their cards through a window at the same time to prevent communication through timing.

https://youtube.com/watch?v=RVZLNRmO3vo


And yet they cheat through the screens.

https://en.wikipedia.org/wiki/Blue_Team_(bridge)#Cheating_an...

https://en.wikipedia.org/wiki/Fantoni_and_Nunes_cheating_sca...

https://en.wikipedia.org/wiki/Fisher_and_Schwartz_cheating_s...

And those are the ones we know about. :)

As for actually using bridge in a job interview: I did, once. In bridge there's a rule where if your partner gives you a hint not via the bidding, you must take the opposite approach if logically possible. It is called "Active Ethics". I had an interviewer try to lead me by the nose to the answer way too hard, in a debugging interview. So I'd stop and check EVERYTHING I could think of first before doing what he said. I told him I was doing it after the interview, and to look up Active Ethics if he needed a further explanation.

Got the job.


While I admire your ethics, I feel like a lot of technical job interviews are structured such that you're supposed to actively collaborate with the interviewer. The interviewer is allowed to give you hints or suggestions, and they're very interested in how well the candidate takes hints.

And sometimes the hint can be a trick! I recently did an interview where the interviewer asked if I should use a shortcut to compare two strings, one that assumed there's only one way to normalize a string. I almost fell for it, but then I hesitated and mentioned that I was concerned about some languages where that assumption wouldn't hold. They agreed and were happy that I chose the safer approach.


There's a difference between collaborate and get clubbed over the head with the answer.

This guy was doing the latter, and it was meant to be an interview to test raw debugging/diagnostic skills. If I just followed the breadcrumbs, I'd show no real skill.

In a coding interview, I'd follow the hints.


Also many interviews are structured so there simply won't be time to finish the exercise if you're going slow.


In coding interviews, that's very true.

In this interview I wasn't concerned about that. If you are looking to see if someone understands Linux by testing diagnostic skill, if they are coming up with 3-4 different failures to check for every step... They are doing their job.


Higher diagnostic skill would be checking the highest-likelihood scenario (that the interviewer had created) first.


No... that's just listening to someone tipping you an answer.

Which if you are doing real diagnostics is often 100% the wrong answer.

I double check other people's work, and make sure that everything is right. Because one small misstep can result in not diagnosing an issue correctly.

Listen to what the user reports as the issue.... not the cause. Always work from the symptoms to the problem.


It probably wasn't the situation in your case, but I often give straightforward hints if the candidate is struggling with something that I don't want them to spend time on so we can get to the significant material.

E.g. in an algorithms interview they get stuck on an unrelated python issue (many people interview in python but don't use it day to day), or in a system design interview they get stuck on designing extra-credit subsystem C when they haven't finished subsystems A and B.

If they aren't getting it after a couple hints, I'll just tell them the answer or tell them to come back to it later.

Anyway, I would be very careful if you aren't going where the interviewer is pointing you. If you think it's a trick or you want to practice Active Ethics, then I would call that out in the moment since you might be messing up the flow of the interview at best and come off as hard to work with at worst.


I was very polite, but I just mentioned each other path.

In all situations, judgment is required.


Oh, I know. Attackers will continue to attack. In my opinion, professional bridge is a doomed game. Decades of added steps to prevent cheating complicate too much an already very difficult game, and determined, smart people are still very successful at bypassing them anyway.

I still want to learn to play at a reasonable level though, I'd rather waste my time on bridge than chess. But it needs to be home games, and there's no way I'm going to find the partners when spades and bid whist are out there and easy to learn.


As someone who has played in the Grand National Teams - Flight C. :)

It has problems. Cheating is a huge issue, as is sportsmanship. If you know bridge: I used to play Precision with an 11-13 1NT. When people saw our convention card, they'd often ask to swap tables with other teammates. (Clearly not legal.)

When I was playing on a team where all 4 of us played the same convention card those people made me laugh so hard.

Cheaters will cheat. I played clean, I had fun. I haven't had time to play for a while. But man, bridge is a funny little world.


Surely then you're just in a game of bluff with a Sicilian ... ie then you just feel your partner to do the opposite and make sure it's caught, resulting in them taking the action you intended?

IANABridgePlayer, clearly.


Remember, partner has an ETHICAL issue. Partner must work AGAINST you. If they can infer that you might mean something other than what you are signalling, they must take that into account.

I've been in the situation in game a few times. Thankfully, my decisions were pretty cut and dry.

You don't have to do non-obvious things. If you are going to accept any invitation to game... You are going to accept even if partner looks happy, what I wouldn't do is throw out a slam exploring bid if I was on the fence about it.

If I was absolute top of range... I'd go ahead and make the bid. Because there is nothing that would change based on partner's actions.


Then they see what you're doing and have to act accordingly. Eventually... something about a land war in Asia.


If people are just expected to be human state machines and are penalized for not doing the prescribed automata, then you might as well flip a coin for the trophy and skip the game.

This is like saying a catcher can't signal to a pitcher.

Information-passing is a human skill that adds a dimension to the game. Let the best win.


Yeah, I lost all interest in Bridge when I found out the people who play it hate 100% of the interesting parts and had outlawed them, and that every time someone comes up with another cool approach, they outlaw that, too.

Initially learning the game it was like “oh wow, that feature of the game has some really cool implications! This is amazing!” but then reading about how real bridge tournaments run, yeah, they crafted the rules to remove every single one of those cool implications.

[EDIT] to be fair, the basic rules would also result in a terrible game as soon as people got too good at exploiting them. I just think they’ve managed to find another way to ruin the game while keeping it technically playable.


The extent to which this just seems to be openly true is startling. Some games respond to new strategies that are particularly effective by embracing them and setting aside older approaches. Some games respond by rebalancing and changing rules to keep the game working well. Bridge just bans the strategies themselves (eg, https://en.wikipedia.org/wiki/Strong_pass and https://en.wikipedia.org/wiki/Highly_unusual_method ).


Strong Pass there's very good reasons to outlaw. It is simply too destructive a method to yield an interesting game.

HUMs also tend to end up being very destructive to the opponents, because they really don't understand the full implications of the bid. And may not have discussed how to bid over it. Heck I've run into this with people playing over a strong club system, and they haven't discussed what 2C means.

In the end... many games end up with a few rules to make them interesting. I will not defend the ACBL here, I think the WBC is pretty much on the mark last I watched.


> Information-passing is a human skill that adds a dimension to the game.

Nah. You choose the game that you prefer. You can play the game where you cheat all the time, but don't play it with people who like bridge without asking them first.


Bridge has a built in channel for communication that has very limited bandwidth. The bidding conventions are about maximizing how much you communicate with limited symbols and almost no attempt at secrecy. Effectively it'd turn the game into one where players play with their hands face up, because that's the most effective way to communicate. That doesn't sound very interesting to me.


If you want to invent a version of bridge where surreptitious information-passing is part of the game, more power to you, but it’s not the same game.


Quite untrue.

There is this thing called "Bridge Judgement" that you are allowed to use.

Just because your hand has 10 HCP... but has 13 spades... doesn't mean you will pass. You'll bid your 7 Spades and call it a day.

Bridge has many shades of grey. It is learning how to dance them correctly that is hard.


Wow, looking at this with a red-team cap on, there is so much human "messiness" to exploit here. It shouldn't be too hard to pass a bit or two of information.


It might be interesting for a security person to try to come up with ways to hypothetically assure a trustworthy bridge game, assuming no limits on costs or inconvenience (i.e. if a trustworthy bridge game takes three months to play, or requires launching a satellite into orbit, so be it.)


Bridge is a really weird game. It's all about secret communication with your partner, but it's not allowed to be secret. You can communicate, but no communication! Very odd.


Bridge tournament rules are crafted as if everyone involved wishes they were playing a different game, but are for some reason stuck with the basic rules of Bridge. There’s a pile of rules about how you aren’t allowed to do all kinds of things that the basic rules would enable.

It’s like if baseball couldn’t change field size or mound height or whatever and just had to add lots of rules about how you aren’t allowed to throw too fast or hit too far etc., but kept the physical reality of the game the same.


Communication through bidding is fascinating. Any type of collusion during any kind of auction is fascinating.


It seems there is still a possibility for passing information. For example, you can shove the little table across the barrier, or slowly slide it to indicate something. That's how the guy in the upper right passed it the first and second time.


There are endless ways to pass information. Notice the sibling comment about "active ethics." It's the game sort of saying "there's really no fool-proof way to keep you from cheating, so please just be a good person. Even to the point that if you're put into a situation where you could accidentally cheat, you should intentionally play non-optimally."


Here's a 2008 article about a 2001(!) paper noting such timing attacks: https://lwn.net/Articles/298833/

>This weakness was outlined in a 2001 paper entitled "Timing analysis of keystrokes and timing attacks on SSH" [PDF] which looked specifically at the timing-based attack:

>In this paper we study users' keyboard dynamics and show that the timing information of keystrokes does leak information about the key sequences typed. Through more detailed analysis we show that the timing information leaks about 1 bit of information about the content per keystroke pair. Because the entropy of passwords is only 4-8 bits per character, this 1 bit per keystroke pair information can reveal significant information about the content typed.
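To put that claim on a scale (my own back-of-envelope arithmetic, not from the paper): an 8-character password has 7 keystroke pairs, so the timing channel gives up roughly 7 bits against the password's 32-64 bits of entropy.

```python
# Back-of-envelope: ~1 bit leaked per keystroke pair (per the paper's claim).
chars = 8
pairs = chars - 1                      # 7 inter-keystroke gaps in 8 characters
entropy_bits = (4 * chars, 8 * chars)  # 32..64 bits of password entropy
leaked_bits = pairs * 1
# The attacker's search space shrinks by roughly a factor of 2**7 == 128.
assert leaked_bits == 7 and 2 ** leaked_bits == 128
```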

I thought this was fixed a long time ago and I thought there was a fix pushed around the 2012 time period. I'm totally shocked this has not been previously address.


> I'm totally shocked this has not been previously address.

same same

i reckon there is more going on


>previously addressed*.

I am the king of typos and misspelling.


Some day we have to use packets which are pre-filled by random data to hide our keystrokes in. Not quite steganography, but close. Could also be used to make traffic-analysis harder/impossible even?


The NSA and others have done this for decades. Run the line at full utilization, fully encrypted, and just put data on when you need to. Not too hard, when your lines are dedicated.
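A toy sketch of that idea in Python (illustrative only; the frame size and queueing are made up): every tick emits a fixed-size frame carrying real data when there is any and random padding otherwise, so the wire looks identical either way.

```python
import os
from collections import deque

FRAME_SIZE = 64  # bytes per frame; an arbitrary illustrative value

def pad_stream(real_messages, n_frames):
    """Emit n_frames fixed-size frames: queued real data when available,
    pure random padding otherwise. Every frame is padded to FRAME_SIZE,
    so neither size nor timing reveals when real data was sent."""
    queue = deque(real_messages)
    frames = []
    for _ in range(n_frames):
        payload = queue.popleft()[:FRAME_SIZE] if queue else b""
        frames.append(payload + os.urandom(FRAME_SIZE - len(payload)))
    return frames

frames = pad_stream([b"ls -la\n"], n_frames=5)
assert all(len(f) == FRAME_SIZE for f in frames)  # indistinguishable sizes
```

On a real link the frames would also be encrypted, so padding and data are indistinguishable in content as well as in size and timing.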


This is the only way to ensure that no information is leaked.


Yes, just make sure your timing is always the same... Any little thing will leak information.

But let's be honest rubber hose cryptography is the real way to get things done.


You could do steganography with this. There's work on getting a language model to re-word an innocuous cover-text by using a minimum-entropy, key-derived distortion of the probability distribution that is used to sample words. Then, if you use the same model on the receiver side, and have the key, you can decode the covertext back into the ciphertext. This works with images, too. https://openreview.net/forum?id=HQ67mj5rJdR


Reminds me of numbers stations. Constantly broadcasting numbers around the world that mean something to someone . . . whenever they happen to mean something to someone. With full knowledge that the world's intelligence services (among others) are constantly listening too.


Some messaging protocols do this.


Is there a technical term for this?


I just found the term "traffic padding", which seems to accurately describe this.


in the world of VoIP it's just "constant bitrate"


Anti-timing attack? Anti-traffic analysis?


SSH traffic is encrypted, so to an observer, the packages look like random data already.


But size and timing of the packets can leak information, hence the mitigation under discussion here.


Nym?



Can’t get on board with a privacy tool if the first thing they ask you to do is to join their Telegram channel


Since I'm the one that said it originally, I tend to strongly agree. I really can't stand it. It's such an absolute, unnecessary respect killer.


tend to sympathize, it's slack or discord invites that trigger it for me


So Tor with a blockchain, and you have to pay for it?

> Users pay a fee in NYM to send their data through the mixnet.


To be fair, Tor costs too, it's just that someone else is picking up the bill.


Tor is not a mixnet, since it cannot delay individual packets or messages, which is a requirement for actual "mixing". Tor is onion routing.


nullsoft waste


This makes me wonder about newer terminal emulators on macOS like Warp[1], and whether they're, for example, taking all input locally and then sending it to the remote host in a single blob. I imagine doing so would possibly break any sort of raw-mode input being done on the remote host, but I'd also imagine that is a detectable situation in which you could switch into a raw keystroke feed as well.

[1]: https://warp.dev


In general once you’re connecting over SSH the connection itself is always in raw mode and then the remote host deals with its pty normally (which can be in line or raw mode). Terminals with special shell integrations usually need them installed on the remote host too (some have support that does that somewhat transparently though).

This is why mosh can have better behaviour than pure SSH over high latency connections. However this feature isn’t going to apply to mosh.


I wonder if SSH can honor line-buffered mode. It should be able to detect it, but then if it incorrectly switches to line buffering then random stuff might deadlock.


It's really hard for me to imagine that an app that markets "AI for your terminal" is going to be "more secure and private" than some standard Unix tool.

Perhaps some very specific example of a security feature (such as protecting against timing attacks) could be protected against in a new tool, and not in the older more standard one. But it seems far more likely that many other security features would get forgotten in the newer tool, and by adding "AI" so many more attack vectors would be added.

It's honestly hard to even believe in the privacy claims of warp. Almost all NLP tools in today's age seem to fall towards cloud solutions, which almost immediately makes that likelihood of privacy close to nil.


If they’re designed to take in data at some baud rate, wouldn’t the blob feed in at that rate too?


What is the threat that this mitigates?


An eavesdropper cannot see the content of your keystrokes, but (previous to this feature) they could see when each keystroke is sent. If you know the target's typing patterns, you could use that data to recover their content. You could collect the target's typing patterns by getting them to type into a website you control with a JavaScript-enabled browser, or from an audio recording of their typing. (Some online streamers have been hacked as of late using AI models trained to steal their passwords using the sounds of them typing on their keyboards).

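A toy illustration of the timing side of this (my own sketch with invented digraph timings, not a real attack tool): the packet inter-arrival gaps are the inter-keystroke latencies, and a profile of the target's typing lets you rank candidate key pairs per gap.

```python
def inter_keystroke_gaps(packet_times):
    """Packet inter-arrival times == inter-keystroke latencies (seconds)."""
    return [b - a for a, b in zip(packet_times, packet_times[1:])]

# Hypothetical typing profile: mean milliseconds between key pairs,
# e.g. harvested from a JavaScript keylogger on a page the target used.
profile = {("p", "a"): 95, ("q", "a"): 180}

def likelier_pair(gap_ms, candidates, profile):
    # Rank candidate digraphs by closeness to the observed gap.
    return min(candidates, key=lambda c: abs(profile[c] - gap_ms))

gaps = inter_keystroke_gaps([0.000, 0.102])          # observed packets
best = likelier_pair(gaps[0] * 1000, [("p", "a"), ("q", "a")], profile)
assert best == ("p", "a")  # a 102 ms gap fits the "pa" digraph better
```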

> Some online streamers have been hacked as of late using AI models trained to steal their passwords using the sounds of them typing on their keyboards

do you have any sources for that?

I've only seen this mentioned from research results recently but no real world exploitation reports.

https://www.bleepingcomputer.com/news/security/new-acoustic-...


Years ago, when I saw a paper on that topic, I tried recording my own keyboard and trained an ML model to classify keystrokes. I used an SVM, to give you an idea of how long ago this was.

I got to 90% accuracy extremely quickly. The "guessed" keystrokes had errors but they were close enough to tell exactly what I was typing.

If I could do that as an amateur in a few hours of coding with no advanced signal processing and with the first SVM architecture I tried, it must be relatively easy to learn / classify.
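For a flavor of how little machinery that takes, here's a toy stand-in (a nearest-centroid classifier instead of an SVM, and made-up 2-D "audio features" instead of real spectra):

```python
def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def train(labelled):
    """labelled: {key: [feature_vector, ...]} -> one centroid per key."""
    return {key: centroid(vecs) for key, vecs in labelled.items()}

def classify(model, vec):
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda k: dist2(model[k], vec))

# Pretend features: energy in two frequency bands per keystroke recording.
model = train({"a": [(1.0, 0.2), (1.1, 0.3)], "s": [(0.2, 1.0), (0.3, 1.1)]})
assert classify(model, (1.05, 0.25)) == "a"
```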


Also, if the goal was to guess a password you wouldn't necessarily need it to be really accurate. Just narrowing the search space could get you close enough that a brute force attack could do the rest.


https://github.com/ggerganov/kbd-audio

It's quite good at decoding my own typing, although I am a quite aggressive typist and that may help. I haven't tried it on others, though (honest, officer).


I gave that a bunch of tries over the last half an hour with longer and longer training data and it never got better than random chance.


I didn't find an article about actual hacks carried out with that technique, but here’s a HN discussion [1] from this month about a paper on the topic.

From that discussion it sounds like you need to train on data captured from the actual target. Same physical keyboard in the same physical space with the same typist.

Pretty wild despite those specific conditions. Very interested to know if people have actually been attacked in the wild with this and if the attackers were able to generalize it down to just make and model of a keyboard, or if they could gather enough data from a stream.

[1]: https://news.ycombinator.com/item?id=37013704


IIRC there is at least one paper, maybe around 2005, where they were able to determine what was being typed in an encrypted ssh session, using packet timings correlated to collected human typing statistics. Looks like this adds noise to prevent that.


Alternatively, use the SSH compression option that works on blocks of data


The original exploit concern was the use of the "Viterbi Algorithm."

http://www.cs.berkeley.edu/~dawnsong/papers/ssh-timing.pdf [2001]

The addition of ML has greatly improved the accuracy of audio decoding - use a silent keyboard in any insecure physical locale.

https://arstechnica.com/gadgets/2023/08/type-softly-research...


Basically you can analyze typing speed to make some assumptions

For example, since users tend to type their passwords quicker than other things, you could see how many keystrokes were sent in a burst and guess the user's password length when they sudo something.


A paper came out recently that uses keystroke timings+deep learning to fingerprint users (and authenticate them in this case): https://www.usenix.org/system/files/usenixsecurity23-piet.pd...

In this specific paper's use case, it's not a security threat, but you can definitely cast it as information leakage.


The timing of key stokes leaks information. Here's the 2001 paper that describes the problem:

https://www.usenix.org/legacy/events/sec01/full_papers/song/...


How much latency does this add? Latency, particularly unpredictable latency, is one of the greatest stressors in software development work.


It's right there.

> ...by sending interactive traffic at fixed intervals (default: every 20ms) when there is only a small amount of data being sent...
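Which bounds the added latency nicely (my own back-of-envelope, assuming keystrokes simply wait for the next tick): at most one 20ms interval, ~10ms on average.

```python
import math

INTERVAL_MS = 20  # default obfuscation interval

def send_delay(keystroke_ms):
    """Delay a keystroke experiences waiting for the next send tick."""
    next_tick = math.ceil(keystroke_ms / INTERVAL_MS) * INTERVAL_MS
    return next_tick - keystroke_ms

delays = [send_delay(t) for t in (0, 3, 17, 21, 39.5)]
assert all(0 <= d < INTERVAL_MS for d in delays)  # never more than one tick
```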


> Latency, particularly unpredictable latency, is one of the greatest stressors in software development work.

It took me a second, but I'm pretty sure the comment above is referring to latency in the user experience; namely, the delay between a keypress and perceived result. [1]

FWIW, tools like Mosh [2] go a long way towards reducing perceived latency. Mosh displays the user keypress as soon as it is registered locally (which happens without a perceptible delay). To indicate that it has not round-tripped, the character is shown in a washed-out color, last I checked. (Or maybe underlined?) After the round-trip completes, the character is displayed normally.

[1] If your greatest stressor in software development is the latency of your keypresses, you sound very lucky to me.

[2]: https://mosh.org


This latency is predictable by design though, no?



It also depends on the previous one for the PING/PONG messages used to simulate keystrokes and terminal echo: https://github.com/openssh/openssh-portable/commit/dce6d80d2...


I know some people do network monitoring for hands-on-keyboard shells (presumably) by measuring packet timing, I wonder if this will mess with those detections and if so by how much.


I hope that kind of thing goes the way of other corporate efforts to break/backdoor encryption for the sake of "security". IMO, it's really the wrong way to go about security. Sure it would be nice to know if some automated script is being used to log into a machine, but better design can mean that information isn't important.


This has nothing to do with breaking encryption and of all the sketchy corporate surveillance tooling that's deployed for security purposes (so say nothing of HR purposes) monitoring for shells on the network seems about as benign as it comes.


It's only benign if we don't see new policies that say "everyone must disable keystroke obfuscation so we can still spy on traffic".

If a company's security strategy relies on the ability to tell if a given stream of encrypted bytes is shell traffic, and that it can be fooled by timing obfuscation, they need a better strategy. Attackers won't care to follow a "no timing obfuscation" policy.


I've definitely encountered security teams that thrash between different broken policies. For instance, one employer simultaneously had these two policies:

- All developer laptops must be able to log into prod

- You must type a 2FA pin each time you access the test environment, and that includes nightly automation scripts.

I imagine they'd love to run a thing that detected and blocked scripted access to the test environment, but allowed it in production.

(In case it isn't obvious, I agree that corporate security teams shouldn't use strange network monitoring heuristics to interfere with common engineering and ops workflows.)


What non-malicious use case is there for this?


Network monitoring for unauthorized/unusual access. Reading more into how this works, I don't think this would actually change anything; you can probably still discern scripted vs manual shells, it would just be a bit harder.


presumably for checking compliance with a policy that forbids it


Sending a packet every 20ms seems like a lot of extra traffic.


An empty IPv4 packet is 20 bytes, and an empty IPv6 packet is 40 bytes. An empty TCP header is 20 bytes. Therefore, if you want to send a single byte over TCP, you need 41 bytes over TCP/IPv4, or 61 bytes over TCP/IPv6.

Let's call that 64 bytes/packet for a small packet.

20ms/packet = 50 packets/sec = 3200 bytes/sec = 3.125KiB/s.

For comparison, a copper-wire non-broadband modem in the early '90s ran at 33.6kbps (kilobits/sec) which worked out at 4.1KiB/s. So a packet every 20ms wouldn't even saturate 30-year old modem tech. And believe me, that was slooooooow!
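The arithmetic checks out:

```python
# 64-byte packets every 20 ms, as estimated above.
packet_bytes = 64
packets_per_sec = 1000 // 20            # 50
rate = packet_bytes * packets_per_sec   # bytes/sec
assert rate == 3200
assert rate / 1024 == 3.125             # KiB/s
assert 33600 / 8 == 4200                # 33.6 kbps modem: 4200 bytes/sec
assert rate < 33600 / 8                 # ...comfortably faster than the chaff
```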


In the early 90's I was still using 2400 baud, maybe 9600


I went from 2400 to 14.4; 9600 was the limit before trellis modulation, but IIRC it jumped from 14.4 to 33.6 rather quickly.

[edit]

After some quick googling, 33.6 wasn't standardized until 1996 (compare to 14.4 in 1991), but the manufacturers released modems ahead of the V.34 standard with DSPs so that they could be upgraded to the standard when it was available.

14.4 did catch on almost overnight in the early 90s though as the modems were no more expensive (and sometimes cheaper) than slower modems.


You kids and your high-speed 2400 baud modems. When I was dialing up, we had 300 baud AppleCats and we liked it.


I had an Atari 800, with an MPP-1000c modem. Those babies could, when connected to another modem of the same model, push the speed up to 450 baud. They were odd devices, connecting to the computer through one of its joystick ports.


I remember 56k (V90!) in the late 90s. I can't remember when 33.6k came in, though.


The 56k was only in one direction, made possible by having the ISP modem on an ISDN PRI. In that configuration the only ADC in the fast direction is the high resolution one in the modem.


But slow forced people to use their brains. Around 2002 I did WFH using some 30 kbit/s practical speed. My X11 desktop was shared pixel-accurate with decent response times over VNC.

20 years later if someone shares their code over Google Meet I see some blurred stuff. And red font takes 3 seconds to become clear.


The US hasn't really improved infrastructure since the 90s though. So yeah, it's slow, but it's also still that slow for many people.


yeah, keeping the total bandwidth used below that of a dialup modem connection was an explicit goal when choosing the 20ms default interval.


Only while you are typing and for a random time period after you stop. 50 packets per second, while I'm typing, doesn't seem like too many packets.


OK, it wasn't very clear if this was all the time or what.


I didn't read the code, but as I understood it, it was more like a frame rule ("imagine a bus stop..."), where your keystrokes will be delayed/buffered for a few milliseconds and then sent in regular 20ms interval bursts
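Roughly like this toy model (my reading of it, not the actual implementation): keystrokes landing in the same 20ms interval share one packet.

```python
INTERVAL_MS = 20

def packets_for(keystroke_times_ms):
    """Count send intervals ("bus departures") actually used."""
    return len({int(t // INTERVAL_MS) for t in keystroke_times_ms})

# Two keystrokes 5 ms apart share a frame; one 100 ms later gets its own.
assert packets_for([0, 5, 100]) == 2
assert packets_for([0, 30, 60]) == 3   # slow typing: one packet each
```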


Forge chaff packets, sure. But aggregating keystrokes for 20 ms into one packet may save data.


I can type pretty fast (100+ WPM) but I'm sure as hell not getting multiple keystrokes into a 20ms window


Guess that'd depend on how big the packet ends up being.


TCP has an overhead of 20 bytes. I'm not sure how much openssh adds, but if it's just a keystroke I can't imagine it'd be over 64 bytes. Add those together and multiply by 50 packets per second (20ms between each packet), and it works out to a whopping 4.2kB/s.
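Spelled out:

```python
# 20-byte TCP header + a generous 64-byte SSH record, 50 packets/sec.
per_packet = 20 + 64
rate = per_packet * (1000 // 20)
assert rate == 4200  # bytes/sec -- the "whopping" 4.2 kB/s
```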


Does mosh do something similar? It seems like that'd be way more effective in a protocol that's already much more tolerant of random latency spikes already.


It's a nice security feature, true

Though I'm curious how the project keeps working with CVS in $year. I wonder if everybody just uses git cvsimport and forgets about CVS most of the time.


“No mention of openbsd on the internet is complete without a long thread about source control migration.” — tedu@



All of these reasons boil down to "if it ain't broke" and "that's what we're used to".

Switching VCS for a project of this size is always complicated and OpenBSD devs are famously "old school" and conservative with their software choices.

I used to use CVS before switching to SVN and later DVCS like Mercurial and Git. The claim that "it is unlikely that any alternative versioning system would improve the developer's productivity or quality" is absolutely laughable IMO.

This is especially true nowadays where CVS support in modern tooling, if it even exists, is usually legacy and poorly maintained.


> All of these reasons boil down to "if it ain't broke" and "that's what we're used to".

"Works for us". Which is a pretty good argument.

> The claim that "it is unlikely that any alternative versioning system would improve the developer's productivity or quality" is absolutely laughable IMO.

Why is it laughable exactly? I mean, for me I can't use CVS due to the lack of atomic directory checkins, but if they don't need them, or they already have a system in place which may tie in even better with their development/release style than any generic VCS could, why bother?

> This is especially true nowadays where CVS support in modern tooling, if it even exists, is usually legacy and poorly maintained.

You make that sound like a disadvantage...


I think the plan is for OpenBSD to switch to got https://gameoftrees.org/ when it is ready.


The explanation all makes sense. But the key line of “we all know cvs” is effectively exclusionary to all the other developers in the world who don’t use cvs. At some point they will need new talent which will be harder to get.


If you know git or any other version control then using CVS really isn't that hard; many commands are similar.

And everything is exclusionary to someone. Pure git email workflow? Exclusionary to people who find it hard/difficult, or use email in a different way (e.g. only gmail web UI). GitHub Pull Request workflow? Exclusionary to people who don't want to run non-free JS, or don't want to use "Micro$oft GitHub", or don't like using web interfaces.


Accusing one of the first pioneers of Open Source of being "exclusionary" has got to be a joke.

In many ways, they were there first.

I really don't understand why there's such a tendency to demand "monolithic social networks" even in open source software development. Connecting to people is great when feelings are mutual, but we don't even have a right to be left alone without being accused of being anti-social?


Based on that rationale, anyone using Typescript is being exclusionary to developers who don't know Typescript.

They picked a system that suits the project's workflow, is well documented, and has a relatively low learning curve for anyone interested. I doubt cvs would be the main turnoff for someone looking to be an OpenBSD developer.


> that suits the project's workflow, is well documented and a relatively low learning curve for anyone interested

Maybe well documented, but "suits the project flow" and "low learning curve", absolutely not

(I mean, ok, maybe "low learning curve" if you're developing a very simple UNIX project in the 90s)

CVS is one of the tools where I literally never look back and say "ok this was nice". SVN and Mercurial something here and there. CVS? Never


That argument would be more analogous if you picked say CoffeeScript. The point is it’s something that used to be reasonably popular but for reasons the vast majority of the world has moved on from.


CVS isn't hard to learn; it's not a barrier for someone who's interested in working on OpenBSD in the first place.


CVS isn't "hard to learn", devs worthy enough to make meaningful contributions to OpenBSD can probably make sense of it in less than a day. It's just... extremely anti-ergonomic given the other options we have today.


Git's popularity only exploded around 10 years ago at most. CVS is more than 30 years old. Do the math.

There is reason Perforce, as crap as it is, is still as popular as it is.


>There is reason Perforce, as crap as it is, is still as popular as it is.

Vendor lock-in? I hated perforce where I used it, but it was mandatory-ware.


It seems you know the project, do you have an idea how they are financed for so many years ?


Mostly donations. https://www.openbsdfoundation.org has some overviews.

It's a fairly small project and doesn't have too many costs, relatively speaking.

Previously the main income was from selling CD sets (they intentionally limited the web download options), but they stopped doing that about 15 years ago or so.


Thanks, very clear :)


Makes you wonder: why do people still use password authentication with SSH?


Passwords are sent all at once from the client to the server. This feature is for obfuscating your keystroke timing within the encrypted connection.


Even if you don't use password authentication you may still type sensitive information in a SSH session. For example, password when using sudo.


Or any form of authentication for that matter, e.g. AWS.


When I ssh to a device it asks me for a user name and password. Thats probably why.


They are saying, "why don't you use public key cryptography to create an identity on the remote machine?".


I don't see that option when I ssh to a machine. If you want better defaults then offer them. I was being deliberately obtuse, but barely.


I recommend you do set up key authentication. You'll get more convenient logins and better security. This page should document how to do it: https://www.ssh.com/academy/ssh/copy-id


I suppose what the parent is saying is that's not the default. Fresh install of anything is still password and GitHub key import isn't a panacea.


I don't see how ssh-copy-id has anything to do with Github.


Which is commonly deployed via password authentication


For 40% of global use cases there’s probably little to no risk.

The rest is probably a mix of good, bad, ‘just enough’.


A central aim of SSH is confidentiality. There's a lot besides passwords that you can deduce with traffic analysis, especially if you can correlate with other observed events.


How else would I upload my public key?


A service may provision an account with a provided ssh public key, so that you never log in with a password, even once.

It's sort of a chicken-egg problem though, presumably you do have a password somewhere along the line, such as in a portal where you created your account and uploaded your public key.


I'd say there are more valuable things you can do to improve security than solving the problem of "having to ssh in with a password one time to upload a key"


Maybe. Not having a password on the server eliminates all the risks associated with weak or leaked passwords. And then you can configure SSH to reject password logins altogether. It's not an insignificant benefit.
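For reference, the server-side lockdown is a couple of sshd_config directives (a sketch; check sshd_config(5) for your version's exact spelling):

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
```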


I'd say there are more valuable things you can do to improve security than solving the problem of "having to ssh in with a password one time to upload a key, then updating the config to reject password logins".


If you can't securely ship a public key to a fresh machine, then how can you trust the software running on that machine?


SSH password login is secure. Keys are preferred since you can't have asdf1234 as a key, but if you as the initial person to set up the server are the only one allowed password login and use a decent password, you're fine


The correct answer is using client certificates, but they're a great deal of pain to set up compared to "ssh-copy-id" (or using username/password!)


...Key-distribution is to encryption systems as cache-invalidation is to computer science. Both of which are subforms of the ur-problem of signal-propagation, which itself stems from the physical principle of causality.

Only way through it is to shut up and do it, sadly.

The implementation details of doing it are often either A) have physical possession of the computer, and do initial insecure setup within a "secure realm" you control, or B) redefine your "secure realm" to include the hardware being in someone else's possession, and do what they tell you and pray they are trustworthy.


This is irrelevant for SSH password authentication. The obfuscation is for the session _after_ authentication.


For real; you can even make sudo work with SSH_AGENT. Add hardware key and it's pretty nice setup.


Convenience. Keys are more work.


how are keys more work vs password?


something-you-know auth is generally less work than something-you-have auth, since you need to ensure you always have the key handy whenever you would want to log on.


When I started reading this sentence I thought you had them backwards because I was thinking "something I have" as being a public/private key pair for an arguable definition of "have", so when I hit your comma, the confusion was fixed. But now I'm not so sure I was wrong.

I hate having to type my password multiple times in the morning for work, but only partly because of the 2fa on my phone that goes with it. If my computer could just detect my phone being nearby (indicating my presence) that would be great. Then something-I-have would actually out-convenient something-I-know.

Don't push-button-start cars kinda do this with the key fob? Why are computers lagging behind cars in tech? Usually it's the other way around.



