
Is there a writeup describing the exact timing side channel? The advisory states that the vulnerability affects all RSA padding modes, which seems to imply non-constant-time BigNum operations. However, OpenSSL implemented RSA blinding even before the fix, which is supposed to prevent that class of problem. So this should be interesting :-)
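
For anyone unfamiliar with blinding, a minimal sketch of the idea (Python, toy key, names are mine; not how OpenSSL actually implements it):

    # Toy sketch of RSA blinding (tiny numbers, no padding -- illustrative only).
    # The point: the secret exponentiation operates on a randomized value, so its
    # timing is decorrelated from the attacker-chosen ciphertext c.
    import random

    def blinded_decrypt(c, d, e, n):
        while True:
            r = random.randrange(2, n)
            try:
                r_inv = pow(r, -1, n)  # r must be invertible mod n; retry otherwise
                break
            except ValueError:
                continue
        c_blinded = (c * pow(r, e, n)) % n  # blind: c' = c * r^e mod n
        m_blinded = pow(c_blinded, d, n)    # the sensitive exponentiation
        return (m_blinded * r_inv) % n      # unblind: m = m' * r^-1 mod n

    # toy keypair: n = 3233 (61*53), e = 17, d = 2753
    assert blinded_decrypt(pow(42, 17, 3233), 2753, 17, 3233) == 42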

(I did find the commit fixing it, but it's huge, and I can't follow the change: https://github.com/openssl/openssl/commit/b1892d21f8f0435deb...)


Moving away from obsolete crap isn't the solution, it's the definition of the problem.

One could argue that the CA/Browser forum has achieved some success with moving away from SHA-1. As a spectator, I don't understand why this process is not repeated for similar obsolete primitives or standards.


I read a blog post by a guy with long experience with this. What happens is that large players demand that there be a 'reasonable' deadline for compliance. And then half the companies involved sit on their hands for two and a half years and then demand an extension. And then another, and the next thing you know, you're still using RSA fifteen years after people knew they needed to stop using it.

Only solution I can think of is to create some sort of license where once the sunset deadline is established, the license to use it expires hard on the deadline.


That's very interesting, do you happen to have a link for the blog post?



Thanks, that would be the one. I get the feeling that encryption protocols and standards often end up in all sorts of dank corners of the web infrastructure, and finding and updating all of these is a really messy task. And I suspect service providers and their customers haven't been really good at keeping track of everything.


Fascinating. I still feel I'm missing something basic here: If Microsoft, Google and Mozilla announce they're not going to accept any particular crypto primitive two years from now, and this time there won't be any exceptions, CAs and websites just have to abide, don't they?


The browsers say what they accept, the server says what it provides, and something in the intersecting set will be used.

If (as a random example that didn't annoy me at all for 2 years) a website also needs to support SmartTV devices which only accept obsolete certificates, then your server either has to break those devices or keep serving the obsolete option.
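
To make the intersection point concrete, a toy model of the negotiation (suite names here are illustrative; real TLS negotiation also covers versions, signature algorithms, and more):

    # Toy capability negotiation: the outcome is confined to the intersection
    # of what the client offers and what the server supports.
    def negotiate(client_offers, server_supports):
        # Server preference order wins here, as is common in TLS stacks.
        for suite in server_supports:
            if suite in client_offers:
                return suite
        return None  # empty intersection: handshake failure

    modern_browser = ["TLS_AES_128_GCM_SHA256", "ECDHE-RSA-AES128-GCM-SHA256"]
    old_smart_tv = ["RSA-RC4-SHA"]  # only an obsolete option

    # Server keeps the obsolete suite around so the TV still works:
    server = ["ECDHE-RSA-AES128-GCM-SHA256", "RSA-RC4-SHA"]
    print(negotiate(modern_browser, server))  # ECDHE-RSA-AES128-GCM-SHA256
    print(negotiate(old_smart_tv, server))    # RSA-RC4-SHA; drop it and the TV breaks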


Then a bunch of big companies announce they'll use another browser to be able to keep using it.


Another browser besides Chrome, Firefox and IE? OK, so Symantec announces that they will only use Opera. Even then, they have to deal with their customers (website operators who need a certificate trusted by the big 3 browsers) leaving. In fact, now that Let's Encrypt certificates are free, this seems like the Symantec CA's worst nightmare.


Not CAs, but clients like banks.


Consider another explanation: if you use the term "rape culture", you essentially claim that in today's society, rape will oftentimes be accepted (FWIW, I agree with this statement). However, there appears to be a consensus in today's society that genocide is always a bad thing; generally that opinion is expressed without any qualifiers (as opposed to qualifiers like "legitimate rape", etc.).


As this is the second time I've seen companies complaining on HN about the short deadline, and there seems to be a consensus that transitioning to another payment provider realistically always takes more than 5 days, you might consider a longer default deadline that makes it possible to transition in time. Just my 2 cents.


Both the pitch here and the site seem a bit light on endorsements from companies which have previously hired your interns, although I'm guessing you have at least a few good endorsements on file? Maybe consider giving visibility to a few good "reviews" from companies. Just my 2 cents.


Found myself in similar situations in the past. My tentative solution was to maintain a corpus of resources for practice (beyond the usual suspects of implementing the standard data structures etc.), and hope that when the chance comes, 1-2 hours each day for a week before an interview would suffice. And when I do practice, I don't use a code editor: either paper, or a plain editor with no way to execute the code, simulating the (horrible) conditions of interviews as best I can.

Obviously, this doesn't get you the benefit of being constantly drilled, but should provide some advantage. If you do care enough to get yourself a bigger advantage, maybe try to clear a few weeks before a round of job searching (if that's how you operate), then use those weeks for a few hours of practice each day.


Thanks!

I'm probably missing something about your "handing me your private keys to unlock that 1c output. Now if you ever released Transaction 1, I can spend both the outputs". What happens if you release transaction 1, then immediately release a transaction moving the funds away from that address? It seems like I'd have to be very vigilant and closely monitor the blockchain, fearing this will happen?


It requires dual signatures. One party has them both, the other doesn't: the one without has to use the pre-signed 1-day-locktime transaction.
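
A conceptual sketch of that asymmetry (structure and names are mine, not actual Bitcoin script; it only illustrates who can broadcast what, and when):

    # Conceptual model: who holds which signatures in a 2-of-2 channel close.
    from dataclasses import dataclass, field

    @dataclass
    class Tx:
        spends: str
        signatures: set = field(default_factory=set)
        locktime_days: int = 0  # 0 = broadcastable immediately

        def broadcastable(self, now_day):
            return self.signatures == {"alice", "bob"} and now_day >= self.locktime_days

    # One party holds a fully signed close: usable at any time.
    alice_close = Tx("channel_funding", {"alice", "bob"})

    # The other holds only the pre-signed 1-day-locktime version: closing
    # unilaterally means waiting, which gives the counterparty time to react.
    bob_close = Tx("channel_funding", {"alice", "bob"}, locktime_days=1)

    print(alice_close.broadcastable(now_day=0))  # True
    print(bob_close.broadcastable(now_day=0))    # False -- must wait a day
    print(bob_close.broadcastable(now_day=1))    # True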


Interesting - can you please elaborate on those different cognitive skills?

(also, both your plight and the efforts you put towards getting out of it sound very serious - best of luck to you)


Learning how to perceive emotional states in greater detail/resolution both in quality and in change over time is a big one. So is being able to observe their effect on your behavior and if possible to intervene and do something healthier. There are also things that map very well to neurological processes that are known to get completely wrecked during addiction, such as the ability to put your prefrontal cortex in charge and pursue deferred rewards.


A few random thoughts/questions:

- This sounds similar to the incast problem which occurs in datacenters, but this happens on consumer Internet - cool.

- After reading both this post and the TCP/NC paper, it seems to me that TCP/NC is unfair to vanilla TCP. If all TCP/NC did was send the same number of packets, just "more sophisticated" encodings of the original data, link utilization would be the same. So apparently TCP/NC is more aggressive than vanilla TCP, and that's fine, but I think it should be acknowledged (haha). When they say things like "TCP doesn’t see the packet loss, and as a result there’s no need for the TCP senders to reduce their sending rates", it's a bit unclear what they mean: you could just as well modify the TCP stack to ignore the packet loss and not reduce the sending rate, without network coding.

- Why not use TCP termination? You could install a performance-enhancing proxy at the Sat gate and make sure the link is always 100% utilized (a rough sketch of what I mean appears after this list).

- "Let’s increase the queue memory" - I thought this should theoretically work. See for example http://yuba.stanford.edu/~nickm/papers/sigcomm2004.pdf. If folks familiar with the apnic effort are reading, I would love to know if they tried such measures and what happened.

- Could CoDel improve the situation here?
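
On the TCP-termination point above, a toy split-connection relay (addresses are placeholders; a real PEP would also tune socket buffers and congestion control per leg):

    # Toy split-TCP relay: terminate the client's connection locally and open a
    # separate connection across the satellite hop, so each leg runs its own
    # congestion control loop.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)           # client-facing side
    UPSTREAM_ADDR = ("upstream.example", 80)  # far side of the satellite link

    def pump(src, dst):
        # Copy bytes one way until EOF, then half-close the other side.
        while data := src.recv(65536):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)

    def handle(client):
        upstream = socket.create_connection(UPSTREAM_ADDR)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        pump(upstream, client)

    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()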
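
And the buffer-sizing arithmetic from the paper linked above, which argues the classic bandwidth-delay-product rule shrinks by a factor of sqrt(n) with n desynchronized long-lived flows (the link numbers are my guesses for a GEO satellite hop):

    # Rule-of-thumb router buffer sizing, per the linked SIGCOMM 2004 paper.
    from math import sqrt

    C = 100e6 / 8  # assumed link capacity: 100 Mbit/s, in bytes/s
    RTT = 0.600    # assumed GEO satellite round-trip time: ~600 ms
    n = 1000       # assumed number of concurrent long-lived flows

    classic = C * RTT            # classic rule: one bandwidth-delay product
    revised = C * RTT / sqrt(n)  # the paper's revised rule for many flows

    print(f"classic BDP buffer: {classic / 1e6:.1f} MB")  # 7.5 MB
    print(f"revised buffer:     {revised / 1e6:.2f} MB")  # 0.24 MB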


Their phrasing appears to state that they use TCP/NC, a network-coding-based variant of TCP introduced in an academic paper by (some of) the same people behind this initiative:

http://arxiv.org/pdf/0809.5022.pdf

