
"I hope to see the industry improve in this respect, but in the meantime I'm happy to exploit this imbalance as a competitive advantage."

I wonder how much of requiring a computer science degree is simply because the people hiring have computer science degrees. Seems like a self-perpetuating insiders' club.


I have a CS degree and I wouldn't value it at all when judging a candidate for a web/mobile dev job.

It's interesting that this story appears at the same time as an article about how memory games designed to improve your intelligence only improve your memory within that particular game, not in broader contexts.

Chess masters are no better than beginners at memorizing chess boards with a more random arrangement of the same pieces.[0]

Expertise in humans seems to be highly specialized.

So why do we expect the algorithmic knowledge of a CS degree from people writing modern CRUD web apps?

It's ridiculous.

[0] https://psy.fsu.edu/faculty/ericsson/ericsson.mem.exp.html


Conversely, I don't have a CS degree and I take a rather cynical view of candidates who do. Perhaps I've created a self-perpetuating outsiders' club at my company. :)

In all seriousness, though, it's because the CS majors I've interviewed have (generally) been great at theory, crap at application. I once had a guy hold forth for 10 minutes about cardinality and then break into a flop sweat when I asked him to explain the difference between GROUP BY and ORDER BY.
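For anyone who wants that distinction spelled out, here's a minimal sketch using Python's built-in sqlite3 (the orders table and its data are made up for illustration):

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
  db.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 30), ("bob", 10), ("alice", 20)])

  # ORDER BY sorts the rows; you still get one row per order.
  print(db.execute(
      "SELECT customer, amount FROM orders ORDER BY amount").fetchall())
  # -> [('bob', 10), ('alice', 20), ('alice', 30)]

  # GROUP BY collapses rows into one per group, usually with an aggregate.
  print(db.execute(
      "SELECT customer, SUM(amount) FROM orders GROUP BY customer").fetchall())
  # -> [('alice', 50), ('bob', 10)]  (group order isn't guaranteed)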


I am also not impressed by a lot of CS majors I interview. I often ask them to come up with a simple sort algorithm where performance doesn't matter or to reverse an array. A lot of them have problems with that. I am very tempted to try fizz-buzz...
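For reference, the bar I'm describing is roughly this, in my own Python sketches (a selection sort since performance doesn't matter, an in-place reverse, and fizz-buzz):

  def simple_sort(xs):
      # Selection sort: O(n^2), but fine when performance doesn't matter.
      xs = list(xs)
      for i in range(len(xs)):
          m = min(range(i, len(xs)), key=xs.__getitem__)
          xs[i], xs[m] = xs[m], xs[i]
      return xs

  def reverse(xs):
      # Swap elements from both ends toward the middle.
      i, j = 0, len(xs) - 1
      while i < j:
          xs[i], xs[j] = xs[j], xs[i]
          i, j = i + 1, j - 1
      return xs

  def fizz_buzz(n):
      for i in range(1, n + 1):
          print("FizzBuzz" if i % 15 == 0 else
                "Fizz" if i % 3 == 0 else
                "Buzz" if i % 5 == 0 else i)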


I never thought of that, but it seems very possible.


As someone with a computer science degree, I can say I've never questioned it being on my company's job postings. But when I think about it, I'd rather see a candidate with an excellent GitHub account than a degree.


But that means you're heavily biased against programmers who may be very capable and successful but don't work on open source projects. That reduces the catchment size considerably.


I also agree with the article's author about giving candidates a take-home project to work on. I'm just saying that a computer science degree and a 10-minute chalkboard test are a pretty terrible way to make new hires.


How long of a take-home? In my current job search, I think I've finally hit my limit of overly burdensome take-home exams that significantly disrupt the childcare and exercise routines I depend on. Any take-home that takes more than 2 hours of my time should require the company to compensate me at a fair hourly rate for completing it; otherwise, given how many companies do not place this asymmetric burden on candidates, it's just not worth it.

I also think many of the same biases and inefficiencies that haunt terrible whiteboard hazing interviews still apply to take-home tests. If you say something like, "write this as if it's going into production," it's a disaster -- whatever qualifies as good enough for production is a subjective opinion that varies from organization to organization -- and often varies considerably within an organization too.

A great programmer pressed for time might slap something together and add notes in a write-up about the very different way they would approach it if they were being paid a wage, given quiet working conditions, and tasked with it in the regular setting of a job. They would likely get rejected anyway, despite being a great programmer.

Meanwhile, a candidate coming from a situation with hardly any time constraints, perhaps taking time off between jobs or on a university break if still in school, might spend 20 hours on something that should only take 4, polish the entire thing, write a full test suite, and implement extra features. They might be more likely to be hired even if they are an inferior programmer, simply because they had significantly more free personal time to fit it in.

It can even be prohibitively hard just fitting in round after round of technical screens and 1-hour phone calls that probe your experience or ask you to solve short-form problems, especially if you're interviewing with many places and they all want you to do that.

Making it take-home sounds better in theory, but there still are a ton of failure modes that people don't adequately account for.


Excellent GitHub != good programmer


This is the company the FBI supposedly contracted (http://www.cellebrite.com/Pages/cellebrite-solution-for-lock...). If that's the case, it's just older devices that don't have the Secure Enclave, which is not surprising.


This bummed me out as well, but it's worth noting if you have an Apple Watch with a barometer in it.


Most retailers accept them through companies like Coinbase or BitPay, but those services immediately convert to USD, so the retailer's exposure to a bitcoin hack is minimal.

The reason they do this has more to do with the fluctuating price, though: they can't risk the value dropping after they sell something.


SHA-256 is used for mining, but a break there would only expose the new coins being generated. If there were a vulnerability there, the community would switch to a new mining algorithm to secure new blocks.

Addresses are protected by hashing a public key down to the bitcoin address (https://en.bitcoin.it/wiki/Technical_background_of_version_1...). If you've always sent coins to new addresses, as hierarchical deterministic wallets do, then your public key is only exposed after you spend the coins, so you are not susceptible to a cracking attempt on the public key.
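For the curious, the hashing step looks roughly like this in Python. This is a sketch only; it assumes your hashlib build exposes ripemd160, and it skips the version byte and Base58Check encoding that produce the final address:

  import hashlib

  def hash160(pubkey: bytes) -> bytes:
      # An address is derived by hashing the public key twice:
      # SHA-256 first, then RIPEMD-160.
      sha = hashlib.sha256(pubkey).digest()
      return hashlib.new("ripemd160", sha).digest()

Until you spend from an address, only this hash appears on the chain, which is why address reuse is what exposes the public key.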


> If there were a vulnerability there, the community would switch to a new mining algorithm to secure new blocks.

This would be extremely interesting to watch, as a new algorithm would render all (or at least most) existing miners useless; they use SHA-256-specific ASICs. The economic and control fundamentals of Bitcoin completely change if you change the hash algorithm.

Not to mention it takes non-zero time to choose a new hash, make the code changes, and deploy them worldwide. Even if you somehow do that in hours or days, what happens in the meantime? Bitcoin would be entirely unsafe to use, and anyone using it as their primary financial store had better hope they have some food stockpiled (and bills paid)!

Luckily, I've not heard anyone question the math behind Bitcoin's proof-of-work algorithm, so I think this possibility is exceedingly unlikely; the math is well understood.


Breaking Bitcoin's hash algorithm would require a feasible pre-image attack on SHA-256. Such a thing would have far greater consequences than forcing Bitcoin to change its proof-of-work algorithm. Tons of protocols and applications would be rendered insecure. We'd all be scrambling to fix TLS, SSH, IPSec, GPG, and dozens of other key pieces of software. I'm not sure how Debian-based systems would upgrade, since deb packages are verified by SHA-256. In short, it would be utter chaos.


Every system you mentioned other than Bitcoin already supports alternatives to SHA-256. There'd be a scramble for sure, but no different from the current steady stream of OpenSSL vulns. Probably less of a scramble than many OpenSSL vulns, in fact, since SHA-256 could often be disabled via simple config file changes.

Bitcoin, on the other hand, would have to fundamentally change because of miners with ASICs. The code change might be minimal, but the economic, political, and psychological impact would be huge. For miners to lose their entire infrastructure investment in ASICs could cause an unrecoverable drop in hash rate, since difficulty only recalculates every 2016 blocks (about two weeks at normal hash rate).
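To put rough numbers on that, here's my own simplification of the retarget rule (the 2016-block period and 4x clamp are protocol constants; everything else is illustrative):

  def retarget(difficulty, actual_seconds):
      # Difficulty adjusts every 2016 blocks, targeting 10 minutes per block.
      expected = 2016 * 600
      ratio = expected / actual_seconds
      ratio = max(0.25, min(4.0, ratio))  # clamped to a 4x change per period
      return difficulty * ratio

The catch is that the period is measured in blocks, not days: if 90% of the hash rate disappears mid-period, the remaining blocks arrive 10x slower, so the next adjustment could be months away rather than two weeks.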


This is why SHA-3 exists. If it starts looking feasible that SHA-256 will be broken, we already have alternatives. We've already seen TLS et al. migrate away from hashes like MD5 and SHA-1 over the past few years. Pretty much all the other systems have migration paths in place in case one part of the validation chain is cracked. Bitcoin doesn't, and given the fractured state of the community, building one would be difficult at the very least.
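At the library level the swap really is tiny; e.g., in Python's hashlib, which has shipped SHA-3 since 3.6:

  import hashlib

  data = b"message to verify"
  print(hashlib.sha256(data).hexdigest())    # today's default
  print(hashlib.sha3_256(data).hexdigest())  # drop-in SHA-3 alternative

The hard part is everything around the call site: negotiating the change across deployed systems, which is exactly where Bitcoin's fractured community would struggle.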


Yeah, I just pulled the title from the GitHub issue.

I got it and liked the image for the project; I just think it's a little crazy that this is a controversial issue, and I wanted to see what Hacker News thought.


The biggest issue might be copyright on the image. I assume it is from one of her music videos. Can they claim 'fair use' without getting a C&D letter to take it down?


A counterargument I'd give: has anyone ever described a TV as being too harsh, as in they can't watch it for an hour without experiencing "fatigue" that forces them to stop? Of course, with a TV this sounds ridiculous; we just want the most lifelike picture possible, so these measurements are a great way to accomplish that.

With headphones, though, something may sound more accurate and still not be enjoyable to listen to. It can be too harsh on our ears and cause listening fatigue. While the Sennheiser HD800 are a great pair of headphones, I find them very difficult to listen to on a neutral amp for long periods of time. The only way I could really enjoy them was to get a warmer tube amp. In the end I replaced them with the Audeze LCD-3, which I would argue are less accurate headphones but are far more enjoyable.


Heh... Folks use software like f.lux to purposefully distort their displays in order to reduce fatigue.

The ability to adjust the output to one's liking should not be confused with an inability to faithfully reproduce input.


That's a pretty good comparison. In both cases you are adding warmth to a source, one via software and the other via an amp.


In Pomodoro, if you get interrupted to the point where you need to stop what you're working on, you're supposed to stop the timer and start over when you can.

It forces you to decide whether interruptions are actually worth dealing with now versus putting them off until the timer is up.
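A toy Python sketch of that rule, with Ctrl-C standing in for an interruption (the 25-minute length is the usual convention, nothing more):

  import time

  POMODORO = 25 * 60  # the conventional 25-minute pomodoro

  def run_pomodoro():
      start = time.monotonic()
      while time.monotonic() - start < POMODORO:
          try:
              time.sleep(1)
          except KeyboardInterrupt:
              # An interruption voids the pomodoro: no pausing, start over later.
              print("Interrupted -- this one doesn't count.")
              return False
      print("Pomodoro complete. Take a break.")
      return True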


Good point, but by itself restarting the timer isn't much of a deterrent (at least to me). It makes me wonder whether a better incentive might exist, so that you could pause the timer but with a weightier tradeoff.


Plex has a feature called Direct Stream that supports this use case:

https://support.plex.tv/hc/en-us/articles/200250387-Streamin...


I haven't used it but I did attend the talk on it at the AWS conference when it was announced.

It is not using DynamoDB under the covers. It basically uses a fork of MySQL for the front end and a storage layer built from scratch on the backend. It does use concepts from the Dynamo white paper for redundantly storing the data to disk, which removes the need for MySQL-style replication. But it's not using the DynamoDB service currently offered by Amazon.

