Hacker News | ruswick's comments

I see that you are only targeting the big companies. This is really smart, since those companies tend to pay better and their employees will have a bunch more cash to throw at their house, leading to a bigger commission.

What are your plans for adding new companies? Are you going to branch out into firms that might not pay as well? Stick with the "top 1%" companies but in different cities?

Real estate service that focuses on tech people seems lucrative. Good idea all around.


You can use the search box to find your company if it's marked on google maps. It'll try to pull an image too using the Clearbit company logo API [0].

The list is crudely sorted by number of page views so it'll tend to just show the big companies.

[0]: http://blog.clearbit.com/logo


One reason is that this site only seems to have listings for the "top companies." These companies are very hard to get a job at and tend to pay well, so the home listings are probably skewed to suit buyers who work at the elite companies and have more money to blow than a normal person.

I bet that if and when this site begins targeting the other 99% of companies, the listings for those will trend lower to reflect the average salaries.


I've had a similar experience. A company I was trying to intern at asked me to write a rails app for them before they would think about giving me an interview. I wrote the app (which all-in took maybe 10 hours) and sent the recruiter a link to the heroku instance the app was running on and the repo on github.

I never got a response. I think three months later I might have gotten a one-liner saying they had gone in "a different direction" or some bullshit like that.

It's insulting and ludicrous for companies to treat prospective employees so disrespectfully.


Yep. Flexbox is maybe the best development to happen to CSS in a decade. It turns layouts that were once impossible without JS into trivial three-lines-of-CSS solutions.


That's a pretty common thing. I don't see the problem.


Docker addressed this last week at DockerCon, announcing Docker Notary for securely publishing and verifying content.

https://github.com/docker/notary


It being a common thing is the problem. It's teaching insecure habits.


OK, but I wonder how many people actually bother to check checksums after downloading binaries?


Many do, likely without even realizing it. It's common functionality in Linux package managers.
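For a manually downloaded binary, the equivalent check is short. A sketch (the files here are local stand-ins for a real download and the vendor's published SHA256SUMS file):

```shell
# Stand-ins for the real artifact and the vendor's published SHA256SUMS.
printf 'echo hello\n' > installer.sh
sha256sum installer.sh > SHA256SUMS

# sha256sum -c exits non-zero if any listed file fails to match,
# so it's safe to gate execution on it.
sha256sum -c SHA256SUMS && sh installer.sh
# prints "installer.sh: OK" followed by "hello"
```

Note this only helps if the checksum file comes from somewhere other than the server hosting the download; otherwise an attacker who swaps the binary just swaps the checksum too.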


What do you think about this solution: introduce a layer on top of SSL that just verifies whether the private key of a certain explicitly stated site has signed a file?

In other words, compromising the server wouldn't be enough, because that doesn't give you the SSL key, so it would still fail "curl|is_signed_by site.com|sh", which they can only pass if they compromise the private key?

Better than the current system?


> compromising the server wouldn't be enough, because that doesn't give you the SSL key

But it does. The server needs to have the SSL key to be able to serve requests over HTTPS.

It may be encrypted with a password, but at that point you're severely degrading your integrity assurances (compared to offline executable/archive signing). Might as well do it right with offline signing, right off the bat.


Interesting. What if a script just uses the SSL infrastructure to get the private key associated with a domain name, without actually needing anything at that domain name to come over SSL? Then the private key would not have to be live/online at all, but could be used to verify the shell script. This is getting complicated, but if the infrastructure is there, it should be possible to use it.

Personally I think curl of an https URL is not the worst thing in the world.


I think you need to read up a bit more on how asymmetric key cryptography works :) Verification is done using the public key, the private key is used to sign something. That's why it's so useful. This is a good read if you want to learn more: https://www.crypto101.io/

Basically, the separation between 'server serving the downloads' and 'machine signing the release' is intentional, and should be maintained. Consider it an 'airgap' of sorts, although it usually isn't one in the strictest sense of the word.

Making release signing depend on the SSL infrastructure (which is already rather broken in a number of ways) in any way, is a bad idea. Verification is a different story, but secure code delivery is a hard problem anyhow.


I understand exactly how it works. How do you get a code-signing paradigm down to something as simple as curl | sh though? (Well not as simple, but still a human-readable one-liner that works on nearly all Linux systems.)

I thought maybe a single-line invocation might piggy-back on SSL as follows:

- get a server's public key that is not online or able to answer requests (because if it were it couldn't be airgapped)

- but still use the key to verify the script that's downloaded from the server that is online.

- only pass the code to sh if it was properly signed by the offline server.

Then the offline server could be "https://key.meteor.com" and the private key wouldn't have to be anywhere but an airgapped machine.

I don't know if there is more of the SSL infrastructure that I'm missing though (I'm not an expert) or if this could practically be reduced down to a tamper-evident one-liner (a la curl https://install.meteor.com | sh). It would be a marked improvement over just passing anything from a potentially compromised server straight to bash though!


> How do you get a code-signing paradigm down to something as simple as curl | sh though? (Well not as simple, but still a human-readable one-liner that works on nearly all Linux systems.)

You don't, really. Not currently anyway. Retrieving a binary/archive and doing out-of-band verification are two logically separate steps.

The problem with your suggestion is that SSL is about transport security. It verifies that you are talking to the right server, but does not provide any guarantees beyond that.

It's not really possible to shoehorn release signing into that, without additional infrastructure.

It doesn't matter how you combine things - the server and the signing system are (and should be!) two separate entities, and you cannot rely on the server to tell you who the signing system is (because that'd give you no better security than not having a signing system at all).

> It would be a marked improvement over just passing anything from a potentially compromised server straight to bash though!

It wouldn't, because as far as I can tell, you're still relying on the server to tell you what the correct release signing key is. How else would you obtain it?


Thanks - drop me a line and I'll reply; this thread is getting old and deep. Thanks for your thoughts, and I hope you do write.


That's exactly what Docker Notary is.


[flagged]


I am downvoting them because shell piping is not relevant to the Meteor 1.2 announcement, this topic has been discussed on multiple occasions, and they offer no alternative.


Sure it's relevant.

They're announcing "hey we have these new features" and I'm saying "hey look, they still haven't fixed this big glaring problem that is very much relevant to how their software is used".


Downvoting because it's a non-issue. If I'm copying and pasting something from the web into my terminal, it's because I trust the source.

It's not any less secure than downloading a tarball and running scripts inside of it. That's equally insecure and people have been doing that since the dawn of the internet.


Hack their webserver, replace the contents of https://install.meteor.com/ with malware, instantly pwn anyone who pipes that to their shell.

Worse: the people who are most likely to curl|sh are DevOps folks with the keys to their company's kingdom.


What do you want them to do? The obvious solution is to change it from "curl|sh" to "curl|{something about whether PGP says this is properly signed by the private key belonging to public key blahblahblahblahbalhMETEOR.COMkey. If yes:}|sh"

But the problem is anyone compromising the site can just change the line from "blahblahblahblahbalhMETEOR.COMkey" to "attackerchangedblahblahblahblahbalhMETEOR.COMkey" right on the web page, and people will copy the one verified against the wrong key. So that doesn't work.

Nor do clients have caches of PGP signatures, nor is there some totally obvious third-party that you can verify it with. You can't just go:

curl|{check_if_signed_with_www.this-site.com}|sh (which would pass visual inspection - the attacker would have to change www.this-site.com to something else) because there is no obvious mechanism to do that. Who will tell you whether https://install.meteor.com/ has signed it?

Well, HTTPS will kind of tell you. So "https://install.meteor.com/" is a lot better than nothing...

If you're going to entertain the idea of the HTTPS site being compromised to serve whatever they want, well, there is precious little you can do about it.


> What do you want them to do?

I want them to not use a one-liner. Step-by-step:

1. Download the files

2. Download the public key

2a. verify the public key if you've never seen it before (publish in the blockchain, have lots of high profile technologists sign it, etc)

3. If the verification matches, then proceed.

Teaching developers to value "clever one-liner hack" over "secure, dependable solution" will lead to bad habits.
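A self-contained sketch of those steps, using openssl as a stand-in for GPG purely so it runs anywhere without a keyring (in reality the private key stays on an offline machine, and the public key is obtained and verified out-of-band per step 2a):

```shell
# Publisher side (done once, ideally on an offline machine):
openssl genpkey -algorithm RSA -out private.pem 2>/dev/null
openssl pkey -in private.pem -pubout -out public.pem   # this gets published
printf 'echo installed\n' > install.sh
openssl dgst -sha256 -sign private.pem -out install.sh.sig install.sh

# User side: has install.sh, install.sh.sig, and public.pem
# (the key fetched and verified out-of-band, per step 2a).
# Only hand the script to sh if the signature checks out.
openssl dgst -sha256 -verify public.pem -signature install.sh.sig install.sh \
  && sh install.sh
```

The whole scheme still stands or falls on how the user got `public.pem`; if it came from the same server as the script, you're back to square one.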


If you're going to include "2a", you can refactor all of your steps into:

1. Google "meteor.com compromised" and decide whether it's currently compromised. If it isn't:

2. Run curl https://install.meteor.com|sh

It saves a few steps and is equally secure - you know, since you're just going to go based on what other people think and include no programmatic check whatsoever (your 2a).


2a can be swapped out for a better PKI system at any time. Relying on whether it's public knowledge that Meteor is compromised or not is not nearly as resilient.


So swap it out for a better PKI system. There is literally nothing in any of your steps that can't be automated, except for the totally nebulous 2a ("publish in the blockchain, have lots of high profile technologists sign it"), which 9/10 people are not qualified to judge.

There is no reason you couldn't automate your whole suggestion, except for that one, which makes it infeasible and open to all manner of social engineering.


Commit a race condition to glibc, musl, or uclibc and you fuck up almost all software on the planet.

It's convenient, and that doesn't mean it's good practice, but I doubt using another method would reduce the risk if meteor.com actually got owned.


> I doubt using another method would reduce the risk if meteor.com actually got owned.

GPG signing, keep the private key offline, publish the public key in the blockchain and have a lot of high profile technologists sign it so it can be independently verified.

See also: PHPUnit. https://phpunit.de/manual/current/en/installation.html#insta...

(They provide an example shell script for quickly downloading and verifying the latest versions of their install)


I completely understand your point.

But in the end it's about people... your example with PHPUnit can be abused like this: https://thejh.net/misc/website-terminal-copy-paste How many people do you think will bother to paste the script into a text editor and check it for evil parts?


Anybody who runs unverified code, through any medium, when the option to run trusted code is available, deserves to get pwned.

https://github.com/paragonie/password_lock/blob/master/run-t...

^- For the record, I keep scripts like this in my Git repositories.


It sounds like your process went ok. But one hour tests are one thing. Projects that may take 10 or more hours are totally different. It really is amazing what some of these companies will demand of their applicants before even granting an interview.

And although this system might be more accessible than traditional recruiting, it is far from perfect. It is the job of the community to demand better and more respectful hiring practices from companies. Paying the applicant for the time they spent working on the project or offering a traditional interview for people who work full time and have families would be a good start.


Oh God.


The point is that they pay you the $100k for the time you give them as an employee. They don't pay you a cent for the interview project, so you shouldn't give them a single second in return.


They don't pay you for the interview, but that usually costs the company several man-hours worth of work just for the time an applicant is on-site. Should applicants be reimbursing companies for failed interviews?


Maybe. I've always thought it would be interesting to make applicants pay an application fee. This would cut down on people "spraying and praying" with their application, and would lessen the workload for companies. It would also justify spending more time and effort in reading applications, since the company isn't just wasting resources reading bad applications.

Even under the current system, at least the waste is symmetric. I give up a few hours of my time, and the company gives up a few hours of its time. There's equity. The mutual work that the company and applicant do offset each other.

A take-home test model skews that balance in favor of the company. An applicant can spend 10 or more hours working the project. The company can run it through an automated testing suite and have a recruiter spend five minutes looking it over. The system is designed to waste more of the applicant's time and less of the company's.


> Even under the current system, at least the waste is symmetric.

But it's never symmetric. Someone in HR spent time setting up the job posting and managing the process. Someone in management took the time to respond to HR and review your resume, then approve the interview. At least one person at a time is in the interview with you. Then the entire team will spend time afterward breaking it down.

A single hour spent by a bad candidate wastes at least 3 man-hours of work by the company, and most likely more.


> A single hour spent by a bad candidate wastes at least 3 man-hours of work by the company, and most likely more.

So? Companies need employees. They have to do what it takes to get them. If it weren't worth it, they wouldn't do it.

It makes sense to me that more time in aggregate is spent by the company than the candidate, because the company has dedicated recruiting/HR/coordination people that handle the process. I, as an already full-time employed developer, don't have as much time to burn with interviewing.


There is no way in hell that you will get someone to pay an application fee on top of having to burn a vacation or sick day.


That interview costs me at least $500 cash -- my take home pay for the day of vacation I would have to give up. I value my vacation days at rather more than $500 actually, because I have so few of them.


Exact same thing here. I've made it a personal policy not to tolerate these types of take-home tests. My time is way too valuable to spend hours on a project only to have it ignored by the company.


Also keep in mind that applying to jobs is frequently throwing your application into a black hole and just praying. Sending a resume into the black hole is one thing. Sending a large codebase or project you worked on as part of the application is entirely different.

I remember a few months ago, I was applying for an internship with a company and they requested that I write a web app to test coding proficiency before they thought about granting an interview. I spent about 12-15 hours writing the thing, deployed the app to Heroku and put the code on Github. I sent a link to the app and the repo to the recruiter. I never got a response of any kind.

Although some companies do try to make the application experience as pleasant as possible for the applicant, the majority of companies don't give a shit about the people who want to work for them, and make no attempt to respect the time of their applicants. Placing greater demands on the applicant is an easy way to shift more of the workload from the recruiter to the applicant. The applicant is the loser in this situation.

