MongoHQ Security breach (mongohq.com)
186 points by steveklabnik on Oct 29, 2013 | 121 comments



Hi everyone. Joel here from Buffer.

Some of you might have heard about our security breach on Saturday. I just wanted to leave a quick note and clarify that this MongoHQ security breach was the method used to obtain our users' access tokens, which led to the wave of spam on Saturday.

This is a key final piece of our investigation which brings it back full circle. I'm very happy that we have been able to gain the full understanding and can be confident there is no backdoor.

I want to be clear that this is still our fault. If access tokens were encrypted (which they are now) then this would have been avoided. In addition, MongoHQ have provided great insights and have much more logging in place than we have ourselves. We’re also increasing logging significantly as a result.
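
For illustration, one minimal way to encrypt tokens at rest in Python, using the cryptography package's Fernet recipe (an assumption about approach, not necessarily how Buffer implemented it; the key variable is a placeholder):

    import os
    from cryptography.fernet import Fernet

    # A key created once with Fernet.generate_key() and kept outside the
    # database (environment variable, secrets manager, etc.); if the key
    # sits next to the data, encrypting the tokens buys you nothing.
    fernet = Fernet(os.environ["TOKEN_ENCRYPTION_KEY"].encode())

    def encrypt_token(plaintext_token):
        return fernet.encrypt(plaintext_token.encode())

    def decrypt_token(ciphertext):
        return fernet.decrypt(ciphertext).decode()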

I've updated our security breach blog post with this information. If you want to see the full set of events, take a read here: http://open.bufferapp.com/buffer-has-been-hacked-here-is-wha...

Let me know if you have any questions about this. I'll keep an eye on this thread.


Just trying to develop a timeline here.

  Buffer security breach - October 26, 2013 [1]
  MongoHQ security breach - October 28, 2013 [2]
But the Buffer security breach was via MongoHQ, so MongoHQ has likely had the issue since at least the 26th, and probably earlier, since the attackers had to have enough situational awareness to target Buffer. I guess my point is, MongoHQ likely had the issue for a while and it went undetected.

[1] http://open.bufferapp.com/buffer-has-been-hacked-here-is-wha...

[2] http://security.mongohq.com/notice


Joel and Josh have it right, we found the actual breach yesterday. You also have it right, the breach happened before Monday. We're hoping to find reasonably conclusive evidence of a start date we can share with affected customers.


Hi Justin. To clarify, from what I understand, October 28 is the date MongoHQ detected this. They've provided us with the logs of database access, and unfortunately the queries leading to our spam attack on Saturday started as early as October 19.

I can understand that MongoHQ wanted to obtain the full picture here and not put other customers at risk by exposing this information before the situation was fully locked down.


I'm not sure what the implication (if any) of your post is, but the timeline seems fairly reasonable to me.

I wouldn't be surprised if it was Buffer's investigation that tipped MongoHQ off to the breach.


Have you considered not outsourcing critical parts of your company to startups that are iterating fast and potentially breaking things? Could you share the reasoning behind Buffer choosing to store their customers' private data with MongoHQ, instead of your own secure infrastructure?


This is the elephant in the room and I'm surprised it hasn't been mentioned by any of the other comments.

When it comes to infrastructure as a service, it appears to me that there is an imbalance between the sensitivity of information entrusted to external systems on one hand, and the standards such systems are held up to on the other.

EDIT: case in point, on http://mongohq.com I can't find a single mention of the word "security". The language is all about ease of use, performance, scalability, disaster recovery, and low cost, but as far as I can tell not a single word is spent on what procedures are in place to safeguard your data from unwanted access.

EDIT: removed inflammatory comment.


"security" is always left for after-the-fact ---- exactly as evidenced by this disclosure and their poor in-house practices.


While I can see your point of view, and the grandparent's, let's be fair. These are startups in a rapidly evolving, highly competitive marketplace, and they have limited resources. If they spent months triple-checking every dotted i and crossed t, they might never even launch, and there'd be no company.

Like everything in software it's about tradeoffs. Maybe they erred a little too far on one side of the curve, so let's learn from that. But it's unfair to expect startups to be in the same league as banks security-wise. Do you have any idea how much a good pentest costs?


I reject the tradeoff where your internal customer support 'impersonate user' web app is available via a simple password on the open internet.


As a founder of another startup (octocall.com) choosing to host our database with another company (Postgres, hosted with Heroku), the decision is a simple one: you trade infrastructure management complexity for a monthly fee. Especially when you're starting out, the team is small, and time is much more expensive than whatever your database provider is charging you per month.

It's a great trade-off for most early-stage companies, because managing databases is hard. I'd rather leave it to the experts who specialize only in managing databases. You and your product team have a thousand and one other things to think about other than managing your database. Your provider may end up making mistakes, but that's part of the risk you take.

Security breaches are a mess for everyone involved, and we're in relatively new territory here in the Infrastructure as a Service space. Overall, I have little doubt that IaaS is a good thing. As an industry, we'll learn and improve on how to deal with all things "security", but we're clearly not there yet.


+1 Using Heroku Postgres as an example gets you a lot for free, e.g. WAL-E disaster recovery.

Also, farming out a piece of infrastructure and having a secure stack don't have to be mutually exclusive.


That's sort of a loaded question, no? A database is, in one way or another, critical to almost every business. I don't think that means every business should build one in house.


He's not talking about building the database, he's talking about hosting the database. And he's not claiming you shouldn't use third-party hosts, he's suggesting restraint in who you choose, in waiting for a track record to accumulate.


"The backdoor that was created through one of our partners, MongoHQ who are managing our database."

Looks like the database is managed by MongoHQ. That's what OP is talking about.


Security-after-the-fact is actually +EV just like certain risk distributions are +EV in finance. Companies must balance where they want to be in the secureness spectrum against the investment cost to get there and it seems that high grade security isn't worth the opportunity cost for a pretty big class of companies and customers.


If access tokens were encrypted (which they are now) then this would have been avoided.

That answers my question! (See https://news.ycombinator.com/item?id=6619265)

Kudos for how you guys handled this during this tough few days.


That's right. In addition, Facebook has an 'appsecret_proof' method where you can require signing of all API calls with the app secret. We've now implemented this. Details: https://developers.facebook.com/docs/reference/api/securing-...
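
For reference, Facebook's appsecret_proof is an HMAC-SHA256 of the access token keyed with the app secret, passed as an extra parameter on each API call. A minimal Python sketch (variable names are illustrative):

    import hashlib
    import hmac

    def appsecret_proof(access_token, app_secret):
        # Hex-encoded HMAC-SHA256 of the user's token, keyed with the app
        # secret; sent along as the appsecret_proof parameter.
        return hmac.new(app_secret.encode("utf-8"),
                        access_token.encode("utf-8"),
                        hashlib.sha256).hexdigest()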

Thanks for the kind words :)


So you were not storing billing, login, or other such account information in MongoHQ, just the access tokens?

Or were you somehow able to identify which fields from your MongoHQ database were actually accessed by the hacker?


It's not cool to throw your upstream vendors under the bus.


(I am one of the founders of MongoHQ)

Buffer has been exceedingly fair with us; we are fully in favor of them giving customers all the information they have.


Seems like they're pretty clear that they're still culpable for designing a system that relied on trusting a 3rd-party vendor to protect user data.

They waited until after MongoHQ made their own disclosure, and all evidence (including comments on the post) point to a fairly good working relationship between the two.

I'm sure both parties wish this hadn't happened, but I don't see any bus throwing...


Huh? This is pretty normal, I think. When a service has downtime because AWS was having a bad day, say, they normally declare that...


As much as the actual cause of the breach (apparently a serious employee error) was preventable, I was still impressed by their detailed disclosure, the steps that have been taken to mitigate it, and the actual details about what was compromised, rather than the generic "non-credit card personal data".

I don't use MongoHQ for any projects right now (I evaluated their offering a year or so ago and opted to go with MongoLab, which I am still suing), but their response impressed me. There is indeed such a thing as screwing up properly, imho.

Kudos to the MongoHQ team, and best of luck in dealing with this mess.


> opted to go with MongoLab, which I am still suing

this is either an unfortunate typo or a great backstory ...


Hah! Oh god, yep, that was a typo. Part of me wishes I had a cool backstory to tell though... (not that I think MongoLab in the least deserves to be the object of such action, of course!)


still "suing"...hahaha. slightly different meaning the misspelling conveys, especially in this incident!


I guess you are suing them for good service. Sorry HN for lack of insight, I really needed the laugh.


I guess you meant "using", not "suing".


Seriously. I wish other services did security notifications with this much detail (I'm looking at you linode).


Long story short: MongoHQ is securing access to critical administration functionality behind a VPN and will be requiring two-factor authentication instead of having that functionality available to anyone on the internet via a simple password.
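
For anyone curious what the second factor typically looks like under the hood, codes like these are commonly generated with RFC 6238 TOTP. A minimal standard-library sketch in Python (purely illustrative, not MongoHQ's implementation):

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(base32_secret, interval=30, digits=6):
        # RFC 6238: HMAC the current 30-second counter with the shared
        # secret, then dynamically truncate to a short numeric code.
        key = base64.b32decode(base32_secret, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)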

What's interesting is that I think we've all been there at one point. You start building something, it gains momentum, then you get so bogged down with adding functionality and fixing bugs that you never get around to implementing the security features you know you should. At what point do you say "Okay, we have too much at risk to not correct this immediately"?


that functionality [critical administration functionality] available to anyone on the internet via a simple password

I mean COME ON, MAN.... there are mistakes, and then there's incompetence.


It depends on the simplicity of the password...

I mean, yeah, if the password was 'letme1n', that's one thing. Whereas if it's "at least 16 characters, mixed case, punctuation and no English words", maybe that's another.

But I would have thought, at the very least, ssh with keys only on the external-facing bastion host.


What with keyboard loggers, unsecured wifi, video cameras, stolen computers, stolen iPhones (with email access, now you're vulnerable to password resets), there are just too many attack vectors for even a 16 character password to suffice. You need to be protected by both something you have and something you know. (My iPhone has a 16+ character password. It's a pain. It's worth it.)

Heroku's lack of 2-factor auth has literally given me nightmares.


I'm not a MongoHQ customer (and as I don't use either AWS or MongoDB... it's unlikely I ever will be), but this is one heck of a disclosure.

Other companies (looking at you Linode...) could stand to learn a thing or two from how MongoHQ is handling this. Extremely transparent, and explaining in very clear terms the steps they'll be taking to mitigate the breach and prevent future incidents.

All that being said... Ouch.


Especially bad for people that were using MongoHQ to manage their DB backups to S3:

As a precaution, we took additional steps on behalf of our customers to invalidate the Amazon Web Services credentials we were storing for you (for the purposes of backups to S3). While this prevents the abuse of your AWS credentials by any malicious party, it may have resulted in additional unintended consequences for your AWS environment if you were utilizing the same AWS credentials for other purposes.

If you had S3 backups configured with MongoHQ, when possible, we have disabled the IAM access keys via AWS. In any case, please go to the AWS Management Console and regenerate any keys given to MongoHQ.


I was one of those people bitten by this last night. My client called and told me he was getting access denied when trying to upload files through his CMS. After some digging I found the S3 key had been revoked. This was concerning, as I hadn't touched the CMS code I wrote in like 3 years and I've had issues deploying old stuff to Heroku in the past. I really wish MongoHQ had contacted me first about revoking the keys.


We're very sorry about this, we went into a little bit of a panic when we realized that we held IAM credentials that gave full access to peoples' EC2 accounts and did what we thought was best. In hindsight, we should have gotten ahold of Amazon immediately and let them manage that process.


Best practice would be to create an IAM user for each purpose rather than sharing the same AWS key across all of your apps, for this exact reason.


At the time this project was put together, IAM didn't exist. But I agree that this would be the best approach going forward.


Yes, for this reason, and because having separate keys allows you to set appropriate access controls limited to the function they are being used for.


So this is why our app broke yesterday. We've spent the last 24 hours wondering how our AWS key was removed. Thankfully I was able to learn about this via HN. Still waiting on a note from MongoHQ. And when we went to file an issue with AWS there was no Premium Support option.


We're very sorry about this, we went into a little bit of a panic when we realized that we held IAM credentials that gave full access to peoples' EC2 accounts and did what we thought was best. In hindsight, we should have gotten ahold of Amazon immediately and let them manage that process.

You should have a support issue open in your AWS portal now, if you need any help getting new keys for other apps. If you can't find it hit us up at support@mongohq.com and we'll escalate.


Our application also had problems due to this. Would have been nice to receive a phone call or email.


Also I believe you should alert us as soon as you find out how long your servers have been compromised for.

As a company we are now trying to figure out how long our access tokens have been out in the wild.


You are 100% right on both points. We'll be updating our security page as we get more details, I expect we'll have some rough timeline information tomorrow.


Thanks. I appreciate being overly cautious when it comes to security rather than under cautious. And this has made us realize that we should have a unique profile for each service.


I am commenting without knowing the specifics of your application, so apologies if it doesn't apply.

You should look into using separate AWS keys for your DB backups and whatever it is your app uses those credentials for. This not only prevents any future availability issues because of key revocation, it also allows you to set fine grained permissions on your access keys limited just to what they're being used for.
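
A rough sketch of that setup in Python with boto3 (assuming current AWS tooling; the user name, bucket, and actions are placeholders to adapt):

    import json
    import boto3

    iam = boto3.client("iam")

    # One IAM user whose only permission is writing backups to one bucket.
    iam.create_user(UserName="mongohq-backups")
    iam.put_user_policy(
        UserName="mongohq-backups",
        PolicyName="s3-backup-only",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:ListBucket"],
                "Resource": ["arn:aws:s3:::my-backup-bucket",
                             "arn:aws:s3:::my-backup-bucket/*"],
            }],
        }),
    )

    # These keys go only to the backup service; revoking them later
    # cannot break anything else.
    keys = iam.create_access_key(UserName="mongohq-backups")["AccessKey"]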


We're enacting these changes at the moment. Thanks!


While annoying for users who were using the credentials they provided for other things (which they really shouldn't have been doing anyway, and hopefully MongoHQ makes it clear that they shouldn't), this really seems like about the only thing they could safely do under the circumstances.


One thing I hate seeing is how startups handle security. Not that Mongo is bad or anything; this is just a good time to say it.

Build your product correctly from the start and have a security policy early. Don't leave XSS in your website, don't just hack code together and throw it on a server. As a security guy I love seeing new startups' products, because so often the code is so freshly written and immature that it's asking for trouble. Measure twice, cut once.


Sometimes taking more time (and/or better coders) to do something 'right' vs. just hacking it together is the difference between failure and success. So from an individual startup's POV this may not be worth doing. Having enough users/success to warrant a security breach is a luxury problem when viewed from the eyes of an entrepreneur with no product.

On the other hand, collectively, startups losing everyone's credentials/data are burning the commons wrt people's willingness to use new products.

But then on hand #3, it's not as if BigCos have a much better record, so I guess this gets chalked up to the general 'why in the world do we release software this bad?', and the general answer is 'because people have been trained to accept it'. In the same way that if the car were invented today, it'd never get past our current health & safety requirements.

So I guess my point is, software is terrible in general, and security is just one aspect of many. As long as we don't have a better way to write it, it'll keep being bad. It's not fair to expect startups to set a higher standard, when they're optimising along several other dimensions at the same time.


You're right that it is a luxury but you are wrong that it is unfair to expect startups to set a higher standard.

Startups have fewer assets than big companies. If you can't secure 1 webserver or you have a security breach with only 3 employees, there is something seriously wrong.

If you expect to one day have 10,000 servers you better believe I expect you to be able to handle 3 or 5 or 10 by yourself.


How, or where, does one learn how to implement a secure application (say, nginx + uwsgi + python + mongodb)?

This is a serious question - the best that I'm aware of are blog posts, but every time someone writes a post about security, HN comments (et al) give contradictory advice.


> Sometimes taking more time (and/or better coders) to do something 'right' vs. just hacking it together is the difference between failure and success

Maybe it's high time we start taking "not screwing over your customers" into account when considering what makes a success.


And yet companies do take a lot of care when it comes to design and usability. This tells me that if customers were to be more aware and security conscious, everything would fall into place. Startups would cut corners there no more than they do with usability. Right now there's a perverse incentive, if they do the right thing, they fall behind the competition. But if the competition has to do it, it's not a problem.


Yup, though I am an early adopter, I am definitely less willing to just hand over my data to a random startup these days than I have been in the past.


Capitalism often presents a choice between aggressively building unsafe products, or losing out to the competition. It's extremely hard to win in a fast market by being stable, secure, and correct, at least when it comes to software.


Capitalism will also say that I'm most likely to succeed if I can do all three.


Does it really?


> Not that Mongo is bad or anything; this is just a good time to say it.

To be clear, this breach had absolutely nothing to do with Mongo. It occurred via a web interface which could've been put in front of any hosted database.


If I were running a company, I'd buy everyone a license of 1Password and make its use mandatory. Sure there are still plenty of attack vectors, but it's just too easy, and 1Password is such a cheap way to mitigate risk (not to mention you're doing your employees a huge favor by reducing their vulnerability outside of work).


Better yet, how about making sure your employees can only connect to your internal tools when on a properly secured VPN and not "externally" from home/coffee shop/airport?


Can you explain why this is better than allowing access using 2FA over HTTPS (with a non-crappy set of cipher choices)?

I.e., what does the VPN buy you, specifically, on the employee side?

(I understand entirely what it buys you on the other side of the equation, such as a smaller attack surface; I'm just trying to understand why you think having a VPN would have made this particular case more secure.)


The value of a VPN over individually-secured HTTPS/TLS+2FA connections is that you can configure the VPN once, use very standard networking tools to continuously ensure that your internal services are only available over the VPN, and not have to worry about individually securing different internal services.

Another benefit is that as your internal userbase changes, you can revoke access from a single point and be reasonably assured that you've mitigated risk, which is something you only get with individually-secured services if you have a reliable directory system.

A problem with individually-secured ops/support systems is that most 3rd party code is not ready to be securely deployed Internet-facing.

Both approaches are totally workable, but the VPN approach is easier.
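
As a small complement to the firewall rules, an application-level check that a request actually arrived over the VPN is cheap to add. A minimal Python sketch (the subnet is a placeholder assumption, not anyone's real topology):

    import ipaddress

    VPN_SUBNET = ipaddress.ip_network("10.8.0.0/24")  # illustrative VPN range

    def allowed(remote_addr):
        # Defense in depth: even if a firewall rule slips, refuse requests
        # to the internal tool that did not come in over the VPN.
        return ipaddress.ip_address(remote_addr) in VPN_SUBNET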


I guess, in a lot of ways, I'm not sure about "and not have to worry about individually securing different internal services."

This is essentially something you need to worry about anyway, for other attack reasons.


That's true and a point worth making, but as a practical matter, breaching almost anyone's perimeter is game-over. To not have that be the case, you need to design from the ground up so that internal services don't trust their own deployment network; it's difficult, time consuming, and in many cases confining (ie, it makes some services prohibitively difficult to deploy).

It's for this reason that pentesters learn quickly that the "make an arbitrary HTTP query from the target's own server" bug is usually sev:critical; for instance, in virtually any Fortune 500 network, that pivot gets you (with a little effort and 50 lines of code) to a JMX console somewhere, and from there code execution.

There's no good reason not to do both (ensuring that your internal services are authenticated reasonably and don't expose functionality or information pre-auth, AND setting up a VPN). But the VPN is the most valuable step.


Yes: from the internal rather than the external threat (bad guy employee).

As tptacek mentions, breaching perimeter security from external is "game-over" in most cases.


Right, I think having a non-split-tunneling VPN forces employees' remote computers onto the "corporate" network, where they are governed by the rules of that network, and establishing rules for access to internal resources is done centrally.

I just wouldn't trust having something critical like "impersonate user" on the open internet - even if secured by https + 2fa.


It's far simpler to remove the need for passwords altogether, or at least minimise them, than to rely on your employees using security software correctly.


detected unauthorized access to an internal support application using a password that was shared with a compromised personal account

Someone's updating their resume tonight.


Seems a bit harsh to fire someone for that. People make mistakes. If anything, you know that this person isn't going to make the same mistake ever again.

Of course, if they do, then firing might be the way to go. First time mistake, second time incompetence.


Perhaps something like this should be in a policy handbook?

"Employees may not use passwords that are used with any service outside of the company."

Zero enforceability (but some companies would probably ask for a list of outside passwords I'm sure), but in the face of a direct policy violation, 100% fireable, and can help a company in terms of liability.


I doubt he'll be fired just for that, but yeah, that is a big mistake on his part.

In general though, in any security breach the most common way to pivot is via password re-use. You'll see this happen with many privileged employees at almost every company.


> Someone's updating their resume tonight.

At a start-up, it most likely was not an engineering decision, but an agility trade-off. Get that product out the door now!


"Move fast and break (other people's) things!" ?


And this is why you use 2FA and put this behind a corporate VPN in RFC 1918 space. Why they are just planning for that now is amusing.

"Our support tool includes an 'impersonate' feature that enables MongoHQ employees to access our primary web UI as if they were a logged in customer"


2FA and VPNs are not exclusively the only way to secure things. X.509, bastion servers, airgaps that require physical access to a secure facility etc are also valid options, dependent on your systems and their configuration.


Granted, but an airgap would make working with some internal support tool a bit cumbersome :)

Bastion servers, if properly firewalled, might be OK as a short-term solution. The concern there is, if you allow unfettered ssh (for example), is someone watching for the inevitable brute-forcing that will ensue?


Mandatory SSH keys mitigate the brute-forcing risk and turn it into a nuisance. My employer presently has this arrangement and has done so for a while. Bastions only get you in the door: different entrances for different environments, and users' keys are only propagated to the machines they need.


Roger that. I keep thinking of my customer support people as non-technical and for whom ssh keys, port forwarding & bastion hosts are way over their heads but your point is taken. There are other (cheaper!) ways to secure an internal network.


Disable login via password, install fail2ban to help with the extra overhead/traffic.


If you have ssh running anywhere, please disable password access. Use keys. It should come installed like that.
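
For reference, the usual sshd_config lines for that look something like the following (a sketch; adapt to your own setup):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no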


I bet many products have a similar 'impersonate' feature. Nothing wrong with that if you take proper precautions.


Nope. Having it on the external internet is what's wrong here.


These guys have great support. A client of mine was using them and had a cluster get wonky. The performance of their website got really slow at an inopportune time. MongoHQ got right on it, fixed the issue, and gave them 6 months of service and doubled their instance size free of charge to make things right. Sure, it would have been better if the servers had not gone down, but that can happen on any system. They also followed up for the next couple of weeks to make sure that things were running smoothly.

I was very impressed with their customer service and would highly recommend them.


That's nice. Their application security setup, processes & procedures are apparently severely lacking, though. So you might want to think twice before recommending them to your other clients.


It would seem to me that New Relic license keys would also be available to the hacker(s), as they are required to link MongoHQ to New Relic.

In speaking with New Relic, these keys cannot be regenerated.

Wondering if anybody could advise as to what security issues that might create? I presume it could potentially give access to some infrastructure information. Have emailed their support but waiting on a reply.

I'd like to chime in that, this circumstance notwithstanding, MongoHQ has been great during the time we've been a customer of theirs.


From New Relic support:

With the license key someone could configure an application to report to the account affiliated with the key. They would not be able to access the account or use the key to retrieve any personal information from the account.


While I commend the team at MongoHQ for detailing the attack, the strategy used to restrict any further damage, and the new processes and additional security they're putting in place, I've heard this story one too many times.

When are companies going to realize that security is a critical aspect of their business? Why do people only act when there's some big security breach, after running trivially insecure systems up to that point?


So is the story here that someone targeted and broke into a MongoHQ employee's email account, found a password for an admin panel, then used that to extract details stored by Buffer to access people's FB & Twitter accounts? I wonder if Buffer was the target, or if it was just the first valuable thing the attacker happened across?


Disclaimer: I haven't used MongoHQ.

Yesterday, I decided to go with MongoHQ for my current project and this morning I saw they got hacked. Irony aside, a couple of months back I decided to check out AppFog (PaaS) and the next day they were acquired. :)

I believe this is a great time to become a new user of MongoHQ - security has been tightened, experience has been built. The "storm is over" and the chances of a new "storm" in the near future (with the measures these guys have taken) have decreased substantially.

I think they reacted pretty impressively, once the breach was detected.

As far as I understand, the biggest flaw in their operations was the fact that many/all internal/support systems were exposed publicly and not kept within a VPN.

Still, they've been around for just about 2 years, which isn't that much time, after all.


Can you please choose Microsoft and decide to use Windows? I want to see what happens tomorrow. :)


I can be used as a weapon. :)


So I am wondering if any service I'm using is using MongoHQ to store my sensitive information in plain text and was breached. Like, it is a bigger deal than just "change your password" when an attacker can "god mode" into customer databases.


The question remains: will they disable god mode? I'm truly not comfortable with this feature; I should be able to block total access to my database. I deal with extremely sensitive personal information of thousands of users, and I would never give access to it to support people from a company I hired - not even MY support team has access to it!


I think in those cases you may not be legally allowed to use a 3rd party managed service to handle the data, although it probably depends on the country where your company is operating.

Also that "god mode" only makes things easier, if you're hiring a managed service by definition it means someone must have access to manage the service.

The answer is: manage it yourself, effectively locking out any 3rd party.


There is a difference between "someone must have access" and "a random support account at that company can have full access".

I agree with parent that customer should be able to disable god mode on their account.


This is the sort of circumstance where you really don't want to be using a hosted database provider, to be honest....


So why use their service then? Stand up your own database and use that.


Putting support functionality on the VPN means folks from outside the organization can't get to it without VPN access. It also means that you must give more people within your organization access to your VPN, which raises the chances of a compromised account being able to access other valuable assets on the VPN, such as your production hardware. How do you make a trade-off between these?


VPN access doesn't have to be all-or-nothing. They could (and should) lock each employee's VPN access down as much as possible, i.e. support personnel have access to their support tool and nothing else, etc.

Production hardware should be on a separate network/VLAN/whatever anyway.


Pretty poor communication -- we were also among those who had to get new AWS credentials with no idea as to why ours were no longer working. Luckily we jumped on it pretty quickly, but I hope this kind of thing is handled much differently by providers in the future. Very inconsiderate of the impact on users.


I've never understood what people use MongoHQ for. I mean, the data is completely open and it doesn't seem to have any support for authentication.


What do you mean the data is completely open? Do they just generate a tokenized endpoint that everyone can access without authentication?


"impersonate" feature in suppot system - I hate this feature, I always refuse when some one asks me to build a feature like this .


At least they used bcrypt to hash user passwords.


For the non-experts among us, does this imply that having the bcrypted passwords in hand is almost worthless from an attacker's point of view? Would it be feasible at all to derive the plain-text passwords from that encrypted data?


No, the hashed passwords are certainly not "almost worthless." It does depend on the cost setting as well (you can configure how long bcrypt takes to compute via this setting).

Anyone using the password "password" or "password123" or other basic common passwords, plus passwords that are just dictionary words, will likely be cracked if the attackers make a dedicated effort no matter what setting is in use. In general though, cracking bcrypt hashes is much, much, much slower than cracking MD5 or SHA1 hashes.

So anyone using a somewhat decent password is likely safe from having it cracked. While with MD5 or SHA1, you generally need a fairly entropic password over at least 10 characters in length to remain safe (a password of totally random 10 characters still wouldn't be cracked with MD5 or SHA1, but it's rare that people use /dev/urandom output as a password).
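
For concreteness, with the Python bcrypt package the cost setting is the rounds argument; a minimal sketch (assuming that library):

    import bcrypt

    # Each +1 on rounds roughly doubles the work factor; 12 is a common
    # default, while a very low setting makes brute force far cheaper.
    hashed = bcrypt.hashpw(b"correct horse battery staple",
                           bcrypt.gensalt(rounds=12))

    def check(password, hashed):
        # Compare a candidate password (bytes) against the stored hash.
        return bcrypt.checkpw(password, hashed)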


With sane defaults, bcrypted passwords are "worthless" in the hands of a casual attacker.

Somebody would have to REALLY want to access your account and have SIGNIFICANT financial/computational resources to crack it. And that only gives them one password. Significant and independent work is required for each individual password.

On the other hand, there's nothing stopping a developer from doing something really dumb like setting bcrypt iterations to 1.


>> On the other hand, there's nothing stopping a developer from doing something really dumb like setting bcrypt iterations to 1.

I _really_ hope that doesn't turn out to be the case here.


Much harder, not impossible.


The name of this company is unfortunate. It doesn't appear that this service has any affiliation with MongoDB, Inc.


Mongo is a trademark of MongoDB, so I suspect they got permission, because naming your company after a trademark is a quick way to get into trouble.


I guess you'd feel the same way about MongoLab.


Anyone got any advice on setting up a VPN and 2 factor authentication? We use Ubuntu, AWS, and Rails.


I would configure your AWS instances inside an AWS VPC and then host the VPN endpoint inside the VPC. Then you can open up the VPN as needed via security groups. This gives you a secure channel directly to your instances -- something like this:

                  +------------------+
                  | AWS              |
                  |  +-----------+   |
                  |  | VPC       |   |
   +------+       |  | +-----+   |   |
   | You  | +----------->VPN |   |   |
   +------+       |  | +-----+   |   |
                  |  +-----------+   |
                  |                  |
                  +------------------+
AWS also provides a multi-factor authentication mechanism @ http://aws.amazon.com/mfa/
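
A rough sketch of the "open up the VPN as needed via security groups" step with boto3 (assuming current AWS tooling; the group ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Expose only the OpenVPN port on the VPN endpoint's security group;
    # everything else stays reachable solely from inside the VPC.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "udp",
            "FromPort": 1194,  # standard OpenVPN port
            "ToPort": 1194,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )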


Send me an email to dpalacio@authy.com and I'll give you access to our Authy OpenVPN plugin free.


Sent!


[deleted]


Yes, really.

It's a support system - do you want them to paste in their private key to login?


ok :P - Sorry, by "internal support application" I understood a helper/daemon application that handles something internally.



