Hashicorp Vault v1.0 (hashicorp.com)
409 points by blopeur on Dec 4, 2018 | 92 comments



Looking at some of these encryption-as-a-service providers, I'm a bit confused about one of the selling points. From my limited understanding: with a traditional system, you encrypt in the database, but your encryption key likely exists on your main server, possibly in an environment variable. An attacker who compromises the main system has access to both the encrypted data and the encryption keys. So, you instead use something like Vault to request an encryption key in real time from a remote service and thus don't need to store it on your server. One of the selling points on their site is that Vault is better because an attacker would have to compromise two systems in order to decrypt sensitive data. The part I don't understand, though, is: if an attacker has compromised my server, could they not just initiate a request to Vault for a decryption key at that point? I feel like I'm missing something, because this sounds like it remains a single point of failure.


It's part of security in depth. Yes, if they own the server, they could keep requesting decryption keys. The main thing this protects against is someone getting a copy of the encrypted data, then breaking into a server and getting a key that is good forever. By using this system, it renders their copy of the encrypted data useless.

Then you have to add in defenses for the active attack, such as rate limits, anomaly detection on access patterns for decryption keys, and the usual host and network based intrusion detection.

It's one part of a complete security strategy.


Yep.

Another aspect is separation of responsibility - don't forget insider-risk. Security is a process, not a product, as the cliche goes.

Where I work, we use Vault for (among other things) authentication between internal services. The developers do not have access to Vault and never see tokens/certs/passwords/etc, which are created by a different group. So if you want to run a rogue service, you need a conspiracy across departments, increasing your risk of detection.

Same principle you see in accounting, of course. You develop process, loci of responsibility and audit trails designed to enable the desired outcomes while at the same time making attempts to defraud the system impossible, obvious, or investigable after the fact, in descending order of preference.


"So if you want to run a rogue service, you need a conspiracy across departments"

Or a developer that plants a back door in the code. He then exploits it on the production server.

Maybe time is better spent on code reviews?


You're right! That's one approach to exploiting this.

It's worth considering that multiple approaches, across multiple departments, can be used. A good environment might use Vault for authentication between services and require that code cannot go into production without a review and enforce it in code. Then you tightly control who has access to the administration of those restrictions and log all usage to somewhere else, such that even if someone does compromise the infrastructure to allow them to insert their back door it can be detected and the culprit identified.

Again, you're completely right. Code reviews can be a great way to spot malicious code! You're also right that the Vault usage pattern that parent pointed to definitely has vulnerabilities. It's perhaps worth considering that this approach could be used in a context where it might not be the only defense. Perhaps you could ask parent for a more detailed explanation of their org's information security practices?


As the other poster noted, this is not the only mechanism employed, which would be pretty silly. Among some of the others are, indeed, code reviews, static analysis and anomaly detection.

Another point worth noting about insider threats: overconfidence on the part of the attacker is frequently the defenders' friend. Metaphorically speaking, the crook knows about the camera over the door and will avoid showing their face to it, but didn't notice the other ones. And because they work there, they think they have a better handle on how things work than they actually do.


> The main thing this protects against is someone getting a copy of the encrypted data, then breaking into a server and getting a key that is good forever. By using this system, it renders their copy of the encrypted data useless.

Does it? Is there any example of when encrypted data plus a key in one's possession can be prevented from being decrypted?


(I haven't used Vault but I’ve used systems that solve related problems, like KMS.)

One approach is that the app server doesn’t possess the encryption key at all (or at least not the master key). Instead, it calls a remote service to decrypt each item as needed (or the item-level data key aka envelope encryption key).

In this way, an attacker can compromise the entire data set and the app server, but they still can't decrypt the data. They have to maintain ongoing access to a compromised app server in order to decrypt data items one by one, which (i) could take a long time for large data sets (ii) can be slowed down by rate-limiting (iii) runs the risk of being noticed (iv) if detected, attacker's access can be shut off immediately.
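
A rough sketch of that pattern, using for example Vault's transit secrets engine via the Python requests library (the address, token handling, and key name below are illustrative assumptions, not a description of any specific setup):

    import base64
    import requests

    VAULT_ADDR = "https://vault.internal:8200"  # illustrative address
    VAULT_TOKEN = "..."                          # obtained via an auth method, never hard-coded
    KEY_NAME = "app-data"                        # hypothetical transit key name

    def decrypt_item(ciphertext: str) -> bytes:
        """Ask the transit engine to decrypt one item; the key itself never leaves Vault."""
        resp = requests.post(
            f"{VAULT_ADDR}/v1/transit/decrypt/{KEY_NAME}",
            headers={"X-Vault-Token": VAULT_TOKEN},
            json={"ciphertext": ciphertext},
            timeout=5,
        )
        resp.raise_for_status()
        # Transit returns the plaintext base64-encoded.
        return base64.b64decode(resp.json()["data"]["plaintext"])

An attacker who steals the ciphertexts still has to push every single item through a call like this, which is exactly what the rate-limiting and detection points above are about.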

Additionally, you might manage and monitor your key management servers differently than your app servers; for example, you have the expectation that no one will ever log into key servers routinely, so any interactive access or unexpected running processes can generate alarms. The set of people who have access to key servers is different and much more limited than the set of people who have access to the app server. The key server can run in a different virtual network from the app server while providing extremely limited access to the app server (just e.g. TCP on the one port needed to provide this service).

This approach contrasts to schemes where the app server or data store has an encryption key. If the attacker compromises that, they can lift the entire data set and encryption key out of your systems and process it later -- and it's irretrievably gone. With the key server approach, stolen encrypted data provides no value on its own, and the attacker needs ongoing access to the key server to make sense of the data.


You're looking at a bad example. Good backup encryption generates a local AES key on a ramdisk, encrypts the backup with that, encrypts the AES key with a public key, and stores the result. This makes it cryptographically hard to access the backups. The host never has access to the private key needed to decrypt the AES key and recover the backup in usable form. There's no vault involved there because public keys aren't security relevant.
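
A minimal sketch of that scheme with Python's cryptography library (file names and key size are illustrative; a real implementation would stream the backup rather than read it whole):

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Fresh AES key, held only in memory (or on a ramdisk) for this one backup.
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)

    with open("backup.tar", "rb") as f:
        ciphertext = AESGCM(data_key).encrypt(nonce, f.read(), None)

    # Wrap the AES key with the backup public key; only whoever holds the
    # offline private key can ever unwrap it again.
    with open("backup_pub.pem", "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())

    wrapped_key = public_key.encrypt(
        data_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # Store nonce + wrapped_key + ciphertext, then discard data_key.

The host that produced the backup can never read it back, which is the whole point.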

On the other hand, we're running a lot of mutual TLS authentication via CAs in vault. Vault has the private key of these CAs, and authenticated hosts can request signed client certificates for these CA chains. In this case, vault enforces certificate parameters, TTLs, CRLs and other things. You could keep requesting certs, but once I revoke the vault authentication you stole, you're done as soon as the certificate expires, within 3 days.
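
Roughly, requesting one of those short-lived client certs from the PKI secrets engine looks like this (the mount path, role name, and TTL are illustrative assumptions):

    import requests

    VAULT_ADDR = "https://vault.internal:8200"  # illustrative
    VAULT_TOKEN = "..."                          # whatever auth the host already did

    resp = requests.post(
        f"{VAULT_ADDR}/v1/pki/issue/client-role",  # "client-role" is a hypothetical role
        headers={"X-Vault-Token": VAULT_TOKEN},
        json={"common_name": "service-a.internal", "ttl": "72h"},
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    cert, key, ca_chain = data["certificate"], data["private_key"], data["ca_chain"]
    # Vault enforces the role's allowed names and TTL cap and publishes the CRL,
    # so revoking the stolen auth stops new certs, and this one expires on its own.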

And on top of that - yes, it's possible to escalate to vault using local credentials for vault. However, that's a very different threat level. Botnets won't abuse your vault instance. Heck, script kiddies wouldn't know how to abuse your vault instance after getting shell access.

If someone can RCE into a box, and find your vault integration, and abuse your vault integration, you're in for a ride and usually, vault won't be your problem at this point.


Vault provides an audit log. Since each process gets its own token, you can correlate which machine/container got compromised. I think we will see more tooling being developed around that in the future to make the search more efficient.

The other aspect is that if secret rotation is well exercised, it means that in case of a breach, taking an instance out of commission and rotating all the secrets is not a risky task anymore. If developers copy-and-paste tokens on slack, the window of attack is limited to the TTL of the secret. It also prevents developers from using the same secret for another unrelated service.


If you have N encryption keys for different parts of your data, then a single compromised client server can only decrypt a small part of the data.

Since you need to request the key each time you decrypt, things like key rotation become easier, since you can do them centrally. Even if you could decrypt data X on a client node, the key you got an hour ago for X doesn't work anymore; you must request a new one.

Then the server can do things like impose rate-limits. Maybe you can decrypt A-F, but only at a certain imposed rate, and alarms will be raised if the rate is exceeded.


You're basically right. However, it does mean that your secrets aren't stored on disk, so if your backups or SAN are compromised then you're still good.

It is also good for syncing secrets across multiple servers, and for keeping the secret up to date, if you have a password that needs to update nightly or monthly or whatever.


We use Vault in our environment and deployments... Vault has a concept of using tokens for authentication to pull secrets. You can assign max number of uses, max TTL and other parameters when you generate them.

For our deployments, we generate a single token with a max # of uses that matches the target server count for our deploy and also a very short TTL of 5 seconds. Our code gets pushed and the token is passed to each server (in an env var) during that time and a command is passed to refresh environment vars with secrets from Vault. So if a token gets compromised it's very likely to be used up and/or expired.
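
For the curious, minting a token like that boils down to one call against the token auth backend; the policy name and numbers here are illustrative, not our real config:

    import requests

    VAULT_ADDR = "https://vault.internal:8200"  # illustrative
    DEPLOYER_TOKEN = "..."                       # token held by the deploy tooling

    # Throwaway token: one use per target server, dead after 5 seconds.
    resp = requests.post(
        f"{VAULT_ADDR}/v1/auth/token/create",
        headers={"X-Vault-Token": DEPLOYER_TOKEN},
        json={
            "policies": ["deploy-read-secrets"],  # hypothetical policy
            "num_uses": 12,                       # e.g. 12 target servers
            "ttl": "5s",
        },
        timeout=5,
    )
    resp.raise_for_status()
    deploy_token = resp.json()["auth"]["client_token"]
    # deploy_token goes into the env var that is pushed to each server.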


Very simply, it removes the possibility of keys showing up in config in source control or in server backups.


I like Ansible's approach to this better. We keep all our secrets in ansible-vault and have a password to unlock it for those "who need to know".

Sure, this means provisioning VMs requires human interaction to unlock the vault (once) during a kickstart. So what. I'd rather do that than have plaintext or even pre-hashed passwords lying around in source control.

As others have pointed out, Hashicorp Vault seems like a good solution to the problems of another Hashicorp product ... Terraform, which used to keep passwords in its state files (not sure if it still does).


You can also configure Ansible to look up secrets from Hashicorp Vault on the fly.

https://docs.ansible.com/ansible/2.5/plugins/lookup/hashi_va...


Unfortunately it wouldn't work for our network even if I wanted to use it, since our systems aren't allowed to connect outbound to the Internet (for good reason). I suppose I could set up a Vault server locally (?), but it's complete overkill for a problem we already solved in a simpler way.

Ansible is gradually adding modules for about everything. It's come a long way since I started using it back in the 1.8 or 1.9 days.


Vault is usually hosted in-house.

Vault offers features such as short-term secrets which are unique to each client. Vault itself manages creation and destruction of the credentials on the server, allowing it to enforce credential lifetimes.


Those are neat features, but our systems do not require them, and for our small team it would be unnecessary overhead.

And by in-house, I presume you would mean within a cloud-hosted environment, e.g. AWS VDC, Google Cloud, etc. in the most common case.


How do you handle autoscaling? Server restart? Autoscaling requires credentials as part of the VM image. For server restart the credentials must be stored on the machine.


Not everybody uses or needs autoscaling. In fact I would venture that the vast majority of companies doing business on the internet, even those that have sunk their money into the cloud computing gold rush, do not need autoscaling.


I would argue that everyone needs auto scaling, even if it's only to use it to scale manually, or change/upgrade instances easily. Being able to just tell your auto scaling group to do +3 then -3 to replace your 3 instances without downtime is quite nice.


We host internally on our own hardware and have other much simpler ways of achieving the same thing, should we need to.


You put the service account token on the filesystem and then have it removed from the filesystem after loading it. Then it lives inside a process, where it is still recoverable by a determined adversary who obtains root and can inspect process memory, but you can further protect it by encrypting it in memory with an ephemeral key (which also should not be at a predictable location).
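
A sketch of that in Python, using the cryptography library for the in-memory wrapping (the path is an illustrative assumption):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    TOKEN_PATH = "/run/secrets/vault-token"  # hypothetical drop location

    # Load the bootstrap token, then get it off the filesystem.
    with open(TOKEN_PATH, "rb") as f:
        token = f.read().strip()
    os.remove(TOKEN_PATH)

    # Keep it sealed in process memory under an ephemeral key, so a casual
    # memory scrape doesn't find the plaintext at a fixed, predictable offset.
    _mem_key = AESGCM.generate_key(bit_length=256)
    _nonce = os.urandom(12)
    _sealed = AESGCM(_mem_key).encrypt(_nonce, token, None)
    del token

    def get_token() -> bytes:
        """Unseal the token only at the moment it is actually needed."""
        return AESGCM(_mem_key).decrypt(_nonce, _sealed, None)

This only raises the bar; as noted, a determined adversary with root and a debugger can still recover it.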


When you deploy a traditionally configured application to the server you will have a bazillion credentials to access different stuff like databases and external systems. Think about how you are going to secure that key store you need for SSL: what is the point of having it encrypted when the password lies in a configuration file two folders away?

Vaults like this allow you to not have that cryptographic material on the server.

Now, this doesn't solve all problems. As you mentioned, once server security has been breached, the attacker has access to everything.

But there are other problematic situations that this helps solve. Now you don't have to provide secrets when you are building images, and you won't have secrets on the filesystem when somebody gains only partial access that lets them read files.

This also lets you see who is accessing secrets and when, and lets you manage the secrets themselves (think about how you are going to replace a database password).


The real utility of these products is in generating dynamic, short-lived credentials. The longer a set of credentials exists, the greater the chance it ends up written down, widely distributed, or going home with a former employee. With something like Vault your credentials have a lifespan of days, hours, minutes, whatever the shortest timespan you're comfortable with is. That limits your attack surface greatly. If you're doing a break-glass procedure you can be assured that credentials are revoked after a reasonable amount of time.
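
The database secrets engine is the canonical example: each request mints a brand-new database user that Vault revokes when the lease runs out. A sketch (the role name is an illustrative assumption):

    import requests

    VAULT_ADDR = "https://vault.internal:8200"  # illustrative
    VAULT_TOKEN = "..."

    resp = requests.get(
        f"{VAULT_ADDR}/v1/database/creds/app-readonly",  # hypothetical role
        headers={"X-Vault-Token": VAULT_TOKEN},
        timeout=5,
    )
    resp.raise_for_status()
    body = resp.json()
    username = body["data"]["username"]
    password = body["data"]["password"]
    lease_seconds = body["lease_duration"]  # credentials disappear after this unless renewed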


Vault provides telemetry. The secret request patterns for each service are available. Depending on complexity, anomaly detection can be built on top.


In addition to what others have said, having a secrets management system like this makes re-keying easier in the event of a compromise. If one app is compromised, you can easily update the DB keys without having to recompile/redeploy every app that uses the DB. You would still have to redeploy the compromised app to change its keys for accessing the secrets management service, though.


Vault looks great, but I always balk at the operational overhead. Also the cost is significant. I'm at a tiny org, though.

For smaller orgs and projects, Mozilla Sops is really great:

https://github.com/mozilla/sops

It encrypts your secrets at rest using Google KMS, Amazon KMS, and various other cloud provider key services. You can then put those secrets into your code repository, cloud file storage, etc. and give your build pipeline a service account with the ability to decrypt the secret files.

Scales like crap, but is quick and dirty when you need it.


I think the unique value of Vault is now that it can change the locks so that old keys no longer work. For this it needs to understand how to reset keys on, for example, every database [0]. I don't think Sops does that.

I think there is a network effect to Vault that the more locks it can change the more people will use it and the more plugins will be added to it.

I think Vault is hard to use and we're thinking of integrating it in GitLab to give it a friendly UI [1].

0. https://www.vaultproject.io/docs/secrets/databases/index.htm...

1. https://gitlab.com/gitlab-org/gitlab-ee/issues/7569


I agree on the operational load, but Vault is free. Did you mean Vault Enterprise? There are two versions, OSS and Enterprise, but a lot of small and mid-sized companies are able to use the free version at scale. Especially if you're a smaller org, OSS should be fine, so your costs are the VMs it's running on and the humans running it.


I think he is talking about the burden of running a SPoF in highly available fashion, with backups, DR plans, monitoring, logging and all the baggage that comes with it.


Right on. This is also what prevents me from using (unmanaged) Vault--not having the time to invest in learning the failure modes and internals, backup and restore strategies, in addition to the best ways to use it for the 5-10 patterns where it would become important.


Another solution is to store secrets in the Terraform state and use a secured state backend (e.g. S3+KMS) to share them. It makes key sharing and rotation a bit easier while not promoting copy-and-paste.

You can use the "random" provider to generate secrets, the "tls" provider to generate certs and I also wrote a "secret" provider[1] to hold arbitrary secrets.

The best part is that it doesn't require maintaining a long-running process. For small shops it's a good upgrade over storing credentials in code or using git-crypt.

[1]: https://github.com/tweag/terraform-provider-secret


You don't want to store secrets in state files. Imagine you run a CI/CD system where you run terraform plan. Now your secrets are all exposed.


https://github.com/hashicorp/terraform/issues/516

TLDR: Terraform should use Vault for storing secrets in state, but does not support it yet.

(storing secrets in state is occasionally unavoidable due to the resources being orchestrated)


That's fine. Secrets are marked as "sensitive" and not displayed in the plan output.


As Cloud Auto Unseal is now part of the Open Source version, that mitigates the biggest operational overhead, doesn't it?


Correct, but it doesn't automate creation of new clusters (for dev, qa, stage, etc). You'd still need github.com/sethvargo/vault-init or similar to automate initialization, but unsealing can be automated in OSS now.


You can use terraform to deploy Vault clusters. There's also a Helm chart available for deploying Vault onto Kubernetes.


It is still infrastructure that you run, and this is a real cost for lots of people. If you're running other people's terraform or whatever, it's also a black box of software YOU are on the hook for, but that you don't know anything about until you have to (3-5 hours after the service has been down).


But nobody's saying to run "other people's terraform or whatever," or that you should be running a sensitive service that you "don't know anything about until you have to." Common sense doesn't go out the window just because we're talking about hosting Vault within your infrastructure.


You would think, but that's exactly what tons of people do. Many people do not include the cost of running a service in the total cost. It's very easy, when the install is nice and easy, to just say "sweet, it's running", pat hands, "it's done". Then you realize you have no idea what's going on when it's totes on fire. At least with hashi products they're open source so you have a chance. And there's enterprise support, but you're still bleeding till you can get them in to help you out and understand what insanity you did with their product.

This is why SaaS is preferable in a lot of situations. If you're not great at ops and make bad decisions, hopefully the SaaS folks are better at this than you. If you're really good at ops and think a ton about this stuff, then running it yourself makes sense a lot of the time. And yes, with SaaS you now have lock-in and other problems, which have their own set of solutions you should make sure you are applying, like layers of abstraction.

Then you get into the self-fulfilling-infrastructure scenario. We're a vault shop, everyone use vault even for stuff that makes no sense to use vault for. Then rinse and repeat.

Or you get into the sunk cost fallacy with your ops team... "what will they do if we replace this with a SAAS", so you keep services around just to not fire people, not because they're the best solution anymore.

Lots of places to make bad decisions.


Again, I am not recommending that people run a service that they don't know how to operate.


You aren't, you're espousing a sane policy of actually understanding what you're doing; I agree with you.

I'm pointing out that the whole "just run my pod" thing with tfn, or kube, leads people to think they're installing a phone app, not a multi-host, multi-protocol piece of software.

We're in violent agreement. We're just disagreeing about what the average person assumes.


Apologies for the blatant promotion attempt but you may want to check out our simple (in pretty much every sense) KMS implementation. https://github.com/phaistos-networks/KMS It’s inspired by vault and Google KMS and scales horizontally.


Could you go further into why SOPS scales badly?



Why does it scale like crap? Sops is on my shortlist to eval because we have a secrets mgmt problem.


A few reasons.

SOPS is fine, I was more referring to my implementation.

The reason my pipeline scales poorly is that it requires a full build and deployment cycle to update my dev / stg / prod configurations.

Also, if you store the encrypted files in your git repos as I do, you get constant merge conflicts and basically useless git history.

It is an extremely lazy implementation and literally the bare minimum I could do to get my application configurations updating in my CI/CD stack.


We (Mozilla) store our SOPS-encrypted files in git repos and we don't really get merge conflicts, but that's just because we structure secret files in a way such that two people don't edit the same file at the same time often. Git history is also fine. Have you considered configuring SOPS as a differ for git?


Congrats on the big milestone.

I’ve been extremely happy working with HashiCorp tools for the past several years.

Vault provides sooooo much out of the box, it’s hard for me to imagine spinning up a new project without it anymore. Which leads to my biggest fear...my jaded-self is expecting an ‘unfriendly’ acquisition (Microsoft, Alphabet) and/or some onerous licensing/pricing changes.


I've been really happy with most of the Hashicorp tools too (Nomad is a bit iffy at the moment for me). Vault has given us some pretty seamless ways for managing our secrets sharing. I can't rave enough about Consul. It solved some seriously difficult CS problems (although a lot of credit goes to Raft too).

Congrats Hashicorp. Huge fan of your work!


What issues have you had with Nomad? We've been using OSS Nomad in production for a year now with very few problems, if any.


Oh interesting... a couple problems we've had: 1. The Nomad agent either dies, or the process continues to run but in a way that seems unresponsive to requests. 2. When conditions change on the host (low disk space, but we free up some disk space), Nomad doesn't seem to pick up on this and retry a failed deployment. This is after we've told it to re-evaluate. Perhaps we misunderstood what it's supposed to do under those conditions.


I'm quite unimpressed at HashiCorp's products quality and stability.


Which features of Vault are common across your projects?

I've been looking at Vault for implementing envelope encryption and using Amazon KMS's API for encrypting keys is very similar.


We provision very short lived certificates which are provisioned via Vault backed with Consul. It generally works very well.


So, speaking of: there are rumors in the community of an imminent Microsoft acquisition announcement. Just rumors, though.


I was speculating that Amazon might acquire them at HashiConf[1]. Personally I'd rather see HashiCorp go public eventually than get acquired.

[1] - https://news.ycombinator.com/item?id=18118321


Any links to details?


My understanding of vault is not ironclad, but from what I have read it seems it allows ephemeral passwords that let your application get access to a service at initialization time, after which the password ceases to be valid. Which means your application has access, but there are no credentials floating around anywhere that could be compromised later.

If anyone could correct me if I'm wrong, that would be great.


You've more or less got it. You authorize to the vault and store the secrets in memory. So no passwords on-disk/in-source 'floating around'.
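
For example, reading a secret out of the KV v2 engine straight into process memory at startup looks roughly like this (the path is an illustrative assumption; how you get VAULT_TOKEN in the first place is exactly the question asked below):

    import requests

    VAULT_ADDR = "https://vault.internal:8200"  # illustrative
    VAULT_TOKEN = "..."                          # obtained at startup via some auth method

    resp = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/myapp/db",  # hypothetical KV v2 path
        headers={"X-Vault-Token": VAULT_TOKEN},
        timeout=5,
    )
    resp.raise_for_status()
    db_password = resp.json()["data"]["data"]["password"]  # KV v2 nests values under data.data
    # db_password now lives only in this process; nothing hits disk or source control.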


But how do you authorize to the vault then? Aren't there credentials for that part?


Yes, there is the "initial secret" problem. There are a variety of different ways Vault handles that (look up the auth plugin docs), but at my org we use AWS IAM auth. So, app servers authenticate via their IAM instance profile (provided/managed by AWS) while developers assume a specific IAM role and then authenticate via that (which is how we enforce MFA for Vault without paying the crazy enterprise pricing).

Note that with AWS IAM auth, AWS is a trusted third party, and accounts with high-powered IAM access (think AWS admins) end up having a great deal of authority in Vault, too. But for us, at least, these assumptions are reasonable.


That's one piece of the functionality, but there are quite a few more. It's a pluggable system that can also generate more than "passwords" (like cloud IAM credentials, database credentials, PKI).


Went to use Vault Enterprise and heard we got an invoice for half a million dollars. Went with another solution.


Hey, I'm one of the founders of HashiCorp.

We'd like to make more of Vault Enterprise available to smaller companies at lower, more palatable price points. This will be reflected in certain features being omitted as well as a lower level of support.

For now, please understand that our target _enterprise/commercial_ customers at the moment are Global 2000-esque companies. We currently have almost 10% of the Global 2000 as paying customers of Vault Enterprise. The features we've built, along with the support you get, reflect that (dedicated TAMs and so on). But we recognize that certain features of Vault Enterprise would be useful to smaller companies (replication and so forth).

To that end, we're currently planning some new packaging/pricing aimed at this type of user. I have no timelines on when we'd publish that, but it's something we're actively doing now. This should make Vault Enterprise more affordable for smaller companies (think 5-figures/year instead of 6-figures/year).

Meanwhile, we're also making more "quality of life" features like auto-unseal available in Open Source. As we continue to add more features and value aimed directly at the Enterprise user, this lets us reevaluate and move more features into OSS, and we have continued to do so throughout the life of Vault Enterprise. We hope this helps smaller companies adopt Vault successfully. And note that this is a great example of the model working: our success with Vault has funded growth in Vault staffing, that growth in staffing has directly led to more OSS features, and the growth in funding has led to our ability to more confidently make more features free. The community plays a huge role here, too. This is exactly how we intend for it to work!

One thing we learned rather painfully is that selling to the 4-figure vs 5-figure vs 6-figure vs. 7-figure/year customer is each a _very_ different company-building exercise. The expectations of the 7-figure customer (and we have a number of those) is dedicated TAMs, dedicated support reps, quarterly in-person meetings about the state of the install, high impact on the roadmap, and much much more. That of course requires a certain kind of staffing. Very often this staffing scales "down" to lower price points but very rarely does the staffing at lower price points scale "up" to higher price points.

As a company, we chose to go after the large enterprise deals first. This of course alienates some of the smaller deals since the large deals suck the air out of your org a bit. But as we've acquired more and more customers, grown, acquired more funding, etc. we're moving in that direction rapidly.

So, hopefully we'll get there soon! I'm sorry that you were quoted at a price point that didn't work for us mutually, and I'm glad you found an alternate solution that worked for you. For others in a similar position: we're working on it!


Thanks for this update. That explains a lot to me.

I'm a fan of pretty much everything you guys do, and like how you design and architect your products, but haven't yet crossed into the paid plans (haven't had the volume for it). And although a 5-figure sum is manageable for some of my customers (but still a tougher sell since they aren't technical), a 6-figure sum would just never work: they can't wrap their heads around why this is so important, especially when they ask "can you guarantee I won't get hacked if I spend this money?" and I say "no, definitely not, but you'll be safer."

A SaaS model would work wonders for the next tier, the Fortune 5,000,000.


Hi Mitchell.

I had to kill a rollout of Vault at a billion-dollar (revenue) company for the following reasons:

* the engineers doing the PoC could not/would not document how to operate it in production

* the managers did not take the unsealing responsibility seriously ("I'm in mgmt., don't call me on Sundays again.")

* our network was perceived as flaky.

Some cheap solutions are:

* provide some pre-written runbooks for administering Vault that people can cut-and-paste into their wiki

* provide some diagrams and scenarios for unsealing that can be adopted

* have the Vault server monitor and log network health (latency, bad packets, etc.)


sounds like you have problems in your org unrelated to vault.


> sounds like you have problems in your org unrelated to vault.

Unfortunately for engineers doing the deployment, Vault magnifies any weaknesses your organization already has. That's the nature of centralized key mgmt.

For example, I know one large company ended up using macros to unseal Vault to solve the key mgmt. problem I mentioned. In other words, the unseal keys are in plain text on the servers.

Probably happening more often than you would initially expect since nobody wants to drive down to the data center.

The remarkable thing with AWS KMS is that it's so seamless - it's idiot-proof compared to a self-hosted distributed system.


Obviously that's not ideal, but it's probably still more secure than using no secret management system at all.


Thank you for a very detailed and honest answer, much better than the marketing fluff that often gets posted.


While we have you here, can you maybe shed a bit of light on what your commitment to Nomad is these days? I hear you only talk about the k-word, and Nomad is nowhere in the official communication. As someone who enjoys the philosophy and integration (with vault and consul) way more than k8s, it makes me really concerned about building on a product that might be abandoned soon.


Hey, I'm one of the co-founders of HashiCorp.

To keep it brief, we are more committed to Nomad today than before. The team has doubled in the last year, and we plan to grow further next year as well. Our goal has always been to build a simple, general-purpose scheduler that composes well with the rest of the HashiCorp ecosystem.

Kubernetes is an important ecosystem and a platform we tightly integrate with across our other products (Terraform, Consul, Vault). We've always believed that our tools would be "mixed and matched" with different technologies, and that pragmatically we should support the broadest range of integrations.

Nomad is an important piece of our ecosystem, and we have many open source users, enterprise customers, and our SaaS offerings are built on it. Rest assured, it's not going anywhere!


Quality software and developers cost money.


That sounds very reasonable for what you get (assuming it's not node / core locked) and for a moderately sized org.

I wish companies would publish their "whale sized" pricing. I know they're coming up with different pricing for different customers but it'd be great to be able to put a stake in the ground to make a judgement of utility. $40k / month with a warranty or SLA on patches is a lot cheaper than a team of "devops" maintaining it on their own (assuming it's core to the business ops).


Yeah. I totally agree. I understand why companies have a "call us" message on their enterprise pricing pages (they don't want to give competitors an edge). But at the same time I just want the matrix of which features are in the community edition and which aren't. And then, if we still want it, we will call and negotiate a good price. The guys at Cockroach Labs did just this, but again their pricing is hidden behind an email to sales. That's their prerogative but a bit annoying.


You're talking in general I presume, but if not, it's further down: https://www.hashicorp.com/products/vault/enterprise


I guess we can call that style “medical bill” invoicing.


What was your motivation to go with Enterprise?

It seems like most features are OSS now.


Confidant is another open-source product in this space https://lyft.github.io/confidant/

Disclosure: I work at Lyft.


That looks nice, but it does require things to live in DynamoDB, so if you use multiple cloud providers and/or on-prem, that might be limiting.


Agreed... Neat project, but a hard dependency on AWS is an issue for us.


I spent a few months of side work time working on a "secure-deployment-seed" project, https://github.com/jteppinette/secure-deployment-seed. It is a set of Ansible playbooks/roles that have Vault/Consul at the center of a standard web deployment where privacy/security is taken to the Nth degree of perfectionist driven insanity.

I ended up never using it, because it never really felt "perfect" to me. There are so many circular dependencies between systems (DNS/Consul-Template/Consul/Vault/Ansible) and bootstrapping is just complete hell. Dive into that repo and witness it for yourself.

I can see myself using this setup if I was ever just doing Ops work, but when you are also doing everything else, it is just too much.

Anyways, congrats to the Hashicorp team. Your stuff really is top-notch.


There's so much detailed craft and love poured into Hashicorp's codebases, this is great. Congrats Mitchell!


We use https://square.github.io/keywhiz/. It provides secrets as files in a directory, securely, so no special API or client libraries are required to access it.


You can use Consul-Template (https://github.com/hashicorp/consul-template) -- yes, it really needs a rename -- to do this with Vault (or Consul).


What are the real-world ephemeral workloads that batch tokens are intended to address?


This is a feature we built to specifically address large scale serverless workloads that are requesting a huge number of short lived tokens. And this isn’t theoretical: it was directly driven by multiple paying customers. Fun!


"Expanded Alibaba Cloud Integration" haha.



