
Awesome little paper. I wish there were more papers like this! BTW, you guys might enjoy https://betterexplained.com - the author explains math concepts in new, more intuitive ways.

The folks at Fermat's Library actually annotated this paper not too long ago: https://fermatslibrary.com/s/how-to-explain-zero-knowledge-p...


Without any idea of what ZKP is used for, I did not find the explanation very intelligible. Later I found the simple explanation on Wikipedia and was able to connect the dots: https://simple.wikipedia.org/wiki/Zero-knowledge_proof


The "simple" version leaves out an important bit from https://en.wikipedia.org/wiki/Zero-knowledge_proof:

> Peggy, being a very private person, does not want to reveal her knowledge (the secret word) to Victor or to reveal the fact of her knowledge to the world in general.

Without that, the example seems overly convoluted.


*she


and it looks like she's already written a few :)

http://www.danah.org/


I appreciate the analysis, but I don't think the title of this should be "Getting Started with Deep Learning".

If you are actually looking to get started with deep learning, you should go elsewhere. This is a review of frameworks and tools people use for deep learning.


With all of this talk about Docker, Kubernetes and the rest, I feel peer pressured into ditching my monolithic Heroku Rails app and switching to the distributed services heaven that Docker seems to advertise.

Can anybody who has made the switch give me a convincing argument about why I should switch (or not)? My feeling is that Docker is great if you are VP of Engineering at Netflix, but probably not the best thing if you are starting a startup and just need to get things done.

Disclaimer: I'm not religious about this and I'm totally open to being convinced that I'm wrong.


> With all of this talk about Docker, Kubernetes and the rest, I feel peer pressured into ditching my monolithic Heroku Rails app and switching to the distributed services heaven that Docker seems to advertise.

I successfully run lots of Docker microservices in production, and I strongly advise you to keep your app on Heroku as long as you can. :-)

Microservices make sense in two circumstances:

1. You have multiple teams of developers, and you want them to have different release cycles and loosely-coupled APIs. In this case, you can let each team have a microservice.

2. There's a module in your app which is self-contained and naturally isolated, with a stable API. You could always just make this a separate Heroku app with its own REST API.

But in general, microservices add complexity and make it harder to do certain kinds of refactorings. You can make them work if you know what you're doing. For example, ECS + RDS + ALBs is halfway civilized, especially if you manage the configuration with Terraform and set up a CI server to build Docker images and run tests. But it's still a lot more complex than a single, well-refactored app on Heroku.
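
If it helps, the CI piece is not much more than the sketch below (a GitLab-CI-style pipeline with placeholder registry and image names, not a working config for any particular setup):

    # .gitlab-ci.yml -- sketch only; registry/image names are placeholders, and it
    # assumes the runner can reach a Docker daemon and has push credentials set up.
    stages:
      - test
      - build

    test:
      stage: test
      image: ruby:2.4
      script:
        - bundle install
        - bundle exec rake test

    build_image:
      stage: build
      image: docker:latest
      services:
        - docker:dind                  # provides a Docker daemon for the build job
      script:
        - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
        - docker push registry.example.com/myapp:$CI_COMMIT_SHA

The same two stages translate pretty directly to whatever CI server you already run; the point is just that tests gate the image build, and every image is tagged with the commit that produced it.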


I need to write my journey-through-Docker article. Now that I get it, I'm pretty much 100% convinced it's the best way to go, simply because everyone who needs to work on something gets a clean dev environment that just works. The bit about deployment/test/CI/production all being just about the same is wonderful.

The next thing I realise is that I'm very happy dealing with infrastructure now in a way I wasn't before. I've looked through Dockerfiles and know what they install and why, and if anything goes wrong it provides me with an immediate go-to: let's add more servers, or let's rebuild the environment and switch over (should there be more users or more load than expected). Docker will buy you time here.

Docker removes the temptation to start editing stuff on servers in the event of issues.

In terms of doing a startup, I think other people here are better placed to advise; if you're at the MVP stage, no, but for anything bigger than that I think it'll pay off.

Managing secrets is still an absolute pain though...


You state that the biggest benefit is consistency. You can get that without Docker. Try Ansible.
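
To make "consistency without Docker" concrete, here's roughly what that looks like as an Ansible playbook (a minimal sketch; the host group, packages, repo URL and service name are all made up):

    # playbook.yml -- hypothetical example of provisioning app servers identically.
    - hosts: appservers
      become: yes
      tasks:
        - name: Install system packages
          apt:
            name: "{{ item }}"
            state: present
            update_cache: yes
          with_items:
            - nginx
            - postgresql-client

        - name: Check out the application code
          git:
            repo: "https://example.com/myapp.git"   # placeholder repo
            dest: /srv/myapp
            version: master

        - name: Ensure the app service is running
          service:
            name: myapp
            state: started
            enabled: yes

Running `ansible-playbook -i inventory playbook.yml` against a fresh box gives you the same result every time, which is the consistency argument in a nutshell.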


Ansible is not fashionable enough ;-)

Seriously though, knowing Docker really well is more likely to improve my career, and having the ability to remove the devops issues associated with setting up dev environments is awesome. My Mac broke this week and I was able to switch to a different machine in 30 minutes because of that.

Does Ansible provide isolation of different dev environments? I think not.


>Seriously though, knowing docker really well is more likely to improve my career

Unfortunately true -- for now. However, your career would be even better served by gaining experience in non-fad technologies.

Also, "career development" is an offensive reason to deploy a technology for your employer. I recognize that it is common, but it's still improper to prioritize resume points over the employer's long-term stability and interests. Personally, when a candidate gives off that vibe to me, I pass on them every time.

>Does Ansible provide isolation of different dev environments? I think not.

I don't understand. Anything you can script in Docker, you can script in Ansible. They both allow the user to pass in arbitrary shell commands and execute anything they want on the target. How does this not accommodate "isolation of different dev environments"?

Maybe you mean that since you can execute a Docker container on your Mac, you don't need to set up a "local" env? Docker transparently uses a virtual machine to execute a Linux kernel on the Mac. You can execute an Ansible script on a normal VM the same way (optionally using something like Vagrant to get simpler, Docker-like (which is really Vagrant-like) CLI management).


It sounds like your current setup is working just fine, so I don't see a compelling reason to switch.

When you start to deploy many applications, and they need to talk to each other, and you need service discovery, automatic restart, rolling upgrades, horizontal scaling, etc - then Kubernetes brings a lot of value.
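
To give a feel for what that value looks like, here's a rough sketch of a Kubernetes Deployment plus Service (app name, image and ports are placeholders): bumping replicas scales horizontally, the rolling-update strategy handles upgrades, the liveness probe gets wedged apps restarted, and the Service gives other apps a stable DNS name to discover it by.

    # myapp.yml -- illustrative sketch only.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3                    # horizontal scaling: change this number
      selector:
        matchLabels:
          app: myapp
      strategy:
        type: RollingUpdate          # zero-downtime upgrades
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.2.3
              ports:
                - containerPort: 8080
              livenessProbe:         # automatic restart if the app stops answering
                httpGet:
                  path: /healthz
                  port: 8080
    ---
    # The Service is the stable name other apps use for discovery.
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 8080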


If you're a startup, then I'd look at the growth you're expecting. Containers scale well, and when you're big, maintaining multiple Heroku apps eats into developer time that can be better spent somewhere else.

Of course, if you've just started, and are getting an MVP out the door, don't worry about docker just yet. And also don't listen to the microservices people. It'll be like putting the cart before the horse.


You can switch to Pivotal Web Services[0] (disclosure: I work for Pivotal on Cloud Foundry) and get the best of both worlds.

PWS is based on Cloud Foundry, which allows routing by path. So as an intermediate step towards decomposing your app into services, you can deploy different copies and have them respond to particular routes.
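
Roughly (from memory, so treat the exact manifest keys as approximate and the app names, domain and path as placeholders), path routing in an app manifest looks like this, with two copies of the app mapped to different paths on the same domain:

    # manifest.yml -- hypothetical apps, domain and path.
    applications:
      - name: storefront
        routes:
          - route: shop.example.com          # catches everything else
      - name: orders
        routes:
          - route: shop.example.com/orders   # only /orders traffic hits this copy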

Cloud Foundry uses Heroku's buildpack code with very minor changes and additional testing. Your app will stage and run identically.

If you decide to switch to docker images, Cloud Foundry can run those too.

Cloud Foundry is a complete platform, rather than a collection of components. Test-driven, pair programmed, all that jazz. More mature and production-tested than any alternative that I'm aware of.

I think it's awesome, but I'm biased. Feel free to email me.

[0] https://run.pivotal.io/


> distributed services heaven

"The grass is always greener on the other side"

There are plenty of upsides to the distributed approach. But there are downsides too which don't get discussed as much: things like communication between nodes, fault tolerance, monitoring and handling failure. The same goes for having many microservices. Also, this stuff becomes time-consuming if you are a solo dev / small dev team.

IMO one approach isn't better than the other for all cases. Maybe I'm a bit of a laggard here, but I still like Heroku and believe in just doing enough infrastructure to support where your app is / is going in the near future.


The ecosystem around containerization is still emerging and is in a rapid state of flux. I really wouldn't recommend getting production anywhere near it for the next 2 years minimum, and realistically, you probably want to wait more like 5.

Both Docker and k8s change quickly and both lack functionality that most people would consider pretty basic. Google may have transcended into a plane where persistent storage and individually-addressable servers are a thing of the past, but the rest of us haven't. Many things that an admin takes for granted on a normal setup are missing, difficult, convoluted, or flat out impossible on k8s/Docker.

We're converting our 100+ "traditional" cloud servers into a Docker/k8s cluster now, and it's a nightmare. There's really no reason for it. The biggest benefit is a consistent image, but you can get that with much less ridiculous tooling, like Ansible.

My opinion on the long-term: containers will have a permanent role, but I don't think it will be nearly as big as many think. Kubernetes will be refined and become the de-facto "cluster definition language" for deployments and will take a much larger role than containers. It will learn to address all types of networked units (already underway) and cloud interfaces/APIs will likely just be layers on top of it.

The hugely embarrassing bugs and missing features in both k8s and Docker will get fleshed out and fixed over the next 2-3 years, and in 5 years, just as the sheen wears off of this containerland architecture and people start looking for the next premature fad to waste millions of dollars blindly pursuing, it will probably start to be reasonable to run some stable, production-level services in Docker/k8s. ;) It will never be appropriate for everything, despite protests to the contrary.

I think the long-term future for k8s is much brighter than the future for Docker. If Docker can survive under the weight of the VC investments they've taken, they'll probably become a repository management company (and a possible acquisition target for a megacorp that wants to control that) and the docker engine will fall out of use, mostly replaced by a combination of container runtimes: rkt, lxc, probably a forthcoming containerization implementation from Microsoft, and a smattering of smaller ones.

The important thing to remember about Docker and containers is that they're not really new. Containers used to be called jails, zones, etc. They didn't revolutionize infrastructure then and I don't think they will now. The hype is mostly because Docker has hundreds of millions of VC money to burn on looking cool.

If Docker has a killer feature, it's the image registry that makes it easy to "docker pull upstream/image". But the Dockerfile spec itself is too sloppy to really provide the simplicity that people think they're getting, the security practices are abysmal (there will be large-scale pwnage due to them sometime in the not-too-distant future), and the engine's many quirks, bugs, and stupid behaviors do no favors to either the runtime or the company.

If Docker can nurse the momentum from the registry, they may have a future, but the user base for Docker is pretty hard to lock in, so I dunno.

tl;dr Don't use either. Learn k8s slowly over the next 2 years as they work out the kinks, since it will play a larger role in the future. In 5 years, you may want to use some of this on an important project, but right now, it's all a joke, and, in general, the companies that are switching now are making a very bad decision.


I'm actually swinging hard against containers at the moment.

I've been playing around with the runv project in Docker (definitely not production ready), running containers in optimized virtual machines... and it just seems like the better model? Which, following the logic through, really means I just want to make my VMs fast with a nice interface for users - and I can: with runv I can spin up KVM fast enough not to notice it.

Basically...I'd really rather just have VMs, and pour effort into optimizing hypervisors (and there's been some good effort along these lines - the DAX patches and memory dedupe with the linux kernel).


Excellent analysis, much in line with my own thoughts and experience. Thanks for taking the time to write this down.


Isn't the goal of software development to reduce complexity, both in real life and in the software itself? I would argue: if your work is on Heroku and you know what you're doing, why chase after intermingled microservice hell?


It is cool for big teams where operations play a big role, but I've found myself switching back to Heroku for some small (as in team size) projects.


I haven't worked anywhere that's doing containerization since the rise of the container managers being discussed here, but I can comment on some of the benefits we saw from deploying Docker at a previous job in what was, at the time, a fairly standard Consul/Consul-Template/HAProxy configuration with Terraform to provision all the host instances.

1. We could run sanity tests, in production, prior to a release going live. Our deploy scripts would bring up the Docker container and, prior to registering as live with Consul, make a few different requests to ensure that the app had started up cleanly and was communicating with the database. Only after the new container was up and handling live traffic would the old Docker container be removed from Consul and stopped. This only caught one bad release, but that's a short bit of unpleasantness that we avoided for our customers.

2. Deployments were immutable, which made rollbacks a breeze. Just update Consul to indicate that a previous image ID is the current ID and re-deployment would be triggered automatically. We wrote a script to handle querying our private Docker registry based on a believed-good date and updating Consul with the proper ID. Thankfully, we only had to run the script a couple of times.

3. Deployments were faster and easier to debug. CI was responsible for running all tests, building the Docker image and updating Consul. That's it. Each running instance had an agent that was responsible for monitoring Consul for changes and the deploy process was a single tens-of-megabytes download and an almost-instantaneous Docker start. Our integration testing environment could also pull changes and deploy in parallel.

4. Setting up a separate testing environment was trivial. We'd just re-run our Terraform scripts to point to a different instance of Consul, and it would provision everything just as it would in production, albeit sized (number/size of instances) according to parameterized Terraform values.

Docker made a lot of things easier on the development side too. We made a development version of our database into a Docker image, so the instructions for setting up a completely isolated, offline-capable dev environment were literally: install Docker/Fig (this was before Docker had equivalent functionality), clone the repo, and tell Fig to start (then wait for Docker to pull several gigs of images, but that was a one-time cost).
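
In spirit, the Fig config was nothing more exotic than this (a from-memory sketch with placeholder image names, ports and credentials, not our real file):

    # fig.yml -- illustrative sketch of the dev environment.
    web:
      build: .                    # the app's own Dockerfile in the repo root
      ports:
        - "3000:3000"
      links:
        - db                      # the app reaches the database at hostname "db"
      environment:
        - DATABASE_URL=postgres://postgres@db/myapp_dev
    db:
      image: ourcompany/dev-database:latest   # pre-seeded development database image

After the initial pull, `fig up` brought the whole thing up in one command.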

As I see it, the main thing that Kubernetes and the rest of the container managers will buy you is better utilization of your hardware/instances. We had to provision instances that were dedicated to a specific function (e.g. web tier) or make the decision to re-use instances for two different purposes. But the mapping between Docker container and instance/auto-scaling group was static. Container managers can dynamically shift workloads to ensure that as much of your compute capacity is used as possible. It was something we considered, but decided our AWS spend wasn't large enough to justify the dev time to replace our existing setup.

Having not used Heroku, I can't say how much of this their tooling gives you, but I think it comes down to running your own math around your size and priorities to say whether Docker and/or any of the higher-level abstractions are worth it for your individual situation. Containers are established enough to make a pretty good estimate for how long it will take you to come up to speed on the technologies and design/implement your solution. For reference, it took 1 developer about a week for us to work up our Docker/Consul/Terraform solution.

If you look at the problems you're solving, you should be able to make a pretty good swag at how much those problems are costing you (kinda the way that we did when we decided that Kubernetes wouldn't save us enough AWS spend to justify the dev time to modify our setup). Then compare that to the value of items on your roadmap and do whatever has the highest value. There's no universally correct answer.


I very much like reading this type of story, where founders candidly talk about their business, how well they are doing, and what it took to get where they are (remove a good chunk of the usual BS, add some numbers). Indie Hackers is a great source (https://www.indiehackers.com) for these kinds of stories.

I have a feeling that this is the dream of a lot of people on HN: having an easy-to-run business with a couple of people, without crazy competition, not needing to go sell it to VCs, and making a good amount of money while running it. Not judging, just observing.

