Show HN: Go Micro – A Go microservices development framework (micro.mu)
68 points by chuhnk on Nov 9, 2019 | 45 comments



  go mod graph | wc -l
  787

Oh my. And the most fascinating part of it:

  - google.golang.org/grpc@v1.17.0
  - google.golang.org/grpc@v1.19.0
  - google.golang.org/grpc@v1.19.1
  - google.golang.org/grpc@v1.20.1
  - google.golang.org/grpc@v1.22.0
  - google.golang.org/grpc@v1.24.0


I once went through the steps of asking for help because the plugins repo didn't compile, as it used a different minor version in its go.mod file. I was told it was my problem because I wanted to use Go modules.


I'm unfamiliar with that command. What is this showing?


Dependency graph. Piped to wc to show number of lines in output.
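For context: each line of go mod graph output is one edge of the module requirement graph, in the form "requiring-module required-module@version", so 787 is the number of requirement edges rather than distinct modules. Something like this (module names are illustrative, not actual go-micro output):

  github.com/micro/go-micro google.golang.org/grpc@v1.24.0
  example.com/some/plugin@v1.2.3 google.golang.org/grpc@v1.19.1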


Not knocking the OP, or his/her effort, but I think the number of microservice frameworks exceeds the number of organizations that actually have the problems that microservice architecture was designed to solve.

We're doing it for the project I'm working on at work, and in my opinion it's a colossal waste of engineering time and effort. We're a big company, but we're not FAANG. Our user base is never likely to reach even 100k users total. But hey, we're doing this in the name of the industry's current 'best practice.'

I can't wait for the microservice & scrum trends to die off.


OP here. I agree with you completely. People shouldn't be building these frameworks. Let's rally around one or two frameworks per language where appropriate and drive that as the common development method, as we do for infrastructure.

I'll also mention that yes, I agree, in most cases microservices are not the answer. This really comes down to the natural evolution of companies that scale to 50+ engineers split across multiple teams and keep growing. The software architecture should be modeled after the org and allow teams to execute independently. If that's possible with a monolithic architecture or anything else, then that's the approach that should be taken.

In my case, I came from Google and Hailo, environments in which scale mattered on all levels and I didn't see the tools out there (back in 2015) to solve these problems for everyone else.

Rails is for Ruby. Spring is for Java. Micro is for Go.

I see a world in which Micro can be used to write even a single service, with a model that can scale to dozens. But more importantly, I want to help unlock the reuse of services and the power that microservices enabled for me at Hailo and that I've seen them deliver for others.


Similar. A place I worked at sharded its SQL database into one microservice per database table. It ended up being something like 70 Docker containers across 4 Docker machines. The kicker was that this was spread across 6 repositories. We had much less than 100k users.

I'm all for microservices dying out. It's an awful fad. Making 6 pull requests to add a function argument should not be a thing.


Microservices might be awful for all kinds of reasons, but your pull request problem is caused by your multirepo setup, not microservices. You can have microservices and a monorepo, for example.


Came here to say this. OP definitely has a problem, but not with microservices, it seems. Additionally, there are different levels of microservices.

These range from creating a communication pattern and separating concerns, all the way to the one-service-per-table pattern they have set up, which does seem like overkill.

The former is useful in building teams, the latter is useful in dealing with scaling issues.


God be with the poor people who will have to maintain these systems in a few years when they are legacy and run on old tech. If you think maintaining COBOL code is bad, then this will be a lot of “fun”.


They will just re-write them in the framework or pattern du jour.


Jee. Sus. Why? What benefit did anyone think that would have? Better than half the point of using SQL goes out the window if you can’t query across tables.


Wait until you have half a dozen teams or other services directly accessing your tables, and now you need to update the schema or the underlying data store for scaling reasons, but you can't, because they don't have the time to update their access patterns. Or the team that thoroughly misunderstood how the data was meant to be used and built their tables and workflows on top of yours, forever linking the two structures together. Far better to expose data through APIs if you have multiple teams doing their own things in different codebases.
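A minimal sketch of that "expose data through APIs" idea in Go (all names here are made up): consumers program against a small contract instead of the table, so the schema behind it can change freely.

  package orders

  import "context"

  // Order is the shape other teams see; the table schema behind it can change freely.
  type Order struct {
      ID     string
      Status string
  }

  // OrderService is the contract consumers call instead of querying the table directly.
  type OrderService interface {
      GetOrder(ctx context.Context, id string) (*Order, error)
  }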


I’m familiar with patterns for divvying up DBs along lines of responsibility and access needs, but the described situation was one per table.


You are right that for most use cases microservices aren't necessary, but that doesn't mean they can't still be useful. Although there is some added cost in the beginning, the separation of concerns they provide can be quite useful for security, and in later phases it keeps the monolith from growing too big and too difficult to maintain and extend.

I see microservices as a return to the Unix philosophy:

>Write programs that do one thing and do it well.

>Write programs to work together.


I would argue that, unless you are supporting many developers working on the same project simultaneously (as in, hundreds if not thousands), microservices will actually slow development without improving quality or robustness.

Many things are significantly easier in a monolith: integration testing, reasoning (and verifying with tests) about how components interact, refactoring of interfaces, etc. As soon as you pull components out into microservices, many assumptions that developers may not even realise they're making about developing in a monolith go out the window.


Every microservice you carve out of a monolith gives you at least 2 public APIs you didn't have to worry about before, and makes local development that much more complicated. I had a situation where I needed to spin up 20 microservices just to wire up an A/B test for marketing, and everyone kept asking me what was taking so long while refusing to listen to the trade-offs of their request. Good times.

I vote for punting on microservices until the value proposition is clear. Otherwise you just end up with a macrolith that makes you dream of the monolithic good old days.


Of course, this should have been the approach from the beginning, and it boggles my mind why people pick technologies or paradigms without considering their requirements and the tradeoffs of their technical decisions.

I think part of it is because of the hype machine, where people only talk about how awesome things are that they invented, instead of talking about what problems it solves, what it doesn’t solve, and what its tradeoffs are. If you are reading something to evaluate a technology and it doesn’t talk about all three of those things, discard what you’re reading, because it will mislead you.


I find reasoning about component interaction harder in a monolith. Monolith code has free rein to access other parts of the monolith; instead of having to understand a component's inputs and outputs, you have to keep the entirety of the monolith in mind when reasoning through any particular piece of code.

Smaller services are also easier to test, for the same reasons. Services force the team to limit scope. While one can try to do the same in a monolith, it's too easy to "just this once" rely on some back channel data passing or assumption of the internal state of another part of the monolith.


Teams that can't keep components of a monolith adequately separated will crash and burn when trying to build microservices. It is far more complex.


We are replacing our monolith with microservices like we are Google: Kubernetes, Docker containers, AWS, gRPC, Node, React. It's gonna kill the company. Too many new shiny things at once.


I see debugging as a significant challenge in a microservices environment. Even with distributed tracing, how do you reason about the order of events across services? We've had trouble separating cause and effect in a monolith, let alone across microservices.


Some of the features here seem to be slightly at odds with security. Like auto discovery. Not knowing a priori what's running where doesn't sound like clear security separation.


Micro services don't only provide horizontal scaling but also operational scaling, including risk of deployments, downtime, release coordination and contributions from multiple workstreams or teams.

Sometimes applications with only 100s of users need microservices simply due to the complexity and range of the workloads.

The moment your monolithic frontend and backend need to start doing asynchronous work, you'll want to build a "microservice" to pull from a queue.
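A rough sketch of what such a worker looks like in Go; the Queue interface is a made-up stand-in for whatever queue client you actually use (SQS, RabbitMQ, etc.):

  package worker

  import (
      "context"
      "log"
  )

  // Queue is a hypothetical stand-in for the real queue client.
  type Queue interface {
      Receive(ctx context.Context) ([]byte, error)
  }

  // Run pulls jobs until the context is cancelled; this loop is the whole "microservice".
  func Run(ctx context.Context, q Queue, handle func([]byte) error) {
      for {
          job, err := q.Receive(ctx)
          if err != nil {
              if ctx.Err() != nil {
                  return // shutting down
              }
              log.Printf("receive failed: %v", err)
              continue
          }
          if err := handle(job); err != nil {
              log.Printf("job failed: %v", err)
          }
      }
  }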


> Micro services don't only provide horizontal scaling but also operational scaling

I think it's more accurate to say that they don't necessarily provide any horizontal scaling, but can provide operational scaling. That only holds if there's some natural team divide, and if the impact of the introduced complexity does not outweigh the benefit of the teams being able to test and deploy independently.

Sadly in most cases I have come across, this is not the case -- the complexity introduced and/or problems caused by lack of ecosystem maturity outweigh any potential organisational benefit.

> release coordination

Microservices can make release coordination significantly harder, e.g. when a feature release requires multiple deployments from separate teams. I definitely wouldn't list this in the pros column; it's very much an "it depends". Other tangential factors, e.g. monorepo vs multi-repo, can be more significant.

> The moment your monolithic frontend and backend need to start doing asynchronous work, you'll want to build a "microservice"

I agree queue-based background work is a case where services are a good fit (sadly this is not what a lot of people are doing with their codebases when they "go microservices"), ... but it can also be simpler to deploy the exact same monolithic codebase to a worker, and only execute the part that performs your async task.

(If your worker is a lambda, sure, that isn't going to work.)
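A minimal sketch of that "same codebase, different role" deployment, with made-up names; the worker instance runs only the async path:

  package main

  import (
      "flag"
      "log"
      "net/http"
  )

  func runWeb() {
      http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          w.Write([]byte("ok"))
      })
      log.Fatal(http.ListenAndServe(":8080", nil))
  }

  func runWorker() {
      // Same binary, but this process only executes the queue-consuming path.
      log.Println("worker: consuming queue (stub)")
      select {} // block forever in this sketch
  }

  func main() {
      role := flag.String("role", "web", "which part of the monolith this instance runs: web or worker")
      flag.Parse()
      if *role == "worker" {
          runWorker()
          return
      }
      runWeb()
  }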


> Microservices can make release coordination significantly harder, e.g. when a feature release requires multiple deployments from separate teams. I definitely wouldn't list this in the pros column; it's very much an "it depends". Other tangential factors, e.g. monorepo vs multi-repo, can be more significant.

If you're operating a monolith, every release requires coordination from every team, no?


I was referring to the act of deployment itself. A single deployment of a monolith, without needing to ensure a dependent service whose pipeline is owned by another team is "deployed first", can be a lot easier to organise.

As I say, it can depend a lot on other factors, like repo organisation and pipeline maturity.

I struggle to see how release coordination is something microservices inherently improve upon; do you have an example comparison, perhaps?


Right. I don't know what issues people are having specifically with microservices, but we're in a sort of hybrid place where we have about 30 engineers split into a few teams. We collectively build/operate ~7 microservices and a dozen lambda functions, queues, etc, all of which are deployed together about ten times per day.

This has been working very well for us, and I would like to see us buy into the microservice model more with teams that each operate one or two services instead of each team committing a little bit to every service. In that model, each team would only need to worry about their own service(s) instead of interactions across every service (or rather, this would be less of a concern).

The only piece that doesn't work so well for us is local development; we've tried Docker Compose and PM2 to run the whole fleet of services in containerized and native-process configurations (respectively), but the former is slow due to terrible Docker for Mac filesystem performance and the latter introduces all sorts of environment sanitization/consistency issues.


>The moment your monolithic frontend and backend need to start doing asynchronous work, you'll want to build a "microservice" to pull from a queue.

That’s a multi-process architecture. We’ve done that for ages: multiple processes doing what they do, communicating via TCP or named pipes.


Why can't you spin up a new instance of the monolith and designate it as the "queue worker"? I've done this before with both Laravel and Rails and it worked well enough while keeping things simple.


We're small and early, but I've really enjoyed starting with microservices from the beginning. Each team simply exposes a gRPC endpoint to everyone else and can otherwise run on its own sprint schedule, generally make whatever tech decisions make sense, etc. Deployment velocity has been higher than at other companies I've worked with.


The only reason we have standalone components in our setup is that they simply pick up work and process it. Additionally, they can run on their own and can be updated independently. This of course requires the protocol between the systems to be backwards compatible.


People have been using microservices for decades. Think about the UNIX filesystem. Applications don't know anything about writing to disk blocks; the filesystem service handles that. And the tools that interact with the filesystem can stay very focused on the task at hand. cp only knows how to copy files; ls only knows how to list directories. If you want to add a new feature to ls, you need not worry about breaking cp.

On the "web application" side, we've also been doing the same thing since the LAMP days. Your web app has no understanding of how to efficiently store records; the database service handles that. Your web app has no idea how to efficiently maintain TCP connections from a variety of web browsers all using slightly different APIs; your frontend proxy / load balancer / web server handles that. All your app has to do is produce HTML. It's popular because it works great. You can change how the database is implemented and not break the TCP connection handling of the web server. And people are doing that all the time.

All "microservices" is is doing this to your own code. You can write something and be done with it, then move on to the next thing. This is the value -- loose coupling. Well-defined APIs and focused tests mean that you spend all your time and effort on one thing at a time, and then it's done. And it scales to larger organizations; while "service A" may not be done, it can be worked on in parallel with "service B".

That said, I certainly wouldn't take a single team of 4 and have them write 4 services that interact with each other concurrently. That isn't efficient because not enough exists for anyone to make progress on their individual task, unless it's very simple. But if you do them one at a time, you gradually build a very reliable and maintainable empire, getting more and more functionality with very little breakage.

The problem that people run into is that they act as the developers of these services without investing in the tooling needed to make this work. You need easy-to-use fakes for each service, so you don't have to start up the entire stack to test and play with a change in one service. You need to collect all the logs and store them where you can see all of them at once. You need monitoring so you can see which components aren't interacting correctly (though monoliths need this too). You need distributed tracing so you can get an idea of why a particular high-level request went wrong. All these things are available off the shelf for free and can be configured in a day or two (ELK, Jaeger, Prometheus, Grafana).

The other problem that people run into is bad API design. There is no hack that works around bad API design in microservices; your "// XXX: this is global for horrible reasons" simply isn't possible. You have to know what you want the API surface to look like, or it's going to be a disaster. It's just as much of a disaster in monoliths, though; this is how you get those untestable monstrosities that break randomly every release. Microservices make you fail now, monoliths make you fail later.


> You can write something and be done with it, then move on to the next thing.

That's part of the problem: you really can't. Requirements change. You have to update that service and hope it's backwards compatible, because if not, have fun updating all the services that interact with it.


I don't think requirements change nearly as often as people think they do. Open up your repository and look at the oldest files... probably still running in production, and never needed any changes. (Additionally, a careful design will still be the right one even in the face of changing requirements.)

Making changes backwards compatible is fairly simple. If semantics change, give it a new name. If mandatory fields become optional, just ignore them. (This is why you use something like protocol buffers and not JSON. It's designed for the client and server operating off of different versions of the data structure.)

Having two ways to do something is always going to be a maintenance burden, and make your service harder to understand. It is not related to microservices or monoliths. The same problems exist in both cases, and the solution is always the same; decide whether maintaining two similar things is easier than refactoring your clients. In the case of internal services where you control all the clients, refactoring is easy. You just do it, then it's done. In the case of external services, refactoring is impossible. So you maintain all the versions forever, or risk your users getting upset.


Curious if you use Kubernetes?


There are breaking changes like every second minor release, functionality gets removed, and dependent repositories get deleted/renamed/moved by the author. PRs are discussed on the wrong level, and the author is very opinionated.

Do not use this framework unless you want to end up in an inconsistent mess!


Thank you for the honest feedback. We're moving fairly quickly in regards to the evolution of the framework. Some of that results in breaking changes, and you're right, we haven't established the right channels for communication. To be quite frank, open source and public library maintenance was never my goal or part of my experience; I build platforms and remain mostly behind the scenes. So I apologise for the pain, but hopefully people have found the framework to be useful despite some of the issues.


I like the way the author releases minor fixes. 1.14.0 - Remove the consul registry. What are people supposed to do who are running hundreds of microservices in prod that relied on go-micro with the Consul registry? Any migration path without stopping the world? What about the cost of the person-hours we need to spend changing the code everywhere? The business wants to run the services 24x7 and can't depend on the mood of a third-party framework author.


I'm assuming you're from a certain large corporation from which no one actively engages with our community, comments on PRs, creates issues, or contributes anything for that matter, and which does not pay for support.

As a developer and user of open source I completely understand your pain. As a maintainer who has built and managed this project alone for the past 4 years I would tell you that you have many options in how you make use of a completely free open source project with a liberal Apache 2.0 license.

You are entirely free to fork the project, pin your dependency to a certain release, actively engage in the community, file an issue when you have concerns, and, of your own volition, use something entirely different if you are unhappy with a free tool.

We are in the process of moving from a totally free open source project maintained by one person to a small team building a product and business around these tools. During that period there may be some pain and issues; we'll move fast, potentially break things, and in doing so make many mistakes, but hopefully people will engage to help us move in the right direction.

If you are a company who relies on this piece of software for the 24x7 uptime of your business and this adds measurable value to your company then perhaps it would make sense to engage in some sort of SLA or support agreement for this critical software that you currently pay nothing for.


Service Discovery, load balancing. Aren't these things that should be done by the underlying platform?

In other words: if I'm using this with K8s doesn't K8s do that for me? What major benefits do I still have by using Go Micro?


It'd help a lot if the project showed its rationale, the alternatives, the choices it made, and the corresponding tradeoffs.

Otherwise, one is essentially invited to blindly adopt someone else's design, which is particularly reckless in distributed systems.


Having built a microservice of sorts just yesterday with nothing more than net/rpc, I can say it wasn't that bad.
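For reference, a net/rpc service is roughly the shape of the example in the package docs, plus a client doing rpc.DialHTTP and client.Call("Arith.Multiply", args, &reply):

  package main

  import (
      "log"
      "net"
      "net/http"
      "net/rpc"
  )

  type Args struct{ A, B int }

  type Arith int

  // Multiply is exposed over RPC as "Arith.Multiply".
  func (t *Arith) Multiply(args *Args, reply *int) error {
      *reply = args.A * args.B
      return nil
  }

  func main() {
      rpc.Register(new(Arith))
      rpc.HandleHTTP()
      l, err := net.Listen("tcp", ":1234")
      if err != nil {
          log.Fatal(err)
      }
      log.Fatal(http.Serve(l, nil))
  }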


I fear this does too much.

All an application should care about is an API to do what it wants, so all you need to decide on is a messaging protocol, which is probably going to be gRPC. (Why gRPC? I picked it out of a hat. JSON is very brittle when service definitions change, so you want an IDL. Feel free to pick one and then never care about it again; it doesn't really matter.) Then if you want publish/subscribe, you write a publish/subscribe service and make API calls to it. SendMessage / WaitForMessage / etc.
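A sketch of the kind of API surface I mean, using the SendMessage / WaitForMessage names above (everything here is hypothetical):

  package pubsub

  import "context"

  // Client is all an application sees; the pub/sub service behind it can be implemented however you like.
  type Client interface {
      SendMessage(ctx context.Context, topic string, body []byte) error
      // WaitForMessage blocks until a message arrives on the topic or the context is cancelled.
      WaitForMessage(ctx context.Context, topic string) ([]byte, error)
  }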

Service discovery and load balancing are already solved problems. Use Envoy sidecars, Istio, Linkerd, etc. for load balancing, tracing, TLS injection, all that stuff. Use your "job runner"'s service discovery for service discovery (think: k8s services, but feel free not to use k8s. It's just an example.)

The tools you really need for success with microservices:

1) A way to quickly run the subset of services you need.

For unit tests, I prefer "fake" implementations of services. Often your app doesn't need the full API surface of an upstream service. If you have a StoreKey / RetrieveKey service, an implementation like "map[key]value" is good enough for tests. Make it super simple so you test your app, not the upstream app, which already has tests. (Do feel free to write some integration tests as a sanity check for CI, but keep the code/save/test loop fast and focused!)
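For example, a fake for that hypothetical StoreKey / RetrieveKey service really can be just a map (sketch):

  package kvfake

  import (
      "context"
      "errors"
      "sync"
  )

  // FakeKV is an in-memory test double for the key/value service client.
  type FakeKV struct {
      mu   sync.Mutex
      data map[string]string
  }

  func (f *FakeKV) StoreKey(ctx context.Context, key, value string) error {
      f.mu.Lock()
      defer f.mu.Unlock()
      if f.data == nil {
          f.data = map[string]string{}
      }
      f.data[key] = value
      return nil
  }

  func (f *FakeKV) RetrieveKey(ctx context.Context, key string) (string, error) {
      f.mu.Lock()
      defer f.mu.Unlock()
      v, ok := f.data[key]
      if !ok {
          return "", errors.New("key not found")
      }
      return v, nil
  }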

For the "try it out in the browser", I'm pretty unhappy with the available tools. You want something like docker-compose without requiring docker containers to be built. I ended up writing my own thing to do this at my last job. Each service's directory has a YAML file describing how to run the service and what ports it needs. Then it can start up a service, with Envoy as a go-between for them. That way you get http/2, TLS (important for web apps because some HTML features are only available from localhost or if served over https, and your phone is never going to be retrieving your app's content from localhost), tracing, metrics, a single stream of logs, etc. I got it optimized to the point where you can just type "my-thing ." and have your web app working almost like production in under a second. It was great. I wish I open-sourced it.

2) Observability. You need to know what's going on with every request. What's failing, what's slow, what's a surprising dependency?

2a) Monitoring. With a fleet of applications, it's unlikely that you'll be seeking out failures. Rather they just happen and you don't know how often or why. So every application needs to export metrics, and these metrics need to feed alerts so that you can be informed that something is wrong. (Alert tells you something is abnormal; the dashboard with all the metrics will let you think of some likely causes to investigate.) Just use Prometheus and Grafana. They're pretty great.
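With the Go Prometheus client that's only a handful of lines; a minimal sketch (metric name made up):

  package main

  import (
      "log"
      "net/http"

      "github.com/prometheus/client_golang/prometheus"
      "github.com/prometheus/client_golang/prometheus/promauto"
      "github.com/prometheus/client_golang/prometheus/promhttp"
  )

  // requestsTotal feeds the dashboards and alerts mentioned above.
  var requestsTotal = promauto.NewCounterVec(
      prometheus.CounterOpts{Name: "myapp_http_requests_total", Help: "HTTP requests by status code."},
      []string{"code"},
  )

  func main() {
      http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          requestsTotal.WithLabelValues("200").Inc()
          w.Write([]byte("ok"))
      })
      http.Handle("/metrics", promhttp.Handler()) // Prometheus scrapes this endpoint
      log.Fatal(http.ListenAndServe(":8080", nil))
  }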

2b) Distributed tracing. You don't have an application you can set a breakpoint in to pick apart a failing request. So you need to ephemerally collect and store this information so that when something does break, you have all the information you would have manually obtained all ready for you, so you can dive in and start investigating. Just use Jaeger. It's pretty great. (Jaeger will also give you a service dependency graph based on traces. Great for checking every once in a while to avoid things like "why is the staging server talking to the production database?". We don't know why, but at least we know that it's happening before someone deletes production.)

2c) Distributed logging. You will inevitably produce a lot of interesting logs that will be like gold when you're debugging a problem that you've been alerted to. These all need to be in one place, and need to be tagged so that you can look at one request all at once. The approach I've taken is to use elasticsearch / fluentd / kibana for this, with the applications emitting structured logs (bunyan for node.js, zap for go; but there are many many frameworks like this). I then instructed my frontend proxy (Envoy) to generate a unique UUID and propagate that in the HTTP headers to the backend applications, and wrote a wrapper around my logging framework to extract that from the request context and log it with every log message. (You can also use the opentracing machinery for this; I personally logged the request ID and the trace ID; that way I could easily go from looking at Jaeger to looking at logs, but traces that weren't sampled would still have a grouping key.)
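The request ID plumbing can be a small piece of HTTP middleware; roughly this (the header name is whatever your proxy sets, x-request-id in Envoy's case, and the context key is made up):

  package middleware

  import (
      "context"
      "net/http"

      "go.uber.org/zap"
  )

  type loggerKey struct{}

  // WithRequestLogger tags a per-request zap logger with the proxy-generated request ID
  // and stores it in the request context so downstream log lines share a grouping key.
  func WithRequestLogger(base *zap.Logger, next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
          l := base.With(zap.String("request_id", r.Header.Get("X-Request-Id")))
          ctx := context.WithValue(r.Context(), loggerKey{}, l)
          next.ServeHTTP(w, r.WithContext(ctx))
      })
  }

  // Logger retrieves the request-scoped logger, falling back to a no-op logger.
  func Logger(ctx context.Context) *zap.Logger {
      if l, ok := ctx.Value(loggerKey{}).(*zap.Logger); ok {
          return l
      }
      return zap.NewNop()
  }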

The deeper logs integrate into your infrastructure, the better. As an example, something I did was to include a JWT signed by the frontend SSO server with every request. Then my logging machinery could just log the (internal) username. So when someone came to my desk and said "I'm trying to foo, but I get 'upstream connect error or disconnect/reset before headers'", I could just look for logs by their username. Much easier than trying to figure out what service that was, or what URL they were visiting.

Anyway, sorry for the long post. My TL;DR is that you must invest in good tooling no matter what architecture you use. You will be completely unsuccessful if you attempt microservices without the right infrastructure. But all this is great for monoliths too. Less debugging, more relaxing!


Your assumption is that the framework handles this complexity for you. This is incorrect. It provides abstractions that developers can build on, while allowing the underlying implementations to be swapped out for the most appropriate infrastructure.

The point is that building distributed systems requires a level of understanding in this space, but not one that should require you to focus on infrastructure up front, or even take it into consideration while writing software. Ideally you should be given these capabilities as abstractions, which lets you build distributed applications and offload operational concerns to the relevant parties, while the sum of the parts still works together coherently.

The tools that you mention are infrastructure. And while an environment, a platform, should be provided that gives you the insights and the relevant foundation, it really should not be the primary concern of the developer.

Developers should not be forced to reason about infrastructure.



