
> In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.

To me this is one of the most underrated qualities of go code.

Go is a language that I started learning years ago, but it hasn't changed dramatically. So my knowledge is still useful, even almost ten years later.




I've picked up some Go projects after years of no development, including, as a contractor, some I didn't write myself. It's typically been a fairly painless experience. Typically dependencies go from "1.3.1" to "1.7.5" or something, and generally it's a "read changelogs, nothing interesting, updating just works"-type experience.

On the frontend side it's typically been much more difficult. There are tons of dependencies, everything depends on everything else, there are typically many new major releases, and things can break in pretty non-obvious ways.

It's not so much the language itself, it's the ecosystem as a whole. There is nothing in JavaScript-the-language or npm-the-package-manager that says the npm experience needs to be so dreadful. Yet here we are.


Arguably it's just the frontend. You can use old node in backend as much as you please. Frontend UI expectations evolve so quickly while APIs & backend can just stay the same, if it works it works.


It's not just "old node", it's also "old webpack" and "old vuejs" and "old everything". Yes, "it works" and strictly you don't really need to update it, but you're going to add a lot of friction down the line by never updating and at some point the situation will become untenable.


Old webpack and Vue.js, yes, because the UI is what gets old quickly. But APIs, and the kinds of things you'd do with Go, don't really have to change much as time goes on. A simple Express server and that's it.


This hasn't really been my experience. Maybe if you're only using a single dependency (like Express) it's easy, but with any reasonable number of npm deps things become untenable very quickly. Mostly in ways you don't expect.

Like one dependency upgrade requiring you to move to Node v16, and doing so causing half your app to implode.

I don't really see this type of thing in modern projects in other languages.


> You can use old node in backend as much as you please.

Not really, or at least not always.

I once tried to run some old Node project on Apple Silicon. It relied on some package that wanted to download binaries, but that ancient version didn't support Apple Silicon yet. Upgrading the package to an Apple Silicon supporting version required updating half of the other dependencies, and that broke everything. Eventually, I just gave up.


I agree, but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, and migrations, and settle on project organization, dependency injection patterns for testing, a testing structure, and more.

If you have a template to derive from or sufficient Go experience you'll be fine, but selecting from a grab bag of small libraries early on in a project can be a distraction that slows down feature development significantly.

I love Go but for rapid project development, like working on a personal project with limited time, or at a startup with ambitious goals, Go certainly has its tradeoffs.


I think 80% of this is people coming to Go from other languages (everybody comes to Go from some other language) and trying to bring what they think was best about that language to Go. To an extent unusual in languages I've worked in, it's idiomatic in Go to get by with what's in the standard library. If you're new to Go, that's what you should do: use standard logging, just use net/http and its router, use standard Go tests (without an assertion library), &c.

I'm not saying you need to stay there, but if your project environment feels like Rails or Flask or whatever in your first month or two, you may have done something wrong.


Go these days has had a good few stdlib improvements that reduce your reliance on third party libraries even further.

The http router now handles path parameters and methods, so you might just pull in a library to run a middleware stack.

There is structured logging in the stdlib, which works with the existing log package for an easy transition.

The thing I’ve struggled with is structuring a project nicely, given the way modules work, especially for services that aren’t exactly ‘micro’; the module and workspace system is still pretty unintuitive.


I completely agree with the comment, except for the Flask example. Django would be a better example.

Both Flask and Golang's http package have simplicity and minimalism as their philosophy. Of course, most mature projects will eventually diverge from that. But both can start out as a single file with just a few lines of code.


I really think the library search is more of something you inherit from other languages, though database drivers are something you need to go looking for. The standard library has an adequate HTTP router (though I prefer grpc-gateway as it autogenerates docs, types, etc.) and logger (slog, but honestly plain log is fine).

For your database driver, just use pgx. For migrations, tern is fine. For the tiniest bit of sugar around scanning database results into structs, use sqlx instead of database/sql.

I wouldn't recommend using a testing framework in Go: https://go.dev/wiki/TestComments#assert-libraries

Here's how I do dependency injection:

   func main() {
       foo := &Foo{
           Parameter: goesHere,
       }
       bar := &Bar{
           SomethingItNeeds: canJustBeTypedIn,
       }
       app := &App{
           Foo: foo,
           Bar: bar,
       }
       app.ListenAndServe()
   }
If you need more complexity, you can add more complexity. I like "zap" over "slog" for logging. I am interested in some of the DI frameworks (dig), but it's never been a clear win to me over a little bit of hand-rolled complexity like the above.

A lot of people want some sort of mocking framework. I just do this:

   - func foo(x SomethingConcrete) {
   -     x.Whatever()
   - }
    + type Whateverer interface { Whatever() }
   + func foo(x Whateverer) {
   +     x.Whatever()
   + } 
Then in the tests:

    type testWhateverer struct {
      n int
   }
   var _ Whateverer = (*testWhateverer)(nil)
   func (w *testWhateverer) Whatever() { w.n++ }
   func TestFoo(t *testing.T) {
       x := &testWhateverer{}
       foo(x)
       if got, want := x.n, 1; got != want {
           t.Errorf("expected Whatever to have been called: invocation count:\n  got: %v\n want: %v", got, want)
       }
   }
It's easy. I typed it in an HN comment in like 30 seconds. Whether you need a test that counts how many times you called Whatever is up to you, but if you need it, you need it, and it's easy to do.


I've been writing Golang for years now, and I heavily endorse everything written here.

Only exception is you should use my migration library [0] instead of tern — you don't need down migrations, and you can stop worrying about migration number conflicts.

One other suggestion I'll make is you probably at some point should write a translation layer between your API endpoints and the http.Handler interface, so that your endpoints return `(result *T, error)` and your tests can avoid worrying about serde/typeasserting the results.

[0] https://github.com/peterldowns/pgmigrate


This looks excellent!

The go tools for managing DB schema migrations have always felt lacking to me, and it seems like your tool ticks all of the boxes I had.

Except for one: lack of support for CREATE INDEX CONCURRENTLY (usually done by detecting that and skipping the transaction for that migration). How do you handle creating indexes without this?


Thanks for taking a look!

Long-running index creation is a problem for pgmigrate and anyone else doing “on-app-startup” or “before-app-deploys” migrations.

Even at moderate scale (normal webapp stuff, not megaco size) building indexes can take a long time — especially for the tables where it’s most important to have indexes.

But if you’re doing long-running index building in your migrations step, you can’t deploy a new version of your app until the migration step finishes. (Big problem for lots of reasons.)

The way I’ve dealt with this in the past is:

- the database connection used to perform migrations has a low statement timeout of 10 seconds.

- a long-running index creation statement gets its own migration file and is written as: “CREATE INDEX … IF NOT EXISTS”. This definition does not include the “CONCURRENTLY” directive. When migrations run on a local dev server or during tests, the table being indexed is small so this happens quickly.

- Manually, before merging the migration in and deploying so that it’s applied in production, you open a psql terminal to prod and run “CREATE INDEX … CONCURRENTLY”. This may take a long time; it can even fail and need to be retried after hours of waiting. Eventually, it’s complete.

- Merge your migration and deploy your app. The “CREATE INDEX … IF NOT EXISTS” migration runs and immediately succeeds because the index exists.

I’m curious what you think about this answer. If you have any suggestions for how pgmigrate should handle this better, I’d seriously appreciate it!


I think that’s the safest approach, but it’s inconvenient for the common case of an index that’ll be quick enough in practice.

The approach I’ve seen Flyway take is to allow detecting / opting out of transactions on specific migrations.

As long as you always apply migrations before deploying and abort the deploy if they time out or fail, then this approach is perfectly safe.

On the whole I think Flyway does a decent job of making the easy things easy and the harder things possible. It just unfortunately comes with a bunch of JVM baggage, so a Go-based tool seems like a good alternative.


Makes sense — Flyway is so good, copying their behavior is usually a smart choice. Thanks for the feedback!


I can definitely get behind using some other migration library! Thank you for writing and sharing this!


Thanks :) if you have the time, I’d sincerely appreciate feedback on it and especially on its docs/readme, even a simple github issue for “this is confusing” or “this typo is weird” would be really helpful.


Bold of you to flat-out drop down migrations.

I guess having a new up migration to cover the case is better, but it's nice to have a documented way of rolling back (which would be the down migration) without applying it programmatically. It helps if other team members can see how a change should ideally be rolled back.


Glenjamin gave a great answer. I’ll just add that in my experience (being responsible for the team’s database at a few companies over the years), down migrations are NEVER helpful when a migration goes wrong. Roll-forward is the only correct model. Besides, there are plenty of migrations that can’t be safely rolled back, so most “down migrations” either destroy data or don’t work (or both).


The key here is that in production it's almost always not safe to actually apply a down migration - so it's better to make that clear than to pretend there's some way to reverse an irreversible operation.


> I agree but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, migrations, as well as project organization, dependency injection patterns for testing, organize your testing structure, and more.

So the same as every other language that lacks these in the standard lib?


C++ comes to mind. Probably the worst language that's used significantly in the industry in terms of tooling and dependency management. The committee is trying to fix this, but almost no one can update to the newer standards.


I stumbled on this Go starter project that has enough batteries included to get you started, I think. You might find it useful:

https://github.com/mikestefanello/pagoda


I'd suggest starting with the standard library instead. All other libraries come and go, standard will be there as long as Go is alive.


Right, and mikestefanello/pagoda seems like a comprehensive framework combining the labstack/echo/v4 routing framework and the entgo.io/ent "ORM" - among other things like HTMX and Bulma.

That is a highly opinionated grouping of tools that gets you to a poor person's version of a full-stack framework, one that many in the Go community would flat out reject.


I really don't understand the hate for frameworks in "the community". I had stayed away from Go for about 3 years, and I posted on r/golang asking if anything like a Django for Go had popped up, and got nothing but hate.

I chalk it up to people enjoying writing the same boilerplate stuff over and over in their paid jobs, where they have the time to do it.

To your point, I've got my own set of libraries that I think are The Best™ for all my reasons that at least keep me productive.

Echo for the routing... sqlc for the db interface... I actually _love_ gomega and ginkgo for BDD, but it makes people actually angry, so I settle for testify. And for logging we have slog now, but I usually grab the logging lib from Charmbracelet for nice colorized, leveled logging.


Hey, I checked your Reddit thread [1] and I see a good discussion with useful suggestions. I don't see "anything but hate" at all. There are pros and cons to using frameworks. True, many people seem to prefer to use smaller libraries. I personally don't like magic because sooner or later I lose track of what's going on and need to start guessing. With direct code I can understand faster without having to read docs and layers of source code. It's all about finding the right balance of abstractions.

1 - https://www.reddit.com/r/golang/comments/1anmoqg/what_go_lib...


Fair. Maybe this is good evidence of my naturally pessimistic disposition. I was quite active with the initial replies and didn't feel like it was giving any credit to a framework.


I liked gomega and ginkgo when I worked with it. But by virtue of running tests serially, the race detection becomes useless during testing, and I think it's a must have. Has it changed?


I have no idea. I never considered that it would affect using the race detector and it's been so long now I don't know that we cared / ran into this issue so no clue if it has been fixed.

Thanks for opening my eyes to this shortcoming.


The best thing I’ve found is beego: https://github.com/beego/beego

Not affiliated in any way, but to me it’s better than most of the alternatives


And since it isn't a "framework" but rather just a wired up collection of libraries, it would be pretty simple and even a good learning process to change out the couple of libraries one doesn't like (say, in case they prefer not to use ent, and backlite).


About 50% of the time learning a new language I find that the consensus framework/library choice is not quite to my taste. It isn’t that it’s bad, it just often feels like one thing went viral and then ends up boxed in by their success such that people double down/put up with it rather than evolve it further.

Point being you’re probably going to spend those first five days evaluating the options. The “community” doesn’t know your taste or your needs. You have no idea what their goals are, or what the average skill level is. All of those things can make a big difference in selecting tech to build atop of.


I had this exact same experience with Go. I picked up .NET (and ASP.NET for web stuff) on Linux recently and found it much easier to get started with, batteries included, compared to Go. You don't need external libraries for any of the things you mentioned (ORM, logging, metrics, DI, etc.).

Razor pages are very interesting as well. I haven't used them enough to have a solid opinion yet but I really liked them for quick server rendered pages.


I would also say library quality is often low. E.g. there are numerous flag-parsing libraries, but not a single one comes even close to clap in Rust.


spf13/cobra and urfave/cli are fine, I think. As sibling said, you can't expect Go to have something like Rust's clap because the metaprogramming capabilities are so different.

On the other hand, I find it sad how terrible stdlib's flag library is. I'd love to have something like Python's argparse, which is not perfect but enough for most of the time. Go's flag doesn't even work well for small programs. It should've been more like spf13/pflag, but as we've often seen with Go, they went "screw decades-old conventions" and did something strictly worse.


As far as I understand, they respected decades-old conventions. Just not the ones we needed (Plan9 instead of POSIX/GNU getopt).


I have tried all of these. They all have their niceties, but they also have their problems, and they don't come close to clap in flexibility. argparse from Python is nice, you're right. Thanks for explaining that; I'll look into how different the metaprogramming capabilities are.


On the other hand, if you're looking for a side project, writing your own flag parsing library is a lot of fun! I've been sinking time into mine for a few years and it's always really satisfying to use in my CLIs


You can't just compare a library from another language, because the languages are different. If every flag-parsing library were inspired by clap, it would be a living nightmare in any language that isn't Rust.


Why can't I compare a library from another language? I have spent a significant amount of time with all of the popular flag-parsing libraries in Go, and none were as flexible yet easy to work with as clap. Anyway, I am not necessarily talking about syntax but more about the feature sets and overall quality.


This used to be true of PHP as well, though PHP finally picking off the hairy bits of bad assumptions has eroded that a fair bit. For a long time, from PHP 4-5 or so, you usually didn't need to concern yourself with which version you were running on.


> Go is a language that I started learning years ago, but it hasn't changed dramatically.

A lot of people underrate that quality, I feel. C had that quality but pushed it to neurotic levels, where it would change slowly even when it needed to change faster. Other languages, in contrast, change too fast and try to do too many things at once.

You do not often get credit for providing boring, stable programs that work. I hope they continue to do it this way and do not get seduced into overloading the language.

We have a lot of fast moving languages already, so having one that moves slowly increases potential choice.


It’s why I moved my personal site from Phoenix to Go, and it has proven to be a great choice. I have zero dependencies, so they never go out of date.


Absolutely, but not really underrated. The large and useful std lib plays an important role in the long term stability.


As a teacher, I have a program I've been using for the last 8 years or so. I distribute the compiled version to students for them to compare their own results with what's expected. The program's written in go. I update it every other year or so. So I'm exactly in the case you describe.

I never had any issue. The program still compiles perfectly, cross-compiles to windows, linux and macos, no dependency issue, no breaking change in the language, nothing. For those use-cases, go is a godsend.



