Some Go web dev notes (jvns.ca)
410 points by tosh 79 days ago | 149 comments



> In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.

To me this is one of the most underrated qualities of go code.

Go is a language I started learning years ago, and it hasn't changed dramatically since. So my knowledge is still useful, even almost ten years later.


I've picked up some Go projects after no development for years, including some I didn't write myself as a contractor. It's typically been a fairly painless experience. Typically dependencies go from "1.3.1" to "1.7.5" or something, and generally it's a "read changelogs, nothing interesting, updating just works"-type experience.

On the frontend side it's typically been much more difficult. There are tons of dependencies, everything depends on everything else, there are typically many new major releases, and things can break in pretty non-obvious ways.

It's not so much the language itself, it's the ecosystem as a whole. There is nothing in JavaScript-the-language or npm-the-package-manager that says the npm experience needs to be so dreadful. Yet here we are.


Arguably it's just the frontend. You can use old node in backend as much as you please. Frontend UI expectations evolve so quickly while APIs & backend can just stay the same, if it works it works.


It's not just "old node", it's also "old webpack" and "old vuejs" and "old everything". Yes, "it works" and strictly you don't really need to update it, but you're going to add a lot of friction down the line by never updating and at some point the situation will become untenable.


Old webpack and vuejs yes, because it's the UI which does get old quickly, but APIs, and the kinds of things you would do with Go, don't really have to change much as time goes on. A simple Express server and that's it.


This hasn't really been my experience. Maybe if you're only using a single dependency (like Express) it's easy, but if you have any reasonable number of npm deps it becomes untenable very quickly, mostly in ways you don't expect.

Like one dependency upgrade requiring you to move to Node v16, and doing so causing half your app to implode.

I don't really see this type of thing in modern projects in other languages.


> You can use old node in backend as much as you please.

Not really, or at least not always.

I once tried to run some old Node project on Apple Silicon. It relied on some package that wanted to download binaries, but that ancient version didn't support Apple Silicon yet. Upgrading the package to an Apple Silicon supporting version required updating half of the other dependencies, and that broke everything. Eventually, I just gave up.


I agree, but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, and migrations, and settle on project organization, dependency injection patterns for testing, a testing structure, and more.

If you have a template to derive from or sufficient Go experience you'll be fine, but selecting from a grab bag of small libraries early on in a project can be a distraction that slows down feature development significantly.

I love Go but for rapid project development, like working on a personal project with limited time, or at a startup with ambitious goals, Go certainly has its tradeoffs.


I think 80% of this is people coming to Go from other languages (everybody comes to Go from some other language) and trying to bring what they think was best about that language to Go. To an extent unusual in languages I've worked in, it's idiomatic in Go to get by with what's in the standard library. If you're new to Go, that's what you should do: use standard logging, just use net/http and its router, use standard Go tests (without an assertion library), &c.

I'm not saying you need to stay there, but if your project environment feels like Rails or Flask or whatever in your first month or two, you may have done something wrong.


Go these days has had a good few stdlib improvements that reduce your reliance on third party libraries even further.

The http router now handles path parameters and methods, so you might just pull in a library to run a middleware stack.
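For illustration, a minimal sketch of the Go 1.22+ pattern syntax (the handler body is made up):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        // Method and path parameter live in the pattern itself (Go 1.22+),
        // no third-party router needed.
        mux.HandleFunc("GET /items/{id}", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "item %s\n", r.PathValue("id"))
        })
        http.ListenAndServe(":8080", mux)
    }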

There is structured logging in the stdlib, which works with the existing log package for an easy transition.
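A minimal slog sketch; note that slog.SetDefault also routes the old log package's output through the new handler, which is the easy-transition part:

    package main

    import (
        "log"
        "log/slog"
        "os"
    )

    func main() {
        // Structured JSON logs on stdout.
        slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))
        slog.Info("server started", "port", 8080)
        // Old-style log calls now flow through the slog handler too.
        log.Println("legacy log line")
    }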

The thing I’ve struggled with is structuring a project nicely, what with the way modules work, especially for services that aren’t exactly ‘micro’, and especially when the module and workspace system is still pretty unintuitive.


I completely agree with the comment, except for the Flask example. Django would be a better example.

Both Flask and Golang's http package have simplicity and minimalism as their philosophy. Of course, most mature projects will eventually diverge from that. But both can start out as a single file with just a few lines of code.


I really think the library search is more of something you inherit from other languages, though database drivers are something you need to go looking for. The standard library has an adequate HTTP router (though I prefer grpc-gateway as it autogenerates docs, types, etc.) and logger (slog, but honestly plain log is fine).

For your database driver, just use pgx. For migrations, tern is fine. For the tiniest bit of sugar around scanning database results into structs, use sqlx instead of database/sql.
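A sketch of the sqlx sugar mentioned here, assuming github.com/jmoiron/sqlx is imported (table and columns are made up for illustration):

    type User struct {
        ID    int64  `db:"id"`
        Email string `db:"email"`
    }

    func getUser(db *sqlx.DB, id int64) (User, error) {
        var u User
        // Get runs the query and scans the single row into the struct
        // by matching column names to db tags.
        err := db.Get(&u, "SELECT id, email FROM users WHERE id = $1", id)
        return u, err
    }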

I wouldn't recommend using a testing framework in Go: https://go.dev/wiki/TestComments#assert-libraries

Here's how I do dependency injection:

   func main() {
       foo := &Foo{
           Parameter: goesHere,
       }
       bar := &Bar{
           SomethingItNeeds: canJustBeTypedIn,
       }
       app := &App{
           Foo: foo,
           Bar: bar,
       }
       app.ListenAndServe()
   }
If you need more complexity, you can add more complexity. I like "zap" over "slog" for logging. I am interested in some of the DI frameworks (dig), but it's never been a clear win to me over a little bit of hand-rolled complexity like the above.

A lot of people want some sort of mocking framework. I just do this:

   - func foo(x SomethingConcrete) {
   -     x.Whatever()
   - }
   + type Whateverer interface{ Whatever() }
   + func foo(x Whateverer) {
   +     x.Whatever()
   + }
Then in the tests:

   type testWhateverer struct {
       n int
   }
   var _ Whateverer = (*testWhateverer)(nil)
   func (w *testWhateverer) Whatever() { w.n++ }
   func TestFoo(t *testing.T) {
       x := &testWhateverer{}
       foo(x)
       if got, want := x.n, 1; got != want {
           t.Errorf("expected Whatever to have been called: invocation count:\n  got: %v\n want: %v", got, want)
       }
   }
It's easy. I typed it in an HN comment in like 30 seconds. Whether a test that counts how many times you called Whatever is useful is up to you, but if you need it, you need it, and it's easy to do.


I've been writing Golang for years now, and I heavily endorse everything written here.

Only exception is you should use my migration library [0] instead of tern — you don't need down migrations, and you can stop worrying about migration number conflicts.

One other suggestion I'll make is you probably at some point should write a translation layer between your API endpoints and the http.Handler interface, so that your endpoints return `(result *T, error)` and your tests can avoid worrying about serde/typeasserting the results.
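A rough sketch of what that translation layer can look like (the names here are hypothetical, not a library API; assumes encoding/json and net/http are imported):

    // typedHandler adapts an endpoint returning (*T, error) to http.Handler,
    // so tests can call fn directly and assert on *T instead of decoding JSON.
    func typedHandler[T any](fn func(r *http.Request) (*T, error)) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            result, err := fn(r)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            _ = json.NewEncoder(w).Encode(result)
        })
    }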

[0] https://github.com/peterldowns/pgmigrate


This looks excellent!

The go tools for managing DB schema migrations have always felt lacking to me, and it seems like your tool ticks all of the boxes I had.

Except for one: lack of support for CREATE INDEX CONCURRENTLY (usually done by detecting that and skipping the transaction for that migration). How do you handle creating indexes without this?


Thanks for taking a look!

Long-running index creation is a problem for pgmigrate and anyone else doing “on-app-startup” or “before-app-deploys” migrations.

Even at moderate scale (normal webapp stuff, not megaco size) building indexes can take a long time — especially for the tables where it’s most important to have indexes.

But if you’re doing long-running index building in your migrations step, you can’t deploy a new version of your app until the migration step finishes. (Big problem for lots of reasons.)

The way I’ve dealt with this in the past is:

- the database connection used to perform migrations has a low statement timeout of 10 seconds.

- a long-running index creation statement gets its own migration file and is written as: “CREATE INDEX … IF NOT EXISTS”. This definition does not include the “CONCURRENTLY” directive. When migrations run on a local dev server or during tests, the table being indexed is small so this happens quickly.

- Manually, before merging the migration in and deploying so that it’s applied in production, you open a psql terminal to prod and run “CREATE INDEX … CONCURRENTLY”. This may take a long time; it can even fail and need to be retried after hours of waiting. Eventually, it’s complete.

- Merge your migration and deploy your app. The “CREATE INDEX … IF NOT EXISTS” migration runs and immediately succeeds because the index exists.

I’m curious what you think about this answer. If you have any suggestions for how pgmigrate should handle this better, I’d seriously appreciate it!


I think that’s the safest approach, but it’s inconvenient for the common case of an index that’ll be quick enough in practice.

The approach I've seen Flyway take is to allow detecting / opting out of transactions on specific migrations.

As long as you always apply migrations before deploying and abort the deploy if they time out or fail, then this approach is perfectly safe.

On the whole I think Flyway does a decent job of making the easy things easy and the harder things possible - it just unfortunately comes with a bunch of JVM baggage - so a Go-based tool seems like a good alternative.


Makes sense — Flyway is so good, copying their behavior is usually a smart choice. Thanks for the feedback!


I can definitely get behind using some other migration library! Thank you for writing and sharing this!


Thanks :) if you have the time, I’d sincerely appreciate feedback on it and especially on its docs/readme, even a simple github issue for “this is confusing” or “this typo is weird” would be really helpful.


Bold of you to flat out drop down migrations.

I guess having a new up migration to cover the case is better, but it's nice to have a documented way of rolling back (which would be the down migration) without applying it programmatically. It helps if other team members can see how a change should ideally be rolled back.


Glenjamin gave a great answer. I'll just add that in my experience (being responsible for the team's database at a few companies over the years), down migrations are NEVER helpful when a migration goes wrong. Roll-forward is the only correct model. Besides, there are plenty of migrations that can't be safely rolled back, so most "down migrations" either destroy data or don't work (or both).


The key here is that in production it's almost always not safe to actually apply a down migration - so it's better to make that clear than to pretend there's some way to reverse an irreversible operation.


> I agree, but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, and migrations, and settle on project organization, dependency injection patterns for testing, a testing structure, and more.

So the same as every other language that lacks these in the standard lib?


C++ comes to mind. Probably the worst language that's used significantly in the industry in terms of tooling and dependency management. The committee is trying to fix this, but almost no one can update to the newer standards.


I stumbled on this Go starter project that has enough batteries included to get you started, I think. You might find it useful:

https://github.com/mikestefanello/pagoda


I'd suggest starting with the standard library instead. All other libraries come and go, standard will be there as long as Go is alive.


Right, and mikestefanello/pagoda seems like a comprehensive framework combining the labstack/echo/v4 routing framework and the entgo.io/ent "ORM", among other things like HTMX and Bulma.

That is a highly opinionated grouping of tools that gets you to a poor person's version of a full-stack framework, one that many in the Go community would flat out reject.


I really don't understand the hate for frameworks in "the community". I had stayed away from Go for about 3 years, and I posted on r/golang asking if anything like a Django for Go had popped up; I got nothing but hate.

I chalk it up to people enjoying writing the same boilerplate stuff over and over in their paid jobs, where they have the time to do it.

To your point, I've got my own set of libraries that I think are The Best™ for all my reasons that at least keep me productive.

Echo for the routing... sqlc for the db interface... I actually _love_ gomega and ginkgo for BDD, but it makes people actually angry, so I settle for testify. And for logging we have slog now, but I usually grab the logging lib from charmbracelet for nice colorized, leveled logging.


Hey, I checked your Reddit thread [1] and I see a good discussion with useful suggestions; I don't see "nothing but hate" at all. There are pros and cons to using frameworks. True, many people seem to prefer smaller libraries. I personally don't like magic, because sooner or later I lose track of what's going on and need to start guessing. With direct code I can understand faster, without having to read docs and layers of source code. It's all about finding the right balance of abstractions.

1 - https://www.reddit.com/r/golang/comments/1anmoqg/what_go_lib...


Fair. Maybe this is good evidence of my naturally pessimistic disposition. I was quite active with the initial replies and didn't feel like it was giving any credit to a framework.


I liked gomega and ginkgo when I worked with it. But by virtue of running tests serially, the race detection becomes useless during testing, and I think it's a must have. Has it changed?


I have no idea. I never considered that it would affect using the race detector and it's been so long now I don't know that we cared / ran into this issue so no clue if it has been fixed.

Thanks for opening my eyes to this shortcoming.


The best thing I’ve found is beego: https://github.com/beego/beego

Not affiliated in any way, but to me it’s better than most of the alternatives


And since it isn't a "framework" but rather just a wired-up collection of libraries, it would be pretty simple (and even a good learning process) to swap out the couple of libraries one doesn't like (say, if they prefer not to use ent or backlite).


About 50% of the time when learning a new language, I find that the consensus framework/library choice is not quite to my taste. It isn't that it's bad; it's more that one thing went viral and then got boxed in by its success, such that people double down and put up with it rather than evolve it further.

Point being you’re probably going to spend those first five days evaluating the options. The “community” doesn’t know your taste or your needs. You have no idea what their goals are, or what the average skill level is. All of those things can make a big difference in selecting tech to build atop of.


I had this exact same experience with Go. I picked up .Net (and Asp.Net for web stuff) on Linux recently and found it much easier to get started with, batteries included, than Go. You don't need external libraries for any of the things you mentioned (ORM, logging, metrics, DI, etc.).

Razor pages are very interesting as well. I haven't used them enough to have a solid opinion yet but I really liked them for quick server rendered pages.


I would also say library quality is often low. E.g. there are numerous flag parsing libraries, but not a single one comes even close to clap in Rust.


spf13/cobra and urfave/cli are fine, I think. As sibling said, you can't expect Go to have something like Rust's clap because the metaprogramming capabilities are so different.

On the other hand, I find it sad how terrible stdlib's flag library is. I'd love to have something like Python's argparse, which is not perfect but enough for most of the time. Go's flag doesn't even work well for small programs. It should've been more like spf13/pflag, but as we've often seen with Go, they went "screw decades-old conventions" and did something strictly worse.


As far as I understand, they respected decades-old conventions. Just not the ones we needed (Plan9 instead of POSIX/GNU getopt).


I have tried all of these. They all have their niceties, but they also have their problems. They do not come close to clap in flexibility, though. Argparse from Python is nice, you're right. Thanks for explaining that; I'll look into how different the metaprogramming capabilities are.


On the other hand, if you're looking for a side project, writing your own flag parsing library is a lot of fun! I've been sinking time into mine for a few years and it's always really satisfying to use in my CLIs


You can't just compare a library from another language, because the languages are different. If every flag parsing library were inspired by clap, it would be a living nightmare for any language that isn't Rust.


Why can't I compare a library from another language? I have spent a significant amount of time with all of the popular flag parsing libraries in Go and none were as flexible yet easy to work with as clap. Anyway, I am not necessarily talking about syntax but moreso about the feature sets and overall quality.


This used to be true of PHP as well, though PHP finally picking off the hairy bits of bad assumptions has eroded that a fair bit. For a long time, from PHP 4-5 or so onward, you usually didn't need to concern yourself with which version you were running.


> Go is a language I started learning years ago, and it hasn't changed dramatically since.

A lot of people underrate that quality, I feel. C had it too, but pushed it to neurotic levels, changing slowly even when it needed to move faster. Other languages, in contrast, change too fast and try to do too many things at once.

You do not often get credit for providing boring, stable programs that work. I hope they continue to do it this way and do not get seduced into overloading the language.

We have a lot of fast moving languages already, so having one that moves slowly increases potential choice.


It's why I moved my personal site from Phoenix to Go, and it has proven to be a great choice. I have zero dependencies, so they never go out of date.


Absolutely, but not really underrated. The large and useful std lib plays an important role in the long term stability.


As a teacher, I have a program I've been using for the last 8 years or so. I distribute the compiled version to students for them to compare their own results with what's expected. The program's written in go. I update it every other year or so. So I'm exactly in the case you describe.

I never had any issue. The program still compiles perfectly, cross-compiles to Windows, Linux and macOS, no dependency issues, no breaking changes in the language, nothing. For those use cases, Go is a godsend.


It's sad https://pkg.go.dev/embed was not mentioned in a post about web development in Go :-)

Having a true single binary bundling your static resources is so convenient.
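For example, a minimal server with embedded assets:

    package main

    import (
        "embed"
        "net/http"
    )

    //go:embed static
    var staticFiles embed.FS

    func main() {
        // Everything under ./static is compiled into the binary at build time,
        // so deploying is still just copying one file.
        http.Handle("/static/", http.FileServer(http.FS(staticFiles)))
        http.ListenAndServe(":8080", nil)
    }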


Massively underrated. It's actually used to build the pkg.go.dev website itself.

https://github.com/golang/pkgsite


it does mention it:

> there’s just 1 static binary, all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed.


It is mentioned now.


When does Go read the file: at build time or at run time?


The embed package allows you to embed assets directly into the binary. The files are read once, at build time, and you can then access them as e.g. a byte slice, a string, or a special FS object that acts like an in-memory file system.


Build time


Build time. It literally embeds the file in the binary.


There are some good tips here.

As for sqlc, I really wanted to like it, but it had some major limitations and minor annoyances last time I tried it a few months ago. You might want to go through its list of issues[1] before adopting it.

Things like no support for dynamic queries[2], one-to-many relationships[3], embedded CTEs[4], composite types[5], etc.

It might work fine if you only have simple needs, but if you ever want to do something slightly sophisticated, you'll have to fallback to the manual approach. It's partly understandable, though. It cannot realistically support every feature of every DBMS, and it's explicitly not an ORM. But I still decided to stick to the manual approach for everything, instead of wondering whether something is or isn't supported by sqlc.

One tip/gotcha I recently ran into: if you run Go within containers, you should set GOMAXPROCS appropriately to avoid CPU throttling. Good explanation here[6], and solution here[7].

[1]: https://github.com/sqlc-dev/sqlc/issues/

[2]: https://github.com/sqlc-dev/sqlc/issues/3414

[3]: https://github.com/sqlc-dev/sqlc/issues/3394

[4]: https://github.com/sqlc-dev/sqlc/issues/3128

[5]: https://github.com/sqlc-dev/sqlc/issues/2760

[6]: https://kanishk.io/posts/cpu-throttling-in-containerized-go-...

[7]: https://github.com/uber-go/automaxprocs


I agree that sqlc has limits, but for me it is great because it takes care of 98% of the queries (made up number) and keeps them simple to write. I can still write manual queries for the rest of them so it's still a net win.


It gets mentioned a lot in the context of database/sql and sqlc, but Jet has been a great alternative so far, most notably because dynamic queries are a non-issue for it.

https://github.com/go-jet/jet/


Unfortunately it relies on CGO for SQLite, which is a bummer


Good point. The generator (and tests) do use mattn's SQLite driver, but you're free to choose any database/sql-compliant implementation in your program. That means setting up the necessary toolchain only on your development machine, even with just a very simple zig setup. I think it should be fairly easy to make it use a pure-Go driver; some of those are nearly drop-in replacements with only minute differences, like the driver name being "sqlite" instead of "sqlite3". But of course that's extra stuff to test and maintain (and it only runs when you're generating from a target database).


Yeah, it'd be much nicer if libraries were designed to be driver agnostic, like redka which supports 4 different SQLite drivers:

https://github.com/nalgeon/redka/tree/main/example


The library is driver agnostic. The Jet CLI code generator indeed uses mattn/go-sqlite3, but you can run your queries using any driver you prefer. You can also customize the Jet generator and use any other driver for code generation.


> I learned the hard way that if I don’t do this then I’ll get SQLITE_BUSY errors from two threads trying to write to the db at the same time.

OK, here's a potentially controversial opinion from someone coming into the web + DB field from writing operating systems:

1. Database transactions are designed to fail

Therefore

2. All database transactions should be done in a transaction loop

Basically something like this:

https://gitlab.com/martyros/sqlutil/-/blob/master/txutil/txu...

That loop function should really have a Context so it can be cancelled; that's future work. But the idea stands -- it should be considered normal for transactions to fail, so you should always have a retry loop around them.
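The shape of such a loop, as a rough sketch (not the linked library's exact API; a real version would also take a Context and retry only on errors it knows are transient, like SQLITE_BUSY or serialization failures; assumes database/sql is imported):

    func withRetry(db *sql.DB, maxAttempts int, fn func(tx *sql.Tx) error) error {
        var err error
        for attempt := 0; attempt < maxAttempts; attempt++ {
            var tx *sql.Tx
            if tx, err = db.Begin(); err != nil {
                return err
            }
            if err = fn(tx); err != nil {
                tx.Rollback()
                continue // sketch: assumes the error is retryable
            }
            if err = tx.Commit(); err == nil {
                return nil
            }
        }
        return err
    }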


It's controversial for many good reasons. You make the general claim that retrying a db transaction should be the rule, when most experts agree that it should be the exception. Just in the context of web development it can be disputed on the account that a db transaction is just a part of a bigger contract that includes a user at the other end of a network, a request, a session, and a slew of other possible connected services. If one thing shows signs of being unstable, everything should fail. That's the general wisdom.

More specific to the code that you linked to, the retry happens in only two specific cases. Even then, I personally don't find what it's doing to be such great engineering. It hacks its way around something that should really be fixed by properly setting the db engine. By encroaching like this, it effectively hides the deeper problem that SQLite has been badly configured, which may come to bite you later.

Failing transactions would raise a stink earlier. Upon inquiry, you'd find the actual remedy, resulting in tremendous performance. Instead, this magic loop is trying to help SQLite be a database and it does this in Go! So you end up with these smart transactions that know to wait in a queue for their turn. And for some time, nobody in the dev team may be aware that this can become a problem, as everything seems to be working fine. The response time just gets slightly longer and longer as the load increases.

Code that tries to save failing things at all cost like this also tends to do this kind of glue and duct tape micromanaging of dependencies. Usually with worse results than simply adjusting some settings in the dependencies themselves. You end up with hard to diagnose issues. The code itself becomes hard to reason about as it's peppered with complicated ifs and buts to cover these strange cases.


Transactions are hard, and in reality there's a shit-ton of things people do that have no right to be close to a transaction (but still are). Transactions were a good imperative kludge at the time that has just warped into a monster people kinda accept over the years.

A loop is a bad construct imho. Something I like far better is the Mnesia approach, which simply decides that transactional updates are self-contained functional blocks and the database manages the transactional issues (yes, this eschews the regular SQL interfaces and DB-application separation, but it could probably be emulated to a certain degree).

https://www.erlang.org/doc/apps/mnesia/mnesia_chap4.html


You'll just end up looping until your retry limit is reached. SQLite just isn't very good at upgrading read locks to write locks, so the appropriate fix really is to prevent that from happening.


I've needed the exact same loop on (an older) Postgres to stop production from hitting transient errors. It's fundamental to the concept of concurrent interactive transactions.


You should never do blind retries in an infinite for loop; ideally it should be a generic, bounded retry function that type-checks the error.


Yes, that's exactly what the linked code does: Calls your function, and if it returns an error, check through the wrapped errors to see if it's one of the SQLite errors which should be retried. If it is, try the transaction again; if not, pass the error up.


OK, a bunch of the replies here seem to be misunderstanding #1. In particular, the assumption is that the only reason a transaction might fail is that the database is too busy.

I come from the field of operating systems, and specifically Xen, where we extensively use lockless concurrency primitives. One prime example is a "compare-exchange loop", where you do something like this:

    y = shared_state_var;
    do {
        oldx = y;
        newx = f(oldx); // f may be arbitrarily complicated
    } while((y = cmpxchg(&shared_state_var, oldx, newx)) != oldx);
Basically this reads oldx, mutates it into newx (using perhaps a quite complicated set of logic). Then the compare exchange will atomically:

- Read shared_state_var

- If and only if this value is equal to oldx, set it to newx

- In any case, return the value that was read

In the common case, when there's no contention, you read the old value, see that it hasn't changed, and then write the new value. In the uncommon case, you notice that someone else has changed the value, and so you'd better re-run the calculations.

From my perspective, database transactions are the same thing: you start a transaction, read some old values, and make some changes based on those values. When you commit the transaction, if some of the things you've read have been changed in the meantime, the transaction will fail and you start over again.

That's what I mean when I say "database transactions are designed to fail". Of course the transaction may fail because you have a connection issue, or a disk issue, or something like that; that's not really what I'm talking about. I'm saying specifically that there may be a data race due to concurrent accesses. Whenever there are more than one thing accessing the database, there is always the chance of this happening, regardless of how busy the system is -- even if in an entire week you only have two transactions, there's still a chance (no matter how small) that they'll be interleaved such that one transaction reads something which is then written to before the transaction is done.

Now SQLite can't actually have this sort of conflict, because it's always single-writer. But essentially what that means is that there's a conflict every time there are two writes, not only when some data was overwritten by another process. Something that happens at a very, very low rate when you're using a proper RDBMS like Postgres now happens all the time. But the problem isn't with SQLite, it's with your code, which has assumed that transactions will never fail due to concurrency issues.


I always see SQLite recommended, but every time I look into it there are non-obvious subtleties around transaction locking, retry behavior, and WAL mode. By default, if you don't tweak things just right, frequent SQLITE_BUSY errors seem to occur at non-trivial QPS.

Is there a place that documents what the set-and-forget settings should be?


_journal_mode=wal&_busy_timeout=200 seems to work well enough.
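With the mattn/go-sqlite3 driver those options go in the DSN (assuming that driver; others spell the options differently):

    // assumes _ "github.com/mattn/go-sqlite3" is imported for the "sqlite3" driver
    db, err := sql.Open("sqlite3", "file:app.db?_journal_mode=WAL&_busy_timeout=200")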


You shouldn't blindly retry things that fail as a default, and you really shouldn't default into making the decision of what to do on a server that sits in the middle between the actual user and the database.

Handling errors in the middle is a dangerous optimization.


Others have said much about a transaction loop, but I also don't think that database transactions are necessarily designed to fail in the sense that the failure is a normal mode of operation. Failing transactions are still considered exceptional; their sole goal is to provide logical atomicity.


I don't think this is controversial. Retrying failed transactions is a common strategy.


You're the first person I've heard say so. When I was learning DB stuff, there were loads of examples of things that looked at the error from a transaction. Not a single one of them then retried the transaction as a result.

The OP's comment is a symptom of this -- they did some writes or some transactions and were getting failures, which means they weren't retrying their transactions. And when they searched for a solution, the advice they received wasn't "oh, you should be retrying your transactions" -- rather, it was some complicated technical thing to avoid the problem by preventing concurrent writes.


I'm in the opposite camp. Transactions can fail, however retrying them is not the solution. It hides the symptom (which is not a problem with proper monitoring), but more importantly, it can lead to overwhelming the db server. Sometimes failure happens because of the load, in which case retrying the query is counterproductive. And even in cases where retrying would be the correct approach, it is the responsibility of the app logic, not of the db connection layer, to retry the query. Imho of course. :)


Ah, interesting. Maybe my experience has been unusual then.

I agree with you that retrying transactions is relatively simple and powerful.


Wouldn't you need two contexts? One for retry cancellation, one for underlying resources to be passed on to?


GOMEMLIMIT has really cut down on the amount of time I’ve had to spend worrying about the GC. I’d recommend it. Plus, if you’re using kubernetes or docker, you can automatically set it to the orchestrator-managed memory limit using something like https://github.com/KimMachineGun/automemlimit — no need to add any manual config at all.
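Per that project's README, usage is just a blank import in your main package:

    import _ "github.com/KimMachineGun/automemlimit" // sets GOMEMLIMIT from the cgroup memory limit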


Oh this is a good find. Thank you for sharing that link!


https://pkg.go.dev/go.uber.org/automaxprocs is another useful one if you set CPU limits
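Same pattern as automemlimit, a blank import:

    import _ "go.uber.org/automaxprocs" // sets GOMAXPROCS to match the container's CPU quota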


Other note

Sooner or later you will hit html/template, and realize it's actually very weird and has a lot of weird issues.

Don't use html/template.

I grew to like Templ instead


Templ [1] is great!

Another Go module that helps a lot when massaging JSON (something most web servers end up doing sooner or later) is GJSON [2].

--

1: https://github.com/a-h/templ

2: https://github.com/tidwall/gjson


stdlib templates are a bit idiosyncratic and probably not the easiest to start with, but they do work and don't have "weird issues" AFAIK. What issues did you encounter?


I don't know what issues others have had with it, but for me one notable thing is that html/template strips all comments out. This is by design, but it's not documented anywhere. I've proposed making this configurable, but my proposal has gotten no traction so far.


https://github.com/golang/go/issues/54380

I didn't know about that. I agree it qualifies as a "weird issue".


I am just trying Templ. I like what I am seeing for the most part. There are some tooling ergonomics to work out: lots of "suddenly the editor thinks everything is an error and nothing will autoimport or format", then back to mostly working. Click-to-definition goes to the autogenerated code instead of the templ file. A couple of things like that. But its code gen is soooooooooo much better to deal with than html/template. That thing is a pita.


What’s so bad about html/template?


Passing data to templates that call templates that (maybe call other templates that) use the data. It's easy to call things in the wrong order, not provide the right values, or think you have access to some data when you totally don't. There's no type help, there's a bit of ceremony to get functions available, and I'm sure there's something else I'm forgetting. Just overall, a pain to work with.

So far, I'm enjoying that in Templ I can clearly see what arguments and types are passed to whichever views/partials, and that I can simply use standard Go functions to do whatever I need them to do.


Good to see the author's mention of routing. I've been mentally stuck with mux for a long time and didn't pay attention to the new release features. Happy that I always find things like this on HN.


Nice new feature, would actually make me want to use Go without Gin.


I am over Gin and have been for years yet everyone keeps using it because it has inertia. The docs are garbage.

Big fan of Echo and it has much better docs.

https://echo.labstack.com/


Thanks for the suggestion, will give it a try. I'm more familiar with Python than Go. I know my way around the Python ecosystem and can make informed decisions about which tool to use. Not so much with Go, so I appreciate your advice.


I had to move from Gin to echo for my personal site, the routing in Gin was refusing to serve static resources at the root path without some headache.


I've grown to prefer go-chi over Gin (or Echo), since it's just the standard library with some QoL features on top.


Chi is amazing. I love the philosophy of extending the stdlib instead of writing an alternative. I try to keep that in mind when writing my own libs or helpers now, and I'm very satisfied with the results.

For example I made a lib to write commands (like cobra or urfave/cli), but based entirely on the `flag` package: https://github.com/Thiht/go-command


> For example I made a lib to write commands (like cobra or urfave/cli), but based entirely on the `flag` package: https://github.com/Thiht/go-command

Looks nice! I'd like an easier way of setting both long and short flags for a command, i.e. --verbose and -v should do the same. Using `flag` I have to declare everything twice to achieve this.
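For reference, the declare-it-twice workaround with plain flag looks like this:

    var verbose bool

    func init() {
        flag.BoolVar(&verbose, "verbose", false, "enable verbose output")
        flag.BoolVar(&verbose, "v", false, "shorthand for -verbose")
    }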


Nice CLI lib! I'm still looking for an Argh or Typer equivalent though.


I like it, but with the new http.ServeMux rolled out in Go 1.22, is there any use for Chi anymore?


Good question. The middleware stack it provides is nice.


What I love about Go is its simplicity and lack of framework dependency. Go is popular because it has no dominating framework. There's nothing wrong with frameworks when they fit the use case, but I feel we have become over-dependent on frameworks, and Go brings a freshness: just using the standard library, plus some decent battle-tested 3rd party libraries, to create something decent.

I personally love the "library over framework" mindset, and I find Go does it best.

Also, whether you want to build a web app or cli tool, Go wins there (for me at least). And I work a lot with PHP and .NET as well and love all 3 overall.

Not to mention how easy it was for someone like me, who never wrote Go before, to get up and running with it quickly. Oh, did I mention that I personally love the explicit error handling, which gets a lot of hate? (Never understood why.) I can do if err != nil all day.

A big Go fan.


I like Go for this reason as well. In Python I found the Flask framework to be suitably unobtrusive enough to be nice to use (never liked Django), but deploying python is a hassle. Go is much better in that area. The error handling never bothered me either.

I think if Go shipped better support for auth/sessions in the standard library more people would use it. Having to write that code yourself (actually not very hard, but intimidating if you've never done it before) deters people and ironically the ease of creating Go packages makes it unclear which you should use if you're not going to implement it yourself.


I am a Django apologist because I grew up with Django. With that said, I'm not out to convert you, but I am genuinely curious what you don't like about it. Promise I won't refute anything; I just like to try to understand where it turned folks off.

I don't like Flask because it seems just easy enough to be really productive in the beginning, but you eventually need most of the things Django gives you. So I would rather pick up Django Rest Framework or Django Ninja than Flask or FastAPI. In those cases I jump straight to Go and use it instead, because the same library decisions on the Go side give me a lot more lift on the operations end (easy to deploy, predictable performance into thousands of requests per second if built correctly).


It's opinionated in a way I dislike. I don't actually have anything against opinionated software--tailwind is very opinionated about how you should write CSS but I like it because it matches up with how my mind works. But I find Django very jarring. I can't point to a specific thing about Django, it's more that if I were to design a framework from scratch it would look nothing like Django, so I experience a lot of friction trying to shape my ideas about how to build an application into Django's way of doing it. I have the same problem with Rails as well.

I agree with you that Django provides most things that an application will eventually need, and if I were managing a team that was starting a project from scratch I think Django would be a reasonable choice, but aesthetically something about it irritates me.


I have to learn an entire framework, and if I want to stray from what it wants, the magic makes it hard.

For one, I do migrations with raw SQL on the server; I just don't trust it any other way. I dislike ORMs, and I even dislike query builders.

But with a big framework like Django you can't remove those batteries easily, and then you've already strayed from the narrow paved road.

If I'm spinning up an API endpoint for my existing stack, I'm picking Flask (well, no, I'm picking Go, because WSGI is a pain in the... to deploy), purely because I don't need auth + rate limiting + an ORM and all that. I need endpoints exposed to do what I need; literally everything else is already handled by my API gateway and tied into our existing management dashboard.

Django may be great for spinning up a quick project, but I found I needed to stray from the paved road pretty quickly, so I picked a different tool.

This doesn't apply to everybody either; for some, Django is the correct solution.


My main gripe about go is that it's decent for the middle and late stages and really really bad to start with. You'll spend way too much time rewriting stuff you literally get for free by running "rails new" or "bundle add devise"


I love using Go for personal projects, but I keep finding myself recreating the same redis-based session storage logic, authentication, admin vs public routes, etc. Really does burn time in the beginning, even though it's fun to write the code.


There is definitely space for an opinionated set of libraries and boilerplate code for Go projects like this.

Having said that, I'd bet the Go community consensus is that you build one out yourself and reuse it. So most times I end up copying and pasting the same logic rather than recreating it.


100% this. I have a set of commonly-used code in a repository we use at work. AuthN/AuthZ, code specific to our infrastructure/architecture, common http middlewares, error types, DB wrapper, API clients, OpenAPI Server generation, etc.

However, my personal projects have a different set of code I reuse (more CLI- and tool-heavy focus), and I'm sure other environments would have an entirely different set of reused code.

On the opinionated library side of things, I did follow LiveBud for a while, and Go-Micro, but the experiences with those never really sat well with me, given how they lock you into different aspects of your stack.


> (actually not very hard, but intimidating if you've never done it before)

This stuff is never really hard, but you will create countless vulnerabilities that way. The most important job of a good framework is cutting down significantly on the ways you can get things wrong, by exposing already-safe APIs.


I'm curious in what sense you find Python difficult to deploy? My company has tons of Python APIs internally and we never have much trouble with them. They are all pretty lightly used services, so is it something about doing it at a larger scale?


Forcing something like WSGI and distributed computing is the biggest thing architecturally.

I'm currently moving 4 python microservices into a single go binary. The only reason they were ever microservices was because of WSGI and how that model works.

In any conventional language those are just different threads in the same monolith but I didn't have that choice. So instead of deploying a single binary I had to deploy microservices and a gateway and a reverse proxy and a redis instance, for an internal tool that sees maybe 5 users...

It was the wrong tool for the job.


I don't see why WSGI would enforce any of that. It just sounds like someone jumped on the microservices hype train. You can as easily fit it all in one Python program as in one Go binary.


Can I keep an effective in-memory store of data, and expect it to even be the same in-memory store, when a WSGI server is spawning multiple processes?

Or kick off long-running backend tasks in another thread?

These are things that Python forces you to do in a distributed manner, partly because of the GIL and partly because those pesky cloud-native best practices don't apply outside of the cloud...

And if I'm going to have to reimplement an HTTP server not on top of WSGI, how about just using a different language that doesn't have those same fundamental problems for the use case...

There's things I would happily use python for. A webserver just isn't one of them.

I mean, it takes about 10 lines of code to have a webserver running in Go; the language is built for it.
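Roughly those 10 lines, for reference:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }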


There is a world of difference between having to install Python and libraries (some of which may require C-based libs and compiling, therefore also installing gcc), then configuring WSGI and hoping it doesn't clash with other Python versions, or reaching for Docker and containers... versus just generating a fat binary in Go.


Deploying a Go application: copy executable to server, done.

Deploying a Python application: 1) Install python globally (oof) 2) figure out which venv system to use 3) copy project to server 4) install project to venv 5) figure out how to run it inside the venv so that it'll find the correct package versions from there instead of using the global ones.


Yea. I have worked with Python and Ruby just enough to know that deployments are a pain with those two. Go is a breeze. You do need to set up systemd etc. to run the binary, but that's it.


Some great points about the downsides of Go. Btw I was a Flask junkie in early days.


I recently put together a "stack" of libraries to use for a new webapp project in Go; here's what I ended up using:

- go-chi for routing

- pgx for the Postgres driver

- pressly/goose for migrations (I like how it can embed migrations into the binary as long as they are in SQL, not Go)

- go-jet (for type-safe SQL; sort of - it lets you write 100% Go that looks like 98% SQL)

- templ

- htmx (not Go-specific, but feels like a match made in heaven for templ)

- authboss for auth

I'm very happy with all of these choices, except maybe authboss - it's a powerful auth framework, but it took a bit of figuring out since the documentation is not very comprehensive; but it worked out in the end.


If you decide to use SQLite with one (single-threaded) writer pool and another reader pool, this may help: https://github.com/bxcodec/dbresolver

> Also sometimes if I have two tables where I know I’ll never need to do a JOIN between them, I’ll just put them in separate databases so that I can connect to them independently.

If this data belongs together and you're just doing this to improve concurrency, this may be a case where BEGIN CONCURRENT helps: https://sqlite.org/src/doc/begin-concurrent/doc/begin_concur...

If you want to experiment with BEGIN CONCURRENT in Go you could do worse than try my SQLite driver: https://github.com/ncruces/go-sqlite3

Import this package to get the version with BEGIN CONCURRENT: https://github.com/ncruces/go-sqlite3/tree/main/embed/bcw2


Does she (or anyone else here) use net/http's built-in HTTPS support? It seems implied by saying the built-in web server is used in production.


Yes, all the time! The built-in net/http has great TLS support

https://eli.thegreenplace.net/2021/go-https-servers-with-tls...


What I'm asking, I guess: is it the norm to run the Go server behind something else?


The whole GOBIN / GOPATH thing was annoying as a beginner: I just want to build this module and use that local module.

go build ./... goes where?


I believe it is recommended to not use gobin/gopath anymore.

go test ./… tests all files in the project, so I assume build does something similar.


My experience with go build ./... is that it compiles everything but it doesn't make the binaries.

> When compiling multiple packages or a single non-main package, build compiles the packages but discards the resulting object, serving only as a check that the packages can be built.

From https://pkg.go.dev/cmd/go#hdr-Compile_packages_and_dependenc...

A bit annoying when you want to build a bunch of executables, but it's not something I need often and it's easy to script.


  go build -o some/dir/ ./...
will actually output the binaries


Sick, I couldn't figure it out when I needed it so I just manually created a 20 line script to cd into each directory and go build.


Are there any "fronty" back-end (or straight client/desktop) jobs using Go? I.e. I'd like to use Go on the job, but all I see is AWS/Kubernetes/mix-of-DevOps kinds of positions.


For application development, Go is underrated. And heavily. I say this coming from Python, which is really great, but I like Go's damn simplicity more, and it shows everywhere.

What makes me happy is that lots of critical infrastructure tooling is also in Go, from databases to web servers and cluster orchestrators.


I've been using go for a month now in a new job and hate it. It feels like they learned nothing from the past 20 years of language development.

Just one huge problem is that they REPEATED Java's million/billion dollar mistake with nulls. The usual way to get HTTP headers in Go cannot distinguish between an empty header value and no header at all, because the method returns "nil" for both cases. They could've adopted option types, but instead we are back to this 90s bullshit of conflating error types with valid values. If you're programming defensively, every single object reference anywhere has to be checked for nil or risk panicking... like why, after we literally named this a billion dollar mistake in Java, why would anyone fucking do this again?

We have helper methods in our codebase just to do this:

    func checkThingIsA(ctx Context) bool {
      thing := ctx.get(thing)
      if thing == nil || thing != Thing.A {
        return false
      }
      return true
    }
In any sane language this is one line:

  ctx.get(thing).map(|x| x == Thing.A).unwrap_or(false)
In Go, we have to make helper methods for the simplest things, because the simplest one-liner becomes four lines with the nil/error check. We have 100 helpers that do some variation of that, because everything is so verbose that the code would become unreadable without them.

I hate that they made and popularized this backwards dumpster fire of a language when we should know much better by now.


> The usual way to get HTTP Headers using Go cannot distinguish between an empty header value and no header at all

HTTP headers in Go are maps, which have a built-in mechanism for checking key existence that distinguishes between empty and missing. No nils involved.

  if _, ok := headers["Content-Length"]; !ok {
    // no Content-Length header was passed
  }


It looks like this idiom is specifically for maps, but the recommended way (by their own docs) to access headers is the `.Get` method, which canonicalizes header names before looking at the map. That method predictably does not support the idiom, because they made a stupid special-case idiom that applies only to maps, instead of using an Option type that would work in all cases, including custom types with similar requirements.

Like this doesn't work:

    nonexist, ok := req.Header.Get("X-Custom-Header")


Sounds like you're actually just mad that it didn't have generics from the beginning. That horse has been beaten to death for years.

I don't think it's fair though to characterize different points on a tradeoff curve as "stupid". If you're mad that you're forced to use a language with inappropriate tradeoffs for your context, then the right person to blame would be whoever picked it for your project to begin with.


I don't think your example is very compelling but I completely agree with your general point.

I read the Go book by Donovan and Kernighan and I have been working full-time in Go for the last year (my work is otherwise interesting so this is tolerable). It is painfully obvious that the authors are stuck in 1986 in terms of language design. Go is C with modernized tooling (in some ways it's worse...).

It's a horrible idea that has been extremely well executed. And the idea is essentially to make a language as easy as possible for people with imperative language brain damage to learn, make it as simple as possible and then make it simpler than that.

A good example is that despite taking almost everything verbatim from C, the authors decided that the ability to specify that some variable is read-only (i.e. `const`) is "not useful", so one of the few redeeming qualities of C is simply absent from Go.


Go does have const: https://go.dev/tour/basics/15


> people with imperative language brain damage to learn, ..

Perhaps this attitude is why functional languages are not as popular as they could be.


Go as a language is not fun at all. Nor very good. It has a weaker type system and fewer of the productivity- and readability-boosting language features than C#, Java, Kotlin and TypeScript, and no null checks.

Go as a runtime is outstanding.

Go's tooling, stability and governance are very good.

Nothing is perfect. Enter into your compromise.


I understand the pragmatic point of view, but that doesn't mean I can't point out its flaws, or that this was some necessary compromise - it was a choice. There's no reason the Go platform couldn't be as stable as it is and also not be a braindead language from the 90s.

As an example Rust has its flaws but it does have all the modern language niceties with great tooling, build system and a solid platform.


Sounds like you are storing interfaces in the context. I wouldn't do that. I do hit the occasional nil reference error, but it is usually very rare. If you deal with concrete types, not interfaces, you don't have to worry about that very weird nil but not nil thing. And always use constructors.


I fully agree that Go's error handling needs improvement, but you have gone into a rage about the language while fully forgetting the VERY basic Go "comma ok" idiom used everywhere.

Please read Effective Go https://go.dev/doc/effective_go before making production software.


Your "sane" language looks quite insane to me, unreadable mess at a glance.


It almost looks like Rust syntax to me, which is only one way to do it. A concise syntax is always possible if it's prioritized, like `(ctx.get(thing)? == Thing.A) ?? false` or `ctx.get(thing)?.(_ == Thing.A)`. Actual Rust programmers would also prefer to check against the explicit value whenever possible: `thing == Some(Thing.A)`.


Yes, language design should cater to people who have spent a little bit of time using and learning it, because they represent the majority of hours spent using that language. What a language looks like to someone who's never seen it is sort of irrelevant. That code could also have been written more imperatively, and probably would be by many people for those who don't like the functional style.


Not to talk about the lack of sake error handling in Go.


Saké is rarely an error.


> I hate that they made and popularized this backwards dumpster fire ..

Well, I think you should hate the fact that the authors of other languages, despite using better programming paradigms and the advancements of the last two decades, have not been able to popularize their efforts enough to wipe out Go.

The Go devs did what they did and made it open source. And from what I see, they did not do any relentless marketing to make it popular.



