Go run (breadchris.com)
199 points by breadchris 9 months ago | 168 comments



`go run main.go` breaks if your `main` package is split across multiple files.

Use `go run .` instead - it's shorter and it works with multiple files
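
A minimal illustration of the difference (hypothetical file names; output is illustrative):

    $ ls
    main.go  greet.go          # both files declare `package main`

    $ go run main.go
    # command-line-arguments
    ./main.go:6:2: undefined: greet

    $ go run .
    hello from greet.go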


This is a good tip! It also captures what has been frustrating about golang for me. The language feels a bit stuck between simple default cases and allowing complexity.

I feel like there are two relatively distinct populations of go developer: those who love how easy it is to start (true!) and those who are frustrated by the compromises the language has made to allow for more complex cases (required!). There's also a hidden third population of people who no longer sing the praises of golang as a simple, straightforward language but accept its compromises and write productive code with it. Those people, I think, write fewer viral blog posts.


I don't get it. The default case is simple (go run .), and the complex case (specifying the relevant .go files one by one) is a little bit more complex. What's frustrating with that?


It's not this specifically - it's when this "kind of thing" comes up in golang in particular. Every language needs to pick when to hold back complexity and when to make the user deal with it and I just personally dislike golang's particular balance. I also respect the work! It just irritates me.


> I don't get it. The default case is simple (go run .), and the complex case (specifying the relevant .go files one by one) is a little bit more complex. What's frustrating with that?

The problem is many Go tutorials start out by teaching the complex case first and leave the simple case to later (if they cover it at all).


Maybe it's because the simple way requires knowledge of packages, which is perhaps covered later, since many tutorials go straight to "go run helloworld.go".


You don't need to cover packages. You could just say this: the standard convention is that the source for each Go program lives in its own directory, and execution starts from the `main` function, which conventionally lives in a file called `main.go`. To run the Go program in the current directory, run `go run .`

Introducing the concept of "packages", and the fact that the directory is a package, can be deferred until later.
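
For example, a sketch of that convention (hypothetical names):

    hello/
        main.go    // package main; defines func main()
        greet.go   // package main; helpers used by main()

    $ cd hello && go run .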


Yup. This was the main thing that bit me when I was first getting into Go. File names are kind of like classes, and directories are kind of like modules. The encapsulation sits at a slightly different layer than you might expect.


In many ways this just took what people were already doing with other languages and codified it.

The main benefit is that you can easily figure out what is where!


> The main benefit is that you can easily figure out what is where!

Navigating unfamiliar Go codebases yields very few surprises: things are almost always where I expect to find them, and it's great! This is hardly the case with other languages, where I have to rely on grep or trace function calls.


Your expectations may vary depending on where you come from. There are many places one can come from. It's advisable to minimize expectations or assumptions when learning something new, as they could impede your learning process.


It's natural to have expectations based on your experiences. I think the person you replied to is just trying to help people who might misunderstand go based on those expectations. I think you're getting unnecessarily deep here.


In that case, it would have been necessary to specify the language they come from. The only hints given were 'classes' and 'modules'. Is it Java, Python, JavaScript, C++, Swift, Ruby, VB.NET? All these languages have classes and modules, and they all draw the line of encapsulation at different layers.


Yes, absolutely. Thank you for clarifying what I was saying. Regardless of where you are coming from, it's likely to be the places where there are unstated assumptions / cultural norms that differ from your own where you will experience the biggest "lift" when encountering a new technology. The more a culture aligns with a "lowest common denominator," the more it will be readily understood, and the less it does, the more it will act as an exclusivity gate.

Either could be desirable or undesirable depending on your goals. It's good to be aware of the dynamics so that you can make an informed choice about how to present your code.


On the other hand, learning something tabula rasa takes way longer than if you scaffold it with assumptions. Otherwise, each new skill/language you learn would take as long as the first one.


Fun fact: `go run .` was retrofitted after `go run main.go` because go run was initially designed to only accept explicit filenames as arguments [1]. I can't imagine how people used to use `go run` without the ability to specify whole packages (globs don't work well because they pull in test files as well).

> Potential design based on discussion with proposal review:

> go run [go flags] [single-package-or-*.go-list] [subprocess flags]

before that it was just

> go run [go flags] [*.go-list] [subprocess flags]

[1] https://github.com/golang/go/issues/22726


Wish `go run` worked for hashbang (shebang) scripts. But you have to resort to hacks to make it work, like the ones https://gist.github.com/posener/73ffd326d88483df6b1cb66e8ed1... describes


This is not a tip. It comes straight from

    $ go help run

    usage: go run [build flags] [-exec xprog] package [arguments...]

    Run compiles and runs the named main Go package. Typically the package
    is specified as a list of .go source files from a single directory, but
    it may also be an import path, file system path, or pattern matching a
    single known package, as in 'go run .' or 'go run my/cmd'.

Nothing is said about go run main.go.


And if your main.go is in a sub-directory, e.g. cmd/pathto/cli/main.go:

    $ go run ./cmd/pathto/cli


Great point; this is the one thing I wish were more intuitive. You don't have to do this if main.go is the only file in the main package and all other code is referenced via packages.


Isn't it supposed to be `go run ./...`?

(It's been a long time since I used Go; back then, at least, using `.` instead of `./...` could cause subtle issues.)


you can also do `go run hello.go world.go`, but yeah `go run .` is probably better


Huh. Learned something today. Thank you.


I don't think it's simple. Just `go run` would be far simpler. Right now I first have to figure out if it's `go run .` or `go run cmd/main.go` or some other thing.


This one was voted down, but there is a good point here. There is no way to figure out what binaries there are (maybe you could make `go list` show all the `package main`s? Not sure).

Beyond that there's no way to discover what build flags may be needed to give you the binary that the developer intended. Be it tags, ldflags, cgo support.


Temporary gopath, "go install ./...", list contents of bin.


I agree, especially since 'go build', 'go install', 'go test', 'go generate', 'go vet', 'go fmt', etc. will do what you expect when you run them without parameters. I think the difference may be that 'go run' expects one package to run, while the others can take multiple. If you have a library (no top-level main package) that comes with two tools foo and bar, I don't think 'go run' could know which package to run. Example tree (with a usage sketch after it):

    go.mod
    go.sum
    library.go     // package library
    cmd/foo/foo.go // package main
    cmd/bar/bar.go // package main
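
With that layout you have to name the package you want (a sketch based on the tree above):

    $ go run ./cmd/foo      # runs the foo tool
    $ go run ./cmd/bar      # runs the bar tool
    $ go run .              # fails: the root package isn't a main package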


This is what Rust does with `cargo run`; I think Poetry for Python tries to do the same thing, but I haven't used it in a while.


That is how `cargo run` for rust works by default. As long as there is just one executable. If you have multiple executables in a project you do need to specify which one though.


'go run ./cmd/COMMAND' is what I like best. I normally don't bother descending further than the root of a project in the shell because all my other interactions are in Emacs rather than a terminal (I rarely use the shell in Emacs because the shell is so much less powerful for most stuff). Maybe that's weird. I think folks who live in vi spend a lot more time bouncing around directories in the shell.


Or, even better, `./main`


That's not better, just confusing: the "main" binary doesn't exist (which is the whole point of go run), and you'd always have to type the full name as you can't autocomplete it.


Who said it needs to be a binary? Name the source file "main" and put a shebang at the top.


> I don't think it's simple.

I mean, it kinda is. If you're a Go developer that went further than hello world, you will probably be aware of these two possibilities:

If . contains a "main" package then it's "go run .", otherwise the source for the binaries probably resides in "./cmd/X" and you have to run "go run ./cmd/X".

You can probably find Go projects that don't follow these rules, but I doubt anyone would want to interact with them.


You also have to know whether it's `cd cmd; go run main.go` or just `go run cmd/main.go`. And you have to know to set CGO_ENABLED=0 if you want to build on macOS and run it on Linux.


It's not really a Go thing but a build system thing. It's useful to have your build system know what is an "executable target" and how to run it.

Bazel does this for _all_ languages. I'm sure most other modern generic build systems do too.

AFAIK "go run" is trivial and for single-file scripts it's fine. But for more complex cases (like the NPM equivalent thing) I actually think it's a bit of a shame that it's even needed. I don't really know why we have per-ecosystem build systems (Maven, Go, cargo, whatever the hell you're supposed to do in Python these days, the nebula of web front-end tooling, etc etc).

Admittedly I do not really know the ins and outs of any of these systems in detail. I'm sure there are some good reasons why they exist under the hood.


I'm not sure there really is a good technical reason they exist. It's cultural. It basically goes like this:

- the inventor of new language 'coolang' has a way that they make their project

- it's kinda messy, so they clean it up into a tidy script with a few clear and straightforward commands and/or flags, and give you "cool build," "cool install" for making sure all the necessary dependencies are present, etc

- a community builds around coolang organically

- everybody is so used to running "cool build" that that's just how it's done. New features get added around these conventions

It's cultural, that's all it is. But like all small, tight-knit communities, it's important to understand the culture of the community in order to engage with it on its own terms. It's just humans being humans.


There's also a technical reason, which is that the build system is written in the language it targets. So the cool tool is written in coolang. That's obviously not required, you could use any programming language for the cool tool, it just happens that all people that care about the cool tool, understand the needs of the ecosystem, have issues with missing features etc. already have a non zero intersection of languages they know of: they all know coolang.

If coolang decided to try to add coolang support to Bazel instead, they would probably have to learn Java[1]. Current maintainers or contributors to Bazel don't know coolang, and they don't care about it much, especially in the early stage. And maybe coolang developers don't know Java, or even actively hate it with a passion (that's why they were on the market for a new language). And even if some coolang developer decided to contribute to Bazel, the barrier would be much higher: being a mature build system with so many features and different needs, surely working in it is going to be complex; there will be many different concepts, and layers, and compromises, and APIs to work with. So for them it just makes more sense to use coolang, so that all coolang developers can contribute to it, since they have a real need for the cool tool to improve.

[1] I know nothing of Bazel. So just bear with the example even if it's technically not correct.


A nit (hopefully a welcome one given that it supports your statement) is that Bazel's rules are written in a language called Starlark, which has Python syntax, just without classes and with a bunch of limitations around loops and recursion.

The core of your point is correct: who wants to both support an additional tool chain and an additional language for building things? Terrible sell.

Go itself is a little bit of an edge case because they recommend leaning on Make, but ironically they do not use Make for its intended purpose and all the (actually good) functionality that Make gives you is reimplemented inside the go tool.


The build system should come with it. Even if it requires me to follow conventions, it’s magnitudes better than rolling my own. I can add to it if I need to.

The worst offenders are C and C++ projects. Make? CMake? You’re on your own. During development, it’s so good to be able to just runtime run source like in go and bun.


As an embedded developer I shudder to think of all of the work that would go into /runtime run source/ to have it build objects, link them into some kind of format, convert the format to a series of flash addresses and data, connect to my JTAG over a network, halt execution, erase the flash, load the executable file into flash, verify the load, and try to signal a PMIC or other chip to reset the device to start it back up.

Or you could just read "Linking a single object file" from the GNU Make manual's catalogue of rules, which describes how to do exactly what you want with the caveat that you still have to run the program after it's built and linked: https://www.gnu.org/software/make/manual/html_node/Catalogue...
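
Concretely, for the trivial case that built-in rule means no Makefile is needed at all (a sketch; the echoed command may differ by platform):

    $ ls
    hello.c
    $ make hello
    cc     hello.c   -o hello
    $ ./hello
    Hello, world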


You have to do that work anyway... Why not encode it in your build system?

(Admittedly I have never tried this with a modern build system. My real world approach for that would be a janky phony rule in a Makefile, with a bunch of MAKE_VARS the user has to set on the cmdline/in the environment to set up the toolchain/serial port etc. But in principle I have always believed it should be possible to make this process as easy as compilation)


Are those vars discoverable? As in can you infer what they should be? If you can’t, or are building for all unknown possibilities, wouldn’t it be safer to enumerate known var values to perform that?

Why must everything be explicit? Wrap it all up into one var.

    cmake -DBUILD_FOR=esp32 .

Then provide your own little runtime.sh.

For other languages and build systems, a runtime run sourcefile is the easiest way to get someone going, leave the edge cases out. Just get it running so they can hack on it.


No, you have to manually document them and then the docs go out of date and you can't realistically set up CI for this little runner script so certain use cases get broken and blah blah blah and it sucks!

Same for runtime.sh.

This is why the 'runtime run source' is so useful! I'm totally on board with it. I just don't think we need one implementation per ecosystem.

One runner per project (the 'make run' approach) is worse than one per ecosystem, which is in turn worse than just having one for everything!


As I stated, for C/C++ folks like their build systems the way they are.

For everyone else, they want runtime run sourcefile and can then build scripts around that to package it up how they want to. For local development, I shouldn't have to wait 40 minutes for a compile to see a div change.

The toolchains that would benefit from being able to run quickly, compile quickly, are what we want. Having to do 15 steps to get your code on an embedded device is just part of the territory. The rest of us have pipelines.


Have you ever used tinygo? I have been curious how much that project gets used. It seems to me that rust is probably going to be the language of choice at some point in the future.


I looked at that recently for a project I'm working on, but walked away when I found that important parts of the net package are pretty much nonexistent on ESP32.

You know, like net/http, for example...

I might have misread the docs, but somehow I doubt it.


The Go standard library's net/http package doesn't yet compile due to some dependency issues, but tinygo provides its own net/http package to stand in as a replacement[1].

[1] https://pkg.go.dev/tinygo.org/x/drivers/net/http


yeah it looks like you are right: https://arc.net/l/quote/veycnsqt


Make and CMake are not examples of what I was talking about!

In Bazel (and, I assume, other similar systems), you can just "bazel run target" and it always works. Doesn't matter what languages the thing is written in.

Make and CMake are certainly not like that. (Yes, you can have a "run" target, but that's not the same thing.)

So my minor gripe is that 'runtime run source' does not need to be a per-runtime thing.


I use Bazel's friend at work but can't imagine using it without also having build_cleaner. Is there an equivalent in the wild?


I am not sure actually, there is this but it doesn't seem to support C++ or Python in the way build_cleaner does, which sounds like it would indeed be kinda annoying: https://github.com/bazelbuild/bazel-gazelle


> I'm sure there are some good reasons why they exist under the hood.

User expectations, mainly. In fact, Google didn't even use the go tool internally (which I expect hasn't changed), using its internal Blaze build system (the ancestor of Bazel) instead. The go tool was created only for the wider audience.


Bazel is pretty cool; I have seen it work at Uber for the Go monorepo at impressive scale (and of course it works for Google). When I need to scale up the build process this is the tool I'll reach for, but for starting out it is another technology that someone would have to learn.


Bazel is one of those tools that either someone teaches you or you need a PhD to figure it out. Which is massively frustrating.

The open-source rulesets are also not fantastic in my opinion compared to the Google ones, so most people's first impression of the tool is sub-par.


I think the issue with Bazel may be that it works extremely well at Google where for 80% of the code at the company, building is just a completely solved problem, with amazing tooling integration, and it's glorious.

Whereas in the open source there is much more manual setup to get it working smoothly. And the manual setup is much easier in the tool your ecosystem already knows.

So it may not actually be a wonderful system in and of itself (outside of the Google monorepo). My comment was mainly about the principle rather than an endorsement to adopt Bazel!


> Yeah, and then what happens if you want to use modern syntax like esmodule, or maybe you want to use types with typescript? You are going to have to use npm.

I don't understand what the problem is here? Every installation of Node comes bundled with npm. If it doesn't, that is a package maintenance problem.

> Fun fact: One of the understated features go run is that it will automatically download any dependencies the code references; how cool is that!

This feels like a massive antipattern. Why is this lauded as a "feature"? Why do I want my build system to automatically reach out to the Internet and download random code, without an explicit request to do so like "npm install"?

This is even more antipattern-ish when you consider that Go dependencies are literally just repos on Github (or possibly on some random git server), instead of a centralized and moderated registry like npmjs.org.

> amazing, for js we not only have npm, yarn, pnpm, and bower (am I missing any?) but we also have completely new runtimes bun and deno.

So it's now considered bad to have multiple implementations of an open standard, compared to the exclusively-Google-developed Go runtime? This sounds akin to arguing in favor of a monopoly over a competitive market with consumer choice.


Everyone who clones a node project will call npm install before they call npm run. Having a separate install command doesn't make it more secure; it's just one more thing for newbies to learn and another thing to go wrong when you pull master, someone added a package, and you call run without installing again.


If you pull a Node project that depends on malware, "npm install" will fail, assuming npmjs has unpublished or withdrawn the malicious package.

There is no such safeguard when your dependency system downloads random code from random git repos. Even worse so when this is done automatically, when a developer doesn't expect a command to do so.

If I run a command that depends on a third party library or resource, and I don't have that library, I fully expect it to fail. Is that not basically universal behavior in Unix?


The go global module proxy serves the same purpose as you describe with npm.

https://proxy.golang.org/

The Go team is providing the following services run by Google: a module mirror for accelerating Go module downloads, an index for discovering new modules, and a global go.sum database for authenticating module content.


> This feels like a massive antipattern. Why is this lauded as a "feature"? Why do I want my build system to automatically reach out to the Internet and download random code, without an explicit request to do so like "npm install"?

It's not random code, it's code you've expressly used.


Plus, there's likely far less "random" code in the tree, due to go packages generally having fewer dependencies. It's a cultural thing, yes, but it's there in practice.


> I don't understand what the problem is here. Every installation of Node comes bundled with npm. If it doesn't, that is a package maintenance problem.

Node and npm are two commands, and when you go to find packages, you will see people telling you to use pnpm, yarn, or npm. I would expect one tool to do this for me, especially for the most popular language in the world.

> This feels like a massive antipattern. Why is this lauded as a "feature"? Why do I want my build system to automatically reach out to the Internet and download random code without an explicit request, like "npm install"?

https://chat.openai.com/share/8bd82c15-c939-4e82-aad8-086995...

> This is even more antipattern-ish when you consider that Go dependencies are literally just repos on GitHub (or possibly on some random git server) instead of a centralized and moderated registry like npmjs.org.

I find this to lend itself to a more decentralized future. I see notable projects owning their code and distributing it positively. You still need the source code for something to run at the end of the day. If you are worried about the code continuing to be there, that is the purpose of a proxy cache, which makes it very easy: https://proxy.golang.org/. Also, the code is distributed on github. So, if github working is a concern, we probably have much bigger problems.

> So it's now considered harmful to have multiple implementations of an open standard, compared to the exclusively Google-developed Go runtime? This sounds akin to arguing for a monopoly over a competitive market with consumer choice.

A hammer looks like a hammer because that is the most effective way to hit a nail. Since I am "building" code, I want my tools to feel as reliable as a hammer. I will not argue that Go is the best language ever invented; I see it as the most accessible language to make things happen fast and reliably until a better one emerges. When that happens, AI-generated refactoring tools will be so good, and Go code is so quickly parseable that I will let it loose in my Go code bases to refactor them into that language.


A hammer is not a screwdriver; you want the screwdriver and the hammer to blend into one tool.

Does your hammer have a built-in car or drone to bring nails from the store? No. That's why some people think it's reasonable to split programs that have different modes of operation.

Your choice, but note that it's not a universally accepted truth or demand.


So on one hand, you're saying decentralization is a good thing, but you're also saying that a centralized proxy solves all these problems?

Also, a proxy and a package registry are not the same thing.


> So it's now considered bad to have multiple implementations of an open standard

Seeing how many CPU cycles have been wasted on autoconf generating code and testing for various ancient/obsolete C compilers and configurations has taught me that yes, it's not a good thing to have multiple slightly incompatible implementations.


> Every installation of Node comes bundled with npm. If it doesn't, that is a package maintenance problem.

Not really.

Ubuntu's node comes without npm, and to install the latter it wants to pull in about a hundred dependencies. Mind you, this is still one of the most popular distros. Would you call their approach "a problem"?


This is not true. Installing Node on Ubuntu with a package manager using the official instructions [0] installs both Node and npm.

If you are using an unofficial repository, and that maintainer did not include npm in his Node package, then yes I would consider that a problem.

> to install the latter it wants to get about a hundred of dependencies

Yeah, as software engineers, let's discourage code reuse, shall we?

Node and npm are not trivial pieces of software. I expect them to have lots of dependencies. This is hardly surprising.

[0] https://nodejs.org/en/download/package-manager#debian-and-ub...


Thank you for saying this!


Golang is such an elegant language.

But comparing it to JavaScript isn't fair. JavaScript has paid my bills for years, but it's held together by collective hope.

The only thing missing is a decent mobile framework. I'm using Fyne, but it just looks dated. At least for my current app it's functional though.


Eh, I still don't get it.

Go seems too high level for low level work - use Rust, manage your own memory, no garbage collector.

Go also seems too low level for high level work - use TypeScript with all the nifty ES6 features, powerful type system, exceptions, etc..

Where does Go fit in here?


When Node/Typescript isn't performant enough and you need to move a lot of bytes around, but training a developer team on Rust seems like a massive organizational expenditure.

Go is probably the most efficient language for 0 -> Production. The standard library has everything you need to build a production backend service. There's zero build system shenanigans. Anyone who's seen a C-like language can start writing mediocre code today, and be pretty well off in 2 weeks.

So far at Notion we're solving all our problems with Typescript/NodeJS, but I'm currently working on a distributed system with Consul that needs to move a lot of bytes in and out of files in somewhat complicated ways, and boy howdy am I feeling the painful performance ceiling of single-core NodeJS, and I'm sure if I sat down to rewrite the performance sensitive part in Go, I'd be done in a few days and it'll do 10x the throughput of the NodeJS service with the same resources.


I mostly agree but... you can't use worker threads or something with Node to distribute the work? It's only like one line to submit a job to a worker thread, how is that much more than "go thing()"?


There’s tools for parallel execution in Javascript like Worker or node:worker_threads but they have two big drawbacks that make them somewhere between annoying and useless:

1. No shared objects between threads. You can share non-resizable contiguous byte arrays (SharedArrayBuffer), but 98% of existing code makes normal objects and arrays, and if you want to send those to another thread, you pay a serialization memcopy round trip (you can't just cast a buffer in this language). This severely limits threading to "shared nothing" style workloads. Can you pass a node HTTP request to another thread? No :(

2. Each thread worker needs to boot up from scratch from its own entry point file. This forces some pretty weird code layout and imposes a big boilerplate overhead as well as runtime overhead. And remember - no sharing! So if your threads need a common resource like a Postgres connection pool, they’re going to create their own copy.


Yeah, I agree. I made it work for one big system, but Java or Go would have been nicer, lol. Luckily I was able to address point 2: you can avoid restarting the worker each time, which helps. Just keep it alive and submit work to it, and then coordinate state in the parent worker.


Node is single threaded.

No more needs to be said after that.


Onboarding 10 Typescript backend developers is like onboarding 10 developers from entirely different languages. Some are _heavily_ OOP driven, and turn literally everything into a class. Some are heavy on functional programming and start using curried functions everywhere. Others are used to classic express servers, while another group has only ever worked with graphql/prisma, or has only deployed on lambda functions and hasn't really seen express-based routing.

Literally everyone comes with their own project settings; they all have to get used to that specific folder structure, or those eslint/prettier settings. And on top of that, error handling in JS/TS is miles behind Go's (and that's despite Go's error handling also being clunky and not as elegant as Rust's or OCaml's, but still much much better than JS's). It is _extremely_ easy to mess up a typescript project; it's significantly harder to mess up a go project (although obviously still easily possible :)

Btw. I also don't hate typescript/JS, I think it's a great language that allows for a big variety of expressiveness in entirely different programming domains, I personally use it all the time and enjoy it. I just don't think it's a particularly great language to scale a team with.


Go fits into when you don't want to comb through 50-line stack traces that exclusively reference nested dependency after nested dependency.


Go fits in well in the backend where Java would have been, but with a better stdlib, simpler tooling and a smaller deployment footprint.

IMHO it's great for docker-based services. And that's a pretty big marketplace.


Rust is also really really hard.

A lot of this is just syntax, but I've just about come to the conclusion that I'm too stupid to learn it.

Golang is very easy, I can generate small binaries to do cool things without too much code.

Typescript is dragged down by the legacy of JavaScript, things randomly break all the time, configuring babel is the stuff of nightmares.


I felt the same way about rust until I started working through https://google.github.io/comprehensive-rust/ and in a couple of days had written several working rust programs (trivial ones).

It took me from a couple years of "I should learn rust" to "I've written some rust and ran rust programs" in a few hours.


> Go also seems too low level for high level work - use TypeScript with all the nifty ES6 features, powerful type system, exceptions, etc..

You can solve most problems with if-else and loops. This wasn't something I was aware of before Go, but now I see how simple it is and can be.

It strips the problem domain down to its core because you're forced to express the solution in the simplest form it can be. I know a lot of Go haters throw vitriol for exactly this reason (see fasterthanli.me/articles/lies-we-tell-ourselves-to-keep-using-golang), but the truth is simplicity really gets you 80% of the way and most of the time that's enough.


Go makes error handling explicit, which is a very important part of development. Not only does this make you more conscious about what you need to do when something goes wrong, it also makes code more maintainable in my opinion.

I strongly prefer go error handling compared to a throws-type-error-handling language.

Also, with this comment I hope to get some pushback: I haven't kept up with the latest typescript, python or any other language features. I'm talking from an almost purely ignorant perspective, so I hope to learn a bit more about how developing with other languages feels.
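
A minimal sketch of that explicit style (hypothetical file name and error message):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func loadConfig(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            // the decision happens right here: wrap, log, retry, or give up
            return nil, fmt.Errorf("opening config %q: %w", path, err)
        }
        defer f.Close()
        return io.ReadAll(f)
    }

    func main() {
        if _, err := loadConfig("config.json"); err != nil {
            fmt.Println("error:", err)
        }
    }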


> Also, with this comment I hope to get some pushback: I haven't kept up with the latest typescript, python or any other language features. I'm talking from almost a purely ignorant perspective so I hope to learn a bit more on how developing with other languages feels like.

Can't push back there - every other language I'm aware of uses at least one (and often both) of "throwing exceptions" or "returning Result types which either contain your actual data, or an Error", both of which let you just write your logic and wrap it in a single handler rather than repeating `if err != nil { return nil, err }` everywhere (or if you _want_ to handle each error individually, you can!)

I've gradually reached the conclusion that Gophers really do just prefer Go's verbose, repetitive approach. And, y'know what - good luck to y'all. It's not for me, but I'm trying to get better at just letting people enjoy things :)


How does that work with try/catch? try/catch is significantly more verbose than just `if err != nil { /* do something */ }` imo, and also much more brittle.

Agree re: Result types in Rust and OCaml, etc. Those are better in my view too. And yes, you can define a Result<any> return type in Typescript as well (and in fact that's mostly what I do when I write typescript, and it works ok), but unlike Rust this is definitely not 'idiomatic typescript/js' and other developers who might not be familiar with Result types will probably initially dislike and then probably dismiss it.


> try/catch is significantly more verbose than just if err != nil // do something imo...

Further to what the other replier said (about the ability to bubble-up errors), try-catch also lets you handle multiple errors in one block:

    try {
      fileOutput1 = getSomethingFromFileSystem()
      fileOutput2 = getSomethingElseFromFileSystem()
      fileOutput3 = ...
    } catch (FileSystemException e) {
      // handle
    }

If I understand it correctly, GoLang's idiom would claim that this is a bad thing to do, and each error should be handled individually. Which - sure! That's _usually_ a reasonable, defensible, and safe position. But that means that GoLang's approach is always as verbose as it's possible to be, whereas try/catch at least has the _possibility_ to condense handling.

> ...and also much more brittle

Can you be specific about what you mean by "brittle"? To me, it denotes a lack of flexibility - that is, if thing1 changes in an unexpected-but-still-legal way, then thing2 is likely to break. I can't see how that applies to try/catch-vs err-check - in both cases:

* The exception/error is bound to a variable

* (in most well-typed languages) the Type of the exception is checked by the type system, and/or (in every language, inc. GoLang) properties of the exception are checked by code

* Something is done (a standard code action, a return/throw of an exception, or a program termination)

You can write a brittle GoLang check (only checking for, say, `if e.message == "a very specific error message"`), and you can write a very flexible try/catch block (with a fallback `catch (Exception e) {doSomethingGeneric()}` - or, indeed, the _most_ flexible "try-catch" is "don't even catch it, let it bubble up and let your framework/application handle it")


There is no need to have try-catch at every function invocation. One can do this only at the level at which one needs to handle the error.

In Go, every call to a function that can fail takes about 5 statements and lines. Go code tends to bloat up the screen quite a bit and eyes glaze over.

    result, err := f()
    if err != nil {
      return nil, err // I don't want to handle this here but at my caller's caller.
    }
    return result, nil


> Also, with this comment I hope to get some pushback

More of a push forward, really: if error-handling guarantees are what's driving you away from dynamically typed languages, Go is pretty much the worst place you can land that isn't C. It doesn't make you check nils, it doesn't remind you to check error values from functions that you call only for side effects (though the linter will, admittedly), and it doesn't have sum types so there's semantic ambiguity even in the common case - that is, in `data, err := fn()`, it's common to assume that at most one, and perhaps exactly one, of `data` and `err` will end up non-nil, but that's not a constraint you can express with the type system.
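
A tiny sketch (hypothetical names) of that ambiguity: the signature happily accepts a function that returns both a value and an error, or neither.

    package main

    import (
        "errors"
        "fmt"
    )

    type data struct{ n int }

    // fn compiles fine even though it breaks the usual
    // "exactly one of value/error is non-nil" convention.
    func fn() (*data, error) {
        return &data{n: 1}, errors.New("partial failure")
    }

    func main() {
        d, err := fn()
        fmt.Println(d, err) // both non-nil; the caller has to guess which one to trust
    }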


I agree about not being able to rule out nil checks; I just realized how arbitrary I am with nil checks, otherwise it can get very nil-check bloated in some common scenarios. However, the other two haven't been an issue for me so far.

I'd love the chance to explore the nuances of what other tradeoffs come with going with any other language, but that certainly requires more nuance than a deeply nested comment response might trigger.

But just trying my luck: what do you think is worth trading off against the more exhaustive error handling? (Regardless of dynamically typed or not)


I look at Go like Python + multicore world. (and nice to have speed from compilation vs JIT). And in my career that's almost exactly what we've used it for: rewriting higher load services from Python (2.7 at the time) to Go.


> Where does Go fit in here?

Where you move past academic language discussion and start using the tooling. Typescript is a pretty nice language but the tooling around it is practically unusable. It's laughable how bad it is. Outside of browser work, you're going to pick Go – and still would even if they made the language 10x more flawed – over Typescript every time just to not have to deal with that ecosystem.

Granted, people are trying to make it better. Dahl going on his Go kick and wanting to copy its lessons in the Typescript world via Deno has lit a fire, but there is still a lot of work to do.


> Typescript is a pretty nice language but the tooling around it is practically unusable.

I work mainly with node/ts and totally agree; maybe just to add that by "tooling" I mean the whole ecosystem as well. This problem is not visible if you work with either a relatively small code base or a new code base. But as soon as you have something old and big you'll see where the pain comes from.


Can you elaborate on what's hard about TS tooling? It's really easy to just use `npx tsc`


Probably not. I live in a parallel universe where `npx tsc` does nothing except spit out available arguments. I can first `npx tsc --init`, after which `npx tsc` converts the TS files into corresponding JS files, but that puts you no further ahead. You still need tooling to do anything with those files. In a universe where `npx tsc` knows what tool you need every time you run it – something completely incomprehensible in this universe – it is undoubtedly also impossible for those in that universe to understand what we go through in this one.


What in practice causes the most pain for me are the various module formats in combination with TS. Just getting my test runner (Mocha) and Node and the bundler and... to work with TS and the chosen module format is always _not_ fun. Combined with package updates that break the current working solution because they now natively support ES modules. I hope these problems will all disappear in the future, but I'm somewhat sceptical. And TS is slow - not C++, Haskell and Rust slow, but still. But I never used TS/Node for anything big (backendy), just small frontends and VS Code extensions, where the time spent getting everything set up to work takes a relatively large part of the "actual" work.


For TypeScript, "deno run" seems much the same?

(It's only a subset of the JavaScript ecosystem, but you can import a lot of npms nowadays.)


iirc a lot of deno is inspired by the good parts of go, so that's not an accident


And npx tsx index.ts for TypeScript, which supports ESM (although it does pull tsx)


Except I cannot `go run ~/that/project/over/there` as the use of go modules means I have to change directory to be inside the package first. I'm not sure why that is exactly, but it's always been a nit I've found frustrating.


Especially when you can `go run that/project/over/there@latest`

Although, with slight modification, you can `go run -C ~/that/project/over/there .`


That's mostly true. You need to at least be at the top level of the module to do go run; any higher and you get a missing go.mod error.


I totally clicked the link thinking I'd read about some benefits of... running. Yes, as in doing sports.


> But I can run node main.js? Yeah, and then what happens if you want to use modern syntax like esmodule, or maybe you want to use types with typescript? You are going to have to use npm

No, you do not. You can just use .mjs extensions for ESM. You can also run the TypeScript compiler to transpile your code and then run it with node. You can even use loaders, etc.

Saying you are going to have to use the included package manager in node is probably the weakest argument for using go over node.

Can you run some language superset over go magically without some transpilation? No, you cannot.

You cannot build an argument comparing JS-to-TS vs Go; it doesn't follow.


Yeah, I was going to say it's unfair from the start... node is anyway a runtime for a language... go is a language in itself, and also happens to compile to something much more flexibly runnable...


Clicking is my favorite part of JavaScript. I just move my mouse onto some blue text and click and the software that I want to use is installed/updated and runs, usually in under a second.

In the 50+ year history of software development I haven't heard of any other software stack that has been able to realize this is important. go run is close but it's still 10 times slower, maybe even 100 times slower, depending on whether you want to count the git clone and how good you are at typing.


What does blue text have to do with JavaScript? You can create such a straightforward tool for any language, and it's running shell commands under the hood in all cases.


here's a gray text that will install and run a javascript app when you click it, in fractions of a second: https://natto.dev/


None of this is part of JavaScript; Golang does most of what you describe.


> One of the understated features go run is that it will automatically download any dependencies the code references; how cool is that!

All this plus talk about non-standard JS runtimes like Node, but no mention that this is how browsers have worked almost forever.


JavaScript started out as an interpreted language but ended up more like a compiled language due to minifying, TypeScript, JSX/TSX, and so on. So it's not simple anymore.

At this point, URL imports are actually bad due to the confusion between source and compiled code. Ideally, imports should always point to source code. Bundling / minification should happen at the application level; it's not a library concern.

So in that sense, Go's a lot cleaner since it's always been a compiled language.


Go (the language) is a lot "cleaner" (than JavaScript, the language—and not the various runtimes, previously mentioned in the earlier comment), because with Go (the language), there's more code mangling going on.


This isn't a benefit of go, but rather a drawback of the counter-example of typescript... All tools generally designed to work for creating small utilities ({ba,z,...}sh, python, perl, go, swift, ...) have this feature.


Most of these examples don’t automatically fetch the dependencies. Having come from Python, Go’s tools are notably simple.


cargo, sbt, bazel, and probably many others also have a `run` command that does pull dependencies and do build steps before running.


> Most of these examples don’t automatically fetch the dependencies.

Quite frankly, I don't want to automatically fetch dependencies at the same time I am running the code. IMO those should be separate steps, and combining them together in one is not a good idea.


Why not?


Because I don't want the code I'm running to change out from under me when I tell it to run because some dependency got updated (or for any other reason, for that matter). That's a recipe for disaster.

Running the code is a separate step from determining what code I am going to run; the latter includes determining exactly what versions of all dependencies I am going to run. The two should not be combined.


But the versions are locked, right? Similar to what package-lock.json etc. does. So what's the issue?


If the versions are locked, then after the first download, nothing should be downloaded again unless I explicitly change a requirement and/or a version. So after the first time with a given set of requirements and versions, I suppose "go run" would be fine since it won't actually download anything.

But for that first time, I still want to separate the two steps, for the reasons I've given elsewhere in this discussion.


What happens when the dependencies are updated and not compatible anymore?


Dependencies won't update themselves since they are locked to their versions. If the developer manually triggers an update, and the dependencies aren't compatible, either the code wouldn't compile or it'd behave weird. In both cases, what's the advantage of separating out the fetch-dependencies part?


That's why you have a go.mod file that specifies the dependencies for you. Just run go mod tidy and it generates/updates it for you. You get reproducible builds for free this way.
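
For reference, a go.mod is tiny; a sketch with a hypothetical module path and one pinned dependency:

    module example.com/hello

    go 1.21

    require github.com/google/uuid v1.6.0

The companion go.sum records checksums, so `go run` fetches exactly those versions every time.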


A Go module specifies the exact versions of its dependencies. These versions do not change unless the author explicitly updates them.


"go mod download && go run ."

What's the point in making `go run` error out when it already knows what dependencies to get and how to get them?


> "go mod download && go run ."

No: "go mod download" if I want to update dependencies; then look to see what got updated and how it will affect what I'm doing. Then "go run".

> What's the point in making `go run` error out when it already knows what dependencies to get and how to get them.

Because I don't care what "go run" knows. I care what code is going to run when I say "go run". I want that code to be the code I already know is there and understand. I don't want it to be some new code that "go run" downloads because it sees that an update to a dependency is available. Downloading that update and understanding what effects it has is something that I want to do before "go run", not as part of it.


That is the purpose of go.mod/go.sum. `go mod download` never updates anything unless you change go.mod.


> Yeah, and then what happens if you want to use modern syntax like esmodule [...] You are going to have to use npm.

Why? Node's had built-in support for ES Modules for eons now.

Everything's easy when you stick to default tooling, duh, like `node run.js` or `go run.go`. You'll lose that `go run.go` the moment you need code generation.


> You'll lose that `go run.go` the moment you need code generation.

Not really.

  //go:generate ...
It's pretty handy.
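
For example (stringer here is just an illustration; it has to be installed separately), the directive lives in a normal source file and `go generate ./...` runs it before you build or `go run`:

    package weekday

    //go:generate stringer -type=Weekday
    type Weekday int

    const (
        Sunday Weekday = iota
        Monday
        Tuesday
    )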


> even rust you need a cargo file

And a look at GitHub shows me Go has a go.mod file. I don't see the point the author wants to make; neither of them affects the build/run command.


    $ cat helper.mjs
    export const sleep = (dur) => new Promise(resolve => setTimeout(resolve, dur))

    $ cat main.mjs
    import { sleep } from './helper.mjs'
    await sleep(1000)
    console.log("Go is great; but weird throwing node under a bus here?")

    $ node main.mjs
    Go is great; but weird throwing node under a bus here?


    $ cat helper.mjs
    export const sleep = (dur) => new Promise(resolve => setTimeout(resolve, dur))

    $ cat main.mjs
    import { sleep } from './helper.js'
    await sleep(1000)
    console.log("Go is great; but weird throwing node under a bus here?")

    $ node main.mjs
    file://main.mjs:1
    import { sleep } from './helper.js'
             ^^^^^
    SyntaxError: Named export 'sleep' not found. The requested module
    './helper.js' is a CommonJS module, which may not support all
    module.exports as named exports. CommonJS modules can always be
    imported via the default export, for example using:

    import pkg from './helper.js';
    const { sleep } = pkg;

'What is CommonJS?'


This was a joke comment in the morning... but it actually just turned out to be an issue now. I was installing nanoid in a project using CommonJS and TypeScript... all Jest tests suddenly failed. So I looked at jest.config - did I need to change something in terms of transpilation? Or some new Babel config? Some other secret flag somewhere? No, because it turns out that, after `npm install nanoid`:

> Nano ID 5 works only with ESM projects, in tests or Node.js scripts. For CommonJS you need Nano ID 3.x (we still support it)

This whole module bit in node has been a total disaster. Incredibly frustrating


If I were a beginner developer, I would now have to have the tribal knowledge of the difference between .js and .mjs. I don't see anyone widely using .mjs to write their code either.


> what happens if you want to use modern syntax like esmodule, or maybe you want to use types with typescript? You are going to have to use npm.

Shout out to Bun (and Deno too?) for allowing you to treat typescript as an interpreted language. Great for scripting with all the bells and whistles.

(Go is great, just pointing out that running TS does not actually require NPM anymore)


Tangentially related: I am currently scoping out an idea for how language models could be used to augment decompilers like Ghidra.

At a surface level, this was partially an intellectually interesting project because it is similar to a language translation project; however, instead of parallel sentence pairs, I will probably be creating a parallel corpus of "decompiled" C code which will have to be aligned with the original source C code that produced the binary/object file.

Then I realized the only way I could reasonably build this corpus would be by having some sort of automated flow for building arbitrary open source C projects...

Perhaps I will attempt this project with a Go corpus instead.


An interesting project. Go binaries contain many source artifacts which make decompilation a bit more straightforward as well. I haven't seen anyone really attempt this for Go, but it would be notable research.


If it turns out that it's easier for a language model to translate "Ghidra C" into readable Go code than to deal with CMake/Bazel/GNU autoconf/Ninja/Meson/etc, I wonder if that says more about the language model or the state of C/C++ toolchains...


You can build the equivalent simple C++ program with no Makefile at all by just calling `make main`, thanks to make's built-in rules.

Though TBF then you have to type ./main, and so then you want to do `make main && ./main` and then...


>so then you want to do `make main && ./main` and then...

Shell scripts ...


Sadly I haven’t been able to write any production Go for a few years now after switching companies. However, I got bit by the seemingly innocent _platform.go “feature”. I had a file that organized a bunch of windows for a cross platform GUI app. Well it turns out something like file_windows.go only compiles on windows. Our CI environment was compiling all the code but suddenly all platforms except windows started failing.

Was funny when it was diagnosed but not so funny for the time where I was deeply confused why things broke.
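
A minimal sketch of the rule that bit them (hypothetical package and file names): the `_windows` filename suffix acts as an implicit build constraint, so the file is silently dropped on every other GOOS, and anything that references it fails to compile there. Renaming the file or using an explicit //go:build tag avoids the surprise.

    // gui_windows.go: compiled only when GOOS=windows (filename suffix rule).
    package gui

    func platformName() string { return "windows" }

    // gui_other.go: explicit constraint covering everything else.
    //go:build !windows

    package gui

    func platformName() string { return "something else" }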


Go is neat to write tools, no doubt. But I think the author is not up to date with the javascript / typescript ecosystems.

npx, tsx, deno, *.mjs and bun make it convenient to run typescript tools.

Bun has recently added a very interesting feature, bun shell: https://bun.sh/blog/the-bun-shell


"go run" was a great way to run go code as scripts. But maybe it is not now. Why? because Go 1.22 introduced a change that breaks backwards compatibility (changing the semantics of "for;;" loops). Without language version specified (such as in go.mod files), the change will often cause unintended damage.


It's quite rare to find any Go repos without go.mod nowadays. So what's the problem again?


Most Go scripts are not public.

But anyway, just because it's rare, does it deserve to be ignored?


Yeah, IMO, `go run` is a really under-appreciated part of what makes Go productive and low-friction.

That, and the ability to cross-compile without installing a cross-toolchain for the target platform. Having spent an inordinate amount of time writing build systems and compiling/distributing cross-toolchains, this is a _huge_ deal.


I love it! Go is simple. Sometimes too simple, but that works for me.

I do see makefiles periodically like the author notes, but that’s almost always related to secondary build objectives, such as cross-compile or containers etc.


This article would have worked a lot better without the second paragraph. The writer's simple pleasure of typing "go run ..." should not be predicated on believing that deno doesn't exist.


is anyone actually using Deno in production for larger projects?


does anyone run production code with 'go run'?


Depends on how you view/classify production; sometimes, yes, I do!

For example, if I refactor an overly complicated bash script into a script.go file and then execute it against production DBs/APIs with a `go run`.


Dealing like this with uncomplicated bash scripts also has a great future.


I am new to go, how do people usually run go code in production?


That's missing the point. The article isn't about an advantage for large production projects.


Combined with gosh (a Go shell interpreter), it's pretty easy to create scripts that run on all platforms and architectures, even future targets:

      go run mvdan.cc/sh/v3/cmd/gosh@latest -c ' go run github.com/mikefarah/yq/v3@latest n foo.bar.hello world | go run github.com/cezarsa/glolcat@latest'


Gosh is a dead project currently, right?


Are you assuming that based on visiting the vanity import path in a browser?

https://github.com/mvdan/sh is the repo; looks like v3.8.0 was released 2 weeks ago.


Imo this is the biggest thing deno and bun have going for them. Running typescript from the shell is so nice and easy.


Personally I've always found it easier to do:

  $ go install ./...
  $ <cmd name>


Helpful makefile directive...

  run_%:
      cd ./cmd/$* && \
      go run  .


go build is great too!

I was recently dealing with some docker containers that we needed to abuse. The app within the containers was not returning helpful errors. One quick script and a go build later, I had a portable binary that could return a reasonable error message.


I often feel like Docker shouldn't even be needed for Go apps. It's just so easy to have your dependencies in order if everything is statically linked.


Bun run


Frankly, I put all the correct commands in the package.json file based on the project.

npm run dev is what I run all the time

It was never a blocker for me


> bun run :) bun hing.ts same with python but not compiled you need to install python that’s the beauty of go for me even rust you need a cargo file

...are they claiming that you can run Go without installing Go?


Wait until you discover `nix run`. If I want to one-shot a command and not even worry about dropping into and out of a shell: `nix run nixpkgs#file some_mystery_file.xyz` will do the trick.


Go haters are on full display with this one. LOL

"go run" is yet another wonderful feature of an awesome language.


Maybe.

Now, go run.



