A well-engineered UNIX-y "pipeline" is not unlike an impure functional program, yes. Well-behaved processes share no state (i.e. they don't contend over the same files), and messages and data passed over UNIX pipes are immutable, much like data in a functional program.
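To make the analogy concrete, here's a minimal sketch of such a well-behaved component in Go (the uppercasing transform is just a stand-in for whatever a real stage would do): it owns no files, treats stdin as an immutable input stream, and emits a brand-new stream on stdout.

```go
// A minimal "well-behaved" pipeline component: read an immutable stream
// on stdin, derive a new stream, write it to stdout. No shared state.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	in := bufio.NewScanner(os.Stdin)
	out := bufio.NewWriter(os.Stdout)
	defer out.Flush()

	for in.Scan() {
		// The "pure" part: compute a new line from the input line
		// without touching anything outside this process.
		fmt.Fprintln(out, strings.ToUpper(in.Text()))
	}
	if err := in.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Composed with `|`, each such process behaves like a function applied to the previous one's output.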
For example: one system I implemented needs to take a few hundred very large CSV files every day, aggregate and sort them, perform some complex processing on them, and output the result in a very different format to many different output files. The overall system is complex, but each component is a small Go program that performs one specific task, e.g.:
* combining the many source files
* cleaning the source files
* performing some aggregation on the stream
* splitting the stream into many parts that other components can read from in parallel ('named pipes' in unix make this very nice)
* splitting the stream into many output files
etc. Every component is dumb and does one thing, but a single controlling program is responsible for handling the command-line arguments that describe the overall outcome and for setting up the stdin and stdout streams of all of the components to create the final result.
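A hedged sketch of what that controlling program can look like, assuming three hypothetical component binaries (./combine, ./clean, ./aggregate) that each read stdin and write stdout; the real components, flags and filenames are of course different:

```go
// Sketch of a controlling program: take the source files as arguments,
// then wire the stdout of each dumb component into the stdin of the
// next, shell-pipeline style. Binaries and output name are stand-ins.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// combine <sources...> | clean | aggregate > result.csv
	combine := exec.Command("./combine", os.Args[1:]...) // merge the source CSVs
	clean := exec.Command("./clean")                     // drop malformed rows, normalise fields
	aggregate := exec.Command("./aggregate")             // reduce the cleaned stream

	var err error
	if clean.Stdin, err = combine.StdoutPipe(); err != nil {
		log.Fatal(err)
	}
	if aggregate.Stdin, err = clean.StdoutPipe(); err != nil {
		log.Fatal(err)
	}

	out, err := os.Create("result.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	aggregate.Stdout = out

	stages := []*exec.Cmd{combine, clean, aggregate}
	for _, cmd := range stages {
		cmd.Stderr = os.Stderr
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
	}
	// Wait from the downstream end up: Wait on an upstream command closes
	// its stdout pipe, so its reader has to be finished with it first.
	for i := len(stages) - 1; i >= 0; i-- {
		if err := stages[i].Wait(); err != nil {
			log.Fatal(err)
		}
	}
}
```

Each stage runs as its own process, so the kernel's pipe buffering gives you backpressure and parallelism for free.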
There's a beautiful simplicity to systems implemented like this, and it means you can take advantage of existing tools like grep, awk, sed, sort, cut, etc. to do a lot of the heavy lifting more reliably and quickly than anything you'd likely implement yourself, while still coding the overall system at a reasonably high level of abstraction. Go doesn't lead to this approach directly, but it's very pleasant to work with as a citizen of this wider environment.
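And because every stage just speaks stdin/stdout, "a stage" can be a stock tool as easily as a Go binary. A minimal, self-contained sketch of letting sort(1) order a CSV-ish stream on its first field (the sample rows are obviously just stand-ins for a real component's output):

```go
// Let sort(1) do the heavy lifting: the Go side only produces the stream
// and hands it over; sort orders it on the first comma-separated field.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	sortCmd := exec.Command("sort", "-t,", "-k1,1")
	sortCmd.Stdout = os.Stdout
	sortCmd.Stderr = os.Stderr

	stdin, err := sortCmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := sortCmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Stand-in for a real component's output: a few CSV rows.
	for _, row := range []string{"b,2", "a,1", "c,3"} {
		fmt.Fprintln(stdin, row)
	}
	stdin.Close() // EOF tells sort it has seen the whole stream

	if err := sortCmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```

sort(1) will spill to temporary files on its own when the stream doesn't fit in memory, which is exactly the kind of work you don't want to reimplement.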