Very good comparison, stuff that most other blogs don't talk much about -- scheduling, fault isolation, garbage collection strategies. I guess they don't because other frameworks/languages don't provide that. It usually stays at syntax level with obligatory mention of generics.
Fault isolation and pauseless garbage collection are very important in some contexts. Often the need for them becomes apparent the second time around, after one version of the system has been plagued by large mutable shared-state bugs, or strange, unpredictable response times in a highly concurrent system.
Do you pay in terms of raw CPU performance by copying messages and keeping private heaps per lightweight process? Yes you do. There are no magic unicorns behind the scenes. That is the trade-off you accept for getting fault isolation and soft-realtime properties. But keep this in mind: one of the biggest slowdowns a system can incur is going from working to crashing and not working.
Also, no matter how strong the static compile-time checking is, your system will still crash in production. It is usually a hard bug that has been lurking around, not a simple "I thought it was an int but got a string"; those are caught early on. It will probably be something subtle and hard to reproduce. Can your system tolerate that kind of crash in a graceful way? Sometimes you need that.
In the end, it is good for both systems to be around. If you need fault tolerance, optimization for low-latency responses, supervision strategies, and built-in inter-node clustering (node = OS process or instance of a running Erlang BEAM VM), you cannot easily get that any other way.
Now, one could think of trying to replicate some of the things Erlang provides. Like, say, building static analysis tools to check whether goroutines end up accessing shared state. Or, say, devising a strategy for a channels-based supervision tree. Heck, if you don't have too many concurrency contexts (processes, goroutines) you can always fall back on OS-process-based isolation and use IPC (+ ZMQ, for example) as a mailbox. But then again, Erlang provides all that in one package.
> To replicate Erlang, is it easier to use the Actor model? Or can it be done with CSP?
There is overlap between the two. CSP is _usually_ synchronous and the Actor model is asynchronous. CSP is focused on channels: you send messages to a channel and the other side receives them. A channel has an identity. In Erlang you send a message to a process (its main concurrency context). A process has a process ID used to identify it. It's as if you have an address at your house and I send a letter to you.
Goroutines don't have identity. You can't easily send a message to a goroutine, kill a goroutine, check whether it died so you can start another one, and so on.
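Concretely (a minimal Go sketch of my own, names invented): the channel is the thing with identity -- a value you can pass around and send to -- while the goroutine behind it has no handle you could message, monitor, or kill.

    package main

    import "fmt"

    func main() {
        // The channels have identity: they are first-class values we can
        // hand to anyone who should be able to reach the receiver.
        mailbox := make(chan string)
        reply := make(chan string)

        // The goroutine itself is anonymous: once started, there is
        // no ID to send to, monitor, or kill.
        go func() {
            msg := <-mailbox
            reply <- "got: " + msg
        }()

        mailbox <- "hello" // we address the channel, not the goroutine
        fmt.Println(<-reply)
    }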
> Also, what is needed to make a soft realtime language/runtime?
Quite simply, you need one part of the system not to block other parts from making progress (returning a result). Imagine you serve a page to one user, but because some other user sent input that takes a long time to process, the first user doesn't get a response; he has to wait.
Or say you have some data structures and a common heap shared between concurrency contexts. A garbage collector might have to stop all goroutines in order to check if it can collect any garbage, and that introduces a pause. Azul's JVM is the only VM with a shared heap that has a concurrent, pauseless garbage collector; it is very impressive, look up how it works. In Erlang it is easy: process heaps are private to each process, so they can be collected independently without getting in the way.
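You can actually watch those stop-the-world pauses from inside a Go program; a small sketch of my own using the runtime package's published counters:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Churn the heap so the collector has something to do.
        var keep [][]byte
        for i := 0; i < 1000; i++ {
            keep = append(keep, make([]byte, 1<<20))
        }
        _ = keep

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // PauseNs is a circular buffer of recent stop-the-world pauses.
        fmt.Printf("GC cycles: %d, last pause: %d ns, total paused: %d ns\n",
            m.NumGC, m.PauseNs[(m.NumGC+255)%256], m.PauseTotalNs)
    }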
Oh I agree 100% here, I had a Java version of a project that went from 15k qps to 60k qps just by switching to the Azul JVM.
That said I still ended up crushing that with Nginx/LuaJIT and I didn't need a proprietary JVM that wouldn't work on some systems because of kernel modules it needed to install etc.
"Channel has an identity". In wikipedia (https://en.wikipedia.org/wiki/Communicating_sequential_proce...) say "CSP processes are anonymous", but don't tell about the channels. So, how can a channel have a identity? Why is usefull? If the channel is a type declaration, why I need something else? This is something that actor have (PID, because you send "emails") that I don't see why is necesary in CSP
The problem is that in most systems you cannot completely avoid shared state. We usually move the responsibility to manage shared state into the DBMS, but the DBMS has to be written in some programming language as well.
As the amount of available RAM grows, in-memory architectures become more desirable, especially for analytics workloads. But how do you do in-memory analytics without shared state? It just doesn't scale.
What we need is a way to make access to shared state explicit, not the default. We also need pauseless garbage collection, or optional garbage collection, so that large amounts of shared in-memory data don't cause latency problems.
Pauseless garbage collection conditioned on never using shared state is not good enough for the kind of systems I'm talking about.
As I understand it, ETS is itself an in-memory database that comes with its own data model, but it doesn't let me build one out of my own data structures and algorithms, which is what I had in mind.
Also, I think, all data has to be copied in and out of ETS. It cannot be referenced in-place, which is going to slow things down quite a bit. (Correct me if I'm wrong. I have no first hand experience with ETS)
I've added an update to the top of the post because I didn't make the point clear enough:
I’m seeing that I did not make the point of this post clear. I am not saying Go is wrong or should change because it isn’t like Erlang. What I am attempting to show is the choices Go made that make it not an alternative to Erlang for backends where availability and low latency for high numbers of concurrent requests is a requirement. And notice I’m not writing this about a language like Julia. I have heard Go pitched as an alternative to Erlang for not only new projects but replacing old. No one would say the same for Julia, but Go and Node.js are seen by some as friendlier alternatives. And no, Erlang isn’t the solution for everything! But this is specifically about where Erlang is appropriate and Go is lacking.
Erlang is a language with a purpose that I can relate to. I develop a cluster database product, and Erlang seems to have an answer for many problems that I have actually faced.
For instance: cluster global pids. When you send a message to a pid, you don't have to go through a dance of handling errors and timeouts just because it might live on another node. If the node goes down, there are several ways to handle that, including monitor_node() which gives you a message that you can handle in one place.
I haven't used Erlang in production, but I've invested some time to learn about it because I see that potential value.
I don't really see that from Go. It seems to fall into a "general-purpose" bucket where it competes with Java, C#, python, ruby, haskell, clojure, scala, etc. I don't necessarily like all of those languages, and for any given one you can probably pick out some Go advantages. A lot of people say Go hits a nice sweet spot for many applications.
But Go just doesn't speak to any problems I actually have. It can't replace the C/C++ code, because Go has a GC, and the code is C/C++ because it needs to manage memory[1]. It could replace the python code, perhaps to some benefit, but there would still be a bunch of awkward communication between processes and all of the associated error handling mess. And it can't replace Java, because that's a public API we offer, and lots of people are comfortable with Java.
Go should have been another cluster language/VM that really could compete with erlang, in my opinion. To me, Go is just another language.
[1] Rust does seem to have something to offer here.
As mentioned in the post, and having basic experience with Go: Go doesn't solve my problems, and its presence of nil makes it feel like a fancy C.
Rust does have something to offer and I've helped a bit with a thing called oxidize (not the compilation phase in rustc) as an attempt to make a web framework (or at least a routing layer on top of an http lib from the community)
I wonder how this author would feel to compare Erlang and Rust once it matures.
> But Go just doesn't speak to any problems I actually have. It can't replace the C/C++ code, because Go has a GC, and the code is C/C++ because it needs to manage memory[1].
You can write Go code that minimizes GC load. It's not like Java or Smalltalk, where absolutely everything is a "value" that's really a reference. It's doable to default to pass-by-value and simply not have lots of pointers all over the place. You won't be able to completely eliminate GC pressure, but it can be minimized.
For data processing, you often want to just load as much of the data in memory as you can, easily 10s of GB. I'm sure there's some theoretical answer from GC folks about how to handle every situation, but in practice it doesn't usually work out well. GC languages target good performance on "normal" applications, where parameter passing and short-term heap allocations are the biggest memory problems you need to deal with.
> For data processing, you often want to just load as much of the data in memory as you can, easily 10s of GB.
In Go, you can write your system such that the GC doesn't have to get so involved with your objects on the heap. Simply use arrays and slices and avoid using pointers otherwise. Slices contain pointers, but if they point to structs in an array that themselves contain no pointers, then you will greatly minimize GC pressure. I suspect you can use the "unsafe" package in Go to do allocations outside of the GC heap, in which case the GC will not bother traversing there. However, there is the question of whether you would want to do this, as you are giving up type safety and GC, and Go has no other facilities like RAII.
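A sketch of the difference (the types are my own, just for illustration) -- the second layout is one pointer-free allocation, so the collector never has to walk its interior:

    package main

    import "fmt"

    // GC-heavy: a slice of pointers to structs that themselves hold a
    // pointer (the string header) -- millions of things to trace.
    type Node struct {
        Value int64
        Label string
    }

    // GC-light: no pointers anywhere in the struct, so a []Rec is one
    // contiguous block the collector can skip over entirely.
    type Rec struct {
        Value int64
        Code  [16]byte // fixed-size array instead of a string
    }

    func main() {
        heavy := make([]*Node, 1e6)
        for i := range heavy {
            heavy[i] = &Node{Value: int64(i), Label: "n"}
        }
        light := make([]Rec, 1e6) // a single allocation
        fmt.Println(len(heavy), len(light))
    }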
Apparently, programmers at Twitter do tasks of the size you describe with Scala on the Hotspot JVM.
We have large processes like that in the JVM as well, so it's not like we're allergic to garbage collectors. But we do see problems that are not a problem in our C code.
It's possible that Go might have ways of avoiding this, but it would take a while to evaluate and it doesn't offer much in the way of guarantees (i.e. it may change in the future). Unsafe code would be pointless... Might as well use C.
Rust seems to offer memory control, safety, and strong guarantees. That seems more appealing for the problems I work on.
Just because Go doesn't fit your use case doesn't mean it should be something else. In most cases Go IS a better choice than C/C++ for server-side work. For instance, let's say you are writing a server for a multiplayer game. Most of the server's activity is spent waiting for IO and managing concurrent socket-based connections. There are drawbacks and risks to managing memory manually in this type of system, and Go's concurrency makes writing this code much easier. Another great Go use case is a REST-based web service; a lot of these are typically written in Java, Ruby, or Python. I love Python, but Go has serious advantages over all three of those languages in this use case. This is where Go was designed to be used and its best spot: the glue servers that are most commonly used for web applications. Go is designed to be simpler, safer, and easier to write than C/C++, Java, or C#, but faster, more modern, and more fault tolerant than Python or Ruby. It's not a perfect language by any means, but it does fill a nice sweet spot.
"Most of the server's activity is spent on waiting for IO and managing concurrent socket based connections."
That sounds like a story I've heard a hundred times before. For the cases where it's true, good engineers have already been using one of the many other languages that fills that niche. For cases where that's not the primary problem, Go doesn't help.
That doesn't mean Go isn't a better option for some use cases. Great. I just don't understand why it's worth reinventing the universe to get something a little better for some use cases. Fine with me -- I'm not the one doing the work. I am just saying that I don't get it.
Especially for Google, a company which could have really made a difference in the way we write software.
I'm writing a multiplayer game in Go. The reason I chose it is that it allows me to control memory layout at a low level. This can be very important for achieving multicore parallelism. (Also known as optimizing for fewer cache misses.)
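For the curious, that usually means keeping hot data in one contiguous slice of value types, something like this (a sketch; the types are invented):

    package main

    import "fmt"

    // Entities stored by value, back to back in memory: iterating walks
    // the array sequentially, which keeps the cache and prefetcher happy.
    type Entity struct {
        X, Y, DX, DY float32
    }

    func step(world []Entity) {
        for i := range world { // index, don't copy, each element
            world[i].X += world[i].DX
            world[i].Y += world[i].DY
        }
    }

    func main() {
        world := make([]Entity, 10000) // one allocation, no pointer chasing
        step(world)
        fmt.Println(world[0])
    }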
A meta comment - I'm reading a lot of the "use the right tool for the job and stop arguing" statement in language/framework/system/machine comparison threads these days.
I find the "right tool for the job" to be a total conversation stopper. It stops the bikeshedding type arguments, true, but it also stops potentially illuminating comparisons. Can we, as a community, agree to stop bringing it up in comparison arguments?
Imagine a new programming language and system being presented in an article. It is healthy and useful for the article to say "We designed system X so it is easier to express Y kind of programs. A, B, C are the complications encountered when doing the same with systems I, J, K." rather than "We designed system X to express Y kind of programs. We like it, but if you don't, use the right tool for your job."
While many of us are polyglots, we do seek to minimize the number of parts when building a system, so such comparisons are often meaningful at some level.
While we're at it, "the right tool for the job" always struck me as a bad analogy. Nobody would question a carpenter's tool choices because the only thing that affects others is the quality of the final result (e.g. how the building will stand up to stress).
But using a software language, framework, library, database, OS, or other platform makes it inextricable from the rest of the product when judging quality. In some cases, you can make a black-box argument like "if a user is unable to observe any poor qualities, then the product must be of high quality"; but that only really applies when you are the sole developer and always will be. For larger projects, there are others involved, and they will be affected if the building blocks are poor.
Granted, that doesn't mean that all discussions about platforms are productive, but it means there is some room for illumination and progress.
Additionally, for many people, the "right tool" is the language they know best, even if it's not 100% the best at any given task, because it's better to get some working code rather than stop, learn a new language, build something with it that sucks, redo it, and so on.
There are exceptions to this of course, sometimes something really is inadequate, but for many people a general-purpose language is going to be "good enough". I'm an Erlang fan, but I do think this sometimes hurts its adoption.
I agree. The "right tool for the job" argument is rarely useful in computing, because computers and general purpose programming languages are basically toolboxes and "the job" is rarely one thing.
Is it just me, or did the author fail to explain Go's detrimental design clearly? Most of the points he listed are pretty much a matter of personal taste, and basically what he was saying is "Go has so many problems because Go is not designed like Erlang".
For example, he said
"But when it comes to complex backends that need to be fault-tolerant Go is as broken as any other language with shared state."
Why? Why is shared state so bad in Go? Isn't it taken care of by Go's channels anyway?
Also, why is pre-emptive scheduling bad? Isn't error handling still pretty much just a matter of personal preference?
Why does introspection make Erlang so much better? What's the key practical problem for Go that cannot be tackled without introspection?
And I completely failed to understand the point of "Static Linking".
I'm not trolling. I don't have Erlang experience, and most of the problems the author pointed out have not been bothering me, so I honestly want to see WHY they are problematic in Go.
Shared state is bad because it breaks local reasoning. Without enforcing certain disciplines (i.e. always sharing by passing messages via Go channels), it's hard, if not impossible, to reason about local behavior without considering other parts of the code. Immutability vs. shared mutable state is a design choice. Erlang chose immutability for safety, while Go chose shared mutable state for practical reasons, but as a remedy Go recommends best practices to avoid the drawbacks it causes.
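The usual discipline, sketched below (my own minimal example): confine the mutable state to one goroutine that owns it, and let everything else reach it only through channels.

    package main

    import "fmt"

    // counter owns n outright; no other goroutine ever touches it.
    func counter(inc <-chan int, read chan<- int) {
        n := 0
        for {
            select {
            case d := <-inc:
                n += d // only the owner mutates the state
            case read <- n: // others observe it by message
            }
        }
    }

    func main() {
        inc := make(chan int)
        read := make(chan int)
        go counter(inc, read)

        inc <- 1
        inc <- 41
        fmt.Println(<-read) // 42, with no locks and no shared mutation
    }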
The scheduling problem is that Go's scheduler is not fully pre-emptive: a goroutine running a tight loop can starve other goroutines. To address that, you have to manually call runtime.Gosched() to yield control and let the scheduler run other goroutines. Erlang's reduction-based scheduling does not have this problem and can be very fair.
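A minimal demonstration of the starvation case (note: this reflects the Go runtimes of this era; Go 1.14 later added asynchronous preemption that fixes this particular trap):

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        runtime.GOMAXPROCS(1) // one OS thread, so goroutines must take turns

        go func() {
            for {
                // Tight loop with no function calls: the scheduler of this
                // era never gets a chance to preempt it. Uncommenting the
                // next line yields control voluntarily:
                // runtime.Gosched()
            }
        }()

        time.Sleep(100 * time.Millisecond)
        fmt.Println("printed only if the spinning goroutine was preempted")
    }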
Goroutines lacking identity is a major design difference from Erlang. In Go, channels have identity, but goroutines don't. In Erlang, it's the other way around: processes have identity, and channels are implicit, à la mailboxes. In theory you can simulate one style in the other, but the implications of this design choice are profound. I'm a Go fan, but personally I think Erlang's model is easier to reason about at scale, and it has a nice symmetry with OS threads/processes (you can kill them easily; good luck killing a goroutine).
There are also a couple technical trade-offs that are imposed by choosing mutable shared-state.
For example, if Erlang processes had shared state, killing a process could get much trickier because you could potentially kill a process while it is changing the shared state, leaving it in a corrupted state. The language would have to work around this by providing atomic blocks or operations to guarantee such won't happen. At the end of the day this would mean more rules the developer needs to keep in mind to write safe software.
Another example is tools like [Concuerror](http://concuerror.com/). It is an Erlang tool that executes your code and finds concurrency bugs and race conditions deterministically. The tool works by identifying all the places where processes communicate or share state and instrumenting them. After instrumenting them, it executes the code considering all possible interleavings, which ends up finding the bugs deterministically.
I have been to a workshop, and the tool runs extremely fast and its codebase is quite tiny (~2000 LOC), because communication and shared state in Erlang exist in only a few places, which helps reduce the number of interleavings and makes instrumentation straightforward.
However, if you have shared state, every mutation could possibly be mutating state that is shared between goroutines, so the places you'd need to instrument end up being too many, which would generate too many interleavings. You could alternatively try to reject those code paths during instrumentation, at the cost of making instrumentation much more expensive.
> Erlang chose immutability for safety, while Go chose shared mutable state for practical reasons, but as a remedy Go recommends best practices to avoid the drawbacks caused by it.
That's my point; I didn't say it clearly before. I understand why shared state is bad in general. What I don't understand is, if channels take care of most scenarios, why it's still one of the biggest problems with Go listed by the author.
> Erlang's reduction-based scheduling does not have this problem and can be very fair.
What's the key difference between Erlang's reduction based scheduling and cooperative scheduling?
> What's the key difference between Erlang's reduction based scheduling and cooperative scheduling?
Cooperative scheduling is like reference-counting: you have to tell the compiler when it's safe to context-switch. Preemptive scheduling is like garbage-collection: the runtime decides on its own when it will switch, and you have to deal with making that safe.
Reduction-based scheduling is a hybrid approach, and is, in this analogy, a bit like Objective-C's Automatic Reference Counting. Under reduction-based scheduling, context switches are a possible side-effect of exactly one operation: function calls/returns. This means that it's very easy to reason about when context switches won't happen (so it's not hard to write atomic code), while not having to worry about making them happen yourself (because, in a language that defines looping in terms of recursion, all code-paths have a known-finite sequence of steps before either a function call or a return.)
If you only share state by passing messages, you should be fine. The challenge is making sure of the "only" part.
Erlang's reduction-based scheduling basically counts how many steps each process has executed and automatically switches to other processes once a process has finished a certain number of reductions. So even if a process is running a tight loop, it will not starve other processes. Go's scheduler currently cannot promise that.
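The idea in toy form (this is an illustration of reduction counting in Go, not how BEAM actually implements it):

    package main

    import "fmt"

    type task struct {
        name string
        left int // units of work remaining
    }

    // run gives every task a fixed budget of "reductions" per turn, then
    // moves on, so even a huge task cannot monopolize the schedule.
    func run(queue []task, budget int) {
        for len(queue) > 0 {
            t := queue[0]
            queue = queue[1:]
            for r := 0; r < budget && t.left > 0; r++ {
                t.left-- // one reduction of work
            }
            if t.left > 0 {
                queue = append(queue, t) // budget spent: requeue at the back
            } else {
                fmt.Println(t.name, "finished")
            }
        }
    }

    func main() {
        // "short" finishes on its first turn even though "long" was
        // queued ahead of it with vastly more work to do.
        run([]task{{"long", 100000}, {"short", 3}}, 100)
    }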
The article's complaint against Go's scheduling is that it's not fully preemptive. For each Erlang Process (green thread/lwp) the VM keeps a counter for the number of allowed expression reductions, and any process can be preempted after any expression reduction. Preemption in Erlang isn't blocked by loops without function calls.
In contrast, Go preemption is thwarted by loops without function calls.
Interesting read, I have recently been porting a very low latency high scale project of mine from Nginx/LuaJIT to Go just to learn Go.
I have already, right off the bat, run into concurrency issues even with Go 1.3beta, and the GC locking causes all my connections to drop and thus causes thrashing.
That said I've coded in many languages in my 30 years of development while often reverting back to C over and over again as I've needed the lower level solutions to some big problems. I've enjoyed learning Go and plan to continue because it just "feels right". It's hard to explain but I can see it useful for numerous problem sets and I don't have to dive into C++/Java again to get convenient memory management and hopefully I'll never look pthreads in the face again.
However, I have really been amazed at the speed of LuaJIT. If you had told me the fastest toolset I could use for my low latency, high scale project was an "interpreted language", I would have laughed you out of the room. I did try Python (Cython) and Java and numerous other tools. But so far LuaJIT has turned out to be the fastest; not the most elegant to read, but coder time vs. return on that time is the highest thus far.
I am hopeful that Go will mature in the runtime in ways that will make it compete with Nginx and its amazing event-driven, non-blocking architecture. With that in mind, I think writing many things in Go will be useful, and improvements of the underlying tech will just be magnified by all the code that then needs only a simple recompile to capture those changes. It's like the old days of gcc, when I found hand-coding some ASM was more useful; nowadays it kicks out some amazing code with little need for inlines except in the most extreme situations. Here's to hoping Go traverses that path faster than gcc did and we all have a more enjoyable time solving the problems we love to solve every day.
> I have already, right off the bat, run into concurrency issues even with Go 1.3beta, and the GC locking causes all my connections to drop and thus causes thrashing.
This sounds somewhat incredible, and unlikely to be a result of Go; more likely it is a facet of a naive implementation (we can all break any language). I've built a number of extremely high capacity/concurrency systems in Go to great success, as have many other very large organizations, so the notion that it's just fundamentally immature or broken doesn't fly.
All of the talk about GC in Go is a bit curious, because Go actually makes comparatively little use of GC -- it very heavily favors the stack, versus many other platforms (.NET, Java, others) that use the heap for virtually everything and turn most everything into an object. The simple fact that Go has a GC doesn't mean that its GC use is the same as in all other languages that have one.
> his biggest surprise was Go is mostly gaining developers from Python and Ruby, not C++
Why is that a surprise? I think it's logical, and Go can be expected to attract developers who are used to garbage collection (Java, Python, Ruby etc.). Not so much C++ developers who prefer to have control and choice (i.e. pay for what you choose, rather than get it handed down forcefully). Rust is a better candidate for attracting more C++ developers than those from Java, Python and Ruby background.
Because it was never the intention. Pike and Thompson, by their own admission, hated C++ and set out to write a better language to replace it. Python and Ruby developers were never on their radar during the whole design phase. It might be logical and obvious in hindsight, but it certainly came as a surprise to the creators of the language.
I see. I guess they misjudged features which C++ developers actually valued as something bad and in need of fixing. I.e., while trying to improve things, they removed something that was actually good (replacing RAII with GC), while on the other hand not fixing real fundamental problems (such as concurrency safety, for example). This probably made the language less attractive as a C++ replacement, but still attractive for those who weren't looking to replace C++.
If it wasn't for the Google brand -- with the power to promote Gophercons and hire full team language devs -- there wouldn't be a chance to see Go rise above Nimrod, D et al.
What they do have going for them is:
1) A rather complete standard library (because of many paid full-time devs, compared to any indie language)
2) An easy familiar syntax
3) A simple (but far from complete or unique to them) concurrency story with Channels/Goroutines.
4) Nice tooling (again due to many paid full-time devs -- Java and C++ have better, but most languages have worse).
If you compare RAII and GC, RAII looks better to me practically in every aspect. The only advantage of GC over RAII is perceived simplicity (developers don't need to think about ownership specifics). But the price for it is too high (unpredictability), and that simplicity quickly deteriorates into overblown complexity when people hit problems caused by GC. Pragmatically, RAII is simply better and cleaner, as well as fits better with concurrency safety which implies clarity in memory ownership.
RAII for shared resources also induces cache problems if not implemented with lock free algorithms, which are quite hard to get right without some GC help.
There is also the issue of cyclic data structures, and a stop-the-world effect when destroying big data structures.
Personally, I favour GC with (with/using/defer/scope) over pure RAII, and was a bit disappointed that the C++11 GC API proposal was only partially adopted into the standard.
Combining RAII and GC (explicitly choosing what to use for various data) can be an interesting hybrid approach. It looks like Rust does exactly that, allowing you to explicitly define which data is garbage collected (while the rest is handled with standard RAII).
Pike and Thompson were out of touch with what people use C++ for this century. Back in the 90's people used to write practically anything in C++ and Go would have been a suitable replacement for many of those uses, but now if you're starting a new project in C++ it's probably something that needs some degree of manual control over memory, so Go isn't a suitable replacement.
>Pike and Thompson, by their own admission, hated C++ and set out to write a better language to replace it.
Well, they might have hated it, but they didn't understand correctly what its strong points were and why people use it for the projects they use it for. If they had, they'd have made something like Rust.
As it is, they solved problems that mostly people doing backend services in Java/Python/Ruby had.
I didn't actually realise this until it was pointed out; if nothing else, mandatory automatic memory management really hurts its usefulness for a lot of the things that people still use C++ for.
The biggest turn-off about Go, for me, is that the community seems to be incredibly unfriendly to newcomers. He touched on the lack of REPL, but it goes further than that.
For instance, there is very little in the toolchain about debugging other than "use GDB". For someone very familiar with the workflow of typing "debugger" in Javascript code, being able to stop the world at any state, examine variables, and having a fully-functional REPL to test expressions, Go's way of "debugging" code is...well. I don't really know how to do it in Go. The general answer seems to be something along the lines of "think about the code you wrote and then write it correctly you idiot."
Seriously, this is the canonical debugging advice, from Rob Pike himself:
"When something went wrong, I'd reflexively start to dig in to the problem, examining stack traces, sticking in print statements, invoking a debugger, and so on. But Ken would just stand and think, ignoring me and the code we'd just written. After a while I noticed a pattern: Ken would often understand the problem before I would, and would suddenly announce, "I know what's wrong." He was usually correct. I realized that Ken was building a mental model of the code and when something broke it was an error in the model. By thinking about how* that problem could happen, he'd intuit where the model was wrong or where our code must not be satisfying the model."*[1]
Honestly not trying to be a brat here, but I do this a lot myself, and I'm no Ken. Sometimes sitting back and just examining the symptoms of a bug can help narrow down the context it is likely to have arisen from (if you know the code base well enough). I often find that firing up the debugger tends to lead down tangential rabbit holes, particularly when dealing with a heavy framework.
That said, a REPL would be nice. Seems like go compiles fast enough that some ambitious dork could make a REPL-faker?
Absolutely, that's true. Obviously, not every problem needs to be solved with print statements, a debugger or a REPL. But it can help in many cases, and it's better to have too many tools than too few. Of course, it's flippant to just say "there should be a REPL" -- presumably, there are good reasons (better than the one stated) why there isn't one, or isn't one yet, in Go.
> presumably, there are good reasons (better than the one stated) why there isn't one, or isn't one yet, in Go.
Languages that have a REPL were engineered from the start to allow it. Go doesn't do much late binding. It prioritized a higher degree of control over low-level memory layout, and substituted fast compiling for a REPL.
Most Go programmers I know use printf debugging, not GDB. It fits with the Go philosophy of doing everything from the code, minimizing tools and config files.
This is not because the tools are so bad we are forced to bang rocks together. It's because writing code is an effective way to diagnose bugs. Even when I'm in a full-blown IDE with a GUI debugger, I prefer to use printf debugging.
Printf debugging and its equivalents is something that can be done in all languages. I use it in Erlang too from time to time in dev.
What makes it win is how easy it is to use. What makes it lose is how tedious and manual it is. And how it is restricted to an environment where you are allowed to recompile and load code all the time, and where the output is not going to mess up any other data stream.
I'm starting to phase out printf debugging in favor of tracing, because the tracing tools in Erlang are just infinitely better and more flexible, AND they can work on a production running system without impacting performance in any major way.
I'm able to match specific functions, with specific argument sets. I can see what they returned, and match only specific processes or generations of processes. I can do it from a REPL (because there is one, even in production systems, without needing to run stuff in a screen session), without messing up any related output stream.
Printf debugging is attractive because of how simple and minimally disruptive it is when you're already writing code. As much as it amazes me, it turned out not to be the most convenient tool available for most cases, even if I still use it from time to time.
I do too, I think this is more efficient when I understand what's going on.
However, let's say I'm given a third party library, and I know essentially what it does. I want to "see" the properties and methods of the object, I want to "inspect" values, and I want to "test" expressions, all at once. If you're just using printf, then between each of those steps you need to write a line of code, recompile, and reach that state in your program again. Like I said, if you already know essentially what's going on, deep dives like this become less and less important. However, I just think it's a boneheaded philosophy for a language / toolchain to just like...I don't know, assume you're good. I'm trying to become good, but I am getting more and more frustrated with Go every time the default answer to my problem is "be less of an idiot"
It's not necessarily a matter of being good. Writing efficient code doesn't mean that you can build a mental model for a reasonably complex project, which will naturally accumulate dependencies. If you're lucky, you'll get a mental model of the component you're working on, and some of the more problematic dependencies. That's where being able to dive into arbitrary code during execution becomes crucial. Especially if the problem occurs in a piece of code you don't have access to at the layer you operate, and hence can't display the value for easily, or at all.
Yes, I usually rely on printf, as the Go fmt library's string formatting options are so powerful ("%#v" for instance). I have, however, found that sleep is the best debugger. Move away from the computer, visualize the problem in your head, and sleep on it. Not the best advice for those on tight release schedules, but true nonetheless.
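For reference, the three levels of verbosity (a tiny example of my own):

    package main

    import "fmt"

    type user struct {
        Name string
        Tags []string
    }

    func main() {
        u := user{Name: "ada", Tags: []string{"admin"}}
        fmt.Printf("%v\n", u)  // {ada [admin]}
        fmt.Printf("%+v\n", u) // {Name:ada Tags:[admin]}
        fmt.Printf("%#v\n", u) // main.user{Name:"ada", Tags:[]string{"admin"}}
    }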
Sometimes, however, it's nice to be able to see a live flow of the program and to watch variables change as you step through. I think we'll see more powerful tooling for Go debugging coming through very soon.
I'd love to see more integration in the various editors for go oracle, go cover tool etc. as these would probably show me when I started introducing errors, with the relevant info a click (or key combo) away, before they turned into horrible bugs requiring their own git branch to solve. TDD with the go cover tool is brilliant!
> For instance, there is very little in the toolchain about debugging other than "use GDB". For someone very familiar with the workflow of typing "debugger" in Javascript code, being able to stop the world at any state, examine variables, and having a fully-functional REPL to test expressions,
Use cgdb for a nicer interface. Also, you can examine variables and set break points in gdb. Instead of a REPL, you can try out expressions in main(). Your workflow has to change, so if that's a deal-breaker, it is what it is. But if you can change, there are some advantages to be gained.
Maybe I didn't express this clearly, but I am entirely willing to learn new workflows, toolchains, etc. It just seems that the Go community is entirely uninterested in helping me out. I'm not complaining that somebody else hasn't coded a REPL for me, I'm complaining that the vast majority of Go developers don't understand the need for a REPL or a debugger or any of these things, because they seem ultimately unconcerned about helping me or other newcomers out.
If the answer was, "yes, that would be great, we just haven't had the time for it", I would say awesome, and start working on building it. However, the general answer seems to be, "you are an idiot, get out of my language."
> I'm complaining that the vast majority of Go developers don't understand the need for a REPL
Well, asking for a REPL in Go is kind of like designing a bicycle for a fish. You can imagine what that would look like, but it's not really the most pragmatic or useful thing in context.
> don't understand the need for a...debugger or any of these things, because they seem ultimately unconcerned about helping me or other newcomers out.
Are you sure that it's not partly that you haven't lurked around enough to come up to an acceptable level of background knowledge first? Maybe certain members of the Go community could've been more friendly, but one doesn't have to look very long to figure out that gdb is the standard way to debug Go. To be fair, such "background" is pretty easy to get wrong when going into unfamiliar territory. Sometimes, we don't even know what we yet need to learn. I've had such travails myself.
It's kind of ironic, since back in the day, I was a compiled language programmer who discovered dynamic languages like Perl then Smalltalk. Once I switched, I was frustrated by the too-broadly applied assumptions everyone had about programming in general. (Programming = edit-compile-test; "Where can I find the compiler?") Now, I find myself meeting dynamic language programmers who have no exposure to environments like C, where there is no managed VM and no dynamic runtime with even an inkling of late binding. We've somewhat come full circle.
I think a Go REPL would be a very cool project. I think it is doable. I always have ipython up, when I program in Python.
There is even a C++ REPL somewhere. It basically compiles every time you enter an expression and runs the code. I think it can be done in Go (but someone correct me if I am wrong here).
go run is surprisingly effective for playing around with code. I keep a main.go file in my home directory that I edit with vim and then run with ':!go run %'. Go compiles and runs fast enough that I almost don't notice it's a compiled language.
imho, Go, afaik, is designed to cleanly express concurrency primitives in the context of a single system, and doesn't do 'fault-tolerance' in the Erlang sense of 'you need >1 machine to be fault-tolerant'. With that lens, it is clear that the optimal way to do concurrency is with shared state, which then gets exported via channels and goroutines etc.
also, can someone please explain the issues around 'nil'? I fail to appreciate the author's concern about those...
> also, can someone please explain the issues around 'nil'? I fail to appreciate the author's concern about those...
The question can be thought of as two parts:
1) Whether nil should be a base value for objects: if something points to type t, should that be allowed to be nil? It is obvious that it should if you're coming from C/C++/Python; not so obvious if coming from Haskell, for example.
2) Should the lack of an object, or error conditions, be represented by nil? Say the lack of an object in a map is indicated by returning nil. Should it throw an exception instead? Now, Go doesn't have exceptions, so it perhaps needs something like this. Or maybe a panic should happen.
To expand on 1): if you are coming from Haskell (or any strongly typed language), the thinking is that you are already paying the price for static checking, strong types, a compilation step, etc. It would be nice to have explicit non-nil types. You could say this function accepts a reference to this object and it cannot be nil. At compile time the typechecker/compiler will do a more rigorous analysis based on that, and barf out an error then, instead of in production.
Like in Haskell you can explicitly say something may be nil (Nothing):
    data Maybe a = Nothing | Just a
I don't personally lean one way or another here. I just don't have a strong opinion either way. I understand the trade-off. But this is how I interpret what the author means by his issue with 'nil'. Hopefully this helps.
Golang maps do not return nil for missing keys. Referencing a missing key returns the zero value for the value type: for int that's 0, for string the empty string "", for bool false, and so forth. Golang also has the two-return-value form of map reference:
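    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1}

        v := m["missing"]     // zero value: 0
        n, ok := m["missing"] // comma-ok form: ok reports presence
        fmt.Println(v, n, ok) // 0 0 false
    }

(A minimal sketch of the idiom; the boolean is conventionally named ok.)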
There is no such guarantee: a NULL reference can be created, although it does require a little more effort than creating a null pointer.
However, null is just a special case of an "invalid" pointer (e.g. one pointing to freed memory), and it's very easy to create an invalid reference: e.g. take a reference into a vector and then push a pile more elements onto it, causing the vector to reallocate and move in memory, invalidating the reference.
If it takes undefined behavior to create a "null reference" then by definition it's not possible to create one and still have a well-defined program. That is, you have bigger problems than "null references" to worry about.
My understanding is that it is impossible to have a NULL reference. Can you show us some code that ends up with a NULL reference? I'd be curious to see how that can be done!
Dereference a NULL pointer and use the result to initialize a reference. Sure, it's undefined behavior, but so are all the problems that result from NULL pointers, and in practice it doesn't crash until you "dereference" the reference by looking at the value, so it's probably as hard to track down as a NULL pointer crash.
It should be noted that nil in Go is not quite[1] the same as null in C though. For example, if a method defines the return type as a value (say, a string), you cannot validly return a nil (compile error).
Say what? Your statement is that Go's nil is different than C's null. As demonstrated, it is not: C's null is for pointers, and Go's nil is also for pointers.
The difference is that Go has a value-type string and C does not, but that is only a parenthesised aside in your original comment (as an example of Go value types), nowhere near "the point" let alone "the entire point".
I use nginx + upstart to keep my golang process up and running. When I asked about it on the golang irc channel, people said that they just use golang without a reverse proxy.
I personally like the ssl termination that nginx offers. I believe that nginx is a better way to load balance and I like not having to run my code as root, which using a reverse proxy provides you.
I write this fully acknowledging that programming language flamewars are pointless, but this article just shows that you don't even have to try hard to create a biased comparison.
Here's the essential difference between Go and Erlang: Go gets most of the things right, Erlang gets way too much wrong.
So what does Go get right that Erlang doesn't:
* Go is fast. Erlang isn't
* Go has a non-surprising, mainstream syntax. You can pick it up in hours. Erlang - not so much.
* Go has a great, consistent, modern, bug-free standard library. Erlang - not so much.
* Go is good at strings. Erlang - not so much.
* Go has the obvious data structures: structs and hash tables. Erlang - no.
* Go is a general purpose language. Erlang was designed for a specific notion of fault tolerance - one that isn't actually needed or useful for 90% of software, but every program has to pay its costs
* Go has shared memory. Yes, that's a feature. It allows things to go fast. The purity of not sharing state between threads sounds good in theory, until you need concurrency and get bitten by the cost and awkwardness of having to copy values between concurrent processes
You just have to overlook ugly syntax, lack of string type, lack of structs, lack of hash tables, slow execution time. Other than those fundamental things, Erlang is great.
> but this article just shows that you don't even have to try hard to create a biased comparison.
So, let's see your detailed unbiased analysis...
> * Go is fast. Erlang isn't.
Hmm. Given that you accused the author of making biased comparisons, I would expect some more detailed information there. "Faster" doing what? Assembly is faster, Go isn't. Ok, let's use assembly then.
> * Go has a non-surprising, mainstream syntax. You can pick it up in hours. Erlang - not so much.
I think Erlang's syntax is small and self-consistent. If ; and . are the biggest stumbling blocks to learning a new system, then OK, maybe Erlang is not for you.
> Go is good at strings. Erlang - not so much
Erlang is very good at binaries. It can even do pattern matching on them. Decoding an IPv4 packet takes only 2 or 3 lines of code.
> Go has the obvious data structures: structs and hash tables. Erlang - no.
Erlang has obvious data structures -- maps, lists and tuples?
> * Go has shared memory. Yes, that's a feature. It allows things to go fast. Purity of not sharing state between threads sounds good in theory until you need concurrency
Quite the opposite. You get easy concurrency if you don't share things between concurrency contexts. You also get low latency responses and non-blocking behavior.
Erlang shares binary objects above a certain size behind the scenes as well so those don't get copied during message passing.
It also has Mnesia, a built-in distributed database. It is used heavily by WhatsApp to share data between primary and back-up instances of processes running on different machines.
> You just have to overlook ugly syntax, lack of string type, lack of structs, lack of hash tables, slow execution time. Other than those fundamental things, Erlang is great.
OK, it looks like we only have to overlook syntax. Sounds good to me, then; I can handle . instead of ; and I will also learn and use Go, because both are very cool and interesting tools.
> Assembly is faster, Go isn't. Ok, let's use assembly then.
That's pretty flawed logic. It can be used to dismiss any valid statement. "B is better than A at property C." "Z is better than B at property C; let's ignore the valid A vs B comparison."
What matters is how much better something is at a given property, how valuable that property is (to you/your tasks), and what the cost of that improvement is - in the context of the big picture.
I like and use Go not because of any single aspect, but because I enjoy the entire package. It has some good parts and some weak parts (e.g., I wish the `go build import/path` command would be consistent in generating/not generating output in cwd regardless whether the target package is a command or library - that way it could be reliably used to test if a package builds without potentially generating files in cwd).
Exactly. I was replying to his message and mocking his way of conducting a conversation. That should be viewed with that in mind.
> "B is better than A at property C." "Z is better than B at property C; let's ignore the valid A vs B comparison."
Yes. I think you probably want to direct that at the gp post, not my post ;-) You also probably want to use different capitalization for properties than for entities (or at least letters from the other end of the alphabet), like say "B is better than A at property x".
> I like and use Go not because of any single aspect, but because I enjoy the entire package
> Hmm. Given that you accused the author of making biased comparisons, I would expect some more detailed information there. "Faster" doing what? Assembly is faster, Go isn't. Ok, let's use assembly then.
This makes a big difference. With pure Go I can do computation-heavy things; for example, I've implemented some information retrieval methods in Go (indexes, stemmers, ranking and so on). I can't do the same in Erlang, because my map-based inverted index, written in Erlang, would be too inefficient. I could write it in C, but in my case most of the application's complexity is located in this information retrieval part. Because of that, efficiency is a big deal for me.
> You can get easy concurrency if you don't share thing between concurrency contexts.
We typically don't call that concurrency. Some resource has to be under contention, otherwise it's just plain old parallelism. In Erlang, you have to ship everything to everyone, but the resources are still technically shared... just very inefficiently.
Concurrency is a property of the algorithm. Parallelism is a property of the running environment. The hope is that, given a large amount of concurrency, it can be reasonably and easily distributed over parallel execution units (CPUs, sockets+select loops, goroutines, tasks, processes, separate machines, VMs, etc.).
So if you have concurrency in your problem domain/algorithm (say web page requests, for example), and your language has reasonable ways to handle it -- abstractions like processes, tasks, etc. -- you can make each request spawn an isolated, lightweight Erlang process (it takes just microseconds and only a few KB of memory). Then finally, at runtime, if you have a multi-core machine or started your Erlang VM with a good number of async IO threads, there is a good chance that you'll get good parallelism too! But if you run that bytecode on a single-core machine, you might not get parallelism, but the concurrency will still be there.
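For comparison, the same spawn-per-request shape in Go terms (a sketch; the echo server is invented for illustration):

    package main

    import (
        "bufio"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            // One lightweight concurrency context per request; whether any
            // of them run in parallel depends on the cores available.
            go func(c net.Conn) {
                defer c.Close()
                line, _ := bufio.NewReader(c).ReadString('\n')
                c.Write([]byte("echo: " + line))
            }(conn)
        }
    }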
> Some resource has to be under contention,
Why? That is just resource contention. You don't want resource contention if you can avoid it. Think of concurrent contexts as independent units of execution.
> In Erlang, you have to ship everything to everyone, but the resources are still technically shared...just very inefficiently.
Because it keeps things (including faults, but also logical decomposition) simple. Also, it maps well to real life: you have to send messages to other people by email, phone, IM. You don't directly plug a shared network backplane into their brains, such that when it crashes the whole group dies. You want to send a message to them and then continue minding your own business.
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other (stolen from wiki). Here simultaneous could mean interleaved (multi-threading on a single core) or literally at the same time, but the interacting-with-each-other part is key. They have to communicate; they have to communicate what they know about the world, and what they know can change... they necessarily share state, whether explicitly or not.
Parallelism is actually doing stuff at the same time, usually for a performance benefit, in which case you aren't going to be using any of these slow FP languages anyway (but there is always Java + MapReduce, which is kind of functional). Your Erlang code will run "faster" relative to single-core Erlang code, but if you honestly believe that you are being parallel fast...
> Because it keeps things (including faults, but also logical decomposition) simple. Also it maps well to real life. You have to send messages to other people, emails, phone, image. You don't directly plug a shared network back-plane into their brain. So when that crashes the whole group dies. You want to send a message to them and then continue minding your own business.
I get it: back in the old days before cell phones and screen sharing, we'd have to use carrier pigeons to send messages to each other, so we just sent messages out and minded our own business. But somewhere down the line, we gained the ability to interact in real time via language, looking at and modifying the same stuff even (I think this happened 50,000 years ago or so). We've never been able to recover from these impure interactions.
> Parallelism is actually doing stuff at the same time, usually for a performance benefit, in which case you aren't going to be using any of these slow FP languages anyway (but there is always Java + MapReduce, which is kind of functional). Your Erlang code will run "faster" relative to single-core Erlang code, but if you honestly believe that you are being parallel fast...
Erlang's shared-nothing architecture is wonderful for avoiding cache misses (false sharing), which is a big stumbling block for good parallelism on today's multicore machines. Also, each process having its own GC helps even more, and GC is an even bigger problem. Go also makes less use of GC than other languages.
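For anyone who hasn't run into false sharing: two counters that merely sit on the same cache line can slow each other down across cores. A sketch (64-byte cache lines assumed; remove the padding to compare):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type counters struct {
        a uint64
        _ [56]byte // padding: keeps b on its own 64-byte cache line
        b uint64
    }

    func main() {
        var c counters
        var wg sync.WaitGroup
        start := time.Now()
        wg.Add(2)
        go func() {
            for i := 0; i < 100000000; i++ {
                c.a++ // one core hammers a
            }
            wg.Done()
        }()
        go func() {
            for i := 0; i < 100000000; i++ {
                c.b++ // another core hammers b
            }
            wg.Done()
        }()
        wg.Wait()
        // With the padding removed, both cores contend for one cache
        // line, and this typically takes noticeably longer.
        fmt.Println(time.Since(start))
    }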
You are aware that Erlang was invented to route telephone calls, which maps very well to message passing concurrency? Erlang is after all not a pure functional programming language.
> and potentially interacting with each other (stolen from wiki). [...] but the interacting with each other part is key.
Sorry, I don't see it. It seems you took something that was optional and turned it into "key". It is not key; "potentially" means they can interact, but they don't have to.
It seems you conveniently misinterpreted the definition from wikipedia to fit your idea of what concurrency and parallelism is.
Erlang does not specify how the message passing semantics are achieved; in principle they could be implemented with shared memory. But then it is hard to garbage collect processes independently -- take Haskell as an example: it uses shared-memory concurrency (STM), and the garbage collector thread has to stop the world to collect. Message passing also allows you to transparently distribute the system over several nodes.
Note that the people who espouse Erlang are espousing the Erlang VM (mainly because it doesn't have a name besides "the Erlang VM".) Nobody likes Erlang's syntax. Nobody likes Java on the JVM either, but Clojure's pretty great. Use Elixir, and you get pretty syntax, and also "string support" and "a consistent stdlib" for free.
And, since other people have already rebutted your statements about structs and hashes, I'll ignore that. †
On needing speed: for IO-bound operations (concurrent ones especially), Erlang is faster than Go. For CPU-bound operations, Erlang knows it can't beat native code--so, instead of trying to run at native-code speeds itself, Erlang just provides facilities for presenting natively-compiled code-modules as Erlang functions (NIFs), or for presenting native processes as Erlang processes (port drivers, C nodes.) If you run the Erlang VM on a machine with good IO performance, and get it to spawn the CPU-bound functions/processes/C-nodes on a separate machine with good CPU performance, you get the best of both worlds.
And finally, on "every program having to pay the costs": is someone forcing you to use Erlang to create something other than fault-tolerant systems? Learn Erlang. Learn Go. Learn a bunch of other programming languages, too. Use the right tool for the job. The article is the rebuttal to people who claim that Go replaces Erlang in Erlang's niche, not a suggestion to use Erlang outside of its niche.
---
† For a bonus, though, since nobody seems to have brought this point up: Erlang has always had mutable hash tables, even before R17's maps. Each process has exactly one--the "process dictionary." It's discouraged to use them in regular code, because, by sticking things in the process dictionary, you're basically creating the moral equivalent of thread-local global variables. However, if you dedicate a gen_server process to just manipulating its own process dictionary in response to messages, you get a "hash table server" in exactly the same way ETS tables are a "binary tree server."
Erlang's syntax is not my favorite, but as a professional programmer, it's just something you deal with. You get used to it, and in the end it's ok to work with. Over time, as a programmer, unless you live in some kind of Java silo, you are going to deal with lots of different languages and syntaxes. I've used, professionally: C, Tcl, Perl, Python, PHP, Java, Ruby, Erlang, SQL, HTML, and probably a few other things I'm forgetting. After a while, you get to the point where you can pick something up and use it and appreciate what's good about it without getting hung up on its warts, unless they are such that they really prevent you from being productive. Erlang's syntax does not fall in that category.
> Note that the people who espouse Erlang are espousing the Erlang VM (mainly because it doesn't have a name besides "the Erlang VM".) Nobody likes Erlang's syntax.
I think lots of people like Erlang's syntax and find it well adapted to what Erlang strives to do. There are certainly people who don't, but who like its features; however, I don't think it's at all true that praise for Erlang is just praise for BEAM (and, yes, the current Erlang VM does have a name, as does its predecessor, JAM).
I mean, Erlang's supporters have made Erlang-to-C and Erlang-to-Scheme compilers, and the main distribution includes a native code compiler (HiPE), so it's pretty clear that its supporters don't think the VM is the only good thing about Erlang.
I like the Erlang syntax, and honestly think it's pretty easy to reason about.
The only thing that's goofy is the ,;. endings, but I stopped thinking about them after a few months of writing Erlang, and they became second nature just like everything else.
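For anyone who hasn't written Erlang, all three terminators in one small function:

    %% , sequences expressions, ; separates clauses, . ends the definition.
    sign(N) when N < 0 -> negative;
    sign(0) -> zero;
    sign(_N) ->
        io:format("strictly positive~n"),  %% comma: "and then"
        positive.                          %% full stop: end of sign/1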
It's not really "BEAM on the JVM"; it's actually a translator of BEAM-compiled code to JVM bytecode. But the concept is similar: the compilation target matters for Erlang, and a variety of targets have been used so far. JAM was the original VM, later BEAM, later BEAM with HiPE. See [1] for some more information. There was also, apparently, an Erlang-to-Scheme translator. It'd be interesting (does this exist yet?) to see someone implement Erlang semantics in Racket.
While it's true that Erjang translates BEAM code, BEAM itself also translates loaded code into yet another, more modern internal format. I'd still consider both BEAM implementations.
Yes, though Erlang has many built-in functions that make it more than just a bytecode runtime. See Erjang (with a "j") for an example of a BEAM written on the JVM.
One thing that complicates the construction of a new compatible virtual machine is that the .beam files are actually pre-processed at load time into another internal format that is then executed by the VM. Nevertheless, see for example http://erlangonxen.org/ -- they have built a new VM running on top of Xen.
> * Go has a great, consistent, modern, bug-free standard library. Erlang - not so much.
Erlang has one of the most battle-tested standard libraries around. OTP is rock solid. It may not be bug-free (I don't know that any language can claim a bug-free standard library), but it's damn close.
It's pretty obvious you haven't spent much time at all with Erlang based on your points here:
- Claiming "X is fast, Y isn't" is not even an argument; you should've just left that out.
- Arguments about syntax are rather pointless, but Erlang has very consistent syntax, and it's small. You can pick up Erlang in a couple of hours if you are familiar with FP concepts.
- I haven't encountered any real problems working with strings in Erlang. It may not have a bunch of standard library functions for manipulating them, but it's pretty trivial to do most things you would in any other language. It makes up for it with how much of a breeze it is to work with binary data (see the bit-syntax sketch after this list).
- As mentioned previously, structs aren't data structures, and Erlang has an equivalent (records) for those anyway. Erlang has trees, maps, tuples, lists - I would consider those a lot more obvious and necessary.
- Erlang and Go are both general-purpose programming languages. They don't share the same design goals, though. You could write a Go program to do a poor approximation of what Erlang is good at, and vice versa; the point is to use the proper tool depending on the application. I don't know where you got the idea that fault tolerance isn't important for "90% of software", but the software I work on certainly requires it.
- Your argument about shared memory makes it clear you haven't actually used Erlang. The copying of values between processes is abstracted away from you entirely; there simply is no awkwardness. Perhaps there is in Go.
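On the binary-handling point above, here is what pulling apart a made-up packet layout looks like with the bit syntax (the field names and layout are invented for illustration):

    %% Hypothetical wire format: 1-byte version, 1-byte flags,
    %% 2-byte big-endian length, then Length bytes of payload.
    parse(<<Version:8, Flags:8, Length:16, Payload:Length/binary, Rest/binary>>) ->
        {Version, Flags, Payload, Rest}.

One pattern match validates the frame, slices out the payload without copying it, and hands back whatever trails it.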
You are claiming the article is biased, but your post is riddled with bias. There are certainly problems with Erlang, but none of the things you list is among them (except perhaps strings).
To be fair, the standard libraries do have some unexpected inconsistencies: the array module indexes at 0 whereas everything else starts at 1; setelement takes (index, tuple, value) but the similar-feeling dict/orddict:store takes (key, value, dict), so the value and the collection arguments are swapped between the two; things like that. Nothing really major, but a few things that mean you end up looking at the docs or autocomplete now and again because you forget argument ordering.
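Side by side, the orderings in question (a quick shell sketch):

    %% setelement/3 takes the container in the middle...
    T = setelement(2, {a, b, c}, x),           %% T = {a, x, c}
    %% ...while dict:store/3 takes it last.
    D = dict:store(key, value, dict:new()),
    %% And array indexes from 0 while lists index from 1:
    a = array:get(0, array:from_list([a, b])),
    a = lists:nth(1, [a, b]).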
Erlang strings are quite inefficient if you aren't careful, and their printing is just terrible; a lot of the Erlang community uses binaries instead where possible, since those behave more like you would expect and are generally faster (especially since concatenation can be achieved with iolists). It's a fair point: most people trying Erlang assume strings are a basic data type (since they were in whatever language they're coming from), they don't know to use binaries wherever possible, and so as soon as they see how slow strings are, or get a non-printable character and watch the entire string print as a list of integers, it's rather off-putting.
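For instance (a small shell sketch), you mostly skip concatenation entirely: build an iolist and hand it to whatever does the I/O:

    Name = <<"world">>,
    %% An iolist: arbitrary nesting of binaries, byte integers, and lists;
    %% building it copies nothing.
    Greeting = [<<"hello, ">>, Name, $\n],
    ok = file:write_file("greeting.txt", Greeting),
    %% Flatten only when you genuinely need one contiguous binary:
    <<"hello, world\n">> = iolist_to_binary(Greeting).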
The way I see it, arrays and binaries start at 0 because an index there represents an offset. The others start at 1 because an index represents a position (first, second, third, ...) instead.
As for the function argument orders, there's no explanation for that one.
Erlang strings are a whole other subject; for the rationale behind a lot of their behavior, I recommend this blog post: https://medium.com/functional-erlang/7588daad8f05 . It's a decent read on the topic.
Definitely good points. Your first is one of those really small, but also really annoying, things about Erlang. I spend a lot more time with Elixir than Erlang, and while the first one is addressed and the second one is mostly covered, printing is still a pain point for people new to the language. Once you understand the caveats, it becomes a non-issue, but it's certainly frustrating for new users of both languages.
> Go has shared memory. Yes, that's a feature. It allows things to go fast. Purity of not sharing state between threads sounds good in theory until you need concurrency and get bitten by the cost and awkwardness of having to copy values between concurrent processes
There are approaches that allow the flexibility of shared state without the possibility of lurking data races or, worse (in Go's case) lack of memory safety. Even JavaScript has such a solution now (Transferable Objects). In fact, Erlang itself has one such approach: ETS.
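For example, a public ETS table gives many processes mutable shared state through a controlled API rather than through raw pointers into each other's heaps (a shell sketch):

    ets:new(counters, [set, public, named_table]),
    ets:insert(counters, {hits, 0}),
    %% Atomic increment; no locks visible to the caller.
    ets:update_counter(counters, hits, 1),
    [{hits, 1}] = ets:lookup(counters, hits).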
To be honest, I don't think unrestricted shared state is the right thing in a programming language. It just invites too many bugs (and race detectors don't catch enough of them).
I don't know that much about Erlang, or Go for that matter, but there are a couple of things that seem to be at issue:
* From what I can tell the times I've looked at it, Erlang's syntax is pretty standard if you're used to functional programming. Sure, if your background is in C and Java, you might have a hard time picking up Erlang syntax, but if you have experience with Haskell or ML, it will be nothing new, except for (as far as I can tell) its syntax for bit-level operations.
* Structs are not data structures. They are a way to represent objects. Erlang has these as well, in the form of records, which as far as I can tell are pretty much exactly the same thing as structs (see the sketch after this list).
* Hash tables are only really useful with mutable data, which has its own set of issues and which Erlang does not have (much). Erlang does have maps, which act much the same way but are immutable.
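A quick sketch of the record/struct correspondence, since it comes up a lot: a record is compile-time sugar over a tagged tuple, so "updating" a field builds a new tuple and leaves the original untouched.

    -record(point, {x = 0, y = 0}).

    %% Returns a fresh #point{}; P itself is unchanged.
    move_right(P = #point{x = X}) ->
        P#point{x = X + 1}.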
I think it really comes down to that Erlang was designed for a restricted set of uses, which makes it really excel there but seems to hamper it in other areas.
> Hash tables are only really useful with mutable data
But maps (arbitrary k:v associative arrays) are not; they're useful in all sorts of contexts. And R17 adds EEP 43 maps[0] to Erlang (though IIRC only with constant keys at this point).
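A quick shell sketch of those maps and their immutability (R17 syntax, literal keys only):

    M1 = #{name => "joe", lang => erlang},
    M2 = maps:put(lang, elixir, M1),
    %% M1 is unchanged; maps:put/3 returned a fresh map.
    #{lang := erlang} = M1,
    #{lang := elixir} = M2.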
Not directly related, but I'm wondering when, or if, it's even possible to implement OTP at the system level. OTP is basically a process manager with linked dependencies? I know Unix processes aren't as lightweight, but it would still be useful, I think. It's the right level of granularity for languages that have shared mutable state internally.
From a particular perspective, that's what Docker is trying to do (or at least, is providing infrastructure to build something like an "OTP for Docker" with).