It's ironic that the "better" the language (for some hazy definition of "better"), the less actual work seems to get done with it. So Go can be pretty annoying at times, and so can Java (I've said before that I find the two almost identical, but that's beside the point now); and C is horrible, completely unsafe, and downright dangerous. Yet more useful working code has probably been written in Java and C than in all other languages combined since the invention of the computer, and more useful code has been written in, what, 5 years of Go(?) than in 20(?) years of Haskell.
Here's the thing: I am willing to accept that Haskell is the best programming language ever created. People have been telling me this for over 15 years now. And yet it seems like the most complex code written in Haskell is the Haskell compiler itself (and maybe some tooling around it). If Haskell's clear advantages really make that much of a difference, maybe its (very vocal) supporters should start doing really impressive things with it rather than write compilers. I don't know, write a really safe operating system; a novel database; some crazy Watson-like machine; a never-failing hardware controller. Otherwise, all of this is just talk.
As someone who thinks that more advanced type systems and proof-carrying-code are some of the biggest advances in theoretical computer science and practical programming in the last few decades, but also is a C# compiler developer, I have also noticed this. Haskell is really interesting, but C# developers get shit done.
I'm not sure what the cause is, but it definitely gnaws at me.
> I'm not sure what the cause is, but it definitely gnaws at me.
Here's my hypothesis: programming languages are made for people to use, and people spending their time thinking about designing Watson don't want to spend it thinking about expressing their program in the lambda calculus. The only group for whom the stuff they develop coincides with language concepts are those writing compilers. Their domain is the programming language itself, so they don't feel like they're spending their mind power on two different things. They can't imagine that someone trying to solve a tricky scheduling problem wouldn't want to spend time thinking about types.
The other group that can adopt languages like Haskell are those whose domain doesn't require too much thought: namely CRUD applications. These guys want to spend time thinking about type-driven-design. It might actually make their code quality better. They, too, can't understand why people solving a hard concurrent-data-structure problem wouldn't want to spend time thinking about expressing their ideas elegantly through types.
Both of these groups have one thing in common: expendable energy to spend thinking about programming language concepts. So to them, it seems easy; and it probably is, if you're willing to put some effort into it, which is precisely what many developers don't want to do.
Or, in other words, advanced languages have some cognitive overhead that people working on extraordinarily complex applications can't spare?
Maybe. Then again, I've been using C++ again recently and I feel like the cognitive overhead there is so huge that I can barely understand my simple programs. Nonetheless, Microsoft pretty much runs on C++ (and C#).
It's not so much that they can't but that they won't. And the same people don't use C++'s confusing features (crazy template stuff) either. But the problem isn't just the cognitive overhead inherent to the language, but that of the switching cost. As I said in another comment, while Haskell certainly has some advantages, they don't easily overcome the switching cost. People writing compilers happily pay the cognitive price for the switch because that's their core business, while those doing CRUD apps can certainly afford the price.
I've recently started working in and properly learning C++ (mostly because I hoped the OO model would suit me better than C).
Sometimes, I get so frustrated by how weird the syntax can get (the templates always confuse me to the point I have to stop and slowly read the code before I get its meaning)!
It's not a good thing - for production software - for a language to be too much fun to use. It means that developers spend all their time using the language and not enough time solving problems. The suckiness of Java/C# tends to encourage people to solve their problem and move on quickly to the next problem, because it's honestly not much fun trying to extend the language or build abstractions. Haskell & Lisp programmers, however, can spend hours constructing utility libraries, DSLs, and custom monads to solve their problem elegantly, and as a result they have extremely elegant source code that nobody but themselves will ever read, a library of utilities that they will probably not use again, and a solved problem that took 4 hours while the hacky Java solution took an hour.
Nonsense. A high-level language developer uses the higher concepts to get more work done, in better ways, and less time. Hacks always strike back, sooner or later.
The average emacs user I know doesn't actually know or write Elisp, they just grab preconfigured .el files off the Internet or corporate intranet and add the appropriate snippets to their .emacs.
Elisp gurus, in my experience, don't get all that much work done, other than writing lots of editor extensions. This is great if their job is writing editor extensions, less great if their job is something else. It's the same as with compiler writers: they're great at making other people more productive, but when it comes to writing end-user software, not so much. In general, of course - there are exceptions.
Yes, and that says a lot about the final quality too, no? We could say the same about Perl/PHP/Java/C++ and then we're sitting in the same horrible morass we live in today, because people settle for minimally bad code produced as fast as possible.
Good, cheap, fast. Pick two. Almost invariably, cheap+fast is what gets picked.
It's interesting. There's so much more C# code out there, but I wonder about the lifetime of that code, and when it will be swept under the rug (i.e. if javascript eats the enterprise).
Haskell seems to disseminate ideas rather than code. It even disseminated LINQ into C#.
Perhaps this makes the lack of volume of production Haskell code less stinging because it has contributed to languages in other ways?
There's kind of an upside to this as well: because Haskell isn't entrenched in industry, it's still free to make changes and grow the language (e.g. Applicative becoming a superclass of Monad), although this gap may be starting to slowly close with the recent popularity.
> and then we're sitting in the same horrible morass we live in today
But you see, this is the claim that requires substantial evidence. Let's assume we're in a mess. If Haskell is one way out of it, as some people claim, why won't they show us the way? They've had more than 20 years to do it. There are enough Haskell developers out there to give us this pesky evidence we need. And yet we've seen absolutely none. There are way more impressive BASIC programs out there than Haskell programs. I think that there are two reasons for that: 1) People involved in PL research are less interested in programs that aren't compilers, and 2) languages like Haskell make certain tradeoffs that their designers fail to see, and as a result their advantages fall way short of the claims they make.
My internet is filled with half-baked webapps with bad error handling, riddled with stupid security vulnerabilities, many of which could be ruled out by construction in an adequately designed tech stack. Desktop programs work through dint of manpower, crufty testing environments, and users-as-QA.
What exactly are you looking for? Web app frameworks? They exist. Database connectors? They exist. Mathematical proofs relating to correctness? They exist, in abundance. Data structures arcane and common? They exist. Regex libs? They exist. Sophisticated programs running sophisticated systems? They exist, and you won't hear about them except as a rumor on message boards...
If you mean, "is there going to be a big marketing campaign to make me and my manager feel comfortable"... I doubt it.
> Sophisticated programs running sophisticated systems? They exist
Not really. Well, far less than in all those horrible, unusable languages. All I'm saying is, if Haskell makes writing correct code so easy, where is it? There should be tons of it now. It should be practically everywhere. In fact, it should be financially stupid to do anything in any language other than Haskell. So where are all the companies making a fortune by betting on Haskell?
Maybe it already is. That doesn't mean it would happen. People do not genuinely optimize for what they want, they tend to optimize for what they want given what they know today and their internal biases.
I'm not willing to claim that Haskell is perfect, but I am willing to claim it is better. I'm also willing to claim that if it had anything resembling the community, say, C# has (to pick a random example) then it'd solve most of the problems you're highlighting here.
And it's getting that way. The number of programmers proficient in it and projects succeeding using it grow year over year.
But if you want to argue from a perfect world then you need to note reality will look different and search for trajectories that aim to approach your perfect world... not just current states which are far from it.
> People do not genuinely optimize for what they want, they tend to optimize for what they want given what they know today and their internal biases.
Absolutely. Which means that there are substantial switching costs, which a slight marginal improvement can't overcome. But if the benefits are so immense, they should be able to. I think Haskell's supporters tend to overstate its advantages (and it certainly has plenty of those) and discount its disadvantages (and it's got plenty of those, too). All in all, Haskell is an extremely interesting and very good language, but it most certainly does not solve all or even most of the challenges of modern software development. It's good, it's very interesting, but it's not the second coming. I, for one, would take the JVM's monitoring and hot-code-swapping capabilities over Haskell's type safety, because those solve a bigger pain point for me.
I think Haskell has, of late, entered into the echo chamber a bit and been dressed up as much more than it is. I believe Haskell has a cleanliness of semantics which makes it rich, but I don't believe that it has the best RTS at all. My understanding (via Scala, an experiment) is that it's difficult to embed these semantics into the JVM.
Ultimately, choose the tools with tradeoffs beneficial to your goals, obviously. Engineering of any kind is nothing if not about understanding tradeoffs.
That all said, I do think there are upgrade paths for both technologies—JVM languages with better semantics and "ML-alike" implementations with better runtimes. I'd be more than happy to use either.
The reality is that most companies choose "fast+cheap" and usually incur significant technical debt which is paid off through throwing bodies at the problem. It is usually considered better to hire 3 people who are mediocre than 1 person who is very skilled. This is extremely well addressed in the software engineering literature going back to the late 60s.
It's also the case that software engineering is very conservative: "Don't change ANYTHING" is the default state of the industry with regards to practices.
I don't buy this either. If this were true you'd still see, for lack of a better term, the "redis" of Haskell. You can't argue that antirez skimped on development or that he isn't talented enough.
Where is the redis of Haskell? The ffmpeg of ML? There's something else at play here.
I think the simple answer is that a lot of problems people consider useful are intellectually not terribly interesting. Among other things, if it's been done before, the cool factor drops off a good deal.
e.g., I run a tutorial site for common lisp. One request has been for 'example of web framework'. At some point, I'll do it (it's useful), but my heart at the moment is in implementing an extensible indexing system for n-dimensional tuples. It's a good deal more esoteric and a lot more fun than screwing with http/html.
> "is there going to be a big marketing campaign to make me and my manager feel comfortable"... I doubt it.
And this is why better languages fail: because language weenies have atypical minds, and can't comprehend why something marketed inappropriately to the mainstream never catches on as something mainstream.
This is also precisely why Java succeeded. This is also why LightTable finally got across what Smalltalkers had been raving about. It's really not only about who has the best tech or the best message. It's who has the best tech that can make itself understood.
> I'm not sure what the cause is, but it definitely gnaws at me.
My theory is that it is because that is what we teach. If you go to school you learn Java (which is basically C#) and if you learn a language on your own most people recommend Java/C# because it will get you a job.
By extension we end up with the vast majority of coders knowing Java/C#, thus most code gets written that way.
I think many experienced programmers forget how hard it was to write good code in the first place, whatever abstraction they started to use. I "learned" Java and C++ very quickly. I'm sure I was banging out code and stopped looking up most things within weeks. However, it took me years to actually understand how to write good code in those languages. I had to learn the patterns, I had to learn where the short-cuts and sharp edges were. I had to learn what features to avoid and which ones to embrace. It took a long time, but now it's all habit and instinct. I don't even have to think about it.
Asking someone to switch to using a new paradigm—like moving from OOP to FP—is asking them to relearn how to structure a program. All those habits and instincts no longer apply and new ones need to be learned. It takes years to learn how to write good high-quality FP programs. I know I'm still learning and I first picked up Haskell as a primary hobby four years ago. It probably took me a year before I really understood what a monad was and could apply that abstraction, similar to how long it took me to really understand and apply things like visitor patterns.
I believe that if you took an entirely untrained person you could get them to be at least as productive if you taught them with FP versus OOP from the start.
The problem probably is that Haskell is still not expressive or useful enough compared to some of the more "primitive" languages. Take high-performance/numerical computing: in principle a well-adapted functional language should have a lot of potential in that area (e.g. Sisal), yet libraries like repa or accelerate-cuda fall short as soon as you want to go beyond the high-level functionality they expose (for example, dynamic stencils in the case of repa). Try to express a matrix-vector multiplication tailored to the matrices arising from discontinuous Galerkin discretization of a certain PDE (they are typically sparse, but in a predictable way) in accelerate-cuda, and compare that to the ~500-1k straightforward lines of code to do the same with Intel Threading Building Blocks or CUDA. A language like C++ gives you both: very high-level code at the expense of loss of control, and low-level code with very fine-grained control.
There are lots of examples like this in Haskell, where you come to a point at which the abstractions provided to you don't work, mostly when the problem has more structure than you can hope to capture in the type system and some generic operations. In an imperative program it is much easier to "teach" or "tell" the computer what to do, when the compiler or language can not do it for you, whereas in a pure functional language you essentially have to give up at that point.
There are a few people who are doing that. One good example is the work that Anil Madhavapeddy's been doing on the Mirage OS: http://www.openmirage.org/ It's a really impressive use of OCaml.
Not sure why there are so few other good examples to point to though.
Maybe it turns out that the sociological side of programming trumps the technical side; that languages that force or foster a common vocabulary will always "win" over languages where you have so much power that you can easily diverge from the mainstream dialect, and where there is not much culture of restraining this freedom.
On the other hand, these dialects are embedded DSLs; specialized languages created and hosted by a base language. People seem to balk less at the idea of DSLs compared to the idea of powerful languages.
I think that part of the problem is that more 'powerful' languages are often more concept-heavy than more 'pragmatic' languages, and that makes them harder to learn for a lot of programmers.
Take Clojure for example. A lot of people including myself like this language a lot, but to be productive in it, you have to get used to the JVM, Lisp s-expressions, a heavily functional programming style with few side effects, a significant break from a normal approach to OOP, and Clojure's distinctive concurrency constructs which are more complex than Go's. It's not that any one of these things is hard, but you have to know all of them. Unless you have Lisp, or Haskell (or similar) experience, the only one you're likely to know is the JVM, and even then, maybe not.
This isn't like C++ or Java, which are huge languages with a lot to wrap your head around but at their core simply reuse concepts most programmers are familiar with (OOP, basic types). It's the very things that make the language interesting in the first place that make it trickier to pick up.
Most of this holds for Haskell, Erlang, etc.
Then again, real code is written in these languages, but they don't tend to dominate.
I think it's actually a third-order effect resulting from this: being harder to learn means that fewer people learn it, which means that there are fewer libraries and less community support available for the language. That means that even experienced, highly-skilled practitioners who could easily pick up the new language in a weekend are less productive in it.
I've written Haskell, I've written Scheme, I picked up Go in a couple weeks, and learning a new functional language shouldn't take me more than a few weeks. Nevertheless, if I were starting a project now, I would probably pick some combination of Python/Java/C++. Why? Because their library support & ecosystem far outweighs any productivity boosts I could get with Go, Node.js, Clojure, Erlang, Haskell, etc. And yeah, I know you can use Java classes with Scala/Clojure, but there's an impedance mismatch mapping the concepts and existing library structure onto a functional language that doesn't exist with Jython.
Not to mention emacs and the REPL. Emacs was my biggest hurdle learning Common Lisp. Learning a language - OK. Learning an editor - OK. Learning both at the same time - Not OK. I don't think light table was around at the time.
The first programming course we expose students to at Edinburgh has them doing Haskell in Emacs on Scientific Linux. We like steep learning curves, it seems.
If you're looking for a language that will enable "bottom-up development", where you gradually define, in Paul Graham On Lisp style, a language optimized for your problem domain, Golang is not the language for you.
Similarly, if you're looking for a language that will read and write like a specification for your problem domain, so that writing your program has the side effect of doing half the work of proving your program correct, Golang is also not a good choice.
What's worse, those two approaches to solving programming problems are compatible with each other. Lots of sharp programmers deeply appreciate both of them, and are used to languages that gracefully provide both of those facilities. If that describes you, Golang is a terrible choice; it will feel like writing 1990s Java (even though it really isn't).
There are two kinds of programmers for whom Golang will really resonate:
Python and Ruby developers who wish they could trade a bit of flexibility, ambiguousness, or dynamicism for better performance or safer code seem to like Golang a lot. Naive Golang code will outperform either Python or Ruby. Golang's approach to concurrency, while not revolutionary, is very well executed; Python and Ruby developers who want to write highly concurrent programs, particularly if they're used to the evented model, will find Golang not only faster but also probably easier to build programs in.
Systems C programmers (not C++ programmers; if you're a C++ programmer in 2014, chances are you appreciate a lot of the knobs and dials Golang has deliberately jettisoned) might appreciate Golang for writing a lot like C, while providing 80% of the simplicity and flexibility value of Python. In particular, if you're the kind of programmer that starts projects in Python and then routinely "drops down" to C for the high-performance bits, Golang is kind of a dream. Golang's tooling is also optimized in such a way that C programmers will deeply appreciate it, without getting frustrated by the tools Golang misses that are common to other languages (particularly, REPLs).
At the end of the day, Golang is overwhelmingly about pragmatism and refinement. If you're of the belief that programming is stuck in a rut of constructs from the 1980s and 1990s, and that what is needed is better languages that more carefully describe and address the problems of correct and expressive programming, Golang will drive you nuts. If you're the kind of person who sees programming languages as mere tools --- and I think that's a totally legitimate perspective, personally --- you might find Golang very pleasant to use. I don't know that Golang is a great language, but it is an extremely well-designed tool.
There is something to your description. I use Go sometimes, for pragmatic reasons, when I need more performance than I can get from Python or Ruby, but boy is it a ugly language!
There are just dozens of little ugly things Go does, and the result is, well, even uglier: writing four functions for every collection to be sorted, writing x = append(x, foo) everywhere, shitty namespacing that prevents you from writing list := list.New(), and so on and so forth. Even C is more elegant, and I would probably prefer to use C if only it came with a library manager like rubygems or go get and a decent repository of libraries. I would even prefer Java if it produced native binaries and had an integrated library manager.
x = append(x, foo) is a common C idiom too; in fact, it's been awhile since I had to work with STL containers, but I think it was common there too.
I love Golang namespacing rules. They do exactly what I need them to do to allow me to call into other people's code using the right function calls, and nothing else. I think Golang's namespacing is a high point of the language.
> x = append(x, foo) is a common C idiom too; in fact, it's been awhile since I had to work with STL containers, but I think it was common there too.
No, vector::insert() is the common idiom and mutates in place. It'd be written `x.insert(x.end(), foo.begin(), foo.end());` For single elements, vector::push_back() is the common idiom, and also mutates in place.
vector::insert() is certainly not a C idiom. It can never be a C idiom, because that is C++ code.
the x = append(x, foo) idiom can be seen all over C, most obviously with the use of realloc, which is often called to resize an array via x = realloc(x, newsize).
I don't think simply having a separate symbol table for packages and a separate one for types and variables would hurt much, and I hate writing things like aList := list.New(). Having said this, I know many of those things are purported to have such-and-such super important reasons, but it doesn't change the ugly "feel" of the end result. Another instance is the indication of visibility by case: they will go on and on about how it's a triumph of simplicity; I think it's simply an ugly notation.
You frequently have packages that are named after generic nouns, like time, host, file etc. There are many contexts where this same generic name makes for the best variable name: clear, short, and easy to type.
You can do the same thing in Golang by renaming the import that clashes with your preferred variable name. Nobody does it, though, because it's not worth the trouble.
In the C# language we refer to this as the 'Color Color problem'. You have some enum or type Color and then a member of an object which you, quite obviously, want to name 'Color'. It's so annoying for the programmer to work around this that we define special exceptions in naming rules so the programmer can do what they want.
You're right, Golang is all about pragmatism, and a very pragmatic thing for language designers to do is to listen to the marketplace on how the language should evolve. Lack of polymorphism must be the most common critique of Golang. It seems pretty practical to add it, as it would greatly reduce the amount of boilerplate. And it's not as if polymorphism is stuck in the ivory tower - languages used in industry have had them for decades.
When I wrote that paragraph, I definitely did not think of "pragmatism" as "doing whatever people wanted them to do". Golang is also not Perl. If you're looking for Perl, Golang will disappoint you.
I'm not sure how you can create a pragmatic language while ignoring the most common complaints about it.
There's this common attitude of "if you want feature X and Go does not have it, Go is not for you." To some extent it's not surprising since there was confusion about what niche Go hit when it first came out. But when there's repeated patterns in critiques of Go coming from very smart people, it's probably worth paying attention rather than saying it just doesn't fall into the language's sense of "pragmatism".
FWIW, I'm a Python programmer who loves Go. Despite all its good stuff, though, we shouldn't remain content with what we have already.
I wrote the comment you're replying to, so I'm probably in the best position to determine what I meant by the word "pragmatic". I didn't mean "implementing whatever feature people complained most about missing".
If there's a debate to have here, it's about what word better captures the point I was trying to make than "pragmatic". But that's an incredibly boring debate, so, I opt out.
I thought the debate was about polymorphism. It's apparently not worth including because...? It's not already in the language spec? Because the people who advocate it are using the wrong tool? Surely there's a better argument than this.
Lack of polymorphism isn't even close to the most common objection to Golang. The most common objection is the lack of generics, which make it tricky to write reusable container libraries.
Golang comes pretty close to flat-out rejecting object orientation. It's only barely more object-friendly than C is. If you're the kind of programmer that wants to model a problem domain or build code with the Smalltalky feel that Gang of Four patterns give Java and C++, you will also hate Golang.
Generics might happen in the future (the more Golang I write, the less excited I am at the prospect), but a richer object model seems very unlikely. If you want to hang your argument on polymorphism rather than generics (I applied the principle of charity and assumed that's what you meant), the argument gets less valid, not more.
Apologies, polymorphism is a loaded term that could mean different things depending on your background. I mean parametric polymorphism, i.e. generics. What makes you unexcited about them?
I surely see the value, but I'm also convinced by the idea that they will complicate the language, and, in practice, after a year or so writing fairly serious Golang programs (I wrote the emulator and debugger backend for Microcorruption in Golang, for instance, along with a pretty complicated web fuzzer) I haven't missed them.
Or "likelihood that the program runs correctly if it compiles" is too low, or "ability to write concurrent code in straight-line fashion rather than with callbacks" is too low.
When single threaded top-down Python is too slow for a particular task, of late I find it easier/cleaner/faster to use Go routines than multiprocessing / threading in Python, because of the way concurrency was designed in Go, but added to Python.
> "ability to write concurrent code in straight-line fashion rather than with callbacks"
> languages used in industry have had them for decades
If languages used in industry have had something for decades, it does not automatically mean that it's a good idea. Go reconsiders a few things that were once thought to be good ideas (class inheritance, exceptions, generics) and questions their net benefits.
Yeah, and notably, you never hear grumbling about class inheritance, sometimes hear grumbling about exceptions, and constantly hear grumbling about generics. It seems like the amount of grumbling probably tracks the actual net benefits.
They probably will add generics at some point in the future. In the meantime, lots of people are using and enjoying the language as it is.
If you want to write generic methods using your own types, you can define an interface to operate on, and then create types which conform (or wrap more basic types to conform) - as this article demonstrates after it sets up a straw man to criticise golang with. I quite like that approach, though it does depend what sort of libraries you're writing, and what sort of types you're operating on - a maths library might be painful for example but if you're operating on your own types anyway, using interfaces is no great burden.
Well, it certainly isn't a convincing argument that it is the best of all possible languages, but it does lend weight to it being worth a look in its current form. Personally, compared to other languages I've been forced to think in (Java, C, C++, ObjC, Ruby), it comes out favourably, but I'm quite happy to accept that some people might think it is stupid, dangerously simple etc, etc. and listen to their reasons why - there's room for more than one language in the world.
Golang is a programming language that opts for readability over language features. Built-in channels make it easy to break your code down into more granular pieces. It's like the Unix approach to piping, but within a program.
Reading a golang program is simple because you don't have to chase class hierarchies very far. I personally find polymorphic code to be a detriment to team projects. Anything that hides implementation is a hit on readability.
"If you're looking for a language that will enable "bottom-up development", where you gradually define, in Paul Graham On Lisp style, a language optimized for your problem domain, Golang is not the language for you."
Sounds like he is looking for Forth or one of its descendants (e.g. Factor). That is pretty much the definition of how you do Forth development.
I don't know many (TBH, any) C programmers who like or actually use Go. The thing is that, as with C++, if you are still writing in C it's for just a few reasons: portability (in terms of embedding your library) and speed. Especially the former seems very important.
I prefer to work in C over C++ (did both at different jobs). I mostly do C, but where I don't care all that much about performance (specific allocations of memory and the like) I'm likely to prefer Go over C++. It's far nicer and I really like the goroutines and channels concept. I like it so much that I copied it back to C, in fact.
REPLs are for toying around and learning. The Go playground provides an alternative that seems to satisfy these needs for most people. If people really missed a Go REPL then the existing ones would not be "curiosities".
People use REPLs for more than "toying around and learning." They can be very useful both for exploratory programming (which is different from toying around in that you're actually trying to accomplish something) and debugging. It seems more likely to me that Go doesn't have a good REPL because Go just isn't very amenable to general-purpose REPLs, which is the same reason C REPLs tend to be more trouble than they're worth.
> People use REPLs for more than "toying around and learning."
In all those years of Scheme, Ruby, Python, Haskell, Scala I have never used REPLs for anything other than tutorials or trying out something quickly.
> that Go doesn't have a good REPL
What makes you think that the Go REPL is not a good REPL? Have you actually tried it? Probably not, because you do not care as much about REPLs as you think you do.
REPLs are for interactive development, where you try out your code live on a command line, insert working snippets into an editor, and gradually build up a working program.
Really shines when the language / REPL has decent introspection, is able to break out into the REPL prompt in the middle of your code with in-scope identifier lookups, and you're dealing with lots of libraries that you don't necessarily have encyclopedic knowledge of, yet need to get the job done under time pressure.
At least that's how I use REPLs, most usually, pry in Ruby.
This is a great rant. Having recently done the 'golang' tutorial on the site a lot of it resonated with me. I have a slight advantage in that I worked at Google when Go came into existence and followed some of the debates about what problem it was trying to solve. The place it plugged in nicely was between the folks who wrote mostly python but wanted to go faster, and the folks who wrote C++. It was a niche that Java could not fill given its structure.
In a weird way, it reminds me of BLISS[1]. BLISS had this relationship to assembler that "made it manageable" while keeping things fast. BLISS was replaced by C pretty much everywhere that C took hold (one theory is that BLISS is the 'B' programming language, Algol is the 'A' language; personally I think BCPL is a better owner of the 'B' moniker). The things that C has issues with, memory management, networking, and multi-threading, Go takes on front and center. It keeps around some of the expressiveness and type checking that makes compiling it half the battle toward correctness.
Now that was kind of what the Java team was shooting for as well but with limited success. I feel like between Go and Java we've got some ideas of what the eventual successor language will look like. For me at least that is a step in the right direction.
As someone who writes both Lisp and Go (and enjoys both), I find it odd that this article uses Lisp and Haskell as points of comparison.
> programming in Go isn’t “fun” in the same way that Python, Haskell, or Lisp is.
Yes, writing Lisp is much more "fun" than writing Go for me. But writing Go is much more productive and maintainable (both for solo projects and for larger groups).
> The idea is that there’s simply no way that any group of designers could imagine how people will want to use their language so making it easy to extend solves the problem wonderfully.
On the other hand, there's no way of making a language so easily extensible while also maintaining relatively uniform idiomatic, design, and style conventions across a language community. Go very heavily favors the latter.
> In Lisp, CLOS (Common Lisp Object System) was originally a library. It was a user defined abstraction that was so popular it was ported into the standard.
CLOS is actually very similar in some ways to Go's structs. Both emphasize encapsulating data inside an "entity" (object/struct), and separating the notion of behavior from that entity. To quote one of the language authors from GopherCon[0]: "Interfaces separate data from behavior. Classes conflate them."
Lisp was "designed" (if you can call it that) around the principle that extending a language should be as easy as writing a program in that language. Go was designed around the principle that there should be only one dialect of the programming language, for the sake of cohesiveness. It's a stronger assertion of the Pythonic motto, "There should be one, and preferably only one, obvious way to do it".
> Yes, writing Lisp is much more "fun" than writing Go for me. But writing Go is much more productive and maintainable (both for solo projects and for larger groups).
I heard ASP.Net WebForms get this defense, that no other platform could handle the "enterprise" needs for maintainability and every other platform was just building a pile of cowboy spaghetti code.
Well, an anecdotal counter-example is that we have some WebForms apps maintained by another team in our company, and the developers from that team seem very happy and are productive enough to meet reasonable deadlines. I despise WebForms, but that doesn't mean anything from a product manager's perspective.
Yes, my point is that Go intentionally makes it hard for users to extend the language, while simultaneously learning from the successes and failures of other languages, and incorporating the results of that into Go.
The fact that users are able to create core language features has its drawbacks when it comes to cohesion in the language community. Lisp itself is a perfect example of this - codebases are immensely fragmented in terms of which libraries they end up using (Quicklisp is, in part, an effort to solve this).
Lisp was "designed" (if you can call it that) around the principle that extending a language should be as easy as writing a program in that language. Go was designed around the principle that there should be only one dialect of the programming language, for the sake of cohesiveness.
Both are legitimate philosophies for different use cases, but that's why it's kind of silly (IMHO) to compare Go to Lisp - the goals are not only different, but diametrically opposed in most ways.
I think the problem with Lisp is that it grew organically from a large community across multiple languages and 50 years of life. Not that the language was excessively expressive.
Go, on the other hand, was designed and implemented from a single vendor in a handful of years. That makes it far easier to provide a single, coherent platform. Once you provide a coherent, complete platform you don't need your users to solve every edge-case by implementing their own language features.
Any language that doesn't provide some kind of code-that-writes-code workflow will eventually leave its developers dealing with frustrating boilerplate.
> Yes, writing Lisp is much more "fun" than writing Go for me. But writing Go is much more productive and maintainable (both for solo projects and for larger groups).
Not sure about this statement. As the OP said, in order to work around the anti-features of Go you need to come up with super verbose syntax. That's easy enough, true, but when the project gets quite big the mission becomes hard.
Also, the "go get" thing is neat for small projects, but for larger projects IMHO it's just broken. Godeps or a few sh scripts can alleviate this, but it's definitely something that needs to be addressed.
Yes, Go is boring, but it's interesting that the people who hate on Go for being boring are not the same people who hate on Java for being boring (though Java 8 is much improved).
I don't really get what all you guys mean by "boring". I'd appreciate some clarification (especially about Go).
What does it mean for a language to be "fun"? I have some intuition of "fun", but it doesn't play well with what you are saying. I'd say being "fun" means the ability to solve your problems easily, concentrating on the problems themselves, not on language pitfalls. Isn't it? That's pretty much synonymous (maybe a little broader, though) with "productive". So I can't comprehend how a language can be "less fun, but more productive and maintainable".
> I don't really get what all you guys mean by "boring". I'd appreciate some clarification (especially about Go).
Go offers very little that's interesting or innovative and doesn't go out of its way to empower you in any particular way. It's fairly straightforward and tries to avoid gotchas, which is both good and bad. This means writing code is easy and you're unlikely to make a mess, but there are relatively few opportunities for making your code read better or to structure it better.
In a nutshell: Writing code is more of a drudge than an art.
For example, I was recently writing a processor for some data that came in an XML file format. I figured maybe throw it at Go since it was a lot of data and Go has pretty good XML support. In Go, this was a lot of looping and copying and basically nothing all that confusing but it just felt like busywork. For fun, I decided to write it in Clojure as well. The Clojure version was much more pleasant to read, with all the loops disappearing into simple compositions of map, reduce and filter. It felt nice to be able to easily express things in such an elegant way instead of on a more machine-like level.
(To be clear, I am not a great Go programmer and wouldn't want to speak for the Go community, though this is something the language creators seem to agree with. I'm just trying to explain how I understand this stuff.)
> Go offers very little that's interesting or innovative and doesn't go out of its way to empower you in any particular way.
That's true of the features looked at individually. I think its particular combination of design choices is interesting and novel ("innovative" requires a value judgement that I am not quite ready to make in Go's case; though I think the momentum it's gained in certain areas is quite likely because it does offer something that is innovative in its utility, even if it's not immediately apparent why the particular combination of features it provides should be).
My feeling is that it's mainly the CSP features and the relatively good speed that make Go stand out despite being "boring." Having such strong concurrency primitives baked into a generally performant language is really cool and makes it the obvious choice for doing large tasks quickly.
People hate Java because it's unremarkable but successful. There's nothing particularly innovative or wild about it, but equally, nothing particularly awful (plenty of mediocre, of course). Whether despite or because of that, it somehow became the most widely-used language in places you get paid to sit and write code. It always feels cool to hate popular things, and it always feels cool to hate boring things. Hence, Java became the Nickelback of programming languages.
I like some Nickelback songs. Also, I don't hate on Java; I even got seriously into it for about two years (for a university project, and later got a job based on that), and it just made me unhappy (which is a big deal for me, for unrelated reasons). However, I never felt unhappy coding in Ruby (love it!), C (even though it was/is frustrating sometimes), C++ (ok, I am relatively new to it), Python, JavaScript... I tried other languages like Haskell, Erlang and similar that didn't stick (Erlang syntax will plague me with confusion forever, probably), but none of them made me unhappy when I worked in them.
The agreeableness of Python is the reason I settled on it after experience with a bunch of other languages. I was starting to find software engineering to be a chore and was losing interest in it until I started using Python. No language is best at everything, and all the ways python isn't the best have been worked around without much bother.
Go did not strike me as a language I would enjoy writing, despite the strengths it surely has in some areas. I would just be giving up other strengths I prioritize more highly.
Frankly, Python is good enough for most things. It's not perfect, but it's one of those languages you can really use for anything (like Java). I'm not a fan of the syntax, though; I prefer Ruby's (Ruby is not for "everything", unfortunately). I've seen insane stuff done in Python (3D games, GUI apps, complex network tools...) running quite well on my old Mac.
Go is relevant where you want pretty good performance and concurrency with a low memory footprint (unlike Java), without using C/C++ and threads. But it's clearly not for everything.
> I’ve been using Go since November and I’ve decided that it’s time to give it up for my hobby projects. I’d still be happy to use it professionally, but I find that programming in Go isn’t “fun” in the same way that Python, Haskell, or Lisp is.
I can't leave Go, but indeed I think absolutely the same. The thing is that Go was designed to replace C++, a goal at which it absolutely failed. So we have a Python/Ruby replacement with very old patterns (manual error checking, for instance).
Indeed, I think Go is an awesome language, but sometimes I really feel like a monkey, repeating myself over and over again.
>The thing is that Go was designed to replace C++, a goal at which it absolutely failed.
I was so excited for Go to become a nice replacement for C++. Go has awesome build times; C++ has awful build times. This is partly due to all the extra work the C preprocessor has to do to make sure all the headers are present.
The big loss for me is that Go has absolutely no operator overloading. This is one of the biggest wins in C++, especially since I do a lot of high-performance scientific computing. Overloading operators makes code a lot more readable and user friendly, in my opinion (I'm sure others will disagree, though). I'm aware that you can hack Go to mimic operator overloading, but it involves hitting the vtable. And hitting the vtable is completely unacceptable in performance-critical code.
I wrote this analysis [1] in the context of Julia vs. C++ about why I think that Go has not been successful as a C++ replacement. The primary issues are performance (the ability to trade off abstraction for performance, really), generics, and operator overloading. We did not intend for Julia to be a C++ replacement, but it turns out that it's quite a good one and a lot of adopters have come from C++ – I think that these features are why.
Go is also easy to parse, AFAICT, both syntactically and semantically, and they ship a parser in the standard library.
While absolutely not implying that it's a reasonable alternative, someone could try implementing infix operators as a syntactic layer on top of Go (kind of a preprocessor). It might work to fix the use case presented in the OP (math operators being far more readable in their infix form).
Infix operators might just be implemented as a syntactic sugar layer after all, at least as a first draft; if you're able to parse a Go source file and its types in a way that when you see a sum expression, you already know whether both arguments implement a Numeric interface and/or Add method, you might be halfway through.
To clarify: I love infix operators and I'd love if Go added them, and I consider the above just a hack.
> Overloading operators makes code a lot more readable and user friendly,
Completely true for scientific computing, but I've been subjected to libraries written in Scala recently, and operator overloading there has led to some very difficult to read and use libraries.
The key to preventing overloading of definitions from being a complete disaster is to be very careful not to overload meanings. I.e. if you're going to have lots of methods for your `frob` function, you had better be crystal clear what it means to `frob` something and only add methods that fit that definition. Scala and C++ libraries tend to like to just pick a random operator and use it to mean some arbitrary made-up thing because they feel like it, ignoring the meaning of the operator. The classic example in C++ is using << for stream output. That's the left shift operator, for Pete's sake! Why would that be what you use to print stuff? Because you liked the look of it? It's not terribly surprising that this is a complete usability disaster.
> So we have a python/ruby replacement with very old patterns (I think manual checking errors ie.).
Before using Go, I have always programmed with "modern" languages that use exceptions for error handling. So the concept of only doing manual error handling was new to me, but after a few months of golang I love it now and exceptions in python started to really annoy me.
In Go, I'm forced to do error handling right after every single call that could possibly return an error. While I initially thought the resulting code looked ugly and repetitive, the result is much more focused and intelligent error handling.
If I wanted the same granularity and robustness of error handling in Python, I'd have to put a catch-all try..except block around every single call.
Exceptions are a good time saver in stateless, request-based programs (like HTTP/web stuff) where it's sort of acceptable if one request fails. I could just not care so much about detailed error handling; the worst that could happen is that I don't catch some error and the wsgi wrapper catches it and returns error 500 to the client.
But building daemons in Python is just plain painful. If I don't want my program to crash at some point, I have to reckon with the fact that anything can raise an exception I didn't think of.
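The "check right after every call" style being described looks roughly like this (a toy sketch of the pattern, not code from the comment):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseSum illustrates explicit error handling: each fallible call is
// followed immediately by an error check, so no failure propagates
// invisibly up the stack the way an uncaught exception would.
func parseSum(a, b string) (int, error) {
	x, err := strconv.Atoi(a)
	if err != nil {
		return 0, fmt.Errorf("first operand: %v", err)
	}
	y, err := strconv.Atoi(b)
	if err != nil {
		return 0, fmt.Errorf("second operand: %v", err)
	}
	return x + y, nil
}

func main() {
	if sum, err := parseSum("2", "40"); err == nil {
		fmt.Println(sum) // 42
	}
	if _, err := parseSum("2", "forty"); err != nil {
		fmt.Println("error:", err)
	}
}
```

Repetitive, yes, but every failure point is visible at the call site, which is the robustness property the comment values for daemons.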
> The thing is that Go was designed to replace C++, a goal at which it absolutely failed.
I don't think Go was designed as a general replacement for C++, I think it was designed as a replacement for C++ for the things that Python was almost better than C++ for, but not quite.
EDIT: Though, OTOH, I do get the feeling that the people who designed Go largely are the type of people who feel that just-plain-C is a superior choice to C++ for most of the rest of the domain where C++ might be used, so they might view Go as a general replacement for C++, because they start with a smaller view of C++'s useful role than the market at large has.
Go wasn't designed to replace C++, Go was designed to replace C++ at Google. And it totally succeeded at that, given the list of critical infrastructure at Google where Go is used, and in some cases has replaced existing C++ implementations.
> I was asked a few weeks ago, "What was the biggest surprise you encountered rolling out Go?" I knew the answer instantly: Although we expected C++ programmers to see Go as an alternative, instead most Go programmers come from languages like Python and Ruby. Very few come from C++.
Yes, but he was talking about programmers out there, not at Google. IIRC, in 2012 they were still _very_ reluctant to talk about any use of Go within Google, with vitess being the only exception.
I think people gave Go a lot of credit just because it was designed at Google. People have high respect for Google engineers, so they assume what Googlers have designed must be flawless. So they take for granted ideas like the lack of exceptions or the lack of operator overloading being a good thing, even though I'm quite sure they would be quick to criticize such BS if it were a feature in a language not coming from Google. But in the long run, the language must defend itself on its own, without the authority of the people/organization behind it.
I have a feeling that the attention would be much more modest if they happened not to be on Google payroll when they designed the language. Anyway, whether it's because of Ken or Rob, or Google brand, it doesn't change the point I was trying to make.
I think people gave Go a lot of credit just because it gets a lot done, especially comparatively to the time taken to learn it, and the stuff you come up with can very well be used in production because you made it robust thanks to the excellent built-in testing and profiling tools.
What's interesting with go is that if you can program, you already know it; all your time is spent building stuff, not learning the language. And when it comes to building stuff, Go is just straightforward.
Rust is great but it's still frontier territory right now. The documentation for the standard library is very sparse and stuff like HTTP is still in very early stages. I would only adopt it for production work if you're willing to be very actively involved in the nascent Rust community, follow the mailing lists, fix bugs yourself, etc.
On the other hand, if that's the kind of thing that excites you, Rust is a really great option right now.
As much as I want to emphasize that this is not the right way to think about programming in Go, I do want to point out that the example has a lot of extra code that isn't needed. Indeed, it's not really possible to have a bug of the kind the author wrote if written properly:
The explicit type switch also lets you streamline handling of uint8, uint16, etc (just return value) and panic or error if given a type you don't expect.
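A sketch of what such an explicit type switch might look like (my own reconstruction of the pattern, not the author's actual code):

```go
package main

import "fmt"

// Abs via an explicit type switch: unsigned types are returned as-is,
// signed types are negated when negative, and an unexpected type is
// an explicit error rather than a silent bug.
func Abs(v interface{}) (int64, error) {
	switch n := v.(type) {
	case uint8:
		return int64(n), nil
	case uint16:
		return int64(n), nil
	case int:
		if n < 0 {
			return int64(-n), nil
		}
		return int64(n), nil
	case int64:
		if n < 0 {
			return -n, nil
		}
		return n, nil
	default:
		return 0, fmt.Errorf("Abs: unsupported type %T", v)
	}
}

func main() {
	n, _ := Abs(-7)
	fmt.Println(n) // 7
}
```

Note how the single-type-case restriction mentioned below forces the `uint8` and `uint16` cases to be written out separately even though their bodies are identical.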
But I find it unfortunate that type switches must have single type cases. For example, it's too bad this doesn't work:
Every design decision has trade offs. Since the author only covers the more obvious negative aspects of those 2 features, I'd like to cover the more subtle positives.
> Extensibility
One really great thing about that is how consistent it is. Every func in Go has a unique identifier: the package import path and the func name pair. This makes it possible to build tools like godoc.org and the `doc` command that let me look up the exact behavior of (unfamiliar) code.
In this case, I can go to godoc.org/math/big#NewInt or type `doc big.NewInt` and see:
// NewInt allocates and returns a new Int set to x.
func NewInt(x int64) *Int
In fact, it's so predictable that I don't have to think. As long as it's Go code, if I ever run into an unfamiliar func F of package P, I can always go to godoc.org/P#F or `doc P.F`. No searching, just an instant lookup. If I need to do 30 such lookups to solve a task, it makes a big difference. This applies to _all_ 3rd party Go packages. That is big.
On the other hand, something like `x + y` is magic to me. I know that computers work with bytes at low level, and I want to be _able_ to understand what happens on that level. I understand and accept such magic for built in types. But I certainly wouldn't want to be reading a 3rd party Go library code that says `x + y` on its custom types. Where would I go to find out what that custom plus operator does? How does one make an unexported version of a plus operator? There'd be less consistency, more exceptions and rules, more variations in style and less tools that can be built to assist/answer questions about Go code.
> The Type System
I don't have time to cover this atm, but there are some advantages to the explicitness and verboseness of Go's approach. Can you think of any?
I'm not suggesting it's the best it could ever be, just that there are both advantages and disadvantages that should be considered.
Go's strengths are tooling and libraries. Especially when you see the well written libraries all revolving around internet protocols, encodings, and content generation. It makes it extremely easy to write performant web sites and http API endpoints which many people do daily.
All that said, I hope Go adds D-like compile-time code generation and static typing, or that [D,Rust] gains better tooling and HTTP service-oriented libraries and APIs that are as well written as the Go standard library packages. Secondly, I hope [D,Rust] sees how awesome having a common automatic format, build, and test tool is. In so few languages is the testing package as simple as Go's. In so few languages is the build process as easy as Go's.
All languages suck, at some level. Whether this matters to you is a matter of need, willingness to compromise, and benefits to you, for some definition of you. This often leads to people writing their own language if they can which generally only pleases themselves.
I have the same feelings about Go. After working several months on a side project in Go, I gave up, because I didn't feel as productive as with Java (not to mention Scala). There are lots of annoying things: missing generics leads to a lot of useless code which could carry bugs. Without generics you cannot create an Option data structure, but have to return nil (Hoare's billion-dollar mistake). Without generics you cannot write concise object/functional code like someCollection.map(i -> i * i). Go has no good support for immutability. Mocking is awkward, because you have to code your mocks by hand. Unicode handling is a pain.
That is why Go mainly attracts people from scripting languages (they get a bit more type safety and better performance) and C (they get a bit more type safety and fewer errors). Coming from other languages, Go is not that attractive. I'm hoping for Rust to succeed.
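To make the `someCollection.map(i -> i * i)` complaint concrete, here is what the per-type workaround looks like without generics (a toy illustration of the pattern, not code from the comment):

```go
package main

import "fmt"

// MapInts is the "generic" map, specialized to int. To map over
// strings or floats you would have to duplicate this function for
// each element type - the boilerplate the comment objects to.
func MapInts(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	squares := MapInts([]int{1, 2, 3}, func(i int) int { return i * i })
	fmt.Println(squares) // [1 4 9]
}
```

The alternative, `[]interface{}`, avoids the duplication but gives up static type checking at every call site.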
They are indeed, Julia has replaced the combination of Python (prototyping) and C++ (implementation) for me for scientific work - would be painful to go back.
That's not really a fair statement – plenty of people (me included) find Julia to be a lot of fun. I'd be interested to hear why you think that, though.
If the OP is looking for a C++ alternative and knows Haskell, Rust is also an interesting alternative, although it's at a very early stage of development.
I was interested until I discovered Julia doesn't have a useful form of OOP inheritance. Ok, you don't always need it, but for some things it's indispensable (e.g. using widget/window systems).
That's a bit like saying, for example, Python isn't interesting because it doesn't have monads (and they're essential for managing state, or whatever). Different languages have different ways of doing things, and lacking feature X isn't necessarily a deficiency.
Julia generally favours composition over inheritance. You might be interested to look at GTK.jl, its widget/window system.
If you guys like Python and are having second thoughts about Go, take a look at Nimrod.
Also, I personally look at Nimrod as a faster and slightly cleaner Python with a few extras, and I don't try to break the compiler using all of its cutting edge features. If you use it that way you will be happy.
A 16 year old kid decides that Go isn't suitable for writing an embodiment of Principia Mathematica. Surprise: Haskell is a better Haskell than Go.
Go is something like C, but with a few more features, plus GC and some neat concurrency facilities. Why would anyone expect that to be anything like Haskell, or even to have a type system approaching that kind of power?
Very well put. I've tried Go for a while now, and I was wondering why it doesn't appeal to me like it does to some of my friends. Reading through your response, I found the reason for my gut feeling. I guess I belong in your first category ;).
Does anyone actually use CLOS? It's always held up as this great example of language extensibility, but my impression is that since it's not part of the core language Lisp folk tend to create their own abstractions instead.
Beat by a few minutes again, as piokuc says, it's in the standard. But even so, it may still be implemented as "just a library" in some CL implementations. That's the beauty of the language, it's so versatile you can extend it within itself, and it really feels like a language extension and not like you're accessing yet another API. If someone wanted a different object system, they could produce it within CL giving totally different OO semantics.
How does it make the OP less relevant? I know pretty smart programmers who are still in high school. And it doesn't matter; the guy obviously knows Go.
My implication was that the article was very professional and I was quite impressed that it was written by somebody so young. I think that it makes it more relevant, not less.
Me neither, but a classmate was reading over/working through the dragon book our senior year of high school. There's a reason he went to CMU and I didn't. On the other hand, compilers (well, a less than formal version) was the third CS course topic at GT. Definitely approachable, with good faculty/staff, as an early topic of study.
I think that it is very impressive that he is having these thoughts at such a young age. That's it, I wasn't saying that his opinion should be discredited!
Yeah, indeed. I think with the flak this article is taking elsewhere in the comments people might be reading this as an argument against him. I know that was my first impression, glad I asked for clarification.
I have a problem with his extensibility critique. I don't agree that having generic types is good for the language. The problem with polymorphism is that your users are going to abuse it. There will be class hierarchies that confuse and distract from the algorithm at hand. You will be mutating and mangling objects rather than dealing with the issues at hand.
Reading a go source file is like following a for loop. It's quite technical. Reading a Java source file can be a case of trying to figure out the high level abstractions of the program.
The genericity being talked about here is parametric polymorphism, not subtype polymorphism. Parametric polymorphism doesn't involve inheritance, so it doesn't lead to any confusing class hierarchies.
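For what it's worth, Go itself later added exactly this kind of parametric polymorphism (type parameters, in Go 1.18+, well after this thread). A minimal sketch of the distinction:

```go
package main

import "fmt"

// First is parametrically polymorphic: one definition works for every
// element type, with no inheritance or class hierarchy anywhere.
func First[T any](xs []T) T {
	return xs[0]
}

func main() {
	fmt.Println(First([]int{10, 20}))      // 10
	fmt.Println(First([]string{"a", "b"})) // a
}
```

The type parameter `T` is filled in at the call site; there is no "hierarchy of Firsts" to navigate, which is the point being made against the parent comment.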
>Ok, it’s only a few characters, big deal. Now what does this do?
>b.Mul(b).Sub(big.NewInt(4).Mul(a).Mul(c))
>Or in Haskell
>b * b - 4 * a * c
umm.. correct me if I'm wrong, since I'm not well versed in either Haskell or Go, but doesn't the Golang version read better in terms of scoping? I mean, just by reading the Go version I know what it will evaluate to, but in the Haskell version I don't know the precedence order just by reading.
I do, and I don't even know Haskell: (b * b) - (4 * a * c)
edit: I don't know go, but I assume the go example is intentionally unreadable. But complaining about infix multiplication and subtraction is... curious.
No, that's pretty much what the Go would look like, for any non-standard numeric type. (Though it does need to be a non-standard type, like a "bigint"... a merely type-wrapped standard type ("type Feet uint32") can still have arithmetic done to it.) In fact the default go fmt rules won't even let you put whitespace in there.
However, my general call on that is that if you need your language to provide overloaded arithmetic operators, stick to languages that actually want to give that to you. It comes up a lot in language discussions because it's such an easy example to type out in a couple of lines, but in practice I don't think you see a lot of need for exotic int types being used pervasively throughout your code. Don't use Go for that. Don't use any language that doesn't do operator overloading. But this is far less a common use case than seems to be supposed... a variant of the "look under the streetlight" fallacy, I think.
Go's really for heterogeneous concurrent business- or server-like programming. It isn't a language for mathematics, and the burst of libraries I've seen in the last couple of weeks that seem to be focused on that is bizarre to me. Go isn't good at that, isn't meant for that, and probably never will be. Go's nice for the very large niche that "business- and server-like programming" is but it's not so fantastically wonderful in every way that I'd want to work too hard to make it work in a niche it's not really suited for.
I'm pretty sure that the precedence is the same as the usual conventions of arithmetic.
You can ask for it in GHCi (the REPL):
> :i (-)
> [...] infixl 6
> :i ( * )
> [...] infixl 7
So ( * ) has precedence over (-). The expression is then:
(b * b) - (4 * a * c)
Personally, I don't know if infix syntax is truly more readable overall than purely prefix/postfix syntax, or other such uniform representations. It might be for arithmetic, because it is so ingrained, but the precedence order was never hammered into me to such a degree that I learned it by heart; I used to defensively add more parentheses than I really needed, just in case.
May I ask what languages you tend to favour? I've had a few friends tell me lately that they were all giving up on Go after persisting with it since early 2010. They are all people who tend to work in C/C++ so they don't tend to be thrown by a steep learning curve. I'm quite surprised because they were all vocal proponents in the beginning.
These sorts of posts are profoundly boring. I know people will up arrow it -- some sort of spiteful "Down with Go!" contrarian thing, when they aren't talking up rust -- but it isn't because the content is interesting or illuminating, but rather as some sort of activist thing.
This particular piece (by a high school student, as an aside) starts off trying to create a surrogate for generics in Go.
Don't.
Here's the thing -- most of these posts are not about people making real code, but people making -toy- code. Where every function is all things to all people.
The number of times I've needed a generic abs in my life -- zero.
The number of times I've needed a double floating point abs in my life -- every single time.
That's the thing about generics and real, actual world code: Your types are generally much less amorphous than you think. They really are. This illusion that everything needs to be everything just does not hold in the real world.
But it is always the tiring example used against Go. Boring.
A generic abs? Not particularly useful, you're right. But he was using that as a trivial example. Generics are incredibly useful in a lot of contexts. One context that the author touched on briefly is new container types. In Go, if I have a slice, the type of element the slice contains is specified. But if I have a user-defined container, either the container has to be specialized for a single type (which makes it largely useless, at least as reusable code), or it has to take interface{}, which really sucks. No more homogeneous container, no more static type information when working with this container, etc.
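A minimal sketch of the problem (the `Stack` type here is illustrative, not from the post):

```go
package main

import "fmt"

// Pre-generics, the only way a user-defined container can hold
// arbitrary element types is interface{}.
type Stack struct {
	items []interface{}
}

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(1)
	s.Push("two") // compiles fine: the container is no longer homogeneous
	_ = s.Pop()
	n := s.Pop().(int) // caller must assert the type back; a wrong guess panics
	fmt.Println(n + 1) // 2
}
```

The compiler can no longer catch a mixed-up element type; the error surfaces only at runtime, at the assertion site.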
Trivial examples are terrible as evidence to support a point. Trivial examples are generally good for pedagogical purposes, which doesn't really fit with the OP.
For example, the OP mentioned that using a stack in Go requires casting from `interface{}`. But this is patently ridiculous. Most people who need a stack in Go just use a slice: https://code.google.com/p/go-wiki/wiki/SliceTricks --- Is this a bit messier and limited than a container type suited to stacks? Yes! But it gets the job done the majority of the time. Therefore, it's not as big of a problem as the article leads you to believe.
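For reference, the slice-as-stack pattern from that wiki page looks roughly like this (variable names are mine):

```go
package main

import "fmt"

func main() {
	var stack []int // a plain slice works as a fully typed stack

	// push
	stack = append(stack, 10)
	stack = append(stack, 20)

	// pop: read the last element, then reslice to drop it
	top := stack[len(stack)-1]
	stack = stack[:len(stack)-1]

	fmt.Println(top, len(stack)) // 20 1
}
```

No interface{}, no casting: the element type is known statically, at the cost of the stack not being a reusable abstract type.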
Not all things require clean polymorphic container types. Sometimes you can get by with less. And when you need more, you're going to feel a bit of pain with a less expressive type system. It's all part of the trade off. But it's not good to represent this as a scenario where it's all just bad stuff all the time.
It's more than just feeling pain when writing code. Using interface{} has a runtime cost. Both because of the vtable and because it requires heap-allocating something that could otherwise be inline.
Yes, that's true. The "pain" I had in mind wasn't in using `interface{}`, but in writing non-generic code. This doesn't result in a performance loss, but will result in some code duplication. (Or alternatively, extra complexity from using some sort of code generator.)
But, to be fair, in the Go world, both mechanisms are used. It depends on which costs are important.
You could easily define an interface and write your red black tree once for all types, you'd just have to wrap basic types like int to conform if you want to add them (most often this sort of list is of a custom data type though). e.g.
So if you need a data structure like this, you can do it without much trouble without generics. Perhaps you don't feel that solution is as clean, but it's certainly pretty easy to do.
NB that golang has not ruled out generics, it's just not something they felt compelled to put in initially, and not many people who use the language miss them.
Except the containers aren't typed. They are essentially containers of void*. If you believe that to be "generic", then you don't understand the topic.
You are right, I overlooked the magic of Go's built-in containers (I admit, I haven't touched the language since we looked at it around a year ago). I amend my comments to apply to user-defined containers and types, where generic == interface{}... at least, it did when we tried the language out.
This is an example of a simple "generic" in Go, as I recall it:
type mytype struct {
    mydata interface{}
}
The casting this requires felt like a step into the past when you're used to eg the STL.
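Concretely, reading the field back looks something like this (extending the mytype example above):

```go
package main

import "fmt"

type mytype struct {
	mydata interface{}
}

func main() {
	m := mytype{mydata: 42}
	// Every read requires asserting back to the concrete type,
	// and the assertion is checked only at runtime.
	n, ok := m.mydata.(int)
	fmt.Println(n, ok) // 42 true
}
```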
Generic abs is an unrealistic toy example, sure. A much better example is an extensible array (a.k.a. vector a.k.a. list). This is the most basic data structure in all of programming, the first data structure needed in almost every program I write, and to be honest, I am of the opinion that a programming language that doesn't provide this has no reason to exist in the twenty-first century unless it was grandfathered in from the twentieth.
Funnily enough, a list[0] in Go is actually a doubly linked list. Resizable slices[1] are built into the language. Though this leads to C-style coding that some people think is annoying (assigning the result of append back to x, since it could allocate a new array).
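The append reassignment idiom in question looks like this:

```go
package main

import "fmt"

func main() {
	x := make([]int, 0, 1)
	// append may allocate a new backing array when capacity runs out,
	// so its result must be assigned back; discarding it loses the growth.
	x = append(x, 1)
	x = append(x, 2) // capacity exceeded: a new array is allocated here
	fmt.Println(x)   // [1 2]
}
```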
I posted a comment about this to the blog, but I released a tool which will generate type-specific source code from a generic definition. Code is at https://github.com/joeshaw/gengen
The abs example could be reduced to:
import "github.com/joeshaw/gengen/generic"

func abs(x generic.T) generic.T {
    if x < 0 {
        return -x
    }
    return x
}
You could then generate different type-specific versions:
The downside is that the API is annoyingly non-generic (a different abs variant for each type) but at least you didn't have to type it in a bunch of times.
I agree that abs() is kind of a toy example, but this approach has helped me a lot for various slice operations like indexing and deleting.
The same sort of sales pitch goes for using no-sql because it's so flexible. But as time goes by you settle on a data model and it turns out you don't need flexibility as much as consistent data or services that scale well.
It is funny to see that somebody else got annoyed by Go the same way I did. I could not get over the fact that I can't pass an arbitrary type to a function other than as interface{}. Doing that just delays the problem, so I decided not to have one generic function but multiple specific ones. This violates the DRY principle but at least works. I am still looking for best practices with Go. I think the biggest advantage of the language is that Google uses it in production and the libraries are well tested. The community is great too. Clojure has a much smaller community, which limits the usage of that environment quite a bit. I agree with the author that Lisp is more fun than Go, but that is a single dimension of the entire problem.
func abs(x Top) Top {
    switch v := x.(type) {
    case int32:
        if v < 0 {
            return -v
        }
    case int64:
        if v < 0 {
            return -v
        }
    case float32:
        if v < 0 {
            return -v
        }
    case float64:
        if v < 0 {
            return -v
        }
    }
    return v
}
This isn't simpler, it's smaller. In fact it's more complicated, since instead of 4 completely separate cases you have 4 half-cases and a default.
For instance, pass a positive int16 and you get the right answer, but pass a negative int16 and you get a useable but wrong answer instead of nil from the original.
It was an example of a language feature, namely short variable declarations in type switches, which avoid repetitive casting. The example below has the default clause which handles unknown types. I suppose the point is that you'd never actually do any of this because the method is disgusting.
I did have to put "return x" as the last statement, though, not return v. And I'm not sure I'm happy about an "abs" that happily returns a string if fed a string, but, well, that's Go.
Yes, the return statement was what I was referring to. You can't use a type switch to do some sort of an implicit conversion that exists outside of the switch statement.
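To illustrate the scoping point (function and names here are mine, not from the thread): the switch variable's narrowed type exists only inside each case, so nothing "converted" survives past the switch.

```go
package main

import "fmt"

func describe(x interface{}) string {
	switch v := x.(type) {
	case int:
		// inside this case, v has static type int
		return fmt.Sprintf("int: %d", v)
	case string:
		// here, v has static type string
		return fmt.Sprintf("string: %q", v)
	}
	// outside the switch, only x (still interface{}) is in scope;
	// the per-case v and its narrowed type do not escape.
	return fmt.Sprintf("unknown: %v", x)
}

func main() {
	fmt.Println(describe(7))
	fmt.Println(describe("hi"))
	fmt.Println(describe(3.5))
}
```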