
Rob Pike's repository 'filter'[0] contains implementations of map ("Apply"), filter ("Choose", I believe), and fold/reduce ("Reduce"). The code is an ugly mess, and the implementation shows that he probably hasn't used any of these standard functions in other languages (see the weird permutations of 'filter' in apply.go, or my patch for his fold/reduce implementation[1]). The README is also quite arrogant, IMO.

> I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn't hard.

> Having written it a couple of years ago, I haven't had occasion to use it once. Instead, I just use "for" loops.

> You shouldn't use it either.

[0]: https://github.com/robpike/filter/

[1]: https://github.com/robpike/filter/pull/1




I quite like the logical progression though:

+ People keep telling me we should be able to implement a map function in Go.

+ I implemented a map function in Go.

+ The map function was ugly, slow, unsafe and generally an abomination.

Conclusion? You don't need a map function in Go.

You may not agree, but you have to admire his dedication to the One True Way, whatever is put in his way. Even if it's him erecting some pretty impressive roadblocks himself.


It's the equivalent of PHP developers claiming the way PHP works is great.

I've used generics in several languages now and it's so awesome for reducing boilerplate and duplicated code.

The guy is just flat out wrong.


The mental gymnastics required to adhere to the One True Way are certainly impressive.


> Conclusion? You don't need a map function in Go.

I don't see Rob making that conclusion anywhere.


This is what people mean when they say that Go has disregarded everything we learned in the programming language community in the last decades.

Sure, some people are productive in it and that's fine - but for those of us who have worked in better languages taking the step down is unattractive.


I'll grant that Go is lacking in generics, but IMHO, the opposite is true. Go is thriving because, although not perfect, it is one of the few languages which seems to have learned lessons from the failings of C++ and Java, and from the successes of the more dynamic/scripting languages (team Python, Ruby, etc.). Go isn't a step down; it's a step backwards from the edge of the cliff.


That's just rhetoric. What does it mean? What lessons have been learned? What is it about generics that makes them the 'edge of the cliff'? Personally I couldn't live without generics, and would never choose a language that doesn't have them; otherwise you end up doing cartwheels with untyped references and reflection to try and write reusable code (as you see above). The idea that generics add complexity is nonsense. It might add complexity to the compiler, but that's about it. For the end user and the compiled code, it's easier and faster, respectively.
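
To illustrate the cartwheels, here's a minimal sketch (the Map helper and all names are hypothetical): a "reusable" Map written against interface{} loses all type safety and pushes casts onto every caller.

    package main

    import "fmt"

    // Without generics, a "reusable" Map must erase types to interface{}.
    func Map(xs []interface{}, f func(interface{}) interface{}) []interface{} {
    	out := make([]interface{}, len(xs))
    	for i, x := range xs {
    		out[i] = f(x)
    	}
    	return out
    }

    func main() {
    	nums := []interface{}{1, 2, 3}
    	doubled := Map(nums, func(x interface{}) interface{} {
    		return x.(int) * 2 // caller must assert the type; a wrong assertion panics at runtime
    	})
    	fmt.Println(doubled) // [2 4 6]
    }

Note that callers can't even pass a []int directly: the ints have to be copied into a []interface{} first.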


I clearly stated that Go is lacking on the generics front. The cliff is forced, rigid OOP and complicated tool-chains.

> Personally I couldn't live without generics, and would never choose a language that doesn't have them

I'm kind of confused here. Yes, Go needs generics, but are generics even that key a feature? I mean how often do you have to define a generic type, and how much copying does it really take? Is it a hassle? Of course. But at the end of the day, Go is much, much better at combining expressiveness, efficiency, and simplicity than many of the other options available today.


> The cliff is forced, rigid OOP and complicated tool-chains.

How are generics related to OOP or tool chains? Generics have a strong grounding in type theory and are used equally successfully in both OO and functional languages.

> Yes, Go needs generics, but are generics even that key a feature?

I believe so.

> I mean how often do you have to define a generic type

Many times a day. But not just generic types: generic functions too, which I believe are just as strong an argument for generics.

> I mean how often do you have to define a generic type, and how much copying does it really take? Is it a hassle? Of course.

It's not just the hassle of copying and pasting. It's the ongoing maintenance of code. If you have a list class, for example, then you're going to need a ListOfInt, ListOfChar, ListOf... What would be a single bug in your list implementation in a language with generics is now N bugs in Go. If you write an untyped List class, then you are constantly coercing the type and potentially working with unsound values that you only discover at the point of use, not the point of compilation. In a large application that's as painful as the null dereference issue has been for all of time.
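
A tiny sketch of what that duplication looks like (all names invented for illustration):

    package list

    // One hand-written copy per element type. Any fix or improvement to
    // Last must now be applied to every copy separately.
    type IntList struct{ items []int }

    func (l *IntList) Last() int { return l.items[len(l.items)-1] }

    type StringList struct{ items []string }

    func (l *StringList) Last() string { return l.items[len(l.items)-1] }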

Even in the code example by Rob Pike, he mentions that he can get by using a for loop. For loops are less declarative and less expressive than map, filter, fold, etc. They mix loop iteration scaffolding with code intent.
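
Side by side, as a sketch (Map here is hypothetical, written in a type-parameter syntax that Go does not offer):

    package main

    import "fmt"

    type Order struct {
    	Price, Qty int
    }

    // Map is the hypothetical generic helper under discussion.
    func Map[T, U any](xs []T, f func(T) U) []U {
    	out := make([]U, len(xs))
    	for i, x := range xs {
    		out[i] = f(x)
    	}
    	return out
    }

    func main() {
    	orders := []Order{{10, 2}, {5, 3}}

    	// For-loop version: iteration scaffolding mixed in with the intent.
    	totals := make([]int, 0, len(orders))
    	for _, o := range orders {
    		totals = append(totals, o.Price*o.Qty)
    	}

    	// Map version: only the intent.
    	totals2 := Map(orders, func(o Order) int { return o.Price * o.Qty })

    	fmt.Println(totals, totals2) // [20 15] [20 15]
    }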

> But at the end of the day, Go is much, much better at combining expressiveness, efficiency, and simplicity than many of the other options available today.

More rhetoric. Please tell me how? Is there something that makes this better? I remember C# before generics, and it was a pain to work with (collection types especially, that's why I mention it above). The only issue I see with generics now in C# is the visual clutter. If that's what you're referring to, then fine, but that's still not a good reason for the language to not implement them. If you look at F#, OCaml, Haskell etc, they all have generics in one form or another that you barely see. The expressiveness is there, but also type safety.

I find it hard to believe that a language with static typing can get away with not having generics today. It makes the language semi-dynamic, which is the worst of both worlds because you end up manually doing type coercion (which would be automatic in a dynamic language), but still have the lack of safety of a dynamically typed language to boot.


> How are generics related to OOP or tool chains? Generics have a strong grounding in type theory and are used equally successfully in both OO and functional languages.

Once again, I never said there was anything wrong with generics! And I agree that Go should have them, I just don't think they are nearly necessary enough a feature to justify overlooking the numerous qualities the language has to offer. Please look at my initial comment: I never said anything negative about generics.

> More rhetoric. Please tell me how? Is there something that makes this better?

The "options" I'm referring to are Java/C++/C# and Python/Perl/Ruby/PHP. The former languages are too verbose, and cluttered, and Java requires the overhead of the JVM, C# is essentially Windows-only. The scripting languages lack typing and efficiency. Go is able to combine the performance and control advantages of low-level languages (to a high degree) with the simplicity of higher-level languages like Python. I'm not saying it's perfect, and I'm definitely not crazy enough to put it up against the functional languages (Haskell etc.). But when it comes to web applications, it looks like Go will soon be the one of the most practical choices available.


The lesson Go seems to have learned is that, since C++ and Java burned their fingers, clearly fire is too dangerous for humans.

The thing that makes it painfully obvious to me that Rob Pike hasn't bothered to learn anything from the PL community is that Go has nil. That just shouldn't happen in a modern language.


> The lesson Go seems to have learned is that, since C++ and Java burned their fingers, clearly fire is too dangerous for humans.

I think that's a little bit unfair, since Go introduces many powerful ideas not (traditionally) available in Java or C++: namely first-class concurrency and functional primitives. Its handling of typing, the ability to define types not necessarily based on structs, and the excellent design of interfaces are other examples. Go is an extremely powerful and expressive language that opens the door to programming in new paradigms, while making it easy to maintain readability and simplicity.
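
On the concurrency point, the primitives are a few keywords (a minimal example):

    package main

    import "fmt"

    func main() {
    	results := make(chan int, 3)
    	// "go" starts a goroutine; channels move values between them.
    	for i := 1; i <= 3; i++ {
    		go func(n int) { results <- n * n }(i)
    	}
    	for i := 0; i < 3; i++ {
    		fmt.Println(<-results)
    	}
    }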

Fair point with the nil issue, I think that's one of Go's other weaknesses. But it does make up for that with its excellent error handling paradigm.


https://golang.org/doc/faq#nil_error is not an excellent design. It's a serious bug that converting nil to an interface sets the type field to a meaningless value (nil doesn't have a type!) and ridiculous that the interface doesn't compare equal to nil (if it's not nil, what does it point to?)


What's wrong with nil? Just legitimately curious, and I've never used Go at all.


It means that every single type in the language has one extra value it may contain, 'nil', and your code will crash or behave erratically if it contains this value and you haven't written code to handle it. This has caused billions of dollars in software errors (null dereferences in C/C++, NullPointerExceptions in Java, etc.). See "Null References: The Billion Dollar Mistake" by Tony Hoare, the guy who invented it:

http://www.infoq.com/presentations/Null-References-The-Billi...

A better solution is an explicit optional type, like Maybe in Haskell, Option in Rust, or Optional in Swift. Modern Java code also tends to use the NullObject pattern a lot, combined with @NonNull attributes.
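
Roughly, the shape of such a type, sketched in a hypothetical Go-with-generics syntax (the real things are Haskell's Maybe, Rust's Option, and Swift's Optional):

    package main

    import "fmt"

    // Option makes absence explicit in the type; callers must check before use.
    type Option[T any] struct {
    	value T
    	ok    bool
    }

    func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
    func None[T any]() Option[T]    { return Option[T]{} }

    // Get returns the value and whether it is present.
    func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

    func main() {
    	o := Some(42)
    	if v, ok := o.Get(); ok {
    		fmt.Println(v) // 42; the "absent" case simply cannot be forgotten
    	}
    }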


Besides the fact that you're wrong (structs, arrays, bools, numeric values, strings and functions can't be nil, for instance), I'm always a little puzzled when I read the argument that "nil costs billions of $".

First, most of the expensive bugs in C/C++ programs are caused by undefined behaviors, making your program run innocently (or not, it's just a question of luck) when you dereference NULL or try to access a freed object or the nth+1 element of an array. "Crashing" and "running erratically" are far from the same thing. If those bugs were caught up front (as Java and Go do), the cost would be much lower. The Morris worm wouldn't have existed with bounds checking, for instance.

Second point, since we're on bounds checking: why is nil such an abomination when trying to access the first element of an empty list is not? Why does Haskell let me write `head []` (and fail at runtime)? How is that different from a nil dereference exception? People never complain about this, although in practice I'm pretty sure off-by-one errors are much more frequent than nil derefs (well, at least in my code they are).


> I'm always a little puzzled when I read the argument that "nil costs billions of $".

$1bn over the history of computing is about $2k per hour. I would not be astonished if a class of bugs cost that much across the industry.

> most of the expensive bugs in C/C++ programs are caused by undefined behaviors

Sure, there are worse bugs. Why, then, waste our time tracking down trivial ones?

> Why does Haskell let me write `head []` (and fail at runtime)?

Because the Prelude is poorly designed.

> How is that different from a nil dereference exception?

It's not different, really. It's a very bad idea.

> People never complain about this

Yes we do. We complain about it all the time. It is, however, mitigable by a library[1] (at least partially), whereas nil is not.

[1] http://haddock.stackage.org/lts-5.4/safe-0.3.9/Safe.html#v:h...


> $1bn over the history of computing is about $2k per hour. I would not be astonished if a class of bugs cost that much across the industry.

It's not about knowing whether it's $1bn, or $10bn, or just a few million. The question is whether fighting so hard to make these bugs (the "caught at runtime" version, not the "undefined consequences" version) impossible is worth the cost or not.

Can you guarantee that hiring a team of experienced Haskell developers (or pick any strongly-typed language of your choice) will cost me less than hiring a team of experienced Go developers (all costs included, i.e. from development and maintenance costs to loss of business after a catastrophic bug)? Can you even give me an example of a business that lost tons of money because of some kind of NullPointerException?


>fighting so hard to make these bugs ... impossible is worth the cost or not.

In this case the solution is trivial: just don't include null when you design the language. It's so easy, in fact, that the only reason I can imagine Go has null is that its designers weren't aware of the problem.


Not including null has consequences; you can't just keep your language as it is, remove null, and say you're done.

What's the default value for a pointer in the absence of null? You can force the developer to assign a value to each and every pointer at the moment it is declared, rather than relying on a default value (and the same for every composite type containing a pointer), but then you must include some sort of ternary operator for when initialization depends on a condition; but then you cannot be sure your ternary operator won't be abused; etc.

You can also go the Haskell way, and have a `None` value but force the user to be in a branch where the pointer is known for sure not to be null/None before dereferencing it (via pattern matching or otherwise). But then again you end up with a very different language, which will not necessarily be a better fit for the problem you are trying to solve (fast compile times, making new programmers productive quickly, etc.).


But null pointers are not really a problem in Go, are they? The problem only exists in lower-level languages.

Go, as a language, is a pretty good one. It's Go's standard library that's not, especially "net" and other packages involving I/O.


Why do you think it costs very much to prevent null pointer exceptions?


I think it has consequences for the design of the language, making it more complex and more prone to "clever" code, i.e. code that is harder to understand when you haven't written it yourself (or wrote it a rather long time ago). I've experienced it myself: I've spent much more time in my life trying to understand complex code (complex in the way it is written) than correcting trivial NPEs.

That aside, it is harder to find developers proficient in a more complex language, and it is more expensive to hire a good developer and give him time to teach himself that language.

I'm not sure it costs "very much", though. I might be wrong. But that's the point: nobody knows for sure. I just think we all lack evidence on these points; although PL theory says avoiding NULL is better, there have been no studies to actually prove it in a "real-world" context. Start-ups using Haskell/OCaml/F#/Rust and the like don't seem to have an indisputable competitive advantage over the ones using "nullable" languages, for instance, or else the latter would simply not exist.


nil in Go doesn't work that way. Most types cannot be nil.


But a bunch of types you'd expect to work can be: slices, maps and channels.

  var m map[string]bool
  m["foo"] = true  // panics: assignment to entry in nil map

  var a []string
  a[0] = "x"  // panics: index out of range

  var c chan int
  <-c  // blocks forever

This violates the principle of least surprise. Go has a nicely defined concept of "zero value" (for example, ints are 0 and strings are empty) until you get to these.

The most surprising nil wart, however, is this ugly monster:

    package main

    import "log"

    type Foo interface {
    	Bar()
    }
    type Baz struct{}

    func (b Baz) Bar() {}

    func main() {
    	var a *Baz = nil
    	var b Foo = a
    	fmt.Print(b == nil)  // Prints false!
    }
This happens because interfaces are indirections. They are implemented as a pointer to a struct containing a type and a pointer to the real value. The interface value can be nil, but so can the internal pointer. They are different things.

I think supporting nils today is unforgivable, but the last one is just mind-boggling. There's no excuse.


I don't think you're right that interfaces are implemented as a pointer to a struct. The struct is inline like any other struct, and it contains a pointer to a type and a pointer to the value, like `([*Baz], nil)` in your example. The problem is that a nil interface in Go is compiled to `(nil, nil)`, which is different.

That still makes this inexcusable of course.
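
For the record, a sketch of how you can observe (and work around) the two layers at runtime, reusing the example above:

    package main

    import (
    	"fmt"
    	"reflect"
    )

    type Foo interface{ Bar() }
    type Baz struct{}

    func (b Baz) Bar() {}

    func main() {
    	var a *Baz = nil
    	var b Foo = a

    	fmt.Println(b == nil) // false: the interface is ([*Baz], nil), not (nil, nil)

    	// Workaround: look through the interface at the concrete value.
    	v := reflect.ValueOf(b)
    	fmt.Println(v.Kind() == reflect.Ptr && v.IsNil()) // true
    }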


You're right, it was lurking in the back of my mind that it must be on the stack entirely.


I don't think using nil to represent uninitialized data is a major issue. If it were possible to catch uninitialized-but-queried variables at compile time, that could be an improvement, but we want to give the programmer control to declare and initialize variables separately.

I agree the second case is a little silly.


It's perfectly possible to separate declaration and initialisation without using a null value.


Interesting, because (reading up on this) value types cannot be nil.

How often does typical Go code use values vs. interfaces or pointers? It seems like the situation is pretty similar to modern C++, which also does not allow null for value or reference types (only pointers) and encourages value-based programming. Nil is still a problem there, but less of one than in, say, Java, where everything is a reference.


In my own experience, nil basically only shows up when I've failed to initialize something (like forgetting to loop over and make each channel in an array of channels), or when returning a nil error to indicate a function succeeded. I've never run into other interfaces being nil, but I also haven't worked with reflection and have relatively little Go experience (~6 months).
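
That channel mistake looks like this (a minimal sketch):

    package main

    func main() {
    	// make allocates the slice, but each element is still a nil channel.
    	chans := make([]chan int, 3)

    	// Forgetting this loop leaves nil channels behind; sending on or
    	// receiving from a nil channel blocks forever instead of panicking.
    	for i := range chans {
    		chans[i] = make(chan int)
    	}

    	go func() { chans[0] <- 1 }()
    	<-chans[0]
    }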

The code that I've written regularly uses interfaces and pointers, but I'd guess 80% works directly with values.


> I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

https://en.wikipedia.org/wiki/Tony_Hoare#Apologies_and_retra...


Go is thriving because it is Google sponsored.

If it came from "Joe the dev" no one at HN would give it a second look.


I simply do not believe this.

I believe it is thriving because it was well-designed, by extremely influential individuals, and the early library work was stellar. Also, several other experienced and influential programmers tried it, and expressed something along the lines of "programming is fun again!"

Inside Google, the two main programming languages are C++ and Java, not Go (at least when I left, in September). The Go tooling is generally less capable, but the interfaces smaller, and often nicer: they have the dual benefits of hindsight, and a small but very smart team that really cares about conciseness and clean APIs.

Of course, it's undeniable that the Google name helps a bit. And paying a team of very experienced (and presumably very expensive) developers to work on it makes a huge difference. But I think it would be as successful if those same developers were sponsored by Redhat, or Apple, or just about anyone.


Redhat or Apple are not "Joe the dev" though


Java Joe works at Redhat though :P


Dart is also Google sponsored and no one uses it despite the fact that it's actually a pretty great general purpose language. People use go because it's productive and had a PHENOMENAL standard library for networking.


> Dart is also Google sponsored and no one uses it despite the fact that it's actually a pretty great general purpose language.

Citing an example that had support from Google and failed does not refute the claim that Go would have failed if not for Google's support.


It clarifies the fact that Go is successful for more reasons than just being pushed by Google. So it focuses the question to "what is it that people like about it". And then we can have a better conversation.


Given that .NET and Java already had a PHENOMENAL standard library for networking by the time Go came out, I really bet on the Google bias.


Your theory fails to account for the lack of success with respect to Dart; so, it seems more like something you have an urge to believe (despite a lack of evidence).


Dart was abandoned by Google the day the Angular team chose TypeScript instead of believing in Dart, thus sending the world the message that the company doesn't believe in it.

Whereas there are a few production examples of Go at Google.


My understanding is that Dart is used by Google Fiber for their routers, so I wouldn't call that abandoned yet. But, the point is that Google supporting a language does not seem to imply its eventual success.


That's some odd reasoning. The Angular team does not decide which languages are invested in by Google.


No, but Google decides what their employees are allowed to use.

In this case a company team was allowed to use a competing language instead of the one designed in-house for the same purpose.


He who takes his examples of generics from C++ and Java has a huge blind spot. The FP crowd came up with simple and usable generics (Hindley-Milner type inference) in 1982.

It's like Go's creators haven't even read Pierce's Types and Programming Languages. This is inexcusable. Even more so from Rob Pike and Ken Thompson; you'd expect better from such big shots.


It's like you assume that, since they didn't do it your way, they're either stupid, ignorant, or malicious - which I also find to be pretty inexcusable.


Well… I have seen generics that (i) don't blow up the compile times like C++ templates do, (ii) are very simple to use, and (iii) are relatively simple to implement. (I'm thinking of Hindley-Milner type inference and system F.) So when some famous guys state they avoided generics for simplicity's sake, yeah, I tend to assume they missed it.

And it's not hard to miss either. When you google "generics", you tend to stumble upon Java, C#, and maybe C++. The FP crowd talks about "parametric polymorphism". Plus, if you already know Java, C# and C++, 3 mainstream examples of generics, fetching a fourth example looks like a waste of time. I bet they expected "parametric polymorphism" (ML, Haskell…) to be just as complex as "generics" (C++, Java, C#).

On the other hand, when you study PL theory, you learn very quickly about Hindley-Milner type inference and System-F. Apparently they haven't. Seriously, one does not simply make a language for the masses without some PL theory.


> On the other hand, when you study PL theory, you learn very quickly about Hindley-Milner type inference and System-F. Apparently they haven't.

Again you assume that, since they didn't include it, they must not have known about it. You keep claiming that. Given the breadth of these guys' knowledge (it's not just Java, C#, and C++, not by a long shot), I really struggle to see any justification for you assuming that.

I know you think that system F is all that and a bag of chips, but it is not the only reasonable way to design a language! Assuming that they did it wrong because they didn't do it the way you think is right... that's a bit much.

But I'll ask you the same question I asked aninhumer: How fast does Go compile compared to Haskell? And, is that a fair comparison? If not, why not?


> Again you assume that, since they didn't include it, they must not have known about it.

That's not why I assumed ignorance. I assumed ignorance because their stated reasons for doing so are false. Generics can be simple. They skipped them "for simplicity's sake". Therefore they didn't know generics could be simple.

Besides, generics could have probably helped them simplify other parts of the language. (But I'm getting ahead of myself.)

> How fast does Go compile compared to Haskell?

I don't know. I have reasons to guess Haskell is much slower however.

> And, is that a fair comparison?

Probably not: both languages are bootstrapped, so their respective implementations use very different languages (Go and Haskell, respectively). Haskell is non-strict, so it needs many optimizations to get acceptable performance. And Haskell's type system is much more complex than regular HM type inference: it has type classes, higher-order types, and many extensions I know nothing about (it's a research language, after all).

Qualitative analysis would be better at assessing how generics affect compile time. My current answer is "not much": the effects of a simple type system (say, System F with local type inference) are all local. You don't have to instantiate your generics several times as you would with templates; you can output generic assembly code instead. To avoid excessive boxing, you can use pointer tagging, so generic code can treat pointers and integers the same way (that's how OCaml does it).


> Therefore they didn't know generics could be simple.

I wouldn't call Hindley-Milner type inference simple, though. I think you underestimate what has to be available in the language for ML-like parametric polymorphism to be implemented. For instance, can an interface be generic? Can a non-generic type implement a generic interface? How do you say your type is generic, but "only numeric types allowed"? Does it mean the language must implement a type hierarchy of some kind? How well does it play with pointers? Is `*int` a numeric type?
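
(To make the question concrete, here's one hypothetical shape an answer could take, in an invented Go-with-constraints syntax; every name below is made up for illustration:)

    package main

    import "fmt"

    // Numeric spells out the set of permitted element types.
    type Numeric interface {
    	~int | ~int64 | ~float64
    }

    // Sum is generic, but "only numeric types allowed".
    func Sum[T Numeric](xs []T) T {
    	var total T
    	for _, x := range xs {
    		total += x
    	}
    	return total
    }

    func main() {
    	fmt.Println(Sum([]int{1, 2, 3}))      // 6
    	fmt.Println(Sum([]float64{1.5, 2.5})) // 4
    	// Sum([]*int{...}) would not compile: *int is not in the set.
    }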

Once you introduce generics, you have no choice but to make a more complex language overall. You say generics would have simplified the language; I find that hard to believe. Care to mention a language that is easier to grasp than Go (i.e. one I can be productive in within less than a week) and that also offers efficient generics?


> How fast does Go compile compared to Haskell? And, is that a fair comparison? If not, why not?

For this to be a fair comparison, you would need to compare a Go LLVM compiler against a Haskell LLVM compiler, as one possible example.

For this to be a viable comparison, one needs to compare similar toolchains.


I'd like to give them the benefit of the doubt, but even within their stated goal of "simplicity", some of their design choices still seem ignorant of PL theory. The obvious one is the inclusion of a null value, which is widely recognised to be a terrible idea with pretty much no redeeming qualities.

Another subtler example is the use of multiple return values for error handling, rather than some kind of sum type. It just suggests the designer doesn't have any experience working with ADTs. (Not that I'm suggesting Go should have used full blown ADTs, just that they change the way you think about data.)
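
For contrast, here's the idiom under discussion (parsePort is an invented example). The point is that nothing ties n and err together; a real sum type would make the value reachable only through the success branch, which Go cannot express:

    package main

    import (
    	"fmt"
    	"strconv"
    )

    // Go's idiom: the value and the error travel side by side. The type
    // system doesn't stop a caller from ignoring err and using n anyway.
    func parsePort(s string) (int, error) {
    	return strconv.Atoi(s)
    }

    func main() {
    	n, err := parsePort("8080")
    	if err != nil { // purely a convention; forgetting this still compiles
    		fmt.Println("bad port:", err)
    		return
    	}
    	fmt.Println("port:", n)
    }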


Simplicity is, I think, a secondary goal. A big part of the motivation for creating Go was 45-minute C++ compile times. A major reason for the emphasis on simplicity is to keep the compiler fast, even on huge codebases.

So: How much would adding sum types slow down the compiler? I don't know. How fast does Go compile compared to Haskell? (Is that a fair comparison?)


I'm a little dubious of the speed advantage to be honest. Sure compile time is important, and C++ is pretty bad on this front, but you don't need to try that hard to do better.

And no, I don't think sum types would slow the compiler down much, especially if they were limited to a special case for error handling (which seems more in line with the rest of Go).


Sum types don't fit well with zero values. What is the zero value of an `(int|string)` union type?


Well I don't think zero values are a very good idea to start with (if you want a default value, make it explicit), but if one insists on having them, they can just use the first case. So for your example it would be int 0.
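
A sketch of that convention with a hand-rolled tagged union (all names invented):

    package main

    import "fmt"

    // IntOrString is a tagged union. The zero value of the struct has
    // isString == false, so the default is the first case: int 0.
    type IntOrString struct {
    	isString bool
    	i        int
    	s        string
    }

    func (v IntOrString) String() string {
    	if v.isString {
    		return fmt.Sprintf("string(%q)", v.s)
    	}
    	return fmt.Sprintf("int(%d)", v.i)
    }

    func main() {
    	var v IntOrString // zero value
    	fmt.Println(v)    // int(0): the first case, as proposed above
    }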


So Google's primary language offering is designed by a guy who thinks Map/Reduce is just a weird way of writing for loops.

Tremendous. The amount of irony in that could fill a steel furnace for a year.


    // goodFunc verifies that the function satisfies the signature, represented as a slice of types.
    // The last type is the single result type; the others are the input types.
    // A final type of nil means any result type is accepted.
    func goodFunc(fn reflect.Value, types ...reflect.Type) bool

Dear god, why?!



