I’m hoping that https://github.com/golang/go/issues/19623 will come through, and we’ll get a native “true integer” type (and hopefully a rational one as well, though maybe this is pushing it a bit). This is really something that should be implemented at the language level, so that “int” can become a true integer, yet still remain efficient in many cases.
It is bizarre to me that languages boasting built-in, language-level data structures like lists, hashtables, etc. are content to leave us with the bare minimum of number support: basically whatever the hardware thinks a number is. The semantics of integers and fractions are perfect, and everybody already knows them. Overflows in int32's, on the other hand, are weird, and if your idea of a fraction is a floating-point number, then you can never have something like (5/3)*6 evaluate to 10 exactly.
To be clear, I think fixed-width integers and floating-point numbers have their place, I just see no reason why they should be the default.
I like how Haskell does it. When you simply write a number like "3" it will infer the type to be "Num a => a" which means it could be any type you have loaded that defines the functions in the typeclass Num:
class Num a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  (*) :: a -> a -> a
  negate :: a -> a
  abs :: a -> a
  signum :: a -> a
  fromInteger :: Integer -> a
"3.0" would be "Fractional a => a" which means it defines the functions in the typeclass Fractional (in addition to being a Num):
class Num a => Fractional a where
  (/) :: a -> a -> a
  recip :: a -> a
  fromRational :: Rational -> a
Depending on how you use the value, the type is further refined at compile time. For example, if you did `recip 3`, 3 couldn't be just any Num anymore; it would have to be some Fractional.
Regarding (5/3)*6, it does equal 10 using floating point. A better example would be how 0.1 + 0.2 is not equal to 0.3. We can see this:
ghci> 0.1 + 0.2 == (0.3 :: Float)
False
If we specified that we're working with Rational values, which are also Fractional a => a, and which are defined as a pair of integers, one representing a numerator and the other a denominator, roughly like so:
type Rational = Ratio Integer
data Ratio a = a :% a
then we can see that 0.1 + 0.2 does equal 0.3:
ghci> 0.1 + 0.2 == (0.3 :: Rational)
True
It's pretty cool that Haskell lets you define new types of numbers and use them like any other. While you can transparently support hardware-native numbers, represented by types like Int, Float, Double, Word, Word8, Word16, Word32, Word64, you also get transparent support for arbitrary-precision Integer and Rational. Writing your functions against typeclasses like Num and Fractional lets them work with any of these types, as well as any defined in the future.
This is a great system for tying in new number types with existing ones, but the lack of explicitness about casting exact types (Integer, Rational) to inexact types (Int, Float) has led to many bugs and confusions for me. Coupled with the fact that so many standard functions (like length, for example) want to return an Int rather than an Integer, it just leads to me spewing ((fromIntegral ___)::Integer) all over my code.
I love all of the automatic casting between exact types, but happily implicitly casting to inexact types is (in my opinion) a big mistake.
> [...] but the lack of explicitness about casting exact types (Integer, Rational) to inexact types (Int, Float) has led to many bugs and confusions for me.
What do you mean? As far as I'm aware, you always have to explicitly convert exact types to inexact types in Haskell (using "fromIntegral" and "realToFrac").
> Coupled with the fact that so many standard functions (like length, for example) want to return an Int rather than an Integer, it just leads to me spewing ((fromIntegral ___)::Integer) all over my code.
If you don't like the automatic cast, you can always manually cast inline by doing (3 :: Int). I've used this a number of times when doing numeric stuff in Haskell.
I do know how to manually cast, although in my case I'm usually casting to Integer. I was arguing that a cast from Num to Float, Double, Int etc is a lossy operation, and breaks the semantics of arithmetic, and so should not be done implicitly.
> but the lack of explicitness about casting exact types (Integer, Rational) to inexact types (Int, Float) has led to many bugs and confusions for me.
> I love all of the automatic casting between exact types, but happily implicitly casting to inexact types is (in my opinion) a big mistake.
> I was arguing that a cast from Num to Float, Double, Int etc is a lossy operation, and breaks the semantics of arithmetic, and so should not be done implicitly
It's not really casting; it's just type inference. You can't have a Num and use it as a Rational in one place and a Float in another (unless you lift the monomorphism restriction), and you can't add a Rational and a Float without explicitly converting one to the other with a function like fromRational. So real casting is very much explicit. I feel like you're technically arguing against representing Floats with decimal literals in code, but I don't think you really mean that.
EDIT: At the risk of stating something that you might already know, (x :: Int) in Haskell is not a cast like (int)x in C. (x :: Int) simply further restricts the list of possible concrete types that x could be. If the code context implies that x is a Num a => a, that means it could be one of an Int, a Float, etc. Doing (x :: Int) simply says to restrict those possibilities to only Int. If x were a Float, and you did x :: Int, that'd be a type error because it was never possible for it to be an Int, only a Float. You'd need to convert by using a function like floor, ceiling, or round.
EDIT 2: To further explain this, C's (int)x is a run-time operation that converts whatever x is to an int, while Haskell's x :: Int is a compile-time operation that states that x can only ever be an Int and never something else. At the end of compiling, Haskell needs everything to be concrete types. Num a => a is not concrete because it can be many types and if Haskell can't resolve it to a single concrete type implicitly by context or explicitly by signatures like x :: Int, then it has to resort to defaulting rules or raise a compile-time error informing the programmer of the type ambiguity.
EDIT 3: On:
> Coupled with the fact that so many standard functions (like length, for example) want to return an Int rather than an Integer, it just leads to me spewing ((fromIntegral ___)::Integer) all over my code.
I agree it would be nice for length to return a Num a => a instead of an Int, but at least the reason is to maintain stability and limit breakage of code from the times before Num existed. The Haskell community seems to really care about that. They did add genericLength, which does return a Num a => a, though, and there are other generic functions in Data.List as well.
Makes no sense to me to redefine existing integer types. Why not introduce a primitive type "num" for arbitrary precision rational numbers instead?
As a long-time Racket and Scheme user, I'd say that arbitrary-precision numbers as the default bring more disadvantages than advantages. As an option with syntax support, yes, but not as a default. It just makes it harder to port all kinds of code that relies on modulo arithmetic.
> It just makes it harder to port all kinds of code that relies on modulo arithmetic.
If you're relying on modulo arithmetic, you should be using a fixed size integer anyways, not one that varies based on the architecture. Go has fixed size integer types available, and modulo arithmetic should be using those, not "int".
I'm not sure how these two are related. Why would you mix modulo operations and fixed size? Modulo on an arbitrary bigint is perfectly valid and useful.
Modulo arithmetic refers to the property of fixed-size integers whereby they predictably wrap around in expected ways if you overflow or underflow them. This wraparound is "free", so it used to be commonly used in cyclic high-performance code. An arbitrary bigint is computationally more expensive, and it will never experience this effect, so it's a double whammy of irrelevance for bigints. You can manually do a modulo operation on a bigint, but that just makes it even more computationally expensive than a fixed-size int, requires more code, and would be considered an unnecessary inconvenience by most people who care about modulo arithmetic.
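To make the contrast concrete, a small Go sketch: the fixed-size type wraps for free, while a bigint just keeps growing unless you pay for an explicit modulo.

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        var c uint8 = 250
        c += 10                   // wraps modulo 256 at no extra cost
        fmt.Println(c)            // 4

        b := big.NewInt(250)
        b.Add(b, big.NewInt(10))  // a bigint never wraps: 260
        b.Mod(b, big.NewInt(256)) // emulating the wrap is an extra, slower operation
        fmt.Println(b)            // 4
    }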
> Why not introduce a primitive type "num" for arbitrary precision rational numbers instead?
Rationals aren't supported natively by processors, so there's no real need to handle this as a primitive instead of letting people use a library for it. Adding them to the base language just because some people would find it convenient would clash with Go's explicitly minimalist philosophy.
This argument falls flat for me. Classes aren't supported natively by processors either, yet any number of OOP languages use them.
It's generally nice when you can do simple things with the language's built-in standard library. I tend to prefer languages with more powerful standard libraries because it means that you can more easily move across codebases since they'll all be the same. If commonly used data types like rationals, hashes, maps, strings, etc., aren't defined in the stdlib, then there'll be any number of different libraries to use them, and you'll have to potentially re-learn a lot of different unnecessary things when working on different codebases.
Yes, the standard library. I understood the poster above you to be suggesting they shouldn't be a part of the language that's used by default; for sure, though, it would be convenient to have a centralized implementation.
Exactly. Having a rational datatype as easily usable as float would make it easier to use as a default when you don't really need floats, just non-integers. Which is often the case, if you think about it. You'd get automatic simplification and literals (12.345122 is a rational, as is 22/7).
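For what it's worth, Go's standard library already has the datatype itself, math/big.Rat, complete with automatic simplification; what's missing is literal syntax and operator support. A quick sketch using the (5/3)*6 example from upthread:

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        x := big.NewRat(5, 3)      // held exactly as the pair (5, 3)
        x.Mul(x, big.NewRat(6, 1)) // (5/3)*6, reduced automatically
        fmt.Println(x.RatString()) // "10", exactly

        y, _ := new(big.Rat).SetString("12.345122") // decimal strings parse exactly
        fmt.Println(y.RatString())                  // "6172561/500000"
    }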
It's like building in special handling for a string type, exactly one instead of a catalog of library options.
I'd say that would fit in quite well with the brand of minimalism that Go is representing, keep the language simple and provide builtins where more convenience is needed. The opposite kind of "minimalism" would be providing tools to allow libraries to define what other languages can only support as special builtins. C++ and Scala went down that road, they minimize builtins by empowering libraries to replace them. That's also a form of minimalism, but not the one chosen by Go.
Tangent: now I wonder whether Scala inherited the special handling for the (not special, but specially handled) string type from Java, or if it just reimplements those syntax extras in the standard library using implicits.
> I wonder whether Scala inherited special handling for the (not special, but specially handled) string type
Scala uses Java's String exactly, but does define a few extra methods using its extension method functionality (called implicit class in Scala)
Templates like s"This is a string with a $variable", are part of the language, but `s` is a stdlib feature.
xxx"First $a $b Second"
will be translated by the compiler to
new StringContext("First ", " ", " Second").xxx(a, b)
The standard library defines `StringContext.s`, but there's no restriction. Several SQL libraries, for example, define `StringContext.sql` to let you safely embed type-checked SQL queries directly in your code.
"X isn't supported natively by processors, so there's no real need to handle this as a primitive instead of letting people to use a library for it" is a general-purpose argument to cherry pick which primitives you support and which ones you don't. Surely this isn't your actual reason you choose to oppose Rationals / BigIntegers as primitives while accepting hashmaps and arrays?
Arrays, slices, hashtables, and channels are not supported natively by the processor, but Go still has them as primitives. Why not also add a number type which behaves in a sane, easy-to-understand, useful way?
Scheme has an explicitly minimalist philosophy, so much so that when R6RS was released with too many conveniences, it fractured the language and community. Now there are two specs, R7RS-small and R7RS-large, and they're far less popular than R5RS.
Rationals are also the default number type.
Lists are the base type used throughout Scheme, and can be used to implement arrays and hashmaps and whatnot, so Scheme doesn't really need them, you can just use a library.
... That didn't stop the committee from adding types to the spec, like records, promises, and other complex types.
What is useful and necessary in a language isn't defined by what the processor can natively do, or we'd only use registers, not arrays. And it isn't defined by an overbearing attitude towards one or more philosophies.
Am I misunderstanding, or is Go really using an ‘int’ type that can be 32 or 64 bits depending on the system it runs on???
If this is the case I think that is crazy and I can’t think of any useful use case for it.
If it’s not the case then please explain to me what this proposal is really about...
"int" in Go the native index type for arrays. I consider it an error to use it for anything else, and since I've adopted that policy, I've had no particular problem with ints.
Despite most of my professional programming being in Go nowadays, I am extremely sympathetic to the Haskell/FP way of thinking about things, and trying to make invalid state unrepresentable.
However, having made the mistake a few times now of trying to use "unsigned ints" for things that I want to assert are never negative, I've learned the hard way not to do that. The problem is, there's always some bug you have in the program that will drive the uint "negative", wrapping over to a huge number. Unfortunately, since you get no warning when that happens, it tends to take longer to show up, and without a clear cutoff to say "ah, this is clearly invalid" it can be hard to even program detection code, whereas checking for "less than 0" can be unambiguous.
I'd loooove it if I could easily, cheaply, and universally (i.e., not just in Go, but across all my programming languages) turn on a behavior that says "if this int or uint under or overflows, throw an exception instead of trying to do whatever stupid thing you're going to do". I get the sense this is heading down the road where in 20 or 30 years, this is just going to be common sense. However, we are very early on the road yet from what I can see. Whenever I bring it up, even today, even in this sort of context, it still is generally negatively received. (But not uniformly. A few other people agree. I think momentum is with me. I'm not sure my programming career will live long enough to see it, though.)
Using unsigned indexes would be fine if the program automatically throws an error on wrapping. Otherwise, it’s an accident waiting to happen, since you have to code all of your arithmetic keeping in mind the fact that the numbers are trying to blow up in your face.
If x is unsigned, then you can’t rearrange an inequality like x >= 4 into x - 4 >= 0, since the first inequality is only true sometimes, whereas the second is true always. This looks obvious, but bury it a little way into some nontrivial index arithmetic and boom.
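To make that concrete in Go (a tiny sketch; atLeastFour is a made-up helper):

    package main

    import "fmt"

    // The "rearranged" comparison: always true, because an
    // unsigned value can never be negative.
    func atLeastFour(x uint) bool {
        return x-4 >= 0
    }

    func main() {
        fmt.Println(atLeastFour(3)) // true! 3-4 wrapped to 18446744073709551615 on 64-bit
    }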
Another more aesthetic reason is that indices can represent offsets into a structure relative to some other index, and in this case you really need negative numbers.
In general it is nicer to use signed numbers for indices. The actual index can't be negative, true, but differences between them can, and it means you don't have so many dubious sign conversions.
Because you want to represent errors. For example, a find() function that returns the index of some element: you want to return -1 when the element doesn't exist.
That example doesn't work, precisely because you're conflating the loop counter (whose type needs to represent an integer that can go below 0) with the array index accessor. There's no reason for these two things to be the same type; it's just a pattern that accidentally works because we allow arrays to be indexed with a type that can represent negative numbers.
What would be a more compelling argument is to use int the same way that Python does: a negative accessor means an offset from the end of the array, not the start. I'm not sure golang lets you do this though...?
Returning -1 instead of an error is abusing the type system. In C, there is no error type, so it might be excusable there, but it is inexcusable in Go, IMO.
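For illustration, the idiomatic Go shape uses a second return value instead of an in-band sentinel (find here is a made-up sketch):

    package main

    import "fmt"

    // find reports the index of target in xs, plus an "ok" flag, so no
    // integer value has to be sacrificed as an error code.
    func find(xs []int, target int) (int, bool) {
        for i, x := range xs {
            if x == target {
                return i, true
            }
        }
        return 0, false
    }

    func main() {
        if i, ok := find([]int{5, 7, 9}, 7); ok {
            fmt.Println("found at", i) // found at 1
        }
    }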
Crazy indeed. This was like lesson learned #2 from C, which later introduced the stdint types int32_t et al.
A modern language with this type of confusing and platform-dependent behavior is inexcusable. By all means, include a native data type as big as the processor can handle (size_t), but don't call it int, which is the default data type most people will use out of pure laziness.
Which would you choose instead? To default to 32 bits (even on 64 bit systems) or to default to 64 bits (even on 32 bit systems).
Defaulting to 64 bit math on 32 bit systems will have a huge performance penalty on generally the slowest/oldest systems where this penalty is least desirable. It's not clear to me what the benefit to this would be for most programs.
Running 32 bit ints on 64 bit systems will cause problems handling large arrays on systems with a lot of memory. I think it also complicates generating efficient loop code, although C++ gets around this with signed integer overflow being "undefined behavior".
Of course Go has int64 and int32 types if you want to use them (and all conversions require explicit casting), but the default type for array lengths etc is the platform native type. But what is the better alternative?
And as a result of this, you can't have a 3GB byte array in C#. I think there needs to be a big benefit to make hard-coding a restriction like that into the language worth it.
Unfortunately I've read every comment in this HN thread (so far) and I haven't found a single specific problem mentioned, just people calling it "crazy" and "confusing" over and over.
Go is a language that has explicit pointers! And of course those can be 32 or 64 bits and change struct sizes, field offsets etc. There are certainly things about Go that are confusing but I've never thought this was one of them.
Just FYI Go has int8, int16, int32, int64, and int types. Only the 'int' type is the machine-native one. It's always been obvious to me that 'int' is the one without a fixed size, but I think this would be less obvious if the naming convention was different using short, int, long etc.
Even if it's OK to degrade the performance of a small minority of users, is there a big benefit to emulating 64 bit ints everywhere on 32 bit hardware? (I'm not saying there isn't a benefit, just that it's not clear to me what it is).
I don't know how many users are still on 32-bit. Maybe low-power network devices? Older mobile phones (Go isn't usually run on mobile, but it's possible...).
If at some point in the future 32 bit systems become totally irrelevant for Go users, maybe a release of Go could drop support for it altogether and make int an alias for int64, and save people some type casts. 32 bit systems probably aren't at that point yet though.
>I don't know how many users are still on 32-bit. Maybe low-power network devices? Older mobile phones (Go isn't usually run on mobile, but it's possible...).
Well, 16 bit would also be possible if someone took the effort. Perhaps it just shouldn't be encouraged.
It may be crazy but it's not exactly without precedent. Neither C nor C++ fix the sizes of the fundamental integer types, although for backward-compatibility reasons popular 64-bit platforms still have 32-bit `int` and even `long`. But yeah, there's a reason eg. Rust has no `int` and friends but `i32` etc. instead.
Yeah, that makes sense, because you need some type that spans the address space, and that depends on the size of the address space. That is what int is used for in Go, though why they chose signed rather than unsigned is confusing. With unsigned you would never have to do negative bounds checks, and you can prove a lot of other useful things if you know your index is always positive.
Right, 64-bit Windows has 32-bit int and long and 64-bit long long (LLP64) while 64-bit unixes generally have 32-bit int and 64-bit long/long long (LP64).
Ignoring the basic int type and using a zoo of preprocessor defines instead is what has worked for C all that time.
But I still think that native types have their place. It can often be quite reasonable to accept serious limitations on narrow machines where the performance gains won by the trade-off are desperately needed, lift those limitations on wider machines and grant an enormous safety margin on the widest architectures. When you dive deeper, the values where this makes sense still tend to be "index-like", as GP stated, but they don't have to strictly be indexes (e.g. IDs, a wider machine will be capable of working on bigger datasets and therefore be more likely to exhaust a given width of IDs, all indexes are identifiers but not all identifiers are indexes)
> Ignoring the basic int type and using a zoo of preprocessor defines
My C is a bit rusty but why should #defines be used in this scenario? A typedef set in a config.h.in is more than enough to meet the requirements of this usecase.
I don't know how many times I've been writing a program using int, then suddenly I have to do some math on the index of a range (for example), and the math.* functions require int64. I start casting to int64, but things get infected and it spreads. Eventually I refactor everything to use int64 and wonder why int can't just be an alias for one of the others. Personally I would make it int64, since that is what the standard library thinks math should be done with.
Odd, I've rarely had this issue, and almost never need to do complicated math on the index of a range (maybe "multiply by two and add one" kind of stuff, but nothing from the "math" pkg). Also, note that the math.* functions all use float64, not int64.
Yes, you're right, float64 in the math package. So I've definitely run into this with float32 vs float64.
As for int vs int64, I definitely run into what I describe on Project Euler. Probably using a self-referential map[int]int when I suddenly need to pull an int64, and everything gets infected.
I realize it's just a matter of convenience for me, but I personally have no downside to int == int64.
Meh I use C# and I rarely need an int bigger than 2147483647, and if I do I use a long which gets me up to 9223372036854775807, and if I need more than THAT then I'll use a math library of some sort.
I think the question though is that if “int” were an actual integer, would you still default to using int32/int64, and for what reason? There are valid uses for fixed-width types, but beyond domain-specific number crunching (crypto, image processing, etc) I would argue that the use of these types is technically incorrect, and not what people expect. Even experienced programmers will see “x > x + 1” and mentally replace that with “true”, even though if x is fixed-width, the value of that expression actually depends on x.
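Concretely, in Go:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        var x int32 = math.MaxInt32
        fmt.Println(x > x+1) // true: x+1 wraps around to math.MinInt32
        x = 0
        fmt.Println(x > x+1) // false: the reading everyone expects
    }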
The problem with overflow isn't just needing to store numbers larger than 2 billion. Sometimes, intermediate values are larger than that even if the final result isn't.
Take averaging as a very simple example. Doing (a + b) / 2 will overflow if a and b are sufficiently large, even if the average will always fit in 32 bits. Things like this go unseen for years.
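A Go sketch of the failure and the standard fix (compute the difference first, which is safe for non-negative values like indices):

    package main

    import (
        "fmt"
        "math"
    )

    // The classic form: the intermediate a+b overflows for large inputs,
    // even though the true average fits comfortably in 32 bits.
    func mid(a, b int32) int32 { return (a + b) / 2 }

    // Standard fix, assuming 0 <= a <= b: the intermediate b-a can't overflow.
    func midSafe(a, b int32) int32 { return a + (b-a)/2 }

    func main() {
        a, b := int32(math.MaxInt32-1), int32(math.MaxInt32)
        fmt.Println(mid(a, b))     // -1: garbage from the wrapped sum
        fmt.Println(midSafe(a, b)) // 2147483646
    }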
Don't most people compute the mean iteratively when they need to do this? Sure, it goes unseen, but programmers on systems like this are expected to understand the math model and write tests.
I mean, in C and other low-level languages (unlike Python), it's generally assumed you have a basic understanding of the machine model and the consequences of arithmetic in limited types.
If most people make mistakes, they should self-select into languages with properties that protect them from their ignorance. For example, I mostly program in Python using longs, so I don't have to worry about overflow.
What kind of "long" are you referring to? Most programmers would think of a C long, int64, not a Python long, bigint.
Maybe people ought to self-select, but that would mean they'd need the training or experience to recognize what they don't know. It's often the most ignorant people that believe they have the most expertise.
I typically wouldn't be passing around milliseconds since epoch as a raw number, but then that's C#; I guess you might do that in other languages for performance or out of necessity.
Why oh why do people want a value to have different bounds depending on which system it is used on? This is a source of huge confusion and why people stick to uint8 and other precise types.
It's a perfectly reasonable niche use case for achieving maximum performance.
My main issues with it are:
1. Like many other performance optimizations, it is a trade-off. You may be sacrificing easy maintenance or even introducing breakage (if the person using it doesn't understand the limitations and implications). So IMHO you should only do it if you can show through profiling, etc. that the gain is real and worth it.
2. As a matter of language ergonomics, it should never be something that anyone does unknowingly. You should have to explicitly ask for it. The type name should be something blindingly obviously like "native_int" or "word_int".
More precisely: they wanted the integer type that was large enough and ran their code fastest.
On CPUs with 64-bit and 32-bit registers, that is typically the register-sized integer type. Machines with 16-bit registers are a bit of an edge case (16-bit integers may be a lot faster on them than 32-bit integers and, in many cases, too small).
If Go chose to use 32-bit integers as the default, some users on 64-bit systems would complain that they couldn't create, say, an array of 6 billion ints. If it chose 64-bit, some users on 32-bit systems would complain their loops were needlessly slow and their code bloated (why waste 4 bytes on almost every integer in a program?).
Not having generics makes this more of a problem. If Go had generics, switching between integer sizes could be a matter of recompiling with a different compiler flag.
A sufficiently advanced compiler would help here, but go doesn’t have one (not exceptional), and doesn’t even aim to have one (compilation speed seems more of a goal than run-time speed)
> Not having generics makes this more of a problem. If Go had generics, switching between integer sizes could be a matter of recompiling with a different compiler flag.
Isn't it already the case that you can switch between int sizes with compiler flags?
var i int
That will be 32 bits if I compile it with GOOS=linux GOARCH=386, and 64 bits if I swap in amd64 instead of 386 there.
Is there any language where generics are useful for switching between int sizes? None come to mind offhand, so I don't quite understand what you mean.
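For what it's worth, a program can check what it got via the standard library:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // strconv.IntSize reports the width of int/uint:
        // 32 under GOARCH=386, 64 under GOARCH=amd64.
        fmt.Println(strconv.IntSize)
    }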
Traditionally the register size was too small: it was very common to have numbers above 256 or 65536. That's not true anymore with 32-bit numbers and a limit of 2 billion.
On 64 bit systems you can have large arrays that are bigger than a 32 bit int. For example, it's nice to be able to load a >2GB file into a byte array.
For comparison, Java's int type is 32 bits on 64 bit systems, but then this limits the array size. It seems like a lesser evil to have a platform-native int type than to limit the size of arrays (especially since servers can be expected to increase their max memory going forward).
Of course, standardizing on 64 bit ints instead would require emulating them on 32 bit systems, when they can't support that much memory anyway (you could argue that 32 bit systems don't matter much anymore, but if you really don't care about ever running on them, then you can just treat 'int' as 64 bits and not variable size).
C# is the same: int is merely an alias for Int32, and the default Array type only supports 32-bit indexes.
If you really need to, it's easy to wrap this in your own BigArray class that holds a 32-bit array of 32-bit arrays and exposes an indexer taking an Int64.
It's a rational practice when your language allows casting pointers to integers. Since the size of the pointer itself will vary based on the underlying architecture, you need an integer type which will also vary based on the underlying architecture.
I think that the rationale for using int for array indices, lengths, and so on is to avoid shooting yourself in the foot with wrap-around arithmetic (again, my thesis being that when 99% of people use integers, they expect them to behave like integers, not wrap).
Take for example code which finds consecutive differences in a list, something like
for i := 0; i < len(list)-1; i++ {
    print(list[i+1] - list[i])
}
This code looks fine and reasonable, and we even avoided the off-by-one error at the end. However, if len(list) is a uint and we run this on an empty list, len(list) - 1 is then 2^32 or something, and the program explodes.
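You can watch both behaviors in Go itself, where len conveniently returns a signed int (a small sketch):

    package main

    import "fmt"

    func main() {
        list := []int{}
        fmt.Println(len(list) - 1)       // -1: signed, so the loop guard is safely false
        fmt.Println(uint(len(list)) - 1) // 18446744073709551615 on 64-bit: the explosion
    }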
So this is the rationale for using signed integers rather than unsigned integers. I think the language semantics should go further, and use true integers pretty much everywhere. Note that the compiler could still easily see that the index variable in the loop above is bounded, and use a machine word under-the-hood. But the programmer only ever has to think about well-behaved integers.
The original thread was about converting int to an arbitrary-precision type: "so that 'int' can become a true integer". One would imagine that, to preserve the ability to convert pointers to and from non-pointer numbers, they would just leave uintptr alone, and so basically sidestep your concern; int being the "native type" for slice indexes and the like is a more likely reason to avoid messing with "int" itself.
Okay, that's the source of the confusion. I was not replying to that sentiment, which is a nuanced discussion on what an int should be. I was replying to the_clarence's categorical dismissal of variable width integer types.
It’s been a while since I COBOLed, but if I recall, the COBOL decimal is similar to Currency types in languages like C#. (Maybe I’m misremembering.) If I’m right, though, it’s not a floating point number. It’s currency properly handled via integer math under the hood.
In C#, there's a "decimal" type. Which is exactly that - decimal floating-point. However, one catch about it is that it doesn't allow for the exponent that would place the decimal point somewhere outside of the representable digits of the number (i.e. unlike floats, the difference between any two decimals is never more than 1).
But it's still plenty useful, because it can accurately represent decimal fractions, which is exactly what's needed in many domains, since humans work with decimal fractions. Representing money is one particular example.
Decimal floating point is actually an established thing and part of the current IEEE 754 spec. I'm not too familiar with COBOL but pretty much any modern language will offer it through a library or compiler extension. Recent IBM Power processors even have hardware support.
Both you and the parent are correct. Decimal floating point is a long-established thing, but the vast majority of "Currency" or "Decimal" data types in modern languages are arbitrary- or fixed-precision (to some configurable precision) and use integer math under the hood.
That would be quite a fundamental change. At the moment _int_ and _uint_ have certain semantics that, if changed, would surely break many applications and libraries that rely on the current semantics. I am finding it hard to think of a more sweeping and drastic change to the core of a language.
Having said that, I'm by no means a Golang expert – this is the comment of an outsider looking in. I get that the proposer is Rob Pike and obviously Mr. Pike is god-like so what is it I am missing? Is Go 2 meant to break everything? That's like, wow.
Any low-level hardware bit-banging code is going to have to be audited or break in subtle ways, will it not? Is Golang different from C and C++ and languages of that ilk that this sort of change is not that much of a problem? Help me out here folks, I'm genuinely perplexed.
If the change as originally proposed went through, it probably wouldn't affect most bit-banging low-level code, because such code is already using int32/int64 rather than int (which has a different size on different platforms). You are correct that it could definitely break some programs, though, and so with recent sensibilities veering away from making any non-backwards-compatible changes, an amended proposal would probably just introduce a new "vint" type (maybe removing the old int type at the same time, if it is really ambitious).
If these are new datatypes then they would break nothing (ignoring the "int" default part).
> so what is it I am missing?
The Go team has been adamant about avoiding any changes that break existing code. The whole point of Go 2 is that at some point that might have to happen and they're trying to minimize that pain.
I've been thinking about this, and even though it's indeed a breaking change, it's one of those changes that can be easily toggled through a compiler flag. I cannot picture many libraries relying on the edge cases that Pike enumerated.
My (admittedly unsophisticated) mental model is that int32 and the like are simple primitives that can be stored in registers and use the processor's native ADD/MUL instructions, while BigInt is going to be some boxed structure with its own implementation of arithmetic operations. For as common as number munging is (especially in the somewhat lower-level code that Go shoots for), it would carry a huge performance penalty, wouldn't it? Or are there more sophisticated ways of offering a "true integer" type that I'm not aware of?
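One well-known approach is to keep a machine word for small values and spill to a heap-allocated bignum only on overflow; this is roughly how Lisp fixnums and Python's old int/long promotion work. A rough Go sketch, with all names made up:

    package main

    import (
        "fmt"
        "math/big"
    )

    // Num is a hypothetical hybrid integer: a machine word in the
    // common case, a heap-allocated big.Int only after overflow.
    type Num struct {
        small int64
        big   *big.Int // nil unless we overflowed
    }

    func (n Num) toBig() *big.Int {
        if n.big != nil {
            return n.big
        }
        return big.NewInt(n.small)
    }

    func Add(a, b Num) Num {
        if a.big == nil && b.big == nil {
            sum := a.small + b.small
            // Overflow happened iff the operands share a sign and the
            // sum's sign differs; otherwise take the fast register path.
            if (a.small >= 0) != (b.small >= 0) || (sum >= 0) == (a.small >= 0) {
                return Num{small: sum}
            }
        }
        return Num{big: new(big.Int).Add(a.toBig(), b.toBig())} // slow path
    }

    func main() {
        x := Num{small: 1<<62 + 1}
        fmt.Println(Add(x, x).big) // 9223372036854775810, past int64's max
    }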
I'd like this because it'd make certain classes of programming problems simpler.
Admittedly, not for things I build in my day-job but mostly for the kind of things you'd get when doing 'project euler' or 'leetcode'. Or more seasonal, the coming "Advent of Code".
I'm pretty excited by the idea of Go getting generics. This has always been my deal-breaker issue with Go and I'm glad that what appeared to be a disingenuous "let's pretend to be hunting for the truth until people go away" stance was actually really a hunt for the truth! Goes to show that you shouldn't make snarky snap judgments.
As for all the folks claiming they'll leave Go if it gets generics, it's faintly reminiscent of Mac fanboys claiming PowerPC chips were the Very Best right up until this was obviously not true. C++ generics are a PITA in many ways, but you can be insanely productive in the STL without having to hand-roll everything and with good type safety.
Despite the pain, I've been amazed at how easily you can build up some really complex data structures as pretty much one-liners ("Oh, I need a vector of maps from a pair of Foo to a set of Bar") that would either take a preposterous amount of code (or be a type-unsafe disaster waiting to happen) without generics.
Hopefully the final Go 2 generics proposal will capture some of this goodness without some of the horrifying C++ issues (error messages, bloat, sheer brain-numbing complexity).
"As for all the folks claiming they'll leave Go if it gets generics"
Baffled by this claim, I searched and found one person who really seemed to be saying "if Go changes dramatically...".
This recurring notion that Go fans are anti-generic is not rooted in reality. Instead they simply didn't buy the "either it's there or the language is useless -- generics or bust!" argument that pops up in every Go discussion. It's a fine, if imperfect, language without generics. It's a better language with them.
Maybe the even more nuanced take is that a language that can't conjure up safe and well-designed generics had a badly designed type system in the first place.
I really don't get the people opposed to generics. Is there actually an intersection of experienced developers who have come from languages that have generics and understand them, yet don't want them in Go? If so, why, and what do they use instead? Because Go has no compromise-free answer for generics: you either lose type safety, maintainability, or performance.
I suspect a lot of the generics hate is due to a large chunk of the community coming from dynamically typed languages, in which case they're just having a negative reaction to unfamiliarity.
Increasing language surface area will generally increase the complexity of all api surfaces written in the language. This has obvious costs.
It’s true generics will shrink some specific APIs, where they are a good fit.
But they will also be used opportunistically by developers excited to push their boundaries.
Of course you can say “just don’t do that” which works if you have a tightly controlled codebase. But most code is not that, and will be handed off to novices over and over for fixes.
So, it’s a question of whether you cater to the advanced developer who can capably handle a vast toolset, or do you commit as a community to more rudimentary tools, in order to reap the rewards of systemic simplicity.
There is no right answer.
I believe in the future all languages will fork into a simpler novice subset for general use and an expansive language for infrastructure. These will both be valid in the same parser, but the subset will be quarantined at the package management level.
It’s C’s lack of complexity which makes it necessary to do dangerous things for everyday purposes. I’d much rather a novice interact with generics, where the compiler is a safety net, than with void * and interface{}.
>> It’s C’s lack of complexity which makes it necessary to do dangerous things for everyday purposes.
That's not the case anymore. We have C++, which has generics and "fixed" C's lack of complexity for sure... Now, for some 'weird' reason, some people still use C. Wonder why?
It hasn't been the case since 1993, when Cfront was dropped.
In certain domains like UNIX like OSes, I don't see C ever going away, due to the infrastructure, symbiotic relation with the OS that gave it birth, and the culture.
>I believe in the future all languages will fork into a simpler novice subset for general use and an expansive language for infrastructure. These will both be valid in the same parser, but the subset will be quarantined at the package management level.
I don't think we'll see this happen much for existing languages, but it could be a very interesting angle for a newly-designed language (or rather pair of languages).
To some extent this is already happening in languages popular for machine learning. Libraries are written in C/C++ and the users just glue things together with very accessible API's.
Although I ultimately do want generics in Go, I am afraid they will make the language more difficult to use and understand. Generics in C++, C#, Scala, Java, etc. all tend toward being very complex and change the way programs are written.
The focus moves toward a taxonomy of types, and developers (myself included) sometimes get stuck on difficult type problems. There's something about trying to preserve type safety that sets the bar extremely high for bypassing the type system when there's no easy solution, and before you know it you've wasted 2 or 3 days writing code which doesn't actually do anything but placate the compiler.
And often that type-safe concoction you create is almost indecipherable when you come back to it later.
For example here's a project I worked on recently which cached a concrete version of a generated method using generics:
At least for me, that was really hard to figure out how to do, and I still have to squint to see what the heck it's doing. The non-generic version wasn't type-safe and it wasn't as fast, but it sure was a lot easier to read and understand.
And to give some sense of the complexity involved, Rob Pike mentioned in his talk that the proposal spec for adding generics to Go is longer than the spec for the entire language.
I think the complexity is worth it, but I just hope we can be cautious about how and where generics get used in real-world code, otherwise we'll end up with gobbledy-gook that only experts can decipher... and that would be really sad, because the promise of Go was a language normal engineers could be productive with.
C++/CLI had both compile-time generics (templates) from C++, and run-time generics from the CLR. And they could be complementary at times.
For compile-time generics, D's are a lot saner than C++'s, having had the benefit of coming later and dumping C compatibility.
Similarly, CLR (C#, ...) generics had the benefit of being designed after seeing Java's first, so IIRC they're baked into the CLR itself. CLR generics were derived from work done at MS Research Cambridge, and I seem to remember Don Syme (F#'s creator) being rather proud of them. Disclosure: I was contracting at MSR Cambridge back in 2007.
Anyway, the golang designers will be aware of these implementations, so hopefully they'll come up with a nice design.
I don't hate generics. I see the value of generics, especially after having worked with Go for so long; I've had to do some mental gymnastics to get around not having proper generic collections.
That being said, I'm not excited about seeing generics in other people's code. The added complexity doesn't really solve problems I have anymore.
That being said, I'm actually way less excited about overloading.
I think generics and function overloading is going to make me think "where the hell is this coming from?" a lot more often and then I'm going to need to load it up in my IDE, or vim with way too many plugins, and start following definitions.
I think the main fear people have isn't the concept of generics, but the implementation of generics. Go's primary goal is simplicity, and implementing generics isn't simple.
But like I said, Go doesn't have an answer for generics. So the complexity just lives in userland code instead of the language itself. The problem and complexity doesn't go away.
If the choice is between badly implemented generics and having the problem manifest as userland code, I'll take the latter. I've actually had to debug C++ production code produced by a confluence of templates that had no source code of its own. At least with boilerplate, you can simply see what's going on right there.
If language features were free, we'd likely have had generics for a long time now. Unfortunately generics are a trade-off: you get development speed/ease and pay for it in compile time, binary size, and/or execution speed.
This seems to be slowly changing, but Go was designed to be a solution to Google problems: Python being slow, but some C++ applications taking literally hours to compile. Keeping that perspective in mind makes it easier to understand why the Go maintainers have not accepted an implementation of generics into the language yet.
If you maintain type safety, you'll pay in increased compile times with or without generics. You'll either hand-roll an implementation for each type (https://golang.org/pkg/sort/) or you'll generate code before actually compiling. The problem doesn't go away.
It's people that have seen the abuses of C++ templates. They're very powerful and therefore people tend to want to use them for really complicated things.
Look up things like SFINAE and compile-time metaprogramming. Templates were not intended for those things. When they work they're OK, but if they go wrong, good luck following the 10-line error message.
That's not compile-time metaprogramming at all; it's just a series of wrapped closures. It's basically the sequence of function call names preceding that call, backwards, with different capitalization.
The big difference between C++ and Rust is that, when you scroll down in this Reddit thread, it shows that the Rust guys have a clear path for fixing this issue, whereas there is no fix for this in C++ (that I'm aware of).
Any sufficiently advanced type system can be used for compile-time metaprogramming - that's just an inevitable side effect of a type system expressive enough to capture all the more convoluted (but still plenty common) cases without hacks like interface{}.
I'm of two minds. I come from a Java background, so I've personally wanted generics; ORMs are one use case that comes to mind.
But. There are many modern languages that already have generics. Why can't Go be that one that doesn't cave in and remains powerful in its niche and perhaps may never be the preferred tool in other areas? Why must it be useful for web development, microservices, DAL, etc.?
Any implementation of generics in Go will come with complexity tradeoffs.
I feel like the community learns more about languages and software engineering when we maintain language diversity and see the pros and cons of each approach in practice.
I want generics for selfish reasons, but would also like to see how a modern strongly-typed language solves problems without it.
As someone who quite likes generics, I'd love to see how a modern strongly-typed language solves problems without them. And I think that's what the Go community and developers have been trying to do up until now. It looks like they're giving up. I'm not sure whether we'll ever find another way to tackle composition/scalability as effectively as generics do, but such a technique would be fascinating to see.
For me at least, the reluctance comes from the new proposal process not having yet proved itself for large backwards-incompatible features. I want to be assured that a Go with generics is a Go with a really well-integrated feature, and not some grafted mutant appendage whose only real purpose is to appease the greater community.
It's great that they're trying out the process on smaller, more simple proposals. My hope is that this system will either produce really good features, or reveal that there are simply no satisfying solutions.
Coming up with a complex solution is much easier than finding a simple one. Go forces you to find those simple solutions. There are places where generics are the only solution, but they also enable lazy design.
I'm hoping things like generics can just be accepted to be a good idea moving forward and that we can all agree that languages without them are handicapped.
I still feel things are up in the air about exceptions, but maybe we can at least agree on generics, which would make me feel better.
Using go without generics just felt insane to me...
Yeah, Go has some really fantastic aspects around tooling and a good concurrency story, but it's otherwise such a huge step backwards. I can't fathom the reason for not having generics. It's such a simple, completely common-sense abstraction.
Things like typeclasses or multimethods offer vastly more abstraction power. These are (a bit) more difficult to understand, but you certainly don't have to be a genius (take it from me). I can kind of get why a language targeted towards "average" programmers might want to omit these.
But here's the thing: The less abstraction power a language has, the more complexity must be handled by the developer. This leads to things like Java Spring, which you do have to be a genius to understand.
>"Oh, I need a vector of maps from a pair of Foo to a set of Bar"
This is a fun example since you can do that in Go now, as it comes with a generic vector (essentially) and map :D
I think that was the design decision: 95% of generics use cases are covered by having growable arrays and maps, so why clutter the language with generics?
I like generics, I think most people who dislike them are coming from C++ templates, which has downsides that don't exist in newer generics systems (such as in C# or F#)
Rather than writing multiple functions that take and return different types (say int8, int16, int32, etc.) but do the exact same thing, with generics you write one function that takes a number (which could be any int type) and returns a number. That is generics in a nutshell: the function is generic, not specific to one type.
Generics allow for less code, thus they are easier to debug, test and reason about. I've used them a lot in C++ and I do miss them in Go. It's not a deal breaker for me either way, but for people who write larger, more complex code, not having them makes it more difficult (more code to write, test and maintain).
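For reference, here's that idea written out as a sketch, using the type-parameter syntax Go eventually shipped in 1.18 (which didn't exist yet when this thread was written):

    package main

    import "fmt"

    // One Sum for every listed integer width, instead of one copy per type.
    func Sum[T int8 | int16 | int32 | int64](xs []T) T {
        var total T
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(Sum([]int8{1, 2, 3}))  // 6, with T inferred as int8
        fmt.Println(Sum([]int64{1, 2, 3})) // 6, with T inferred as int64
    }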
C does not have generics either. Go is a lot like C in this regard.
In an ahead-of-time compiled language, probably not. A generic function is just a code generator for several concrete functions that get called in the right places, generated and placed by the compiler, and then optimized as per normal.
And no, being a dynamic language is not what makes Python slow. Lua, Nim and Scheme are all examples of dynamic languages with fast implementations.
- Generics do not make runtime performance slow, but they sure do make compile times slower (although it really depends on the type system and its implementation). For example, C++ templates, although very powerful, make build times orders of magnitude slower when used poorly. One of the most important features of Go is its fast compile times, but generics/contracts could potentially slow them down a lot.
- Yes, being a dynamic language slows it down a lot. For example, when evaluating a+b, the interpreter has to check the types of the variables a and b at runtime before performing the right form of addition (it might be an int, or a string, you don't know). Even with JIT (just-in-time compilation), the compiler has to initially "guess" that the variables are numbers, and fall back if that is not the case. Statically typed languages do not have this problem, because you know the types of variables beforehand at compile time.
By the way, Nim is not a dynamic language. And LuaJIT is one of the fastest dynamic language implementations because the language is very simplistic (compared to JavaScript/Python/Ruby) and Mike Pall is a robot from the future...
> the interpreter has to check the types of the variables a and b at runtime before performing the right form of addition (it might be an int, or a string, you don't know)
That's not necessarily true. In Scheme, a very dynamic language, the compiler will generally make choices about the memory layout of the various values before launching into the evaluation phase. Where safe or possible to do so, it will reduce, at compile time, the number of choices that have to be made at runtime. The interpreter may not have to look up what the data type is, because it may just have two bits of memory and an instruction to call. You can know what the data will be, and optimise for it. [0]
There's no reason that b = 1; c = 2; a = b + c needs to be slower than a = 1 + 2 if b and c are never referenced before or after. But in Python it is, because the design makes it harder to know whether or not an object can be optimised away.
> And LuaJIT is one of the the fastest dynamic language implementations because the language is very simplistic
Right, Lua is dynamic, and one of the design choices was making it simple, and so it's easier to make it faster.
If you've never met Scheme, then SICP [0, 1, 2] may be something that can change the way you program. It certainly made me better, or at least gave me a deeper understanding.
Some of it is simple stuff, like CPython being interpreted, so PyPy gets a huge boost by JITting.
Some of it is much harder stuff, like the way Python is designed to store objects in memory (a list is a pointer to a contiguous space of pointers to things that might be pointers...), and everything that hangs off each object (everything has a dict), and the awful GIL ([0]). Awful for performance, great for thread-safety.
Every single part of a language design has trade-offs. It depends on what you're trying to do whether or not they help you, or hinder you.
Sometimes you change how you're doing things because priorities change, and get a radical improvement, like Python's 3.6 Dict. Most of the time, you don't. One step at a time. Not being able to break backwards compatibility hinders the designer, if they realise a trade-off they've made was a mistake. So you get stuck with some features you'd rather not have.
To be fair, all of the aspects of the language itself you mentioned apply to JavaScript (save perhaps differences in how literally it takes `int`, etc. being objects vs. JavaScript's unboxed `number`, etc.). I suspect most of the difference is that one has had three of the largest corporations in the country competing to have the fastest implementation, in some cases for decades, and the other hasn't.
I think a bigger reason is that Python prioritizes simplicity of internal implementation.
To be sure, the internal implementation of CPython (the only one I'm familiar with) is complicated as hell. But it's a whole lot simpler than any fast JavaScript runtime I've ever used. I think that's the result of a conscious choice by the maintainers/BDFL/community in Python, not a side effect of it having less funding (assuming that's the case).
I don’t think it necessarily impacts the runtime (though it could).
This could be my naïveté, but I don’t see why generics in Go couldn’t work analogously to the way they do in TypeScript, just a way for the compiler to verify the correctness of code accessing an interface{}.
A simple example of generics is generic collections.
Without generics, if you need a list filled with TPS Reports you basically have 2 choices:
- Use the 'untyped' List that deals in objects. You will have to cast to TpsReport whenever you look up an item, and you will have to make sure that no one accidentally adds a Timesheet to this list.
- Write a custom TpsReportList. I'm sure you'll be able to write an implementation as efficient as the language creators'. Oh, and if you copy and paste from TimesheetList, don't forget to check all the names; `tpsReports.AddTimesheet()` is just embarrassing.
With generics, you will have a `List<T>` type. If you need a list of TPS Reports you will use the type `List<TpsReport>`. If you need a list of Timesheets, you will use `List<Timesheet>`.
You can't add the wrong type of item to such a list, you will always get the declared item from that list and you can't assign an instance of one type to a variable of the other type.
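A minimal sketch of such a List in the generics syntax Go later adopted, with TpsReport and Timesheet as stand-in empty types:

    package main

    type TpsReport struct{}
    type Timesheet struct{}

    // List is written once, works for any element type, and stays type-safe.
    type List[T any] struct{ items []T }

    func (l *List[T]) Add(item T)  { l.items = append(l.items, item) }
    func (l *List[T]) Get(i int) T { return l.items[i] }

    func main() {
        var reports List[TpsReport]
        reports.Add(TpsReport{})
        // reports.Add(Timesheet{}) // compile error: wrong element type
        _ = reports.Get(0)
    }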
As a newcomer to Go, by an immensely wide margin, the hardest, most frustrating thing, which soured the language for me, is whatever the heck package management is in Go.
There's like three or four different angles, all of which overlap. Some are official, some aren't; the unofficial ones seem more popular, and they're all kind of incomplete in different ways. It was all such a frustrating migraine to try to figure out. I haven't felt so viscerally aggressive about a piece of software in a long time as that whole experience made me feel.
I hope Go2 makes something concrete from the start and sticks with it, for better or worse.
Having used the new Go module system (introduced in Go 1.11 as an option, to be the default choice in 1.12) since August, it's my opinion that this is now a solved problem.
The biggest source of pain moving forward is going to be the projects that haven't transitioned, including the various command-line tools that work on parsing, generating and manipulating Go code (e.g. linters, code generators). Most of the important ones are already there, and I've transitioned several myself.
As an added bonus, word is that the Go team wants an official package repository system (similar to Cargo, RubyGems etc.). I wouldn't be surprised if this happens rather quickly.
> it's my opinion that this is now a solved problem.
It's starting to look like a viable solution, but it's not even close to actually solved yet. Why does `go mod why <module>` make changes to your module? How do you run go get to install a remote package when modules are enabled (without explicitly running `GO111MODULE=off go get`)? Why isn't the module cache concurrency-safe? Why does the module cache sometimes mysteriously cause compile errors until you `go clean -modcache`? There are so many little bugs and oddities.
And as you mentioned, a lot of things have side effects now that didn't use to, which has catastrophically broken a lot of the tooling surrounding the language. Autocomplete using gocode used to be nearly instant. Now it sometimes triggers downloads and takes 30+ seconds.
I'm hopeful that go 1.12 will be the first release where this problem is really solved.
There are bugs, but I was referring to the design of the whole thing.
By the way, your "go get" bug was fixed today [1], if I understand your complaint correctly: With the new modules turned on, you could no longer do "go get" globally.
(I would agree that it's a little weird that "go get" outside a module installs it globally, while inside a module it installs it locally; that's going to trip up scripts and Dockerfiles, and it should really be two separate commands. "go mod add" to add a new dependency, for example.)
There are still some rough edges, but I agree that the new module system is very good. And it's given me the confidence in Go to start using it far more broadly than I had before, when every new project meant hand-wringing over how to handle dependencies. Now it Just Works well enough for 90+% of use cases, with no extra steps required.
I just started learning Rust for a small project, and I found its "one clear path" model very appealing (I don't know if it is a formal goal, or if it's just a happy accident based on a smaller, more focused, community). It doesn't just apply to the package manager, but that's one of the first bits a beginner sees. Rust has a single, easy-to-find, "pretty good" answer for nearly every question a beginner asks, at least, it has, so far, for me. Cargo is that answer for packages and for building, and it's pretty good, and there's no debate about how to install or build Rust tools and libraries.
I didn't really find that to be the case with Go, so even though Go is a simpler language, I have been more productive, much more quickly, in Rust. Within a couple of hours of starting my project (with only a cursory glance over the docs and tutorial) I had my project daemonized, had options parsing working, had system calls working, got logging working, etc. It was shocking how quickly I was up and running.
I'd been hesitant to try Rust, because it looks big, and I'm kinda tired of big languages. I just don't have enough time/motivation to study a bunch of nuanced syntax and such; I'll never be a great C++ programmer, though I can muddle through and usually understand other people's C++ code. But, so far, Rust is proving to be one of the easier languages I've learned lately, partly due to the holy path being well-defined, and partly due to strong libraries that overlap my particular project perfectly (being a systems language built by very smart people, it has some very good systems libraries).
I don't mean to imply Go is a hard language, it's not. I picked it up pretty quickly, too. Both are easier to learn than, say, JavaScript, because they're much smaller. But, I agree that there isn't a super clear path forward for a beginner with Go, including with installing and building. You just have to acquire more tribal knowledge to work in Go than in some other languages (but, much less than many others...older languages tend to have tons of that kind of thing).
It has excellent package management, robust compiler messages, pattern matching, etc. But once I start trying to build non-trivial data structures, it becomes a nightmare. For example, a doubly linked list or any sort of graph is extremely hard for me to build in Rust but extremely easy in Go.
I am still learning the full capability of Rust; hopefully it gets easier as I practice more. In the worst case I think I would still use it to build data pipelines, since I really enjoyed the syntax and the safety checks, as long as I don't get into smart pointers or raw pointers.
Also another annoying point is that some libraries use nightly.
Of course, if you're comparing with Go, you need to compare apples to apples - since Go doesn't have ownership tracking, the equivalent Rust would necessarily have to be unsafe.
I think that heap-allocating everything and using Rc<T> and Weak<T> would be a closer comparison to what Go does, than using idiomatic Rust with borrow-checked locals etc.
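For what it's worth, here's the kind of thing being compared, sketched in Go (names invented). The cyclic Prev/Next pointers below are exactly what Rust's borrow checker objects to, and exactly what a GC shrugs at:

    // A doubly linked list node; the Prev/Next cycle is unremarkable
    // under Go's garbage collector.
    type Node struct {
        Value      int
        Prev, Next *Node
    }

    // insertAfter splices newNode in immediately after n.
    func insertAfter(n, newNode *Node) {
        newNode.Prev = n
        newNode.Next = n.Next
        if n.Next != nil {
            n.Next.Prev = newNode
        }
        n.Next = newNode
    }

The idiomatic Rust equivalents reach for Rc/Weak (or unsafe) precisely because ownership of each node is shared between its two neighbors.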
Rust easier to learn than JS, lol... In Rust you'll hit issues that you may or may not overcome easily; nothing like that happens in Go. And seriously, getting started in Go takes less than an hour:
I explicitly said above that Go is an easy language to learn. But, I found Rust easier.
And, yes, I'm finding Rust easier to learn than JavaScript. Without question. It is a much, much, smaller and more cohesive language. It has some new concepts (features or techniques that I have never used in any other language), but so does modern JavaScript, and with JavaScript there's usually five ways to do it, and half of them are really poorly thought out. I don't dislike JavaScript. I'm not saying one shouldn't learn some; one definitely should. But, it's definitely going to be "some", for most people, because there's just too much of it to learn it all, unless you can devote yourself full-time to being a JavaScript expert. I aint got time for that.
As I mentioned, I've built a (toy) project in Go. I've spent more time with it than Rust at this point. I know how it works, and what getting started looks like. And, though Go was pretty easy with few pain points, I've found Rust to be easier and to provide a more clear path for a beginner to follow, so far.
Everyone is different, and we're all coming from different places. No one has exactly the same set of starting conditions for learning a new language as I do. For some, Go may be easier than Rust (I expected it would be, which is why I started with Go and avoided Rust for so long). For me, I am finding Rust easier. I haven't done much with it, but I was surprisingly productive surprisingly fast.
I for one totally agree on Rust being easier to learn than JavaScript. I suspect it's because of the explicitness of Rust versus the flexibility of JavaScript, where you can do literally anything you wish without any sort of guard rails; then there's the ecosystem, with like a gazillion tools in your build pipeline. I really admire front-end devs because they do what I cannot. I guess I'm not as clever a programmer: I need the compiler to hold my hand, lead the way, and yell at me when I'm going astray. Rust does that for me, and when I'm done abiding by the rules, cargo is there to take over the rest of the process.
I don't really care about the guard rails. I grew up in Perl (pre-strictures and warnings), and BASIC was the first language I ever built something in, so I'm not too fussed about things always feeling a bit loose, and having to defend yourself with tests and imposing some rules on yourself by convention.
While I think a stricter language is more likely to produce better software, especially as complexity grows, I don't think it has a huge impact on whether the language is "easy to learn" for me. Python is very lax (contrary to popular belief, Perl with strict/warnings is stricter than Python, and protects you against a wide variety of scope-related bugs in particular) but I consider it an easy language, too, because it is consistent and small(ish). There's some kind of balance to be struck between elegance and simplicity, and between explicitness and concision, and Rust feels very good, so far. I don't think that balance will be the same for everyone.
JavaScript, to me, is hard just because it's so damned big and incoherent. It's been pulled in twelve directions at once for its entire life, and it shows. JavaScript is like a buffet that has Chinese food, Indian food, pizza, sushi, and tacos. Most of the food isn't very good, but there's a lot to choose from. It doesn't help that learning JavaScript also entails trying to make sense of the maelstrom of tooling that's available. While Rust has one clear path for beginners, JavaScript has a haunted corn maze.
My only experience with Go was around 2012-2013. It was fun, but I did not bother sticking with it. The forced directory structure was a little bit annoying and I found myself writing interfaces for everything. I'm sure people will say it's my fault as a programmer, but it turned me off from the language.
Go modules are available in go 1.11 today. If you're envisioning another new pkg management solution besides that, I don't think that's going to happen.
go1.11 pretty much solved this, though there are still many dependency managers out there, and projects which rely on them should be upgraded to support the New Way.
"package" in Go has always referred to a folder. Or in other words, a shared namespace for every entity exported by every file in that folder. "module" refers to a package which has a go.mod file in it; that module package + all of its children become versioned via that go.mod file and are distributed as one unit.
It works much like npm: npm packages are folders, and there's a special package with a package.json which versions that folder and all of its children. Then that "package.json package" is what gets distributed.
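For anyone who hasn't looked yet, a go.mod is tiny. A made-up example (the module path and dependency are chosen purely for illustration):

    module github.com/example/mytool

    require (
        github.com/pkg/errors v0.8.0
    )

The `module` line declares the import path prefix for every package under this directory, and `require` pins each dependency to a version.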
I'm curious what language you come from that has much better package management than Go. I'm guessing not JavaScript, or C++, or Java, or Python, or...
I tried Go a while ago.
I was hooked by the performance, the community around it and the vendor support (AWS, Heroku, GCloud, etc.) but I got quickly fed up with the awkward package management system, the weird syntax and the horrible idea of $GOPATH, especially on Windows.
Haven’t tried it since.
I hope a lot of this changes to make the language more welcoming to newcomers.
If $GOPATH was your biggest complaint, now may be a good time to give it another look. As of 1.11, there's an experimental feature called go modules that lets you avoid using GOPATH. I believe it's going to be non-experimental starting in 1.12.
That's terrific news! GOPATH and the file system conventions are horrible for me as well. It forces me to break my personal conventions and workflow that I use for every other language. I avoid using go for new projects now because it got to be so annoying and disruptive (a somewhat shallow reason, I know).
Don't get too excited. You don't have to place your projects within $GOPATH anymore, but all your dependencies are still forcefully downloaded to – and imported from – the shared $GOPATH. AFAIK they didn't provide a way to have your dependencies localized to a subdirectory of your project's root. The documentation for "go mod vendor"[1] makes it seem like it should accomplish that task, but I couldn't get it to work for the initial pull of dependencies – it only worked after dependencies were already downloaded to $GOPATH, at which point it was willing to make a copy of them within the project.
[1] "... or to ensure that all files used for a build are stored together in a single file tree, 'go mod vendor' creates a directory named vendor in the root directory of the main module and stores there all the packages from dependency modules"
That ended up being the biggest hurdle for me. I wanted a single repository with some Go source code, some Python, some C++, and I didn’t want to have to put the repo in a specific place or set environment variables for every project.
Nowadays I just put my Go source code in <repo>/go/src/example.com/pkgname and that works well enough, but it's a bit clumsy and reminds me of bad experiences navigating Java source trees. I haven’t switched to modules yet but I will once I get 1.12 everywhere.
Go modules are extremely suitable for current work; they're as un-risky as anything labelled "experimental" could be. I've been using them for a boring professional application for about 4 months now, and there are no hassles with using them.
Think of it this way: if you spend time learning and using a feature that is not guaranteed to be there, say, a year down the line, is that a good investment?
If they do mean what they say, then they will release a version of Go where the GOPATH nonsense is over and a fix for that design problem is not euphemistically described as experimental.
Until then, no LTS means not ready for production.
GOPATH is still being used internally by Go modules, but it's set up automatically and you never have to even know it's there.
It will probably remain an option to use it as a user for a while due to backward compatibility.
I don't see how that prevents you from using Go modules.
> a fix for that design problem is not euphemistically described as experimental
The simple reason for it being "experimental" is because Go 1.11 is the first release to include it, not because of some inherent instability. Wait till February for Go 1.12 if you're so worried.
> no LTS means not ready for production
I don't know what you mean by LTS here, since every Go release as of 1.0 has been backwards compatible.
For some people and organizations, "as un-risky as anything labelled "experimental" could be" is risky enough to be automatically disqualified from consideration. There's the label 'experimental', so it can't be used anywhere touching the production codebase, regardless of any other arguments.
Not the OP, but uh, no: neither of those is "professional" in the sense that professionals should use them, regardless of the plain fact that "professionals" do use them.
Those are both garbage languages, despite their utility.
> should we only use languages that were perfectly designed to start?
If there is a myriad of languages and we have limited time to invest in mastering one, we had better use our time wisely and not waste it on those with severe design problems and designers who have refused to face those issues for years.
The compiler will infer automatically that the return value inherits the lifetime of the `text` argument. I don't find that line up there particularly busy.
>I tried Go a while ago. I was hooked by the performance, the community around it and the vendor support (AWS, Heroku, GCloud, etc.) but I got quickly fed up with the awkward package management system, the weird syntax and the horrible idea of $GOPATH, especially on Windows.
My experience was similar. In addition to those, I found I really missed a REPL console, and even more so something like byebug, which RoR has. For those that aren't familiar, byebug lets you put the command "byebug" anywhere in your code to open an in-context REPL. It's enormously helpful for hard-to-figure-out bugs.
The other thing that turned me off from Go was testing. It has good enough support for unit testing, but it really lags behind in integration testing. Sometimes you want to know that if you hit this endpoint with this payload, you get this response back. It's harder than it should be to write integration tests where you spin up the application and test it end to end.
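To be fair, the standard library does cover the in-process version of this with net/http/httptest; the gap is more in whole-application, multi-process testing. A minimal sketch, where newAppHandler() and the /reports endpoint are hypothetical stand-ins for your app's real routing:

    import (
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    // newAppHandler stands in for however your app builds its routes.
    func newAppHandler() http.Handler {
        mux := http.NewServeMux()
        mux.HandleFunc("/reports", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusCreated)
        })
        return mux
    }

    func TestCreateReport(t *testing.T) {
        srv := httptest.NewServer(newAppHandler()) // real listener on 127.0.0.1
        defer srv.Close()

        resp, err := http.Post(srv.URL+"/reports", "application/json",
            strings.NewReader(`{"name":"tps"}`))
        if err != nil {
            t.Fatal(err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusCreated {
            t.Fatalf("got status %d, want %d", resp.StatusCode, http.StatusCreated)
        }
    }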
> In addition to those, I found I really missed a REPL console, and even more so something like byebug, which RoR has. For those that aren't familiar, byebug lets you put the command "byebug" anywhere in your code to open an in-context REPL. It's enormously helpful for hard-to-figure-out bugs.
This is like comparing apples and oranges, or at least, like comparing apples and apple-orange hybrids :)
Just saying that Rails (RoR) has a command like byebug that opens an in-context REPL seems to ignore the fact that compiled languages like Go catch many errors earlier, at compile time, so you don't even need an in-context REPL or a debugger to find those.
Not saying that features like byebug have no use at all, of course.
You know well that it is. Add /s or /rhetoric to your question next time :)
Languages can be pretty useful even if they are imperfect, as others have said. And Go is plenty useful. The Stroustrup quote about C++ comes to mind ...
"Generics" can be a lot of things in a lot of ways. AFAIK there is no official Go design for generics, or a list of the deficiencies they will have, so a blanket "this particular problem will of course be solved well" is speculative.
Elixir's IEx.pry/0 function, that opens a REPL at a particular point of execution, is probably the stdlib function I call the most while developing. In a compiled language, to boot.
>This is like comparing apples and oranges, or at least, like comparing apples and apple-orange hybrids :)
>Just saying that Rails (RoR) has a command like byebug that opens an in-context REPL seems to ignore the fact that compiled languages like Go catch many errors earlier, at compile time, so you don't even need an in-context REPL or a debugger to find those.
I definitely liked the type system of Go and the compile time errors it found. It gives you a lot of peace of mind and reduces some errors. However, those errors aren't the things I use byebug for. Byebug is for things like figuring out why your code went down Path A when you expected it to go down Path B. Or if you don't know how to do something it's a sandbox to try a few different things until you get the output you were looking for.
Yeah, and having access to a REPL can catch semantic and architecture issues before you finish integrating and fire up the whole application. As do testcases.
Compilation is not a magic bullet. Although, to be fair, I pushed Golang at work precisely because you have to compile it first. Not everyone is diligently testing their code...
>Yeah, and having access to a REPL can catch semantic and architecture issues before you finish integrating and fire up the whole application. As do testcases.
True about the semantic and testcases parts. Not clear how it helps with architecture.
Will check it out, thanks. I'd just been wondering a while ago whether there is any Go interpreter. I remember C having at least one, back in the day.
>It would be awesome to have a REPL in the standard Go distribution, and I definitely feel the lack
Agreed. And it should be more like IPython (command-line version) than like the stock Python shell.
I think the closest Go equivalent of 'byebug' would be delve[0].
It's got more of a learning curve than a REPL, but very powerful. Editor plugins for Go also tend to have good convenience support for delve, making using it less painful. For example, vim-go[1].
> For those that aren't familiar, byebug lets you put the command "byebug" anywhere in your code to open an in-context REPL. It's enormously helpful for hard-to-figure-out bugs.
Isn't that what a normal debugger does? Or am I simply living in la-la land because of C#/.NET and Visual Studio's terrific debugging experience?
I remember PHP and the hell that was xdebug. It was much easier and more efficient to simply do a `var_dump` whenever you needed to debug.
hah. GOPATH is the only thing I like about Go. I have all (non-Go) repositories cloned as URL-style paths e.g. ~/src/github.com/user/repo.
I strongly dislike the non-standard internals (horrendous custom assembler, direct usage of syscalls instead of libc) and the "developers are too stupid to use this" attitude towards modern language features.
No, by default Go programs and the whole Go toolchain have no C dependencies. The whole Go toolchain is implemented in Go, and it doesn't link against any C libraries. Go links against libc only if you use CGO or a package which has C dependencies, but I am not sure whether there is any package left in the standard distribution that does.
Same for me. I moved all my code into the GOPATH pattern after finding that I had 3 separate clones of the Linux kernel in separate places. (And not just for development, just for reading and grepping.) I made myself a helper tool for navigating the GOPATH:
$ cg gh:torvalds/linux # cg = cd to git repo
$ pwd
.../src/github.com/torvalds/linux
It is very common for language runtimes to link and depend on libc on Unix, even if the libc API is not directly exposed in those languages. Go is somewhat unusual in this regard.
MacOS doesn't guarantee backward compatibility for direct syscalls. This has caused bugs like this with compiled Go binaries:
Go has very recently started using libSystem (which is analogous to Linux libc or Windows CRT) on macOS to avoid this issue.
On a more philosophical level, POSIX is defined in terms of a C standard library, and not using libc means Go doesn't support and/or must implement itself various features that are otherwise provided to POSIX applications by the system (like locale handling). Your mileage may vary in terms of whether that's a bad thing or a good thing.
> It is very common for language runtimes to link and depend on libc on Unix
It is, and this is where you have to choose between a fragile executable that is linked to a specific vendor and version of libc (glibc, musl, etc.), or a bloated executable that statically links it.
> MacOS doesn't guarantee backward compatibility for direct syscalls.
Sounds like Go should use the stable API MacOS does offer. If the stable API is libSystem (different than libc), then so be it.
But if we're talking about Linux libc, there's no reason for Go to use it.
> POSIX is defined in terms of a C standard library
Specifically POSIX.1 (not POSIX.2).
Also, for what it's worth, POSIX compatibility falls short of Go's goals for compatibility. Specifically, on Windows. So I'm not sure what there is really to gain by following a different language standard for a different set of platforms that you desire to support.
It's also worth considering that none of the major systems are actually POSIX-compliant today. You're always going to need to make specific allowances if your standard library supports anything beyond the absolutely trivial even if you ignore Windows.
On glibc-based Linux you generally need to compile on a system running the oldest (or close to) version of glibc you want to support. I suspect this was the singlehanded reason for Go being built that way.
That said, on macOS it makes a lot less sense because libSystem's compatibility guarantees work a lot like those of Win32, where you can safely compile on newer versions of the OS as long as you don't actually use features that are newer than your deployment target.
FWIW GOPATH has not been required since 1.8. It now defaults to $HOME/go if not set.
Go is pretty easy to get up and running in Windows. There's an installer for the compiler and you can install vscode and the Go extension pretty quickly.
Windows is an afterthought for most programming languages (ever try ruby or c++?) and Go's cross-platform capabilities were a breath of fresh air.
I agree about the package management system and $GOPATH. I would add that it is awkward that most of the libraries are not thread safe when goroutines are core to the language.
Same here. For me it was primarily the syntax. So many people think that syntax is something you get used to, but I don't. Syntax matters a lot for me, and the way Go does it just isn't compatible with my brain.
Regular grammars are great for parsers. But really, having an easy to read (conceptually!) language is way more important imho.
But call me crazy when I say that I like C++ and can read it effortlessly :)
I can get used to something and still dislike it... With Go it's mostly the bracket style, or put differently, the inability to turn off the automatic insertion of semicolons before parsing. I'd actually rather "have to" put in semicolons manually, but no, I have to suffer so others don't have to put an additional line into their style guide.
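Concretely, this is the style semicolon insertion forbids. Because the lexer inserts a semicolon at the end of the first line, the opening brace can never go on its own line:

    // Syntax error: a semicolon is inserted after "func main()",
    // so the compiler sees a function with no body.
    func main()
    {
        fmt.Println("hi")
    }

    // Only the "one true brace style" parses:
    func main() {
        fmt.Println("hi")
    }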
Yeah, I completely agree. Though I think when you really get used to a language and understand it more deeply, you tend to understand the trade-offs that were made, if the language is well designed. You can still think those trade-offs don't match your requirements, though.
C++ is definitely something to behold in terms of syntax. It's hard to justify much of the type-system syntax (meta-templates, `sizeof...(Args)`, and `std::forward<Args>(args)...` come to mind). I'm picking on argument-packs here and ignoring the beauty of initializer-lists, operator-overloading and user-defined literals (which allow great syntax in user-code but their own syntax is haunting in library-code). I doubt that C++ would look the same if the syntax were created from scratch with the current set of features.
So it's fair to say Go's syntax is unfortunately limited in expressiveness (and I too find myself mystified by how to express ideas neatly with go), but I have a hard time imagining C++ is something worth emulating without adding a whole bunch of subjective caveats to what "should" be avoided.
I've given go plenty of time to sink in, and I still prefer c++. Things like not allowing implicit type conversions on things that clearly are not a problem (int16->int32) are just annoying.
Well I must say the Go team is certainly putting in the work to avoid a catastrophic major version bump (e.g. Python).
That said, any major additive change to Go, especially generics and/or try/catch will push me away from the language. If I need a well designed language, I have Rust. Go's sell for me is it's so naively simplistic it's actually useful when your team members are idiots.
If they bolt on type variables, well, that's just a different language, and we'll just end up on the hedonic treadmill towards another Java. No thank you.
Isn't the premise of Go that a sufficiently large team will eventually act as a collective idiot in terms of code maintenance? I'm looking forward to some real research on the question of whether Go's design choices have had real-world results in improving productivity with supersize code bases.
My team has ~200k lines of Go code and I'm exceedingly happy with the state of the codebase. Having previously maintained a C++ codebase of similar size, I can say that the pace of changes is higher, the effort necessary for large scale refactorings is lower, and our ability to reason about the system is similar.
A few examples:
- Refactorings are simpler due to the use of consumer-side interfaces. Say you want to inject an in-memory cache above a backing store. To do that you probably have to change the constructor and provide an implementation matching the 2-4 relevant methods. That's it. (There's a sketch of what I mean right after this list.)
- Tracing code is slightly worse, due to having to track callers through said duck-typed interfaces, but on the flip side multi-threaded code is sufficiently simpler to reason about that I call it a wash. Having previously had to do threading in the form of "control flow" state machines, and then fibers (which were better but not perfect, and still aren't widely available), Go constructs are great. Locks where appropriate, channels where appropriate, overall very fast and clean code.
- Performance is good, and reliable. Not as good by cycle-count as C++ - and e.g. the comparable RPC libraries are definitely less mature than Google's very-well-kicked C++ libraries - but on the other hand it scales almost linearly. We started a system at ~5 cores/task under AutoPilot, and then when we next got around to adding more tasks it was peaking at ~60 cores/task at essentially the same per-core throughput. I've never managed to write a C++ server that can accidentally scale concurrency by >10x without hitting _some_ bottleneck.
- We use Go for ~everything. Server code, definitely Go. Client tools, also Go. Simple scripts, bash until they need their first 'if' or flag or loop, then Go too.
- I'd prefer real generics to interface{}, but the number of places it comes up is minimal enough that it's no more than a minor annoyance.
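Here's the cache-injection example sketched out; Store, cachingStore, and the method set are invented for illustration, not our actual code (and a real version would add a mutex, since this one is not goroutine-safe):

    // The consumer declares only what it needs; anything with these
    // methods satisfies Store implicitly.
    type Store interface {
        Get(key string) ([]byte, error)
        Put(key string, val []byte) error
    }

    // cachingStore wraps any Store with a read-through in-memory cache.
    type cachingStore struct {
        backing Store
        cache   map[string][]byte
    }

    func NewCachingStore(backing Store) Store {
        return &cachingStore{backing: backing, cache: make(map[string][]byte)}
    }

    func (c *cachingStore) Get(key string) ([]byte, error) {
        if v, ok := c.cache[key]; ok {
            return v, nil
        }
        v, err := c.backing.Get(key)
        if err == nil {
            c.cache[key] = v
        }
        return v, err
    }

    func (c *cachingStore) Put(key string, val []byte) error {
        c.cache[key] = val // write-through
        return c.backing.Put(key, val)
    }

Since the interface lives with the consumer, the backing store's package never has to know the cache exists.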
I can't speak to the issues of package management - we dropped compatibility with Go's native package structure fairly early on and went all in with blaze/Bazel (http://bazel.io) to coordinate builds and dependencies and whatnot, and haven't had reason to try modules yet.
> Simple scripts, bash until they need their first 'if' or flag or loop, then Go too.
If you don't mind, can you give a little insight on what this looks like in practice? I'm not sure how to use a compiled language as a script. I've played with executing go as a script using a shebang hack, but I somehow don't think this is how others are doing it.
For reference, the shebang hack I was using looked like this:
    //usr/bin/env go run $0 $@; exit $?

    package main

    import "fmt"

    func main() {
        fmt.Println("i am a script")
    }
It looks more like Go code than a bash script - the tradeoff we settled on is that pretty much as soon as you need to add any sort of logic it's no longer really a simple "script" and you _know_ it's just going to grow into a monstrosity. Better to use a language with real functions, real error handling, that you can actually unit test, etc. In that sense, I guess you could say we write lots of little tools moreso than we write scripts.
For something of this form, if the standard library has the functionality, we use it - os.Mkdir() instead of `mkdir` and so on. But to simplify shelling out, we have a little library that includes the interface
    type Runner interface {
        Execute(dir, name string, args ...string) (*CmdOutput, error)
    }
so it's easy enough to call miscellaneous programs and get the exit code / stdout / stderr. It also supports printing and executing a command, etc.
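In case it's useful to anyone, a plausible implementation of that interface using os/exec might look like this; CmdOutput's fields are my guess, not the parent's actual type, and note that ExitError.ExitCode() only exists as of Go 1.12:

    import (
        "bytes"
        "os/exec"
    )

    // CmdOutput's fields are a guess at what the parent's type holds.
    type CmdOutput struct {
        ExitCode int
        Stdout   string
        Stderr   string
    }

    type execRunner struct{}

    func (execRunner) Execute(dir, name string, args ...string) (*CmdOutput, error) {
        cmd := exec.Command(name, args...)
        cmd.Dir = dir
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr

        err := cmd.Run()
        code := 0
        if ee, ok := err.(*exec.ExitError); ok {
            code = ee.ExitCode() // Go 1.12+; earlier you'd inspect syscall.WaitStatus
            err = nil            // a nonzero exit is a result, not a failure to run
        }
        return &CmdOutput{ExitCode: code, Stdout: stdout.String(), Stderr: stderr.String()}, err
    }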
Iteratively executing a program of this form looks like `go run whatever --flag=value`, though your shebang hack looks like it'd also do nicely.
Anecdotally (n = 10 or so) every Go codebase I've worked on has devolved into a trash fire, so no. Turns out collective idiots can write horrible systems in any language.
If there were such a need, the languages without them would still have a major market, which isn't the case.
Naturally those of us coding since the early days have experience with programming languages without generics support, yet a large majority eventually adopted generics.
In what context would “need” a language to not have generics? I can understand not “wanting” generics to keep it simpler, but I can’t see that as a “need”.
Personally, in any typed language, I want generics, not having them feels very limiting.
I really don't know why you got downvoted. The fact that Go was meant to force "average" programmers to produce maintainable code is an extremely important push for the language.
I do think that generics are needed if Go wants to become more useful in contexts other than network middleware or data plumbing, but I'm also pretty sure that adding them will help cripple a lot of codebases in the very short term.
I don't think Go pushes maintainable code. It's simple. That's it. Most of the code bases I see have no layers. As long as you write once and forget, you're fine. After that, Go's lack of structure, with functions for structs just floating freely in the source file, makes things hard to manage. The community's habit of preferring 1 or 2 large files per package makes things worse. And the lack of explicit interface implementations makes it hard to jump from a function on a concrete type to the interface definition it satisfies.
Is there a definitive source for the "average" programmers target? I'd always considered it a tongue-in-cheek response to architecture astronauts who sneer at Go's simplicity.
“The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
I'm a relative expert in some domains, but I just find the fancy languages tiresome. I even find so-called experts who find it condescending to use a straightforward language tiresome. Seriously - these so-called experts never actually deliver any actual product. They just loop around writing line noise that is unreadable.
If you code with others, look into the idea of write-only languages. I think you'll find some of the ideas behind go make some more sense if you understand what they are working to avoid.
I don't hate generics, but don't think they are critical to go's success.
Treating users like idiots is a common engineering practice, not only in programming. Assuming users only make intelligent choices is just unrealistic. It is not that you underestimate any particular person.
You may argue about whether Go finds the right compromise between giving users enough power and not letting them shoot themselves in the foot, but the general premise seems very sound to me.
I think that often people judge programming languages, or frameworks, based on how it would work for them when working on a medium sized one man project.
I believe the designers of Go are onto something.
It's not just the code, but how the code will evolve over time, after a few years of having dozens or perhaps hundreds of programmers modifying it.[1]
In a large shared code base the following dynamic plays out.
* A programmer is assigned to fix a bug, or add a feature.
* His reward for that task is limited to whether he succeeded in achieving that task.
* Even if he is rewarded for improving the codebase overall (refactoring), this introduces much more risk than simply making his changes and getting out.
Basically, everyone wants to get in, make their change, and get out. Now iterate this a few thousand times.
Really it's just the tragedy of the commons, and codebases rot because of it.
The problem is magnified if the language encourages lots of complicated abstractions and meta-programming, because abstractions are difficult to evolve incrementally.[2]
Honestly, I just see Go as a reaction against C++ in this regard.
I still believe, however, that the Go team went too far in leaving out features, especially in the areas of generics and error handling. I mean, both omissions create more code; so what if it's simple code?
Then the problem becomes that you just have more lines of code to maintain and test.
[1] And for twenty years I was a freelance consultant brought in to untangle big pile of mud codebases for projects in crisis.
[2] See the concepts of assimilation vs. accommodation in psychology. Once your abstractions need to accommodate an inconvenient new fact, then it takes a lot of disruptive effort to fix the problem.
Have you checked that he actually says this in the talk that you're linking to? I've seen this quotation repeated all over the place, but I've never been able to source it. It sounds more like a hostile paraphrase than a word-for-word transcription.
Fair enough. I had not found any other citations that gave the timestamp.
I wasn't suggesting that you'd deliberately make up the quote, but you can find it all over the place without a proper citation, so I thought it was possibly apocryphal.
I really really hope Go 2 can do something about `context`. Context is the biggest hidden wart of Go. We need the capabilities of context in different packaging.
This doesn't seem to be a popular opinion, but I agree. It's such a pervasive functionality in concurrent programs that it really should be a built-in aspect of a goroutine.
The problem with context isn't necessarily the interface, it is that it is "viral". If you need context somewhere along a call chain, it infects more than just the place you need it — you almost always have to add it upwards (so the needed site gets the right context) and downwards (if you want to support cancellation/timeout, which is usually the point of introducing a context).
Context's virality also applies to backwards compatibility. There have been discussions of adding context to io.Reader and io.Writer, for example, but there's no elegant way to retrofit them without creating new interfaces that support a context argument. This problem applies to any API; you may not expect your API to require a context today, but it might need one tomorrow, which would require a breaking API change. Given that it's impossible to predict, you might want to pre-emptively add context as an argument to all public APIs, just to be safe. Not good design.
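A tiny sketch of the virality (all names here are hypothetical): only store needs cancellation, but the ctx parameter still crawls through every signature above it.

    import (
        "context"
        "time"
    )

    // Only store actually uses the ctx, but Handle and process must
    // now accept and forward one so that it can reach store at all.
    func Handle(ctx context.Context, req string) error { return process(ctx, req) }

    func process(ctx context.Context, req string) error { return store(ctx, req) }

    func store(ctx context.Context, req string) error {
        select {
        case <-ctx.Done():
            return ctx.Err() // cancelled or timed out somewhere upstream
        case <-time.After(10 * time.Millisecond): // stand-in for real I/O
            return nil
        }
    }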
Cancellation/timeout is arguably so core to the language that it should be an implicit part of the runtime, just like goroutines are. It would be trivial for the runtime to associate a context with a goroutine, and have functions for getting the "current" context at any given time. (Erlang got this right, by allowing processes to be outright killed, but it's probably too late to redesign Go to allow that.)
(I'm ignoring the key/value system that comes with the Context interface, because I think it's less core. It certainly seems less used than the other mechanisms. For example, Kubernetes, one of the largest Go codebases, doesn't use it.)
I have written a 'contextio' package that may be useful to you. It provides io.Reader and io.Writer wrappers that handle context cancellation. This allows you to transparently add optional context-cancellation awareness to routines that work with I/O, without injecting context as an argument.
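Not having read your package, I'd guess the wrapper is roughly this shape; one caveat worth noting with this approach is that it can only check for cancellation between Reads, not interrupt one already in flight:

    import (
        "context"
        "io"
    )

    type ctxReader struct {
        ctx context.Context
        r   io.Reader
    }

    // Read fails fast once the context is done; the underlying Read
    // itself is still uninterruptible once started.
    func (cr ctxReader) Read(p []byte) (n int, err error) {
        if err := cr.ctx.Err(); err != nil {
            return 0, err
        }
        return cr.r.Read(p)
    }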
Forgive me since I've never used Go, but this sounds similar to having to add the async keyword to methods in C# all the way up the stack once you attempt to use an async method, as well as having to pass CancellationTokens down the stack to support cancellation. I've noticed it pollutes code with a lot of ceremony that I wish had been added to the runtime itself. Is this what you're talking about?
Yeah, context is basically a cancellation token with the same downsides and a bunch of unrelated features piled on (because it came from a bunch of people needing to work around limitations getting together and deciding to put all their hacks in one place, but I digress). But from a certain perspective all functions in go are async by default, so we get to dodge that one.
Absolutely. Contexts are similar to your CancellationToken; a context contains a channel you can listen to, just like CancellationToken's WaitHandle. They're slightly simpler in that I believe CancellationToken supports registering callbacks, which contexts don't.
Go doesn't actually have async support in the sense of promises/futures (as seen in C#, JavaScript, Rust, etc.). The entire language is built around the idea that I/O is synchronous, and concurrency is achieved by spawning more goroutines. So if you have two things that you want to run simultaneously, you spawn two goroutines and wait for them. (Internally, the Go runtime uses asynchronous I/O to achieve concurrency.)
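The "two things simultaneously" case from above, spelled out (fetchA/fetchB are hypothetical, and this needs the sync package):

    var wg sync.WaitGroup
    wg.Add(2)
    go func() { defer wg.Done(); fetchA() }()
    go func() { defer wg.Done(); fetchB() }()
    wg.Wait() // both done; results typically come back over channels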
I usually phrase this part of Go as: no async/await, it only has threads. But no thread handles. Everything is potentially parallel under the hood and all coordination requires external constructs like WaitGroups / chans / etc.
async/await has major complications like changing call syntax through the whole chain, so I actually prefer it this way. the lack of thread handle objects (e.g. a return value from `go func()`) is strange IMO tho.
The main advantage of the async/await model is that it's just syntactic sugar on top of CPS, so you can bolt it onto any language that is capable of handling callbacks (even C!). For a good example of that, consider WinRT - you can write an async method there in C#, the task that it returns can pass through a bunch of C++ frames, and land up in JS code that can then await it - and all that is handled via a common ABI that is ultimately defined in C terms.
Conversely, goroutines require Go stack to be rather different from native stack, which complicates FFI.
it's true that it's "just syntactic sugar", but in most languages it has call-site contracts that are either part of the signature (`await x()` to unpack the future/coroutine) or implicit (`x()` in an event loop host). and to change something deep in the stack means changing every call site everywhere.
that's a huge burden on a library (and thus the entire language ecosystem). it splits the world.
to avoid that, you basically need language-level support, so everything is awaitable... at which point you're at the same place as threads (but now they're green), where you cannot know if your callees change their behavior. which is both a blessing and a curse.
---
tl;dr yes but no. do you want your callee's parallelism to be invisible? you can't have both. (afaik. I'd love to see a counter-example if you know of one. there was one very-special-case language that made parallelism a thing you did to code rather than the code doing, but I can't find it at the moment. it only worked on image convolutions at the time.)
Right. You can have callee's parallelism be invisible - but only by adding it to your language (can't be done as a library) and making it be similarly invisibly parallel. And even that only works so long as everybody adopts the same system - goroutines don't play well with Ruby fibers, for example.
With the async/await model, you can immediately use it with any language that has callbacks, and then the languages can gradually add async/await on their own schedules.
I would dare say that the async/await model has proven far more successful in practice. It came to the table later than green threads etc (if you consider syntactic sugar a part of it - CPS itself was around for much longer, of course). And yet it was picked up surprisingly fast - and I think the way in which you can bolt it onto the existing language is precisely why. Conversely, every language and VM that has some form of green threads, seems to insist on doing their own that aren't compatible with anything else out there - and then you get insular ecosystems and FFI hell.
Maybe if some OS offered green threads as a primitive, it would have been different. But then again, Win32 has had fibers since mid-90s, and nobody picked that up.
thread interop may be a major contributor to "far more successful in practice", because yea - I agree, it's far more common. I don't know that side of things all that well :|
having spent a fairly significant amount of time in an event-loop system with async/await tho (python coroutines): I don't know if that's a good thing. getting your head around "thou must never block the event loop, lest ye unceremoniously bring the system to its knees" / never using locks / never confusing your loops / etc requires both significant education and significant care, and when you get it wrong or performance degrades it can be truly horrific to debug.[1] it's nice that it tends to have fewer data races though.
green thread systems though are trivial - your stack looks normal, your tracing tools look normal, your strategies are basically identical (since they tend to context switch at relatively fine-grained points, so your only real concern is heavy func-call-less computation, which is very rare and easily identified). since I don't have to deal with thread interop[2] I'll take that every single time over async/await.
---
[1] I've helped teams which had already spent weeks or months failing to make progress, only to discover what would be an obvious "oops" or compile-time error elsewhere. some languages do this much better, from what I've seen, but CPS javascript and python coroutines and other ones I've used have been awful experiences. basically, again, language-level support is needed, so I broadly still disagree on "just syntactic sugar" for it to be even remotely acceptable.
[2] ....though cgo has been a nightmare. nearly everyone uses it wrong. so I 100% believe that I could switch sides on this in time :)
However, the reason we don't use that more is partially because of the viral nature of context - it came late in the kube lifecycle, so we didn't ensure it was everywhere, and now it's a lot harder to wire in (clients have been iterated on for a while).
I have a closed PR from 2015 to kube that added per request ID tracking that we closed as “wait until we have context everywhere” and we’re still waiting.
I agree heartily that context is viral in APIs, but would argue that that's essential to the nature of context. Accordingly, implicit association of context with a goroutine would introduce a complementary API-virality issue: you now need to worry about whether anything above or below you starts to delegate its work to separate goroutines.
Not sure what you mean here. How does an implicit context change any semantics? You would still be able to override which context is given to goroutines you spawn. As a developer, you'd have to be aware of the implicitness, that's the only difference.
I agree it doesn't change the semantics, and that you can express the same set of programs (given that you still let people explicitly handle contexts, send them over channels, etc). I just mean to highlight that removing context-usage information from go function signatures does not mean that usage or non-usage of context isn't part of a function's API—it's just now global state, and needs to consider its interactions with everything else in the same routine (instead of everything lexically in scope).
Because I think discussing actual solutions is better than just complaining, this is my current favorite design document to address context:
Go Context Scoping [1] by Eyal Posener. It even has a working PoC implementation [2], which is pretty ergonomic and could become even more so with language integration.
I think this concept solves most of the problems we currently face with context. As a consequence of making context available per goroutine, it basically becomes an implementation of goroutine-local storage. But this is more because context.WithValue exists in the first place than because of this proposal. In fact, context has effectively become the de-facto GLS anyway, except it makes everyone's code ugly to do it.
Not making goroutines values, like Ada Tasks, is what led to awkward solutions shouldered by the developer, such as "context". Too many times Go developers were told to solve issues in user-land; this is the consequence of that.
It doubles the surface area of every library that deals with anything related to IO, and forces middle libraries that don't and shouldn't care about context to double their surface area just to support connecting their consumers to their upstream providers. "not ideal" is an understatement.
Coming from languages that lack such a «convention», I quite like Context. Trying to implement something similar in Java or even Node is a giant pain.
I really like the humility in this part of the statement:
> After almost 10 years of exposure, we have learned a lot about the language and libraries that we didn’t know in the beginning, and that was only possible through feedback from the Go community.
It's so tempting to hold one's project back until it seems perfect. And then, even worse, to defend it as perfect in the face of real-world feedback. I really appreciate it when smart people do their best, but in full recognition that a lot of things will be learned once real use happens.
Well for the first round they are looking at:
1. Allowing generalized Unicode identifiers. That is hardly likely to break anything except possibly some crazy edge cases that don't happen in real code.
2. Binary integer literals. (unlikely to break things)
3. Allowing the separation of groups of digits in a number with _, like 1_000_000. (unlikely to break anything)
4. Permitting signed integers as shift counts. (no need to cast an int to a uint to use it in a shift expression; this won't break any existing code)
So at least this first round is quite unlikely to break anything in real use.
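Put together, the four proposals would look roughly like this in code. This is my reading of the proposal texts, not syntax that compiles today:

    name := "naïve"    // proposal 1: generalized Unicode identifiers
    n := 0b1010        // proposal 2: binary integer literal (== 10)
    m := 1_000_000     // proposal 3: underscore digit separators
    s := 3             // an int, not a uint
    x := n << s        // proposal 4: signed shift count, no uint conversion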
For the first round they are testing the Go 2 selection process by applying it to proposed changes for 1.13, which limits it to non-breaking proposals. It won't get interesting until they start selecting breaking proposals.
> It won't get interesting until they start selecting breaking proposals.
From what I see in the past couple of decades in popular languages, is there really a justification for breaking changes, from the POV of project maintainers?
.NET 2.0 had breaking changes, and it was absolutely justified. The ecosystem would not be where it is now without it. (.NET 2.0 introduced reified generics, which required a major overhaul of the internals, with quite a few breaking changes.)
That's a good question because they ostensibly do it to get more users aboard, but in most cases, the result is the opposite (e.g., the Python 2/3 disaster).
The Python 2/3 switch was indeed a disaster, but in terms of users gained and lost, not so much: the number of Python users has skyrocketed since 2007 despite the breakage nightmare.
I was actually thinking of the POV of people writing in the language, not the maintainers of the project which is the language. I guess my point is that those two groups of people have different incentives.
I would like to see more Unicode operators, at least as options. It's crazy that we still use * for × in 2018. Yes, I know that most US keyboards don't have that symbol but that's a solvable problem. I use an international layout on my Linux systems and can type it easily.
Also, I prefer using ' for the thousands separator and was happy when C++ adopted it. It's less visually intrusive, especially with variable width fonts, and some calculators even use apostrophe. Also, in identifiers, the underscore has semantic meaning: foo_bar is different from foobar. But 1'234'567 is meant to be identical to 1234567, so underscore isn't the best choice.
> It's crazy that we still use * for × in 2018. Yes, I know that most US keyboards don't have that symbol but that's a solvable problem.
I see the appeal but there are two problems with this: “solvable” is not the same as “easy”, and that similarly also wants fonts which make × more distinct from x. In both cases that's something which is perhaps approachable for dedicated developers but it seems likely to turn newcomers off of the language far more than deliver any real benefit.
> the underscore has semantic meaning: foo_bar is different from foobar. But 1'234'567 is meant to be identical to 1234567, so underscore isn't the best choice.
It's kind of odd to argue that inconsistency counts against use of the underscore but then argue for adding another distinct use for the apostrophe. That would require care for every tool which works with the language from compilers to highlighters, which seems unlikely to be worth the hassle since, unlike mathematical symbols, there's not much precedent for that convention.
Prior to some programming languages adopting it, I don't think there was precedent for using underscore as a thousands separator, so I don't consider it a strong precedent. The first language I recall using it was Ada. But I understand that using apostrophe has problems that would likely make its adoption impossible for an existing language. I was actually surprised when C++ adopted it. Interestingly, there is a locale that uses apostrophe for the thousands separator: de_CH (Swiss German). Try this in bash:
$ LC_NUMERIC=de_CH df --block-size=\'1KB
I don't know if there is a locale that uses underscore, I didn't find one when I looked a while back. Perhaps there should be...
Yeah, wikipedia cites some unspecified maritime usage of _ but it's definitely not common. It seems like most of the standards outside of programming languages have been moving towards spaces. Unfortunately there doesn't seem to be an easy way to search e.g. https://lh.2xlibre.net/values/thousands_sep/ but some lazy JavaScript shows no underscores and several flavors of whitespace:
It's interesting though, Racket certainly doesn't shy away from it. It allows you to use the lambda symbol as a replacement for the `lambda` keyword. (The keyword still works however)
Pretty sure Go hit 1.0 in 2012 and Swift in 2014. In any case, I think the age is not a significant indicator or driver of stability (at least for these young languages), but rather the community's commitment toward stability. The Swift community definitely seems to value stability less than the Go community.
Swift can get away with it because the majority of its users are using Xcode which has excellent tools for migrating between versions of Swift, and also interop with Objective-C code. Plus, the current compiler can target a limited number of older versions’ syntax but allowing use of new APIs for some degree of forwards compatibility in large code bases.
Whereas I doubt there’s one canonical Go dev environment common to the majority of its users to make easy migrating large code bases and providing simple syntax adjustments.
There’s a tool called fix; it’s already part of the standard Go distribution. In the past (mostly pre-1.0, IIRC), it was used to apply changes to one’s codebase when migrating between Go releases that introduced incompatible changes.
Can it be used in editors, say VS Code, to provide fix-its or suggestions for improving code on a line-by-line basis? If so, that sounds great. A lot of languages are missing this in their standard distributions and devs must rely on third-party offerings.
The standard Go CLI toolchain comes with `go fix`, which was used to update code for breaking changes or style changes in the time before the 1.0 release. If anything, I suspect the Swift team was partially inspired by this pattern.
I doubt it. Apple's refactoring tools in Xcode go way back, thanks to the introduction of Carbon; the Intel transition; synthesised properties and automatic reference counting in Objective-C 2; and the extremely regular, breaking changes in UIKit.
> #19113 Permit signed integers as shift counts: An estimated 38% of all non-constant shifts require an (artificial) uint conversion (see the issue for a more detailed break-down). This proposal will clean up a lot of code, get shift expressions better in sync with index expressions and the built-in functions cap and len. It will mostly have a positive impact on code. The implementation is well understood.
The proposal, as far as I can make out, says to allow signed integers for shifts but panic if they are negative.
This seems like a step backwards to me: it moves a check the compiler used to force on you to runtime.
Personally I'd expect a negative shift to shift the other way, but that doesn't seem to be a popular option with the team.
There must be some very angry people downvoting in this comment section today. Go is very opinionated, and it's quite obvious that its original design being so radical (no classes, no inheritance, no generics, no macros) was only possible because it was designed by a few very experienced people with a very specific goal in mind.
I'm also worried about how being "community driven" will change the philosophy of the language.
I think "community driven" means less than you fear. I think it means that community feedback is used to push the language in directions that scratch some of the major community itches. That seems perfectly reasonable to me - more reasonable than making changes in a vacuum, in fact.
But I don't think that they're going to let the community run wild and completely change the character of Go. I think they're looking for wins that matter to users, but wins that are possible within the framework of what Go is.
HN has a good rule [1] about comments: "Please don't post shallow dismissals, especially of other people's work." The above comment is being downvoted because it just voices dissent, not a real, substantive opinion. I can guess what their intent is, but that's hardly a basis for good discussion.
That made me remember part of an interview done to Dennis Ritchie back in 2000:
When I read commentary about suggestions for where C should go,
I often think back and give thanks that it wasn't developed
under the advice of a worldwide crowd. C is peculiar in a lot
of ways, but it, like many other successful things, has a
certain unity of approach that stems from development in a
small group.
IMO: very much still half broken. Calling functions is still experimental[1] and in my experience has yet to work even once, there's no display formatting options (afaik) so you are often looking at a chunk of bytes instead of something useful (after diving several times more layers deep than other debuggers require), and you pretty often can't view memory that's in scope at another call stack location without going to that location. All of which gives you a pretty crippled experience, especially as it makes conditional breakpoints extremely limited in use.
That said, it's mostly just half, before it was like 3/4 or worse. It has improved in stability (I haven't had it randomly disconnect at all since late Go 1.9 days, but I haven't pressed it hard at all either) and GoLand's integration works well and is mostly fast. It just can't do anything except set breakpoints and view memory.
If you use GoLand 2018 it's pretty seamless; there are a few things missing, like being able to get pointer addresses and view values in hex, and it's a little laggy compared to VS, but not bad.
Vague estimate is that there are 18 million programmers in the world[1] and that 4% of them use Go[2], so 0.7 million would be a starting point guess as the total number.
Note that these were the first two Google search results I found for "number of programmers in the world" and "percentage of programmers in different languages" so... being off by a factor of 10 or more is likely.
These numbers seem pretty accurate, and I wouldn't expect that they're off by anywhere like a factor of 10.
Think about it: the number of programmers in the world is certainly not off by a factor of 10 - there's no way there are 180 million programmers in the world with only 18 million of them visible.
Similarly, there's no way that 43% of programmers in the world are Go programmers, that's again quite clearly off.
I think an estimate of 700k Go programmers makes a lot of sense, maybe with a factor of x2. I would very strongly doubt that there are more than 2 million Go programmers in the world.
For their purposes, the people who would care are those who are currently maintaining a significant Go codebase.
So even if you did Go full-time for 5 years, but then switched to Rust and now work full-time in Rust, you would not mind at all if the language added some backward-incompatible changes.
In fact you'd probably welcome them, since often you moved away because of features the language was missing.
Not sure, but I wouldn't be surprised. There seem to be a lot of Go developers in China and elsewhere who don't really participate in the English-speaking Go community.
I don't really see a reason why there'd be a large iceberg of Go developers in China. There's no reason why programmers in China would use Go in higher proportion than elsewhere in the world.
Perhaps those languages also enjoy wider adoption in China than elsewhere in the world? Similarly, the software engineering industry has certainly advanced more rapidly in China than elsewhere in the world (as an artifact of China's rapid economic development), so their language adoption is likely skewed toward more recent languages (like Go).
These are good points, but the bottom line is "we don't really know". So I'd caution against assuming the number is very high just because it fits our hopes.
It seems Go wants to have a more open development process, but it looks like all the decisions are ultimately under the purview of the "Go team". Does the Go team include any community members? Or is this "open process" still essentially up to whims of a small group working at Google?
I am torn between investing the next year in Go or Rust. I want to do some cool systems-level programming. Both seem to be very good at it. Go has the additional advantage of being older (and maybe wiser).
What projects did you have in mind? I think the answer to your question depends on what you mean by "systems level programming" and what you plan on doing/trying.
This blog post doesn't answer likely the biggest of all questions: Will there be breaking changes?
If so, how will those be handled?
"As a rule of thumb, we should aim to help at least ten times as many developers as we hurt with a given change" sounds like there might be breaking changes, but on the other hand Robert still talks about including new features in the Go 1 compatibility guarantee.
I'd love if the compiler would stay backwards compatible and packages / modules could be pinned to a certain version, either during import or in the package / module itself. Then one could write Go 2 code but still use packages which are not yet updated to Go 2.
Personally I think that making breaking changes is a good idea, as it allows cleaning up previous mistakes. However, Go should at all costs avoid incompatibilities like those between Python 2 and 3.
It is very clear that there are breaking changes for Go 2.
However, they are testing out their new proposal-review system using non-breaking changes included in Go 1.
I'm learning Go. Just a naive question: why does Go put the variable type at the end of a declaration? Is this an absolute need? No other widely used language does that, and it just feels odd to me.
C++ is the odd one out; you might have a declaration like:
int (*f)(int x);
The variable type is around the variable name. Some of it is before and some of it is after. In Go it’s simpler:
var f func(x int) int
If I want a void function, I can just leave the return type off, I don’t need to add "void":
var f func(x int)
It’s easier to write a parser for this, because you know it’s a variable declaration just by looking at the very first token and finding a certain keyword. If you write a parser for C or C++ it’s much more complicated, because you have to keep track of which identifiers name types in the scope you’re in. Generally, more modern languages like Java, C#, Go, and Rust are much easier to parse; they’re often designed to be handled by something close to an LL(1) parser, or plain recursive descent.
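To make that concrete, here’s a toy sketch of branching on a single leading token (tokens as plain strings, nothing like a real parser):

package main

import "fmt"

func parseDecl(tokens []string) string {
	// one token of lookahead is enough; no symbol table needed
	switch tokens[0] {
	case "var":
		return "variable declaration"
	case "func":
		return "function declaration"
	default:
		return "something else"
	}
}

func main() {
	fmt.Println(parseDecl([]string{"var", "x", "int"}))
}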
In C++ it's also a bit inconsistent,
int f1(int x) { return x + 5; }
std::function<int(int)> f2;
auto f3 = [](int x) -> int { return x + 5; };
You also have to invent a placeholder keyword for when you don’t spell out the type:
int x = 3;
auto x = 3;
In Go you just omit the type, and you don’t need a placeholder:
var x int = 3
var x = 3
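Inside a function you can go one step further with the short declaration form, which drops the keyword as well:

x := 3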
Other languages where the type comes after: Haskell, Python (PEP 484), ML, Rust, Pony, Nim, TypeScript, Swift.
In fact I think the way C/C++/C#/Java do it, with the type at the beginning, is actually somewhat rare.
You get used to it. Having to deal with this kind of difference between programming languages is a lesser concern in the grand scheme of things.
Like everything different, it can feel odd at first, but I actually prefer it now.
Think of it as "Joe is a Person" is more natural than "Person named Joe".
My experience is limited to C, C++, and Java, all of which did 'type variable' instead of 'variable type'. Now I see there are many others doing things very differently. Thanks.
Love how Go's one binary does http-server-login-everything-etc; it can't be simpler for deployments.
Given that Go is already pretty well specified (and, perhaps more importantly, has 2 mature implementations), it's hard to see what advantages having it formally standardised would bring.
With Go 2, they are moving to a community run project. ISO is a process for that. I guess, why reinvent the wheel unless they think they can do substantially better.
I haven't yet. I hope a good ecosystem develops around them. My biggest gripe is that there are just too many approaches in Golang when it comes to dependency management.
Definitely true. The hands-off approach they took seemed strange to me, as others made that one of the core tenets of their communities early on. Since I have been in the ecosystem it's been `go get`, the `vendor` directory, the various community tools, and now modules.
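For anyone who hasn't tried modules yet, the whole manifest is a small go.mod file; a minimal sketch (module path and dependency made up for illustration):

module example.com/myapp

go 1.12

require github.com/pkg/errors v0.8.1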
When I started work in the early 80s as a COBOL analyst programmer, I encountered the ideology vs reality of GOTO. When I learned COBOL, I was taught Jackson Structured Programming: no use of GOTO at all, even for exception handling. Fast forward to my first week at work: having done a nice JSP program for the task at hand, a senior came over with my code and had a chat, then took me to the system developers who did all the dirty low-level assembler code (for sorts and other areas in which the speedup over COBOL was huge). I was shown how my approach was far more wasteful of CPU resources and why others would not be able to maintain such code, as all the rest used GOTOs. I will admit it was nicely explained, and this was at a time when mainframes were the only computing option for business at this scale, and they were not cheap.
I will say GOTOs work well for exception handling, though it's still doable without them. As for speed, though it's less of a factor today, GOTO still translates to faster code at the level the CPU runs.
But then, things move on, and you will always have that wave of what students are taught being cutting edge compared to what is actually in use at work. It's a friction many will have encountered in one form or another, even today, be it style, design, or language/tool choice. What is the best today may well be outdated tomorrow, but you have to maintain that legacy investment, and it is often too costly/risky to rewrite that legacy for something that itself could be legacy the next day.
So as for harmful - well, they can spaghetti up code if used badly, yet that's true of many approaches, and if they are that bad, why do CPUs still have JUMP instructions, you could counter-argue.
> why do CPUs still have JUMP instructions, you could counter-argue
Machine code is linear and executed one instruction at a time. It does not have the concept of blocks, so there is no way to have structured programming. Jumps are the only way to create a loop or conditional.
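You can even see the correspondence in Go itself, since it kept goto; this is the jump-based shape that a structured for loop lowers to (illustrative only, not recommended style):

package main

import "fmt"

func main() {
	i := 0
loop:
	if i < 3 { // the condition guarding the backwards jump
		fmt.Println(i)
		i++
		goto loop // this jump is what makes it a loop
	}
}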
Are there normal coding patterns that are much faster with explicit gotos? Modern compilers seem to do a pretty good job of converting normal (goto-less) code into efficient binaries.
E.g., the simple switch statement has several possible machine-code implementations that compilers will switch (heh) between, depending on the characteristics of the cases.
In bytecode interpreter VMs, one often encounters the "computed goto" [1] pattern in use to dispatch opcodes. This is one that tends to be a little faster than a switch statement, enough to matter in the dispatch inner loop.
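For contrast, here's roughly what the switch-dispatch alternative looks like in Go (which has no computed goto); the opcodes and toy stack machine are made up for illustration:

package main

import "fmt"

const (
	opPush = iota // push the following word onto the stack
	opAdd         // pop two values, push their sum
	opHalt        // stop; result is the top of the stack
)

func run(code []int) int {
	var stack []int
	for pc := 0; pc < len(code); pc++ {
		switch code[pc] { // one shared indirect branch for every opcode
		case opPush:
			pc++
			stack = append(stack, code[pc])
		case opAdd:
			a, b := stack[len(stack)-1], stack[len(stack)-2]
			stack = append(stack[:len(stack)-2], a+b)
		case opHalt:
			return stack[len(stack)-1]
		}
	}
	return 0
}

func main() {
	fmt.Println(run([]int{opPush, 2, opPush, 3, opAdd, opHalt})) // prints 5
}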
Of course, if you are going to JIT compile the bytecode, that'll usually be a lot faster. But at that point you're changing one form of low-level wizardry for another.
I always appreciate a good joke, but unfortunately this has become the top comment as of now and is not adding to the real discussion around Go 2. This is a typical case of when I use my downvote power. Great joke, but we should keep HN noise-free.
Is it really so harmful to have it at the top of the comment thread? I think HN can be focused mainly on serious discourse without being completely humorless.
I think about it all the time. But imagine leaving the page and coming back a few hours later. Now you have to sort through tons of unnecessary comments (even if funny and humorous) while you look for the quality stuff. Quality requires sacrifice and I am willing to sacrifice the humor part for quality unless you throw in a bit of humor with quality content.
A great way to solve this is to use the [-] button to collapse any comment threads that you're not interested in following, like this one. HN remembers which threads you've collapsed, so you won't see them when you come back to the page.
God forbid that one of 208 comments is a (rather funny) joke.
I don't get why some people here are so against humour. I appreciate the high standards for jokes and expectation of high signal:noise, but this was clever and topical.
It was actually Niklaus Wirth (designer of Pascal and other languages) who coined "considered harmful":
> In 1968 the Communications of the ACM published a text of mine under the title "The goto statement considered harmful", which in later years would be most frequently referenced, regrettably, however, often by authors who had seen no more of it than its title, which became a cornerstone of my fame by becoming a template: we would see all sorts of articles under the title "X considered harmful" for almost any X, including one titled "Dijkstra considered harmful".
> But what had happened? I had submitted a paper under the title "A case against the goto statement", which, in order to speed up its publication, the editor had changed into a "Letter to the Editor", and in the process he had given it a new title of his own invention! The editor was Niklaus Wirth.
I apologize for asking a question that will likely lead to a flame war regardless of your answer, but which is better? I've used Go for a while for certain apps, but as a primarily functional programmer I find my way of thinking often clashes with the language (and I also don't like the verbosity).
So, do you do functional programming, and is Rust a better (with all the subjectivity that word implies) language than Go?
This 'question' is bound to end up in flame wars, but here is an honest and unbiased answer from someone who has looked at nearly every language on the planet (a hobby) and thinks that both languages are a bit crappy from a general programming language perspective but quite usable in practice.
Go is good to get things done quickly. It has a vast ecosystem and super-fast compilation. It's like a modern BASIC, but more performant and fun to use. It's fast enough for most everyday tasks except for real-time audio processing and high end gaming. It's good for writing CLI tools and server backend software.
Rust is good for writing libraries and CLI tools that replace existing C or C++ solutions with inherently safer versions and when speed matters a lot, though not as much as what would make you use Fortran or hand-optimized C. It is not suitable for high integrity systems and solid engineering where you'd normally use Ada/Spark, because of low maintainability, an unprofessional 'language aficionado' user base converted from C++, and being a fast moving target. Maybe later, though.
Both are fundamentally different, neither is "better".
They are both good at slightly different things.
If your desire is to accept bytes over the network and spit bytes back over the network, Go is going to be a pretty solid choice, because that was very much the focus of its design.
However, if you want to build an application for a hard realtime environment, and you either lack the space for a runtime or can't handle GC pauses, then maybe Rust is a better choice.
From a language perspective Go is a simple language and Rust is a complex language. The two have different tradeoffs here. Go is easy to learn, has limited pitfalls but also lacks in the power department if you need metaprogramming and abstractions to model your problem.
Rust, however, excels in that role due to its powerful type system and hygienic macros. The tradeoff is very apparent once you try to use the two languages, though: Rust is -far- more difficult to climb the initial learning curve of, and it has a much higher ceiling.
Fundamentally, you will probably find Go is better at replacing dynamic languages, though there are many cases where C/C++ was used where its bare-metal nature isn't needed and Go is a very suitable replacement. Go, however, has some difficulty replacing certain usages of C/C++. Namely, it can't easily be used to create a shared library, because of its runtime and I/O system.
That said, if you wanted to be able to replace any and all C/C++ code, Rust would be a better choice, as it can do anything C/C++ can with no downsides: embedded systems, shared libraries, bare-metal access without worrying about a green-threaded execution model.
There are many other things to consider too but these are some of the important ones from someone who got into coding doing C and embedded, has since learnt both Go (and used professionally) and Rust (and used for side projects).
Subjectively I think Go is a better choice when it can do the job as it's easier and less brain intensive to just do the thing. Rust however is more "fun" to program in as it's a less mechanical endeavour and also can solve some problems you can't with Go.
My rule of thumb is use Go by default, but if it makes sense to trade a lot of developer time for extreme performance or extreme type safety, use Rust. As with all rules of thumb, there's a lot more nuance than this, but I think it captures the big idea well enough.
I disagree that using rust means trading a lot of developer time. I'm as comfortable with rust as I am go, and I develop equally fast in either language. I would even say faster in Rust because of the type system.
That's quite a feat. According to the Rust developer survey, it takes many people a month or more to feel productive in Rust^1 at all, much less as productive as with Go. I've been picking up Rust occasionally for 4-5 years now and I'm still not particularly productive, and far less productive than I am in Go (and I come from a C++ background, so it's not like I'm a stranger to thinking about memory management). I suspect that you're an outlier (I may be also, but my point doesn't hinge on that).
^1: Most people report being productive with Go in a day or two
Well it took me longer to get comfortable with Rust than Go, and I also had to learn actix (actor style framework in rust) to do the same high concurrency programming. But once the time investment is put in, I definitely consider Rust to be the more productive language.
Once async/await stabilizes and the rest of the ecosystem catches up and becomes a little bit more ergonomic to use, I would say Rust will be in a good position.
As a Haskell & Erlang person who currently uses Rust as my primary language at work:
> do you do functional programming, and is Rust a better (with all the subjectivity that word implies) language than Go?
Yes, yes. Obviously the Rust ecosystem has fewer mature libraries, but its type system and error handling make Go look like a toy.
Go can be okay for small one-off tools, but its safety guarantees are not far ahead of scripting languages and I think it should be considered as such.
Rust has much better support for a functional style. On the other hand, not having a GC in Rust means that dealing with closures can in some cases get quite complicated, whereas closures in Go work exactly as you'd expect. (Although to be fair, if you aren't doing any mutation, closures in Rust are pretty easy to deal with.)
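A minimal example of what "work exactly as you'd expect" means in Go: capture and mutation, with no lifetime bookkeeping anywhere:

package main

import "fmt"

func counter() func() int {
	n := 0 // captured by the closure; lives as long as the closure does
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()
	fmt.Println(next(), next(), next()) // 1 2 3
}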
This is like asking for a language war, which is the last thing we want on a language thread. You shouldn't have any problem finding already-existing Go vs. Rust discussion either in the search bar here or on Google.
I would love to have type classes à la Haskell (implicits with parametric polymorphism, which are dead simple and well understood) and universal pattern matching everywhere, but this is, of course, just a dream.
I would love to have ML/Scala-style syntax for curried functions and function definition via pattern matching with guards, which, it seems, is also out of the question.
Actually, the more of ML a strict language takes in, the better.
What is really funny is that Bell Labs did a lot of ML research, especially on the stdlib, but the Go team is ignoring everything that is not of the C flavour. Pity.
Again, ML is absolutely wonderful, and type classes are the single biggest innovation since Smalltalk.
It is better to lean towards ML than to head towards JavaScript.
I'd be surprised if something similar to Python 2/3 happens. The Go team have been very explicit in saying that all Go 1 code must continue to compile, and that transitioning to Go 2 needs to be as seamless as possible (most likely using tooling to automatically migrate code, a la go fix from the early days).
Google had Guido working there for a very long time; there's probably a lot of institutional memory built up around the 2->3 transition, and likely a lot of lessons learned, and a strong desire not to repeat the experience.
How can you have an "explicit requirement" not to end up like Perl 6? Nobody planned to "end up like Perl 6", it's just something that happens when your new, backwards-incompatible version of the language doesn't catch on.
Also, sounds like Go 2 is going to make backwards-incompatible language changes. How is that different from Python 3, or even Perl 6?
Probably not. We can add things and we can opt-in remove things (e.g. a user who says they want v1.14 of the language gets new language features and loses some old ones), but we can't change the semantics of existing programs. That is, if a program compiles with two different language versions, that program should mean the same in both versions.
So are Chicken Scheme and GNU Forth. The problem is that the Perl ecosystem and community are a shadow of what they used to be in the heyday of Perl 5. I used to write a lot of Perl, but I wouldn't really recommend it to anybody these days, be it version 5 or 6.
"Shadow of what they used to be" is a nice gig if you can get it when the "used to" part was basically the scripting language.
Perl is still in top 20 programming language lists. That's arguably "below the fold" of search results, but hey, so are Lua and Haskell.
The Perl Community... I don't know exactly what you mean. I might believe that size in terms of active work on a growing number of modules has dropped; but I don't believe the quality of participation has. Even though I haven't written a line of production Perl in well over a decade I still like checking in every so often to see what they're talking about.
And for those for whom all you know about Perl 6 is its delayed arrival and related woes, I can guarantee there's more interesting things to be seen.
By the time the confusion about Perl 6 being DOA subsided (which still hasn't happened for some enthusiasts), Perl 5 was seriously damaged, and it has never recovered.
Python 3 took a lot of time to take over but it seems that they're finally leaving Python 2 behind and the language is still extremely popular.
I would love to completely avoid it, and I do when I can, but it has started to creep into every job out there now and especially in areas I spend a lot of time. So just avoiding it is not possible.
I maintain it has been shoved down the industry's throat for no good reason other than Google. It does nothing better than any of the existing mainstream languages, and in many cases it is a large step backwards.
So no, I will not stop railing on Go and my hope it goes away sooner rather than later. Nor will I stop railing on how I think macOS is a cancer in the Unix ecosystem and also needs to die, how JavaScript deserves to be relegated to the trash heap, how Android has fucked the Java-ecosystem, and how Chrome has ruined the internet.
I don't like Go either, but I'm building my entire product on it. The reason is that I need multiple services with a small memory footprint, which compile to binaries, all of which serve the web with either HTML or REST. As far as I can see there is no alternative. Java, which I've used for 15 years, requires a JVM, the current Alpine Docker images for Java only go up to 8, and it requires too much memory. Dlang looks interesting, but I can't find any modern resources on how to program in it. The book I have says I have to pick between two competing collection libraries that are not interchangeable. C++ is not a good choice for what I'm doing. For the same reasons C# and Erlang/Elixir are out. As a result, Go does this better than the mainstream languages.
I never fully trusted it. Relatively recent posts show that the GraalVM crashes. While it would be nice to use something like this, I have to go to war with the army I have. That means using the Go troops. Perhaps the Java regiment will be useful in the future, but right now it's still learning to navigate the new terrain of native execution.
I wanted something that stayed close to the main community. I've never heard of people having much success with Maven.
Just because the majority of them are commercial doesn't change the fact that they exist.
They are more expensive than a stack of laptops, because FOSS made the business of selling software tooling only worthwhile when targeting enterprise customers.
Those enterprise customers have developers using these compilers almost since Java exists.
I think you might say threading. Maybe? The JVM can be tuned to get similar performance. Where it fails to get similar threading performance, it has better tuning in general.
Simplicity of the language? Maybe? It's simple, but I think it's too simple. One of my issues with Go is that it's hard to tell what implements what. VSC support is a little hit or miss. The fact that I have to use an IDE, or an extension pack in Vim/Emacs, to figure this out is terrible. I like that in C++, Java, Python, C#, and TypeScript I can clearly see which things a given struct implements. It makes code navigation easier. The lack of generics makes certain solutions hard that are natural - and I mean natural as in how the majority of other languages do things. Do you want a set, i.e. a unique collection of things? Use a map (sketch below), but keep in mind that equality is at the whole-struct level, so one field being off and you have logical duplicates. You can't make structs immutable.
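Here's the map-as-set workaround, including the whole-struct-equality gotcha (type and values made up for illustration):

package main

import "fmt"

type user struct {
	id   int
	name string
}

func main() {
	set := map[user]struct{}{} // the conventional Go "set"
	set[user{1, "ann"}] = struct{}{}
	set[user{1, "Ann"}] = struct{}{} // one field differs: a logical duplicate
	fmt.Println(len(set))            // prints 2
}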
Community-led structural design? Most Go apps appear to be transaction scripts where the database connection is created at the handler and explicitly passed down, or perhaps buried in a Context. This makes testing hard: you have to mock/stub the DB interface to test business logic. If I want to introduce interfaces that wrap the DB away, I'm told by the community that I'm "overthinking things" or I'm not a true gopher.
Go does not compete with C or C++ in my mind because of GC.
The ONLY thing it does better is packaging because of its standalone native binaries. But most folks using Go are writing software for servers and those are predominantly Linux x86_64... so ¯\_(ツ)_/¯
See, I don't particularly enjoy Go, but that's because of its weak type system, which Python is no better at. Still, I understand that a lot of people enjoy Python, so good that it exists.
Java is too verbose. I'd have understood if you had said Kotlin, but Java is not that much better (yes, it has generics, but I myself prefer the way they're implemented in Rust/Swift).
I feel like your examples are not much better than Go, and in some ways may be worse (concurrency?). If you had said Rust, Swift, Scala, Kotlin... I still wouldn't want Go to die, but at least I could get behind the argument.
I like Kotlin and I write a lot of Kotlin for personal stuff, but I think it is a niche language in the backend/server space and hardly qualifies as mainstream outside Android.
Python with PEP484 is pleasant.
The concurrency story in Java is great already and it is going to improve once Project Loom is integrated.
I fail to see how Java is significantly more verbose than Go. At least not the core libraries. The ecosystem is a mess but getting better. Java 11 with `var` for local variable inference is pretty much on par with Go in terms of verbosity.
Rust brings a lot of new things to the table but its main competitor IMO is C and C++. I've done very little Rust but what I have seen I like (though there are some things I dislike too... like lack of keyword args... a minor nit I suppose).
Not yet, I use types as an important form of documentation, so the vast majority of the library ecosystem would need to adopt optional types before I'd consider it so.
> Java 11
Not many people actually write Java 11 at work; most are stuck with 8, or even 6. Java 11 is basically even less mainstream than Kotlin right now, as not even Android supports it.
> is pretty much on par with Go in terms of verbosity
Yeah, but on par is not good enough, it has to be significantly better for your argument to work. Yet I don't see you calling for the eradication of Java. Also, gofmt makes it significantly easier to get familiar with a foreign Go codebase. Not a thing in the Java world. And not having to suffer the JVM startup time + having static binaries is indeed a real win for some.
> Scala is a shit show to maintain with some teams.
Yeah, it's a kitchen sink. Kotlin I like a lot better in this regard. But then everything can be a shit show with some teams.
P.S. Rust is getting rather good for writing web servers, and with things like rocket.rs & async/await, I think it'll be a real contender.
No you misinterpreted my point. It has to be significantly better than what already exists at the time of creation. Java was this to C++ in the 90s and early 2000s. Go is not that language.
Go can serve as a sort of Java to people who don't want the JVM.
There's no rule that says only one language is allowed in a particular space. Go has several strong points and several weaknesses, which are to be addressed in Go 2.
Java didn't have lambdas until 8 and now it does. Its generics implementation is not what I'd like to see. Still, it certainly has its place.
Go has been used to implement widely used technologies like Docker & k8s, so it seems to have found its justification for existing as well.
I'm all for full support for unicode string manipulation.
But when are "unicode identifiers" ever a good idea? All kinds of BS decisions (normalization etc.) for no good reason at all. Would you share code with identifiers written in an RTL language? Chinese? Hieroglyphics?
And I'm saying this as someone who's not a native English speaker.
If it was an APL dialect I'd see some reasoning, but what good does it do for Go?