I Want Off Mr. Golang's Wild Ride (fasterthanli.me)
717 points by whatever_dude on Feb 28, 2020 | 492 comments



The author spent a lot of time dwelling on Windows' filesystems, at which point many of the readers got bored and started commenting. There are actually a couple of excellent points in here, the majority of which relate to Go's tendency to just be silently, completely wrong in its behavior from time to time, and to be absolutely packed with hidden gotchas.


This.

I like, and agree with, the conclusion, and wish more people would get to it:

> Over and over, Go is a victim of its own mantra - “simplicity”. (...)

> It constantly lies about how complicated real-world systems are, and optimize for the 90% case, ignoring correctness.

> This fake “simplicity” runs deep in the Go ecosystem.

I've always liked simplicity, and in my own designs I tend to go for abstraction, trying to make things easier for consumers of my API. But nowadays, more often than not, I find myself preferring to be explicit about the underlying idiosyncrasies when needed. This is partly due to my recent experiences with Rust, and this post seems to concur:

> Rust has the opposite problem - things look scary at first, but it's for a good reason. The problems tackled have inherent complexity, and it takes some effort to model them appropriately.

In that sense, I especially like the approach to `Permissions`/`PermissionsExt` that Rust takes. It makes it clear what the tradeoffs are, and allows consumers to implement their own high-level, abstracted API without compromises.


A post about fixing Date in JavaScript got me thinking about why it took so long for languages to get good date/time APIs.

I think it's because it took so long to accept that date and time really is complicated.

If you sit down and work it out carefully, you end up with Joda-Time (more or less - not in all the details, but in the set of abstractions). If you balk at that and make something simpler, you make a subtly but fundamentally broken API.

It took a long time for us to get comfortable with the level of complexity in Joda-Time, but now nobody thinks a serious date/time API can be substantially simpler.

It sounds to me like you and the author are saying that Go does this balking systematically.


The author of Joda-Time actually thinks that even Joda-Time didn't get it quite right, and believes the java.time libraries in Java 8 and above (aka JSR-310[1]) are better than Joda-Time: https://blog.joda.org/2009/11/why-jsr-310-isn-joda-time_4941...

It turns out that abstractions for time are really hard to get right.

[1] https://jcp.org/en/jsr/detail?id=310


Props to him for not having an ego about that. Joda-Time even recommends JSR-310 for new development.


The author of Joda was the primary person responsible for the new Java Date and Time API (JSR-310).


Date & Time need to be baked into the operating system so it only has to be gotten right once, and then every programming system benefits.


So long as they actually get it "right". Compare to Windows' APIs originally taking UCS-2, then UTF-16, when now we would all rather be using UTF-8.


Note that UTF-8 hadn't actually been invented at the time UCS-2 was implemented in Windows: https://unascribed.com/b/2019-08-02-the-tragedy-of-ucs2.html


In fairness to Windows and Java, they weren't wrong. There was no UTF-16; rather, UCS-2 was the accepted standard, because the plan for Unicode was to encompass languages in use, not emojis and historical languages. That changed, and we're stuck with that legacy.


Even without emoji, mashing up Chinese, Japanese, and Korean to fit in 21k was never going to happen. It’s sort of like asking Danes to stop spelling their names correctly because we can't afford the extra codepoint for “å”.

https://en.wikipedia.org/wiki/Han_unification#Rationale_and_...


What I don't understand is how they miscounted so badly. Even ignoring Han unification, Chinese by itself uses more than 65k.

Or was there an intent to not encode some of these rarer characters? I haven't been able to find any info.


They didn't think they knew enough about CJKV to make that call, and so instead asked pre-eminent scholars from major Chinese, Japanese and Korean universities, and they replied that 21k was going to be enough.

The reason they thought they could fit Chinese into so few characters was that at the time, the CPC supported academics who wanted to reform Chinese towards fewer characters. Not long after, views about traditional scholarship changed, and now they want to promote maintaining more of their traditional characters.

The reason they thought Han unification would work was that the time they asked was in a short period of rapprochement in Sino-Japanese relations. At the time, it was good politics for Chinese scholars to co-operate with Japanese ones. Very soon after this changed.


The current policy seems to be to add characters to Unicode even if nobody uses them any more. Is that a complete flip from what it used to be? Because I feel like that's the only way character reform would have mattered.


Yes. The current policy is made possible by the adoption of UTF-16 and UTF-8, which expanded the number of representable characters from 64k to more than 1M, of which only about 10% are in use.


That solves the problem for one computer, but I can't see how it would solve it for networks and databases, given a computing environment that will continue to evolve and foil compatibility.

But a time protocol that achieves adoption as wide as TCP/IP's might suffice.


How much does that actually help? Some things are much easier when system calls return sane values, but the standard library of a programming language needs to work as best it can on many platforms.


IMO taking some time to deeply understand typical date / time abstractions is almost as useful as learning SQL, basic algos, and git. It'll keep coming up throughout your career and inevitably bite you in the ass.


Strings are the same story, except there's still a widespread (and incorrect) perception that they're simple - probably because it's a built-in type in almost every language.


For JS I can understand; given its history, it wasn't meant to do much, so they put in a trivial model.

Java, on the other hand, is a lot more surprising... but maybe they expected dates to be handled by a third-party package from the get-go.


And the other one is the tendency for Go's design to say "exceptions are allowed for me but not for thee".

When I worked at Google I used other languages by some of the same authors and they showed the same design philosophy. Make things with an enforced simplicity, and where there were more special use cases that the language designer needed, they created escape hatches for themselves but not the language users.


And the other one is the tendency for Go's design to say "exceptions are allowed for me but not for thee".

Yes. Exceptions are kind of a pain, but the workarounds for not having them are worse. Passing back "result" types tends to lose the details of the problem before they are handled. Rust is on, what, their third error handling framework?

Exceptions have a bad reputation because C++ and Java botched them. You need an exception hierarchy, where you can catch exception types near the tree root and get all the children of that exception type. Otherwise, knowing exactly what exceptions can be raised in the stack becomes a huge headache. Python comes close to getting this right.

Incidentally, the "with" clause in Python is one of the few constructs which can unwind a nested exception properly. Resource Acquisition Is Initialization is fine; it's Resource Deletion Is Cleanup that has problems. Raising an exception in a destructor is not happy-making. It's easier in garbage-collected languages, which, of course, Go is. You have to be more careful about unwinding in non garbage collected languages, which was the usual problem in C++.


Internal (bug, e.g. divide by zero or missing function definition in a dynamic language) vs external (out-of-your-control corner case, like file not found) is an important axis.

But another axis is "finality":

1. do you just want to never crash (return code)

2. sometimes crash but have the ability to deal with the problem up the callchain (exceptions in most languages)

3. sometimes crash but be able to fix the problem and continue, at the point the error occurred -- not up the call stack (restart-case etc. in common lisp)

4. sometimes crash, but have a supervisor hierarchy make an informed decision if and how to restart you and things in your dependency tree (erlang)

5. crash (panic, assert, exit) and maybe have some less sophisticated but probably very complicated mechanism take care of restarting/replacing you (systemd, kubernetes etc.)

This axis may not be completely orthogonal, but probably mostly is. For example, resumable conditions are nice in common lisp both to deal with external stuff (no space left on device? ask the user to abort or free some up, and just resume the download instead of erroring out as web browsers do) and also to just fix problems as you run into them and continue your computation during development, including calling a function you did not define – you can just define it and resume the call to it.

Sadly, the choices in most languages for this second axis are much more constrained. Erlang's supervision trees and common lisp's resumable exceptions in particular seem very useful in many scenarios but nothing else has them (well, elixir has everything erlang has, but it's still the same VM/ecosystem).


>Yes. Exceptions are kind of a pain, but the workarounds for not having them are worse. Passing back "result" types tends to lose the details of the problem before they are handled. Rust is on, what, their third error handling framework?

Rust's non-panicking error handling hasn't really changed: you return a Result<SuccessType, ErrorType>.

What has changed is the details of how to implement your ErrorType. Should it store some sort of context? What useful helper functions can there be? Things like that. What these error handling frameworks provide is macro-based code generation to implement these details, and extension traits for the helper functions. They don't change the overall method of error handling.

Or, at least, I've not seen one that does.


GP wasn't referring to those types of exceptions. They were referring to the golang authors making escape hatches for themselves which aren't available to end users.


btilly really walked into that one[0], but they meant "exceptions to the rules of the language", not "un-/anti-structured stack and/or control-flow fuckery". Most famously the lack of generics, except for anything the standard library wanted to use generics for.

0: "special cases are allowed for me but not for thee" would have been better but, y'know, hindsight.
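A sketch of the asymmetry being described, with a hypothetical `firstOrNil` helper as user code would have had to write it in pre-1.18 Go (before user-level generics existed):

```go
package main

import "fmt"

// Pre-1.18 Go in a nutshell: builtins like append, copy, map, and chan were
// effectively generic, but user code had to fall back to interface{} plus
// runtime type assertions. firstOrNil is a hypothetical user-level helper.
func firstOrNil(xs []interface{}) interface{} {
	if len(xs) == 0 {
		return nil
	}
	return xs[0]
}

func main() {
	// The builtin append works for any element type, with full type safety...
	ints := append([]int{}, 1, 2, 3)

	// ...but the user-level equivalent erases the type and needs an
	// assertion to get it back, which can fail at runtime instead.
	boxed := []interface{}{1, 2, 3}
	n, ok := firstOrNil(boxed).(int)
	fmt.Println(ints[0], n, ok)
}
```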


This is one of the things Ruby nails with its ensure statement that lets you clean up properly even if an exception was raised.


Rust has panic (and unwind_stack) for when things are truly borked.

Exceptions for error conditions are for the birds because they lead to bugs either in the code or in the compiler, and they lead to messy code.

Rust has functional-style error handling where it's possible to run other code, handle or ignore errors whereas exceptions create messy try catch blocks for every caller.


> Exceptions have a bad reputation because C++ and Java botched them.

Okay...

> You need an exception hierarchy, where you can catch exception types near the tree root and get all the children of that exception type.

Didn't Java do exactly that?


They tried, but the hierarchy is not well-designed. You want a clear distinction between "program has an internal problem" and "external thing (network, file, database, remote service, etc.) had a problem". You usually want to catch "external thing had a problem" in whatever wanted to talk to the external thing. "Program has an internal problem" usually requires restarting the program.


There is a clear distinction. RuntimeException (and descendants) is an internal problem (e.g. divide-by-zero); Error (and descendants) is a VM-internal problem (stack overflow, OOM); checked exceptions are external problems (e.g. IO exceptions).


Yeah... but checked exceptions are controversial within the Java community to say the least. Some codebases eschew checked exceptions altogether, rewrapping any checked exceptions they find.

This is in contrast to how sharp of a divide the community observes around Error vs Exception.

In practice there's a lot of Java code that collapses "internal" and "external" errors into unchecked exceptions.


The blame Java gets for checked exceptions is unfair.

Firstly, CLU and C++ were there first, so the language designers were building on something that they thought was a trend that would carry on.

Most people, never having learned how checked exceptions were done in CLU and C++, blame Java for them.

And even though it is more convenient to work without them, I do miss them in other languages, because developers hate documentation, so I have to keep fixing code or just add a catch-all handler, just in case.


I mean personally I happen to agree that Java's checked exceptions represent a reasonable experiment, although I think they suffer from low-level problems, namely the way they interact with lambdas, the way they're at odds with the community's interface-happy habits, and the self-inflicted tension between whether something should be checked or unchecked (should an out of bounds exception on a dynamically sized array be checked or unchecked? It doesn't seem morally all that different from whether a file exists or not).

But language features exist within the programming community that uses them and on codebases where even if you go to the trouble of threading through all checked exceptions underlying code still blows up under you with what should be checked exceptions (which is all production codebases I've ever seen), it quickly becomes an uphill battle to convince team members to use checked exceptions.


There are many Java developers who like checked exceptions. True, there are issues about polymorphizing them (which is what you feel when you use them with lambdas), but they're solvable. As to file vs. array, I do see a fundamental difference. Whether the index is out of bounds or not is up to the program; whether the file exists is not.


There are many who do. There are many who don't and among those who don't they can have pretty strong opinions and be fairly influential (see e.g. Robert Martin's Clean Code). When I say controversial I don't mean that as a back-handed way of saying no one likes checked exceptions, I mean that to say there are large contingents on either side. Unfortunately that means practically speaking checked exceptions lose a lot of their power since you have to proactively rewrap and rethrow a lot of your underlying libraries since a lot of them eschew checked exceptions (e.g. big examples that can pervade entire codebases are Spring and Hibernate). That's not impossible to do (I've done it for personal projects before), but I've never had success in convincing coworkers to do it and you're at the mercy of the accuracy of the documentation (or just seeing it blow up enough times).

I was thinking of the fact that the array can be dynamically sized as not being up to the program, but it's certainly true you can ensure the array isn't out of bounds (just check the array length in a single-threaded context or lock and then check the array length in a multithreaded context) in a way that's not possible for files.

Perhaps a more blurry example is ParseException (checked) vs DateTimeParseException (unchecked). I'm pretty sure I know the reasoning behind it, which is that ParseException is thrown by methods that directly ingest user data whereas DateTimeParseException is presumably not and is probably meant mostly for hard-coded strings. But there doesn't seem to be any real distinction between the two cases; it seems reasonable enough to parse strings passed in by the user as datetimes.

When you say solvable, I'm curious: do you mean currently in Java or with future features? Because right now, if I want to play nice with checked exceptions and lambdas, I do an ugly thing of wrapping the exception in a RuntimeException, then catching the RuntimeException, downcasting the wrapped exception, and rethrowing it. It'd be nice to know if there's a better way to do that.


> When you [say] solvable, I'm curious do you mean currently in Java or with future features?

I meant future releases. That people can manage now makes the problem not one of extreme urgency, so we're waiting until there's a solution we like.


Fundamentally, the problem is that whether an exception should be checked or runtime depends on the context. It makes no sense to say that "FileNotFoundException" is checked.

If that exception is thrown because a user selected a file that disappeared, it's recoverable, so it should be checked.

If that exception is being thrown at startup because the app failed to find a file that absolutely needs to be present otherwise everything is broken, then it should be runtime.

Obviously, Java will never be able to adjust to that perspective on exceptions since it would break pretty much everything.


I completely agree that context matters immensely and ultimately it's the caller rather than the callee that should decide whether an exception is fatal.

FWIW though I think that Java the language could adjust to that perspective, if nearly all current exceptions were checked exceptions by default and then callers were expected to wrap as a RuntimeException (perhaps something that was a bit more suggestively named such as FatalException) whatever exceptions they deemed unrecoverable.

Unfortunately there's some implementation-specific problems of going that route (generating exceptions is expensive and you'd probably want a far more fine-grained exception hierarchy rather than lumping whole classes of things under say generic IllegalArgumentExceptions), but they don't seem insurmountable.

More importantly though I agree that I doubt Java the community would ever accept that, at least not for a long long long time.


I suspect the proportion of Java programmers who like checked exceptions even in principle is decreasing over time unfortunately. So waiting might just result in it never being implemented :/ (although I guess that's probably a point in favor of waiting and never expending the wasted energy).

Just looking at the developments of past decade the community seems to be going hard for all exceptions being unchecked. Other statically typed JVM languages (Scala and Kotlin) advertise a lack of checked exceptions as an improvement over Java. Various Java plugins also tout the ability to remove checked exceptions. Manifold is particularly exuberant about it. Lombok offers this capability but is much more reserved about it, in no small part because the way it does so has pretty big downsides.

It's a bit sad (from my tiny personal perspective) because checked exceptions in Java have some ergonomic benefits over Result/Either types in other languages, but social dynamics are what they are.


Maybe I am misunderstanding your point here, but isn't making RuntimeException be the base for internal problems the issue the parent poster was talking about? Divide by zero is an obvious case, but what about when invalid or misconfigured data of some type is passed into a method? This is "internal", ie inside the program, unless I am misunderstanding your terminology? It isn't an external problem, so it isn't a checked exception, which means that the caller to your method doesn't necessarily actually need to handle RuntimeException that you would throw, meaning we're back in the C world of optional error handling?

I might be missing your point, but I would think that making it so that only program-external problems throw checked exceptions defeats a lot of the forced error handling power that checked exceptions grant?


You can always make input validation errors a checked exception with your own exception type. It'll accomplish what you're trying to accomplish. It's not perfectly clean because it'd be nice if you could stick it under the "IllegalArgumentException" part of the inheritance tree, but it does what you're asking for.


Right but if you derive it from Error it isn't checked and same as if it is derived from RuntimeException. Deriving from Exception is checked but then you aren't fitting in with the hierarchy that pron is recommending, or at least what I think they are recommending, which is why I asked for clarification.


Internal errors are bugs or other conditions that the application is not expected to handle (other than, say, restart the thread a-la Erlang). Input validation is not an internal error. IllegalArgumentException is meant to represent bugs, and so an internal error, and so it is unchecked. It does not represent input validation errors.


> Resource Acquisition Is Initialization is fine

It doesn't work well at all for transactions, where both A and B must succeed or neither.

https://dlang.org/articles/exception-safe.html


Thanks, that's a nice discussion!

Do you know if a similar, more granular approach (scope(exit)=~finally, scope(failure)=~catch, scope(success)=~else) over go-style defer=~finally is implemented anywhere other than D?
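Not implemented as language-level constructs elsewhere as far as I know, but here is a rough sketch (with a hypothetical `work` function) of how Go's defer, which is closest to D's scope(exit) in that it always runs, can approximate scope(failure) and scope(success) by inspecting a named error return inside the deferred function:

```go
package main

import "fmt"

// work is hypothetical: its deferred closure reads the named return value
// err to decide which "scope" it is in when the function exits.
func work(fail bool) (err error) {
	defer func() {
		if err != nil {
			fmt.Println("failure path") // ~ scope(failure)
		} else {
			fmt.Println("success path") // ~ scope(success)
		}
		fmt.Println("always") // ~ scope(exit)
	}()
	if fail {
		return fmt.Errorf("boom")
	}
	return nil
}

func main() {
	work(false)
	work(true)
}
```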


> , they created escape hatches for themselves but not the language users.

man, seriously disappointing..


Is that wrong, though? I mean, confounding features like exceptions are... useful. That's why they found in languages in the first place. They're just footguns when used "wrongly". So why not have an "escape" hatch for experts that isn't part of the standard/official/blessed paradigm of the language?

I mean, isn't this exactly what Rust did with unsafe? Warn everyone away from it, promise that "normal" code will never need it, but... include it in the language and use it pervasively where needed in the inner workings of the runtime?


No, because anyone can use `unsafe` in Rust. GP’s point was that Go can use the feature internally but no end-user can.


That sounds more like a legal argument than an engineering one. I mean, who cares what the mechanisms are and what the precise rules and enforcement mechanisms are? The point is there are complicated features that are OK in some contexts but not others, that this varies between problems and between languages, and that some parties make different decisions on how to make use of them even when implementing software for people who take contrasting positions.

I mean... so what? Either Go is good because it lacks exceptions or it's bad for the same reason. And it's either a good decision that it uses a similar mechanism internally or it's a bad one. Both of those are arguments worth having, but they are different arguments and there is no technical reason to demand they be resolved in the same direction.


It is very much an engineering argument, because writing maintainable software requires respecting contracts. If an escape hatch is private, you can't depend on it, because code doing so will break when the hatch changes (and there's no guarantee that it won't).


The discussion is about Go-internal features, and whether or not dogfooding should be required. There's no contract being broken.


The whole point of OP is that many Go-internal features shouldn't be internal.


And my whole response is that that notion of "shouldn't" reflects an aesthetic or moral argument and not a technical one. How Go is implemented says nothing about whether Go is a good language or not.


If I understand you right, your entire argument depends on counting "Go exposes X internal feature" and "Go exposes something equivalent to X internal feature" as different things.

Because demanding Go do either one is an engineering argument.

But demanding the former over the latter is an 'aesthetic or moral' argument.

Well, nobody else is making that distinction! They're only making the engineering argument! You're interpreting their words way too literally if you think they're making the aesthetic argument.


Agreed, the author’s main argument is summarized nicely at the end, and it’s a good one:

> It constantly takes power away from its users, reserving it for itself.

> It constantly lies about how complicated real-world systems are, and optimize for the 90% case, ignoring correctness.

> It is a minefield of subtle gotchas that have very real implications - everything looks simple on the surface, but nothing is.

“Our users are stupid, prefer subtly wrong to a more complex but correct abstraction” is core to Go, and appears everywhere.


That's indicative of a serious attitude problem - I am so smart and can handle the power but you can't. Contrast this with C where everybody is on equal footing. Thanks for posting. This tells me everything I need to know to not get on Mr. Golang's Wild Ride.


I don't think it's necessarily an attitude problem, I think it's a fairly inherent part of writing libraries and APIs. You always need to make trade offs between complexity (or lack of) and correctness, the authors of the go standard library have swung the needle a bit more in the simplicity direction while the authors of the rust standard library have gone a bit more in the correctness direction.


Unfortunately, that's becoming increasingly difficult with the number of programs written in Go that touch developers' lives on a regular basis. Kubernetes, for example, is written in Go. Want to fix a bug? You're now at least a part-time Gopher.

I don't think we've seen the likes of this since PHP.


Kubernetes is always brought up when discussing Go. I wonder if Go might ultimately be its Achilles' heel? If so, I wonder how that might manifest itself?


I cannot attest to its full correctness, and I'm aware of the hearsay-ness of it, but I've seen discussion before on how much “hacking around the lack of generics” is in the Kubernetes codebase.


I like to think of it more like Go was built by Google, for Google. If you aren't Google, then Go is probably not right for you.


This seems to be a natural result of the fact that the language's creators and implementors work at the same company as the target demographic.


> the majority of which relate to Go's tendency to just be silently completely wrong

100% right on.

Go's handling of errors is often ridiculed for its verbosity and lack of thought, but the fact that Go makes it so easy to sweep errors under the rug has real and devastating consequences in the real world.

Go programs are much less safe than programs written in Rust or Java for that reason.


It's not surprising that other pieces of golang have incredibly broken assumptions, I know this particular one was discovered independently by many people:

https://github.com/golang/go/blob/71ab9fa312f8266379dbb358b9...

https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/


That debug report was incredible, I have to say. Hats off to the author for their patience.


I don't think this example justifies the expression "incredibly broken assumptions", I would rather call it an "incredible corner case"...


Making assumptions about the stack frame size of third-party code is incredibly broken. You have no way to make any guarantees about it and primitives like alloca mean you can't even be sure the stack size will be the same across multiple calls.


The VDSO isn’t really third-party code. It’s very surprising for it to do a stack probe. Though obviously it’s possible, and it falls on the runtime to handle (so it’s a legit bug).

But.. nearly every “runtime” makes stack assumptions. They usually just have enough slack that it doesn’t matter. E.g. musl has an 80k stack by default — is it still “incredibly broken” if the VDSO needs > 80k? No.

While the VDSO doesn’t have an explicit stack requirement, it’s definitely implicit that it will use a small, reasonable amount.


Java just hides them in 40kb of logs that don't tell me anything.

I agree that golang makes it easy to lose errors, but holding Java up as the better comparison is laughable in practical use.


"Go programs are much less safe than programs written in Rust or Java for that reason."

This is plain wrong. ( Rust included )


Being unaware of an exception thrown by a function when calling it in Java will cause compilers to bark at you - while doing the same in go will work until it doesn't.

I think this is even more insidious with changes in third-party code over time, though - did the package your gigantic product uses to validate that a phone number is in a European format just add an error return value to a function that previously had none? I hope you are reading all the changelogs closely, because in GoLang nothing will bop you over the head for suddenly not handling a new `err` response that wasn't previously returned by a function, and if that error is triggered (for whatever reason) it won't be at all visible in your system until something major breaks inexplicably.

I really dislike requiring users to be proactive about handling errors. Some folks will wrap a System.in call in try { } catch () {} - but that is a clear anti-pattern that can be actively flushed out; tracking down someone forgetting to check the return value of mkdir is much harder.


Go will most certainly notify you if a 3rd party API suddenly returns a new error. Typically you would see an assignment mismatch.


No, it won't.

If a function could return one error but in a new version, it can now return two types of error, the Go compiler will not notify you. And your code is now failing to handle an error case.


Won't this only cause an assignment mismatch if the function previously returned values that you captured? For functions that previously had no return value and suddenly gain one (those side-effect-only functions), I believe Go won't complain?


The only situation in which that will happen is a function going from returning a value to returning a value and an error and you were actually getting & using the value.

If a function starts returning more variable errors Go likely won’t tell you (though in fairness that’s a pretty common issue).

If a function was not returning anything or you did not care for the value it returned, it adding an error will be a completely invisible event.
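A minimal sketch of that invisible event, with a hypothetical `Validate` function standing in for the third-party API:

```go
package main

import "fmt"

// Validate is a hypothetical third-party helper. In v1 it returned nothing:
//
//	func Validate(s string) { ... }
//
// v2 adds an error return. Every existing call site still compiles
// unchanged, because Go lets a call statement discard all of its
// return values.
func Validate(s string) error {
	if s == "" {
		return fmt.Errorf("empty input")
	}
	return nil
}

func main() {
	Validate("") // legal: the new error disappears without any compiler signal
	fmt.Println("no compile error, no runtime signal")
}
```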


It doesn't make Go much less safe than Java. Go is a safe language.

Having worked with Java long enough, I saw all the problems with exceptions and NPEs, and can tell you that those problems are less prominent in Go.


I think you two are using the word "safe" differently. I agree with you that Go is memory-safe, but I think they mean more that business logic will do something wrong because you got an err you didn't handle, vs. dereferencing a null or something.


golang does nothing to solve NPEs, and any line of golang can panic at any time. At least with Java, established frameworks have exception handlers at the base, where relevant exceptions can be logged or handled in some other way. With golang, an error that is not handled just silently keeps the program running in a corrupted state.
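A small sketch of the nil/panic point (`Config` and `readName` are hypothetical):

```go
package main

import "fmt"

// Nothing in Go's type system stops a *Config from being nil, so the
// dereference in readName panics at runtime; the recover here just turns
// that panic into a reportable value instead of crashing the program.
type Config struct{ Name string }

func readName(c *Config) (name string, panicked bool) {
	defer func() {
		if r := recover(); r != nil {
			panicked = true
		}
	}()
	return c.Name, false
}

func main() {
	var c *Config // nil, and the compiler has no objection
	name, panicked := readName(c)
	fmt.Printf("name=%q panicked=%v\n", name, panicked)
}
```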


That said… I feel that Rust’s use of WTF-8 for OsString on Windows has resulted in some really nasty problems, especially since OsString doesn’t expose any useful methods for string manipulation. As far as I can tell, Rust’s approach fails to hide any of the complexity, and then adds the additional complexity of a new encoding and conversions on top. I can see that there’s some end goal of being able to work with OsString in Rust code but at the moment the API is missing everything except a couple functions to convert it into something else.

It’s a truly cursed problem that we have three separate notions of strings. We have Unicode strings, we have bytestrings, and we have wchar_t strings on Windows. No two of these are completely interoperable. This has a ton of direct consequences which cannot be completely avoided. For example, if I want to make a version of “ls” that gives a result in JSON, I’m already fucked and I have to change my requirements.


Isn't the point of OsString that it's essentially a faux-union type that is intended to be immediately converted to some concrete representation which /is/ richly manipulable?


The problem is that this is not good enough. It’s not uncommon to need to do a small amount of manipulation of OsString and there is no good way to do it.

In C++, it’s fairly easy. In Rust, it’s a damn nightmare.

In theory, in Rust, since OsString is basically Vec<u8> on the inside (like String), you could implement e.g. Path::has_extension in the same way as str::ends_with. However, anyone who has gone in and tried to implement this for OsString or Path has apparently gotten buried in the complexity and given up.


Is that not the point? That you ought to map out that complexity in a type whose constraints must be satisfied in order to have a valid instance?

If your string is invalid to start with and you need to correct it, then yes, you need to wrangle that complexity yourself. If you need some tools from another toolset - eg. String functions that can help you make a valid Path - then you will make multiple type conversion hops to arrive at your destination. But trying to use String methods on something that may not be a valid string is no solution to the original problem, and would merely be hoping you could get away with the assumption.


The point is that should be part of the standard library, like wstring in C++.


https://doc.rust-lang.org/std/ffi/struct.OsString.html

I'm puzzled what you think is missing. What should be part of the stdlib that isn't currently?


Basic stuff like starts_with() is missing. You cannot slice an OsString into parts or iterate over its components. Almost everything you want to do with a string is missing.

If you are curious for yourself, try to write an argument parser that will parse something like "--output=<path>" and store the path as an OsString, and make it work on both Linux and Windows. The OsString abstraction breaks, and you have to write platform-specific code or use "unsafe", even though internally OsString is just a Vec<u8> and you should be able to strip off the "--output" as it is encoded the same on both platforms.

E.g., fill in the blank:

    /// Split an arg "--<name>=<value>" into (<name>, <value>).
    fn parse_arg(arg: &OsStr) -> Option<(&str, &OsStr)> {
        // What goes here?
    }
This is trivial with &str.


Why not transfer it over to a full String instead? How many other libraries and core functions expect an OsString? Why rely on an abstraction that's intentionally been given minimal functionality?

Of course `starts_with` is missing: you haven't resolved what underlying type the value actually is yet, and you'd be trying to compare apples and oranges for all you know! Move the OsString to a concrete type and you'll have all that functionality and more. The only time that will fail you is if you don't have a valid string to begin with, under which case `starts_with` should fail, correct?

Everything about OsString makes it a type you convert to and from, but it's not intended to be one you work /in/, since that would require you to make assumptions about which platform you are running on. You really want to manipulate it? Go to String and back, and pay the cost. This should also encourage you to use OsString as little as possible, at the edges.


I have felt this pain for sure, but only really once. This is because, in languages with strings as paths, I tend to use string manipulation to do operations on paths; but given that virtually all of my OsString usage is paths, which already have specific manipulation functions, the pain is lesser.

This is also why the interface isn’t so rich, there just hasn’t been a lot of demand. That said I think some things are in the pipeline?


The most common use for string manipulation on OsString is parsing CLI args like --output=/some/path. See eg. clap/#1524.


Do you have any more info about what's in the pipeline?



I thought that someone had recently sent in a PR adding some convenience functions, but I can't find them now, I must be imagining things :(


Rust still doesn't get this right. If I'm calling an NFS library, say, on Windows I need to use UNIX paths. Rust needs WindowsString and UnixString on every platform, with OsString as a synonym for whichever is most useful locally.


In that case... You wouldn't be using the rust file system libraries though, right?

It seems like the simplest definition of an OsString is "the type used to interact with the OS file system API as implemented in rust".


Rust has a policy of keeping the standard library minimal and this is completely reasonable. But sometimes they overdo it. In this case it's nuts that I need to implement my own UnixString because the standard library doesn't expose it, and when I run on Linux I have two incompatible versions of the same thing.

Another example: I wrote a command line app which takes a hostname/ip address plus an optional port number after a colon. And the whole thing's async using tokio. The way the hostname/IP address parsing is structured in tokio and the standard library meant I had to reimplement all of it to add the port number. This all feels like more effort than it should be.


Doesn't address the OsString complexity- but the author has another post on strings in Rust that you might be interested in [1]. It at least addresses how Rust does hide a lot of regular `String` complexity that `C` doesn't.

[1] https://fasterthanli.me/blog/2020/working-with-strings-in-ru...


Not a Rust guy, but I did have to look up WTF-8. Had to chuckle at the 'wobbly transformation format' as I was translating it to something else. :)


> at which point many of the readers got bored

I don’t know Go or Rust, and yes, I did almost get bored and quit the article.

However... glad I powered through, because the part where 169 dependency packages ended up bringing in gRPC and Protobuf and the kitchen sink was worth the read. That's a 0.00% acceptable outcome. The language shouldn’t encourage this imo.

I get the idea. But I’m coming from embedded so when I see they have a 32bit argument with 4.29 billion options to represent a boolean, well, no sir I can’t get behind that at all :)

Likewise I think the author would have done well to leave Rust out of it most of the time. Your gripes with x should be independent of “y lang does it better” aside from just knowing it is possible to be better.


> the author would have done well to leave Rust out of it most of the time.

Initially I thought the same, but then I realized that it was being used as a means of expressing that it doesn’t have to be this way. That there are better choices, and here’s an example of better choices. Probably too much detail on the Rust, but using it to contrast with some of the poor decisions pointed out in Go is useful.


Is there a language besides Rust that could be used instead as this example? As in, languages whose standard library was so carefully designed from previous experience that the design of features like Permissions/PermissionsExt and OsString deliberately take into account the design of both Windows and Unix-like internals.

The author mentioned part of the reason the filesystem API is so awkward in Go is because Go was designed from the start with Unix paradigms and Windows was not even considered because there was no reason to at the time - it started as a language internal to Google and their development priorities likely excluded Windows. When it became public and more widely used Windows support became a necessity for cross platform support, but there was no going back to redesign everything so it had to be bolted on within the paradigms the stdlib's interface allowed for.

When it comes to the stdlib I feel you have to be extremely careful about these things and plan for Windows users ahead of time if you intend for cross platform support - and that should be a goal if it's to be widely used. I feel like Rust only accomplished what they did because they made sure to include Windows as a first-class platform in their philosophy from the start.


Yes, plenty of languages do this. For example, Racket exposes permissions as bitmasks but they work portably across platforms, and Racket handles all the filename encoding issues mentioned here.


C++17's std::filesystem is awesome.

One of the benefits of C++'s minimal standard library is that when something finally does get added 20 years after it's an established technology the OS primitives have already solidified. (threads and mutexes in C++11 for another example)


Well threads weren't that well done, hence std::jthread.

And let's not start with the whole concurrency soap opera, which might be done by C++23, and the mistakes in std::async design.


Best back handed compliment I've seen in a while ... And right too.


> Your gripes with x should be independent of “y lang does it better” aside from just knowing it is possible to be better.

And what better way to demonstrate it's possible to do better by having an example ready?


Because you may turn away anyone who would otherwise be sympathetic to your gripe, if they're reluctant to accept your proposed solution.

I’m reminded that “not every problem needs a solution”. Or at least needs one right now.

I agree, in this case it made good points, but too many of them. It just didn’t need so many examples of why “Rust is better”. If you were invested in Rust or Go, this article reads differently to you than me who is invested in neither.


Then you would have to contend with people that are either not convinced it is a problem or that think a better solution can't exist.


The problem is... and a lot of people aren't going to like this, including the original author... a lot of those particular gotchas are there for a reason. The author overestimates how much of them are intrinsic to the language, in my opinion. This is a cross-platform file interface, and we've had those for years. What we tend to discover is that if you do write something precisely correct on each platform, you lose a lot of value in the cross-platform bit. Technically, pretty much the entire interface ought to be slightly different between all OSes. All of them have relevant differences all the way around in permissions, behaviors, "inodes" or whatever the equivalent is or possibly no equivalent, whether they have "symlinks" or "hard links" or other bizarre things, whether a file is a single stream or multiple, the list goes on and on.

It would be completely feasible to write the "os" package to intimately bind to each and every difference; as mentioned in the article, the cross-platform functionality is there. (Well... at least down to the OS level. Considering the full space of filesystems themselves get even more fun.) The consequence would be the near complete loss of ability to write cross-platform code beyond very trivial stuff. On its own, this is neither good nor bad. It is a matter of what the goal is, and the goal here was an 80/20 cross-platform functional library, as is the goal of most of the standard library. If you need the other 20, you need to do something other than the standard library, and that's the case for the entire library, not just "os".

If you magically materialized this perfectly-matched cross-platform library and submitted it to the project to replace os, and even if we ignore the backwards-compatibility promises for Go 1, I virtually guarantee it would have been rejected even so. It's not the kind of library that Go wants as its standard library. It's a perfectly sensible sort of library. It just isn't what is desired in the standard library. "What is desirable in the standard library" is a very exclusive list.

All cross-platform file interfaces are quirky if you really zoom in on them, because if you sit down and really play with it, like, beyond what a ranty single blog post would constitute, you'll find you can't get the quirks out. There is an essential level of quirkiness in the problem itself.

I also would disagree that "stronger types" would solve the problem. You can easily write a Rust library that is basically the same thing as Go, even if it happened to have a slightly different set of quirks, using similar types across all platforms. You can easily get OS-targeted libraries that don't implement a virtual lowest-common denominator, but that means you get non-cross-platform code, on the grounds that it does not matter what the underlying language does, you can't access extended attributes on filesystems that don't have them, and if your concrete type forces you to deal with that on an OS-by-OS level, you can't share concrete types.

"Rust could use traits to solve this!" In which case, the traits will themselves define a lowest-common denominator cross-platform library, with a more "accurate" library underneath. Go could use interfaces in essentially the same way, with the same consequence. You can't get away from the fact a LCD library will be quirky; it is only a matter of choosing the quirks you have, not whether you have them.


I share your conclusion that "the perfect cross-platform library" does not exist, and I also agree that we could use Rust to make one that's worse than the Rust standard library, and we could use Go to make one that's better than the Go standard library.

However, Go's limitations make it hard to make one that is much better than the Go standard library. And the Rust standard library is so carefully designed, that there's really no need to go ahead and redo the work. Unless you need something it explicitly doesn't support (and doesn't promise to support!), in which case there's a wealth of crates at your disposal, which also benefit from a rich type system and strict compiler checks to ensure correctness at compile-time.

People don't just switch to Rust and write code like they did before. It's different enough that it makes everyone rethink how they approach a problem. But it doesn't just get in your way - not only are the compile errors excellent (and the core team is working tirelessly to improve them even further), it gives you the tools to build solutions you'd never pull off in other systems languages.


Can I give you an utterly useless comment that I am dying to get out of my system now that I see your HN username?

I saw the domain name and thought it was a clever phonetic hack, using the dot character literally pronounced to make "faster than li (dot) me", which sounds like "faster than light (dot) me" when you say it out loud.

I now see this is not the case at all. Alright, that's all I've got.


I thought it was a play on "quicklime".


This is a lot of words about how a cross platform library is good. The author never said otherwise.

The complaints are about how the libraries as implemented are subtly wrong and use simplicity as an excuse.


> This is a cross-platform file interface, and we've had those for years. What we tend to discover is that if you do write something precisely correct on each platform, you lose a lot of value in the cross-platform bit.

This is similar to the pros/cons for using a mobile framework like React Native, where you get a lot of productivity in cross platform design but you lose some of the precision compared to developing for each platform natively.

If you need that precision, then you need to use something native. It is simply a tradeoff of business needs. Do you need cross platform productivity or per-platform precision? If you need precision, then reconsider your engineering tools.


>It's not the kind of library that Go wants as its standard library. It's a perfectly sensible sort of library. It just isn't what is desired in the standard library. "What is desirable in the standard library" is a very exclusive list.

This sounds like a reasonable argument if your language is, say, Julia, or something like Lua, where in the first case you probably don't write code that needs to do a lot of work at the OS/network/hardware level, i.e. systems programming, and in the second case, the language has a good built in FFI that lets you drop down into C or a similar language to do systems programming. Python, Clojure and Ruby fit more-or-less into the second case. C simply forces you to do all of the work yourself.

But Go's FFI (cgo) is a constant source of consternation, and while Go's authors admit it might not be suitable for "the largest" codebases, the hostility of Go to good FFI makes it more uncomfortable to use in practice. The official viewpoint is something to the effect of "most people shouldn't need cgo". The result has been a proliferation of libraries that attempt to do systems programming in Go, which includes in particular the "monotime" debacle highlighted by the author.

Remember: this blog post highlighted this weirdness in a real library used to solve a real problem. The idea that Go isn't meant to be used for that is belied by the fact that many, many people do try to use Go for that.

So yes, if Go had a more "difficult" file library, it would be less consistent with the "simplicity" idea used to advertise the language, but it might be more consistent with the way that Go is used in practice.


Yes, and the FFI friction maps to my main concern with Go as it stands: it's a bit like Python in that it wants to be extended rather than embedded. That always creates a "hollowing out" of its core over time as users exceed the capacities of its standard library and try to push it into new environments. In contrast, Lua, for example, has a parasitic quality to it: make the host more powerful and then you can easily give Lua similar powers.

However, the other half of that is that in a lot of cases, the libraries are chosen, not the language. And then why would you choose the janky "worse is better" libraries? Well, there is a reason: at some level all your code is still a prototype or draft and the "real thing" is yet to come. And then Go looks rather successful on that front in that its primitivism works at the outset and ships a lot of software, which in turn creates the demand for the heavier "big-boy" solutions.

That's a thing I often don't see addressed in this kind of rant.


I don't think the windows vs everything else is the main point of this article. I think it is just an example of a design trend, and even within that section of the article the author mentions unix specific design flaws of go file handling.


Yes, the latter half made much better and broader points


Not disagreeing. While some readers got bored (understood), others got more interested at the detail to which this was being taken and enjoying the care of discussion.

I’ve never programmed in Go from a vague sense of these issues. Hey, it confirms my uninformed biases!


On the other hand, why care about Windows?


It’s a pretty recent arrival to the production ecosystem.

Is it any different than Java or Python 20 years ago?

How much of this is “wrong” given the relativeness of wrong when it comes to what is effectively how to organize a syntax construct hierarchy?

You can find the same ranting all over about C, Python, etc

Oh look computer people got an opinion on the organization of computer stuff. Shock, awe


It's not immediately obvious, but the rant is less about the specifics of the design, and more about the assumptions and attitudes underpinning said design - and how they translate to bad design all around, not just those particular pain points.


Here's an example of why Go's simplicity is complicated:

Say I want to take a uuid.UUID [1] and use it as my id type for some database structs.

At first, I just use naked UUIDs as the struct field types, but as my project grows, I find that it would be nice to give them all unique types to both avoid mixups and to make all my query functions clearer as to which id they are using.

    type DogId uuid.UUID
    type CatId uuid.UUID
I go to run my tests (thank goodness I have tests for my queries) and everything breaks! Postgres is complaining that I'm trying to use bytes as a UUID. What gives? When I remove the type definition and use naked UUIDs, it works fine!

The issue is Go encourages reflection for this use-case. The Scan() and Value() methods of a type tell the sql driver how to (de)serialize the type. uuid.UUID has those methods, but when I use a type definition around UUID, it loses those methods.

So the correct way to wrap a UUID to use in your DB is this:

    type DogId struct { uuid.UUID }
    type CatId struct { uuid.UUID }
Go promised me that I wouldn't have to deal with such weird specific knowledge of its semantics. But alas I always do.

[1] https://github.com/google/uuid

EDIT: This issue also affects encoding/json. You can see it in this playground for yourself! https://play.golang.org/p/erfcSIe-Z7b

EDIT: I wrongly used type aliases in the original example, but my issue is with type definitions (`type X Y` instead of `type X = Y`). So all you commenters saying that I did the wrong thing, have another look!


Is there even a single go abstraction that doesn't leak its guts everywhere?


There aren't any abstractions in any language or library that don't leak everything about what they are trying to hide as well as everything about their own implementation. That's just life. It's impossible to hide complexity. Whatever wraps one thing will be strictly more complex than the wrapped thing was.


This just isn't true!

Haskell's json/sql marshalling do not use runtime reflection but instead ad hoc polymorphism, so when I create (or even derive automatically!) a marshalling instance, it is pretty easy for me to reason about what will happen statically. Haskell's Generic & newtype-deriving go a long way here, and are good examples of principled abstractions that do not leak.

Haskell's conduit (and other streaming libraries) is another good example. I use them to create programs that process things in constant memory, and when I compose them (e.g. with operators like =$= or .| in conduit), the resulting program streams in constant memory. I have built entire systems (CLIs, batch jobs, event processors, etc) on top of this abstraction and conduit itself has never leaked.


I believe Rust's serde library is similar when it comes to JSON. It also gives you the option to choose how strictly you want to enforce the types.


This is funny because after almost 4 years of working with Elixir and watching Go from arm’s length, I literally see NONE of the criticisms regularly leveled at Go. This is not an exaggeration. In fact, Elixir has few criticisms at all to begin with, and it’s driven very large sites already at this point.

(Yeah, it can’t compile easily distributable self contained binaries. It’s not (yet) designed for that.)

Not intending to start a language war, but your argument is basically the beaten wife claiming this is just standard treatment from husbands. This is my counterexample.


Elixir is a fantastic and small language. Jose Valim kept it reasonable.

It just has so much depth with concurrency. The actor model is amazing and easy to think about too.

I'm not entirely sure they overlap completely, but the concurrency model is superb. I will probably only use Elixir for web applications from now on (unless it lacks packages for a certain API). Chris has made web development possible and many awesome people have contributed (the Pow package is just lovely).


Elixir is Erlang anyway. The only difference between the two is basically syntax and a couple of bonuses in the Elixir STDLib, most of which wraps existing Erlang functions.

You're praising Erlang's concurrency model, there's no such thing as Elixir concurrency model.


Yeah I understand that but not many people do Erlang from Elixir from my experiences. Also I find Elixir community is better for me as a web developer than Erlang.

But it's also impossible to ignore Erlang when you dive deeper into the actor model. Several good actor-model books are Erlang-only. Unfortunately many people, including myself, never actually do anything hardcore with the actor model. There is a good blog post about the skill stages of Erlang/Elixir, with OTP at the very top in terms of how few people actually use it. I can't recall the author =/.

Actually, I like Elixir's package management system and command-line tooling much better. They recently added the release command-line tooling. There are several more differences, I think.


I tell people Elixir is just pretty Erlang.


The reason is that Go is very popular where Elixir is not. If Elixir were standing where Go is now, it would be pretty much the same, I guarantee you that.


Erlang is a stable language that has been around for a long time (almost 35 years now). It's used for way more mission-critical code than golang is ever likely to be used for. Its weird syntax and performance tradeoffs are very well known, but you still won't see anywhere near the number of complaints that you see against golang.


Erlang is less and less used in telecoms, and that's the only place it was really used; lots of things have switched to C/C++/Java.

As for why it has fewer complaints, it's pretty simple: hardly anyone uses Erlang. It's a niche, not a general-purpose language. I can't even name a single well-known application or library written in Erlang.


IBM Cloudant runs everything on CouchDB, which is written in Erlang, as does Amazon for SimpleDB. WhatsApp and a bunch of Pinterest's services are written in Erlang (they may actually be in Elixir, but on BEAM).

To my knowledge Ericsson still writes most things in Erlang. Yahoo uses it for a couple of large services. My company uses PagerDuty, which is also written on BEAM.

I'm sure there are many, many other things I'm not familiar with, but claims of Erlang's demise are greatly exaggerated.


Erlang is mainly used in the control plane at Ericsson -- you'd never write a DSP in it for example. Most code is sadly not Erlang.


Ah yes. Whoever's heard of Whatsapp, Pinterest, Discord, and Goldman Sachs? https://codesync.global/media/successful-companies-using-eli...


Just like Go only matters mostly to Docker and K8s shops.


ejabberd is one that I had installed at some point.


If you were using Go for those four years, you'd also see very little of the criticisms regularly leveled at Go. What you see on the internet is the union of everyone's complaints and frustrations. Each individual sees only some of those, maybe even none depending on the type of work they're doing. And the ones that are generally happy don't tend to post big rants, so the overall impression of an outsider can be pretty skewed.


> If you were using Go for those four years, you'd also see very little of the criticisms regularly leveled at Go.

That's bullshit. When I worked in the Ruby on Rails space, I saw criticisms quite often... for example of the "one size fits all" way-too-large-surface-area library design (see: ActiveSupport) or the "fat model" design trend at the time... I even PRODUCED some of these criticisms myself. Heck, the bug that made me leave the space took a month to track down and had to do with nondeterministic behavior when merging a Hash with a HashWithIndifferentAccess (this was not my code, I would not have written it the way it was written, but it was a bug I was assigned to in someone else's code) because the Ruby language did not see these as two distinct "types" and simply went ahead as if they were both regular Hashes, which caused HWIA keys to get overwritten unexpectedly/nondeterministically... that was the last straw at the time.

And when I worked in .NET/ASP prior to that, I saw (and witnessed) MANY criticisms of the language and API such as how easy it was to produce difficult-to-test spaghetti code.

And while all this was happening I was also working on frontend code in JS and needless to say there have been A LOT of JS criticisms over the years and I've seen most of them.

So yeah, maybe try again with less BS. The biggest criticism I can produce of Elixir (and more generally the BEAM VM) is that too few people understand its advantages and too many people actively misinform others about it. (OK, one more criticism, but it is a more general criticism of functional immutable languages: A handful of algorithms perform suboptimally in a language that does not even permit mutation, relative to a language that does.)


I think you might be reading an unintended meaning into that sentence. I'm talking about the experience of using Go vs. what people write about it, not about your experiences with other languages (which I have no intention of invalidating). I was pointing out that sometimes one can get a skewed perspective of the typical experience by reading blog posts and things (which by definition are written by the more vocal members of a community).

I did use Go professionally for about five years, and read lots of criticisms of it (and still do out of curiosity). Most of them struck me as just not a big deal in practice. When I had to write some list-processing code that could be greatly simplified with generics (a few times per year), I just sighed and typed it out and moved on to a more interesting thing. When I had some repetitive error-handling code, I refactored it or wrote some helper functions. The dependency stuff was annoying when starting a new project, but once you pick a dependency manager/vendoring tool it's fairly straightforward (and of course now this is included).

Certainly, the language and ecosystem has warts and frustrating things. Maybe Elixir has fewer warts, I don't know. But overall the experience of using Go is fairly smooth and boring and productive. The things that people like to complain about don't really register in day-to-day usage. That's not BS, that's my personal experience.


Well, you might have the Stockholm syndrome. :D But yeah, we all have to deal with some idiosyncrasies with our languages/tools of choice. It’s inevitable.

For the record, I really love Elixir but it’s quite easy to swallow/ignore errors there as well. (To translate one of the Go’s criticisms.)

So I get what you’re saying. But IMO the outsiders’ perspective is valuable because it outlines stuff we have gotten used to, and they might not be willing to do that.

So such criticisms might be minor for you and me but they add much-needed nuance in the long run, I believe.


Fair enough.

And truth be told, a lot of the issues I encountered in the languages I mentioned were from other people’s flawed code designs, not my own.


I've been using Go for years. It works, but it's mostly despite bad design issues and because of network effects/money+time put into the project. Not impressive at all.


You can find rants on any language because nothing can ever be perfect. Everything has pros and cons. https://yourlanguagesucks.com


"It's impossible to reach absolute zero, so it's also impossible to drink chilled beverages."


heh. someone hasn't used a monad.


There is no way to avoid leaking implementation details, it's just a matter of how much you leak. Even monads leak their performance characteristics.


*sometimes and not always

State, for example, doesn't leak. You never worry about the fact that it's a function `s -> (s, a)` and it gets optimized away like nuts.


This is an example in favor of the fact that abstractions can't paper over performance characteristics. Users may depend on them.

In the unlikely event that there was a change to the implementation of the State monad, or to one's compiler, such that the State monad was not optimized away, it would be disruptive to users. It would probably be treated as a bug, even if the only change in behavior was in additional CPU and memory usage.

This is a corollary of Hyrum's Law.


Not to mention that sql.Result (the return value of Exec) has a LastInsertId() that's an int64, so if you're using uuids, you can't use that at all and have to call Query instead and manage generated IDs yourself.


This is a more ridiculous symptom of bad library design than the filesystem trouble mentioned by the article.

In the real world, executing most SQL statements could be made to return a semi-useful integer according to simple and consistent rules (e.g. affected row count, -1 if there's no meaningful integer).

But the official Go documentation

https://golang.org/pkg/database/sql/#Result

makes it quite clear that the Go design committee decided to imitate a remarkably limited and inelegant MySQL function that returns the value of an auto-increment column, not even realizing that only a few statements have auto-increment columns to begin with. I'd call this a negative amount of design effort.

  LastInsertId returns the integer generated by the database
  in response to a command. Typically this will be from an
  "auto increment" column when inserting a new row. Not all
  databases support this feature, and the syntax of such
  statements varies.
(Of course, MySQL's LAST_INSERT_ID() is only bad as a building block and inspiration for a general API; in SQL queries assumptions aren't a problem and overspecialized tools can be occasionally very useful)


In a lot of cases - esp. distributed systems - it's not up to a database to generate a UUID, but the application. In theory you can have a hundred servers that generate records and send them to a central storage platform (which may or may not be a database, or event bus, etc).

UUIDs are not meant to be generated by databases.


Haha yes I've long since resigned myself to using Query + explicit RETURNING for inserts


I agree that the standard library database tooling is really clumsy in a lot of cases, but it's the library implementation at fault, not Go itself. Notably, contrary to your last sentence, you aren't troubling yourself with "weird Go semantics", you're troubling yourself with the semantics of the database stdlib.


Is there a database library that uses reflection that properly descends into type aliases? Probably not, because it isn't always what you want.

It's still fundamentally caused by Go's shitty design choices.

encoding/json is at fault as well, which is also in the stdlib and a flagship library (basically part of the language - the maintainers wouldn't even extend its struct tag parsing to allow for required fields it's so fundamental!)

Proof that this issue affects JSON: https://play.golang.org/p/erfcSIe-Z7b


> It's still fundamentally caused by Go's shitty design choices.

I mean, come on. In your playground example, one way of using the UUID type inherits its methods and another doesn't. Inheritance is inherently complicated, and if you're relying on it you need to know what you're doing, no matter what language you're using. I wouldn't call that a poor design choice.


    newtype NotRobPike'sUUID = NotRobPike'sUUID UUID deriving newtype (ToJSON, FromJSON)
^ not inherently complicated


I'm not sure you're making the point you think you are.


My point is if you Do Better than cargo cult stuff that feels nice, you can solve the original problem nicely.


> Is there a database library that uses reflection that properly descends into type aliases?

The database package uses a type assertion to find the methods, not reflection.

Go types have one level of underlying type, not multiple levels as you seem to be assuming. Go is simplistic compared to other languages in this regard.

A type definition defines a new type using the underlying type of some other type.

I can understand the complaint that Go does not have the aliasing feature that you want, but the database/sql and encoding/json packages work exactly as expected given Go's simple model.


I'm not sure, I kind of like this?

Personally I would never create different UUID types for each DB struct, and have never seen that done.

    type DogId struct { uuid.UUID }
is type composition (Sorry if I'm not using the right term there). Isn't this exactly how you're supposed to do what you're trying to do in Go?


> Go promised me that I wouldn't have to deal with such weird specific knowledge of its semantics.

Where, specifically, did Go promise you that? I know of no languages where you don't, sooner or later, have to have weird specific knowledge of the semantics.


In all of its marketing and propaganda? Constant comparisons to languages like Java bemoaning boilerplate and idiosyncrasies?

I think it's perfectly fair to say that this behavior is not in line with Go's philosophy.


One company asked me to write a wrapper around database/sql that adds a connection pool. This is pretty easy until you want to implement any function that returns Row, which you just can't, because you can't make one of those. Amazing.


Take care, you may be wrapping your connection pool around a built-in connection pool. Furthermore, some dbs might not like long-lived connections.

http://go-database-sql.org/connection-pool.html


Sorry, my original reply was unclear. The connections would be to different replicas, and it was just a hiring task rather than something one would actually use in production.


A type alias binds a new name to a type. All of the type’s methods are available through the new name.


I fixed my comment. I didn't mean to have equal signs in there.

Proof that `type X Y` causes the issue: https://play.golang.org/p/erfcSIe-Z7b


The type definition `type X Y` declares new type X with the underlying type of Y. X and Y share underlying types and nothing else.

This is a useful feature and there's nothing weird or special case about how this works. It's just not the aliasing feature you expected.


It's still shitty and weird behavior. Not simple at all.


Type aliases (type X = Y) are a niche feature and not meant for this use case.

If you had done e.g. type X Y you would have had more success.


I think the OP simply wrote them the wrong way around, because `type X = Y` would not have broken anything as it'd make X a trivial alias to Y, and thus would not be losing anything. Even the reflected name doesn't change from the original.

type X Y however does create an entirely new type which is physically identical to the original but logically unrelated (without any of the methods or anything).

Which is fine in the sense that it's what OP was looking for (completely independent types) but less so in that it doesn't forward any method, and it's not necessarily clear how you'd do that (type conversion, which looks really shitty when it involves pointers)


Sorry I miswrote. I meant `type X Y` by "type alias" (terminology from other languages)

`type X Y` is still affected - you will 100% _not_ have more success :)

Proof (using JSON this time): https://play.golang.org/p/erfcSIe-Z7b


> `type X Y` is still affected - you will 100% _not_ have more success :)

type X Y neither inherits methods nor forwards anything (which is what struct embedding does), so you need to know about the protocol (which you're not told about, because it decides to just encode the bytes), and then you need to implement its methods on the newtype, explicitly forwarding them to the underlying one.

TBF this is a pretty awful case as Go just goes and generates garbage without prompting.


golang's broken interface design also encourages hacks (which lead to difficult to track down bugs) like this: https://github.com/golang/go/issues/16474


If you imagine a spectrum of languages from sloppy-but-"easy" to precise-but-"hard", with something like Python or Ruby way off on the left and something like Rust way off on the right, Go is sitting somewhere in the middle.

And so if what you're craving is absolute precision and maximal avoidance of errors or incorrect behavior, then Go is not going to be your jam. I sympathize w/ that.

That said, these specific complaints don't strike me as that bad.

- Filesystem perms exposed on windows, which just no-op. This seems pretty reasonable, though!

- Filesystem paths represented as str type, which is assumed to be utf8, but doesn't have to be. This also seems reasonable! If you want to check for invalid utf8 and specifically print out something special in that case, nothing in Go is stopping you from doing that. This is a classic "easy but sloppy" vs "hard but precise" tradeoff.

- Timeout thing -- I'm a little confused here, or maybe not up-to-date. He says let's do things the "modern" way and pass a context to do HTTP timeouts, which apparently doesn't work, and then goes off on a 3rd party package to then fix this which has an insane dependency graph. But...if you just set the Timeout field on the http client, everything works correctly. So what's the problem? Or am I missing something?


Often these choices are 2 dimensional, but people don't see it because we can always agree that there's one quadrant that nobody wants, but usually there's another quadrant that some people really want, and others need but don't think they want. It gets ignored and everyone behaves as if X = Y when in fact it should be X <= Y

As a concrete example, I was struck by something in an interview where the consultant pointed out that easy to implement functionality gets copied by your competitors quickly. Differentiating features are ones that are very valuable but tricky to get right. But nobody wants to prioritize those and so (my words) whole industries are boring dystopias of cheap features with no kick. You should want to implement some features that are worth far more than the trouble of implementing them, regardless of how much trouble it is.

Similarly, getting a concise design may be one of the hardest things we can do. So we end up with naive or baroque most of the time. When someone stumbles onto something better a bunch of us copy them in the next generation of tools, but the inspiration/perspiration balance is very evident in the slow rate of change we see.


The http client isn't always directly exposed. I agree with the author. Context is a per request object and timeout should be able to function on a per request basis. Client is often shared and reused, and thus not always exposed in certain design patterns. If context has a timeout why doesn't it work as you would expect?

Also, now that I think about it, why does the basic http.Get call mentioned in every go networking tutorial not have a default timeout?


(I have never personally used context, so I'm not so sure what the expectations are with that.)

Looking at the http docs, I don't see any reason to believe setting a context for a request would control timeouts.

If the complaint is, "the http library API does not provide a way to set timeouts on a per-request basis," then OK, I guess, that's true, but I don't see why that should be a huge issue (just use different clients for the different timeout values you need).

But if you really don't want to do that, it should be easy enough to access the underlying network connection and set the timeout before reading the body, though I've never done this.

What Go is doing here still seems very reasonable from my perspective...


Author here, the article was actually wrong - I meant to expose yet another timeout you can set on HTTP requests, I've updated it to include that one, and be clearer on what `idletiming`'s purpose is.


If Go is somewhere in the middle and Rust is on the right then Nim is somewhere in between Go and Rust.

Personally I think that’s the sweet spot, and now that Nim is at 1.0 there is no excuse not to give it a try. As nice as Rust’s compile-time memory management is, it’s very often overkill.


> And so if what you're craving is absolute precision and maximal avoidance of errors or incorrect behavior

If you are being paid to develop software, doing anything other than aiming for absolute correctness seems negligent, at best.

I think this is part of what leads to obsession with Rust. We build so many things on a daily basis with a long long list of 'it depends'. But Rust aims to make you write something as correctly as possible, and provides a really solid base for you to do this. So that list of 'it depends' shrinks drastically and you feel superhuman for building something so solid.


Your statement about absolute correctness doesn't really make sense - the vast majority of bugs in all software I've ever used are not due to language design, but to blatant mistakes by the application developers.


Well exactly. You and your parent's comment are in agreement: languages like Rust that give you fewer ways to shoot yourself in the foot encourage writing more correct code.


The premise of the article is about incorrectness in a language. So while I agree that most bugs are likely caused by the developer and not the language they are using, I think my comment makes sense in reference to the main post.


You don't have to call Python sloppy just because Go sucks at some things.


Weakly typed is sloppy.


Python is a strongly typed, dynamic language!!!


There are two technical conversations I don't have because everybody just gets mad and nothing gets resolved. (Note to reader: if you haven't guessed already, this means I am not going to be reading replies to this thread and certainly not responding to them. Go outside and get some air.)

One, the Monty Hall problem. You either get it or you will die on a hill of misunderstanding. I've never seen anyone's mind be changed (I had to change my own mind). Statistics are really fucking hard.

Two, that the differences between the Java Language Spec and the Java Virtual Machine spec mean that Java is not quite as statically, strongly typed as you think. There is code that you cannot (re-)compile that runs just fine, for some useful definitions of 'fine'.

To support lazy loading of classes, and reduce inter-version dependency hell, the first invocation of every function is dynamically dispatched, and the result is memoized. It's not Duck Typing, but it isn't link-time resolution either. It's sort of a Schroedinger's Cat situation. Until you open the box it could be anything. The first Generics implementations and later generations of code obfuscators (ab)used the hell out of this. In fact I don't think Pizza (Java 1.1 era generics prototype) worked without it, and some languages-on-the-JVM may have been intractably slow.


Off topic: I’ve had success explaining the Monty Hall problem by generalizing it to, say, 10,000 doors, where Monty opens 9,998 of them before allowing you to switch. People seem to intuitively understand that it’s extremely likely that the prize is behind the other door.


You'd think that this would make it evident, but every person I've said this to said "no, you still have a 50-50 chance". I just give up after that.


At that point, the only thing to do is set up 20 playing cards and offer them a prize of $100 for every $10 they wager.


Java uses lazy linking, but Java's static type system is sound (modulo some bugs [1]). The Java VM type system is different from that of Java the language, but it is also sound.

[1]: http://wouter.coekaerts.be/2018/java-type-system-broken


this is the only comment on this thread that gave me any pleasure to read. thanks for sharing


'Dynamic typing' is to 'typing' as 'emotional intelligence' is to 'intelligence'.


No, Python is not strongly typed by any serious definition of the concept.


Strongly typed:

    >>> "foo" + 3.141
    TypeError: can only concatenate str (not "float") to str
    >>> object() + 3.141
    TypeError: unsupported operand type(s) for +: 'object' and 'float'
Weakly typed:

    > "foo" + 3.141
    "foo3.141"
    > Object() + 3.141
    "[object Object]3.141"
    > [] + {}
    "[object Object]"
    > {} + []
    0


    > "foo" + 3.141
    "foo3.141"
How is this weakly typed when it uses type information to work?

No, this is strongly typed.


[flagged]


You are conflating weak/strong-typing with static/dynamic-typing. These are largely orthogonal.


    fn main() {
       let x = 1;
       let x = "foo";
    }
Does this make rust not strongly typed? Here's a Rust program with no types in the source code.

It seems the issue you're objecting to is that python doesn't differentiate variable declaration from assignment (The fact we need let twice in this code is a result of Rust doing this). Which is a fair thing to complain about (and why Python had the "nonlocal" and "global" keywords), but is not the same as being strongly or weakly typed.


That's not an identical translation, the identical Rust would be

    fn main() {
       let mut x = 1;
       x = "foo";
    }
which indeed fails to compile with a type error.

(That being said there is a conflation of static/dynamic and weak/strong going on in this thread, as there always is in these kinds of discussions.)


I'd disagree that this is a better translation. In python-land, `x` is just a name binding. The closest thing might be that `x` is something akin to a Box<T>, but I don't know that that's cleanly expressible in rust.

Like in (modern) python you can totally do

    def foo():
        x: Union[str, int] = 1
        x = "foo"
which would be akin to what in rust? (sorry, my rust-fu isn't great).

Specifically the semantics don't work here because if you do ~this:

    def foo():
        x = 1
        async takes_int(x)
        x = "foo"
this will always work fine in python (even in a hypothetical GIL-free python, even if you make the assignment actually async), whereas that wouldn't work in rust if you pass a mutable ref to takes_int (at least if memory serves).

Or I guess another way of putting this is that names in python can't be mutable.


No, the name is definitely mutable. Consider:

  x = 1
  capture = lambda: x
  x = 'foo'
  print(capture())
If python were just shadowing, this would print 1. But it prints foo.


That's an issue of scoping, not capturing. The x in the lambda isn't scoped to the lambda, it's scoped to the surrounding environment.

So the x closes not over the lambda but the outer scope. So it's as expected given shadowing.

Edit: Since I'm getting throttled:

No, I'm saying that scoping rules are different in python and rust.

In Rust (and cpp) there's the concept of scopes/closures as a first-class feature. This concept doesn't exist in python (python has namespaces instead, I guess; there's no good terminology here).

See https://stackoverflow.com/questions/2295290/what-do-lambda-f.... This is due to weird scoping, not name mutability.

Second edit:

Well ok on second thought I see where you're going here. I was trying to avoid thinking about copy-on-scope-change behavior, but you actually do have to consider that and you're right.


Shadowing is when you have two separate variables with the same identifier. It is beyond obvious that all the x's refer to the same variable in dilap's example. Contrast that with an actual example of shadowing[0], in which it is clear that the same identifier is being used to refer to two different variables.

[0]: https://en.wikipedia.org/wiki/Variable_shadowing#Python


I'm using shadowing in the rust sense, not the python sense.


They are the exact same sense!

The exact same idea of shadowing is also present in, for example, sentences in first order logic.


Setting aside any terminology for a second, consider this rust program:

  fn main() {
   let x = 1;
   let capture = || x;
   let x = 2;
   println!("{}", capture());
   println!("{}", x)
  }
This will print 1 and then 2, whereas python would print 2 and 2.

Hence, you can see that the formulation "let mut" is equivalent to python, not "let" followed by "let".

Here's the rust program that prints 2 and 2:

  fn main() {
   let mut x = 1;
   let ptr = &x as *const i32;
   let capture = || unsafe{ *ptr };
   x = 2;
   println!("{}", capture());
   println!("{}", x);
  }
(I had to use unsafe, otherwise the borrow checker will complain when I modify x from underneath the closure; maybe there's a more elegant way to make the same point -- I don't really know rust...)


Actually hmm, I may want to take back my earlier comment. There are multiple things at play. There's scoping (where rust will copy across scope boundaries for non-ref types, which allows closing over something as in your first example above).

Then there's mutable refs and mutable variables, which as hope-striker mentioned I was confusing, possibly because I was using ints in my example. If instead we used a vec:

    fn main() {
     let x = vec![0, 1, 2];
     x.push(3); // fails since x isn't mutable
    }
There's no clear direct related concept here by default. If we're allowed to use pytype, you get this:

    def main():
      x: Sequence[int] = [1,2,3]  # Sequences aren't mutable
      x.append(3)  # fails to type-check, since Sequence has no append
Cool, so mutable and immutable values are possible in both langs. What about refs? Well we went through that one, if you pass a mutable ref to a function in rust, you can modify the ref in ways that just aren't possible in python:

    fn main() {
        println!("Hello, world!");
        let mut x: i32 = 3;
        modifies(&mut x);
        println!("{}", x);
    }

    fn modifies(x: &mut i32) {
        *x = 5;
    }
There's nothing analogous to this in python. Everything is always passed as a mutable "value"[1], nothing is passed as a ref.

Cool so that's mutable variables and mutable references. That leaves this weird scoping issue. In rust (and in cpp) there's lots of scopes. Any set of braces creates a new scope, and so shadowing can happen across scopes. Lambda capture/closure happens over the scope. A given scope binds a name to a value, or a set of names to their values.

Python's a bit different, only new names are created in the scope. If a name isn't accessible in the given scope, the name is pulled from parent scopes etc.

So for the capturing behavior you want, there's weird nonlocal stuff that needs to be done, or you can explicitly make an additional scope, which removes the wonky behavior. If the name were really mutable, you'd be able to change what x referred to in the enclosing scope, which you can't.

tl;dr: This isn't mutable names, it's python's (admittedly abnormal) scoping rules.

[1]: Unless you add in mypy or whatnot, where the typechecker will prevent you from modifying something that is non-mutable, but unlike in rust this isn't done with mutability as a first-class citizen; it's just that some interfaces expose mutating methods (`append`) and some don't. You can pass a list to a function that expects a list or a sequence, and the first case is mutable, while the second isn't.


Python's scope & mutability rules are idiosyncratic, but that's a distraction from what's going on here.

Let's go back to steveklabnik's ancestor comment:

"That's not an identical translation, the identical Rust would be:"

    fn main() {
       let mut x = 1;
       x = "foo";
    }
He was saying the identical Rust would not be:

    fn main() {
       let x = 1;
       let x = "foo";
    }
These are being compared to the following Python:

    x = 1
    x = "foo"
So consider these slightly enhanced versions of the fundamental question posed above.

Python:

  def mystery_py():
    x = 1
    capture = lambda: x
    x = 2
    return x * capture()
Rust:

  fn mystery_a() -> i32 {
     let x = 1;
     let ptr = &x as *const i32;
     let capture = || unsafe{ *ptr };
     let x = 2;
     return x * capture();
  }
  
  fn mystery_b() -> i32 {
    let mut x = 1;
    let ptr = &x as *const i32;
    let capture = || unsafe{ *ptr };
    x = 2;
    return x * capture();
  }

If you compare return values, you will find that mystery_py() returns the same as mystery_b().

So! I think you must agree that steveklabnik was right -- the rust code that is equivalent to the python code is the "let mut" variant. (Because surely you would not argue code that returns a different value is equivalent?!)

So now the question is, why?

Rather than answer, I will trollishly pose 2.5 more questions:

What would an implementation of mystery_a and mystery_b look like in scheme?

Would it be possible to author mystery_b in scheme if your "!" key was broken? (How about in some other purely functional language?)


I disagree that those are doing the same thing. I propose that the actual answer is c:

    use std::cell::RefCell;

    fn mystery_c() -> i32 {
        let x = RefCell::new(1);
        let capture = || x.borrow();
        x.replace(2);
        return *x.borrow() * *capture();
      }
  
    fn main() {
        println!("{}", mystery_c())
    }
Which is what I meant when I said that Box<T> might be the analogous thing (I guess it's actually RefCell, whoops!). And note that in this case, x is immutable :P

That said I accept your broader point, the effect is that python names act like mutable rust names, although the reality is slightly more complex (my final example is, I believe, the closest to actual reality).


You lost me, boss! Why do you think mystery_c is closer to mystery_py than mystery_b?


Let's go on a journey.

The answer is that I started with a hunch. You're treating x as a pointer sometimes, and a value other times. That seems strange, and unlike the python. In python the thing is always accessed the same way; it isn't a ptr type sometimes and a value type others.

So first let's talk about scopes. In python, you aren't introducing a closure. If we do introduce a closure, like with an IIFE:

    def mystery_closure():
        x = 1
        closure = (lambda v: lambda: v)(x)
        x = 2
        return x * closure()
suddenly we get 2. The IIFE/outer closure here is equivalent to the capture happening in rust. So this is more equivalent to the rust examples than your python example. Closures are what matter, not variable mutability.

Cool, so now let's add another wrinkle: `i32` in rust isn't a mutable type, there are no mutating methods on an i32. What happens if we use a type that has mutating methods, like a vec?

Let's start in python, since python doesn't allow multiline lambdas, we have to swap to using an inner function, which is fine, this makes the structure a bit clearer in python.

    def mystery_mutable():
      x = [1]
      def closure():
        def inner(v):
          v.append(2)
          return v
        return inner(x)
      x.append(3)
      return x + closure()
And what if we do the same in rust? Well, we have to mark x as a mutable ref:

    fn mystery_b() -> Vec<i32> {
        let mut x = vec![1];
        let ptr = &mut x as *mut Vec<i32>;
        let capture = || unsafe{ (*ptr).push(2); 
                                  ptr };
        x.push(3);
        unsafe { x.extend(capture().as_ref().unwrap().iter()); }
        return x
      }
So the python value is a mutable ref, right? Well no, we're back to the whole issue of the closure being able to modify things outside itself in rust with a mut ref that we can't do with python:

    def mystery_mutable():
      x = [1]
      def closure():
        def inner(v):
          v = [5]
          v.append(2)
          return v
        return inner(x)
      x.append(3)
      return x + closure()
This returns [1,3,5,2] in python. If you translate it to rust with a mutable ref pattern, you'll get [5,2,5,2] and the 3 will just disappear:

    fn mystery_mutable() -> Vec<i32> {
        let mut x = vec![1];
        let ptr = &mut x as *mut Vec<i32>;
        let capture = || unsafe{ (*ptr) = vec![5,2];
                                  ptr };
        x.push(3);
        unsafe { x.extend(capture().as_ref().unwrap().iter()); }
        return x
      }
So in python, the thing isn't a const ref, but it's not a mutable ref, either, and it's certainly not a value type.

In languages like rust and cpp we describe calls as pass by reference or pass by value. Pass by value is mostly irrelevant here. When passing by reference, you can use a mutable or immutable reference. Immutable references don't allow you to modify the object, just read it. Mutable references allow you to modify or replace the object. With normal pointers and references, if you're able to modify the referenced object you can also replace it with an entirely new object.

The reasons for this are tricky, but have to do with self references in methods (self/this has to be mutable for a mutable method to work). In rust and cpp the self reference is exposed, so you can make it point elsewhere. In python you can't do this. This means that it's tricky to pass an immutable reference to a mutable object in rust/cpp, but in python this is the only way things get passed around.

Rust calls this "interior mutability", and RefCell is the way to do interior mutability with references, as opposed to copyable types. The docs for RefCell actually call out passing &self to a method that requires mutability[1] as a use for RefCell, so in general you could use the RefCell to implement a python-like set of containers that can be passed "immutably" and still modified internally. In Pseudo-rust:

    struct PyVec<T> {
      backing_arr: RefCell<Vec<T>>
    }

    impl<T> PyVec<T> {
      fn push(&self, v: T) {  // This isn't mutable?!
        self.backing_arr.borrow_mut().push(v);
      }
      ...
    }
Which would match python's semantics very closely

[1]: https://doc.rust-lang.org/beta/std/cell/index.html#implement...


Quality content! You should write this up into a blog post or something.


> The IIFE/outer closure here is equivalent to the capture happening in rust. So this is more equivalent to the rust examples than your python example.

Wait, I don't follow. Rewriting your example to only use one lambda for clarity, we have:

    def mystery_closure_one_lambda():
        x = 1
        def capture(v):
            return lambda: v
        closure = capture(x)
        x = 2
        return x * closure()
        
So notice the lambda (i.e. what we are assigning to the variable 'closure') is now capturing v, not x, which is why it doesn't see the change we make to x, i.e., why it returns 2 instead of 4.

But this is not equivalent to the rust code! There is no v at all in rust. We are capturing x! (It's slightly obscured by the fact that we have to use an unsafe ptr to defeat the borrow checker, but we are still capturing x.)

So I do not think mystery_closure is equivalent to either of the rust mystery_a or mystery_b above; it is in fact equivalent to this:

  fn mystery_closure() -> i32 {
      let mut x = 1;
      let closure = (|v| move || v)(x);
      x = 2;
      x * closure() 
  }
Which also returns 2, just like the python code. (It's also a direct translation of the python code!)

> Let's start in python, since python doesn't allow multiline lambdas, we have to swap to using an inner function, which is fine, this makes the structure a bit clearer in python.

Careful! -- your de-lambda-fication accidentally changed the semantics. If we just de-lambda-fy, we get:

  def mystery_int():
      x = 1
      def closure():
          def inner(v):
              return v
          return inner(x)
      x = 2
      return x + closure()
Which returns 4, showing it's definitely not equivalent. The correct de-lambda-fication is:

    def mystery_closure_no_lambas():
        x = 1
        def capture(v):
            def inner():
                return v
            return inner
        closure = capture(x)
        x = 2
        return x * closure()
(which as a sanity check, returns 2, as it should).

Bringing in mutable reference data types like vec is I think not really relevant to what's at play here.

In both rust and python, the non-reference types mut i32 (rust) and int (python) are mutable. In rust you can pass a mutable reference to an i32, and in python you can't, but so what; that's not really relevant.

DIGRESSION:

Just for funsies, you actually can achieve what are essentially mutable references in python3 (you could also do this in py2 if you wanted to get nasty with locals()):

  # a mutable reference to a local variable
  class Ref:
      def __init__(self, getfn, setfn):
          self.getfn, self.setfn = getfn, setfn
      def get(self): return self.getfn()
      def set(self, v): self.setfn(v)
      value = property(get, set, None, "reference to local variable")

  # change a local variable using the mutable reference
  def mutate(ref, new_value):
      ref.value = new_value

  def mystery_py_mutable_ref():
      x = 1

      # get a mutable reference 'ref' to x
      def get(): 
          return x
      def set(v):
          nonlocal x
          x = v
      ref = Ref(get, set)

      # capture x in a closure
      capture = lambda: x

      # mutate x
      mutate(ref, 2)

      # finally evaluate x and the closure; this will return 4!
      return x * capture()
END DIGRESSION

But anyway, I don't think it's actually relevant here.

Question: Are you familiar with scheme? Would you agree or disagree that the following scm_mystery_a and scm_mystery_b are equivalent to the rust mystery_a and mystery_b functions?

  (define scm_mystery_a 
      (lambda ()
          (let ((x 1))
          (let ((capture (lambda () x)))
          (let ((x 2))
          (* x (capture)))))))

  (define scm_mystery_b
      (lambda ()
          (let ((x 1))
          (let ((capture (lambda () x)))
          (set! x 2)
          (* x (capture))))))

  (display (scm_mystery_a)) (newline)
  (display (scm_mystery_b)) (newline)


> So notice the lambda (i.e. what we are assigning to the variable 'closure') is now capturing v, not x, which is why it doesn't see the change we make to x, i.e., why it returns 2 instead of 4.

Yes, but this goes back to the scoping issue: in python, lambdas (and functions in general) don't capture. The only way to close over something is to pass as an argument. So to get the lexical closure behavior that rust provides, you have to add extra stuff in the python. This indeed makes the translations not mechanical (and you can add the lambda back in the rust, it doesn't hurt anything in these examples), but to get matching scoping behavior between rust and python, you need an extra layer of indirection in the python.

> Bringing in mutable reference data types like vec is I think not really relevent to what's at play here.

Of course it is, because in python everything is a reference. There's no such thing as a value type, and this is precisely where the difference in behavior comes in (other than the scoping issues). A rust RefCell is the thing that most naturally matches the actual in memory representation of a PyObject.

As for your digression, eww, although you forgot to actually do the sneaky part. This would be the actual demonstration: you need to modify the list in the closure (a real closure), and set it after the closure is created and before it is evaluated:

      # capture x in a closure
      def closure(v):
        def inner():
          mutate(v, 2)
          return v
        return inner

      capture = closure(ref)

      ref.set([3])

> Are you familiar with scheme?

Unfortunately not.


Run my examples, they work! Python absolutely has real closures...

You seem to be confusing mutable variables with mutable references. A name, in Python, is a mutable cell that holds a reference. Python names definitely correspond to mutable, not immutable variables in Rust.
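
The disagreement boils down to something directly observable: a Python closure captures the variable (the cell), not the value it held at definition time. A minimal check:

```python
def demo():
    x = 1
    f = lambda: x   # closes over the variable x, not the value 1
    x = 2           # rebinding x is visible through the closure
    return f()

print(demo())  # 2
```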


Well no, for the reason I describe above: if you have the pattern

    mut a = 4
    f(a)
    print(a)
In rust and python, you'll always get 4 in python, but the value in rust depends on `f`.

This means that the passed variable is immutable but shadowable, as in rust. (An object in python is much more like a Box/Cell, so the contained object can be mutated, but the reference to the box itself is immutable).
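
That Box/Cell intuition can be demonstrated in a few lines: rebinding a parameter never reaches the caller, while mutating the object it refers to does.

```python
def rebind(a):
    a = 99          # rebinds the local name only; the caller is unaffected

def mutate(a):
    a.append(99)    # mutates the shared object; the caller sees the change

x = 4
rebind(x)
print(x)            # 4

lst = [4]
mutate(lst)
print(lst)          # [4, 99]
```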


The value in Rust does not depend on f.

I will be precise. There is no definition of f such that this function will print anything other than "4".

    fn main() {
        let mut x = 4;
        f(x);
        println!("{}", x);
    }
Again, you seem to be confusing mutable references and mutable variables. If I had written f(&mut x) rather than f(x), you would be right.


I was incorrect here! I don't know Python as well as I thought. Thanks for all the discussion, folks.


I still think you're correct, actually!

Cf. https://news.ycombinator.com/item?id=22448289


Yes, obviously, Python isn't statically typed. I fail to see how this demonstrates a lack of strong typing though.


name shadowing isn't the same as strong typing, you can do the same thing in rust today

https://play.rust-lang.org/?version=stable&mode=debug&editio...


Reassignment isn't shadowing. See https://news.ycombinator.com/item?id=22444824 .


Steve's wrong. Python names can't be made mutable.


Here is the most common definition of strong typing, and below it an assertion that "Smalltalk, Perl, Ruby, Python, and Self are all strongly typed".

https://en.wikipedia.org/wiki/Strong_and_weak_typing#Implici...


That is quite a disingenuous quote... the actual content is:

> Smalltalk, Perl, Ruby, Python, and Self are all "strongly typed" in the sense that typing errors are prevented at runtime and they do little implicit type conversion, but these languages make no use of static type checking: the compiler does not check or enforce type constraint rules. The term duck typing is now used to describe the dynamic typing paradigm used by the languages in this group.

That is quite a qualified usage.

The only feature distinguishing Python from Javascript here is that Python does less implicit type conversion (where it is reasonable vs. where it is insane). In every other dimension it is the same.


Do you have an example of Python doing implicit type conversion?


Python2 implicitly converts int to long and str to unicode:

    >>> 2**100
    1267650600228229401496703205376L
    >>> 'x' + u'y'
    u'xy'
All Pythons implicitly convert int to float and int or float to complex:

    >>> 1 + .5
    1.5
    >>> 2 * 3j
    6j
Since Python 2.1, methods like list.extend accept arbitrary iterables rather than just lists; it's more debatable whether this is an “implicit type conversion” or not.

    >>> x = [3, 4]; x.extend((5, 6)); x
    [3, 4, 5, 6]


Thanks for the examples. I was mainly thinking of int-float conversion that is present in the vast majority of languages.


Curious: what is your definition of strongly typed, contrasted with weakly typed? How about static vs dynamic?


I'd guess that parent doesn't agree with calling a duck-typed language strongly typed. If so, I concur.


Isn't the existence of TypeError and the various things you're not allowed to do implicitly (eg: 1 + "a", something Javascript will happily let you do) a definition of strongly typed?
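
The behavior in question is easy to observe; Python 3 raises where JavaScript would happily coerce to "1a":

```python
# Python refuses the mixed-type addition that JavaScript coerces:
try:
    1 + "a"
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'str'
```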


Javascript has type errors too, and Java will let you 'add' a string to an integer, so it's more nuanced than that...

Javascript:

    > null.bob()
    
    TypeError: Cannot read property 'bob' of null

    > 4 >>> Symbol("four")
    
    TypeError: Cannot convert a Symbol value to a number

    > BigInt(null)
    
    TypeError: Cannot convert null to a BigInt

    > Object.create(false)
   
    TypeError: Object prototype may only be an Object or null: false
Java:

    int anInteger = 10;
    String s = anInteger + "Hello";


If it's qualified as a "serious definition," they're setting up a No True Scotsman argument.


It is in that you can't "peel off" the type system using casts, as you can in C and Java.


Sucking at certain things is sloppy. Weak typing is not. Python is a much better language than Go in almost every aspect.


I think the mistake may be assuming that Go is meant to be a general-purpose language. From what I can tell, it's purpose-built to be a "web services" language, and its design-decisions center around that. What does that mean?

- It's expected to be run on Linux servers (not Windows) and developer workstations (probably not Windows).

- It needs to be fast but not blisteringly fast. Micro-performance concerns like the Time object thing are devalued.

- Embedded use-cases are probably not given too much attention.

- Agility in working with dynamic data (because that data is often foreign) is valued over flawlessly safe types.

By deciding not to worry about certain use-cases, the language can be more developer-efficient for its intended use-cases. In this light, for better or worse, I think the decisions made make a lot more sense.


If the language was meant to be run on unix machines for web services, it shouldn't support anything else in a half-done manner with silent errors and corruption.

The situation as it is now is just poor language and/or library design - choosing to support different operating systems with an API that requires the wrong thing to happen in some cases.


If your workstation does happen to be Windows, support-with-edge-cases may be plenty good enough for you to get your work done, and if you run into a problem on your dev machine, it's much less of a big deal.


Except docker is written in go. Guess they never got the memo to not use go for non-webservices...


Fair, although it has very similar constraints/goals to those listed above


Docker’s primary use case is also web services.


Ubuntu is used (mostly?) for web services. Is Ubuntu suitable to be written in Go?


Umm... what? Ubuntu's primary use case was not web services; it was a user-friendly PC OS compared to the Linux variants at the time.

It's picked up a lot in the server space because of the familiarity of it, with respect to package management et al.


My most popular project on Github is currently a program I slapped together in Go a long time ago. The `sync/atomic` issue mentioned at the end of the article is THE issue that made me stop considering Go for anything other than trivial things. Lack of decent error handling, a terrible builtin json library, constant `interface{}` to poorly substitute for generics, the package management issues that made Node.js look well-thought-out by comparison, struct field tags, and generators provided by the core team that set off linters provided by the core team with no good way to silence them kind of piled on before that, but the `atomic` issue is the one that made me avoid it. The author is right, all the little things add up.

Note that a bunch of these may have been fixed since I last used it, but honestly, I haven't checked because it was frustrating working in it and debugging it. It's a shame, `pprof` and the race detector are pretty cool.


Holy shit, the entire explanation of the absurd reasoning behind needing to use the getlantern/idletiming lib, and the debacle behind unraveling its dependencies, is pure gold. When the breadcrumb trail to dependency-hell stems from a file whose contents are:

    // This file is intentionally empty.
    // It's a workaround for https://github.com/golang/go/issues/15006
I about fell out of my chair. Pure gold.


I don't understand why you would create a statically typed language but not actually take advantage of types, instead typing everything with generic types like string. Why make the user pay for complexity in types but not actually deliver their promise? This is the problem with C and Go doesn't really solve it either


We just ran a pretty high profile 20 year experiment with Stringly Typed languages - Java.

Generally we try to avoid the mistakes of the previous generation (and make the same ones as the one before that, half the time) so this is confusing to me.

I wonder how many Android contributors he had working with him while these decisions were being made.


There is a sweet spot and for different people, that spot lies in different places.

Having a proliferation of types is bad for everyone but the highest-order FP Weenies among us.


> The Go way is to half-ass things.

This used to be known as the New Jersey school, and is the underlying philosophy of Unix: build a bunch of little pieces that work a lot of the time and kind of fit together if you remember the gotchas, then call it a day. There is an essay on this that I am unable to locate right now which mentions the horror of someone working on ITS when they asked how Unix solved a rollback on error case in a system call and were told, "Oh, we just leave it inconsistent, and the application programmer has to deal with it."

Does anyone else remember this citation? I truly am failing to find it this morning.


Sounds like it might be straight from the original "worse is better" essay:

http://dreamsongs.com/RiseOfWorseIsBetter.html

> The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.


Ironically, that is referring to the EINTR error code that I predict is about to cause a bunch of unexpected failures when people switch to Go 1.14. [0]

[0] https://golang.org/doc/go1.14#runtime
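
The "check the error code and try again" contract described in the essay is the classic EINTR loop. A sketch in Python (note that since PEP 475, Python 3.5+ retries most interrupted system calls for you, so the except branch mostly illustrates the pre-3.5 / C-level pattern):

```python
import os

def read_retrying(fd, n):
    # Retry a system call that may be interrupted by a signal (EINTR).
    # On Python >= 3.5 the interpreter usually retries automatically
    # (PEP 475); in C, this loop is the application programmer's job.
    while True:
        try:
            return os.read(fd, n)
        except InterruptedError:
            continue  # the New Jersey answer: "just try again"

r, w = os.pipe()
os.write(w, b"hello")
print(read_retrying(r, 5))  # b'hello'
```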


Oh, of course. Thank you!


Worse Is Better is a form of prioritization, especially in the face of changing requirements. Right Thing assumes you not only have lots of time to polish every piece, but that the goal is a fixed point you've already decided upon, so none of your work is going to be wasted.


jwz is redirecting this link based on referer, you might want to not click on it (copy/paste works).


Does anyone know why he dislikes HN so much?

Tbh I kind of like the attitude and I warmly recommend just clicking the link :-)


Classic: “There are only two kinds of languages: the ones people complain about and the ones nobody uses.”[1]

The thing that bugs me is the comparison to Rust. I mean, the author did caveat that he chose it because Rust provided the best available counterexamples to his specific gripes. But my issue is that the comparison seems to invite a false conclusion: that Rust is better. My intuition says that if the author had used Rust (or any other language) as much as they have used Go, in the same environments and solving similar-sized problems, they would have a completely different 1000+ word rant on all the things they hate about that language.

We have an expression "use in anger". It describes a particular kind of understanding that only becomes available when we face the real problems and not just idealized ones. I even see smaller rants within this comment section showing how the very systems he lauds in Rust have sharp corners when used in anger.

I thought this rant had many good points and highlights many shortcomings of Go. I would have preferred that it did not contain the comparison which draws an implicit conclusion that IMO is likely incorrect.

1. https://www.goodreads.com/quotes/226225-there-are-only-two-k...


Rust is better _at the problem presented_.

Rust not being perfect does not mean other languages can't learn from its successes.


> Rust is better _at the problem presented_.

What I'm suggesting is that wasn't demonstrated. Go had a real-world used-in-anger problem. That was compared to an idealized solution in Rust. It seems to me that this is an unfair comparison.

Fair enough, it is hard to demand anyone who wishes to make a comparison between two programming languages to have built equivalent massive systems that stretch each language to their limits. But the point of the article wasn't to compare languages, it was to show the kinds of problems exposed in Go when it is used in massive real-world systems. So maybe it would have been better to leave the comparison out.


> What I'm suggesting is that wasn't demonstrated.

An in-depth analysis of the respective languages APIs for a particular targeted problem isn't enough?

I won't disagree that readers might make a leap to intuit the author thinks Rust is overall better. But that extra leap doesn't mean he failed to show Rust was better at a particular problem. In fact, that is WHY people would make that un-warranted leap.

> But the point of the article wasn't to compare languages, it was to show the kinds of problems exposed in Go when it is used in massive real-world systems.

I mean, not really. Cross platform file manipulation is, maybe not common, but not obscure. And making a web request reliably is also not something you'd expect to be only needed in massive systems.


> An in-depth analysis of the respective languages APIs for a particular targeted problem isn't enough?

It isn't the same. There is another cheeky quote I can paraphrase: Everyone has a plan until they get punched in the face. He is comparing a Go implementation that has been punched in the face in a real-world use case against a Rust implementation that was sitting on the sidelines.

If the point of the article (and the title) was "Go file system API vs Rust API, an in-depth analysis" I would not have made my comment. The thesis of the article appeared to be "pains I felt in Go when I used it on hard real-world problems". All of his points seem to stand completely fine when you remove the comparisons to Rust. For that reason I would have preferred to remove them.


You mean the Rust language that has dozens of cross platform implementations of coreutils binaries? Go might be larger, but I hardly think Rust qualifies as sitting on the sidelines.

That's a fair opinion. I think the article is richer for having shown what a better API can look like for contrast.


No language is perfect but a couple of years ago I tried both. Learning Go was a constant stream of internalizing design failures and working around them (error handling, packages, etc.). Rust had some points which were harder to get started with but showed clear benefits for having done so — the difference between teaching you better habits versus not learning from C’s second greatest failure and viewing error handling as optional.


>There are only two kinds of languages: the ones people complain about and the ones nobody uses

Repeating this quote again and again won't make it correct.


Maybe I'm a zealot, but I don't really consider "doesn't work as well on windows" a con of a language.

C# is (or at least used to be) utter garbage on Linux compared to Windows. I don't hold that against C#, but rather recognize that Linux/Windows are very different, and that compiler maintenance and development is non-trivial (and obviously Microsoft is going to prioritize Windows).

This article is basically a rant that Go was designed with *nix in mind and that Windows is a second-class citizen by comparison.


I believe you missed half the article, as well as the main point: that hiding implementation details too much from API consumers is not generally a good thing, regardless of the system it's running on.

The OS is just one of many context-dependent axes of an application. Bad decisions made at that level are likely to recur on other parameters that matter more to you.


I'm not sure you can call it "miss" if the article tries to hide its thesis as well as this one. After you keep going about file API and windows for pages and pages, you may insist all you like that "the point is coming", but I'm sorry, I won't believe you.


C# support on Linux and macOS is actually quite good these days, especially in .NET Core. You run into some cross-platform issues if you want to go as deep as shell handling when trying to manipulate processes, though.


There's some stuff in there about it not handling valid edge cases on Linux particularly well, either, though.


But you knew C# was Windows only. Go was always touted as a cross-platform solution due to statically compiled binaries. If said binaries have issues on Windows due to design decisions, it seems like a language fault.


The binaries themselves don't have issues. But Go's decision to make the standard library Unix-focused isn't a flawed design decision.

If you need OS/platform-specific precision, you're free to create or use an alternative library. The standard lib was never designed to be the magic bullet for cross-platform, but the language internals, compilation, and execution do a good job for many platforms.

We need to keep in focus what the goals of each part of the language are intended for.


I don't think Go was ever really touted as cross-platform, beyond not having to deal with dynamic linking hell. After all, I can't take a random binary built by the Go toolchain for Linux and run it on FreeBSD, MacOS, or Solaris. The only portability gain you get from static linking and the only one Go was ever aiming for was across versions and distributions of the OS.


What is it then if not a con?

I mean, C# not running well on unix systems was immediate show stopper for a lot of projects but we can't call this a "con"?


It's only a con in the context of your use case and requirements. It's not a con for everybody, so in the general case I would instead call it a limitation.

Every piece of software in the world has limitations. The limitations are only cons in the context of your requirements. Is it a con of SQLite that it is missing features when using it with the JFFS2 filesystem? Maybe, depends on your use case. If your system doesn't use JFFS2, then it's not a con worth considering.


That's just word play.


I absolutely agree. The one time I had the great misfortune of building software for windows I was extremely happy to see Go worked at all.

Linux and OS X largely work the same way due to their shared Unix-ness and pretty much everyone I’ve ever met or talked with uses Go on one of those two platforms.

If you have to develop software primarily for Windows, maybe don’t use Go - it’s easily the least actively maintained OS target and there are many options for languages that are well supported on Windows by vendors who actually care. Kind of the same folly as trying to write an iOS app not in Swift or ObjC and then complaining it doesn’t work well.


No one expects iOS to run on Windows. But given that there's a Windows version of Go, it's reasonable that it should work.

And also, the points about metadata and path management are spot on. It's 2020. Languages should not be assuming that paths are byte strings.

Unix-think is a bug, not a feature. A good language should abstract the file system, not just put a teeny tiny wrapper of modesty around it.


> Languages should not be assuming that paths are byte strings.

Lots of people who know what they're talking about disagree:

https://yarchive.net/comp/linux/utf8.html
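
Python's compromise here (PEP 383) is instructive: paths are presented to the programmer as str, but arbitrary bytes survive a round trip via surrogate escapes. A minimal sketch (the '\udce9' result assumes a UTF-8 filesystem encoding):

```python
import os

raw = b"caf\xe9"                 # Latin-1 bytes, not valid UTF-8
name = os.fsdecode(raw)          # the stray byte becomes a lone surrogate
print(repr(name))                # 'caf\udce9' on a UTF-8 locale
print(os.fsencode(name) == raw)  # True: the original bytes round-trip
```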


Stuff exists on those file systems outside of your application, and at some point you'll have to interface with them.


For high-level languages you're writing user-facing apps in domains where your file handing needs are simple, I can see the appeal.

But of course you're ignoring the entire category of systems programming, cross platform apps that need something more than easy access to a file picker, integration code that frequently needs to deal with exactly the edge cases that these pretty abstractions ignore, etc. etc.

VB6 had its place. So does C.


> Languages should not be assuming that paths are byte strings.

They are in the real world and you have to live with that.

Screaming at people might sometimes have good results.

Screaming at reality never does.


Given that Mono, the first open source C# implementation, was used to build Banshee going back 15+ years, C# on *nix deserves more credit than it's given here.


absolutely, not sure why this dude is dumping on it? Has he even used it in the last 5 years?


Maybe I'm also a zealot, but this "second-class citizen by comparison" works pretty well - I've been using Go for some side projects under Windows for years, and was actually surprised how well it worked. I even developed a GUI app some years ago (using GTK). So, if you ask me, the glass is ~90% full, not 10% empty...


Not sure what you mean about C# on Linux. More than a decade ago it worked just fine on Linux using Mono. For a few years now it's worked just as well on Linux as it has on Windows, using dotnet Core.

Source: I've been developing using C# on both Windows and Linux for a very long time.


The gripe was not that it doesn't work well on Windows. It was about a culture of not caring about getting details right or valuing correctness. Windows was just used as an example of not caring about correctness. The time API was another.


Then don't support it at all, instead of trying to support it and failing halfway.


> Nine out of ten software engineers agree: it's a miracle anything works at all

There was a beautiful rant about a decade ago called something like "everything's broken all the time and nobody cares." The gist of it is that all software is written by people. Anyone who's written software knows that it's usually riddled with hidden corner cases, unfortunate tradeoffs, rushed deadlines, etc. Software is also moving into critical spaces like aerospace, medicine, banking, etc. The thrust of the article is that we're trusting more-and-more critical infrastructure to a discipline that anyone who's worked in knows is untrustworthy.

Does anyone remember the link to the article? I've often wanted to re-read it and share it with people, but I've never been able to find it.


> Software is also moving into critical spaces like aerospace, medicine, banking, etc. The thrust of the article is that we're trusting more-and-more critical infrastructure to a discipline that anyone who's worked in knows is untrustworthy.

"Anyone" who's worked in those industries knows SW can be done in a trustworthy way.

At least not less than other engineering disciplines.

"Hidden corner cases, unfortunate tradeoffs, rushed deadlines" in uncontrolled proportions are a symptom of a lack of discipline, either originating directly at the low level (perhaps mainly because of cultural influences, but then, what isn't?) or under pressure from the hierarchy. The same conditions can lead to critical failures in other kinds of engineering. One key point about critical failures resulting from hierarchy pressure is that it does not absolve the engineers doing the work, and some engineering cultures actually recognize and teach that. Other cultures mix everything in the same pot without even an ounce of ethics or serious reliability thinking, and you get people maintaining the myth that software just can't be reliable, that the whole industry, without exception, is in an eternal crisis, and that this is even normal because the field is "young". None of that is true; there are plenty of examples around you, and decades of history to study. And of course, we must remain exacting so that quality does not decline just because of a kind of self prophecy.


*self fulfilling prophecy.


I think you're thinking of Everything Is Broken by Quinn Norton https://medium.com/message/everything-is-broken-81e5f33a24e1



The first article that comes to mind for me is Programming Sucks, although it's not quite the same as your description.

https://www.stilldrinking.org/programming-sucks


Aren't basically all of these shortcomings of Go's standard library, not "Go the language"?

The Go standard library seems to be heavily geared towards doing work on the server side, and "server-side" essentially means "Linux" today.

If I'd need to write "client-side" cross-platform code that also needs to run on Windows, Go wouldn't be my first choice, also not my second or third.

And TBH, most other languages are not that much better (Python might be the only notable exception, and even this requires different code paths for "Windows vs the rest of the world" here and there).

For this type of cross-platform code, it's almost always better to talk directly to the underlying OS APIs and put those under a thin custom wrapper library instead of relying on the language's standard library.
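
As a concrete illustration of those per-OS code paths behind a thin wrapper, here is a hypothetical "is this file hidden?" helper (the function name is an assumption; the Win32 attribute constant is from the Windows API):

```python
import os
import sys

FILE_ATTRIBUTE_HIDDEN = 0x2  # Win32 API constant

def is_hidden(path):
    """One behavior, two OS-specific implementations behind a thin wrapper."""
    if sys.platform == "win32":
        # Windows stores "hidden" as a filesystem attribute.
        import ctypes
        attrs = ctypes.windll.kernel32.GetFileAttributesW(str(path))
        return attrs != -1 and bool(attrs & FILE_ATTRIBUTE_HIDDEN)
    # Unix convention: a leading dot hides the file.
    return os.path.basename(str(path)).startswith(".")

print(is_hidden("/home/user/.bashrc"))  # True on Unix
```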


A standard library says a lot about the language. Even if someone made a much better path/file handling library for Go, another one of its strengths are the ubiquitous interfaces you can rely on across libraries. Unless the superior library gained a lot of adoption really quickly, it would remain largely irrelevant in the face of the standard that was set years ago by the Go authors.


Look at Go’s HTTP library. It’s much lauded for striking a good balance between performance and ease of use, but it’s not as performant as it could be. For that, fasthttp exists and is quite popular although not nearly as popular as the standard HTTP library.

Your comment gives the impression that this is a failure because the library for niche performance cases hasn’t become the go-to library for the general case. I disagree—it’s ideal that we have a canonical general purpose library and another for high performance cases.

Perhaps you would argue that we should have interfaces that allow for a pluggable performant implementation and an easy-to-use general purpose implementation? This is all well and good, but it’s inherently not possible, because the interface is about ease-of-use and the performance is achieved by trading off on friendliness. You might offer Rust as a counterpoint since many of its standard libraries use an interface that is suitable for the general case and the high performance cases; however, this is a lie: these interfaces (and the core language) are manifold harder to use than their Go equivalents. In other words, Rust’s “general purpose” interfaces trade ease of use for the ability to support high performance implementations. This tradeoff isn’t inherently bad, but it is bad to pretend as though it’s inherently good or that there is no tradeoff at all.


I'm not sure there's a meaningful distinction to be drawn there for the vast majority of usage, to be honest. A language's standard library is generally considered to be part and parcel of the language. Everyone's first sample program is "Hello World" which involves printing text to standard output.

That being said, languages that do offer the ability to work without the standard library give you a lot of flexibility for places where you need it (like embedded systems). Rust has a pretty good story there. I don't think C or C++ really do.


C and C++ don't have good stories for working on embedded systems without a standard library?


Not in the language standard they don't, no. The C99 standard basically says "good luck with that":

"5.1.2.1 Freestanding environment

1. In a freestanding environment (in which C program execution may take place without any benefit of an operating system), the name and type of the function called at program startup are implementation-defined. Any library facilities available to a freestanding program, other than the minimal set required by clause 4, are implementation-defined.

2. The effect of program termination in a freestanding environment is implementation-defined."

There are obviously numerous embedded toolchains that provide facilities for writing C/C++ to target embedded systems but they're generally all doing nonstandard things and every one is its own unique fork of GCC.


And that's all perfectly fine.

If I'm running embedded with no OS, I'm in a very specific environment. My code is going to be tied to my specific hardware, including almost certainly the specific CPU chip. (In this situation, it's usual to have a "CPU" chip that includes several peripherals on-chip, to reduce parts cost. Code is not portable to a CPU with different peripherals, even if it's from the same family.) So, if I can't reuse the code anyway, do I care that I can't reuse the one line that is the "main" function definition?

If I'm running embedded with no OS, what should happen if the program terminates by exiting main? Where is there to go?


We're going off into the weeds here but in C this is implementation defined. In Rust you can use `#[no_std]` and the semantics are all well-defined. There's a thriving Rust embedded ecosystem that's doing just fine. Yes, the specifics of interfacing with hardware are platform-specific, but the language and the `core` bits (under `std`) are well-known.


Those things are implementation-defined because anything else wouldn't make sense. If I'm making a C program for a microcontroller, I will need to know how that microcontroller works. It's probably an ARM core, which expects the first few bytes of flash memory to be a table which contains pointers to functions; it's my responsibility to put a pointer to my main function in that table.

This is completely unproblematic, and not really that different from having to make sure your code is compiled to a valid ELF file with the correct sections and section headers.

"Implementation defined" doesn't mean "nonstandard". A C program which overflows a signed integer is ill-formed, because signed overflow is undefined. A C program which relies on external linker scripts to set up the vector table and make the reset vector point to the main function is well-formed, it just necessarily depends on some implementation-defined behavior.


golang's philosophy leaks everywhere, many things are half-baked for no good reason, even when strictly superior solutions are there (e.g. defer works at the function scope, instead of the local scope). Not to mention interfaces are badly designed, leading to issues like this: https://github.com/golang/go/issues/16474


Good point.

Also, client side in Go is usually web interfaces, which work across any platform.


Having used go full-time for the last 3.5 years, this article didn’t feel like a twist of the knife. Yet all the language evolution efforts I’ve seen in the last two years make me think that early Go was, mixaphorically speaking, lightning in a bottle that won’t strike twice.

- I’ve never hit the file system stuff. We all use Linux; all our code runs on Linux. I’m curious who the people are who are using Go on Windows.

- Network timeouts are a stupid gotcha I first hit about six months into my go tenure. You can set read/write timeouts on the Transport that’s used by the connection though; not sure why that isn’t covered.

- The wall-clock time thing is new to me and looks crazy complicated; I’m angry that it’s something I have to know about now. It’s bad enough that time.Time’s == operator and .Equal() behave mostly but not quite the same.

Something that’s not in the article: the tooling situation (autocomplete, source navigation, and so forth) IS STILL WORSE THAN IT WAS TWO YEARS AGO. The old tools were perfect but were never updated for module support. gopls is still an unfinished mess; last week I had to write a script that auto-kills it if it uses more than 3GB of memory.


Windows-focused rant. Plus a few reasonable points. Every language is complex at some level and in its own ways - Rust included. Every language hides some of the complexity of layers below it, like assembly, and thus hides hardware details. Computers are complex. Point granted.

Fact is, Go is a very reasonable set of compromises that lets real enterprise-scale work get done and run with solid performance. I've done work on mostly *nix systems but have cross-compiled for Windows when needed. These are wildly different OSes, so some adjustments are accordingly needed in the code.

Go has faults. The "OMG Go has no generics so it's total trash" argument is just silly. Generics are coming.

Personally, Go has never let me down with anything I've asked it to do -- ETL flows, servers, streaming data processing, CLI programs, networking tools, etc. Use whatever tool fits your needs.


>Go has faults. The "OMG Go has no generics so it's total trash" argument is just silly. Generics are coming.

Until it has them it's a valid complaint. And the fact that they're finally coming 11 years after the language's creation is another matter


This is silly. Lots of people are very productive in Go without generics. Even more productive than many languages that have generics (including Rust). Lots of people are very productive in languages without static typing at all. Generics will significantly improve a relatively small proportion of use cases.


FWIW: Java was originally released in 1996, it had no generics, and didn't gain them until 2004 - 8 years after the language's creation.

Was Java "total trash" before 2004? Not really, it was still useful in lots of use cases - it just didn't have any generics.

Go at least has generics for its built-in collection types (maps, slices) - which, in that respect, arguably places it ahead of where Java was for a whole 8 years.

https://en.wikipedia.org/wiki/Generics_in_Java


Even if golang gets generics, it has so many other flaws that make it a non-starter for serious projects. This won't stop people who are driven by hype from using it, of course.


Serious projects like major Cloud infrastructure? And what are these damning flaws? If they are damning performance flaws, then presumably no project can succeed in a language slower than Go. If they are static type safety flaws, then surely no project in a dynamic language can be successful. If they are lack of fine grained control over memory layout, then surely no major VM or interpreted language holds a candle.


> It constantly lies about how complicated real-world systems are, and optimize for the 90% case, ignoring correctness

I don't know how this comes as a new thought after using Go in production. I don't use it at all for work, but that is literally my understanding of the point of Go: trading granular "correctness" for the productivity it provides if you're doing things that are on the "good path".


I think its worth pointing out that the Rust code is more broken than the Go code in this example. Because Rust is trying to come up with a sane way to display the filename (a) it prevents users from using encodings the language designers did not anticipate and (b) it prevents the solution from integrating with other system tools. For instance, you can't run `rm "$(rust_program)"`, but you can with the Go solution.

But discussing any of this means you've missed the point of languages like Go. Instead of arguing about the best way to represent pathnames that aren't a valid byte sequence under $PREFERRED_LOCALE, we should be talking to our customers and solving their problems.


I've done C/C++ for over 20 years, Go for 1 year, Python for 5+ years. I work at a company with a guy on the C++ standards committee, with the internal SDLC and engineering training for large-scale, commercial systems to boot.

This article is a rant. Not an engineering take down of go. There's just not much of substance here. Were I to care about windows (I don't) for serious cross platform os interaction, go isn't your hammer of choice.

I've turned to go recently for some I/O heavy apps of a micro-service type which it is fine for. I also turned to go because of God awful c++ build times and bad build systems in the sense that they assume all code is in a single branch. By switching to go I also prevent less experienced programmers from linking in legacy c++ libraries and the evil that comes with them.

Go has delivered. My needs are such that protobuf/flatbuffer are good enough for types and go's lack of generics is irrelevant. I'm pushing bytes across a network pipe in which each message admits simple transforms/operations.

Now I am keeping my eye on three things that I think go could burn me on:

- garbage collection

- channels ... cool but slow

- something unixy/multicore/close to the bare metal ... Like kv store

Those things I'd be reticent about doing in go.

Folks, we need 2-4 languages with their connections to libraries and tool chains in our toolbox.

While we remain dominated by C++ (a complex beast of a language), I am looking to add a functional language to my kit (OCaml/Haskell). Btw, good engineers need a formal language too. I recommend TLA+, and there's a guy on Hacker News here that's got good books on it. Recommended! Highly concurrent code ought to be modeled in TLA+ first - before leaving your app-language gun and taking the cannoli.

Cheers


> (Note that path/filepath violates Go naming conventions - “don't stutter” - as it includes “path” twice).

That guideline is for package/content name, not package directory names. https://blog.golang.org/package-names


Yeah, the author misunderstands the naming scheme which actually suggests this sort of repetition. See also, io/ioutil.


My mistake, I removed the relevant paragraph in the article.


I still see it, maybe cached?


> So, no errors. Chmod just silently does… nothing. Which is reasonably - there's no equivalent to the “executable bit” for files on Windows.

It's simply not true that Windows doesn't have "execute permissions" for files. It does:

https://docs.microsoft.com/en-us/windows/win32/fileio/file-s...

It's just that the people who wrote the go library couldn't be bothered to abstract this interface across all platforms.


+1 came here to say the same thing. This seems more like a problem with the standard library authors not putting in enough effort than anything else.


2/3rds about platform FS incompat, last third about only a couple of things like time comparisons being monotonic now (does more good than bad IMO).

I suspect if issues of such importance are enough to make you want off the "wild ride", you will not find a ride suitable.


He presented them as typical examples, not as issues of such importance as to independently make him want off the "wild ride."

He also compared them to alternatives that he found favorable, which specifically addresses the idea that alternatives are worse.

It could be reasonable to disagree with the content of his argument, but it didn't have either of these structural problems.


> it didn't have either of these structural problems.

Disagree... the volume/importance of grievances should be directly proportional to willingness to abandon. That a few examples can be provided isn't an indictment of the ecosystem any more than it would be if I did the same to those the OP found favorable.


He chose to go deep instead of broad. That was an editorial tradeoff to keep the length this side of an encyclopedia, and I don't think it's fair to criticize him for it unless you're also going to argue that the generalization he asked us to take on faith doesn't hold -- in other words, that the example he gave in which a simplifying API decision backfired is atypical.

I haven't used much Go, but the bit that I've played with gave me the distinct impression that "opinionated simplification" wasn't just common, it was the defining quality of the entire language, which would strongly suggest that OP's complaint would easily generalize to a hundred other APIs. Is that not the case?


"With a Go function, if you ignore the returned error, you still get the result - most probably a null pointer."

Well you should handle the error in the first place.


> Well you should handle the error in the first place.

That's like saying you should just write bug-free code in the first place.


Go goes out of its way to ensure you handle the error. You have to do something with that err return, otherwise it's a compile error. If you're just throwing it away without checking, we've gone from the mere mistakes everyday developers make to irresponsibility.

There's a reason most go code is littered with "if err != nil" on nearly all function calls.


Go doesn't ensure that you handle errors, if the function doesn't have a return value other than the error. The compiler will happily let you silently drop the result of os.Mkdir() on the floor.


I also generally think that writing boilerplate code is unproductive - golang's error system relies on people pedantically writing boilerplate code throughout their logic - the result obscures the actual logical flow of the code.


I'm no Rust expert, but Rust doesn't enforce that either.

There is no language that enforces error checking, afaik.


You get a compiler warning.


You have to either propagate the error up the stack or face a panic. Rust does force you to deal with the error.


I didn't make myself clear enough, if something returns an error in Rust and you don't check it will it compile or not?


It will not compile. (As I said earlier, you can always fallback to a panic aka "I don't wanna deal with the error so let my program crash", but an error will not silently propagate through the stack)


That's is completely false, stop spreading misinformation.

This code will compile (see https://play.rust-lang.org/?version=stable&mode=debug&editio...):

  pub fn foo() -> Result<(), i64> {
      Err(1)
  }
  
  pub fn bar() -> Result<(), ()> {
      foo();
      Ok(())
  }

It will provide a warning, but there's a ton of stuff in c++ that would throw a warning and you wouldn't say that it "will not compile".

A trivial change that still doesn't handle the error would get rid of the warning.

  pub fn foo() -> Result<(), i64> {
      Err(1)
  }
  
  pub fn bar() -> Result<(), ()> {
      println!("{:?}", foo());
      Ok(())
  }


I wonder if people upthread meant the warning, or e.g. getting the `Foo` in `Result<Foo, BarError>`.

(EDIT: nevermind, just looked again and pcwalton was referring to the warning and specifically `Result<(), E>`; oh well)

Because the latter is impossible in Rust and probably more relevant to the usual cited issue with Go's pair approach (i.e. using the null/zeroed `Foo` without checking if there was an error).

I do agree though that the warning isn't to stop you from not handling the error at all, it's more of a hint that maybe you forgot something.

Printing a `Result` may be the legitimate way to handle it in that case, it's largely left to the user to decide what propagates and what doesn't.


All languages which uses Result or Either types...?


No it won't:

    fmt.Println("foo")
You're not forced to handle the error. Not to mention more obscure cases like

    a, err1 := foo()
    if err1 != nil { return err1 }
    b, err2 := bar()
    if err2 != nil { return err1 } // bug


If multiple functions you call return an error golang only cares if you've checked the last one.


Holy crap, seriously? I guess that makes sense since everyone calls the return 'err', I'd just never noticed before.


Yup, the compiler is perfectly happy with

    a, err := Foo()
    b, err := Bar()
    c, err := Baz()
    check(err)
    doSomethingWith(a, b, c)
because "err" is ultimately used so doesn't trigger the "unused variable" compile error. The Go compiler doesn't care that it's written to thrice and only checked once.

In fact thinking about it that's a perfect example of "solving 90% of the problem, badly" the article talks about (though it's probably closer to 70% here): the Go compiler doesn't really try to understand that errors are a thing and are relevant. To avoid developers writing

    val, err := Foo()
then going on to use `val` without checking `err`, the devs decided to… require using the variable.

This solves that specific issue but does nothing if, say, you miss that the function returns just an error, or you don't care about the result so you just ignore everything it returns. Or as above if you've got multiple calls binding to the generic (and conventional) `err` and think to check the last one (possibly because the calls above were only added later and the compiler never complained).

Meanwhile it makes Go throw a fit and literally refuse to compile your code because you wrote an innocent:

    val := 5
and hadn't gotten around to using it yet, or removed the one print you didn't care about anymore.


Go does absolutely nothing to ensure you handle the error.

The only thing in Go source code that you see more often than the boiler plate "if err != nil" is "a, _ = foo()".

It's all too easy to ignore an error in Go.


> The only thing in Go source code that you see more often than the boiler plate "if err != nil" is "a, _ = foo()".

Where did you see that? Because that's not been my experience at all, and I've looked at a lot of Go code.


> Go does absolutely nothing to ensure you handle the error.

Go has many community linters available, https://github.com/kisielk/errcheck is popular for checking unhandled errors.

If you'd like a combo-pack, check out https://github.com/golangci/golangci-lint which includes all of the popular linters in a configurable way.


Those linters won't catch everything. There are cases that will slip by. Rust's error handling, as well as exceptions are both strictly superior to golang's error handling.


This is fud, I've never seen in Go code people dropping the err with _.

The reason you don't see that is that you have to be explicit about it: it's not something you forget; it's done on purpose, which obviously no one does.


Just because you haven't seen it doesn't mean it doesn't exist. It's very easy to mishandle errors in golang, I've seen it several times now.


I’ve never looked at a Go codebase where someone has handled every single error – it’s just too easy to assign it but not check the value.

For a language which refuses to compile if you have an unused import, it seems like an odd gap not to have the compiler force you to access the error before it’s reassigned.


And there's a reason why empty catch blocks in Java are an anti-pattern.


The language should make you handle the error and the compiler should refuse to compile your code until you have done so.


Sorry, can't do that, too hard.

We will make sure you remove that innocent unused import though, can't have those lying around being all importey.


So checked exceptions, then?


That's one way, yes.


I felt the same way after writing a large (100kloc) project in Go, this is back when go was 1.0 or so as well. It started off well enough, but eventually started to fail in helping me create the software I needed to make.


No matter what, after 100k loc you'll encounter language quirks that irritate you. It's a matter of how complicated they were to find and what the workaround is.


I cannot comment on v1.0, but I too work on a larger project and it is tiresome to write so much code - many repetitions and the same stuff in general. But I think it is not the language that is the problem; plainly, it's just the sheer size of the project. Sure, DRY and generics would help out, but I guess only for you as a dev, to save some time - not for the project itself. When I jumped into the Go world I learnt that code generators are VERY popular. I hated the idea and it was a big no-no, but in time I came to like it and now I am a big fan. I like to use protocol buffers and generate code from them, so that I have a nice schema as a single source of truth that is well documented and strongly typed. With Lyft's protoc-gen-star, it is very easy to write your own code generator.


My main takeaway is the quote from scottlamb:

> ... these sorts of statements contribute to my belief that Go is an opinionated language that I should hesitate to choose for anything that the language's authors haven't specifically considered in depth.


Can we just stop with the "Rust vs Go" shit?

If you want me to take a critique of Go seriously these days, pick another language to compare it to. Any other language.

And yeah, I'm aware that 5 years ago there were a ton of "Go vs Java" articles. I didn't think much of them then, either.


Author here - I apologize for pulling Rust into this, but for the life of me I couldn't find any comparable language that solves those problems "the right way".

I tried really hard. I knew a lot of people would instantly have that reaction, but I couldn't find another way to show that there is another way, short of pulling it out of thin air (which would've made for an even longer, less accessible article).


Isn't that the point though? Go is the best worst option we have and is competing with C# and Java.


Did you look at Haskell, and, if so, what do they do?

Good article BTW, IMO.


Your choice was fine imo. Besides, you can never satisfy everyone.


You're fine, it's a good comparison, if you didn't bring it up someone in the comments probably would have.


LISP, surely? all HN knows that LISP is the perfect programming language... ;)


If only because Lisp is a moving target, so practically anything you say about it will be true by someone's definition.


It's not, imo.

It's an example of extremely differing philosophies - Rust sits at the opposite end of the spectrum from Go.

I imagine there may be a couple other examples, maybe something like Haskell (I wouldn't know), but I'm guessing the author just knew Rust better for this comparison.

It's easier to illustrate problems that shouldn't be problems (in your eyes) if you have solutions for them - especially solutions that you believe work well. Rust's take on these nitpicks is something that the author clearly thinks Go is lacking on.

In my view, this post is a critique on Go and the "simplicity" mantra it has; and only that.


I'd buy that if there hadn't been approximately 48764576459674 "Rust vs Go" (well, more usually "this is why Rust is way better than Go and you're some kind of moron if you're not switching to Rust today") articles in the last few months.


> "this is why Rust is way better than Go and you're some kind of moron if you're not switching to Rust today"

I didn't see that anywhere in this article. Rust was just used to show an alternative approach.


[citation] last two lines of the article:

"At this point in time, I deeply regret investing in Go.

Go is a Bell Labs fantasy, and not a very good one at that."


So he thinks that Go is a bad language. What does that have to do with Go users being morons for not using Rust?


Okay, let me ask this then - what language would be better compared to contrast the shortcomings of Go that the other takes issue with?

Certainly not C++, no?


In 5 years' time, when there are a gazillion "Rust vs ${Nim}" articles out there, extolling the virtues of another language and pointing out the shortcomings of Rust, then let's ask this question again.

I'm not questioning the critique - no language is perfect, and Go certainly has its share of problems. I'm questioning the sudden rash of "Rust is awesome, Go is shit" articles over the last few months. It's not a good thing.


Java handles files and their permissions much better than this. It takes Path objects instead of strings, offers generic setReadable, setWritable, etc. methods, and exposes finer-grained APIs to set POSIX-specific bits and Windows attributes.


For those unaware, the title is a reference to a hilariously long user-created ride in Roller Coaster Tycoon 2 titled "MR BONES WILD RIDE" [1]. The ride's exit connected to its entrance, so passengers were forced to repeatedly ride the roller coaster forever.

[1] https://knowyourmeme.com/memes/mr-bones-wild-ride


Have you not heard of Mr. Toad's Wild Ride?


> Computers, operating systems, networks are a hot mess. They're barely manageable, even if you know a decent amount about what you're doing. Nine out of ten software engineers agree: it's a miracle anything works at all.

I like that he identified the real problem right at the start.


Yeah so do I. It would have made a nice tweet.


The main gripe seems to be that Go will "optimize for the 90% case, ignoring correctness" -- particularly leading to issues on non-Unix systems like Windows.

That fits Go's stated goals afaik. While I understand the author ran into problems for their use-case, I did not find this rant compelling as a general criticism.


Same, because if you try to cover the 100% case you end up with things like ASP.NET and Entity Framework. Working with those for 5 years, I was constantly annoyed at how far I could get adding super-complex features, only to have to unravel them to implement a simple lower-level edge case.

And in my experience this too easily reflects poorly on the devs - "well, I found a blog post for EF that does what we need in 30 seconds..." - which just isn't the case. I migrated to golang and find its nuances much easier to swallow. No generics? True - write a generator for your use case. It's really not that hard.


I agree. I would rather keep things minimal and have 90% correctness, than a weird and complicated interface that's 99% right.


(Comparing a function in Rust's stdlib to Go's:)

> Of course there's a learning curve. Of course there's more concepts involved than just throwing for loops at byte slices and seeing what sticks, like the Go library does.

> But the result is a high-performance, reliable and type-safe library.

> It's worth it.

When I first saw Go, I was blown away. Not by its features, but rather the lack thereof. It seemed like one last "Hail Mary!" from the C programming community to get "back to basics". But, as the author showcases, the time when programming was about manipulating arrays with pointers is, if not behind us, hopefully on its way out.


With the path example… just try to combine this with flags, so we do something like:

    $ ./my_program --file="$(printf "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98")"
Well, just try to write the program that does that in Rust, without using some option-parsing library that hides all the details, and then try to figure out how to get it to work equally on Windows.

To spoil the answer, it turns out that OsString only exposes a couple conversion routines and can’t be manipulated, and people have been trying to figure out a way to add a string-like API to it for years. Rust’s “do it the right way even if that exposes lots of complexity” approach here has its drawbacks.


If you want to half-ass it like Go you go https://doc.rust-lang.org/std/ffi/struct.OsString.html#metho... or https://doc.rust-lang.org/std/ffi/struct.OsString.html#metho... if you want to potentially get an error.

If you want to deal with bytes / invalid Unicode, you go https://doc.rust-lang.org/std/ffi/index.html#conversions


I'm aware of the conversions; unfortunately, you can't really do any processing before you convert, and you can't (unlike C++) write generic code that works on both types of converted values.

On Unix you get Vec<u8> and on Windows you get an iterator over u16. This is hot garbage, to say the least, if you want to do any kind of processing. I can go into more details, but in C++ you would just be working with std::string and std::wstring, depending on platform, and at least in that case you can hide everything away like this:

    #if defined WIN32
    using OsChar = wchar_t;
    #else
    using OsChar = char;
    #endif

    using OsString = std::basic_string<OsChar>;
This is only the beginning, but you can see how the C++ version is much easier to work with, even though it doesn’t hide the problem from you.

Note that I’m not advocating that you make everything in your code into OsString, just that it’s common to need to do some small amount of manipulation of OsString and Rust makes this much harder than it should be.


With Rust you can also convert to `Vec<T>` where T is either `u8` or `u16` and use generics to work on either.

And there are probably a handful of libraries that would help with all that too. Also, whatever convenient functionality you might want can be added in the future without issues. Hardly a language flaw - just minor unimplemented functionality.


> With Rust you can also convert to `Vec<T>` where T is either `u8` or `u16` and use generics to work on any.

That’s a very cumbersome way of doing things. I would love to see an illustration. It also involves a ton of conversions: if want to parse a command-line flag which contains a file path, it would go: wchar_t -> OsString -> Vec<u16> -> OsString -> wchar_t. It also makes it difficult to use Rust APIs in a more or less idiomatic way.

> Hardly a language flaw - just a minor unimplemented functionality.

It’s a flaw in the standard library, not the language. When you say that it’s minor, all you’re doing is saying, “I don’t care about the things you care about.” That’s not really an argument, just a statement of your own personal opinion.


The point isn't that it should be easy to do terrible broken things like this. The point is that you will encounter these things in the real world and have to deal with them in some way. Rust's OsStr[ing] let you do that. If you really have a burning need to create files whose paths are not valid Unicode you can do that in Rust but you will have to jump through some hoops. I don't see that as a problem.


I’m not saying that it should be easy, just that the API shouldn’t introduce significant amounts of additional complexity.

For the very simple case of "I want a command-line option which specifies a path as an OsString", Rust’s way of doing things makes things hard. By comparison, in C++, I am used to dealing with paths as std::string on Unix and std::wstring or std::u16string on Windows, and this C++ approach is a lot easier.

Rust’s OsString design is too smart by half, and if I use env::args_os(), I can’t easily do simple tasks like "test if this string starts with '-'" or "split this string by the first '=', if it exists". As far as I can tell, the way to go is to convert OsString to Vec<u8>, do your processing there, and then convert back… but that only works on Unix, because arbitrary Vec<u8> aren’t safe to convert back to OsString on Windows because they may not be valid WTF-8. So you can take the approach on Windows of going through encode_wide().collect() and it just goes downhill from there. :-(


> For the very simple case of "I want a command-line option which specifies a path as an OsString", Rust’s way of doing things makes things hard. By comparison, in C++, I am used to dealing with paths as std::string on Unix and std::wstring or std::u16string on Windows, and this C++ approach is a lot easier.

FYI, you can convert an `OsStr` (or `OsString`) into a [`Path`](https://doc.rust-lang.org/std/path/struct.Path.html) with zero overhead and get all the useful things you want to do with paths: https://play.rust-lang.org/?version=stable&mode=debug&editio...

> Rust’s OsString design is too smart by half, and if I use env::args_os(), I can’t easily do simple tasks like "test if this string starts with '-'" or "split this string by the first '=', if it exists". As far as I can tell, the way to go is to convert OsString to Vec<u8>, do your processing there, and then convert back

For commandline arguments that you actually need to parse there's nothing wrong with asserting that you only accept valid Unicode, and then you can just use `to_str()`: https://doc.rust-lang.org/std/ffi/struct.OsStr.html#method.t...

If you have a `Vec<u8>` or `String` or whatever you can just treat it as an `OsStr` (with no overhead) because both of those (and a bunch of other things) implement `AsRef<OsStr>`: https://doc.rust-lang.org/std/path/struct.Path.html#method.n...


I wrote Go professionally on a project for a year in a single very intense push, and I was burned by every single thing listed in the article. It felt like an uphill impedance mismatch the whole way. It's nice to see it articulated well.


> burned by every single thing listed in the article

Really? That seems absolutely bizarre to me, I've been writing it professionally for ~8 years now and never hit… any of these.

I mean I basically never interact with Windows on any level, but none of this has ever bit me.


This was cross-platform forensics software, rife with edge cases.


> when you make something simple, you move complexity elsewhere.

This applies to so many things. I wish I could get non-technical people to understand that making something simple moves the complexity elsewhere.


I have become annoyed at Go for completely different reasons than OP. I wrote my blog using Go as the backend a few years ago and deployed it on Google App Engine. Every time Go or the App Engine SDK updates, it is a super pain to update my site. I almost want to throw it all away now that Go is handling dependencies in a completely new manner.


50 Shades of Go: Traps, Gotchas, and Common Mistakes for New Golang Devs, a good read about go quirks and unexpected behaviors http://devs.cloudimmunity.com/gotchas-and-common-mistakes-in...


I kept waiting for practical examples that showed how these shortcomings made go a non-starter, and ultimately all I got was a mention, right at the end, about how he hit a particular bug multiple times.

I mean, currently I work in a go shop and I hate nearly everything about it, all just from what he calls "the bad", which is enough to make me not feel precisely happy about writing it. The content of this article, what he calls "the ugly", comes across as a bit nitpicky in comparison.

Nonetheless, it is a good article about string and path handling, time, and being irresponsible with what one is depending on.


I'm late to the show here, but I think the whole argument around filepaths needing to potentially be encoded before being presented to end users is a non-issue / how most languages I've worked in have handled it?

Does rust have some fancy handling? Sure? Is it syntactic sugar? Absolutely.

Maybe I've been working in Web too long, but encoding a value before handing it to the user seems second nature.


This is an excellent example of Waterbed Theory: "This is a theory which says that if you push down the complexity in one part of a language or tool, there is a compensation which increases the complexity of another part of the language or tool." http://wiki.c2.com/?WaterbedTheory


Based on the title, I was expecting a post about some sort of production horror story or some difficult edge case upgrading to 1.14.


Comparing Rust to Go:

> Of course there's a learning curve. Of course there's more concepts involved than just throwing for loops at byte slices and seeing what sticks, like the Go library does.

> But the result is a high-performance, reliable and type-safe library.

> It's worth it.


Got really tired and bored with this. Maybe structure the article with some sort of meaningful abstract so that you can summarize the points you want to make up front without having to subject the reader to 9/10ths of this article.


Go is an opinionated language, but opinion is not just right or wrong, and it’s getting complicated with time passing by. Opinion might be prejudice. Anyway, I like the multiplexing and go routine in go.


I wanted a little bit more from FS department, so I rolled my own: https://github.com/ungerik/go-fs


Go needs good criticisms like this.

I will never understand why this language is so popular.


It's due to the same reason Rust is: it's backed by a large popular company investing in the language and being loud about it, which leads to a rapidly growing mindshare and ecosystem around it, which is essential for adoption.

This is not meant as a criticism toward Go or Rust: history shows several cases where this happened before, regardless of technical merits.

A language still needs to become popular; it's not like we're lacking great languages nowadays. It's certainly easier if you're big and can provide the funding around it.


The backing can get you publicity. If the language is lousy, though, the publicity won't help it. But publicity can turn an obscure good language into a well-known good language.

I think the bigger thing that corporate support gets you, though, is a better library (more complete, more debugged, and more polished). That is an essential ingredient for language popularity. Up through Java, it was enough.

But these days, I think that there's one more ingredient needed: Solve some problem that isn't well-solved in other existing popular languages. Go has pretty good answers on multiple threads and network services. Rust has the borrow checker. Those are useful enough pieces to gain traction for those languages.


Scuola di Michelangelo "god, this Carrara marble is so hard to work in why can't we just pour concrete into rubber moulds like the garden gnome factory next door"

Michelangelo "fine, I thought you wanted to learn how to sculpt perfect buttocks but whatever"

Jeff Koons "that garden gnome idea, how about now I know how to carve Carrara marble I make one in marble"

Scuola ..."Jeff.. we hate you"

The GO authors are gifted. They make tools gifted people understand. If you aren't gifted, they are difficult tools to use.

(I'm not gifted btw)


getlantern/idletiming has 9 stars and 2 forks. This is not something the general Go community uses


2+ years on Go, 13 years on Python and JS. Some Kotlin and Java as well. Go is by far the best programming language you can find to build scalable microservices. Hands down, years ahead of anything else.


I think this post summarizes to,

- I don't like the file-related packages

- What's up with this random 7 star library having a lot of transitive dependencies

- Rust for life

- In summation, Go is the worst


Not that this is a good faith summary, but I've updated the article to point out that it's not just "this random 7-star library", but in fact, 266 publicly-available Go packages.


Your rant largely has to do with a library someone wrote that did not handle dependencies well. A few years ago you would have complained about lack of module support at all. I have a toy project that's relatively simple, and the Javascript frontend has a lockfile that is literally over 10000 lines long.


I was already shipping Go code a few years ago, and the various vendoring tools gave me a lot less grief than the new module system has. Besides, it still had all the same limitations, the same standard library choices, the same sloppy abstractions. The rant applied then and it applies now - it focuses not on any specific problem outlined in the article, but on the general philosophy of the language, its standard library, and its ecosystem.


> Your rant largely has to do with a library someone wrote that did not handle dependencies well

I am rather curious about how you concluded that "lots of dependencies bad" was the point of that section of the article and not, perhaps, the absurdity of having to compile an empty file to get around the solution to a bug being hidden from end developers.


^ This


Go is my favorite language but I do agree that Windows support has always felt like an afterthought.


It's a pretty good article. The tl;dr is that golang is a POSIX-focused application programming language that is incorrectly advertised as a platform-agnostic systems programming language.


A lot of people seem to be missing an overarching point, which is the benefits of a language having Sum types, so that edge cases can be represented clearly, and in a way where the consumer of the api can't fail to know they exist, and can't fail to handle them. Anyone thinking of making a new language today, should really get some familiarity with Option and Result types. They make so many things not only safer, but also nicer to use.
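A minimal Rust sketch of the point (the `parse_port` function is hypothetical):

```rust
use std::num::ParseIntError;

// The error case lives in the return type, so a caller
// cannot reach the u16 without acknowledging it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The compiler forces both arms to be handled.
    match parse_port("8080") {
        Ok(p) => assert_eq!(p, 8080),
        Err(e) => panic!("unexpected: {e}"),
    }

    // No sentinel values, no silent garbage on failure.
    assert!(parse_port("not-a-port").is_err());

    // Option works the same way for "value might be absent".
    let first = [10, 20, 30].iter().find(|&&x| x > 15);
    assert_eq!(first, Some(&20));
}
```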


It surprises me that most people here aren't up in arms in agreement with this point. Code that is silently incorrect is an absolute disaster on an enterprise level. I spend a lot of time writing seemingly redundant double and triple error checking into my code, only to have the designers of the LANGUAGE say, "yeah, most filepaths are utf-8 so seems good enough to me".


There's a huge cult of Golang being the "one true way" right now and any logic that could potentially contradict that is going to cause folks to throw the blinders up.

Your point about that being a disaster in enterprise is exactly correct and I have huge misgivings about these people above writing the large majority of our software architecture. This is after we switched to Go from Java where these same people did some of the same things.

Lesson not learned.


I’ve written go full-time for the last 3.5 years and it still amazes me that by default the linter doesn’t at least warn about unused/uncaptured return error values.


Golang is such a joke of a language. The compiler won't even compile if there is an unused variable but won't warn you if there is an unchecked error! This language is meant to produce buggy incorrect code that can only be mitigated with writing excessive repetitive tests that have nothing to do with business logic itself.

Golang is probably the biggest embarrassment of a modern programming language ever conceived. Again, if you don't believe me, just start writing your first Kubernetes controller.


This doesn't match my experience. Go apps tend to be extremely clean and reliable. It's of course possible to write crap code in Go, but you can write crap code in any language.

The unused variable thing is mildly annoying but fits with the cleanliness philosophy. Not checking errors is very easily detected by a linter such as the one built into the JetBrains GoLand IDE. It highlights failure to check errors and requires that you explicitly ignore the error return with something like "_, foo = bar.baz()".

Go is spectacularly productive when used properly. It's a very nice language.


option types are spreading pretty well these days. Zig, for instance, has them, even as a low level C replacement language.


Even C itself has had unions since forever. Optionals, enums, bools, and err-or-result constructs are (highly ergonomic) sugars atop unions.
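To illustrate, a hypothetical re-implementation in Rust, showing that Option is just such a sugar over a tagged union rather than compiler magic:

```rust
// Hypothetical re-implementation of Option as an ordinary sum type.
enum MyOption<T> {
    Some(T),
    None,
}

// A division that represents "no result" in the type itself.
fn checked_div(a: i32, b: i32) -> MyOption<i32> {
    if b == 0 { MyOption::None } else { MyOption::Some(a / b) }
}

fn main() {
    // Pattern matching is the ergonomic sugar over the union's tag.
    match checked_div(10, 2) {
        MyOption::Some(v) => assert_eq!(v, 5),
        MyOption::None => unreachable!(),
    }
    assert!(matches!(checked_div(1, 0), MyOption::None));
}
```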


Isn't that the standard library and not the language?


Go the language lets Go the standard library play with things that nothing else can. If you can't implement it yourself, is it really a library and not just part of the language runtime?


Oh I didn't know that. I thought anyone could modify the standard library. Do you mean syscall?


> Code that is silently incorrect is an absolute disaster on an enterprise level.

Enterprise software is not well-known for its quality or correctness.


That's a valid point. Enterprise software can be continually buggy and broken and still be commercially viable.

So that things are silently wrong is not a disaster as much as it is a dumpster fire that corporations are happy to shovel cash into while a whole lot of people huddle around it for warmth.


It's curious that, after pretty universally rejecting checked exceptions, they have now returned as Result.


Checked exceptions were universally rejected not because they are intrinsically bad but because the language support was awful (e.g. could not wrap or abstract over a nested object possibly rethrowing), they were sitting right next to unchecked exception with limited clarity, guidance and coherence as to which was which, and they are so god damn ungodly verbose, both to (re)throw and to convert.

Results are so much more convenient it's not even funny, but even without that you could probably build a language with checked exceptions where they're not infuriatingly bad (Swift has something along those lines, though IIRC it doesn't statically check all the error types potentially bubbling up so you know that you have to catch something, not necessarily what).


A very large part of that though is Java not being 'generic' over checked exception types. So if you e.g. build something that supports end-user callback code, you need to either throw Exception (accepting all code but losing all signal as to what's possible) or nothing (forcing RuntimeException boxing).

That's Java. And I agree it is a wildly painful and incomplete implementation. I wish we'd stop conflating it with checked exceptions as a language feature.


Result types are genericizable in a way that checked exceptions aren't (IIRC), which is huge for ergonomics.


Can you give an example of how you think this helps?


Basically, exceptions have a "happy path" which is very simple but deviating from that path is often quite inconvenient and painful. A well-built result type makes it easy to opt into the happy path of exceptions, and also quite easy to use different schemes and deviate from that path, all the while being much safer than exceptions because you're not relying on runtime type informations and assumptions.

Furthermore, results make it much less likely to "overscope" error handlers (where a try block catches unrelated exceptions from 3 different calls) as the overhead is relatively low and there's necessarily a 1:1 correspondence between calls and results; and it's also less likely to "miscatch" exceptions (e.g. have too broad or too narrow catch clauses) because you should know exactly what the call can fail with at runtime. It's still possible to make mistakes, don't get me wrong, but I think it's easier to get things right.

"Path unification" is a big one in my experience: by design exceptions completely split the path of "success" and "failure" (the biggest split being when you do nothing at all where they immediately return from the enclosing function).

This is by far the most common thing you want so in a way it makes sense as a default, but it's problematic when you don't want the default because then things get way worse e.g. if you have two functions which return a value and can fail and you need to call them both, now you need some sort of sentinel garbage for the result you don't get, and you need a bunch of shenanigans to get all the crap you need out

    int a;
    SomeException e_a = null;
    try {
        a = something();
    } catch (SomeException e) {
        a = -1;
        e_a = e;
    }
    int b;
    SomeException e_b = null;
    try {
        b = something();
    } catch (SomeException e) {
        b = -1;
        e_b = e;
    }
    if (e_a != null || e_b != null) { // don't mess that up because both a and b are "valid" here
        …
    }
or you duplicate the path in both the rest of the body and the catch clause (possibly creating a function to hold that), etc…

By comparison, results are a reification so splitting the path is an explicit operation, but at the same time they still don't allow accessing the success in case of failure, or the failure in case of success.

    let result_a = something();
    let result_b = something();
    if let Err(_) = result_a.and(result_b) { // or pattern matching or something else
        …
    }
Having a reified object also allows building abstractions on top of it much more easily e.g. if you call a library and you want to convert its exceptions into yours you need to remember to

    try {
        externalCall();
    } catch (LibraryException e) {
        throw MyException.from(e); // because that might want to dispatch between various sub-types
    }
and if you don't remember to put this everywhere the inner exception will leak out (that's assuming you don't have checked exceptions because Java's are terrible and nobody else has them).

Meanwhile with results the Result from `externalCall` is not compatible with yours so this:

    return externalCall();
will fail to compile with a type mismatch, and then you can add convenience utilities to make it easy to convert between the errors of the external library and your own, and further make it easy to opt into an exception-style pattern. e.g. Rust's `?`

    externalCall()?
is roughly:

    match externalCall() {
        Ok(value) => value,
        Err(error) => return Err(From::from(error)),
    }
(there's actually more involved these days, and a second intermediate trait, but you get the point: in case of success it just returns the success value, and in case of failure it converts the failure value into whatever the enclosing function expects, then directly returns from said enclosing function).


People are quick to advocate anything from functional programming / academia here. Doesn't mean it would necessarily improve life of an ordinary programmer.


Having done a big tour of functional programming ideas in the last couple of years, I've found almost none of them to be generally helpful for the ordinary programmer, except sum types, which enable the option type and result types. Though even just having the one special casing those two and not exposing sum types to the language would be most of the benefit.


that's true, but realize that golang is largely shepherded forward by a company with a legacy of C++ (maybe lesser C/java) heritage. you aren't getting legacy C++ programmers on board with "optional" types: they'll riot and pull the purity card (this doesn't look like "my C++"). google is probably grateful these people are no longer returning -1, -2 etc. for errors from their functions.

consider things like the golang date formatting string. to anyone not well versed in a C++/C ecosystem, the golang date formatting string is absolutely nuts. it is complicated as hell and doesn't really make any sense. but consider the reaction of a C++/C developer: they're probably quite comfortable with it, because it's basically stolen from C. functions like itoa and atoi harken back to a """simpler""" time, despite being virtually indecipherable to anyone who didn't start their careers with that stuff.


C++ has been adding things like optional types; std::optional was released in C++17, there's talk of maybe pattern matching coming soon...


Java has Optional as well, but as you well know, you can't just bolt ADTs onto the side of a language and wash your hands of it.

The whole system has to be designed around it to get the benefit of it. Java programmers will be checking for null until the last line of Java is written.


Sure, but the argument was that folks would reject it, when they in fact have explicitly added it. Its effectiveness is another story; not only along the axis you were talking about, but also others, but that's a separate conversation.


The language maintainers adding new features to appeal to people newer to the language is not mutually exclusive with veterans of the language not adopting the new features.


Some form of optional has been in wide use in the c++ community for more than 20 years.


> Java programmers will be checking for null until the last line of Java is written.

Not necessarily. Adding nullability types to Java is a smaller undertaking than adding generics was.


Yes, in fact Go's paradigm for errors is a clever half step towards option types.


The authors just happened to work at Google ,while having a manager that supported their work (check Go Time podcast), most of Google's relevant products keep being done in C++, Java and Python, and they are one of the biggest contributors to LLVM/clang, and ISO C++.


Its podcast #100 for anyone who want to listen. https://changelog.com/gotime/100


Rust is great: it follows modern programming techniques and theory, but it focuses a little too much on zero-cost abstractions, and because of that the abstractions are a bit complicated.

Go is easy to learn but poorly designed with an incomplete type system hence all these strange issues.

There is a vacuum that exists between Rust and Go. A language that utilizes modern Algebraic Data Types (like rust) but does not necessarily need to create abstractions just to make everything zero cost (like Go).


It's not a void. Standard ML fills the niche well and OCaml is getting close (just waiting for multicore support). The issue is finding a company that wants to put in resources.


Functional is ideal, but these languages are harder to learn and not intuitive (like Rust). Outside of idealism we need a language that can be procedural, simply because that is what people are used to.

Something like Go with ADTs.


Ocaml and SML are not the Haskell dream world. They allow imperative code, side effects, and mutation.

The difference is that they have sane defaults (eg, immutable until you specifically ask for mutations) and an actually sound type system (no null exceptions, exhaustive pattern matching, good generics, etc)

SML in particular was designed to be easy to learn and implement and succeeds rather well on both counts. It also has a standard instead of the implementation being the spec.


Interesting. What is your opinion is preventing SML from filling the vacuum in terms of adoption? Is it just marketing?


There are three major implementations: Poly/ML, SML/NJ, and MLton (the last two are used together a lot, as MLton's whole-program optimizer can take some time). Most of those coders work at universities, and the biggest projects (for Poly/ML at least) are large theorem provers. They don't really focus on the software problems of more typical businesses.

There are very few languages that rise to prominence without corporate intervention. SML is a solid foundation, but the ecosystem is somewhat lacking. I don't really know aside from that. Even though SML syntax isn't difficult or particularly radical, it isn't in the C family which (I believe) makes it a no go for lots of companies.

EDIT: to answer more clearly, we simply need more dev time to create and improve the library situation and that basically demands a corporate patron.


Came here to say OCaml would be the sweet spot.

Sorry about multicore. It's like Perl6: a dream which will never come true, or you accept F#.


Note that Perl 6 has been very much a thing since December 2015 (first official release). However, last October it got renamed to Raku (https://raku.org using the #rakulang tag on social media). And it is still very much a thing. If you want to keep up to date, you should check the Rakudo Weekly News (https://rakudoweekly.blog).


Perl6 exists.. well, it did for a year or two until it was renamed Raku.


Yes, thank you! Going without ADTs and pattern matching after having them is unbearable. All I really want is Rust but easier to use, maybe I should just bite the bullet and dive into Rust?


From an ex Go-er (5 or 6 years professionally?), now in Rust for a bit over a year, I can say that Rust in the majority of the cases is "just as easy" as Go.

The thing with Rust is it gives you a lot more of the complexity rope if you desire to hang yourself with it. But, a realization I had early on, was that I didn't _have_ to. Not everything has to reuse perfect lifetimes or maximum possible generics. You don't have to chase every latest feature (Async, I'm looking at you). Without all of that, Rust is still an amazing language.

What I found most amazing after leaving Go was not something I expected: iterators. Being able to easily mutate complex data structures, filtering in complex ways, zipping, chaining, etc. I could do the same exact thing in Go mind you, but in Go I found myself writing helper functions all over the place. In Go, my code felt so spread out, and was hard to just look at in one screen to understand. Rust (and iterators) made the logic so concise that you could view it in one screen and make sense of it.

Keeping code "locality" was oddly, by a large margin, my favorite thing about Rust.
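A small sketch of the kind of iterator chain described above (the data is made up):

```rust
fn main() {
    let names = vec!["alice", "bob", "carol"];
    let scores = vec![90, 55, 78];

    // Zip, filter, and map in one readable chain -- logic that in Go
    // would typically be spread across helper functions and loops.
    let passing: Vec<&str> = names
        .iter()
        .zip(scores.iter())
        .filter(|&(_, &s)| s >= 60)
        .map(|(&name, _)| name)
        .collect();

    assert_eq!(passing, vec!["alice", "carol"]);
}
```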


Haha


TL;DR: I am on Windows and Go doesn't play well with its weird filesystem. It is all Go's fault for relying on widely used server OS semantics, and therefore Go's simplicity is a lie.


i think your problem is with windows, not go


Articles like this are evidence of Go's huge success.


What a ridiculous and narrow thing to get so upset about.


It's not ridiculous. Functions should not be returning garbage values when an error occurs.

This is the worst part of javascript and certainly it's not pleasant to uncover this in Go.


If somehow we had a way of checking Golang's source and submit our desired changes...


I seriously hate this argument.

Go and have a look at the issues the Golang team is currently discussing. Do you seriously think that everything can be fixed by a simple request?

Most of the time it wouldn't fit with the way Golang is going. It's not a critique of some bugs in Golang source code but the mentality and flow surrounding changes.

Do you expect that a change submitted by the author, overhauling the whole way Golang handles Unix vs Windows, would be accepted?

I do not agree with the author, but that is fine. It's fine for me; I understand it's not good for his use cases. Saying "duh, just submit your request" is as stupid as it gets.


Obviously what I mentioned is just an oversimplification. I expect that when some particular piece of software (open source in this case) is causing major trouble to a big chunk of its users, they get together to fix it.

In the particular case of this user, some of the problems aren't really widespread, so I can imagine that if they were, they would've been taken care of.

I am sorry for using sarcasm to take a detour from my real point and I was just making some light-hearted fun about op's problem.


Apologies for not getting the sarcasm, it seemed real enough.


If you don't care about Windows, half of the rant is just not interesting for you. So even if you accept that he is right on every single point, excluding those parts there aren't a lot of problems left. EVERY language has problems, even Rust.


Clickbait title.

Should be titled "Golang doesn't work well with Windows".


Granted, the article is pretty long, and spends so much time talking about this Windows pitfall that I was about to abandon it. Then it gets to other examples, like the monotime issue, which I think is a better illustration of what he is advocating.


Specifically, the filesystem. Other parts (memory management, cross compilation, goroutines, channels, etc) work just fine.


I can't help but feel Go is the new Javascript. Everyone wants to complain about how its semantics as a language do not align with their favorite programming paradigm. In this case, having complex, algebraic type-based abstractions that attempt to accurately reflect subtleties that are rarely important.

Yes, Go, like Javascript, has unique failure cases and subtleties, but they are (as of 2020) very productive languages within their particular paradigms. That's not to say either language is beyond criticism, of course. But it's a little silly to think that a language that supports 99.9% of writing a service well, but does the 0.1% badly as a tradeoff for simplicity, is a fundamentally broken language because it doesn't share those aspirations. We might as well be complaining about the lack of pointer arithmetic in Python.


What is the point of a ten-page rant like this? If the guy doesn't like coding in Go, just stop using Go, problem solved. How many more times do we have to have the "language X is different than language Y and I hate feature Z" discussion? These are popping up almost daily. We could probably automate generating a daily rant with commentary, and let all the Joe Nobody coders get back to whatever they are trying to accomplish.


Hardly anybody writes a retraction after three years of “mongodb is great in production” — they silently switch to a new product, and maybe say something positive about it too. These kind of rants are hard-earned battle scars of former zealots learning their lesson, and should not be discarded.


it is often easier to learn from someone else's mistake than to have to make the mistake yourself. what's not to be gained by learning from someone else's experience? why do you see criticism (with actually-encountered, real-world examples, no less!) as being without merit?


Most of this rant is a disagreement of how Go handles file system differences between Unix and Windows, most of the rest is complaining about some badly written library.

May be good to know if you’re dealing with any of that, but this much effort would be much better served submitting a proposal to change whatever the author is so worked up about. Either the proposal is accepted, or the Go community will provide a response if the proposal is written with due consideration.


I went through the pain points of Go myself, but that was at the beginning, when I was expecting behavior I was used to from previous language(s) and trying to force previously learnt norms onto Go.

Reading the blog post (I wrote something similar that got a ton of views here a few years ago), it sounds more like the issue is between the keyboard and the armchair, not with the language itself.

As with anything else, if you don't like it, don't use it. If you like Rust, Rust away.


I'm sure there's real issues; but this reads as an extended whinge on "Windows and Unix are Different and languages wrap those differently whaaa!"

If you want OS interfaces that look the same wherever; then choose a portability layer that abstracts that for you.


Uh okay, lots of "but windows" and a few misinformed takes about the http lib (using contexts over using a client instance with a timeout set) alongside ripping apart a random package I've never used or heard of for having a huge dependency graph.

Such a long article, for this?


> which makes a lot of problems impossible to model accurately

Impossible?

> (instead, you have to fall back to reflection, which is extremely unsafe, and the API is very error-prone),

Extremely unsafe?

> when you make something simple, you move complexity elsewhere.

Does it? Or did you, in reality, not really make it simpler?

> Go says “don't worry about encodings! things are probably utf-8”

Does it? https://blog.golang.org/strings

It just sounds like the author is very frustrated at some seemingly minor inconsistencies (from their perspective), and the extreme language used for things that are not that extreme are evidence in my opinion. Blogging can be a good exercise to shed some frustration, I definitely understand that aspect. Not sure this needs to be shared as a good example of anything or taken in any light, other than "someone is venting."


It's the fairest point, modeling some issues is a pain in go, particularly dynamic data types (think ActivityStreams).


Debating whether certain domains are more difficult/painful in Go than other languages is valid. Saying they are impossible is extreme.



