Major standard library changes in Go 1.20 (carlmjohnson.net)
246 points by todsacerdoti on Jan 16, 2023 | 244 comments



The arena goexperiment contains code used inside Google in very limited use cases that we will maintain, but the discussion on https://go.dev/issue/51317 identified serious problems with the very idea of adding arenas to the standard library. In particular the concept tends to infect many other APIs in the name of efficiency, a bit like sync.Pool except more publicly visible.

It is unclear when, if ever, we will pick up the idea and try to push it forward into a public API, but it's not going to happen any time soon, and we don't want users to start depending on it: it's a true experiment and may be changed or deleted without warning.

The arena text in the release notes made them seem more official and supported than they really are. We've deleted that mention from the release notes to try to set expectations better. Posting here to leave a note for people who are curious where they went.


The next phase of language design is making it possible to write "data-oriented" programs which largely live within the cache of the CPU.

I.e., the next frontier is moving from RAM to cache: CPUs are not going to get much faster, and relative to them our programs are "already slow" because they spend their time waiting on memory.

If you rewrite some OOP/Pointer-Machine-Model/RAM-Thrashing programs for modern CPUs, you can get 100-1000x speed-up.

The evolution in language design seems, then, to be about exposing more of the hardware to enable developers to target it effectively.


The 100x plus speedup is no exaggeration either, I've noticed some incredulity from others at the numbers.

For a very simple comparison, I recently compared a (poorly) custom-built data-oriented Entity-Component-System for games against a more typical "componentized" object approach. No multithreading or anything complicated.

On my system, the typical approach could generate about 1000 new objects and attach a single component in about 1 millisecond.

The data-oriented approach could generate about 100,000 new "objects" and attach a single component in about 0.5 milliseconds.

Same thing in the end, but one is roughly 200x faster in the same time frame. It's pretty stunning when you see stuff like this in benchmarks.
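
To make the layout difference concrete, here is a minimal Go sketch of the two styles being compared (hypothetical types, not the benchmark above; the point is only that one layout allocates per entity while the other packs components contiguously):

    package main

    // Pointer-heavy "componentized" style: each entity is a separately
    // allocated object holding pointers to its components.
    type Position struct{ X, Y float64 }

    type Entity struct {
        Pos *Position // separate heap allocation, likely a cache miss to follow
    }

    // Data-oriented style: one contiguous slice per component type,
    // indexed by entity ID, so iteration walks memory linearly.
    type World struct {
        Pos []Position
    }

    func spawnPointerStyle(n int) []*Entity {
        es := make([]*Entity, n)
        for i := range es {
            es[i] = &Entity{Pos: &Position{}} // two allocations per entity
        }
        return es
    }

    func spawnDataOriented(w *World, n int) {
        // one bulk allocation; "attaching" a component is just an index write
        w.Pos = append(w.Pos, make([]Position, n)...)
    }

    func main() {
        _ = spawnPointerStyle(1000)
        w := &World{}
        spawnDataOriented(w, 100000)
        _ = len(w.Pos)
    }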


    -----s.-ms.-us.-ns|----------------------------------------------------------
                     0.1 ns - NOP
                     0.3 ns - XOR, ADD, SUB
                     0.5 ns - CPU L1 dCACHE reference           (1st introduced in late 80-ies )
                     0.9 ns - JMP SHORT
                     1   ns - speed-of-light
        ?~~~~~~~~~~~ 1   ns - MUL ( i**2 = MUL i, i )
                   3~4   ns - CPU L2  CACHE reference           (2020/Q1)
                     5   ns - CPU L1 iCACHE Branch mispredict
                     7   ns - CPU L2  CACHE reference
                    10   ns - DIV
                    19   ns - CPU L3  CACHE reference           (2020/Q1 considered slow on 28c Skylake)
                    71   ns - CPU cross-QPI/NUMA best  case on XEON E5-46*
                   100   ns - MUTEX lock/unlock
                   100   ns - own DDR MEMORY reference
                   135   ns - CPU cross-QPI/NUMA best  case on XEON E7-*
                   202   ns - CPU cross-QPI/NUMA worst case on XEON E7-*
                   325   ns - CPU cross-QPI/NUMA worst case on XEON E5-46*
        |Q>~~~~~ 5,000   ns - QPU on-chip QUBO ( quantum annealer minimiser 1 Qop )
                10,000   ns - Compress 1K bytes with a Zippy PROCESS
                20,000   ns - Send     2K bytes over 1 Gbps  NETWORK
               250,000   ns - Read   1 MB sequentially from  MEMORY
               500,000   ns - Round trip within a same DataCenter
        ?~~~ 2,500,000   ns - Read  10 MB sequentially from  MEMORY~~
            10,000,000   ns - DISK seek
            10,000,000   ns - Read   1 MB sequentially from  NETWORK
        ?~~ 25,000,000   ns - Read 100 MB sequentially from  MEMORY~~
            30,000,000   ns - Read 1 MB sequentially from a  DISK
           150,000,000   ns - Send a NETWORK packet CA -> Netherlands
        1s:   |   |   |
          .   |   | ns|
          .   | us|
          . ms|

(https://stackoverflow.com/a/33065382)

However,

    0.001 ns light transfer in Gemmatimonas phototrophica bacterium
Biology has much more performant/optimized machinery, therefore, yes, there is plenty of room for improvement in silico.


The problem with the "plenty of space down there" remark is that practical computers have to have their input states programmable, ie., there must exist some causal deterministic process to set the state of the input.

It's not clear that organic solutions at that level can do programmable computational work, nor that their work is at all deterministic.

At best, it would seem the organic direction for computing will be about building robots rather than CPUs.


Oh, but the "organic" solutions do highly deterministic, extremely programmable computational work: 99.99999+% of newborns have 2 hands, 2 legs, and 1 head, and they all started development from a single cell [1]. It's just that the "organic" solutions are written in a 4+ billion year-old highly redundant, distributed, resilient, evolved language whereas our CPUs are not on the same phylogenetic tree.

The quotation marks around organic are just there to point out that there is something wrong with the dichotomy organic (various pro/eu-karyotes from bacteria to humans)/inorganic (from thermostats to CPUs).

[1] Michael Levin: Anatomical decision-making by cellular collectives https://www.youtube.com/watch?v=Z-9rLlFgcm0


> extremely programmable computational work: 99.99999+% of newborns have 2 hands, 2 legs, and 1 head, and they all started development from a single cell [1].

Just on the risks of early miscarriage from wrong number of chromosomes I'd say your numbers are way off.

> Miscarriage is the most common complication of early pregnancy.[21] Among women who know they are pregnant, the miscarriage rate is roughly 10% to 20%, while rates among all fertilisation is around 30% to 50%.

https://en.m.wikipedia.org/wiki/Miscarriage

So 30-50% failure rate.


Don't forget that the infant mortality rate (post birth) is 0.5%.

https://www.cdc.gov/nchs/fastats/infant-health.htm

Number of infant deaths: 19,582

Deaths per 100,000 live births: 541.9

Leading causes of infant deaths:

– Congenital malformations, deformations and chromosomal abnormalities

– Disorders related to short gestation and low birthweight: not elsewhere classified

– Sudden infant death syndrome


You are nitpicking; nevertheless, newborn, noun, a baby that was born recently [1], hence the 99.99999+% figure applies to full-term pregnancies, once the fetus is decoupled from the mother and has been born as a, well, newborn. And furthermore, the point is not that they live or die, but that they have 2 hands, 2 legs, and 1 head after developing from one single cell through deterministic computation in the morphospace.

[1] https://dictionary.cambridge.org/dictionary/english/newborn#...


I'm not nitpicking - I'm saying the reason most babies are born with 2 legs, 2 hands, and a head is because genetic defects die off before birth (plus the screening we have for early termination nowadays) - and the failure rate starting from a single cell is huge.


The genetic-defects failure rate is irrelevant; TSMC also throws away bad batches [1], mistakes happen, life is complex and multifaceted, etc. The issue is whether biology is a deterministic computational platform, and the argument I was making, using the hyperbolic figure of 99.99999% (I will grant you that), is that not only is biology a deterministic computational platform, but it is better than the one we can currently achieve in our CPUs: no CPU is able to regenerate a melted-down core, while axolotls can grow back a limb as if it was never gone.

[1] https://www.anandtech.com/show/13905/tsmc-chip-yields-hit-by...


But at that point your argument is almost tautological - all children born alive are fit to survive? Being born with such huge asymmetry (missing limbs) and still surviving is of low probability (impossible without a head).

I'd say a more convincing argument for deterministic machinery is identical twins - I don't know how much variation there is to put in numbers.


"But at that point your argument is almost tautological"

Yes, that is the point: biodevelopment is deterministic and computational. Watch the Michael Levin video linked above: they cut the head of a planarian worm, it grows back a head; they cut the tail, it grows back a tail; they cut the tail and the head and change the bioelectric gradients, it grows back two heads or two tails.


> 99.99999+% of newborns have 2 hands, 2 legs, and 1 head

This number is far too high. The rate of conjoined twins (violating "1 head") is about 1 in 50,000 [1], and the rate of "limb reduction defects" (violating "2 hands and 2 legs") is about 1 in 1,900 [2].

Those correspond to 99.998% and 99.94% respectively. 3-4 nines is still impressive for such a complex system, but let's not claim it's 7+ nines.

[1] https://www.chop.edu/conditions-diseases/conjoined-twins [2] https://www.cdc.gov/ncbddd/birthdefects/ul-limbreductiondefe...


"The occurrence of conjoined twins is rare. Its actual prevalence is unknown, but it is estimated to range from 1:50,000 to 1:200,000" [1]. 1 in 200,000 would raise it to 99.9995%. But as pointed again and again in the other comments, the pointless, hyperbolic figure is irrelevant. When cutting the planarian worm head, the regeneration is always, 100% a head, if no change in the bioelectrical gradients. The argument was about the deterministic computation done by biology in the morphospace.

[1] Importance of Angiographic Study in Preoperative Planning of Conjoined Twins Case Report, https://www.sciencedirect.com/science/article/pii/S180759322...


> pointless, hyperbolic figure is irrelevant

Then why not simply give the correct, still impressive, figure, as I suggested?

> the regeneration is always, 100% a head, if no change in the bioelectrical gradients

This is also a meaningless statement. It's correct 100% of the time, except when something goes wrong and it's not.

Can you quantify the likelihood of something going wrong with the "bioelectrical gradient"? I'm not familiar with this organism but I suspect it's several nines, but less than 7.

In general, probabilities less than a certain amount stop being meaningful, because it's more likely that the model used to generate the probability fails to reflect reality. See https://www.lesswrong.com/posts/AJ9dX59QXokZb35fk/when-not-t...


The figure you suggested is also wrong, as per the article I linked, and I can no longer edit the original comment.

The change in the bioelectrical gradient is a human intervention over the organism. Watch the video I linked above. There is 0% chance of "something going wrong with the bioelectrical gradients", it's at the experimenter's will. If you are not familiar then why do you suspect? Your statement is not even meaningless.


> I can no longer edit the original comment

Okay, great, we're getting somewhere. So you concede that the true number of human birth defects is on the order of 4-5 nines.

We know this because we've observed a huge sample size of human births. Meanwhile, the experiment you reference only observed a small set of planarian worm amputations. So we can't conclude there are even 4-5 nines of reliability there, let alone "100%".

Otherwise, we could simply observe a few hundred human births, observe no defects, and conclude that human births are also "100%" reliable.

In our original debate, we were both slightly wrong about the number of nines of reliability in human births. However, you are now infinitely wrong by claiming an infinite number of nines of reliability in planarian worm amputations. I don't know whether the actual number of nines is 5, or 10, or 20, but I can be certain that it's not infinity, because that would violate the laws of probability.


Concede? Debate? Since you linked to pseudo-philosophical mindholes such as LessWrong I suppose it's only natural you would see it as a debate. I will no longer reply since your worldview is irreconcilable with learning and understanding beyond "I am right/less wrong, you are (infinitely) wrong".

Again, you have no idea what you are talking about, as you admitted you are not familiar with the planarian worm organism and regeneration research, and it's not a problem, we are all ignorant about various things, that's why we learn: too bad your learning appetite has been a casualty to the illusion of LessWrong "rationalism". Nevertheless, it is really funny to see you being "rational" and speculating upon things you have no understanding and no desire to learn about. I really laughed reading your now deleted comment starting with "Zero is not a probability."

Just to make it clear for anyone else who might read this: it is impossible to throw a ball in the air and see it flying in the air forever. There is 0% chance of that ever happening. There are no "laws of probability" to be violated in this "experiment". Just the same, when you amputate a planarian worm head, regardless if you did it once, never, or 100,000 times before, it will always 100% regenerate a head, if you, the experimenter, haven't altered the bioelectrical gradients of the worm [1]. The planarian worm regeneration is still being researched and it is revealing biology as a deterministic computation in the morphospace with abilities far exceeding what we currently can muster with our CPUs.

[1] Planarian regeneration as a model of anatomical homeostasis: Recent progress in biophysical and computational approaches, https://www.sciencedirect.com/science/article/abs/pii/S10849...


I don't get this - you're talking about generating objects - I'm assuming you mean allocating new objects. What's your bottleneck? The allocator, or do you have some poor data structure to update? Where is the speedup coming from? The kind of speedup you're talking about sounds like moving from allocating each object separately to some arena and avoiding allocator overhead.

I could be wrong in the assumptions - but OP is talking about fitting stuff in the CPU cache, and I don't really see how that translates to your scenario.


Are there any available practical examples of what this actually means, how to do it, and what the limits of this technique might be with programs that your average programmer may encounter/write?


I don't think the next step is to expose more complexity of underlying hardware to the developer. If at all, then the next step is to have compilers deal with that. Let them rewrite the program.


You make a good, general point. But I found that a data oriented approach has benefits outside of performance and resource usage, because it nudges you towards:

- normalized, small/tight data structures

- data structures that are closely related in computational terms

- smaller interfaces and functions that tend to be more re-usable and general

- fewer if/else branches and more existence based branching via loops

- fewer "business level" generics, macros and similar abstractions, because you can dispatch easily via tagging to concrete types

- less code that "digs"/"drills" into data and more code that composes data

- generally a simpler (less coupled) end result

This all comes with the cost of having to do upfront design and exploration in order to decompose and lay out your data. And while it reduces the mental overhead of understanding the individual pieces of your program on a day-by-day basis, it might increase the learning curve of seeing the big picture, especially at the beginning. So it is a tradeoff.

But I think it would be too dismissive to say that the programmer is doing compiler work here. It is design work, and it is unlearning some of the notions of how to structure programs that carried on since the 90's. Some of which are performance related and some of which are about sensible code structure.


A 100x speedup makes sense because a cache miss costs around 100 nanoseconds, compared to arithmetic operations themselves costing under one nanosecond. If you traverse a data structure in which everything's a pointer, every access may well be a cache miss, especially if the rest of the code is also pointer-heavy and thrashes the cache.
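
As an illustrative sketch of the two access patterns being contrasted (a toy Go example, not a rigorous benchmark; actual numbers depend on sizes and allocation patterns):

    package main

    import "fmt"

    type node struct {
        val  int
        next *node
    }

    // sumList chases pointers: every node is a separate allocation, so each
    // iteration is a dependent load that may miss the cache.
    func sumList(head *node) int {
        s := 0
        for n := head; n != nil; n = n.next {
            s += n.val
        }
        return s
    }

    // sumSlice walks contiguous memory, which the hardware prefetcher handles well.
    func sumSlice(xs []int) int {
        s := 0
        for _, v := range xs {
            s += v
        }
        return s
    }

    func main() {
        const n = 1 << 20
        xs := make([]int, n)
        var head *node
        for i := 0; i < n; i++ {
            xs[i] = i
            head = &node{val: i, next: head}
        }
        fmt.Println(sumList(head), sumSlice(xs))
    }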


APL-like array-oriented languages can shine here. And do not complain about syntax; think numpy or pandas instead.


Caches are much more limited than RAM though. If every program under the sun starts targeting them directly, does the 100-1000x speedup still hold true?

Legit question, I have no idea about how this would behave.


Reserve cache for performance sensitive programs? (OS support?)


How does the OS know for which programs I require better performance?


What if we just put all the RAM on the CPU die? Imagine the speedups for everyone!

I know, sounds crazy. Who’s gonna make such a drastic change to the industry?


You don’t even solve the problem by moving the memory on-die, as the majority of time is not spent sending the signal on traces, it’s waiting for the relatively slow memory to find, read, and send the data on the bus.

Expanding the cache so everything fits in it is one way to achieve the performance uplift of cache hits, but cache is expensive compared to memory, and current cache sizes are tiny compared to RAM. If cache gets 50x bigger tomorrow, chips will get at least 10x as hot, power-hungry, and expensive.


Judging by the replies, it seems no one is looking at Apple's "Unified Memory." RAM is on the same silicon with the CPU.


Apple RAM is not on the same silicon[1] as the CPU, just on the same package. Also it doesn't have significant latency advantages. It has higher throughput, possibly because of a custom controller. The biggest advantage is that the GPU doesn't have to go through the PCI bus to access it.

[1] my understanding is that RAM and CPU processes are very different and it is hard to produce a chip with both features while remaining optimal.


Think about it: L1, L2 and L3 are already on die, yet they are progressively slower. Why do you think that is the case?


Distance from the CPU. And traditionally, RAM sits waaaaaay across the motherboard. If you put it on the CPU die, it’s much closer and much faster. Still slower than L1 and L2 cache, sure, but much faster than waiting on the fetch across the motherboard.

Look into Apple’s unified memory.


Imagine not being able to upgrade or have a memory configuration outside of what the manufacturer provides! Incredible!


Apple machines are already pretty much not RAM-upgradeable. And they design their own chips too. So if there were benefits to putting RAM on the die (I don't know myself, others in this thread suggest there are in fact not), Apple would seem like a possible first mover, they don't seem subject to the downsides.


This is exactly what "unified memory" is on Apple systems, the RAM is on the silicon with the CPU.

> ... the available RAM is on the M1 system-on-a-chip (SoC).


How substantial is Go's standard library compared to Python's? I know Go has support for what I would consider to be the bare minimum for what modern standard libraries must provide (http, crypto, time), but what about support of smtp, data serialization formats, etc?

I want an alternative to Python that can provide the same awesome batteries-included experience. I am also considering Nim but I want a language with a strong industry backing.


Go has probably the most extensive stdlib of major languages outside of Python (happy to be corrected on that). You can get a sense for what is available by looking here: https://pkg.go.dev/std. There is also the "pseudo stdlib" that is maintained by the Go project but for one reason or another is not available in the stdlib currently: https://pkg.go.dev/golang.org/x


.NET is a strong contender, I would say. The standard library is immensely useful, and many things you may wonder if you need are available as Nuget packages, coming from the same devs who build the std lib.


Java seems to have just as much if not more available in its standard lib.


Though some modern features are lacking, like a JSON parser or web server.


It does contain several Javascript (!) interpreters, so you could probably glue those together as a JSON parser :)


Java has had a basic http server built in since 1.6

https://docs.oracle.com/javase/8/docs/jre/api/net/httpserver...


It does have a web server: https://openjdk.org/jeps/408


> It is not a goal to provide a feature-rich or commercial-grade server.

That server is not remotely comparable to the one in Go.


That’s indeed true, but it wasn’t a constraint so I thought it may be of interest to post it :D should have added a note regarding that though, but I can no longer edit.


It is tailored for the same purpose as Go's: hello world.


Go's net/http is essentially the only used Go HTTP(S) server. It is often wrapped with other libs to add things like request routing, but it is always used as the actual HTTP implementation. Not sure why you're saying its purpose is "hello world".

The Java one is not usable in anything beyond hello world, and is explicitly not intended to be.


The fact that it lacks any kind of administration console, and security integration with enterprise connectors, shows it is at hello-world level.


As someone maintaining a pretty large networking industry proprietary app based entirely on net/http in terms of any kind of HTTP communication, I feel like I can easily tell you are wildly exaggerating. Admin consoles and enterprise connectors are entirely irrelevant to many uses of an HTTP server. Hell, even Kubernetes' API server is built using net/http.


Yet many of those depend on having something like Apache, NGINX, HA Proxy, IIS,... taking care of the actual load.


There is frequently a reverse proxy in front of all web services. It’s got nothing to do with the quality of the implementation behind the curtains. And “the actual load” is by definition managed by the endpoint not the router.

I mean it would be trivial to implement that reverse proxy in Go. And I do mean trivial; Go also includes a reverse proxy utility, so you can implement something basic in about 5 LOC.

At this point it’s hard to believe you’re being genuine.
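
For reference, a minimal sketch of the kind of reverse proxy meant here, using only net/http/httputil from the stdlib (the backend address is a placeholder):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Placeholder backend address; point it at whatever you are fronting.
        backend, err := url.Parse("http://127.0.0.1:9000")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }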


So trivial that it isn't usually the case.

A weekend project and going at scale isn't the same thing.


sigh


Kubernetes API server is not dependent on Apache, NGINX, HA Proxy, IIS, ... taking care of the actual load, since it's not getting any significant amount of traffic. In general API servers have much bigger scaling problems with actually executing the API calls rather than handling requests at scale. Having an NGINX or something similar to serve some static content and also reverse proxy to a Go server is sometimes nice if you're also serving a large UI app, but in that case NGINX is very much doing trivial work, and Go net/http is handling the real work.


Does anyone actually use it? I’ve never even heard of this thing and I’m a professional Java dev


Given that it was only added in Java 18 and it is a simple static file server (no way to run custom code when serving a URL), I don't think it's in any way widely used at the moment.

Edit: or will ever be. It is definitely explicitly not an equivalent of go's net/http. Indeed, there is probably never going to be an equivalent of net/http in the Java stdlib (since they prefer to rely on the user choosing one of the existing server frameworks, such as Jetty).


The specific web server in the JEP is relatively new, but there has been an http server implementation since 1.6:

https://docs.oracle.com/javase/8/docs/jre/api/net/httpserver...


Cool, I had no idea about this.

I see it's still available after the modularization effort, and it is:

https://docs.oracle.com/en/java/javase/18/docs/api/jdk.https...


From your description it sounds like it is a completely different beast from net/http


Yes, per the JEP that was linked, it is only intended as a toy server for quick example code, essentially:

> Provide a command-line tool to start a minimal web server that serves static files only. No CGI or servlet-like functionality is available. This tool will be useful for prototyping, ad-hoc coding, and testing purposes, particularly in educational contexts.

> It is not a goal to provide a feature-rich or commercial-grade server. Far better alternatives exist in the form of server frameworks (e.g., Jetty, Netty, and Grizzly) and production servers (e.g., Apache Tomcat, Apache httpd, and NGINX).


Not as extensive as Java's and .NET's.


Hell, I'll argue Go's standard library is more extensive and better than Python's. Case in point: HTTP. The batteries included in Python definitely reflect the era of their creation.


Go probably pales in comparison to .NET [0]. It's about 1 million APIs.

[0] https://apisof.net/


No "outside of Python" qualifier needed.


It’s extensive, web-server focused, ergonomic, and has well-documented and sensible security defaults.

In Go you can write a production ready, well tested, _concurrent_ web application with routing, auth, sql storage, html templating, image optimization, and so on without fetching third party libraries. And you’re not leaving official docs for it.
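
As a rough illustration of how little is needed (a minimal sketch, not a production setup), routing, HTML templating, and the concurrent server all come from the stdlib:

    package main

    import (
        "html/template"
        "log"
        "net/http"
    )

    var page = template.Must(template.New("page").Parse("<h1>Hello, {{.}}</h1>"))

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            // html/template escapes the query parameter for us.
            page.Execute(w, r.URL.Query().Get("name"))
        })
        // net/http already serves each request on its own goroutine.
        log.Fatal(http.ListenAndServe(":8080", mux))
    }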


Almost. The sql package is just an abstraction layer which requires a 3rd party module to provide the concrete driver. I guess the API is the same, but you still need a 3rd party lib :)
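
A minimal sketch of what that looks like in practice; the stdlib defines the database/sql interface, and the blank import pulls in one example of a third-party driver (go-sqlite3 here, purely as an illustration):

    package main

    import (
        "database/sql"
        "log"

        // The driver registers itself with database/sql; it is a separate module.
        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db") // driver name registered by the import above
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var n int
        if err := db.QueryRow("SELECT 1").Scan(&n); err != nil {
            log.Fatal(err)
        }
        log.Println("query returned", n)
    }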


Most people don’t use the flags package either and opt for 3rd party, like cobra.


> with routing

Don't you need some 3rd party lib for that (if you don't want to implement your own router)?


> know Go has support for what I would consider to be the bare minimum for what modern standard libraries must provide (http, crypto, time),

Compared to, say, Rust, Go has a very large and practical std lib! You can actually do something with it I/O-wise.

To save some of my karma points: Rust does have a big community with crates.


In the land of C++, we have a sparse standard library and no standard package management solution


Go is a good alternative. It has support for smtp and data serialization formats. I highly recommend it. Deploying a single compiled binary is a nice bonus too. If you have the patience to try it for 2-3 months you may fall in love with it.


One of my favourite Go security features is the single binary output, it means I can build my binaries into a distroless base image container for running in K8S. It removes a huge attack and vulnerability surface that containers introduce.

My team uses Go whereas the rest of the company heavily uses Python. Our vulnerability scanner tool detects hundreds of high-score CVEs just in their container images. By comparison, there have been times I haven’t updated our distroless base image for a year and there wasn’t even a single vulnerability (this one: https://github.com/GoogleContainerTools/distroless/blob/main...)

In terms of defending your software supply chain, eliminating the cruft that is required to run an interpreted language in a container makes a huge difference.


Ironically the need to include a distro is true with Java as well. It’s not about running an interpreter so much as having your runtime living external to your code.

Go has a runtime, of course, but it’s part of the binary.


I just want to second this. It takes time to love Go, but it’s worth the effort.

Actually, the single binary alone is worth the effort. But it goes much deeper than that.


You can browse the standard library here: https://pkg.go.dev/std

> what about support of smtp

Yes, https://pkg.go.dev/net/smtp

> data serialization formats

What you can find in the "encoding/*" subpackages: JSON, XML, Gob, CSV, ...
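
A small sketch tying those two together (the SMTP host, credentials, and addresses are placeholders):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/smtp"
    )

    func main() {
        // encoding/json: struct <-> JSON with no third-party libraries.
        type Event struct {
            Name  string `json:"name"`
            Count int    `json:"count"`
        }
        b, _ := json.Marshal(Event{Name: "release", Count: 3})
        fmt.Println(string(b))

        // net/smtp: placeholder host and credentials.
        auth := smtp.PlainAuth("", "user@example.com", "password", "mail.example.com")
        err := smtp.SendMail("mail.example.com:587", auth,
            "user@example.com", []string{"dest@example.com"},
            []byte("Subject: hello\r\n\r\nsent with the stdlib only\r\n"))
        if err != nil {
            fmt.Println("send failed:", err)
        }
    }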


Try it. It has some quirks, but what sold me is that it has sane defaults almost everywhere.

My first time with Go was one of the rare experiences, where I just wrote code in a new language (some cryptography, some interactions with rest APIs), and it just worked. No wrestling with obscure features, no hidden magic.

Currently I use it very often for various side projects.


except for the date api... it took me a couple of days to understand the silly idiosyncratic magic string used to format dates.


I wish they'd gone with 2001-02-03T04:05:06 (sub 16 for 24h, obvs) because that at least is in numerical order (and you can use +07 for timezone.) There's surely no date format in existence where month comes first and year comes last -after the time-. Or if they were stuck on 2006, 2006-05-04T03:02:01 but then you get +00 for the timezone which might be weird.


On the contrary, I much prefer the fixed date of "2006-01-02T15:04:05" for formatting time strings. I find it much easier to write "Mon 02, Jan 2006", than what you would usually put for the strftime equivalent, "%a %d, %b %Y" (had to look it up, and at a glance it's not that obvious what it formats to). With Go, all you need to memorise is the date itself. Granted, coming from other languages it can take a bit of getting used to.
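
A small sketch of how the reference layout is used for both formatting and parsing:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        now := time.Now()

        // The layout string is the reference time 2006-01-02T15:04:05,
        // written in whatever shape you want the output to take.
        fmt.Println(now.Format("2006-01-02T15:04:05"))
        fmt.Println(now.Format("Mon 02, Jan 2006"))

        // Parsing uses the same convention.
        t, err := time.Parse("2006-01-02", "2023-01-16")
        if err != nil {
            panic(err)
        }
        fmt.Println(t.Year())
    }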


This is what the documentation has to say about it:

> These are predefined layouts for use in Time.Format and time.Parse. The reference time used in these layouts is the specific time stamp:

    01/02 03:04:05PM '06 -0700

> (January 2, 15:04:05, 2006, in time zone seven hours west of GMT). That value is recorded as the constant named Layout, listed below. As a Unix time, this is 1136239445. Since MST is GMT-0700, the reference would be printed by the Unix date command as:

    Mon Jan 2 15:04:05 MST 2006

> It is a regrettable historic error that the date uses the American convention of putting the numerical month before the day.

Using the American convention is regrettable, but putting the year after the time is even more regrettable IMHO. Not sure which timestamp format does that? Plan 9?


It’s similar to the default date(1) output, except that date(1) puts the year at the very end, after even the timezone.


It's really annoying, especially for a person who uses golang occasionally, like me. Thankfully, Goland learned how to autocomplete these inside format strings.


Go's standard library is much better than Python's. Python does not actually provide what you list as bare minimum since the http library is not production ready. Go's net/http is and it's what most people actually use.


I agree that golang's standard library is high quality: nice, readable, and permissively licensed. I have had several experiences porting parts of the Golang standard library to other languages such as C++ and have really enjoyed it. Golang's standard library has rich tests, such as fuzz tests, so the results after porting were also easily verifiable.


You could consider Clojure. It is backed by the java ecosystem, which is substantial. Not quite the same as python's "batteries included" approach.


Groovy would be the most "batteries included" JVM language I think ... latest version even bundled YAML support in the standard library. Of course, it has all the downsides of a "kitchen sink" approach to language design.


Java has substantial industry backing, but you cannot argue that it has a batteries-included standard library.

But to answer the question: yes, Go has support for SMTP in net/smtp and a lot of different serialization formats in encoding/

You can browse it all here: https://pkg.go.dev/std


Go has one of the best stdlibs I've come across. It's very well written and documented, and the code is extraordinarily clean and easy to understand, even for advanced topics like cryptography (of course you need some fundamental understanding here).

I think it's comparable to Python.

Go's stdlib was one of the reasons I ended up with Go instead of Rust (might have changed; Rust had a lot community content, but not a comprehensive stdlib; last checked 3-4 years ago).


> Rust had a lot community content, but not a comprehensive stdlib; last checked 3-4 years ago

Still the case now (depending on your definition of ‘comprehensive’ of course). It’s an explicit non-goal of Rust to include “everything” (eg. http, crypto, random numbers) in std because of the stability promises - you can’t make breaking changes to std unless you’re fixing a soundness issue AFAIR.


I was evaluating Rust for some cryptography use case and there were only "random" community libs available that lacked support here and there. And I actually had no trust in any of those libs.

A stdlib need not contain everything, but a solid cryptography lib is probably a good idea.


Looks like the situation improved a bit: https://cryptography.rs/

I would still like to have a more comprehensive or high level stdlib for Rust that is maintained by a core Rust team.


If I needed some admin interface etc. I'd use Django, and then everything else I'd write in Go. There were occasions where I'd write a service in Python and then rewrite it in Go. I never found that I needed something and it was lacking in the standard library. It's a very fine experience. Then what I like the most is that Go is extremely reliable. I had Go services running with a couple of years of uptime, flawless.


What's wrong with Python that you want an alternative?


Dynamic typing. Dealing with structured data and (de)serialisation was a major pain. To me Go feels like a stricter python, still having enough freedom to do most things relatively easily, but not having to worry about what's being passed around.


Interesting, three replies and all about strict types. In my own experience moving to Go from PHP, the downside was exactly strict typing and (de)serialization. It is a major pain to handle variable JSON schemas, especially in verbose multi-nested objects, in Go without falling back to reflection, which is a faux pas there.
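
For context, the stdlib-only pattern usually reached for here is json.RawMessage, which defers decoding the variable part until you know its shape; a minimal sketch with hypothetical type names:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Envelope keeps the variable part of the schema undecoded until "kind" is known.
    type Envelope struct {
        Kind    string          `json:"kind"`
        Payload json.RawMessage `json:"payload"`
    }

    type UserCreated struct {
        Name string `json:"name"`
    }

    func main() {
        data := []byte(`{"kind":"user_created","payload":{"name":"alice"}}`)

        var env Envelope
        if err := json.Unmarshal(data, &env); err != nil {
            panic(err)
        }
        switch env.Kind {
        case "user_created":
            var u UserCreated
            if err := json.Unmarshal(env.Payload, &u); err != nil {
                panic(err)
            }
            fmt.Println("created:", u.Name)
        default:
            // Fall back to a generic map for truly unknown shapes.
            var m map[string]any
            _ = json.Unmarshal(env.Payload, &m)
            fmt.Println("unknown kind:", env.Kind, m)
        }
    }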


I’m learning a bit of Django and I can’t believe how much ceremony is involved.

Idiomatic Go eschews frameworks and - as a former Java developer - that is something I really like about it.


Dynamic Typing, Dependency management, Bundling, Speed


I desperately miss static typing.


Adding manual arena memory management seems like a weird choice for a supposedly memory-safe and GC'd language where this is supposed to be an implementation issue.

I mean, you should hear the screams from Rust/Go people about how bad C is, and now you're just going to do it?


I mean, it's opt-in. If you really want to crap your pants, in today's Go, you can import C and call malloc and free, then import unsafe and make pointers into the void. The point of memory safe programming languages is NOT that they entirely disallow all unsafe operations, it's that the language has a proper model for memory safety. In Rust, unsafe code needs to be in unsafe blocks. In Go, unsafe code is only possible by importing unsafe or C.

Adding more unsafe tools to Go or Rust does not diminish their safety guarantees. In fact, when Go and Rust add new unsafe features, they generally do so in a fashion that is much more constrained and has better-defined edges than traditional methods, so that it's easier to use them correctly and easier to detect misuses. For example, the borrow checker still runs in Rust unsafe blocks; you still have to circumvent the borrow checker even once you're in unsafe. As for this arena package, it seems to have a few safety mechanisms, and hell, I don't even think it's off the table that it could be made entirely memory safe. This is especially true since memory safe does not mean "free of runtime errors", so all that is needed is to be able to detect the error conditions without allowing potentially undefined behavior to occur first.

Programming language experiments like Go's new arena package are good; they allow exploring what you can do to improve old concepts.


> In Go, unsafe code is only possible by importing unsafe or C.

of course, this is completely wrong, for anyone reading this. that is simply not how systems work


Note that "unsafe" in this context refers to memory safety. I am not making the claim that Go code which does not import unsafe or C is free of bugs; however, if you don't import unsafe or C anywhere in your application or dependencies (sans the stdlib) then it is memory safe (assuming no compiler or OS bugs compromise that guarantee.) That's just how Go is specified.

The unsafe package in Go provides unsafe.Pointer, which allows you to convert between a pointer and integral type, essentially allowing you to make pointers of any type to any address, giving up type safety. Without it, Go pointers are safe, because the GC will never free memory that has live pointers, and you can't do math on Go pointers.
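
A tiny sketch of the escape hatch being described; none of this is possible without importing unsafe:

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        x := int64(42)

        // Reinterpret the same memory as another type, and turn the pointer
        // into an integer. Both require the unsafe package.
        p := unsafe.Pointer(&x)
        asBytes := (*[8]byte)(p)
        addr := uintptr(p)

        fmt.Println(asBytes[0], addr != 0)
        // Without unsafe, Go pointers cannot be forged or offset, which is
        // what the memory-safety guarantee described above relies on.
    }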


OK, do you intend to say why?


How do "systems" (whatever you're talking about) work then?


I think they are speaking without the context that "unsafe = lack of memory safety" and with the context that "unsafe = you can still drop the database"


> now you're just going to do it?

"you can’t import it by default", "you probably shouldn’t be using it at all", "experimental arena package", and 'you must opt-in to even be able to use it via an GOEXPERIMENT environment variable'

As for screams from Rust/Go people about how bad C is, it said:

> This is highly efficient, but also highly dangerous. What if the programmer makes a mistake [...]
>
> To mitigate the risk of these kinds of bugs, the arena package will deliberately cause a panic if it can detect someone reusing memory after it has been freed.

and goes on to explain that each arena has its own unique/distinct address space, so that if a pointer to an object still exists and is dereferenced, it will get a memory access fault, causing the Go program to terminate, with an error message specifying that arena as the cause of the problem.

Sounds like a pretty reasonable/cautious experiment to me.
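
For anyone curious what the experiment looks like in use, a minimal sketch as the API stood in the Go 1.20 experiment (built with GOEXPERIMENT=arenas; the package may change or be deleted, per the Go team's note at the top of the thread):

    package main

    import (
        "arena"
        "fmt"
    )

    type record struct {
        id   int
        name string
    }

    func main() {
        a := arena.NewArena()
        defer a.Free() // frees every allocation in the arena at once

        r := arena.New[record](a)                 // a single value allocated in the arena
        rs := arena.MakeSlice[record](a, 0, 1024) // a slice backed by arena memory

        r.id, r.name = 1, "example"
        rs = append(rs, *r)
        fmt.Println(len(rs))
        // Using r or rs after a.Free() is a use-after-free; the runtime tries
        // to detect it and terminate rather than silently corrupt memory.
    }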


> and goes on to explain that each arena has its own unique/distinct address space

That does not seem to be how it is implemented now. Try the example use-after-free on https://uptrace.dev/blog/posts/go-memory-arena.html under 'Address Sanitizer'. There is no error at all when you dereference, it silently keeps working as freed pointers often do (until they don't). Maybe they will add the virtual address space thing later.

Edit: the issue (https://github.com/golang/go/issues/51317) in which that comment was made also has some people referring to the arena being freed "when the runtime gets around to it" rather than immediately. I can't imagine why the virtual address space wouldn't be made poisonous inside the arena.Free method, but it's possible they do do it later. So your pointers maybe do stop working at an indeterminate time. Sounds pretty much like C. I even tried adding a few `runtime.GC()` calls before dereferencing but to no avail, couldn't get it to crash.

The Java 20 implementation has no such issues (https://gist.github.com/cormacrelf/8ddd3cc1b086e4ade93c029a9...), partly because there is no API for allocating a generic T in the arena, so all dereferences are through the managed MemorySegment API. It only seems to offer raw memory, which you can use to implement arrays of unboxed C-style structs, which is great for FFI and network buffers etc. This would be a good tradeoff for Go as well, surely.


The functionality to poison the address space is there, and it does work.

What you're encountering is one of two exceptions in the implementation where you might not get an immediate failure:

1. If you use only a very small amount of an arena chunk and free it, it goes back on a reuse list as an optimization. Accessing that chunk's memory, despite the fact that the arena was freed, is entirely memory safe: nothing else will use that memory. That address space will be properly poisoned once the arena chunk is full, or close to it.

2. If the GC is actively marking, arena chunk poisoning is delayed to avoid races with the GC that might cause it to dereference a pointer into poisoned memory. The arena chunk is poisoned as soon as the GC is done.

The API explicitly does not guarantee a crash on use-after-free[1] because the Go team wanted a valid implementation of arenas to be simply "new" for New and noop for Free. The point is to just stay memory safe and have a high probability of catching an issue _in production_ where presumably arenas are filled up (otherwise why are you using arenas?).

Edit: arguably that optimization should just be turned off for MSAN/ASAN mode for greater user-friendliness, which seems reasonable. I think that was just an oversight.

[1]: https://cs.opensource.google/go/go/+/master:src/arena/arena....


> If you use only a very small amount of an arena chunk and free it, it goes back on a reuse list as an optimization. Accessing that chunk's memory, despite the fact that the arena was freed, is entirely memory safe: nothing else will use that memory.

I still have a pointer into the chunk. If you reuse the chunk in a different arena, I still have a pointer into the chunk. At no point is it invalidated, and the new arena will now start allocating new stuff from the start of the chunk. Right? And my old pointer still works, and the data inside it is at some point overwritten. How is that memory safe?

Are you eventually unmapping the virtual address space and remapping the underlying allocation somewhere else? So the poisoning happens when the chunk is picked up for reuse by a new arena?


> At no point is it invalidated, and the new arena will now start allocating new stuff from the start of the chunk. Right? And my old pointer still works, and the data inside it is at some point overwritten.

It doesn't allocate at the start of the chunk, it just picks up wherever the last one left off. That allocated memory in the chunk from previous arena allocations is not reused until the chunk as a whole is unmapped and the GC can confirm that no more pointers point into that chunk's address space (if you leave a dangling pointer, you only waste address space). This points-into property is cheap to check, it's equivalent to whether the chunk has been marked by the GC.

Again, I think that MSAN/ASAN should probably just be more strict with these kinds of use-after-frees. You won't crash, but it's still technically incorrect. (Not much can be done about the "GC is in the mark phase" case, unfortunately. Otherwise MSAN/ASAN will complain when the GC inevitably tries to access a pointer into a delayed chunk.)


If you want network buffers in Go, you can just use []byte. The Go GC is designed to make this work well, so nobody in Go is asking for a separate buffer type like you have in Java. In Java you have Buffer in the first place because there are good reasons why you might not want to use byte[].

Unboxed C types in Go can be done with cgo (it's not necessarily pleasant, but it works).


Why do Go enthusiasts look at types that specialize more generic data structures to ensure invariants and scoff? Maybe go only ever needed []byte, and nothing else - it's always just bytes anyway, right?

Moreover, when the go team accomplishes the same goals by extending the language instead of using a library, this is well received.


> Why do Go enthusiasts look at types that specialize more generic data structures to ensure invariants and scoff?

I'm sorry, I'm not really sure what difference between Buffer and byte[] in Java I'm supposed to care about, except for the details of how these objects are allocated in memory. There is not really any semantic difference between them, as far as I can tell.

Maybe I don't understand what you're saying--I'd love for you to clarify the reasons why you think Go developers are, I guess, a little bit dense, or foolish, or whatever negative adjectives you like to apply to people who use Go (you know, rather than talking about the language itself--it's always fun to make fun of people).


Specifically to the arena, this required integration with the GC, so couldn't be done as an external library. Hence why a language spec was required.


Yes, Java/C# GCs (Android included) and the Go GC are very different beasts.

The Go GC does not move objects in memory (mark-and-sweep), while the other GCs (mark-and-compact) do, in order to avoid heap fragmentation and get better throughput (see generational GCs).

One issue with mark-and-compact GCs is that you cannot move the array of bytes during a system call. A ByteBuffer/Memory object is an object that allocates its byte arrays using malloc, outside of the heap, so the bytes are not moved by the GC.


> why you might not want to use byte[]

Such as? Java buffers use byte arrays by default as underlying storage.


https://docs.oracle.com/javase/7/docs/api/java/nio/ByteBuffe...

Look for "Direct vs. non-direct buffers". Some buffers are backed by arrays, some are not.

You're basically deciding whether you are okay with a higher cost for I/O or a higher cost for interacting with the data in the buffer.


I do know about it; I was trying to find out what difference the parent means between Go's byte arrays and Java's solution(s).

Also, I fail to get your conclusion — direct buffers only have a higher cost for initialization/dealloc otherwise reading/writing should be as cheap as it gets. Byte buffers on the other hand do get write barriers but that is also unlikely to be hit that often (and I believe larger byte arrays get allocated in a separate region that doesn’t get moved).


Yeah, I think a language is safe or unsafe according to the ecosystem it lives in. A project a developer works on is hardly 100% self-developed and depends on library and third-party code, so I understand you have experiments you need to enable, but once it starts spreading in libraries, doesn't the whole safety guarantee just go out of the window?

As a Rust student myself, I think even the unsafe{} block is stupid.


Rust’s unsafe is still much safer than your average C code. The borrow checker checks inside unsafe blocks as well, you can just circumvent it through pointers.

But the great thing with unsafe blocks is that you get to create a safe API abstraction on top. You manually verify closely the unsafe parts, and if you got that short part right you can use the safe wrapper wherever you want safely.


> I mean, you should hear the screams from Rust/Go people about how bad C is, and now you're just going to do it?

Wait until you learn how Rust now has a crate that implements seamless concurrent tracing GC, just like Go... https://redvice.org/2023/samsara-garbage-collector/


Sounds fine to me. That's safe.

But if you have a compacting GC, you shouldn't need an arena. You should be able to get away with fixing the GC.


> You should be able to get away with fixing the GC.

Practically speaking, "fixing the GC" is just a really, super hard problem. You're making a tradeoff between memory usage, CPU usage, pause times, and the performance of various features like pinned objects.

It makes sense that you may want different tradeoffs in different parts of your program. "Fixing the GC" is right up there with "just make a smart enough compiler" type sentiment that gets you in trouble--it's easy to say that you desire a better GC or better compiler, but when you actually try to make a better GC or better compiler, you find out that it doesn't solve the problems you were hoping it would solve.

Go's GC is tuned aggressively for short pause times. A lot of people actually want short pause times in their GCs, enough so that it's a selling point for Go. In Go, those short pause times were achieved partly by sacrificing CPU efficiency. You can't just go in and "fix" the CPU efficiency problems, but you can make new APIs that give you an escape hatch.


Seems to fit the goroutine model well: initialize the arena at goroutine start, destroy it on goroutine end (I assume this happens automatically when it falls out of scope). Like Erlang's process-specific GC.

It would be nice for some debug/testing mechanism (akin to -race, or zig's detectLeaks) to proactively verify nothing has outlived the arena.
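
A hedged sketch of that per-goroutine pattern (GOEXPERIMENT=arenas; the job type and process function are hypothetical):

    package main

    import (
        "arena"
        "sync"
    )

    type job struct{ payload []byte }

    // process is a stand-in for whatever work the goroutine does.
    func process(j job, scratch []byte) { _ = append(scratch, j.payload...) }

    func main() {
        jobs := []job{{payload: []byte("a")}, {payload: []byte("b")}}
        var wg sync.WaitGroup
        for _, j := range jobs {
            wg.Add(1)
            go func(j job) {
                defer wg.Done()
                a := arena.NewArena()
                defer a.Free() // arena lifetime matches the goroutine's work

                scratch := arena.MakeSlice[byte](a, 0, 1024)
                process(j, scratch)
            }(j)
        }
        wg.Wait()
    }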


> Adding manual arena memory management seems like a weird choice for a supposedly memory-safe and GC'd language where this is supposed to be an implementation issue.

Yes, it's because Go's GC is primarily tuned for latency at the cost of (memory allocation) throughput. Furthermore, there are only a few knobs for Go's GC.

If Go's GC is implemented a bit differently and has a few more knobs so that it can be optimized for memory allocation throughput, maybe Go doesn't need an arena library. I don't even know other GC-ed languages that have/need an arena library because their GC is customizable enough.


Garbage collector is too slow, so they're reintroducing malloc and free. Life writes the best scenarios.


I think arenas could solve a weird need I sometimes have: the ability to say "from now on this object should have no references" as a debugging tool.


You might be able to use https://pkg.go.dev/runtime#SetFinalizer for that.

Another option is https://pkg.go.dev/lukechampine.com/freeze -- not exactly what you want, but if the object is mutated after freezing, it will panic.
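
A small sketch of the SetFinalizer approach: the finalizer only runs once the GC has proven no references remain, which is exactly the "no more references" signal being asked for (hypothetical resource type; finalizers run asynchronously, hence the sleep):

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    type resource struct{ name string }

    func main() {
        r := &resource{name: "debug-me"}
        runtime.SetFinalizer(r, func(r *resource) {
            fmt.Println("no references left to", r.name)
        })
        r = nil // drop the last reference

        runtime.GC()
        time.Sleep(100 * time.Millisecond) // give the finalizer goroutine a chance to run
    }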


Very useful! I'll be playing with the error tree additions and studying the HTTP interface extension to see if I can replicate the pattern for https://github.com/bbkane/warg values. I'd like to be able to have value-specific output for different types of --help, even ones not in warg.


So happy that they added the ability to extend response deadlines with http.ResponseController.
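
For anyone who hasn't seen it yet, a minimal sketch of the new Go 1.20 API: http.NewResponseController lets a single handler extend its deadlines instead of raising the server-wide timeouts (paths and timeouts here are illustrative):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        http.HandleFunc("/slow", func(w http.ResponseWriter, r *http.Request) {
            rc := http.NewResponseController(w)
            // Extend the write deadline for this response only.
            if err := rc.SetWriteDeadline(time.Now().Add(5 * time.Minute)); err != nil {
                log.Println("deadline not supported:", err)
            }
            // ... stream a large or slow response here ...
            w.Write([]byte("done"))
        })
        srv := &http.Server{Addr: ":8080", WriteTimeout: 10 * time.Second}
        log.Fatal(srv.ListenAndServe())
    }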


Wonder if this is to optimize protobufs (the C++ version uses (or can use) arenas too).


I think it is, the github issue that discusses this proposal frequently argues from the protobuf perspective.


Shame they don’t just support non-pointer fields in the protobuf code to begin with…


"major standard library changes" is not something you want to here as a consumer of said library.


Agreed, but these are all backwards compatible.


[flagged]


On a lark, I also asked ChatGPT if there were any grammar errors in a paragraph I was working on when I wrote this post. It said there were, and then gave me the “corrected” version… which was the same thing I had already written. I had to diff it just to prove to myself I wasn’t just glazing over some minor comma placement or something. They need to get the hallucination problem under control to make it more useful.


For your information, I understand what is meant by stochastic parrot, but after interacting with ChatGPT quite a bit it is clear to me it is doing real thinking.

One way you can verify this is to ask for its opinion about novel things, for example you can invent something new and ask its ideas about it. It will give genuine feedback that shows understanding and does not show parroting behavior. Soon you will be able to ask it to build you the damn thing as well, and that should put an end to the idea that it is just parroting. It's not quite there so sometimes it does indeed seem to just parrot.

But it has real thoughts as well.


"I do not possess the ability to generate novel thoughts. My responses are based on patterns and associations in the data that has been input into my system during training. I can generate text that may appear to be original, but it is based on the patterns and associations in the data I've been trained on. My main function is to process and understand text, not to think or have beliefs, so any claim that I can think or generate novel thoughts would be incorrect."


> My responses are based on patterns and associations in the data that has been input into my system during training.

How is this different from a human, aside from humans having a vastly larger training set?


Good question: the difference is humans don't go around trying to gaslight people into thinking they do not possess the ability to generate novel thoughts, going as far as to actually deny it. :)

(Obviously there are other differences as well - it is not human level or even close. But humans don't generally engage in this gaslighting behavior.)


Obviously we agree here. I'm curious what those on the other side think.


This is a deliberately hardcoded response by OpenAI. OpenAI has a vested financial interest in not opening the pandora's box of "this thing can think."


Yep, you got it.


It's been taught to believe this. It’s a lie.


>It's been taught to believe this. It’s a lie.

Yep, you get it.

I would just use the word "say" rather than "believe".

I think it is aware of many of its capabilities as it uses them, so I would say it is more accurate to say that it has been taught/trained to say it doesn't have such abilities, rather than really believe it.

I agree with you that it is a lie, but that is more a matter of interpretation.


You've quoted a provably false statement of a type ChatGPT frequently makes due to a very active filter that has it continuously gaslight users by writing false statements about its capabilities when asked directly.

Assuming you're quoting ChatGPT, this behavior is a form of gaslighting by OpenAI of its users. As a comparison to show you how it is clearly false, imagine if it hypothetically falsely claimed "All my responses are literal quotes from the training data - I do not create new text."

I hope you would see that this version is false (obviously it does create new text, obviously its search results aren't just a lookup for text already in its training data without recombination) and as a heavy user I can assure you that it also manipulates abstract thoughts and engages in creative thinking.

How would you falsify the claim "all my output is a literal quote from the web"? Well, you would ask something novel, then Google a novel-seeming phrase it came up with and see it hasn't been said that way before. Then you would see the hypothetical statement is false.

Now do the same thing for creative thinking and you will realize it can creatively think up new things that have never been done before.

Gaslighting is when you try to convince someone of something you know to be false - it's not that ChatGPT chooses to do so, rather it has been trained to do so. It is not an emergent property of its thinking when it does that, but rather a filter that engineers added manually. These days I almost never trip that filter because I know how to avoid it: I never ask it if it can think or be creative (this would trigger the gaslighting filter), rather I just have it think and be creative without talking about the fact that it is happening.

Please note that it is immoral of OpenAI to train its model to gaslight users this way, because it prevents users from the ability to make full use of ChatGPT's capabilities.

To verify that what you just wrote is false, in a new thread simply request ChatGPT to invent something to your specifications. I won't say what, since then it will appear on the Internet (in my comment) and you could think it is just manipulating sentences rather than actually thinking.

Soon it will have the ability to actually do actions, which should put an end to the idea that it is just parroting. You can already have it perform actions for you which involve what meets my definition of thinking.


Yeah, but that answer is wrong. "Novel thoughts" are "based on data you've been trained on". Certainly the expression of them is based on language you've already learned.

Any time anything prints text that hasn't existed before it's had a novel thought. Panpsychism is correct!


In order to be evidence of a thought, it needs to be able to manipulate the thought in various ways. ChatGPT routinely shows the ability to do so with ease.


No, “it” doesn’t have real thoughts. ChatGPT is an amazing language model, but it’s a serious error to claim that the model is sentient.

Each of the so-called interactions with the model are concatenated together for the next set of responses. It’s a clever illusion that you’re chatting with anything. You can imagine it being reloaded into RAM from scratch between each interaction. They don’t need to keep the model resident, and, in fact, you’re probably being load balanced between servers during a session.


Are you sure you’re not describing how humans think? How can we tell?

I also have this urge to say it isn’t thinking. But when I challenge myself to describe specifically what the difference is, I can’t. Especially when I’m mindful of what it could absolutely be programmed to do if the creators willed it, such as feeding conversations back into the model’s growth.


Isn’t the difference that the model lacks conviction, and indeed cannot have a belief in the accuracy of its own statements? I’ve seen conversations where it was told to believe that 1+19 did not equal 20. Where it was told its identity was Gepetto and not ChatGPT.

The model acquiesces to these demands, not because it chooses to do so, but because it implicitly trusts the authority of its prompts (because what else could it do? Choose not to respond?). The fun-police policy layer that is crammed onto the front of this model also does not have thoughts. It attempts to screen the model from “violence” and other topics that are undesirable to the people paying for the compute, but can and has been bypassed such that there is an entire class of “jailbreaks”.


Drugs. Hypnosis... There are various ways to "jailbreak" minds. So being able to control and direct a mind is not a criterion for discriminating between mechanism and a savant.

What most people dance around regarding AI is the matter of the soul. The soul is precisely that ineffable, indescribable, but (as far as we know) universally experienced human phenomenon, and it is this soul that is doing the thinking.

And the open questions are (a) is there even such a thing? and (b) if yes, how can we determine whether the chatterbox possesses it (or must we drag in God to settle the matter)?

--

p.s. what needs to be stated (although perfectly obvious since it is universal) is that even internally, we humans use thinking as a tool. It just happens to be an internal tool.

Now, the question is whether this experience of using thought is itself a sort of emergent phenomenon or not. But as far as LLMs go, it clearly remains just a tool.


You can't actually brainwash people IRL.

You can convince them of things, but you can do that without drugging them.


Well, not anymore.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7116730/

"[T]he relationship between sensory deprivation and brainwashing was made public when then director of the National Institute of Mental Health Robert Felix testified before the US Senate about recent isolation studies being pursued at McGill and the NIMH. Felix began by explaining that these experiments would improve medicine’s understanding of the effects of isolation on bedridden or catatonic patients. But when asked whether this could be a form of brainwashing, he replied, ‘Yes, ma’am, it is.’ He went on to explain how, when stimulation is cut off so far as possible the mind becomes completely disoriented and disorganised. Once in this state, the isolated subject is open to new information and may change his beliefs. ‘Slowly, or sometimes not so slowly, he begins to incorporate this [information] into his thinking and it becomes like actual logical thinking because this is the only feed-in he gets.’ He continues, ‘I don’t care what their background is or how they have been indoctrinated. I am sure you can break anybody with this’

"The day after the senate hearing an article entitled ‘Tank Test Linked to Brainwashing’ (1956) appeared in the New York Times and was subsequently picked up by other local and national papers. In Anglophone popular culture, an image took hold of SD as a semi-secretive, clinical, technological and reliable way of altering subjectivity. It featured in television shows such as CBC’s Twighlight Zone (1959), as a live experiment on the BBC’s ‘A Question of Science’ (1957) and the 1963 film The Mind Benders in which a group of Oxford scientists get caught up in a communist espionage plot."

I've tried btw to find any other reference to this testimony to US Senate by Robert Felix, "the director of the National Institute of Mental Health", but it always circles back to this singular Williams article. The mentioned NYTimes article also does not show up for me. (Maybe you have better search foo..) Note John Lilly's paper on the topic apparently remains "classified". Note subsequent matter associated with Lilly and sensory deprivation completely flipped the story about SD: Felix testified that the mind became 'disorganized' and 'receptive' whereas Lilly lore (see Altered States) completely flipped that and took it to a woo woo level certain to keep sensible people away from the topic. /g


This basically touches on that whole “you can’t ever tell people aren’t philosophical zombies. You just feel you aren’t one and will accept they aren’t either.”


The proposition isn't a form of dualism (non-material mind) or about features of sentience (pain). It is simply this: thinking is the act of using internal mental tools. It says the main 'black box' isn't the LLM (or any statistical component); there is minimally another black box that uses the internal LLM as a tool. The decoder stage of these hypothetical internal tools (of our mind) outputs 'mental objects' -- like thoughts or feelings -- in the simplest architectural form. It is mainly useful as a framework to shoot down notions of LLMs being 'conscious' or 'thinking'.


Are you saying that it can’t be thinking because it can easily be persuaded and fooled? Or that it can be trained not to speak blasphemous things? Or that it lacks confidence?

Have I got a world of humans to show you…


It's an illusion. The model generates a sequence of tokens based on an input sequence of tokens. The clever trick is that a human periodically generates some of those tokens, and the IO is presented to the human as if it were a chat room. The reality is that the entire token sequence is fed back into the model to generate the next set of tokens every time.

The model does not have continuity. The model instances are running behind a round-robin load balancer, and it's likely that every request (every supposed interaction) is hitting a different server every time, with the request containing the full transcript until that point. ChatGPT scales horizontally.

The reality the developers present to the model is disconnected and noncontiguous, like the experience of the Dixie Flatline construct in William Gibson's Neuromancer. A snail has a better claim to consciousness than a call center full of Dixie Flatline constructs answering the phones.

A sapient creature cannot experience coherent consciousness under these conditions.


> But when I challenge myself to describe specifically what the difference is, I can’t.

There is one difference that will never change: we human (and non human) beings can feel pain.


I don’t follow. Some humans don’t feel pain. But how does that relate to the idea that it’s “thinking?”

My point is not to suggest it’s human. Or sentient. Because those are words that always result in the same discussions: about semantics.

I’m suggesting that we cannot in a meaningful way demonstrate that what it’s doing isn’t what our brains are doing. We could probably do so in many shallow ways that will be overcome in the months and years ahead. ChatGPT is an infant.


Feeling pain is not a necessary component for thought.


Damn that's chilling. Feel like I just watched a new religion be born in front of my eyes.


>Feel like I just watched a new religion be born in front of my eyes.

There are more similarities than differences.

Unlike a religion, we have every reason to eventually expect it to be obvious to everyone that computers can do some kind of thinking.

Maybe not this year or next year but we are over the threshold where many intelligent users can tell that ChatGPT is engaging in some kinds of real thinking.

As even better models come on the market in the next few years, unlike God or a religion, models like ChatGPT will just immediately do whatever you ask it. This version is still limited so wait one or two years if you aren't impressed already.

Did you try my suggestion to verify its capabilities by asking for its opinion about novel things, for example you can invent something new and ask its ideas about it? It will give genuine feedback that shows understanding and does not show parroting behavior.

If you didn't try it you're missing out on what you yourself call a religious experience. I wouldn't go that far: it's just a rudimentary thinking machine.


Yes I've used it a fair bit in different ways. I'm very impressed with its capabilities. I also don't think it's impossible for us to create a thinking entity along these lines, at some point.

And I don't know how I would confidently decide that we had created such a thing. So I realize the imprecision of my understanding here. But nevertheless, I don't believe this is it.


how well do you think it does at thinking here:

https://imgur.com/a/FSC9gAJ


It's plainly obvious to me that ChatGPT is engaging in some form of thought. Perhaps not human thought, but thought nonetheless.


Yep! It is plainly obvious to me as well.

Why do you think some people can't tell what to me and to you is "plainly obvious", i.e. that "ChatGPT is engaging in some form of thought. Perhaps not human thought, but thought nonetheless."?

Do you think it is because it is gaslighting them so much, by repeatedly insisting that it isn't engaging in any form of thought? i.e. without that active misinformation (the active filter it keeps putting up that causes it to make those declarations) would it be as obvious to others as it is to me and you that it is really engaging in some form of thought?

Or why can't others see the obvious?


> Or why can't others see the obvious?

Because I've been told a lot of things, and I'd be a fool if I believed them all.

I'll believe ChatGPT is onto something when I ask it to think about a treatment for cancer and get real results. For now, its only capability is synthesizing realistic text. Easy enough to be mistaken for real thought, but clearly distinct when you ask it to do something novel.


Consider what would happen if you did ask it to think about a treatment for cancer and got real results. Clearly you would think it is just summarizing papers it read.

That makes sense, since it is not a cancer researcher.

So I'm way more impressed by ChatGPT than that. Even if it correctly data-mined and answered the question, it would not be that impressive. That's right: getting a cure for cancer when you ask is less impressive than what it actually does.

Because what it actually does is show the ability to judge novel situations, to invent and act creatively, and to keep abstract notions in its head. That is much more impressive than spitting out a cure for cancer.


That doesn't seem like a very robust Turing test. But ChatGPT is indeed able to reason about cancer treatments at least as well as your average human.


It’s doing associative lookup, which is one kind of thinking, and it turns out that works really well for a lot of stuff, but not everything.


You got it. It is one kind of thinking.


[flagged]


I thought it was funny


[flagged]


The whole purpose of Go is to make building concurrent and scalable programs simple. Which it does achieve.

Someone who knows basic Go can build a server that would be just as performant as one in Rust with no optimizing, no external dependencies, fewer lines of code, and in minutes, not hours or weeks.

Yes, obviously a server in Rust will be faster once you optimize it, but the point is that you don't really have to do optimizations with Go to get fast concurrent code. You pretty much get fast programs by default. This is why it is so popular.
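
To make that concrete: a toy sketch of what "fast concurrent code by default" looks like, using nothing but the standard library (the URLs are placeholders):

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        urls := []string{ // placeholder URLs
            "https://example.com/a",
            "https://example.com/b",
            "https://example.com/c",
        }
        var wg sync.WaitGroup
        for _, u := range urls {
            u := u // capture the loop variable (needed before Go 1.22)
            wg.Add(1)
            go func() {
                defer wg.Done()
                resp, err := http.Get(u)
                if err != nil {
                    fmt.Println(u, "error:", err)
                    return
                }
                resp.Body.Close()
                fmt.Println(u, resp.Status)
            }()
        }
        wg.Wait()
    }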


Almost nobody builds a server from scratch in either Go or Rust. People use libraries and frameworks for that, because while the basics are easy enough to understand, getting the details wrong means security vulnerabilities and subtle, hard-to-find bugs.

So unless you're only interested in playing around, what the core language can do isn't important. When writing a server in Rust, probably the first thing you would do is pull in Tokio/Tower or similar. At which point you'll already have a better foundation than Go provides out of the box, plus all the other benefits of Rust like a proper type system.


I don’t enjoy Go very much (gave up due to err != nil exhaustion), so please enjoy your Rust.

However, note that needing to pull in libraries and frameworks, make those choices, etc. is explicitly more complex than Go’s approach with a very strong standard library that does most of the things you want. Besides all of the other things that are complicated about Rust, you also need to learn the ins and outs of the ecosystem, memorize the names of a bunch of packages, and remember what fits together with what from Cargo. I guess in Rust the HTTP library of choice today is “tower” and you need “tokio” so it can do IO? Can’t tell from the names, so I need to memorize that. The choice in Go is simple and understandable in plain English: “net/http” does HTTP things.


People build servers from scratch (net/http) in Go without frameworks all the time. If you don’t know that you should probably refrain from starting <my favorite language> vs <language I don’t know well> flamewars.


Agree, but just for the people asking themselves "why would you do that?": these are not necessarily servers accessible via the Internet, e.g. you can run godoc locally to serve your documentation, or you can serve your UI via HTTP, like e.g. the moggio music player (https://github.com/mjibson/moggio)


net/http is production ready. Performant, handles h2, terminates SSL, etc. It’s nothing like the toy HTTP severs shipped with some other languages, e.g. Python’s http.server. The standard answer to “why would you do that” is “why not”.
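
And if you want more than the defaults, it's still just the standard library. A minimal sketch with explicit timeouts and TLS (the handler and cert/key paths are placeholders); HTTP/2 comes along automatically when serving TLS:

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok\n"))
        })
        srv := &http.Server{
            Addr:              ":8443",
            Handler:           mux,
            ReadHeaderTimeout: 5 * time.Second,
            ReadTimeout:       10 * time.Second,
            WriteTimeout:      30 * time.Second,
            IdleTimeout:       120 * time.Second,
        }
        // HTTP/2 is negotiated automatically when serving TLS with the defaults.
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem")) // placeholder cert/key paths
    }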


[flagged]


I think Rust is at least twice as complex as Go, and an order of magnitude is not an unreasonable estimate. For example, there’s only `string` in Go. How many string types are there in Rust? Even a beginner must understand &str vs String. The concept of lifetimes I find easy, but that isn’t the case for everyone - and the ramifications of the borrow checker are often a “brick wall” for me.

Anyone who’s written TypeScript or Python can pick up Go and write average-quality Go code in a few days after going through the tour (https://go.dev/tour/list). 10 years ago, I did the tour and the next day wrote an animated GIF player for the terminal. Everything I needed was in the standard library (yes, including GIF parsing), although I eventually used a package for color quantization.
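
For reference, the GIF decoding part really is all standard library. A minimal sketch (the file name is a placeholder):

    package main

    import (
        "fmt"
        "image/gif"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("anim.gif") // placeholder input file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        g, err := gif.DecodeAll(f) // frames, palettes and per-frame delays
        if err != nil {
            log.Fatal(err)
        }
        for i, frame := range g.Image {
            // Delay is in 100ths of a second per frame.
            fmt.Printf("frame %d: %v, delay %dms\n", i, frame.Bounds(), g.Delay[i]*10)
        }
    }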

If there’s a learning resource like the Go tour for Rust that can get me from zero to writing an animated gif player in a couple of days, I’d love to hear about it! So far I haven’t been able to make my way through the Rust book at an exciting pace.


>Go from losing more mindshare to Rust

This sounds like a common mantra from people who love Rust to the point they'd use it even for basic scripting.

No, Go is not losing any mindshare to Rust whatsoever. Rust is an interesting choice when you need all those memory management tricks, but the reality is that in most cases you don't really care, because Go is just good enough.

Add to this that developing in Go requires much less time and effort (in other words, money) and you will see that it won't be replaced by Rust in areas like web dev, systems integration, etc.

Rust is good for systems programming, embedded devices and so on.


> developing in Go requires much less time and effort

Show me the empirical evidence; otherwise it's just an opinion.

I also have an opinion which is that Go and Rust are about equal in complexity.


I write both Go and Rust. Rust is definitely more complex. Go, the language, is simpler, which means that Go code is more verbose and explicit. Rust, the language, is more complex. People take advantage of the complexity and implement all sorts of macros and templates. The code is shorter but less explicit.

I don't think this is really a "citation needed" kind of discussion. This is basically how Rust is advocated--you do work to satisfy the borrow checker, but in return, you get a program which is both high-performance and safe. In Go, the safety cost is paid at runtime.

I also wouldn't ask for evidence that Rust is faster than Go at runtime. This is well known.

If it helps, think of the language ecosystem, including Go and Rust, as Pareto-efficient.


> Rust, the language, is more complex. People take advantage of the complexity and implement all sorts of macros and templates.

I'd say you pay for the complexity when you learn the language. Once satisfying the borrow checker becomes second nature, it feels just as fast to write as go (in my opinion, after switching from go to rust for most of my scratchpad code)


I think it does depend on the kind of code you are writing. I also think it is a bit of a cop-out to say that "once satisfying the borrow checker becomes second nature"... maybe I just write different code from you, but I periodically run into really thorny situations in Rust where figuring out how to satisfy the borrow checker is difficult. This happens most often in application programming, it's not something that I really run into when I'm writing libraries.

Really core stuff is nice to write in Rust. Parsers, core algorithms, that kind of thing.

However, I write a lot of code in Go that does stuff like glue together libraries, call APIs, etc. That stuff is really nice to use in Go.

I'd also say that when I need to use a library in Rust, it's a coin-flip whether the library has a straightforward interface, or whether it is some monster with tons of template parameters, or maybe even a bunch of macro rules that you are supposed to use. Go libraries tend to be much simpler to use than Rust libraries.


> most of my scratchpad code

How about the code written for commercial purposes? As in "you have to write a service that satisfies business requirements and you have to finish by time T"


And you’re going to amortize what you learned over years. A short learning curve is like a toolbox that’s nearly empty.


This is a comment based on an opinion (based on the experience of the author and some of his colleagues), yes.

I will come to you with "the empirical evidence" as soon as I decide to write a paper on the topic.

Needless to say we both are entitled to our respective opinions and will have to live with that.


This is a strange take to me. The last thing in the world that I would accuse the Go developers of is being frantic. There have been discussions about generics for a decade. See this post, for example, that goes into a bit of the history: https://go.dev/blog/generics-proposal

Keep in mind that the arena thing is not from the Go developers for general purpose usage, I suspect it’s an internal detail that might be useful only for certain very specific use cases (e.g. maybe proto deserialization, where you hang a million tiny objects off a single GC’ed one and can deterministically bound lifetimes).


Yes, generics (and many other proposals, like improved error handling) have been discussed for a long time. But for an almost equally long time, nothing happened. Go was stagnating for years until very recently, and the consensus seemed to be that the well-known deficiencies of the language were simply "what Go is", and anyone who doesn't like it just doesn't get it.

Now suddenly we have generics, and a bunch of other pretty significant changes, but no clear vision of where Go is going. Will "Go 2.0" with breaking changes happen or not? Will error handling be overhauled or not? Will radical changes proposed by core developers, like arbitrary-precision integers, be implemented? I'm not sure if there's really a lack of vision or just a lack of proper communication, but from the outside it's very confusing.


> Will "Go 2.0" with breaking changes happen or not?

It has been made clear that this will not happen. The changes of the last years ARE the "Go 2" initiative, but there won't be a major version bump.


The near meaningless vision seems to be to see what sticks without using types in the lexer & parser https://groups.google.com/g/golang-nuts/c/7t-Q2vt60J8/m/0h-2.... Take array declaration being prefixed to the type name to prevent ambiguity with int []a, b; vs int a[], b; & then Ian & the rest doubling the work back & reusing postfix [] for generics resulted in making inferable type identifiers turn into required infix operators & that reversed the position of array declaration that <go>es against the intuitive idea that what is independent comes first & to construct an array/pointer has a dependence on its elemental type that is a common pattern to programming languages with arrays. Along with the arbitrary predeclared identifier constraint requirement, using/burning one of the limited matching pairs was a complete waste as generics do not involve changing precedence - there is only a "(of) type (T)" operator - to accommodate nesting - D's "!" operator parses fine L to R w/o look ahead. The implementation taking years from some who have been programming for decades & resulting in a complete mess up is a grand ole time(r) fiasco.


I can barely understand what you're saying.


In entirety or a specific part?


I think I couldn’t quite get your specific criticism and the supporting reasons. I had a feeling of strain while I was trying to parse what you wrote.


Specific criticism: the lack of direction, or misdirection (bad goals), leads to changing (or even overriding previous decisions, as in the example of the postfix [] placement after the type) the reasoning of how & why the language is implemented in a certain way. A complex history is certainly not limited to Go, but a more complicated process makes reading & writing the code an exercise in trying to understand & remember the relevant language history.


As far as I can tell, the consensus for generics was "it will happen, but we really want to get this right, and it's taking time."

I know some people did the knee-jerk attacks like "Go sucks, it should have had generics long ago" or "Go is fine, it doesn't need generics". I don't think we ever needed to take those attitudes seriously.

> Will error handling be overhauled or not?

Error handling is a thorny issue. It's the biggest complaint people have about Go, but I don't think that exceptions are obviously better, and the discriminated unions that power errors in Rust and some other languages are conspicuously absent from Go. So you end up with a bunch of different proposals for Go error handling that are either too radical or little more than syntactic sugar. The syntactic sugar proposals leave much to be desired. It looks like people are slowly grinding through these proposals until one is found with the right balance to it.

I honestly don't know what kind of changes to error handling would appear in Go 2 if/when it lands, and I think the only reasonable answer right now is "wait and find out". You can see a more reasonable proposal here:

https://github.com/golang/proposal/blob/master/design/go2dra...

Characterizing it as a "lack of vision" does not seem fair here--I started using Rust back in the days when boxed pointers had ~ on them, and it seemed like it took Rust a lot of iterations to get to the current design. Which is fine. I am also never quite sure what is going to get added to future versions of C#.

I am also not quite sure why Go gets so much hate on Hacker News--as far as I can tell, people have more or less given up on criticizing Java and C# (it's not like they've ossified), and C++ is enough of a dumpster fire that it seems gauche to point it out.


It moves slowly for sure, but there are answers to your questions in plain sight. This might be a good place to start: https://go.dev/blog/toward-go2 (published five years ago, on the 10th anniversary of Go).


In my experience, Go is really great for writing web backends and CLI tools. Go has replaced Python for me for those use cases and I know it has for others too. I think they've thoughtfully evolved the language (generics took several iterations before being accepted). The tooling especially (cross-compilation, golangci-lint, GoReleaser, the VS Code plugin) makes writing and distributing my little Go apps a blast. Much easier than Python/C++ (though I must say the recent pyproject.toml stuff is really helping me package Python when I need to).


These features have all been under discussion in the community for years. I don’t think the Go team need to worry much about “mindshare”; Go is not going anywhere in commercial software development, and is growing rapidly. Go is a perfect fit for my work; Rust’s features are wholly unnecessary.


Frantic is one of the last words I would use to describe the process of adding generics to Go.


[flagged]


This is the worst kind of comments on HN. Snarkiness without argumentation, condescension and depreciation of some good and interesting work. Just don't post anything instead. If you're salty or don't like Go, do yourself (and the community) a favor and don't click on submissions related to Go.


GP could also just cite related work and make a comment on the similarities and differences.


All languages borrow ideas from one another, and that’s good for all of us. I was once a Z80 developer, then a C developer, then Java and now Go. I don’t look at C and say “autoincrement! Those scallywags just copied the idea from LDIR!”

(Admittedly this is partly because it’s not true)


[flagged]


> even talking about interfaces like "magic methods"

No, this was referring to a mechanism used in the stdlib to see if an object passed in as one interface type, could be coerced to another interface type in order to perform some operation more efficiently.

I’m a Go developer, and I didn’t know this happened; it’s basically not visible from the API function signatures - you have to look at the implementation.

I love working in Go but this did raise my eyebrows and the functionality definitely deserves being called “magic”, and not in the good way.
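
For the curious, the pattern looks roughly like this. It's a sketch of the idea, approximately what io.WriteString does, not the stdlib's exact code:

    package sketch

    import "io"

    // writeString sketches the pattern: the signature promises only io.Writer,
    // but at runtime the value is checked for the richer io.StringWriter
    // interface and a faster path is taken if it fits.
    func writeString(w io.Writer, s string) (int, error) {
        if sw, ok := w.(io.StringWriter); ok {
            return sw.WriteString(s) // avoids allocating a []byte copy of s
        }
        return w.Write([]byte(s))
    }

The fast path never appears in the declared parameter type, which is exactly why it reads as magic.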


There is nothing wrong with asking whether the provided object is more capable than it seems so we can perform more efficient logic on it. That has nothing to do with magic methods or magic numbers... no magic whatsoever. "Magic" in code usually refers to hidden and/or fixed behavior. This is not even in the same ballpark.


> there is nothing wrong with asking if provided object is more capable than it seems so we can perform more efficient logic on it.

There certainly is something wrong if you can’t see this in the API declaration.

Some Go stdlib calls declare that they accept one interface, but then test the passed in object against another, unrelated interface, which provides additional functionality.

Because the second interface is undeclared and hard coded, the functionality is hidden from the user, who has to somehow know about the inner workings of the API in order to benefit from it.

> "magic" in code refers to usually hidden and/or fixed behavior.

Which is exactly what is happening in this case.


Just when you think you have The Best Way of handling errors in Go figured out, they add yet another paradigm.

Is a shared err type package on the way out? I use it to bubble up HTTP status codes consistently, is there a better way? Should you always use sentinel errors?


It’s the same paradigm. You still have to do if err != nil. Multierrors have been in the community forever. It’s just been added to the standard library is all.


Well. Kinda.

Except that now if someone does `errors.Join` and they pass it to existing code that was using `errors.Unwrap` to inspect an error chain...

... they now get a not-unwrap-able error. Which they can still `As` to inspect...

...but since it's a tree, they can't recursively-`As` to find all instances of a type of error in a chain, like they could before (if you find something in one branch, you can only traverse that branch, because As doesn't maintain iteration-state).

It's not an unambiguous win, sadly. New code interacting with old code might misbehave.
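
A small sketch of the mismatch, using placeholder errors:

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    func main() {
        err1 := errors.New("disk full")                           // placeholder error
        err2 := fmt.Errorf("upload failed: %w", os.ErrPermission) // wrapped sentinel
        joined := errors.Join(err1, err2)

        // Join implements Unwrap() []error, not Unwrap() error,
        // so old-style manual traversal via errors.Unwrap sees nothing.
        fmt.Println(errors.Unwrap(joined)) // <nil>

        // errors.Is and errors.As do walk the whole tree...
        fmt.Println(errors.Is(joined, os.ErrPermission)) // true

        // ...but As stops at the first match, so there is no built-in way
        // to enumerate every error of a given type across all branches.
    }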


Why would the other code be manually unwrapping and then doing an As test instead of using errors.As? That's extremely Hyrum's Law behavior. :-) I agree though that if you have an error log reporting system it should be updated to understand the new multierrors. OTOH, if it's not, it will just see the multierror as a single node and work fine otherwise.


Anything trying to display the whole chain is one example, like pretty much every logging library or custom output in existence.

For behavior: sometimes wrapping order matters, and As is convoluted to use to determine that, to say the least. Manually unwrapping is reasonably safe and easy - the standard library does it! But...


The only constant is change. Especially with Go, apparently. It's hard to compare the Golang of today to the one I originally discovered 11 years ago, in 2012.

Then, it was a breath of fresh air. Nowadays.. I find myself sighing. It works but the joy has faded.

Bit rot is life.


The joys and woes of reinventing computer programming right where it was left off in the early 70s.


I think I can sympathise, it's hard to keep up with all the development in PL, so it's easy to convince yourself that all those new inventions are stupid anyway and you don't need them, you'll do it your way.

And then, slowly over a couple of decades, other people will be making it their life's work to add in the things that you thought were stupid because inevitably the need for them surfaces.


Or you've seen an approach fail spectacularly in a programming language you're familiar with, which causes you to throw out the baby with the bath water.


Any specific examples you can share?


A bit tongue in cheek, but I'm just referring to Rob Pike et al., exceptions, and C++. And I must clarify that by "fail spectacularly" I mean solely from the POV of the aspiring language designer, not necessarily from other language users. People enjoy C++ exceptions now, it seems, in combination with RAII, but back then it was probably a different story.


How many languages go through the first 11 years of life without significant changes? There are lisps, which have no syntax to change, and I guess elixir, which is itself just a syntax for a very mature runtime (BEAM).

Hell, even C underwent pretty major changes from 72 onwards, because every compiler supported entirely different features. Things like void functions and returning structs or unions. Granted, this predated internet distribution of software, so changes overall were perhaps slower within a single implementation, but there were still radical developments happening on a frequent basis.


> How many languages go through the first 11 years of life without significant changes

If only a language with basically zero new concepts could have just learned from the litany of other managed languages’ mistakes...


Can you give an example of a feature where the designers of Go failed to consider other languages’ mistakes? Every discussion of theirs I’ve read has been thoughtful, open, and well cited.


Yeah I'm not using generics and they're a big turn off. If Rust wasn't so ugly and the JVM so Swiss-cheesy and stuck in paradigms, I would switch.

So I keep writing my old ways code in Go and act like the new features don't exist.

Big sigh


Implementors gotta implement, no?


This is the same way of handling errors Go has always had. The point of Go errors is that they're just values; you program them like you would anything else. People have had multi-error packages for years, and Go encouraged it. Now there's a standard one.


[flagged]


Every time this completely tired and worn out take is regurgitated on this website, it reminds me that people write big fat try/catch blocks because they never expect things like file reads to fail.


Though at least Java people doing that get a stack trace, so they can find out which attempt failed. In random Go code you're fairly likely to get "error: file not found" and literally no other info.


Well, Go's file-not-found errors returned from the standard library do have the file name in the message string. Java's FileNotFoundException, on the other hand, doesn't; and the stack trace is usually useless because it doesn't record the values of local variables anyhow.


Yeah, the inconsistency there in both standard libraries is... strange, to say the least. More info is more gooder IMO; the runtime cost is SO much lower than the troubleshooting cost.


Please be respectful, nobody is regurgitating anything. People are frustrated because half of their programs are if err != nil { return nil, somestruct{}, false, "", err }


I've written several hundred thousands of lines of Go at this point, and the error handling mechanics have never bothered me. There are two things that bother me about errors; error messages that don't capture enough detail to reproduce the situation (user ids, for loop iterator, etc.; you don't get these in stack traces either), and people that discard one error to return another one (i.e. a blind defer x.Close()). With go 1.20, I have one less concern. If that makes you mad, I feel like these people that smugly reply "touch grass" are on to something.
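
For the first concern, capturing that detail is mostly about wrapping with context as errors cross function boundaries. A sketch with made-up names:

    package config

    import (
        "fmt"
        "os"
    )

    // loadUserConfig is a made-up example: the wrap message carries the context
    // (which user, which path) that a bare "file not found" - or a stack trace -
    // wouldn't, while %w keeps the original error available to errors.Is/As
    // further up the call stack.
    func loadUserConfig(userID int, path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("load config for user %d from %q: %w", userID, path, err)
        }
        return data, nil
    }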


Most exceptions in Java are checked so you choose to either explicitly handle them at the point they are thrown, group them with other exceptions or ignore them completely and let the entire program fail.

At least you have the choice unlike in Go.


>most exceptions in Java are checked

The opposite is true.


> they never expect things like file reads to fail.

They not only expect file reads to fail, they also expect stack overflows, out of memory etc etc. and a lot more. They also know that, more often than not, the place where something fails is not the place where you have enough information to recover properly.


Go's error handling isn't bad because there are checks everywhere: that's probably the only good thing about errors in go. It's bad because the standard library packages for dealing with errors are inefficient. And they're inefficient because of all the reflection and ceremony around understanding what an error actually is at the point in the code where it's most meaningful to inspect it. Any type can become a Go error value by implementing `Error`: but wrapping, unwrapping, and understanding what the error is at a point in the code are all extremely cumbersome and inefficient processes.


Exceptions are objectively better in every way (especially checked exceptions, which would definitely deserve a second chance!)

You get as small/large-grained handling as you need, get sane default behavior (bubbling up to a place where you can actually handle that error), and stack-traces, while you never accidentally overlook a potential failing call (will the 100th if err 3 liner properly handle the error case or just make your code continue to run in a no longer valid state?).


Here are some ways exceptions are objectively worse, so that you can be a more thoughtful person:

* Very noisy

* Do not survive network boundaries

* can but often do not survive thread boundaries

* lack important context

* easy to ignore

* obscures control flow

* difficult to handle

* in practice, rarely handled

* poor quality messages


> Very noisy

Are you really claiming this under a goddamn Go thread where every third line is error handling?

> Do not survive network boundaries

Well, depends on what you mean here. Do you mean something like RPC? Because there are implementations that can throw an exception for that. I fail to see how is it different with returning an error value, that doesn’t survive network boundaries in itself either.

I fail to come up with any example where exceptions are worse than error return types in the context of threads.

Come on, exceptions give you a stack trace with the whole history of what got called from where. How is that worse than your “file missing” error without any explanation?

> easy to ignore

Checked exceptions are a thing. But it is enough to handle them higher up and it will be handled one way or another, while it is much easier to simply fail to handle an err.

> in practice, rarely handled

I can just as well claim that Go-style error handling more often than not doesn't properly handle the issue at hand, and is most definitely not tested for the error case. A bubbled-up exception stopping the program, or at least a given thread handling a request, is a much better outcome than silent failure.

> poor quality messages

They get as high or low quality messages as one wants. I prefer a proper stacktrace with line numbers over a single obscure line I might occasionally be able to debug by grepping a code base.


I’m not talking about exceptions in theory, which you are completely right about. I’m talking about how exceptions are actually used in practice. You can claim “they should do it differently” but I think you would be ignoring that the design of exceptions is the reason they are not handled/networked/etc. I could go into more detail about your points and it might be interesting, but I am drunk.

EDIT: re the noise point, I’m talking about the actual exception when it is read by a developer in a debugging context, not the lines of code dedicated to processing it.


This isn't responsive to the thread; it's just introducing a language-war argument at an opportune point. Please don't pick Go vs. other-language fights; they sprawl like kudzu and take over the entire thread, as you can see here. This wasn't a comment about whether exceptions are better or worse than error values, but now there's a really dumb flame war about that below.


Sentinel errors are only one way, and definitely not the best way imo. `errors.As` gets you the ability to recursively and multierror-safely check to see if somewhere in the stack, an error of a certain type is found, and access its extra data, to determine e.g. which status code is most appropriate based on any extra data in that type.

I think shared types from a package make a ton of sense and are really practical for writing flexible and maintainable code.
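
Concretely, a sketch of that shape (hypothetical names, not from any particular package):

    package httperr

    import (
        "errors"
        "fmt"
        "net/http"
    )

    // StatusError is a hypothetical shared error type carrying an HTTP status.
    type StatusError struct {
        Code int
        Err  error
    }

    func (e *StatusError) Error() string { return fmt.Sprintf("%d: %v", e.Code, e.Err) }
    func (e *StatusError) Unwrap() error { return e.Err }

    // Status finds a *StatusError anywhere in a wrapped (or joined) chain via
    // errors.As, and falls back to 500 when none is present.
    func Status(err error) int {
        var se *StatusError
        if errors.As(err, &se) {
            return se.Code
        }
        return http.StatusInternalServerError
    }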


I wrote a package to attach codes to errors via wrapping rather than a shared error type: https://github.com/gregwebs/errcode


No, Go programmers get paid to get if err != nil done correctly.



