Slide 13 shows an example memory recycler. I understand the concept, but it uses Go's container/list to keep track of the elements. list.PushFront() allocates a new object on every invocation (see http://golang.org/src/pkg/container/list/list.go line 138) and list.Remove() creates garbage for the GC.
This doesn't seem like it saves much GC work.
A while back I wrote a "StackSlice" implementation to recycle memory which uses a statically allocated slice and pushes pointers to objects onto the stack. This is nice because pushes and pops don't involve any memory allocations.
The elements of the doubly-linked list are slices. A slice can be thought of as a reference to a backing array. The only things being created or collected by the GC in this case are the Element bookkeeping structures (and possibly some slice headers), which contain the slice.
The memory being recycled in the example is the storage region backing the slice, the size of which is arbitrary. It seems that by default they preallocate a 100-element byte array.
Right, I agree with you. It's not that the strategy won't save any memory. But it won't save any memory allocations, because the list still makes them. The number of allocations can put pressure on a garbage collector just as much as the size of the allocations. Maybe even more, with fragmentation and the like.
In C you explicitly choose stack vs heap allocation. In Go, the runtime does this for you. I had a couple of situations where, after profiling, most of my CPU time was going to heap memory allocations. Using the strategy on slide 13 probably would have helped reduce the size of the allocations, but in my case only by a few bytes. Using a fixed-size slice that held references to objects made a big difference in performance.
In regards to slide 13 it just seems funny to "recycle" memory by forcing a whole new allocate/deallocate cycle.
Uh... isn't this a race condition? EDIT: Oh, wait, "die" is unbuffered... But with more than one worker it would be a race condition, unless I'm overlooking something else as well.
func worker(die chan bool) {
	for {
		select {
		// ... do stuff cases
		case <-die:
			// ... do termination tasks
			die <- true
			return
		}
	}
}
func main() {
	die := make(chan bool)
	go worker(die)
	die <- true
	<-die
}
You would not want to share a die channel like that with multiple workers because when doing die <- true you would not know which one received it.
As I said early on in the presentation, using unbuffered channels is 'best' because it makes reasoning about concurrency easy. Given that die is unbuffered, the die <- true has to synchronize with something in order to happen; that something is the case <-die. If die were buffered then this example simply would not work.
I realise that, I just find the example interesting because I often would expect to use a whole pool of workers, not just one. The close(start) idiom you used earlier makes more sense in that context (but in a close(die) form).
Because I shortened it for the slide. Add something at the end of main() that waits a bit (say time.Sleep(time.Minute)). What's happening is main() terminates before those goroutines get to run.
Yeah, but I think that's only because there seems to be a mild bias for Go among the members on here. So any submissions with "Go" in the title get upvoted pretty quickly.
Not that I'm complaining as I probably fall under the aforementioned bias (albeit I'm not as liberal with my voting).
I did already say that I'm also slightly biased ;)
However the point of that post was to be a little more objective, because constant fanboy posts aren't constructive. Plus there are plenty of communities like this place which aren't so pro-Go, so you have to remember that HN's bias isn't representative of every community nor individual (let alone real life). Which was the other reason I wanted my former post to sound impartial.
It looks like an interesting language, but most of the articles are really shallow. I would love to see an example of something, like, "fetch X records from a database, encode as json and send an email with the json embedded as an attachment." That would answer a lot of the questions I have about library support for basic everyday tasks.
I was at the talk last night, and jgrahamc said that nearly all of the code in the slide deck is what they are doing in production, though simplified to fit on a slide.
Things like the memory recycler are needed to help keep the high-water mark of memory usage as low as possible (before the garbage collector comes along and takes care of things).
One of the other talks covered ways in which you could deal with JSON, and it is pretty trivial. The example used in that talk was how to handle a semi-structured JSON file (where items in an array were of differing structure), but even then working with JSON remains pretty easy, and the built-in marshalling can even be overridden if you have such a need (for example to map a complex type into a simple JSON representation).
Anyhow, your scenario... a few snippets to help you see it, if you need a working example you can run locally just ask and I'll knock one up.
This is a struct:
type MyRecord struct {
	Id    int64  `json:"id"`
	Title string `json:"title,omitempty"`
}
And it might match a database table like this:
CREATE TABLE my_records (
    record_id bigint NOT NULL,
    title character varying(150)
)
That struct has some tags after the field declarations that describe to the JSON processor how to marshal an instance of MyRecord into some JSON.
If that table were in Postgres you would probably use the driver located here: https://github.com/lib/pq
You could connect to the database and populate your record from the database thus:
db, err := sql.Open("postgres", myConnectionString)
// do something if err != nil
defer db.Close()
stmt, err := db.Prepare(`SELECT record_id, title FROM my_records LIMIT 1 OFFSET 0`)
// do something if err != nil
defer stmt.Close()
row := stmt.QueryRow()
record := new(MyRecord)
err = row.Scan(&record.Id, &record.Title)
// do something if err != nil
Now that your record is populated from the database, you can marshal it to JSON:
output, err := json.Marshal(record)
// do something if err != nil
If your row in the database looked like:
INSERT INTO my_records (record_id, title) VALUES (1, 'Hello World')
Then the JSON now in the output var looks like:
{"id":1,"title":"Hello World"}
The example in the sibling comment shows the essence of the whole program with the email bit too, but this post shows some of the detail of actually fetching the record and marshalling it... there isn't much to it.
That's your example of a less shallow demonstration of Go? Something that you could do in any language given a DB library, a JSON library, and an email library?
Well it's a start at least, and his answer shows Go is a programming language.
How about one of these:
How to write an OS (in go)
Maybe an article about why static typing makes for a better compiler, despite what that guy said about JIT and predictive something-trees (it was about why Python is slow, and it isn't the reason you expect; it was on HN a month ago)
Malware in go
How to search for go related stuff using google
Something cool about go object files and the linker
More about tooling like the regular grammar, the profiler, etc.
How to write a xen guest OS in go (this is probably smarter than targeting bare metal)
Assuming you are talking about slide 7, I'm not trying to synchronize; I'm just using the closing of the start channel as a signal to the goroutines that they can start their work. Clearly, it's not guaranteed that all the <-start receives have completed before close() returns.