Go Concurrency (slideshare.net)
104 points by jgrahamc on March 28, 2013 | hide | past | favorite | 35 comments



This talk was given at the Go London User Group yesterday -- if you're in London, come along to the next one: http://www.meetup.com/Go-London-User-Group/


And if you want to speak please write to us.


Good examples, especially the one where close() is used to start goroutines :) However, slide 24 (of 26) is not available.


Odd that slide 24 fails. You can download the presentation as a PDF and that slide is there.


func balancer() on slide 23 isn't shown. Would you care to elaborate on that?

Are you creating a buffered channel there? make(chan *job, 10)?


It's on the mystery slide 24. Download the file and you can see it, or visit this gist.

https://gist.github.com/jgrahamc/5262578


Oh, sorry. Thanks.

Nice to see CloudFlare using Go. Good talk on channels, keep up the good work :)



And since people want it, I've added all the code from the slides to a gist: https://gist.github.com/jgrahamc/5262578


So correct me if I'm wrong...

Slide 13 shows an example memory recycler. I understand the concept, but it uses a Go List to keep track of the elements. list.PushFront() allocates a new object on every invocation (see http://golang.org/src/pkg/container/list/list.go line 138) and list.Remove() creates garbage for the GC.

This doesn't seem like it saves much GC work.

A while back I wrote a "StackSlice" implementation to recycle memory which uses a statically allocated slice and pushes pointers to objects onto the stack. This is nice because pushes and pops don't involve any memory allocations.
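A minimal sketch of that idea (the names here are mine, not from that implementation): a fixed-capacity slice used as a LIFO free list, so that once the backing array exists, pushes and pops never allocate.

```go
package main

import "fmt"

// bufStack is a fixed-capacity LIFO free list of byte-slice pointers.
// Its backing array is allocated once in newBufStack; Push and Pop
// only move the top of the stack, so steady-state use allocates nothing.
type bufStack struct {
	items []*[]byte
}

func newBufStack(capacity int) *bufStack {
	return &bufStack{items: make([]*[]byte, 0, capacity)}
}

// Push returns false (leaving the buffer to the GC) if the stack is full.
func (s *bufStack) Push(b *[]byte) bool {
	if len(s.items) == cap(s.items) {
		return false
	}
	s.items = append(s.items, b)
	return true
}

// Pop returns nil if the stack is empty; the caller then allocates fresh.
func (s *bufStack) Pop() *[]byte {
	if len(s.items) == 0 {
		return nil
	}
	b := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return b
}

func main() {
	s := newBufStack(2)
	buf := make([]byte, 0, 100)
	s.Push(&buf)
	fmt.Println(s.Pop() == &buf) // true: the same buffer comes back
}
```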


The elements of the doubly-linked list are slices. A slice can be thought of as a reference to a backing array. The only things being created or collected by the GC in this case are the Element bookkeeping structures (and possibly some slice headers) which contain the slice.

The memory that is being recycled in the example is the storage region of the slice, the size of which is arbitrary. It seems that by default they preallocate a 100-element byte array.


Right, I agree with you. It's not that the strategy won't save any memory, but it won't save any memory allocations, because the list still makes them. The number of allocations can put pressure on a garbage collector just as much as large allocations, maybe even more with fragmentation and the like.

In C you explicitly choose stack vs heap allocation. In Go, the runtime does this for you. I had a couple of situations where, after profiling, most of my CPU time was going to heap memory allocations. Using the strategy on slide 13 probably would have helped some to reduce the size of the allocation, but in my case only by a few bytes. Using a fixed-size slice that held references to objects made a big difference in performance.

In regards to slide 13 it just seems funny to "recycle" memory by forcing a whole new allocate/deallocate cycle.


Can we assume from this that Go is helping to battle against DDoS attacks?


I am not involved in the whole CloudFlare/DDoS thing and the Go code I am writing is not used in that part of the network.


Uh... isn't this a race condition? EDIT: Oh, wait, "die" is unbuffered... But if you have more than one worker it is a race condition, unless I'm overlooking something else as well.

    func worker(die chan bool) {
    	for {
    		select {
    		// ... do stuff cases
    		case <-die:
    			// ... do termination tasks
    			die <- true
    			return
    		}
    	}
    }
    
    func main() {
    	die := make(chan bool)
    	go worker(die)
    	die <- true
    	<-die
    }


You would not want to share a die channel like that with multiple workers because when doing die <- true you would not know which one received it.

As I said early on in the presentation, using unbuffered channels is 'best' because it makes reasoning about concurrency easy. Given that die is unbuffered, the die <- true has to synchronize with something in order to happen; that something is the case <-die. If die were buffered then this example would simply not work.


I realise that, I just find the example interesting because I often would expect to use a whole pool of workers, not just one. The close(start) idiom you used earlier makes more sense in that context (but in a close(die) form).


Interesting use of goroutines/channels to implement a memory recycler. They could be used to implement a thread-safe alloc/free pool of objects.
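One way to sketch such a pool (my own example, not the slide's code): a buffered channel is itself a thread-safe free list, so Get and Put need no explicit locking.

```go
package main

import "fmt"

// Pool hands out byte slices; the buffered channel serves as a
// thread-safe free list, so no mutex is needed.
type Pool struct {
	free chan []byte
	size int
}

func NewPool(n, size int) *Pool {
	return &Pool{free: make(chan []byte, n), size: size}
}

// Get reuses a pooled buffer if one is available, else allocates.
func (p *Pool) Get() []byte {
	select {
	case b := <-p.free:
		return b[:0] // reuse the backing array, reset the length
	default:
		return make([]byte, 0, p.size)
	}
}

// Put returns a buffer to the pool, or drops it for the GC if full.
func (p *Pool) Put(b []byte) {
	select {
	case p.free <- b:
	default:
	}
}

func main() {
	p := NewPool(4, 100)
	b := p.Get()
	b = append(b, "hello"...)
	p.Put(b)
	fmt.Println(cap(p.Get())) // 100: the same backing array is reused
}
```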


Could someone who has a SlideShare account post the PDF?


Does anyone have any idea why the code from slide 5 doesn't print anything? http://play.golang.org/p/NtjHL3F1LC


Because I shortened it for the slide. Add something at the end of main() that waits a bit (say time.Sleep(time.Minute)). What's happening is main() terminates before those goroutines get to run.

http://play.golang.org/p/rJ9eE7p7gz


Oh thanks! Sorry for the noise.


You should create a channel, pass it to the goroutine, send a completion message from the goroutine, and then read from it before the end of main.

3 lines of code to show a cool feature and avoid an embarrassing bug.
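Concretely, the fix could look like this sketch (the three-goroutine loop is my own illustration, not the slide's code): each goroutine sends on a completion channel, and main drains it before exiting.

```go
package main

import "fmt"

func main() {
	done := make(chan bool)
	for i := 0; i < 3; i++ {
		go func(n int) {
			fmt.Println("goroutine", n)
			done <- true // signal completion back to main
		}(i)
	}
	for i := 0; i < 3; i++ {
		<-done // main blocks here instead of exiting early
	}
}
```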


A lot of articles about Go these days, interesting.


Yeah, but I think that's only because there seems to be a mild bias for Go among the members on here. So any submissions with "Go" in the title get upvoted pretty quickly.

Not that I'm complaining as I probably fall under the aforementioned bias (albeit I'm not as liberal with my voting).


Nah, there's plenty of Go hating going around, so that kind of balances it.


Have you tried it? I can see why there's Go favoritism here.


I did already say that I'm also slightly biased ;)

However, the point of that post was to be a little more objective, because constant fanboy posts aren't constructive. Plus there are plenty of communities like this place that aren't so pro-Go, so you have to remember that HN's bias isn't representative of every community nor individual (let alone real life). Which was the other reason I wanted my former post to sound impartial.


It looks like an interesting language, but most of the articles are really shallow. I would love to see an example of something, like, "fetch X records from a database, encode as json and send an email with the json embedded as an attachment." That would answer a lot of the questions I have about library support for basic everyday tasks.



I was at the talk last night and jgrahamc said that nearly all of the code in the slide deck is what they are doing in production code, though simplified to fit on a slide.

Things like the memory recycler are needed to help keep the high tide of memory usage as low as possible (prior to the garbage collector coming along and taking care of stuff).

One of the other talks covered ways in which you could deal with JSON, and it is pretty trivial. The example used in that talk was how to handle a semi-structured JSON file (where items in an array were of differing structure), but even then working with JSON remains pretty easy, and the built-in marshalling can even be overridden if you have such a need (for example, to map a complex type into a simple JSON representation).

Anyhow, your scenario... a few snippets to help you see it, if you need a working example you can run locally just ask and I'll knock one up.

This is a struct:

    type MyRecord struct {
        Id    int64  `json:"id"`
        Title string `json:"title,omitempty"`
    }
And it might match a database table like this:

    CREATE TABLE my_records (
        record_id bigint NOT NULL,
        title character varying(150)
    )
That struct has some tags after the field declarations that describe to the JSON processor how to marshal an instance of MyRecord into some JSON.

If that table were Postgres you would probably use the driver located here: https://github.com/lib/pq

You could connect to the database and populate your record from the database thus:

    db, err := sql.Open("postgres", myConnectionString)
    // do something if err != nil
    defer db.Close()

    stmt, err := db.Prepare(`SELECT record_id, title FROM my_records LIMIT 1 OFFSET 0`)
    // do something if err != nil
    defer stmt.Close()
    
    row := stmt.QueryRow()
    
    record := new(MyRecord)
    err = row.Scan(&record.Id, &record.Title)
    // do something if err != nil
Now that your record is populated from the database, you can marshal it to JSON:

    output, err := json.Marshal(record)
    // do something if err != nil
If your row in the database looked like:

    INSERT INTO my_records (record_id, title) VALUES (1, 'Hello World')
Then the JSON now in the output var looks like:

    {"id":1,"title":"Hello World"}
The example in the sibling comment shows the essence of the whole program with the email bit too, but this post shows some of the detail of actually fetching the record and marshalling it... there isn't much to it.
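For completeness, the email step the scenario asked for could be sketched like this (addresses, server, and the buildMessage helper are all my own placeholders; actually sending would be one net/smtp SendMail call):

```go
package main

import "fmt"

// buildMessage assembles a minimal RFC 5322 message with the JSON
// carried as a text attachment in a hand-rolled multipart body.
func buildMessage(from, to, subject string, jsonBody []byte) []byte {
	const boundary = "json-attachment-boundary"
	msg := "From: " + from + "\r\n" +
		"To: " + to + "\r\n" +
		"Subject: " + subject + "\r\n" +
		"MIME-Version: 1.0\r\n" +
		"Content-Type: multipart/mixed; boundary=" + boundary + "\r\n\r\n" +
		"--" + boundary + "\r\n" +
		"Content-Type: application/json\r\n" +
		"Content-Disposition: attachment; filename=\"record.json\"\r\n\r\n" +
		string(jsonBody) + "\r\n" +
		"--" + boundary + "--\r\n"
	return []byte(msg)
}

func main() {
	msg := buildMessage("me@example.com", "you@example.com",
		"record export", []byte(`{"id":1,"title":"Hello World"}`))
	// Sending is then one call (details depend on your SMTP server):
	//   err := smtp.SendMail("mail.example.com:25", nil,
	//       "me@example.com", []string{"you@example.com"}, msg)
	fmt.Println(len(msg) > 0)
}
```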


That's your example of a less shallow demonstration of Go? Something that you could do in any language given a DB library, a JSON library, and an email library?


Well, it's a start at least, and his answer shows Go is a programming language.

How about one of these:

How to write an OS (in Go)

Maybe an article about why static typing makes for a better compiler, despite what that guy said about JIT and predictive something trees (it was about why Python is slow and it isn't the reason you expect, and it was on HN a month ago)

Malware in Go

How to search for Go-related stuff using Google

Something cool about Go object files and the linker

More about tooling like the regular grammar, the profiler, etc.

How to write a Xen guest OS in Go (this is probably smarter than targeting bare metal)

Writing an exokernel in Go instead of OCaml.


How are you using close to synchronize channels? Close doesn't block.

It looks like a race condition, the workers might try to write their start messages after main closes the channel.

http://golang.org/ref/spec#Close


Assuming you are talking about slide 7: I'm not trying to synchronize, I'm just using the closing of the start channel as a signal to the goroutines that they can start their work. Clearly, it's not guaranteed that all the <-start receives have completed before close() returns.



