
There's so much wrong with the second example (it works, yes, but for all the wrong reasons) I don't even know where to start.

Using the buffer size of a channel to control the number of concurrent jobs is just the wrong approach. It's much easier and cleaner to use the number of goroutines for that:

    package main

    import (
      "fmt"
      "sync"
      "time"
    )

    func main() {
      const workers = 3
      const jobs = 20

      jobsChan := make(chan int)
      var wg sync.WaitGroup
      for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
          defer wg.Done()
          for work := range jobsChan {
            time.Sleep(time.Second)
            fmt.Println(work)
          }
        }()
      }

      for i := 1; i <= jobs; i++ {
        jobsChan <- i
      }
      close(jobsChan)
      wg.Wait()
      fmt.Println("done")
    }
One thing about channels in Go: the only time you want a buffered channel is when you know exactly how many writes (n) you'll make to it, where n is finite and reasonably small, so you create the channel with buffer size n and every write completes without blocking. One example is strictly enforcing a timeout on a blocking call:

    const timeout = 10 * time.Millisecond
    ctx, cancel := context.WithTimeout(context.Background(), timeout)
    defer cancel()
    resultChan := make(chan resultType, 1)
    go func() {
      result, err := myBlockCall(ctx)
      resultChan <- resultType{result, err}
    }()
    select {
    case <-ctx.Done():
      return nil, ctx.Err()
    case r := <-resultChan:
      return r.result, r.err
    }
If you are using a buffered channel in other cases, you are likely doing it wrong.


