Best way to implement global counters for highly concurrent applications?
Go Concurrency Problem Overview
What is the best way to implement global counters for a highly concurrent application? In my case I may have 10K-20K go routines performing "work", and I want to count the number and types of items that the routines are working on collectively...
The "classic" synchronous coding style would look like:
var work_counter int32

func GoWorkerRoutine() {
    for {
        // do work
        atomic.AddInt32(&work_counter, 1)
    }
}
Now this gets more complicated because I want to track the "type" of work being done, so really I'd need something like this:
var work_counter map[string]int
var work_mux sync.Mutex

func GoWorkerRoutine() {
    for {
        // do work
        work_mux.Lock()
        work_counter["type1"]++
        work_mux.Unlock()
    }
}
It seems like there should be a "go" optimized way using channels or something similar to this:
var work_counter int
var work_chan chan int // make() called somewhere else (buffered)

// started somewhere else
func GoCounterRoutine() {
    for c := range work_chan {
        work_counter += c
    }
}

func GoWorkerRoutine() {
    for {
        // do work
        work_chan <- 1
    }
}
This last example is still missing the map, but that's easy enough to add. Will this style provide better performance than just a simple atomic increment? I can't tell if this is more or less complicated when we're talking about concurrent access to a global value versus something that may block on I/O to complete...
Thoughts are appreciated.
Update 5/28/2013:
I tested a couple of implementations, and the results were not what I expected. Here's my counter source code:
package helpers

type CounterIncrementStruct struct {
    bucket string
    value  int
}

type CounterQueryStruct struct {
    bucket  string
    channel chan int
}

var counter map[string]int
var counterIncrementChan chan CounterIncrementStruct
var counterQueryChan chan CounterQueryStruct
var counterListChan chan chan map[string]int

func CounterInitialize() {
    counter = make(map[string]int)
    counterIncrementChan = make(chan CounterIncrementStruct, 0)
    counterQueryChan = make(chan CounterQueryStruct, 100)
    counterListChan = make(chan chan map[string]int, 100)
    go goCounterWriter()
}

func goCounterWriter() {
    for {
        select {
        case ci := <-counterIncrementChan:
            if len(ci.bucket) == 0 {
                return
            }
            counter[ci.bucket] += ci.value
        case cq := <-counterQueryChan:
            val, found := counter[cq.bucket]
            if found {
                cq.channel <- val
            } else {
                cq.channel <- -1
            }
        case cl := <-counterListChan:
            nm := make(map[string]int)
            for k, v := range counter {
                nm[k] = v
            }
            cl <- nm
        }
    }
}

func CounterIncrement(bucket string, counter int) {
    if len(bucket) == 0 || counter == 0 {
        return
    }
    counterIncrementChan <- CounterIncrementStruct{bucket, counter}
}

func CounterQuery(bucket string) int {
    if len(bucket) == 0 {
        return -1
    }
    reply := make(chan int)
    counterQueryChan <- CounterQueryStruct{bucket, reply}
    return <-reply
}

func CounterList() map[string]int {
    reply := make(chan map[string]int)
    counterListChan <- reply
    return <-reply
}
It uses channels for both writes and reads, which seems logical.
Here are my test cases:
func bcRoutine(b *testing.B, e chan bool) {
    for i := 0; i < b.N; i++ {
        CounterIncrement("abc123", 5)
        CounterIncrement("def456", 5)
        CounterIncrement("ghi789", 5)
        CounterIncrement("abc123", 5)
        CounterIncrement("def456", 5)
        CounterIncrement("ghi789", 5)
    }
    e <- true
}

func BenchmarkChannels(b *testing.B) {
    b.StopTimer()
    CounterInitialize()
    e := make(chan bool)
    b.StartTimer()
    go bcRoutine(b, e)
    go bcRoutine(b, e)
    go bcRoutine(b, e)
    go bcRoutine(b, e)
    go bcRoutine(b, e)
    <-e
    <-e
    <-e
    <-e
    <-e
}
var mux sync.Mutex
var m map[string]int

func bmIncrement(bucket string, value int) {
    mux.Lock()
    m[bucket] += value
    mux.Unlock()
}

func bmRoutine(b *testing.B, e chan bool) {
    for i := 0; i < b.N; i++ {
        bmIncrement("abc123", 5)
        bmIncrement("def456", 5)
        bmIncrement("ghi789", 5)
        bmIncrement("abc123", 5)
        bmIncrement("def456", 5)
        bmIncrement("ghi789", 5)
    }
    e <- true
}
func BenchmarkMutex(b *testing.B) {
    b.StopTimer()
    m = make(map[string]int)
    e := make(chan bool)
    b.StartTimer()
    go bmRoutine(b, e)
    go bmRoutine(b, e)
    go bmRoutine(b, e)
    go bmRoutine(b, e)
    go bmRoutine(b, e)
    <-e
    <-e
    <-e
    <-e
    <-e
}
I implemented a simple benchmark with just a mutex around the map (just testing writes), and benchmarked both with 5 goroutines running in parallel. Here are the results:
$ go test --bench=. helpers
PASS
BenchmarkChannels 100000 15560 ns/op
BenchmarkMutex 1000000 2669 ns/op
ok helpers 4.452s
I would not have expected the mutex to be that much faster...
Further thoughts?
Concurrency Solutions
Solution 1 - Concurrency
If you're trying to synchronize a pool of workers (e.g. allow n goroutines to crunch away at some amount of work), then channels are a very good way to go about it, but if all you actually need is a counter (e.g. page views), then they are overkill. The sync and sync/atomic packages are there to help.
import "sync/atomic"

type count32 int32

func (c *count32) inc() int32 {
    return atomic.AddInt32((*int32)(c), 1)
}

func (c *count32) get() int32 {
    return atomic.LoadInt32((*int32)(c))
}
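To see how this behaves under contention, here is a small self-contained demo (it repeats the count32 type above so it compiles on its own; the goroutine and iteration counts are arbitrary):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type count32 int32

func (c *count32) inc() int32 {
	return atomic.AddInt32((*int32)(c), 1)
}

func (c *count32) get() int32 {
	return atomic.LoadInt32((*int32)(c))
}

func main() {
	var pageViews count32
	var wg sync.WaitGroup
	// 100 goroutines each record 1000 page views concurrently;
	// the atomic add guarantees no increments are lost.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				pageViews.inc()
			}
		}()
	}
	wg.Wait()
	fmt.Println(pageViews.get()) // 100000
}
```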
Solution 2 - Concurrency
Don't use sync/atomic - from the linked page:

> Package atomic provides low-level atomic memory primitives useful for
> implementing synchronization algorithms.
> These functions require great care to be used correctly. Except for
> special, low-level applications, synchronization is better done with
> channels or the facilities of the sync package.
Last time I had to do this I benchmarked something which looked like your second example with a mutex and something which looked like your third example with a channel. The channels code won when things got really busy, but make sure you make the channel buffer big.
Solution 3 - Concurrency
Don't be afraid of using mutexes and locks just because you think they're "not proper Go". In your second example it's absolutely clear what's going on, and that counts for a lot. You will have to try it yourself to see how contended that mutex is, and whether adding complication will increase performance.
If you do need increased performance, perhaps sharding is the best way to go: http://play.golang.org/p/uLirjskGeN
The downside is that your counts will only be as up-to-date as your sharding decides. There may also be performance hits from calling time.Since() so much but, as always, measure it first :)
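The playground link above holds the author's version; as a rough illustration of the sharding idea, here is a hypothetical sketch that spreads increments across per-shard atomics (shard selection by random pick rather than time.Since(), and the shard count of 16 is an arbitrary choice):

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"sync/atomic"
)

const numShards = 16

// shard is padded out to a full cache line so shards on different
// cores do not false-share.
type shard struct {
	n int64
	_ [56]byte // pad the 8-byte counter to 64 bytes
}

type ShardedCounter struct {
	shards [numShards]shard
}

// Incr adds to a pseudo-randomly chosen shard, spreading contention
// across numShards cache lines instead of one.
func (c *ShardedCounter) Incr(delta int64) {
	i := rand.Intn(numShards)
	atomic.AddInt64(&c.shards[i].n, delta)
}

// Value sums all shards; while writers are active the result is only
// approximately current, which is the trade-off mentioned above.
func (c *ShardedCounter) Value() int64 {
	var sum int64
	for i := range c.shards {
		sum += atomic.LoadInt64(&c.shards[i].n)
	}
	return sum
}

func main() {
	var c ShardedCounter
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				c.Incr(1)
			}
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // 8000
}
```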
Solution 4 - Concurrency
The other answer using sync/atomic is suited for things like page counters, but not for submitting unique identifiers to an external API. To do that, you need an "increment-and-return" (fetch-and-add) operation, and one way to write it is as a CAS loop.
Here's a CAS loop around an int32 to generate unique message IDs:
import "sync/atomic"

type UniqueID struct {
    counter int32
}

func (c *UniqueID) Get() int32 {
    for {
        val := atomic.LoadInt32(&c.counter)
        if atomic.CompareAndSwapInt32(&c.counter, val, val+1) {
            return val
        }
    }
}
To use it, simply do:

requestID := client.msgID.Get()
form.Set("id", strconv.Itoa(int(requestID)))
This has an advantage over channels in that it doesn't require as many extra idle resources - existing goroutines are used as they ask for IDs rather than using one goroutine for every counter your program needs.
TODO: Benchmark against channels. I'm going to guess that channels are worse in the no-contention case and better in the high-contention case, as they have queuing while this code simply spins attempting to win the race.
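Worth noting: atomic.AddInt32 returns the updated value, so the same pre-increment semantics can be had without a loop at all; a minimal sketch of that alternative (the type name mirrors the one above):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type UniqueID struct {
	counter int32
}

// Get returns the pre-increment value, like the CAS loop, but in a
// single wait-free atomic operation: AddInt32 returns the new value,
// so subtracting 1 recovers the old one.
func (c *UniqueID) Get() int32 {
	return atomic.AddInt32(&c.counter, 1) - 1
}

func main() {
	var ids UniqueID
	fmt.Println(ids.Get()) // 0
	fmt.Println(ids.Get()) // 1
	fmt.Println(ids.Get()) // 2
}
```

Unlike the CAS loop, this version never retries, so it cannot spin under contention.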
Solution 5 - Concurrency
Old question but I just stumbled upon this and it may help: https://github.com/uber-go/atomic
Basically, the engineers at Uber have built a few nice utility functions on top of the sync/atomic package.
I haven't tested this in production yet, but the codebase is very small and the implementation of most functions is fairly standard.
Definitely preferred over using channels or basic mutexes.
Solution 6 - Concurrency
The last one was close:
package main

import "fmt"

func main() {
    ch := make(chan int, 3)
    go GoCounterRoutine(ch)
    go GoWorkerRoutine(1, ch)
    // not run as a goroutine because main() would just end
    GoWorkerRoutine(2, ch)
}

// started somewhere else
func GoCounterRoutine(ch chan int) {
    counter := 0
    for {
        ch <- counter
        counter += 1
    }
}

func GoWorkerRoutine(n int, ch chan int) {
    for seq := range ch {
        // do work:
        fmt.Println(n, seq)
    }
}
This introduces a single point of failure: if the counter goroutine dies, everything is lost. This may not be a problem if all goroutines are executed on one computer, but may become a problem if they are scattered over the network. To make the counter immune to failures of single nodes in the cluster, special algorithms have to be used.
Solution 7 - Concurrency
I implemented this with a simple map + mutex, which seems to be the best way to handle this since it is the "simplest way" (which is the guideline Go suggests for choosing between locks and channels).
package main
import (
"fmt"
"sync"
)
type single struct {
mu sync.Mutex
values map[string]int64
}
var counters = single{
values: make(map[string]int64),
}
func (s *single) Get(key string) int64 {
s.mu.Lock()
defer s.mu.Unlock()
return s.values[key]
}
func (s *single) Incr(key string) int64 {
s.mu.Lock()
defer s.mu.Unlock()
s.values[key]++
return s.values[key]
}
func main() {
fmt.Println(counters.Incr("bar"))
fmt.Println(counters.Incr("bar"))
fmt.Println(counters.Incr("bar"))
fmt.Println(counters.Get("foo"))
fmt.Println(counters.Get("bar"))
}
You can run the code on https://play.golang.org/p/9bDMDLFBAY. I made a simple packaged version on gist.github.com
Solution 8 - Concurrency
See for yourself and let me know what you think.
src/test/helpers/helpers.go
package helpers
type CounterIncrementStruct struct {
bucket string
value int
}
type CounterQueryStruct struct {
bucket string
channel chan int
}
var counter map[string]int
var counterIncrementChan chan CounterIncrementStruct
var counterQueryChan chan CounterQueryStruct
var counterListChan chan chan map[string]int
func CounterInitialize() {
counter = make(map[string]int)
counterIncrementChan = make(chan CounterIncrementStruct, 0)
counterQueryChan = make(chan CounterQueryStruct, 100)
counterListChan = make(chan chan map[string]int, 100)
go goCounterWriter()
}
func goCounterWriter() {
for {
select {
case ci := <-counterIncrementChan:
if len(ci.bucket) == 0 {
return
}
counter[ci.bucket] += ci.value
break
case cq := <-counterQueryChan:
val, found := counter[cq.bucket]
if found {
cq.channel <- val
} else {
cq.channel <- -1
}
break
case cl := <-counterListChan:
nm := make(map[string]int)
for k, v := range counter {
nm[k] = v
}
cl <- nm
break
}
}
}
func CounterIncrement(bucket string, counter int) {
if len(bucket) == 0 || counter == 0 {
return
}
counterIncrementChan <- CounterIncrementStruct{bucket, counter}
}
func CounterQuery(bucket string) int {
if len(bucket) == 0 {
return -1
}
reply := make(chan int)
counterQueryChan <- CounterQueryStruct{bucket, reply}
return <-reply
}
func CounterList() map[string]int {
reply := make(chan map[string]int)
counterListChan <- reply
return <-reply
}
src/test/distributed/distributed.go
package distributed
type Counter struct {
buckets map[string]int
incrQ chan incrQ
readQ chan readQ
sumQ chan chan int
}
func New() Counter {
c := Counter{
buckets: make(map[string]int, 100),
incrQ: make(chan incrQ, 1000),
readQ: make(chan readQ, 0),
sumQ: make(chan chan int, 0),
}
go c.run()
return c
}
func (c Counter) run() {
for {
select {
case a := <-c.readQ:
a.res <- c.buckets[a.bucket]
case a := <-c.sumQ:
var sum int
for _, cnt := range c.buckets {
sum += cnt
}
a <- sum
case a := <-c.incrQ:
c.buckets[a.bucket] += a.count
}
}
}
func (c Counter) Get(bucket string) int {
res := make(chan int)
c.readQ <- readQ{bucket: bucket, res: res}
return <-res
}
func (c Counter) Sum() int {
res := make(chan int)
c.sumQ <- res
return <-res
}
type readQ struct {
bucket string
res chan int
}
type incrQ struct {
bucket string
count int
}
func (c Counter) Agent(bucket string, limit int) *Agent {
a := &Agent{
bucket: bucket,
limit: limit,
sendIncr: c.incrQ,
}
return a
}
type Agent struct {
bucket string
limit int
count int
sendIncr chan incrQ
}
func (a *Agent) Incr(n int) {
a.count += n
if a.count > a.limit {
select {
case a.sendIncr <- incrQ{bucket: a.bucket, count: a.count}:
a.count = 0
default:
}
}
}
func (a *Agent) Done() {
a.sendIncr <- incrQ{bucket: a.bucket, count: a.count}
a.count = 0
}
src/test/helpers_test.go
package counters
import (
"sync"
"testing"
)
var mux sync.Mutex
var m map[string]int
func bmIncrement(bucket string, value int) {
mux.Lock()
m[bucket] += value
mux.Unlock()
}
func BenchmarkMutex(b *testing.B) {
b.StopTimer()
m = make(map[string]int)
buckets := []string{
"abc123",
"def456",
"ghi789",
}
b.StartTimer()
var wg sync.WaitGroup
wg.Add(b.N)
for i := 0; i < b.N; i++ {
go func() {
for _, b := range buckets {
bmIncrement(b, 5)
}
for _, b := range buckets {
bmIncrement(b, 5)
}
wg.Done()
}()
}
wg.Wait()
}
src/test/distributed_test.go
package counters
import (
"sync"
"test/counters/distributed"
"testing"
)
func BenchmarkDistributed(b *testing.B) {
b.StopTimer()
counter := distributed.New()
agents := []*distributed.Agent{
counter.Agent("abc123", 100),
counter.Agent("def456", 100),
counter.Agent("ghi789", 100),
}
b.StartTimer()
var wg sync.WaitGroup
wg.Add(b.N)
for i := 0; i < b.N; i++ {
go func() {
for _, a := range agents {
a.Incr(5)
}
for _, a := range agents {
a.Incr(5)
}
wg.Done()
}()
}
	wg.Wait()
	for _, a := range agents {
		a.Done()
	}
}
results
$ go test --bench=. --count 10 -benchmem
goos: linux
goarch: amd64
pkg: test/counters
BenchmarkDistributed-4 3356620 351 ns/op 24 B/op 0 allocs/op
BenchmarkDistributed-4 3414073 368 ns/op 11 B/op 0 allocs/op
BenchmarkDistributed-4 3371878 374 ns/op 7 B/op 0 allocs/op
BenchmarkDistributed-4 3240631 387 ns/op 3 B/op 0 allocs/op
BenchmarkDistributed-4 3169230 389 ns/op 2 B/op 0 allocs/op
BenchmarkDistributed-4 3177606 386 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 3064552 390 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 3065877 409 ns/op 2 B/op 0 allocs/op
BenchmarkDistributed-4 2924686 400 ns/op 1 B/op 0 allocs/op
BenchmarkDistributed-4 3049873 389 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1106 ns/op 17 B/op 0 allocs/op
BenchmarkMutex-4 948331 1246 ns/op 9 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1244 ns/op 12 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1246 ns/op 11 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1228 ns/op 1 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1235 ns/op 2 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1244 ns/op 1 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1214 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 956024 1233 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1213 ns/op 0 B/op 0 allocs/op
PASS
ok test/counters 37.461s
If you change the limit value to 1000, the code gets much faster:
$ go test --bench=. --count 10 -benchmem
goos: linux
goarch: amd64
pkg: test/counters
BenchmarkDistributed-4 5463523 221 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5455981 220 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5591240 213 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5277915 212 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5430421 213 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5374153 226 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5656743 219 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5337343 211 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5353845 217 ns/op 0 B/op 0 allocs/op
BenchmarkDistributed-4 5416137 217 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1002 ns/op 135 B/op 0 allocs/op
BenchmarkMutex-4 1253211 1141 ns/op 58 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1261 ns/op 3 B/op 0 allocs/op
BenchmarkMutex-4 987345 1678 ns/op 59 B/op 0 allocs/op
BenchmarkMutex-4 925371 1247 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 1000000 1259 ns/op 2 B/op 0 allocs/op
BenchmarkMutex-4 978800 1248 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 982144 1213 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 975681 1254 ns/op 0 B/op 0 allocs/op
BenchmarkMutex-4 994789 1205 ns/op 0 B/op 0 allocs/op
PASS
ok test/counters 34.314s
Changing the Counter.incrQ buffer length will also greatly affect performance, at the cost of more memory.
Solution 9 - Concurrency
If your work counter types are not dynamic, i.e. you can write them all out upfront, I don't think you'll get much simpler or faster than this.
No mutex, no channel, no map. Just a statically sized array and an enum.
type WorkType int

const (
    WorkType1 WorkType = iota
    WorkType2
    WorkType3
    WorkType4
    NumWorkTypes
)

var workCounter [NumWorkTypes]int64

func updateWorkCount(workType WorkType, delta int) {
    atomic.AddInt64(&workCounter[workType], int64(delta))
}
Usage like so:
updateWorkCount(WorkType1, 1)
If you sometimes need to work with work types as strings for display purposes, you can always generate code with a tool like stringer.
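For illustration, a hand-written String method roughly equivalent to what `stringer -type=WorkType` would produce (the generated code uses a packed string plus an index table rather than a slice, but the behavior is the same idea):

```go
package main

import "fmt"

type WorkType int

const (
	WorkType1 WorkType = iota
	WorkType2
	WorkType3
	WorkType4
	NumWorkTypes
)

// String makes WorkType satisfy fmt.Stringer, so work types print
// by name; out-of-range values fall back to a numeric form.
func (w WorkType) String() string {
	names := [...]string{"WorkType1", "WorkType2", "WorkType3", "WorkType4", "NumWorkTypes"}
	if w < 0 || int(w) >= len(names) {
		return fmt.Sprintf("WorkType(%d)", int(w))
	}
	return names[w]
}

func main() {
	fmt.Println(WorkType2) // WorkType2
}
```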