Max number of goroutines

Tags: Go, Multitasking, Goroutine

Go Problem Overview


How many goroutines can I use painlessly? For example, Wikipedia says that in Erlang 20 million processes can be created without degrading performance.

Update: I've just investigated goroutine performance a little and got the following results:

  • It looks like the cost of a goroutine's lifetime is more than calculating sqrt() 1000 times (~45 µs for me); the only limitation is memory
  • A goroutine costs 4–4.5 KB

Go Solutions


Solution 1 - Go

If a goroutine is blocked, there is no cost involved other than:

  • memory usage
  • slower garbage-collection

The costs (in terms of memory and average time to actually start executing a goroutine) are:

Go 1.6.2 (April 2016)
  32-bit x86 CPU (A10-7850K 4GHz)
    | Number of goroutines: 100000
    | Per goroutine:
    |   Memory: 4536.84 bytes
    |   Time:   1.634248 µs
  64-bit x86 CPU (A10-7850K 4GHz)
    | Number of goroutines: 100000
    | Per goroutine:
    |   Memory: 4707.92 bytes
    |   Time:   1.842097 µs

Go release.r60.3 (December 2011)
  32-bit x86 CPU (1.6 GHz)
    | Number of goroutines: 100000
    | Per goroutine:
    |   Memory: 4243.45 bytes
    |   Time:   5.815950 µs

On a machine with 4 GB of memory installed, this limits the maximum number of goroutines to slightly less than 1 million.


Source code (no need to read this if you already understand the numbers printed above):

package main

import (
    "flag"
    "fmt"
    "os"
    "runtime"
    "time"
)

var n = flag.Int("n", 1e5, "Number of goroutines to create")

var ch = make(chan byte)
var counter = 0

func f() {
    counter++
    <-ch // Block this goroutine
}

func main() {
    flag.Parse()
    if *n <= 0 {
        fmt.Fprintf(os.Stderr, "invalid number of goroutines")
        os.Exit(1)
    }

    // Limit the number of OS threads executing Go code simultaneously to 1
    runtime.GOMAXPROCS(1)

    // Snapshot memory statistics before spawning the goroutines
    var m0 runtime.MemStats
    runtime.ReadMemStats(&m0)

    t0 := time.Now().UnixNano()
    for i := 0; i < *n; i++ {
        go f()
    }
    runtime.Gosched() // yield so the spawned goroutines can run and block
    t1 := time.Now().UnixNano()
    runtime.GC()

    // Snapshot memory statistics after the goroutines have started and blocked
    var m1 runtime.MemStats
    runtime.ReadMemStats(&m1)

    if counter != *n {
        fmt.Fprintf(os.Stderr, "failed to begin execution of all goroutines")
        os.Exit(1)
    }

    fmt.Printf("Number of goroutines: %d\n", *n)
    fmt.Printf("Per goroutine:\n")
    fmt.Printf("  Memory: %.2f bytes\n", float64(m1.Sys-m0.Sys)/float64(*n))
    fmt.Printf("  Time:   %f µs\n", float64(t1-t0)/float64(*n)/1e3)
}

Solution 2 - Go

Hundreds of thousands, per the Go FAQ: Why goroutines instead of threads?:

> It is practical to create hundreds of thousands of goroutines in the same address space.

The test test/chan/goroutines.go creates 10,000 and could easily do more, but is designed to run quickly; you can change the number on your system to experiment. You can easily run millions, given enough memory, such as on a server.

To understand the max number of goroutines, note that the per-goroutine cost is primarily the stack. Per FAQ again:

> …goroutines, can be very cheap: they have little overhead beyond the memory for the stack, which is just a few kilobytes.

A back-of-the-envelope calculation is to assume that each goroutine has one 4 KiB page allocated for its stack (4 KiB is a pretty uniform page size), plus some small overhead for a control block (like a Thread Control Block) for the runtime; this agrees with what you observed (in 2011, pre-Go 1.0). Thus 100 Ki goroutines would take about 400 MiB of memory, and 1 Mi goroutines would take about 4 GiB, which is still manageable on a desktop, a bit much for a phone, and very manageable on a server. In practice the starting stack has ranged in size from half a page (2 KiB) to two pages (8 KiB), so this estimate is approximately correct.
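To make that arithmetic concrete, here is a minimal sketch (mine, not from the original answer) that multiplies a goroutine count by an assumed starting stack size; the 4 KiB, 8 KiB, and 2 KiB figures are the historical defaults quoted in the release notes below.

package main

import "fmt"

func main() {
    // Assumed per-goroutine starting stack sizes, matching the
    // release-note figures quoted below.
    sizes := []struct {
        label string
        bytes int
    }{
        {"4 KiB (pre-1.2)", 4 << 10},
        {"8 KiB (1.2)", 8 << 10},
        {"2 KiB (1.4+)", 2 << 10},
    }
    for _, n := range []int{100000, 1000000} {
        for _, s := range sizes {
            mib := float64(n) * float64(s.bytes) / (1 << 20)
            fmt.Printf("%d goroutines with %s stacks ≈ %.0f MiB\n", n, s.label, mib)
        }
    }
}

This ignores the control block and any stack growth, so it is a lower bound on real memory use.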

The starting stack size has changed over time; it started at 4 KiB (one page), was increased to 8 KiB (two pages) in 1.2, then decreased to 2 KiB (half a page) in 1.4. These changes were driven by segmented stacks causing performance problems when code rapidly switched back and forth across a segment boundary (a "hot stack split"): the size was increased to mitigate this (1.2), then decreased again when segmented stacks were replaced with contiguous stacks (1.4):

Go 1.2 Release Notes: Stack size:

> In Go 1.2, the minimum size of the stack when a goroutine is created has been lifted from 4KB to 8KB.

Go 1.4 Release Notes: Changes to the runtime:

> the default starting size for a goroutine's stack in 1.4 has been reduced from 8192 bytes to 2048 bytes.

Per-goroutine memory is largely stack, and it starts low and grows, so you can cheaply have many goroutines. You could use a smaller starting stack, but then it would have to grow sooner (gaining space at the cost of time), and the benefit diminishes because the control block does not shrink. It is possible to eliminate the stack entirely, at least while a goroutine is swapped out (e.g., do all allocation on the heap, or save the stack to the heap on a context switch), though this hurts performance and adds complexity. This is possible (as in Erlang), and would mean you'd just need the control block and saved context, allowing another factor of 5×–10× in the number of goroutines, limited now by control block size and the on-heap size of goroutine-local variables. However, this isn't terribly useful unless you need millions of tiny sleeping goroutines.

Since the main use of having many goroutines is for IO-bound tasks (concretely to process blocking syscalls, notably network or file system IO), you’re much more likely to run into OS limits on other resources, namely network sockets or file handles: golang-nuts › The max number of goroutines and file descriptors?. The usual way to address this is with a pool of the scarce resource, or more simply by just limiting the number via a semaphore; see Conserving File Descriptors in Go and Limiting Concurrency in Go.
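As an illustration of the semaphore approach (a minimal sketch, not taken from either linked article), a buffered channel can cap how many goroutines are doing IO at once; maxInFlight here is an assumed limit you would tune to your socket or file-descriptor budget.

package main

import (
    "fmt"
    "sync"
)

func main() {
    const maxInFlight = 10 // assumed limit on concurrent work

    sem := make(chan struct{}, maxInFlight) // counting semaphore
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        sem <- struct{}{} // blocks once maxInFlight goroutines are in flight
        go func(i int) {
            defer wg.Done()
            defer func() { <-sem }() // release the slot when done
            // Place the IO-bound work (network call, file read, ...) here.
            fmt.Println("task", i)
        }(i)
    }
    wg.Wait()
}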

Solution 3 - Go

That depends entirely on the system you are running on, but goroutines are very lightweight. An average process should have no problem with 100,000 concurrent goroutines. Whether this holds for your target platform is, of course, something we can't answer without knowing what that platform is.

Solution 4 - Go

To paraphrase, there are lies, damn lies, and benchmarks. As the author of the Erlang benchmark confessed,

> It goes without saying that there wasn't enough memory left in the machine to actually do anything useful.

(stress-testing erlang)

What is your hardware, what is your operating system, where is your benchmark source code? What is the benchmark trying to measure and prove/disprove?

Solution 5 - Go

Here's a great article by Dave Cheney on this topic: http://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite
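The article's point is that a goroutine's stack starts tiny and grows on demand until it hits the runtime's per-goroutine limit. As a rough, hedged way to see that growth yourself (my own sketch, not from the article), you can lower the limit with runtime/debug.SetMaxStack and recurse until the runtime aborts; the exact limit handling and error text depend on the Go version.

package main

import (
    "fmt"
    "runtime/debug"
)

// grow recurses with a frame of a few KB so the goroutine's stack must keep growing.
func grow(depth int, pad [2048]byte) {
    if depth%100 == 0 {
        fmt.Println("depth", depth)
    }
    grow(depth+1, pad)
}

func main() {
    // Cap a single goroutine's stack at 1 MiB (the default is far larger)
    // so the stack-overflow fatal error appears quickly. Demonstration only.
    debug.SetMaxStack(1 << 20)
    grow(0, [2048]byte{})
}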

Solution 6 - Go

If the number of goroutines ever becomes an issue, you can easily limit it for your program:
See mr51m0n/gorc and this example.

> Set thresholds on the number of running goroutines.
>
> It can increase and decrease a counter when starting or stopping a goroutine, and it can wait for a minimum or maximum number of goroutines to be running, thus allowing you to set thresholds for the number of gorc-governed goroutines running at the same time.
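gorc's actual API is documented in its repository; as a rough standard-library-only sketch of the same counter-and-threshold idea (not gorc's API), a counter guarded by sync.Cond can block new goroutines from starting until the running count drops below a chosen maximum.

package main

import (
    "fmt"
    "sync"
)

// limiter is a hypothetical stand-in for the behaviour described above: a
// counter bumped before starting a goroutine and decremented when it stops.
type limiter struct {
    mu    sync.Mutex
    cond  *sync.Cond
    count int
    max   int
}

func newLimiter(max int) *limiter {
    l := &limiter{max: max}
    l.cond = sync.NewCond(&l.mu)
    return l
}

// inc blocks until fewer than max goroutines are running, then registers one more.
func (l *limiter) inc() {
    l.mu.Lock()
    for l.count >= l.max {
        l.cond.Wait()
    }
    l.count++
    l.mu.Unlock()
}

// dec unregisters a finished goroutine and wakes a waiting starter, if any.
func (l *limiter) dec() {
    l.mu.Lock()
    l.count--
    l.cond.Signal()
    l.mu.Unlock()
}

func main() {
    l := newLimiter(5) // assumed threshold of 5 concurrent goroutines
    var wg sync.WaitGroup
    for i := 0; i < 50; i++ {
        l.inc()
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            defer l.dec()
            fmt.Println("working on", i)
        }(i)
    }
    wg.Wait()
}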

Solution 7 - Go

When the operation was CPU-bound, anything beyond the number of cores proved to do nothing.

In any other case you will need to test yourself.
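For CPU-bound work, a common way to act on that observation (a sketch under my own assumptions, not from the answer) is to size a worker pool to runtime.NumCPU(), since extra goroutines beyond that just wait for a core.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    jobs := make(chan int)
    results := make(chan int)

    workers := runtime.NumCPU() // one worker per core for CPU-bound work
    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for n := range jobs {
                // Stand-in for real CPU-bound work.
                results <- n * n
            }
        }()
    }

    // Feed the jobs, then signal there are no more.
    go func() {
        for i := 0; i < 100; i++ {
            jobs <- i
        }
        close(jobs)
    }()

    // Close results once every worker has finished.
    go func() {
        wg.Wait()
        close(results)
    }()

    sum := 0
    for r := range results {
        sum += r
    }
    fmt.Println("sum of squares:", sum)
}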

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type    | Original Author        | Original Content on Stackoverflow
Question        | OCyril                 | View Question on Stackoverflow
Solution 1 - Go | user811773             | View Answer on Stackoverflow
Solution 2 - Go | Nils von Barth         | View Answer on Stackoverflow
Solution 3 - Go | jimt                   | View Answer on Stackoverflow
Solution 4 - Go | peterSO                | View Answer on Stackoverflow
Solution 5 - Go | Travis Reeder          | View Answer on Stackoverflow
Solution 6 - Go | VonC                   | View Answer on Stackoverflow
Solution 7 - Go | Alberto Salvia Novella | View Answer on Stackoverflow