Question

As far as I can understand, goroutines will block other goroutines from running if they are too busy. For me, this means that the performance and responsiveness of my application will probably depend on knowing which library methods yield control to other goroutines (e.g. typically Read() and Write()).

Is there any way I can know exactly how different library methods yield control to other goroutines, i.e. do not actually block?

Is there any way I can implement a new method that calls third-party code (including an async Win32 API such as FindNextChangeNotification, which relies on WaitForSingleObject or WaitForMultipleObjects) and behaves "nicely" with the Go scheduler? In this particular example, the syscall signals once it has finished, and I need to wait until it is finished without starving all the other goroutines.

Is there perhaps another "best practice" for dealing with third-party blocking operations in Go so that they don't starve the other goroutines?

I assume that the Go runtime perhaps has some sort of internal IO loop running on a background thread in order to "suspend" blocking goroutine operations until their IO has finished. If this is indeed the case, then I guess it would be useful to be able to build upon this for new blocking operations.


Solution

Go's scheduler will suspend goroutines that are waiting for a syscall and wake them when the syscall has completed, giving you a synchronous API to deal with asynchronous calls.

Read more about how the scheduler works.

There is, however, no exact way of determining which goroutine will be woken after another, or of yielding control from one goroutine directly to another; that's the scheduler's job.

Your concern is a solved problem in Go and you don't have to worry about it - code away!
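
For instance, the usual pattern for something like the FindNextChangeNotification / WaitForSingleObject case in the question is to issue the blocking wait from its own goroutine and report completion over a channel; the runtime parks that goroutine without stalling the others. A minimal sketch, where waitForChange is a hypothetical stand-in for the real blocking syscall or cgo call:

```go
package main

import (
	"fmt"
	"time"
)

// waitForChange is a hypothetical stand-in for a blocking call such as
// WaitForSingleObject after FindNextChangeNotification. In real code this
// would be a syscall or cgo call that blocks until the handle is signaled.
func waitForChange() error {
	time.Sleep(2 * time.Second) // simulate the blocking wait
	return nil
}

func main() {
	done := make(chan error, 1)

	// Run the blocking operation in its own goroutine. The runtime parks
	// this goroutine while it waits, so the rest of the program keeps running.
	go func() {
		done <- waitForChange()
	}()

	// Meanwhile, other goroutines are not starved.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case err := <-done:
			fmt.Println("change notification received, err =", err)
			return
		case t := <-ticker.C:
			fmt.Println("still responsive at", t.Format(time.StampMilli))
		}
	}
}
```

The buffered channel lets the waiting goroutine deliver its result and exit even if no one is ready to receive at that exact moment.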

Edit:

To further clarify: you are not supposed to write code that conforms to (or tries to exploit) the semantics of Go's scheduler; rather, it is the other way around. There may be code tricks that get you a slight performance increase today, but the scheduler can and will change in future Go releases, making such optimizations useless or even counterproductive.

OTHER TIPS

Two mechanisms can enhance your control over this:

  1. runtime.Gosched() - yields control back to the scheduler, but of course it won't help with a blocking call you've already issued.

  2. runtime.LockOSThread() dedicates a real OS thread to this goroutine and no other, meaning there will be less competition in the scheduler (see the sketch after this list). From the docs:

LockOSThread wires the calling goroutine to its current operating system thread. Until the calling goroutine exits or calls UnlockOSThread, it will always execute in that thread, and no other goroutine can.
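
A minimal sketch of both mechanisms (the loop body and the thread-affine work are placeholders, not real workloads):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// 1. runtime.Gosched(): explicitly yield inside a CPU-bound loop so
	// other goroutines get a chance to run. Mostly relevant with
	// GOMAXPROCS=1 and older Go versions; it cannot help once you are
	// already inside a blocking call.
	go func() {
		for i := 0; ; i++ {
			// ... CPU-bound work ...
			if i%10000 == 0 {
				runtime.Gosched() // hand control back to the scheduler
			}
		}
	}()

	// 2. runtime.LockOSThread(): pin the calling goroutine to its current
	// OS thread. Useful for thread-affine APIs (certain Win32 calls,
	// OpenGL contexts, etc.).
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	fmt.Println("this goroutine now has a dedicated OS thread")
	time.Sleep(50 * time.Millisecond) // let the busy goroutine spin a bit
}
```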

Through Go 1.1, goroutines would only yield control on blocking calls (syscalls, channel reads/writes, mutex locks, etc.). This meant that a goroutine could in theory completely hog the CPU and never give the scheduler a chance to run other goroutines.

In Go 1.2, a change was introduced to fix this:

In prior releases, a goroutine that was looping forever could starve out other goroutines on the same thread, a serious problem when GOMAXPROCS provided only one user thread. In Go 1.2, this is partially addressed: The scheduler is invoked occasionally upon entry to a function. This means that any loop that includes a (non-inlined) function call can be pre-empted, allowing other goroutines to run on the same thread.

(source: go1.2 release notes)

In theory, it is still possible for a goroutine to hog the CPU by never calling a non-inlined function, but such cases are much rarer than goroutines that never make blocking calls, which is what would bite you prior to Go 1.2.
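
To see the Go 1.2 behaviour described above in isolation, here is a small sketch: with GOMAXPROCS(1), a busy loop that contains a non-inlined function call can still be preempted, so the main goroutine is not starved. The work function and the //go:noinline directive are only there to guarantee a non-inlined call site:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

//go:noinline
func work(x int) int { return x + 1 }

func main() {
	runtime.GOMAXPROCS(1) // single user thread, as described above

	go func() {
		n := 0
		for {
			// Because this loop contains a (non-inlined) function call,
			// the scheduler can preempt it at the call, so the main
			// goroutine below still gets to run.
			n = work(n)
		}
	}()

	time.Sleep(100 * time.Millisecond)
	fmt.Println("main goroutine was not starved")
}
```

Later releases went further: since Go 1.14, goroutines are asynchronously preemptible, so even loops without any function calls can no longer monopolize a thread.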

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow