Question

I wrote a simple async-based load-testing library, with a console interface for testing from the command line.

Basically, it runs a huge number of requests concurrently, aggregates the results, and shows a summary and a simple histogram. Nothing fancy. But I run a lot of tests on my local machine, so I wanted the test tool to get out of the way for a relatively accurate benchmark, using as few resources as possible. So it uses bare asynchrony with Begin/End methods to keep overhead to a minimum.

All done, fully asynchronous, it works, and it gets out of the way (well, mostly). But the number of threads in a normal session was well over 40 — quite a waste of resources for a machine with 4 hardware threads, considering the local machine is also running the server being tested.

I'm already running the program in an AsyncContext, which is basically just a simple queued context that puts everything onto the same thread. So all async post-backs land on the main thread. Perfect.
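(For context: the AsyncContext here is presumably the one from Nito.AsyncEx. As an illustration of the "simple queued context" idea — a minimal sketch, not that library's actual implementation — a single-threaded SynchronizationContext can be built on a blocking queue that the main thread drains:)

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Minimal single-threaded queued SynchronizationContext: every await
// continuation is posted to a queue that the main thread drains, so all
// async callbacks run on that one thread.
sealed class SingleThreadSyncContext : SynchronizationContext
{
    private readonly BlockingCollection<(SendOrPostCallback Callback, object State)> _queue
        = new BlockingCollection<(SendOrPostCallback, object)>();

    public override void Post(SendOrPostCallback d, object state) => _queue.Add((d, state));

    public static void Run(Func<Task> asyncMain)
    {
        var previous = Current;
        var context = new SingleThreadSyncContext();
        SetSynchronizationContext(context);
        try
        {
            Task task = asyncMain();
            // When the async main completes, stop the message loop.
            task.ContinueWith(_ => context._queue.CompleteAdding(), TaskScheduler.Default);
            foreach (var item in context._queue.GetConsumingEnumerable())
                item.Callback(item.State);
            task.GetAwaiter().GetResult(); // rethrow any failure
        }
        finally
        {
            SetSynchronizationContext(previous);
        }
    }
}

class Program
{
    static void Main() => SingleThreadSyncContext.Run(async () =>
    {
        int mainThread = Thread.CurrentThread.ManagedThreadId;
        await Task.Delay(100); // completes on a timer/pool thread...
        // ...but the continuation is posted back to the main thread.
        Console.WriteLine(Thread.CurrentThread.ManagedThreadId == mainThread); // prints True
    });
}
```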

Now, all I have to do is limit the ThreadPool's maximum threads and see how well it performs. I limited it to the actual core count: 4 worker threads and 4 IOCP threads.

Result?

Exception: "There were not enough free threads in the ThreadPool to complete the operation."

Well, this is not a new issue, and it's discussed all over the internet. But isn't the whole point of the ThreadPool that you can put work onto the pool's queue, and it executes whenever a thread becomes available?

In fact, the name of the method is 'Queue' UserWorkItem. And the documentation says, appropriately: "Queues a method for execution. The method executes when a thread pool thread becomes available."

Now, if there are not enough free threads available, what you'd ideally expect is perhaps a slowdown in the execution of the program. IOCP and asynchronous tasks should just be queued — so why is it implemented in such a way that it falls over and fails instead? Increasing the number of threads is not the solution when it's called a ThreadPool and is intended to be a queue.

Edit - Clarification:

I'm fully aware of the concept of the thread pool, and of why the CLR spins up more threads. It should; I agree that is in fact the correct thing to do when there are heavy IO-bound tasks. But the point is: if you do restrict the threads in the ThreadPool, it is expected to queue the task for execution whenever a free thread becomes available, not throw an exception. Concurrency could suffer, perhaps even slowing down the outcome, but QueueUserWorkItem is intended to queue — not to work only when a new thread is available, or fail. Hence my speculative assertion that it's an implementation bug, as stated in the title.

Update 1:

The same problem as documented in Microsoft's Support Forums with an example: http://support.microsoft.com/default.aspx?scid=kb;EN-US;815637

The suggested workaround, unsurprisingly, is to increase the number of threads, since it fails to queue.

Note: That article is about a very old runtime; a way to reproduce the same issue on the 4.5.1 runtime is given below.

Update 2:

Ran the same code on the Mono runtime, and the ThreadPool has no issues there: work gets queued up and executed. The issue occurs only under the Microsoft CLR.

Update 3:

After @Noseratio pointed out the valid issue that the original code couldn't be reproduced under .NET 4.5.1, below is a piece of code that reproduces it. To break code that otherwise queues as expected, all that has to be done is to add a true asynchronous call to the queued delegate.

For example, just adding the line below to the end of the delegate should end in an exception:

(await WebRequest.Create("http://www.google.com").GetResponseAsync()).Close(); 

Code for reproduction:

Here's code, slightly modified from the MSKB article, that should fail quickly under .NET 4.5.1 on Windows 8.1.

(Feel free to change the URL and the thread limits.)

using System;
using System.Net;
using System.Threading;

public static void Main()
{
    ThreadPool.SetMinThreads(1, 1);
    ThreadPool.SetMaxThreads(2, 2);

    for (int i = 0; i < 5; i++)
    {
        Console.WriteLine("Queued {0}", i);
        ThreadPool.QueueUserWorkItem(PoolFunc);
    }
    Console.ReadLine();
}

private static async void PoolFunc(object state)
{
    int workerThreads, completionPortThreads;
    ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
    Console.WriteLine(
        "Available: WorkerThreads: {0}, CompletionPortThreads: {1}",
        workerThreads,
        completionPortThreads);
    Thread.Sleep(1000);

    string url = "http://localhost:8080";

    HttpWebRequest myHttpWebRequest;
    // Creates an HttpWebRequest for the specified URL.    
    myHttpWebRequest = (HttpWebRequest)WebRequest.Create(url);
    // Sends the HttpWebRequest, and waits for a response.
    Console.WriteLine("Wait for response.");
    var myHttpWebResponse = await myHttpWebRequest.GetResponseAsync();
    Console.WriteLine("Done.");
    myHttpWebResponse.Close();
}

Any insight into this behavior, that could bring reasoning to this is much appreciated. Thanks.


Solution

In your sample code it is not the call to QueueUserWorkItem that throws the exception; it is the call to await myHttpWebRequest.GetResponseAsync(). If you look at the exception detail, you can see exactly which method throws it:

System.InvalidOperationException was unhandled by user code
  _HResult=-2146233079
  _message=There were not enough free threads in the ThreadPool to complete the operation.
  HResult=-2146233079
  IsTransient=false
  Message=There were not enough free threads in the ThreadPool to complete the operation.
  Source=System
  StackTrace:
       at System.Net.HttpWebRequest.BeginGetResponse(AsyncCallback callback, Object state)
       at System.Threading.Tasks.TaskFactory`1.FromAsyncImpl(Func`3 beginMethod, Func`2 endFunction, Action`1 endAction, Object state, TaskCreationOptions creationOptions)
       at System.Threading.Tasks.TaskFactory`1.FromAsync(Func`3 beginMethod, Func`2 endMethod, Object state)
       at System.Net.WebRequest.<GetResponseAsync>b__8()
       at System.Threading.Tasks.Task`1.InnerInvoke()
       at System.Threading.Tasks.Task.Execute()
    --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
       at ConsoleApplication1.Program.<PoolFunc>d__0.MoveNext() in c:\Users\Justin\Source\Repos\Azure\ConsoleApplication1\ConsoleApplication1\Program.cs:line 39
  InnerException: 

Indeed, if we look at the HttpWebRequest.BeginGetResponse method, we can see the following:

if (!RequestSubmitted && NclUtilities.IsThreadPoolLow())
{
    // prevent new requests when low on resources
    Exception exception = new InvalidOperationException(SR.GetString(SR.net_needmorethreads));
    Abort(exception, AbortState.Public);
    throw exception;
}

The moral of the story is that the thread pool is a shared resource that other code (including parts of the .NET Framework) also uses. Setting the maximum number of threads to 2 is what Raymond Chen would call a global solution to a local problem, and as a result it breaks the expectations of other parts of the system.

If you want explicit control over which threads are used, you should create your own implementation; however, unless you really know what you are doing, you are better off letting the .NET Framework handle thread management.
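(One lightweight alternative — a sketch of the general technique, not something the answer prescribes — is to throttle your own in-flight requests with a SemaphoreSlim instead of capping the shared pool. Here DoRequestAsync is a hypothetical stand-in for the real async I/O, e.g. GetResponseAsync:)

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Sketch: limit *your own* concurrency with SemaphoreSlim rather than
// shrinking the shared ThreadPool. DoRequestAsync is a hypothetical
// stand-in for the real asynchronous request.
class Program
{
    static int _inFlight, _peak;

    static async Task DoRequestAsync(int i)
    {
        int now = Interlocked.Increment(ref _inFlight);
        // Record the highest concurrency ever observed.
        int peak;
        while (now > (peak = Volatile.Read(ref _peak)) &&
               Interlocked.CompareExchange(ref _peak, now, peak) != peak) { }
        await Task.Delay(50); // stands in for real network I/O
        Interlocked.Decrement(ref _inFlight);
    }

    static async Task Main()
    {
        var gate = new SemaphoreSlim(4); // at most 4 requests in flight
        var tasks = Enumerable.Range(0, 20).Select(async i =>
        {
            await gate.WaitAsync();
            try { await DoRequestAsync(i); }
            finally { gate.Release(); }
        }).ToArray();
        await Task.WhenAll(tasks);
        Console.WriteLine($"Peak concurrency: {_peak}"); // never more than 4
    }
}
```

This way the ThreadPool keeps its defaults — the framework's own components (like HttpWebRequest) still get the threads they expect — while your load generator never has more than four requests in flight.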

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow