
Deadlock in balancing implementation #2380

@kalduzov

Description

Hi @JamesNK

We have previously filed issues related to balancing; you fixed them, and since then everything has worked as it should. We wrote our own balancers and even migrated about 100 services to them. Everything worked perfectly for more than two months.

But yesterday we hit a deadlock in the balancer code. We can't reproduce it on demand, and its presence really worries us.

Given the locking involved, the following two lines of code can, in some situations, end up blocking each other:

https://github.com/grpc/grpc-dotnet/blob/master/src/Grpc.Net.Client/Balancer/Internal/ConnectionManager.cs#L291

https://github.com/grpc/grpc-dotnet/blob/master/src/Grpc.Net.Client/Balancer/Internal/ConnectionManager.cs#L368

We use the standard base classes for our balancers and pickers.

Our balancer works on a push model: it receives new endpoint data and then passes it on to the standard Listener(); a trimmed-down sketch of this flow is shown below.
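
To make the setup concrete, here is a rough sketch of that push flow, written as a `PollingResolver` because that is where `Listener()` is exposed. `IEndpointFeed` and its event are placeholders for our internal discovery client, not real library types; the only library API used is `PollingResolver` / `BalancerAddress` / `ResolverResult.ForResult` from `Grpc.Net.Client.Balancer`, as in the documentation examples.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Grpc.Net.Client.Balancer;
using Microsoft.Extensions.Logging;

// Placeholder for our internal discovery client that pushes endpoint updates.
public interface IEndpointFeed
{
    event Action<IReadOnlyList<string>> EndpointsChanged;
}

public class PushResolver : PollingResolver
{
    private readonly IEndpointFeed _feed;
    private readonly int _defaultPort;

    public PushResolver(IEndpointFeed feed, int defaultPort, ILoggerFactory loggerFactory)
        : base(loggerFactory)
    {
        _feed = feed;
        _defaultPort = defaultPort;
    }

    protected override void OnStarted()
    {
        base.OnStarted();

        // Every pushed update is forwarded to the channel through the standard Listener.
        _feed.EndpointsChanged += hosts =>
        {
            var addresses = hosts.Select(h => new BalancerAddress(h, _defaultPort)).ToArray();
            Listener(ResolverResult.ForResult(addresses));
        };
    }

    protected override Task ResolveAsync(CancellationToken cancellationToken)
    {
        // Nothing to poll: updates are pushed to us by the feed.
        return Task.CompletedTask;
    }
}
```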

At the same time, our picker implementations are also active and are constantly being replaced: every time new endpoints arrive, we recreate the pickers with the new endpoints (just as in the documentation examples); see the simplified sketch below.
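
For reference, our balancer and picker have the same shape as the `RandomBalancer` example from the load-balancing documentation: the `SubchannelsLoadBalancer` base class calls `CreatePicker` whenever the ready subchannels change, and we return a brand-new picker instance each time. Our actual selection logic is omitted here; a random pick stands in for it.

```csharp
using System;
using System.Collections.Generic;
using Grpc.Net.Client.Balancer;
using Microsoft.Extensions.Logging;

public class CustomBalancer : SubchannelsLoadBalancer
{
    public CustomBalancer(IChannelControlHelper controller, ILoggerFactory loggerFactory)
        : base(controller, loggerFactory)
    {
    }

    // Called by the base class whenever the set of ready subchannels changes;
    // a new picker is created with the new endpoints every time.
    protected override SubchannelPicker CreatePicker(IReadOnlyList<Subchannel> readySubchannels)
    {
        return new CustomPicker(readySubchannels);
    }

    private sealed class CustomPicker : SubchannelPicker
    {
        private readonly IReadOnlyList<Subchannel> _subchannels;

        public CustomPicker(IReadOnlyList<Subchannel> subchannels)
        {
            _subchannels = subchannels;
        }

        public override PickResult Pick(PickContext context)
        {
            // Our real selection logic is omitted; a random pick stands in for it.
            return PickResult.ForSubchannel(_subchannels[Random.Shared.Next(0, _subchannels.Count)]);
        }
    }
}
```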

From what we can see, the PickAsync method simply gets stuck in WaitAsync right after the new endpoints arrive. The service keeps running, but every new request that goes through balancing ties up a ThreadPool thread. As a result, the picture looks like this:

  1. Calls on the affected client quietly stop working (in our case they are eventually rejected by CancellationToken or Deadline).
  2. The lock hangs indefinitely and keeps a pool thread blocked.
  3. The ThreadPool starts to swell because callers keep issuing requests through the client.
  4. Because our services run in pods with memory limits, after a while the pod hits OOM (each thread consumes a little memory) and is restarted.
  5. After the restart, balancing works stably again.

This is a really rare situation. We discovered it completely by chance: across roughly 500 pods, it has shown up only about 3 times over the past month.

We would really like to solve this problem somehow. It is a serious hindrance for us, because we don't understand how it happens at all, or whether it might end up blocking API calls for us at some point.

Attached is a screenshot of the trace captured from the pod:
[screenshot: stack trace captured from the pod]

Library version: 2.60.0

Labels: bug (Something isn't working)
