
Conversation

@haraldschilly (Contributor) commented Sep 9, 2025

This is now different. It caches the subject splits and the consistent-hashing result used for stickiness. I still have to work more on the actual benchmark, and first get a better understanding of what's going on.
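To illustrate the idea of the split cache: a minimal sketch of memoizing subject splits in a bounded `Map`. The names (`splitSubject`, `SPLIT_CACHE_MAX`) are illustrative, not CoCalc's actual API, and the eviction policy here is the simplest possible one (drop the oldest entry).

```typescript
// Hypothetical sketch: memoize subject.split(".") results so repeated
// subjects skip re-splitting and re-allocating the parts array.
const splitCache = new Map<string, string[]>();
const SPLIT_CACHE_MAX = 10_000; // bound memory use

function splitSubject(subject: string): string[] {
  let parts = splitCache.get(subject);
  if (parts === undefined) {
    parts = subject.split(".");
    if (splitCache.size >= SPLIT_CACHE_MAX) {
      // Map iterates in insertion order, so this evicts the oldest entry.
      const oldest = splitCache.keys().next().value;
      if (oldest !== undefined) {
        splitCache.delete(oldest);
      }
    }
    splitCache.set(subject, parts);
  }
  return parts;
}
```

A cache hit returns the same array object, so callers must treat the result as read-only.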

hsy@x2:~/p/cocalc/src/packages/conat$ pnpm benchmark
[...]
CoCalc Conat Routing Benchmark
===============================
Running 10 iterations with 100,000 messages each...

Iteration 1/10...
Iteration 2/10...
Iteration 3/10...
Iteration 4/10...
Iteration 5/10...
Iteration 6/10...
Iteration 7/10...
Iteration 8/10...
Iteration 9/10...
Iteration 10/10...

╭──────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                        Benchmark Results                                         │
├─────────────┬──────────────┬──────────────┬──────────────┬──────────────┬──────────────┬─────────┤
│   Variant   │  Setup (ms)  │  Match (ms)  │  Throughput  │ Split Hit %  │  Hash Hit %  │ Speedup │
├─────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼─────────┤
│ No Caching  │     24±  31% │   3436±   7% │  29248±   7% │ 0.0%         │ 0.0%         │    1.00 │
│ Split Cache │     20±  17% │   3390±   4% │  29544±   4% │ 79.6%        │ 0.0%         │    1.01 │
│ Hash Cache  │     15±   8% │    809±   8% │ 124338±   7% │ 0.0%         │ 82.0%        │    4.25 │
│ Both Caches │     19±  17% │    772±   5% │ 129883±   5% │ 79.6%        │ 82.0%        │    4.45 │
╰─────────────┴──────────────┴──────────────┴──────────────┴──────────────┴──────────────┴─────────╯

@haraldschilly haraldschilly marked this pull request as ready for review September 9, 2025 15:15
@williamstein (Contributor)

Thanks. I'm concerned that this is too risky for such a minimal performance improvement.

Shouldn't we first try to figure out where the routers actually spend most of their effort? We don't actually know that it is this pattern-matching work at all. It could be something else entirely:

  • related to sticky routing
  • related to the persistence servers
  • in socket.io itself (e.g., JSON'ing messages).

I don't know what the current bottleneck is, or even whether it is one of the above.

@haraldschilly (Contributor, Author)

Yes, I'll think about this; it's not easy. There is probably potential to cache the consistent hashing in "stickyChoice". I also realized that the string splitting happens in many places. Overall, such caching also has second-order effects by reducing pressure on the GC, which small benchmarks don't capture.
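A rough sketch of what caching the sticky choice could look like. Everything here is hypothetical (the hash function, the cache key, and the signature of `stickyChoice` are assumptions, not CoCalc's actual code); the one real subtlety it shows is that a cached choice must be re-validated against the current target set, since targets come and go.

```typescript
// Hypothetical sketch: cache the sticky routing choice per subject so the
// hash only has to be computed on a cache miss or after targets change.
function hashString(s: string): number {
  // Simple 32-bit string hash (illustrative stand-in for consistent hashing).
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (Math.imul(31, h) + s.charCodeAt(i)) | 0;
  }
  return h >>> 0;
}

const stickyCache = new Map<string, string>();

function stickyChoice(subject: string, targets: string[]): string {
  const cached = stickyCache.get(subject);
  // Re-validate: the cached target may have disconnected since it was chosen.
  if (cached !== undefined && targets.includes(cached)) {
    return cached;
  }
  const choice = targets[hashString(subject) % targets.length];
  stickyCache.set(subject, choice);
  return choice;
}
```

Note that plain `hash % n` redistributes most keys when `n` changes, which is exactly what real consistent hashing avoids; the cache layer is orthogonal to whichever hashing scheme sits underneath.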
