[release/microbenchmarks] Please take a look at the results (and possibly sign off) #15247

@krfricke

Results from the release branch:

single client get calls per second 45451.85 +- 260.67
single client put calls per second 34434.73 +- 1169.37
multi client put calls per second 147515.09 +- 4571.9
single client get calls (Plasma Store) per second 8851.22 +- 70.09
single client put calls (Plasma Store) per second 5454.56 +- 8.64
multi client put calls (Plasma Store) per second 9306.63 +- 120.43
single client put gigabytes per second 16.03 +- 11.37
multi client put gigabytes per second 37.95 +- 1.8
single client tasks sync per second 1658.41 +- 47.78
single client tasks async per second 15046.33 +- 516.59
multi client tasks async per second 43697.03 +- 5338.69
1:1 actor calls sync per second 2710.45 +- 68.32
1:1 actor calls async per second 7021.72 +- 163.53
1:1 actor calls concurrent per second 7201.4 +- 198.41
1:n actor calls async per second 16829.7 +- 349.17
n:n actor calls async per second 46951.12 +- 651.15
n:n actor calls with arg async per second 13085.9 +- 171.39
1:1 async-actor calls sync per second 1754.34 +- 46.87
1:1 async-actor calls async per second 3870.6 +- 168.3
1:1 async-actor calls with args async per second 2637.22 +- 77.39
1:n async-actor calls async per second 14599.06 +- 515.27
n:n async-actor calls async per second 37827.88 +- 860.12
client: get calls per second 2155.77 +- 80.11
client: put calls per second 1190.19 +- 9.34
client: remote put calls per second 9119.79 +- 83.24
client: 1:1 actor calls sync per second 649.32 +- 15.85
client: 1:1 actor calls async per second 712.39 +- 2.36
client: 1:1 actor calls concurrent per second 715.0 +- 5.37

Results from 1.2.0:

single client get calls per second 48106.48 +- 847.52
single client put calls per second 42709.1 +- 84.85
multi client put calls per second 172608.71 +- 3071.81
single client get calls (Plasma Store) per second 10669.26 +- 286.63
single client put calls (Plasma Store) per second 6622.51 +- 47.03
multi client put calls (Plasma Store) per second 9804.51 +- 462.32
single client put gigabytes per second 11.45 +- 10.79
multi client put gigabytes per second 35.06 +- 0.26
single client tasks sync per second 1899.11 +- 87.63
single client tasks async per second 18599.58 +- 124.02
multi client tasks async per second 50388.88 +- 2585.47
1:1 actor calls sync per second 3053.21 +- 60.37
1:1 actor calls async per second 7768.59 +- 268.78
1:1 actor calls concurrent per second 7106.24 +- 219.87
1:n actor calls async per second 17132.11 +- 881.8
n:n actor calls async per second 51037.11 +- 1732.95
n:n actor calls with arg async per second 13746.19 +- 171.94
1:1 async-actor calls sync per second 2103.39 +- 52.51
1:1 async-actor calls async per second 4100.13 +- 53.6
1:1 async-actor calls with args async per second 3085.78 +- 165.8
1:n async-actor calls async per second 13906.28 +- 363.9
n:n async-actor calls async per second 40269.65 +- 1113.55
client: get calls per second 2414.77 +- 43.07
client: put calls per second 1346.13 +- 8.2
client: remote put calls per second 58855.54 +- 849.21
client: 1:1 actor calls sync per second 730.58 +- 11.66
client: 1:1 actor calls async per second 774.79 +- 14.1
client: 1:1 actor calls concurrent per second 805.73 +- 11.46

There seem to be a few regressions here, e.g.:

multi client put calls per second 147515.09 +- 4571.9 (release branch)
vs.
multi client put calls per second 172608.71 +- 3071.81 (1.2.0)

a drop of roughly 14.5%, and notably

client: remote put calls per second 9119.79 +- 83.24 (release branch)
vs.
client: remote put calls per second 58855.54 +- 849.21 (1.2.0)

a drop of roughly 84.5%.
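
For reference, a minimal sketch of how the two dumps above can be diffed programmatically; it assumes the `<metric> per second <mean> +- <std>` line format used here, and both the 5% cut-off and the error-bar check are arbitrary choices, not anything the release process prescribes:

```python
# Throwaway sketch: parse two microbenchmark dumps of the form
# "<metric> per second <mean> +- <std>" and flag likely regressions.

def parse(dump: str) -> dict:
    """Map metric name -> (mean, std) for one pasted dump."""
    results = {}
    for line in dump.strip().splitlines():
        name, stats = line.rsplit(" per second ", 1)
        mean, std = stats.split(" +- ")
        results[name] = (float(mean), float(std))
    return results

def report_regressions(new: dict, old: dict, threshold: float = 0.05) -> None:
    """Print metrics that dropped more than `threshold` and beyond error bars."""
    for name, (new_mean, new_std) in new.items():
        if name not in old:
            continue
        old_mean, old_std = old[name]
        change = (new_mean - old_mean) / old_mean
        if change < -threshold and (old_mean - new_mean) > new_std + old_std:
            print(f"{name}: {old_mean:.2f} -> {new_mean:.2f} ({change:+.1%})")
```

Run over the two dumps above, this flags a number of metrics beyond the two singled out.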

I also got this warning:

2021-04-12 04:45:15,897 WARNING services.py:1726 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 53772902400 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=55.09gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.

Is this the reason for the regressions?
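
As a quick sanity check of the warning above, here is a minimal standard-library sketch of the 30%-of-RAM rule it mentions (nothing Ray-specific; `SC_PHYS_PAGES`/`SC_PAGE_SIZE` are Linux-only):

```python
import os

# Mirror the warning's rule of thumb: /dev/shm should hold more than
# 30% of available RAM, or Ray's object store falls back to /tmp.
shm = os.statvfs("/dev/shm")
shm_free = shm.f_bavail * shm.f_frsize  # bytes currently available

ram = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")

if shm_free < 0.3 * ram:
    print(f"/dev/shm has {shm_free / 1e9:.2f} GB free, below 30% of "
          f"{ram / 1e9:.2f} GB RAM; expect the object store to use /tmp "
          "(e.g. raise --shm-size for docker run).")
```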

cc @wuisawesome @rkooo567 @ericl

Labels: release-blocker, P0 (Issue that blocks the release)
