
Conversation

@robertnishihara
Collaborator

No description provided.

@robertnishihara robertnishihara force-pushed the subtree branch 4 times, most recently from 8c21042 to 845576c Compare October 26, 2016 01:21
@robertnishihara robertnishihara changed the title Switch hiredis to a git subtree. Fixes to make Travis pass. Oct 26, 2016
@pcmoritz pcmoritz merged this pull request into master Oct 26, 2016
@pcmoritz pcmoritz deleted the subtree branch October 26, 2016 05:30
richardliaw referenced this pull request in richardliaw/ray Oct 3, 2017
* checkpointing

* small changes

* signatures

* more sigs

* removed yaml renaming

* final
osalpekar pushed a commit to osalpekar/ray that referenced this pull request Mar 16, 2018
11rohans pushed a commit to 11rohans/ray that referenced this pull request Apr 2, 2018
tulip3659 added a commit to tulip3659/ray that referenced this pull request Jan 3, 2019
Signed-off-by: tulip3659 <[email protected]>
romilbhardwaj added a commit to romilbhardwaj/ray that referenced this pull request Apr 8, 2019
This is the 1st commit message:

Rounding error fixes, removing cpu addition in cython and test fixes.

This is the commit message ray-project#2:

Plumbing and support for dynamic custom resources.

This is the commit message ray-project#3:

update cluster utils to use EntryType.

This is the commit message ray-project#4:

Update node_manager to use new zero resource == deletion semantics.
eisber pushed a commit to eisber/ray that referenced this pull request Feb 27, 2020
rkooo567 referenced this pull request in rkooo567/ray Mar 24, 2020
[Hosted Dashboard] End to end flow
bentzinir pushed a commit to bentzinir/ray that referenced this pull request Jul 29, 2020
elliottower pushed a commit to elliottower/ray that referenced this pull request Apr 22, 2023
pcmoritz pushed a commit that referenced this pull request Apr 22, 2023
Why are these changes needed?

Right now the theory is as follows.

The pubsub io service is created and run inside the GcsServer. That means if the pubsub io service is accessed after the GcsServer has been destructed, it will segfault.
Currently, upon teardown, when we call rpc::DrainAndResetExecutor, this recreates the Executor thread pool.
Upon teardown, if DrainAndResetExecutor runs -> the GcsServer's internal pubsub posts a new SendReply to the newly created thread pool -> GcsServer.reset -> the pubsub io service is destructed -> SendReply is invoked from the newly created thread pool, it will segfault.
NOTE: if you see this failure, the segfault is coming from the pubsub service.

#2 0x7f92034d9129 in ray::rpc::ServerCallImpl<ray::rpc::InternalPubSubGcsServiceHandler, ray::rpc::GcsSubscriberPollRequest, ray::rpc::GcsSubscriberPollReply>::HandleRequestImpl()::'lambda'(ray::Status, std::__1::function<void ()>, std::__1::function<void ()>)::operator()(ray::Status, std::__1::function<void ()>, std::__1::function<void ()>) const::'lambda'()::operator()() const /proc/self/cwd/bazel-out/k8-opt/bin/_virtual_includes/grpc_common_lib/ray/rpc/server_call.h:212:48
As a fix, I only drain the thread pool, and reset it only after all operations are fully cleaned up (the reset is needed only in tests). I think there's no need to reset for regular process termination of raylet, GCS, and core workers.

Related issue number

Closes #34344

Signed-off-by: SangBin Cho <[email protected]>
ProjectsByJackHe pushed a commit to ProjectsByJackHe/ray that referenced this pull request May 4, 2023
Signed-off-by: Jack He <[email protected]>
@bveeramani bveeramani mentioned this pull request Jul 25, 2023
khluu added a commit to khluu/ray that referenced this pull request Jan 24, 2024
angelinalg added a commit that referenced this pull request Nov 5, 2024
missing comma related to #48564
Had to try again because @can-anyscale suspects a rare ReadTheDocs failure and there's no easy way to kick off a rebuild from the browser.

Signed-off-by: angelinalg <[email protected]>
angelinalg added a commit that referenced this pull request Nov 6, 2024
JP-sDEV pushed a commit to JP-sDEV/ray that referenced this pull request Nov 14, 2024
mohitjain2504 pushed a commit to mohitjain2504/ray that referenced this pull request Nov 15, 2024
Signed-off-by: mohitjain2504 <[email protected]>
cszhu added a commit that referenced this pull request Oct 7, 2025
Signed-off-by: Christina Zhu <[email protected]>