
Conversation

notoriaga
Contributor

@notoriaga notoriaga commented Nov 9, 2021

The initial plan was to persist the Tabs across reconnections. That ended up being really hard to do and I burned a bunch of time on it, but I found an easier way to solve the problem. I kept a few of the changes I made while trying to get the Tabs to stick around between reconnections, because I think they are still generally useful.

Currently the timer that tracks how long the log has been running will still increment during a disconnect (the size obviously won't, because no messages arrive). I assume this is OK, but I could probably figure out a way around it if need be.

use indexmap::IndexMap;
use lazy_static::lazy_static;
use log::error;
use parking_lot::{MappedMutexGuard, Mutex, MutexGuard};
Contributor Author

A lot of the changes are switching from the std mutex to a parking_lot mutex. I needed to use MappedMutexGuard, and the std mutex has no equivalent.
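
For illustration, a minimal sketch of the trick (Inner and Shared here are stand-ins, not the crate's real types): parking_lot's MutexGuard::map locks the whole struct but hands back a guard scoped to a single field, which std::sync::Mutex has no way to express.

    use parking_lot::{MappedMutexGuard, Mutex, MutexGuard};

    // Hypothetical stand-ins for the real shared-state types.
    struct Inner {
        name: String,
        count: u64,
    }

    struct Shared(Mutex<Inner>);

    impl Shared {
        // Lock the whole Inner, but expose only the `name` field.
        fn name(&self) -> MappedMutexGuard<'_, String> {
            MutexGuard::map(self.0.lock(), |inner| &mut inner.name)
        }
    }

    fn main() {
        let shared = Shared(Mutex::new(Inner { name: "tcp".into(), count: 0 }));
        // The caller borrows just the field; the rest of Inner stays hidden.
        assert_eq!(&*shared.name(), "tcp");
    }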

Collaborator

Wow, MappedMutexGuard is a pretty cool idea and beyond perfect for what we are doing. We might want to eventually move all use of shared_state to this approach; it could improve performance, especially for things that touch shared_state on every new SBP message.

}
}

#[derive(Debug)]
Contributor Author

We already store the current connection in shared_state, so to get the name you just fetch it and call .name() on it.
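
Roughly this shape, as a hypothetical sketch (the Connection type and accessor here are illustrative, not the actual API):

    // The connection stored in shared state knows its own name,
    // so no separate name field needs to be kept in sync.
    struct Connection;

    impl Connection {
        fn name(&self) -> String {
            "tcp".into() // placeholder value
        }
    }

    struct SharedState {
        connection: Connection,
    }

    impl SharedState {
        fn connection_name(&self) -> String {
            // Fetch the stored connection and ask it for its name.
            self.connection.name()
        }
    }

    fn main() {
        let state = SharedState { connection: Connection };
        println!("{}", state.connection_name());
    }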

LoggingBarState {
sbp_logging: false,
sbp_logging_format: SbpLogging::SBP_JSON,
sbp_logger: None,
Contributor Author

This moves the logger into shared_state so it sticks around even when Tabs is dropped.
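
As a minimal sketch of the idea (SbpLogger is a stand-in for the real logger type): anything behind the Arc in shared_state outlives any particular Tabs.

    use std::sync::Arc;
    use parking_lot::Mutex;

    struct SbpLogger; // stand-in for the real logger

    #[derive(Clone)]
    struct SharedState(Arc<Mutex<Option<SbpLogger>>>);

    struct Tabs {
        shared_state: SharedState,
    }

    fn main() {
        let shared_state = SharedState(Arc::new(Mutex::new(Some(SbpLogger))));
        let tabs = Tabs { shared_state: shared_state.clone() };
        drop(tabs); // e.g. on disconnect
        // The logger is still alive inside shared_state and can keep
        // writing after a reconnect creates a fresh Tabs.
        assert!(shared_state.0.lock().is_some());
    }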


pub trait ClientSender: ClientSenderClone {
-    fn send_data(&mut self, msg_bytes: Vec<u8>);
+    fn send_data(&self, msg_bytes: Vec<u8>);
Contributor Author

The main ClientSender we use (the channel-based one) doesn't actually need a mutable receiver. Immutable receivers are a bit more ergonomic and allow us to avoid a few clones.
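
A sketch of why &self is enough for a channel-backed sender (ChannelSender is an illustrative name, not the real type): std's Sender::send itself only takes &self.

    use std::sync::mpsc::{channel, Sender};

    trait ClientSender {
        fn send_data(&self, msg_bytes: Vec<u8>);
    }

    // Illustrative channel-backed sender.
    struct ChannelSender {
        tx: Sender<Vec<u8>>,
    }

    impl ClientSender for ChannelSender {
        fn send_data(&self, msg_bytes: Vec<u8>) {
            // Sender::send takes &self, so no &mut borrow (or defensive
            // clone to get one) is needed.
            let _ = self.tx.send(msg_bytes);
        }
    }

    fn main() {
        let (tx, rx) = channel();
        let sender = ChannelSender { tx };
        sender.send_data(vec![0x55]); // 0x55 is the SBP preamble byte
        assert_eq!(rx.recv().unwrap(), vec![0x55]);
    }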

Collaborator

@john-michaelburke john-michaelburke Nov 10, 2021

In the future, I think it would be cool if we added storage to the ClientSender, similar to how the TestSender wraps a vec, and then had a function for just grabbing the last message sent. It could be a way to make more thorough integration tests. I created a ticket for it:
https://swift-nav.atlassian.net/browse/CPP-412
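
Perhaps something along these lines, as a sketch of the idea rather than the ticket's actual design:

    use std::sync::Arc;
    use parking_lot::Mutex;

    // A recording sender: every outgoing message is stored so a test
    // can inspect the last one sent.
    #[derive(Clone, Default)]
    struct RecordingSender {
        sent: Arc<Mutex<Vec<Vec<u8>>>>,
    }

    impl RecordingSender {
        fn send_data(&self, msg_bytes: Vec<u8>) {
            self.sent.lock().push(msg_bytes);
        }

        fn last_sent(&self) -> Option<Vec<u8>> {
            self.sent.lock().last().cloned()
        }
    }

    fn main() {
        let sender = RecordingSender::default();
        sender.send_data(vec![1, 2, 3]);
        assert_eq!(sender.last_sent(), Some(vec![1, 2, 3]));
    }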


@notoriaga
Contributor Author

> Wow, MappedMutexGuard is a pretty cool idea and beyond perfect for what we are doing. We might want to eventually move all use of shared_state to this approach; it could improve performance, especially for things that touch shared_state on every new SBP message.

Yeah, definitely super cool. I think a nice way to refactor everything would be to make all the fields of SharedStateInner private, then fix the compiler errors by adding methods to SharedState (sketched below). From what I can tell, most of the places that modify SharedState directly, without going through one of its methods, do so because of the lack of something like MappedMutexGuard. And I think it'd be nice from a maintainability standpoint to have all access to SharedState go through methods rather than direct field access.
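
A sketch of what that could look like, reusing the MappedMutexGuard trick (fields trimmed to one for illustration; not the actual code):

    use parking_lot::{MappedMutexGuard, Mutex, MutexGuard};
    use std::sync::Arc;

    pub struct LoggingBarState {
        pub sbp_logging: bool,
    }

    // Fields are private, so the only way in is through SharedState's
    // methods; no caller can bypass them.
    struct SharedStateInner {
        logging_bar: LoggingBarState,
    }

    #[derive(Clone)]
    pub struct SharedState(Arc<Mutex<SharedStateInner>>);

    impl SharedState {
        // Callers get a guard scoped to just the field they asked for.
        pub fn logging_bar(&self) -> MappedMutexGuard<'_, LoggingBarState> {
            MutexGuard::map(self.0.lock(), |inner| &mut inner.logging_bar)
        }
    }

    fn main() {
        let state = SharedState(Arc::new(Mutex::new(SharedStateInner {
            logging_bar: LoggingBarState { sbp_logging: false },
        })));
        state.logging_bar().sbp_logging = true; // mutate through the method
        assert!(state.logging_bar().sbp_logging);
    }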

Re: performance, I think there could be a pretty big/easy win by using finer-grained locks within SharedState. Currently we have Arc<Mutex<SharedStateInner>>, but we could drop the outer mutex and add locks to the interior fields. So something like

pub struct SharedState(Arc<SharedStateInner>);
struct SharedStateInner {
    logging_bar: Mutex<LoggingBarState>,
    log_panel: Mutex<LogPanelState>,
    tracking_tab: Mutex<TrackingTabState>,
    ...
}

That way, when multiple threads access different parts of SharedState they won't block each other. Another bonus is that fields like debug wouldn't even require locks; we could just use atomics (see the sketch below).
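
For the debug case, a lock-free sketch, assuming it is a simple boolean flag:

    use std::sync::atomic::{AtomicBool, Ordering};

    struct SharedStateInner {
        // A plain flag needs no mutex; any thread can read or write it.
        debug: AtomicBool,
    }

    impl SharedStateInner {
        fn set_debug(&self, on: bool) {
            self.debug.store(on, Ordering::Relaxed);
        }

        fn debug(&self) -> bool {
            self.debug.load(Ordering::Relaxed)
        }
    }

    fn main() {
        let state = SharedStateInner { debug: AtomicBool::new(false) };
        state.set_debug(true); // &self is enough: no lock, no &mut
        assert!(state.debug());
    }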

@john-michaelburke
Collaborator

> Re: performance, I think there could be a pretty big/easy win by using finer-grained locks within SharedState. Currently we have Arc<Mutex<SharedStateInner>>, but we could drop the outer mutex and add locks to the interior fields. So something like
>
>     pub struct SharedState(Arc<SharedStateInner>);
>     struct SharedStateInner {
>         logging_bar: Mutex<LoggingBarState>,
>         log_panel: Mutex<LogPanelState>,
>         tracking_tab: Mutex<TrackingTabState>,
>         ...
>     }
>
> That way, when multiple threads access different parts of SharedState they won't block each other. Another bonus is that fields like debug wouldn't even require locks; we could just use atomics.

Sounds good to me. I could definitely see a gain from that. I created a ticket for this suggestion: https://swift-nav.atlassian.net/browse/CPP-413

@notoriaga notoriaga merged commit 880d009 into main Nov 12, 2021
@notoriaga notoriaga deleted the steve/reconnect branch November 12, 2021 00:59