HDFS-17237. Remove IPCLoggerChannelMetrics when the logger is closed #6217
Description of PR
When an IPCLoggerChannel (the class used to read from and write to the Journal nodes) is created, it also creates a metrics object. When the namenodes fail over, the IPC loggers are all closed and reopened in read mode on the new SBNN, or the read-mode loggers are closed on the SBNN that is becoming active and reopened in write mode. Closing frees the resources and discards the original IPCLoggerChannel object, and the caller then creates a new one.
If a Journal node was down and was added back to the cluster with the same hostname but a different IP, then when the failover happens you end up with 4 metrics objects for the JNs.
The old stale metric will remain forever and will no longer be updated, leading to confusing results in any tools that use the metrics for monitoring.
This change ensures we unregister the metrics when the logger channel is closed, so that a fresh metrics object is created when the new channel is created.
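As a minimal sketch of that idea (not the actual patch; the class shape, names, and call site here are assumptions), the per-channel metrics source can be dropped from the metrics system when the channel is closed:

```java
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

/** Illustrative only: shows the unregister-on-close idea, not the real class. */
class LoggerChannelMetricsSketch {
  /** Name the source was registered under, derived from the JN's address. */
  private final String sourceName;

  LoggerChannelMetricsSketch(String sourceName) {
    this.sourceName = sourceName;
  }

  /** In this sketch, called from the logger channel's close() path. */
  void unregister() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    // Remove the source so that a channel re-created later (possibly pointing
    // at a new IP for the same hostname) registers a fresh source instead of
    // leaving the old one behind forever.
    ms.unregisterSource(sourceName);
  }
}
```

Registering the replacement source then happens as before, when the new IPCLoggerChannel constructs its metrics object.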
For info, the logger metrics are registered under a per-JN source name, and that name includes the IP rather than the hostname.
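For illustration only (the pattern is my recollection of how IPCLoggerChannelMetrics names its source, and the address and port are made up), such a source appears under a name of roughly the form `IPCLoggerChannel-<ip>-<port>`, e.g. `Hadoop:service=NameNode,name=IPCLoggerChannel-172.18.0.5-8485` in JMX.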
How was this patch tested?
I have added a small test to prove this, and I also reproduced the original issue on a Docker cluster and validated that it is resolved with this change in place.