#257: Add gauge for total fields mapped in an index #386
Conversation
@sysadmind @zwopir anyone willing to review this change? I think a lot of folks who use ES for logging will appreciate it: logging often causes field cardinality explosions, and being able to alert on those before they cause an outage is a big win. Thanks!
This addresses the feedback I had on the original PR.
collector/indices_mappings.go (Outdated)

	}

	var imr IndicesMappingsResponse
	if err := json.NewDecoder(res.Body).Decode(&imr); err != nil {
This is not a big deal, but it would be cleaner to use json.Unmarshal() instead. The decoder isn't guaranteed to read the whole body of the response and could leave the connection lingering open and unusable. That also means you can explicitly close the body after reading the whole thing to ensure that the connections get returned to the pool for reuse.
https://changelog.com/gotime/141#transcript-111
See the notes on the Body property of Response at https://pkg.go.dev/net/http?tab=doc#Response.
collector/indices_mappings.go (Outdated)

	err = res.Body.Close()
	if err != nil {
		_ = level.Warn(im.logger).Log(
			"msg", "failed to close http.Client",
This is the body, not the client.
Thanks for the feedback, @sysadmind!! Just pushed some updates.
Just tested this against one of our prod clusters, the new gauge
Thanks @sysadmind, appreciate your help! @zwopir, would you please merge?
Any motion on this PR? This is something we would really like to have as well.
@repl-mike-roest no PRs have been merged in 15 months; we've been running a build off this branch since September with good results. @zwopir, any chance you'd be willing to add others (maybe @sysadmind?) as admins on this project so that they can merge and release to keep it alive?
@jnadler I've been trying to build it locally as well, to test whether it will work with what I'm trying to pull from our clusters. However, I seem to be hitting a build failure when trying to build the Docker container. I tried updating the Dockerfile to use Go 1.13, 1.14, and 1.15 base images from quay.io/prometheus/golang-builder, and all of them fail with slightly different errors (the errors above are with the Dockerfile as is, so using latest, which seems to be 1.16). Any ideas?
try
It looks like something that is broken upstream with promu (prometheus/promu#218) if I update the Dockerfile back to
Strange, I got a different vendoring error until running. I'm on Go
With Docker 1.16.2, if I do I get an error: go: inconsistent vendoring in /Users/mike.roest/Workspaces/elasticsearch_exporter. Doing another go mod vendor and then make build again, all the package issues seem to be resolved, but I'm getting an actual compile failure. Then, following the suggestion in floyd871/prometheus_oracle_exporter#22, I now have it building with make build and make docker.
I've expanded on your changes here, @jnadler, to also add the total_fields limit setting. This allows for adding alerts on a total_fields count approaching the limit.
That's great, @repl-mike-roest! Hey @zwopir, totally understand that you're busy, but would you consider making someone else an admin? Or even turning this repo over to someone else who will review/merge PRs and do releases?
Hey @SuperQ ... I see that you've been merging things and even did a release recently; it's so good to see that someone is keeping this thing alive! Any chance of merging this?
This needs a rebase before it can be merged. After a rebase, I can take a look just to make sure everything still looks good and merge. Thank you! |
Equivalent functionality appears to have been merged in #411 |
This PR replaces #364 so that I can take over the work started by @msodi. The original PR feedback from @sysadmind has been addressed.