
Commit eca55bf

Add in opentelemetry tracing as a feature
This is an evolution of the previous tracing support I added a few years back; see CONTRIBUTING for details. The rs_tracing crate is functionally fine but very limited in features compared to the tracing crate, which provides the developer interface for OpenTelemetry instrumentation. As we start looking at performance again, I think this tooling will be useful. Included is support for tracing both rustup-init and individual tests.
1 parent e724717 commit eca55bf

20 files changed: +1410 −248 lines

CONTRIBUTING.md

Lines changed: 73 additions & 2 deletions
@@ -217,8 +217,8 @@ break our testing (like `RUSTUP_TOOLCHAIN`, `SHELL`, `ZDOTDIR`, `RUST_BACKTRACE`
But if you want to debug locally, you may need backtrace. `RUSTUP_BACKTRACE`
is used like `RUST_BACKTRACE` to enable backtraces of failed tests.

-**NOTE**: This is a backtrace for the test, not for any rustup process running
-in the test
+**NOTE**: This is a backtrace for the test, not for any subprocess invocation of
+rustup running in the test.

```bash
$ RUSTUP_BACKTRACE=1 cargo test --release --test cli-v1 -- remove_toolchain_then_add_again
@@ -273,3 +273,74 @@ test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 26 filtered out

error: test failed, to rerun pass '--test cli-v1'
```

## Tracing

The "otel" feature can be used when building rustup to turn on OpenTelemetry
tracing with an OTLP gRPC exporter. This requires `protoc` to be installed, which
can be downloaded from GitHub or installed via a package manager.
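
For instance (package names are an assumption and vary by platform), `protoc` can be installed with:

```sh
# Assumed package names; check your platform's repository
sudo apt install protobuf-compiler   # Debian/Ubuntu
brew install protobuf                # macOS (Homebrew)
```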

The normal [OTLP environment
variables](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md)
can be used to customise its behaviour, but often the simplest thing is to just
run a Jaeger Docker container on the same host:

```sh
docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -e COLLECTOR_OTLP_ENABLED=true -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 4317:4317 -p 4318:4318 -p 14250:14250 -p 14268:14268 -p 14269:14269 -p 9411:9411 jaegertracing/all-in-one:latest
```
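
If the collector is not on localhost, the standard `OTEL_EXPORTER_OTLP_ENDPOINT` variable can point the exporter elsewhere; the host name below is only a placeholder:

```sh
# Assumed example: send spans to a collector on another machine (placeholder host)
export OTEL_EXPORTER_OTLP_ENDPOINT=http://collector.example.internal:4317
```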

Then build rustup-init with tracing:

```sh
cargo build --features=otel
```

Run the operation you want to analyze (the `.exe` suffix applies on Windows; omit it elsewhere):

```sh
RUSTUP_FORCE_ARG0="rustup" ./target/debug/rustup-init.exe show
```

And [look in Jaeger for a trace](http://localhost:16686/search?service=rustup).

### Tracing and tests

The custom macro `rustup_macros::test` adds a prelude and suffix to each test to
ensure that a tracing context is set up, that the test function is a span, and
that the spans from the test are flushed. Build with `--features=otel` to use
this feature.
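
For example (the exact command shape is an assumption, combining the earlier test invocation with the feature flag), an individual test can be traced with:

```sh
cargo test --features=otel --release --test cli-v1 -- remove_toolchain_then_add_again
```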

### Adding instrumentation

The `otel` feature uses conditional compilation to only add function
instrumentation when enabled. Instrumenting a currently uninstrumented function
is usually as simple as adding:

```rust
#[cfg_attr(feature = "otel", tracing::instrument(skip_all))]
```
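
For illustration (a hypothetical function, not one from the rustup codebase), the attribute sits directly above the function definition and compiles to nothing unless the `otel` feature is enabled:

```rust
// Hypothetical example: instrument this function only when built with `otel`.
// `skip_all` keeps the span cheap by not recording any of the arguments.
#[cfg_attr(feature = "otel", tracing::instrument(skip_all))]
fn fetch_channel_manifest(url: &str) -> Vec<u8> {
    // ... existing function body stays unchanged ...
    let _ = url;
    Vec::new()
}
```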

`skip_all` is not required, but some core structs don't implement Debug yet, and
others have a lot of Debug output. Tracing adds some overhead, so keeping spans
lightweight helps avoid frequency bias in the results - where frequently called
functions with parameters that have large Debug output show up as much slower
than they really are.
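
If one argument is cheap to record and worth keeping, an alternative (an assumed style, using the tracing crate's `fields` syntax) is to skip everything and add back a single field:

```rust
// Hypothetical example: skip all arguments but record the toolchain name,
// which is cheap to format, so the span stays lightweight.
#[cfg_attr(
    feature = "otel",
    tracing::instrument(skip_all, fields(name = %name))
)]
fn remove_toolchain(name: &str) {
    // ... body elided ...
    let _ = name;
}
```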

Some good general heuristics:

- Do instrument slow blocking functions.
- Do instrument functions with many callers or that call many different things,
  as these tend to help figure out the puzzle of what-is-happening.
- Default to not instrumenting thin shim functions (or at least, only instrument
  them temporarily while figuring out the shape of a problem).
- Be wary of debug build timing - release optimisations make a huge difference,
  though debug is a lot faster to iterate on. If something isn't a problem in
  release, don't pay it too much heed in debug.

### Caveats

Cross-thread propagation isn't connected yet. Until that is fixed,
instrumentation in a thread will create a new root span. If any Tokio
runtime-related code gets added in those threads it will also cause a panic. We
have a couple of thread pools in use today; if you need to instrument within
that context, use a thunk to propagate the Tokio runtime into those threads.
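
A minimal sketch of that thunk approach (an assumption, not rustup's actual code): capture a `tokio::runtime::Handle` before spawning the thread and enter it on the worker so runtime-dependent code has a current runtime:

```rust
use tokio::runtime::Handle;

// Sketch only: pass the runtime handle into the worker thread and enter it
// there, so instrumentation that needs a Tokio runtime does not panic.
fn spawn_instrumented_worker(handle: Handle) -> std::thread::JoinHandle<()> {
    std::thread::spawn(move || {
        let _guard = handle.enter(); // makes the runtime current on this thread
        // ... instrumented work goes here ...
    })
}
```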
