🐛 [Bug] Outdated "Compiling with Torch-TensorRT in C++" Documentation #2947

@Borotalcohol


Bug Description

On the "Using Torch-TensorRT in C++" page, under the "Compiling with Torch-TensorRT in C++" section, the example uses the syntax of an older version of the library:

Example

#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"
...

mod.to(at::kCUDA);
mod.eval();

auto in = torch::randn({1, 1, 32, 32}, {torch::kCUDA});
auto trt_mod = torch_tensorrt::CompileGraph(mod, std::vector<torch_tensorrt::CompileSpec::InputRange>{{in.sizes()}});
auto out = trt_mod.forward({in});

torch_tensorrt no longer has a CompileGraph function, and torch_tensorrt::CompileSpec (now moved to torch_tensorrt::ts::CompileSpec) no longer has an InputRange class.

The same result should be achievable right now with something like:

#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"
...

mod.to(at::kCUDA);
mod.eval();

auto in = torch::randn({1, 1, 32, 32}, {torch::kCUDA});
std::vector<torch_tensorrt::Input> inputs;
inputs.push_back(torch_tensorrt::Input(in));
torch_tensorrt::ts::CompileSpec spec(inputs);
auto trt_mod = torch_tensorrt::ts::compile(mod, spec);
auto out = trt_mod.forward({in});

Note, however, that the syntax should also be updated in the subsequent code snippets on that page.
