Description
Hi, I've found that step 7 from here (also quoted below), which always worked for me when creating custom execution environments for models deployed via the Python backend, no longer does the trick with release 2.53.0 (container 24.12):

> If you encounter the "GLIBCXX_3.4.30 not found" error during runtime, we recommend upgrading your conda version and installing libstdcxx-ng=12 by running conda install -c conda-forge libstdcxx-ng=12 -y. If this solution does not resolve the issue, please feel free to open an issue on the [GitHub issue page](https://github.com/triton-inference-server/server/issues) following the provided [instructions](https://github.com/triton-inference-server/server#reporting-problems-asking-questions).
When I try to load a model with a custom execution environment, I now get: GLIBCXX_3.4.32 not found.
I'm interested in using container 24.12 because (if I'm not mistaken) it should allow me to use numpy>=2.
There is no problem if I install the packages directly inside the container, but that's not an option for me, since I want to deploy multiple Python models with different requirements.
Triton Information
I'm using the official 24.12-pyt-python-py3 container:
nvcr.io/nvidia/tritonserver:24.12-pyt-python-py3
To Reproduce
Minimal example:
1. execution environment
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b
rm Miniconda3-latest-Linux-x86_64.sh
miniconda3/bin/conda install conda-pack
export PYTHONNOUSERSITE=True
miniconda3/bin/conda create -n example python=3.10
miniconda3/envs/example/bin/pip install numpy==2.2.0
miniconda3/bin/conda install -n example -c conda-forge libstdcxx-ng=12 -y
miniconda3/bin/conda pack -p miniconda3/envs/example
(I've prepared the example.tar.gz file both on the host machine (Debian 12) and from inside the 24.12-pyt-python-py3 container; the results were the same.)
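As a sanity check (a diagnostic sketch, assuming the archive and paths from the step above), you can inspect which GLIBCXX versions the packed libstdc++ actually provides:

mkdir -p /tmp/example_env
tar -xzf example.tar.gz -C /tmp/example_env
strings /tmp/example_env/lib/libstdc++.so.6 | grep GLIBCXX_3.4.3
# with libstdcxx-ng=12 the highest version listed is GLIBCXX_3.4.30,
# while the 24.12 stub requires GLIBCXX_3.4.32 (see the error below)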
2. config.pbtxt
name: "example"
backend: "python"
input [
{
name: "INPUT0"
data_type: TYPE_STRING
dims: [ 1 ]
}
]
output [
{
name: "OUTPUT0"
data_type: TYPE_STRING
dims: [ 1 ]
}
]
parameters: {
key: "EXECUTION_ENV_PATH",
value: {string_value: "$$TRITON_MODEL_DIRECTORY/example.tar.gz"}
}
instance_group [
{
count: 1
kind: KIND_CPU
}
]
version_policy: {latest: {num_versions: 1}}
3. model.py
import json

import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args: dict) -> None:
        self.model_config = model_config = json.loads(args["model_config"])
        output0_config = pb_utils.get_output_config_by_name(model_config, "OUTPUT0")
        self.output0_dtype = pb_utils.triton_string_to_numpy(
            output0_config["data_type"]
        )
        self.model = np.array([1, 2, 3])

    def execute(self, requests: list) -> list:
        output0_dtype = self.output0_dtype
        responses = []
        for request in requests:
            # echo the string input back as the output
            in_0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            in_0_str = in_0.as_numpy()[0]
            out_np = np.array([in_0_str], dtype=object)
            out_tensor_0 = pb_utils.Tensor("OUTPUT0", out_np.astype(output0_dtype))
            inference_response = pb_utils.InferenceResponse(
                output_tensors=[out_tensor_0]
            )
            responses.append(inference_response)
        return responses
4. model repository layout
model_repository/
└── example
├── 1
│ └── model.py
├── config.pbtxt
└── example.tar.gz
5. starting the container
docker run --rm -d --shm-size=10g -p8000:8000 -p8001:8001 -p8002:8002 -v/home/user/model_repository:/models nvcr.io/nvidia/tritonserver:24.12-pyt-python-py3 tritonserver --model-repository=/models --model-control-mode=explicit --load-model=example
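For reference, once the model loads successfully, the example can be exercised over Triton's standard KServe HTTP API (a sketch; "hello" is an arbitrary test string):

curl -s localhost:8000/v2/health/ready
curl -s localhost:8000/v2/models/example/infer \
  -d '{"inputs":[{"name":"INPUT0","shape":[1],"datatype":"BYTES","data":["hello"]}]}'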
6. result
I0102 10:22:28.834798 1 model_lifecycle.cc:473] "loading: example:1"
I0102 10:22:28.852974 1 python_be.cc:1811] "Using Python execution env /models/example/example.tar.gz"
/opt/tritonserver/backends/python/triton_python_backend_stub: /tmp/python_env_KxJRSg/0/lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /opt/tritonserver/backends/python/triton_python_backend_stub)
Expected behavior
The model should load without problems.
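For what it's worth, GLIBCXX_3.4.32 is the symbol version introduced with GCC 13.2, so the libstdcxx-ng=12 pin recommended by the docs can no longer satisfy the 24.12 stub. A possible workaround (a sketch based on the example above, assuming conda-forge ships a new enough libstdcxx-ng build) would be to pin a newer libstdc++ and repack:

miniconda3/bin/conda install -n example -c conda-forge "libstdcxx-ng>=13.2" -y
# -f overwrites the previously packed archive
miniconda3/bin/conda pack -p miniconda3/envs/example -f

I can't tell whether updating the documented step is the intended fix, though.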