I'm encountering an error while converting the plnet1 ONNX model to a TensorRT engine on my Jetson Orin Nano (JetPack 6.2) environment using Docker. The Docker container is based on the image nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel (CUDA 12.2, TensorRT 8.6.2). The issue does not occur when running the model normally; the error shows up only when I attempt to build the engine separately with the plnet1 ONNX model.
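For context, the standalone build was launched through trtexec (hence the --explicitBatch warning below); the following TensorRT Python API sketch is roughly equivalent, with the ONNX/engine paths and the 2 GiB workspace limit being assumptions rather than my exact settings:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

def build_engine(onnx_path: str = "plnet_s1.onnx",
                 engine_path: str = "plnet_s1.engine") -> None:
    builder = trt.Builder(TRT_LOGGER)
    # Explicit batch is the only mode in TensorRT 8.x, which is why the
    # --explicitBatch flag is reported as deprecated.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parsing failed")

    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("engine build failed")  # this is where Error Code 10 appears
    with open(engine_path, "wb") as f:
        f.write(serialized)

if __name__ == "__main__":
    build_engine()
```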
During the engine build process, I observed the following warnings and errors:
Warnings:
The --explicitBatch flag is deprecated and has no effect.
A warning that the ONNX model contains INT64 weights (“Your ONNX model has been generated with INT64 weights… Attempting to cast down to INT32”), followed by messages that some values were clamped.
Critical Error:
value.h:682: DCHECK(new_vv.cast(myelinTypeUint64)) failed.
...
[E] Error[10]: Could not find any implementation for node {ForeignNode[/Cast_1.../Gather_69]}.
[E] Error[10]: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/Cast_1.../Gather_69]}).
[E] Engine could not be created from network
[E] Building engine failed
My Questions:
Root Cause:
Is the error caused by the INT64 weights in the ONNX model, which TensorRT does not natively support and therefore casts down to INT32, with that cast introducing problems during the conversion?
Or does this error indicate that certain operations in the ONNX model (such as those involving Cast or Gather) are not currently implemented in TensorRT 8.6.2?
Potential Solutions:
Should the ONNX export process be modified to use supported types (e.g., using INT32 instead of INT64) to avoid these issues?
Would a custom plugin implementation or modifications to the model be required to handle the problematic operations?
Are there any known workarounds or updates to TensorRT that address this error? (One candidate workaround is sketched below.)
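The candidate workaround I'd like to sanity-check: constant-fold and simplify the ONNX graph before the TensorRT build, since that often removes the INT64 shape-computation Cast/Gather chains that the Myelin backend rejects. A minimal sketch using onnx-simplifier (file names assumed; I haven't verified this fixes plnet1 specifically):

```python
import onnx
from onnxsim import simplify  # pip install onnxsim; availability on Jetson assumed

# Constant-fold and simplify the graph, then hand the result to trtexec.
model = onnx.load("plnet_s1.onnx")
simplified, ok = simplify(model)
if not ok:
    raise RuntimeError("onnx-simplifier could not validate the simplified model")
onnx.save(simplified, "plnet_s1_sim.onnx")
```

Polygraphy's `surgeon sanitize --fold-constants` should achieve a similar effect if onnx-simplifier is not available.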
Any insights or suggestions for resolving this issue would be greatly appreciated.
Thank you!
@xukuanHIT
On my Jetson Orin Nano, I get an error when building plnet_s1.engine from plnet_s1.onnx. I'm not using the provided Docker image but instead using the nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel image (CUDA 12.2, TensorRT 8.6.2). Since I don't have much experience with ONNX or TensorRT, I can't easily figure out the cause.
Do I need to re-export the ONNX from the provided PLNet Python code to better match the Jetson platform? Also, could you please share the file used to generate the ONNX?
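If a re-export is the right path, would something along these lines be reasonable? This is only a sketch: the opset, input shape handling, and tensor names are my guesses, since I don't have the original export script.

```python
import torch

def export_for_trt(model: torch.nn.Module,
                   dummy_input: torch.Tensor,
                   out_path: str = "plnet_s1.onnx") -> None:
    """Re-export sketch; opset, tensor names, and fixed shapes are assumptions."""
    model.eval()
    torch.onnx.export(
        model,
        dummy_input,
        out_path,
        opset_version=13,          # a conservative opset for TensorRT 8.6
        input_names=["image"],
        output_names=["output"],
        do_constant_folding=True,  # folds many INT64 shape constants at export time
        dynamic_axes=None,         # fixed shapes keep the graph simpler for TensorRT
    )
```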
I am getting:
10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/Equal.../backbone/point_detector/Unsqueeze_10]}.)
when building the engine from the plnet_s0.onnx file.
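To see which ops fall inside that failing range, a small inspection sketch (it assumes the two node names from the error appear verbatim, in topological order, in the graph):

```python
import onnx

# Print the ops between the two node names reported in the ForeignNode error.
model = onnx.load("plnet_s0.onnx")
names = [node.name for node in model.graph.node]
start = names.index("/Equal")
end = names.index("/backbone/point_detector/Unsqueeze_10")
for node in model.graph.node[start:end + 1]:
    print(f"{node.op_type:<12} {node.name}")
```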