Internal Error (Could not find any implementation for node {...Tensordot/Reshape}): TensorRT 8.6 fails to generate an INT8 engine on an RTX 3080 GPU #4291
Labels
- Engine Build: Issues with engine build
- quantization: Issues related to Quantization
- triaged: Issue has been triaged by maintainers
Hi,
I'm using TensorRT 8.6 to run INT8 calibration and generate a TensorRT engine with the Polygraphy tool like this:

polygraphy convert onnx_model/model.onnx --trt-min-shapes xs:[1,1120] xlen:[1] --trt-opt-shapes xs:[1,160000] xlen:[1] --trt-max-shapes xs:[1,480000] xlen:[1] --int8 --data-loader-script data_loader.py --calibration-cache trt86_minmax_calib.cache --calib-base-cls IInt8MinMaxCalibrator --output trt_model/trt86_minmax_int8.plan

but it always gives the following error:
The converted ONNX model is generated from a SavedModel, which was produced with the code below:
Then, the SavedModel is converted to an ONNX model with:
python3 -m tf2onnx.convert --opset 13 --saved-model ./saved_model/ --output ./onnx_model/model.onnx
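For context on the failing node name: as I understand it, TensorFlow implements tf.tensordot as a transpose/reshape followed by a matmul, so a node named {...Tensordot/Reshape} in the exported graph refers to that lowered subgraph rather than a single op. A small NumPy sketch of the equivalence (shapes chosen arbitrarily for illustration):

```python
import numpy as np

# tf.tensordot(a, b, axes=1) is (roughly) reshape + matmul + reshape,
# which is where graph nodes named ".../Tensordot/Reshape" come from.
a = np.arange(24.0).reshape(2, 3, 4)
b = np.arange(20.0).reshape(4, 5)

# Collapse the leading axes of `a`, multiply, then restore them.
manual = (a.reshape(-1, 4) @ b).reshape(2, 3, 5)

assert np.allclose(manual, np.tensordot(a, b, axes=1))
```

With dynamic input lengths (xs ranges from 1120 to 480000 here), such Reshape nodes end up with runtime-dependent shapes, which may be relevant to why TensorRT cannot find an INT8 implementation for them.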
What's wrong with it? By the way, the file data_loader.py is used for INT8 calibration; you can reproduce it like this:
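The data_loader.py contents are not shown above; a minimal sketch of what Polygraphy's --data-loader-script expects might look like the following. The input names (xs, xlen) are taken from the convert command; the dtypes, batch count, and random data are assumptions.

```python
# data_loader.py -- sketch of a Polygraphy calibration data loader.
# Polygraphy's --data-loader-script option expects this file to define a
# load_data() generator that yields feed dicts mapping input names to
# NumPy arrays, one dict per calibration batch.
import numpy as np

def load_data():
    rng = np.random.default_rng(0)
    for _ in range(4):  # number of calibration batches (arbitrary choice)
        # Pick a length within the min/max shapes given on the command line.
        n = int(rng.integers(1120, 480001))
        yield {
            "xs": rng.standard_normal((1, n)).astype(np.float32),
            "xlen": np.array([n], dtype=np.int32),  # dtype is an assumption
        }
```

In practice the random data should be replaced with representative samples, since calibration quality depends on the input distribution.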
Can anyone give some help? Thanks a lot!