
INT8 Quantization of dinov2 TensorRT Model is Not Faster than FP16 Quantization #6543

Triggered via issue comment, December 11, 2024 00:44
@lix19937 commented on #4273 (17003e4)
Status: Skipped
Total duration: 5s
Artifacts: none

blossom-ci.yml

on: issue_comment
Authorization: 0s
Upload log: 0s
Vulnerability scan: 0s
Start ci job: 0s