Actions: NVIDIA/TensorRT

Showing runs from all workflows (4,954 workflow runs)

INT8EntropyCalibrator2 implicit quantization superseded by explicit quantization
Blossom-CI #6557: Issue comment #4095 (comment) created by moraxu
December 13, 2024 19:52 5s
Polygraphy GPU memory leak when processing a large enough number of images
Blossom-CI #6556: Issue comment #3791 (comment) created by ludekcizinsky
December 13, 2024 12:00 5s
incompatible types Int64 and Int32
Blossom-CI #6555: Issue comment #4268 (comment) created by antithing
December 13, 2024 07:18 4s
incompatible types Int64 and Int32
Blossom-CI #6554: Issue comment #4268 (comment) created by LeoZDong
December 13, 2024 00:14 5s
[Feature request] allow uint8 output without an ICastLayer before
Blossom-CI #6553: Issue comment #4282 (comment) created by QMassoz
December 12, 2024 14:47 5s
[Feature request] allow uint8 output without an ICastLayer before
Blossom-CI #6552: Issue comment #4278 (comment) created by QMassoz
December 12, 2024 14:43 4s
Blossom-CI
Blossom-CI #6551: created by QMassoz
December 12, 2024 14:43 5s
TopK 3840 limitation and future plans for this operator
Blossom-CI #6549: Issue comment #4244 (comment) created by amadeuszsz
December 12, 2024 11:26 5s
Polygraphy GPU memory leak when processing a large enough number of images
Blossom-CI #6548: Issue comment #3791 (comment) created by michaeldeyzel
December 11, 2024 09:17 6s
How to make 4bit pytorch_quantization model export to .engine model?
Blossom-CI #6547: Issue comment #4262 (comment) created by StarryAzure
December 11, 2024 07:20 6s
converting to TensorRT barely increases performance
Blossom-CI #6546: Issue comment #3646 (comment) created by watertianyi
December 11, 2024 07:14 4s
TensorRT8.6.1.6 Inference cost too much time
Blossom-CI #6545: Issue comment #3993 (comment) created by watertianyi
December 11, 2024 06:16 5s
TensorRT8.6.1.6 Inference cost too much time
Blossom-CI #6544: Issue comment #3993 (comment) created by xxHn-pro
December 11, 2024 04:00 4s
INT8 Quantization of dinov2 TensorRT Model is Not Faster than FP16 Quantization
Blossom-CI #6543: Issue comment #4273 (comment) created by lix19937
December 11, 2024 00:44 5s
Is there a plan to support more recent PTQ methods for INT8 ViT?
Blossom-CI #6542: Issue comment #4276 (comment) created by lix19937
December 11, 2024 00:41 5s
Disable/Enable graph level optimizations
Blossom-CI #6541: Issue comment #4275 (comment) created by lix19937
December 11, 2024 00:40 4s
Plugin inference and loading from onnx
Blossom-CI #6538: Issue comment #4266 (comment) created by idantene
December 10, 2024 09:19 5s
TensorRT8.6.1.6 Inference cost too much time
Blossom-CI #6537: Issue comment #3993 (comment) created by watertianyi
December 10, 2024 08:24 5s
Plugin inference and loading from onnx
Blossom-CI #6534: Issue comment #4266 (comment) created by venkywonka
December 10, 2024 01:34 6s