ERROR: Failed to create network using custom network creation function ERROR: Failed to get cuda engine from custom library API #24

Open
jingruhou opened this issue Apr 13, 2022 · 1 comment

jingruhou commented Apr 13, 2022

```
Opening in BLOCKING MODE
ERROR: [TRT]: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 7.2 got compute 5.3, please rebuild.
ERROR: [TRT]: engine.cpp (1546) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/maskcam_1.0/yolo/maskcam_y4t_1024_608_fp16.trt
0:00:03.391191797 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/maskcam_1.0/yolo/maskcam_y4t_1024_608_fp16.trt failed
0:00:03.391251448 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/maskcam_1.0/yolo/maskcam_y4t_1024_608_fp16.trt failed, try rebuild
0:00:03.391282234 89 0x1cf0fd00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
Yolo type is not defined from config file name:
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.391650410 89 0x1cf0fd00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:03.391677900 89 0x1cf0fd00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:03.391722606 89 0x1cf0fd00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:03.392212548 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.392239590 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: maskcam_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
ERROR inference | gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): prints.py:42
gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: maskcam_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
INFO inference | prints.py:48
TROUBLESHOOTING HELP

If the error is like: v4l-camera-source / reason not-negotiated
Solution: configure camera capabilities
Run the script under utils/gst_capabilities.sh and find the lines with type
video/x-raw ...
Find a suitable framerate=X/1 (with X being an integer like 24, 15, etc.)
Then edit config_maskcam.txt and change the line:
camera-framerate=X
Or configure using --env MASKCAM_CAMERA_FRAMERATE=X (see README)

If the error is like:
/usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Solution: preload the offending library
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1

END HELP

INFO inference | Inference main loop ending. prints.py:48
INFO inference | Output file saved: output_Robot.mp4 prints.py:48
INFO maskcam-run | Sending interrupt to streaming process prints.py:48
INFO maskcam-run | Waiting for process streaming to terminate... prints.py:48
INFO streaming | Ending streaming prints.py:48
INFO maskcam-run | Process terminated: streaming prints.py:48
```
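For reference, here is a minimal diagnostic sketch (assuming Python 3 with the `tensorrt` and `pycuda` packages available on the Jetson) that prints the local GPU's compute capability and tries to deserialize the same engine file outside of DeepStream, to confirm the "expecting compute 7.2 got compute 5.3" mismatch reported above (5.3 is the Jetson Nano, 7.2 is Xavier-class devices):

```python
# Diagnostic sketch: assumes tensorrt and pycuda are installed on the Jetson.
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401  (creates a CUDA context on device 0)

# Same engine path that DeepStream failed to load in the log above.
ENGINE_PATH = "/opt/maskcam_1.0/yolo/maskcam_y4t_1024_608_fp16.trt"

# Compute capability of the local device (e.g. 5.3 on Jetson Nano, 7.2 on Xavier).
major, minor = cuda.Device(0).compute_capability()
print(f"Local GPU compute capability: {major}.{minor}")

# Attempt to deserialize the plan file; a plan built for a different compute
# capability fails here with the same INVALID_CONFIG error as in the log.
logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    print("Engine deserialize failed: the .trt plan must be rebuilt on this device")
else:
    print("Engine deserialized OK")
```

If the printed capability differs from the one the plan was built for, the prebuilt .trt file cannot be reused and has to be regenerated on (or for) the target device.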

javalier commented Jun 1, 2022

I ran into problems like these as well. Have you solved them?
