make: *** [nvinfer_plugin] Error 2 #928

Closed
Sunhil opened this issue Nov 25, 2020 · 10 comments
Labels
Module:OSS Build (Issues building open source code), triaged (Issue has been triaged by maintainers)

Comments


Sunhil commented Nov 25, 2020

I hit an error when running make nvinfer_plugin -j$(nproc).

Environment:
CUDA:10.0
CUDNN:7.5
TensorRT:6.0.1

Below are my steps; the error occurs after running make nvinfer_plugin -j$(nproc).

TensorRT$ git submodule update --init --recursive
TensorRT$ export TRT_SOURCE=`pwd`
TensorRT$ cd $TRT_SOURCE
TensorRT$ mkdir -p build && cd build
TensorRT/build$ cmake .. -DGPU_ARCHS=61  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
Building for TensorRT version: 6.0.1.0, library version: 6.0.1
-- The CXX compiler identification is GNU 5.4.0
-- The CUDA compiler identification is NVIDIA 10.0.130
-- Check for working CXX compiler: /usr/bin/g++
-- Check for working CXX compiler: /usr/bin/g++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda-10.0/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda-10.0/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Targeting TRT Platform: x86_64
-- CUDA version set to 10.1
-- cuDNN version set to 7.5
-- Protobuf version set to 3.0.0
-- Looking for C++ include pthread.h
-- Looking for C++ include pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1") 
-- Checking for one of the modules 'zlib'
-- Found CUDA: /usr/local/cuda-10.0 (found suitable version "10.1", minimum required is "10.1") 
-- Using libprotobuf /DATA/hhyang/tools/TensorRT/build/third_party.protobuf/lib/libprotobuf.a
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found nvinfer_LIB_PATH-NOTFOUND
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found nvparsers_LIB_PATH-NOTFOUND
-- ==========================================================================================
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /DATA/hhyang/tools/TensorRT/build/parsers/caffe
-- The C compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/gcc
-- Check for working C compiler: /usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Build type not set - defaulting to Release
-- 
-- ******** Summary ********
--   CMake version         : 3.13.5
--   CMake command         : /home/hhyang/cmake/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/g++
--   C++ compiler version  : 5.4.0
--   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wno-unused-but-set-variable -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : _PROTOBUF_INSTALL_DIR=/DATA/hhyang/tools/TensorRT/build;ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/lib/aarch64-linux-gnu/..
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.3.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
-- 
--   Protobuf compiler     : 
--   Protobuf includes     : 
--   Protobuf libraries    : 
--   BUILD_ONNX_PYTHON     : OFF
-- GPU_ARCH defined as 61. Generating CUDA code for SM 61
-- Found CUDNN: /usr/local/cuda-10.0/include  
-- Found TensorRT headers at /DATA/hhyang/tools/TensorRT/include
-- Find TensorRT libs at /DATA/hhyang/tools/TensorRT-6.0.1.5/lib/libnvinfer.so;/DATA/hhyang/tools/TensorRT-6.0.1.5/lib/libnvinfer_plugin.so
-- Found TENSORRT: /DATA/hhyang/tools/TensorRT/include  
-- Adding new sample: sample_char_rnn
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_dynamic_reshape
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_fasterRCNN
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_googlenet
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_int8
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_int8_api
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mlp
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist_api
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens_mps
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_nmt
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_plugin
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_reformat_free_io
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_ssd
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_fasterRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_maskRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_mnist
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_plugin_v2_ext
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_ssd
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: trtexec
--     - Parsers Used: caffe;uff;onnx
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Configuring done
-- Generating done
-- Build files have been written to: /DATA/hhyang/tools/TensorRT/build

TensorRT/build$ make nvinfer_plugin -j$(nproc)
Scanning dependencies of target nvinfer_plugin
[  0%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/nmsPlugin/nmsPlugin.cpp.o
[  0%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/normalizePlugin/normalizePlugin.cpp.o
[  0%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/priorBoxPlugin/priorBoxPlugin.cpp.o
[  7%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/reorgPlugin/reorgPlugin.cpp.o
[  7%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/gridAnchorPlugin/gridAnchorPlugin.cpp.o
[  7%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/regionPlugin/regionPlugin.cpp.o
[ 14%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/nvFasterRCNN/nvFasterRCNNPlugin.cpp.o
[ 14%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSInference.cpp.o
[ 14%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSPlugin.cpp.o
[ 14%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o
[ 21%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/flattenConcat/flattenConcat.cpp.o
[ 21%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/cropAndResizePlugin/cropAndResizePlugin.cpp.o
[ 21%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/proposalPlugin/proposalPlugin.cpp.o
[ 28%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/batchTilePlugin/batchTilePlugin.cpp.o
[ 28%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/detectionLayerPlugin/detectionLayerPlugin.cpp.o
[ 28%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/proposalLayerPlugin/proposalLayerPlugin.cpp.o
[ 28%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/pyramidROIAlignPlugin/pyramidROIAlignPlugin.cpp.o
[ 35%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/resizeNearestPlugin/resizeNearestPlugin.cpp.o
[ 35%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/specialSlicePlugin/specialSlicePlugin.cpp.o
[ 35%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/instanceNormalizationPlugin/instanceNormalizationPlugin.cpp.o
[ 42%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/allClassNMS.cu.o
[ 42%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/bboxDeltas2Proposals.cu.o
[ 42%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/common.cu.o
[ 42%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/cropAndResizeKernel.cu.o
[ 50%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/decodeBBoxes.cu.o
[ 50%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/detectionForward.cu.o
[ 50%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/extractFgScores.cu.o
[ 57%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gatherTopDetections.cu.o
[ 57%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/generateAnchors.cu.o
[ 57%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gridAnchorLayer.cu.o
[ 57%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/kernel.cpp.o
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/maskRCNNKernels.cu.o
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/nmsLayer.cu.o
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/normalizeLayer.cu.o
[ 71%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/permuteData.cu.o
[ 71%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/priorBoxLayer.cu.o
[ 71%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalKernel.cu.o
[ 71%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalsForward.cu.o
/DATA/hhyang/tools/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced

[ 78%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/regionForward.cu.o
[ 78%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/reorgForward.cu.o
[ 78%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/roiPooling.cu.o
[ 85%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/rproiInferenceFused.cu.o
[ 85%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerClass.cu.o
[ 85%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerImage.cu.o
[ 85%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/cudaDriverWrapper.cu.o
[ 92%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/nmsHelper.cpp.o
make[3]: *** No rule to make target 'nvinfer_LIB_PATH-NOTFOUND', needed by 'plugin/CMakeFiles/nvinfer_plugin.dir/cmake_device_link.o'.  Stop.
make[3]: *** Waiting for unfinished jobs....
[ 92%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/InferPlugin.cpp.o
CMakeFiles/Makefile2:283: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/all' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2
CMakeFiles/Makefile2:295: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/rule' failed
make[1]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/rule] Error 2
Makefile:238: recipe for target 'nvinfer_plugin' failed
make: *** [nvinfer_plugin] Error 2

I noticed that I hadn't set the CUDA version, so I then ran the commands below.

TensorRT/build$ cmake .. -DGPU_ARCHS=61  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out -DCUDA_VERSION=10.0
Building for TensorRT version: 6.0.1.0, library version: 6.0.1
-- Targeting TRT Platform: x86_64
-- CUDA version set to 10.0
-- cuDNN version set to 7.5
-- Protobuf version set to 3.0.0
-- Found CUDA: /usr/local/cuda-10.0 (found suitable version "10.0", minimum required is "10.0") 
-- Using libprotobuf /DATA/hhyang/tools/TensorRT/build/third_party.protobuf/lib/libprotobuf.a
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found nvinfer_LIB_PATH-NOTFOUND
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found nvparsers_LIB_PATH-NOTFOUND
-- ==========================================================================================
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /DATA/hhyang/tools/TensorRT/build/parsers/caffe
-- 
-- ******** Summary ********
--   CMake version         : 3.13.5
--   CMake command         : /home/hhyang/cmake/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/g++
--   C++ compiler version  : 5.4.0
--   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wno-unused-but-set-variable -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : _PROTOBUF_INSTALL_DIR=/DATA/hhyang/tools/TensorRT/build;ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/lib/aarch64-linux-gnu/..
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.3.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
-- 
--   Protobuf compiler     : 
--   Protobuf includes     : 
--   Protobuf libraries    : 
--   BUILD_ONNX_PYTHON     : OFF
-- GPU_ARCH defined as 61. Generating CUDA code for SM 61
-- Found TensorRT headers at /DATA/hhyang/tools/TensorRT/include
-- Find TensorRT libs at /DATA/hhyang/tools/TensorRT-6.0.1.5/lib/libnvinfer.so;/DATA/hhyang/tools/TensorRT-6.0.1.5/lib/libnvinfer_plugin.so
-- Adding new sample: sample_char_rnn
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_dynamic_reshape
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_fasterRCNN
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_googlenet
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_int8
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_int8_api
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mlp
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist_api
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens_mps
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_nmt
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_plugin
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_reformat_free_io
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_ssd
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_fasterRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_maskRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_mnist
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_plugin_v2_ext
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_ssd
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: trtexec
--     - Parsers Used: caffe;uff;onnx
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Configuring done
-- Generating done
-- Build files have been written to: /DATA/hhyang/tools/TensorRT/build

TensorRT/build$ make nvinfer_plugin -j$(nproc)
make[3]: *** No rule to make target 'nvinfer_LIB_PATH-NOTFOUND', needed by 'plugin/CMakeFiles/nvinfer_plugin.dir/cmake_device_link.o'.  Stop.
CMakeFiles/Makefile2:283: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/all' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2
CMakeFiles/Makefile2:295: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/rule' failed
make[1]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/rule] Error 2
Makefile:238: recipe for target 'nvinfer_plugin' failed
make: *** [nvinfer_plugin] Error 2

However, the nvinfer_plugin error appeared again.
Any suggestions would be appreciated!


DCSong commented Dec 1, 2020

Same issue with CUDA 11.0 on Ubuntu 18.04 x86_64.


Sunhil commented Dec 3, 2020

  Same question with cudn11.0 on Ubuntu 18.04 x86_64.

Please tell me whether your problem has been solved. If it has, would you mind sharing the solution?


DCSong commented Dec 3, 2020

@Sunhil I installed TensorRT from the .deb package and hit the same nvinfer error. After I also reinstalled CUDA and cuDNN from .deb packages, it was solved.
The .deb packages, not the .run installers. Hope it works for you.

ttyio (Collaborator) commented Dec 10, 2020

Hello @Sunhil, the error is:

  -- Looking for library nvinfer
  -- Library that was found nvinfer_LIB_PATH-NOTFOUND

Could you change to -DTRT_LIB_DIR=/DATA/hhyang/tools/TensorRT-6.0.1.5/lib/ -DTRT_INC_DIR=/DATA/hhyang/tools/TensorRT-6.0.1.5/include in your cmake command line?
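Before rerunning cmake, it can help to confirm that the directory you pass to -DTRT_LIB_DIR actually contains the libraries. The sketch below is a helper of my own (not part of the TensorRT build) that checks a candidate release directory; a missing library here is exactly what surfaces as nvinfer_LIB_PATH-NOTFOUND at configure time.

```shell
# check_trt_libs DIR: verify that DIR/lib contains the TensorRT shared
# libraries the OSS build links against. Prints each missing library
# and returns nonzero if any is absent.
check_trt_libs() {
  dir="$1"; missing=0
  for lib in libnvinfer.so libnvinfer_plugin.so libnvparsers.so; do
    if [ ! -e "$dir/lib/$lib" ]; then
      echo "missing: $dir/lib/$lib"
      missing=1
    fi
  done
  return "$missing"
}

# Example (path taken from the logs above; substitute your own extraction dir):
# check_trt_libs /DATA/hhyang/tools/TensorRT-6.0.1.5 &&
#   cmake .. -DTRT_LIB_DIR=/DATA/hhyang/tools/TensorRT-6.0.1.5/lib \
#            -DTRT_INC_DIR=/DATA/hhyang/tools/TensorRT-6.0.1.5/include
```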

@ttyio ttyio added the Module:OSS Build and triaged labels Dec 14, 2020
ttyio (Collaborator) commented Apr 14, 2021

Closing since there has been no activity in this thread for more than 3 weeks. Please reopen if you still have questions, thanks!

@ttyio ttyio closed this as completed Apr 14, 2021
@andrewssobral

I had the same problem. The solution proposed by @ttyio worked for me:

export TRT_LIBPATH=~/Downloads/TensorRT-7.2.3.4
~/TensorRT/build$ cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH/lib/ -DTRT_INC_DIR=$TRT_LIBPATH/include -DTRT_OUT_DIR=`pwd`/out

instead of: $ cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out

@lchunleo
Copy link

lchunleo commented Jul 4, 2021

Can I check whether the build succeeded? Other than the generated libnvds_infercustomparser_tlt.so, am I supposed to see something for the Proposal plugin, etc.? What is the expected result? I am using Faster R-CNN and need the Proposal and CropAndResize plugins.

@FreemanGong

The problem is caused by your CMakeLists: CMake cannot find your TensorRT lib and include directories. You need to add their paths manually, as follows:
# Tell CMake where TensorRT lives (adjust the paths to your install):
set(CMAKE_PREFIX_PATH "/home/gong/tool/TensorRT/TensorRT-7.1.3.4/lib")
include_directories(/home/gong/tool/TensorRT/TensorRT-7.1.3.4/include/)
link_directories(/home/gong/tool/TensorRT/TensorRT-7.1.3.4/lib/)

# With the paths above set, these lookups resolve instead of
# returning *_LIB_PATH-NOTFOUND:
find_library(NVINFER NAMES nvinfer)
find_library(NVPARSERS NAMES nvparsers)
find_library(NVONNXPARSERS NAMES nvonnxparser)
find_library(NVONNXPARSER_STATIC NAMES nvonnxparser_static)


QMZ321 commented Mar 31, 2023

@andrewssobral thanks so much! That worked for me!

@tashrifbillah

@ttyio , this should be corrected in the README:

https://github.com./NVIDIA/TensorRT/tree/main?tab=readme-ov-file#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path

TRT_LIBPATH should really be set to TRT_LIBPATH=~/TensorRT-10.4.0.26/lib/
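Put together (a sketch only; the version and paths come from the comment above and the configure line quoted earlier in this thread, so adjust them to your own download), the working invocation would look like:

```shell
# TRT_LIBPATH must point at the lib/ subdirectory of the extracted
# GA release, not at its root (hypothetical path):
export TRT_LIBPATH=~/TensorRT-10.4.0.26/lib/
cd ~/TensorRT/build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
```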


8 participants