Preface
For the past two weeks I have been working on a project that uses vision to detect ring-shaped workpieces. The deep-learning training and the detection code themselves are not the hard part; the real challenge is running everything smoothly, without stuttering, on an Orange Pi 5 board. I took plenty of detours along the way and, after some trial and error, arrived at a workable solution, so I am recording the acceleration approach here.
Process Record
I will not describe model training; the notes start from the trained .pt model file. The Orange Pi 5 used in this project is built around the Rockchip RK3588, a domestic SoC with an integrated NPU tailored for embedded neural-network and edge-computing workloads. To use the RK3588's NPU for inference acceleration, the model must first be converted to the .rknn format; otherwise the NPU cannot be used.
First, there are three projects under the official GitHub that we need: rknn-toolkit, rknn-toolkit2, and rknpu2. It is best to save all three to your PC up front for the steps that follow.
Steps on Windows:
First create an rknn virtual environment with Anaconda using Python 3.6, then install the dependencies with pip. You can switch into the rknn-toolkit2-master\rknn-toolkit2-master\doc directory and run the command below to install everything in one go; if downloads are slow, switch to a domestic mirror.
pip install -r requirements_cp36-1.5.0.txt
Next, run export.py from the YOLOv5 project directory to convert the .pt model to an ONNX model. You can follow the changes to the post-processing part described on the official page, but only apply the post-processing changes; the rest does not apply to the Windows workflow. Then copy the .pt weight file into the same directory as export.py and run the following inside the virtual environment:
python export.py --weights best.pt --img 640 --batch 1 --include onnx
The export.py arguments differ between YOLOv5 versions; if the command errors out, open the file, check which arguments it accepts, and adjust the command above to match.
For example, my export.py had no --include argument, so I downloaded a fresh copy of the YOLOv5 source from the official repository and used the export.py in that directory, which does have the --include argument; after that the model exported without problems.
Once the ONNX model is exported, copy it together with the three source packages onto a USB drive for the next stage.
Steps on Linux:
Move to a Linux system (a virtual machine or dual boot both work). I used Ubuntu 20.04 with Anaconda installed (the system Python is 3.10), created a new Python 3.8 virtual environment, entered the doc directory of the rknn-toolkit2 source, and ran:
pip install -r requirements_cp38-1.4.0.txt -i https://mirror.baidu.com/pypi/simple
The exact command may differ slightly between versions, so use the file name that is actually in the doc folder, and be sure to add the mirror source, otherwise the installation will fail.
Once the environment is installed, go into the package folder and run:
pip install rknn_toolkit2-1.4.0_22dcfef4-cp38-cp38-linux_x86_64.whl
As long as the environment is set up correctly this should not fail. To verify, start python and run the following; if it raises no error, the environment is ready.
from rknn.api import RKNN
Next, go into the examples/onnx/yolov5 folder and open test.py. Change the model/image paths, the class names, and the target NPU platform; if you later see many overlapping boxes, you can also adjust the anchors and the post-processing parameters. (The reference screenshots in the original post came from a different example rather than this project, so adapt the values to your own model.)
Then, in the examples/onnx/yolov5 directory, run:
python test.py
After that, the program runs correctly and an .rknn file appears in the directory; this stage is done.
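For reference, the conversion portion of test.py boils down to the RKNN API calls sketched below. This is only an outline, not the full example script (which also runs a test inference and post-processing); the file names, mean/std values, and the dataset.txt calibration list are placeholders to replace with your own:
from rknn.api import RKNN

rknn = RKNN(verbose=True)
# preprocessing settings and target NPU platform (common YOLOv5 defaults)
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
# load the ONNX model produced by export.py
rknn.load_onnx(model='best.onnx')
# build with INT8 quantization; dataset.txt lists a few calibration images
rknn.build(do_quantization=True, dataset='./dataset.txt')
# write out the .rknn model that will run on the board
rknn.export_rknn('./best.rknn')
rknn.release()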
Steps on the development board:
In theory it should be possible to run Python inference directly on the board using the toolkit2 sources, but with so few reference examples I never got that working (a rough sketch of that Python route is given after the cd command below). Officially, toolkit and toolkit2 are meant to be used with the board attached to a PC for debugging; the code intended to run on the board itself is the rknpu2 source, and single-board support may well improve as development continues. Copy the rknpu2 source and the converted .rknn model onto the board and enter the yolov5 demo directory (adjust the path to your own setup):
cd /home/ptay/rknpu2-master/examples/rknn_yolov5_demo
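As an aside on the Python-on-board route mentioned above: the package intended for running inference directly on the board appears to be rknn-toolkit-lite2 (shipped alongside rknn-toolkit2), not the full toolkit. I did not get this path working in this project, so the sketch below is only a pointer, assuming the rknn_toolkit_lite2 wheel is installed on the board; the paths and input size are placeholders, and the YOLOv5 output decoding and NMS still have to be implemented separately:
import cv2
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()
# load the converted .rknn model (placeholder path)
rknn_lite.load_rknn('./best.rknn')
# initialize the on-board NPU runtime; core_mask picks which NPU core(s) to use
rknn_lite.init_runtime(core_mask=RKNNLite.NPU_CORE_0)
# prepare one RGB frame at the model input size (640x640 for this export)
img = cv2.imread('test.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640))
# run inference; the outputs are the raw YOLOv5 heads and still need decoding + NMS
outputs = rknn_lite.inference(inputs=[img])
rknn_lite.release()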
Modify the postprocess.h header in the include directory so that it matches your model; in this demo the key change is normally the class-count macro (OBJ_CLASS_NUM), which should equal the number of classes you trained.
Modify the coco_80_labels_list.txt file in the model directory, replacing its contents with your own classes, then save:
red_jeep
missile_vehicle
Put the converted .rknn file in the model/RK3588 directory, then build in the demo directory:
bash ./build-linux_RK3588.sh
After a successful build, an install directory is generated:
cd install/rknn_yolov5_demo_linux
Put the image you want to run inference on into the model directory, then run:
./rknn_yolov5_demo ./model/RK3588/best.rknn ./model/test.jpg
With that, inference with the .rknn model works, but only on single images. For real-time detection on a video stream you have to modify main.cc in the demo's src directory so that it uses OpenCV and a loop for continuous capture and display. Note that the board needs a working OpenCV C++ environment, and CMakeLists.txt has to be updated before the project will compile; use as new an OpenCV version as you can, since older versions run into many errors (for the detailed OpenCV build process, see the guide linked here). The modified main.cc and CMakeLists.txt are shown below. The main changes are the OpenCV-related parts; in addition, the loop contains a few lines that drive a relay through GPIO, which you can keep or remove as needed.
main.cc:
// Copyright (c) 2021 by Rockchip Electronics Co., Ltd. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*-------------------------------------------
Includes
-------------------------------------------*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <chrono>
#include <iostream>
#include <vector>

#define _BASETSD_H

#include "RgaUtils.h"
#include "im2d.h"
#include "rga.h"
#include "rknn_api.h"
#include "postprocess.h"
// OpenCV headers for color conversion, camera capture and on-screen display
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace std;
using namespace cv;
/*-------------------------------------------
Functions
-------------------------------------------*/
static void dump_tensor_attr(rknn_tensor_attr* attr)
{
std::string shape_str = attr->n_dims < 1 ? "" : std::to_string(attr->dims[0]);
for (int i = 1; i < attr->n_dims; ++i) {
shape_str += ", " + std::to_string(attr->dims[i]);
}
printf(" index=%d, name=%s, n_dims=%d, dims=[%s], n_elems=%d, size=%d, w_stride = %d, size_with_stride=%d, fmt=%s, "
"type=%s, qnt_type=%s, "
"zp=%d, scale=%f\n",
attr->index, attr->name, attr->n_dims, shape_str.c_str(), attr->n_elems, attr->size, attr->w_stride,
attr->size_with_stride, get_format_string(attr->fmt), get_type_string(attr->type),
get_qnt_type_string(attr->qnt_type), attr->zp, attr->scale);
}
double __get_us(struct timeval t) { return (t.tv_sec * 1000000 + t.tv_usec); }
static unsigned char* load_data(FILE* fp, size_t ofst, size_t sz)
{
unsigned char* data;
int ret;
data = NULL;
if (NULL == fp) {
return NULL;
}
ret = fseek(fp, ofst, SEEK_SET);
if (ret != 0) {
printf("blob seek failure.\n");
return NULL;
}
data = (unsigned char*)malloc(sz);
if (data == NULL) {
printf("buffer malloc failure.\n");
return NULL;
}
ret = fread(data, 1, sz, fp);
return data;
}
static unsigned char* load_model(const char* filename, int* model_size)
{
FILE* fp;
unsigned char* data;
fp = fopen(filename, "rb");
if (NULL == fp) {
printf("Open file %s failed.\n", filename);
return NULL;
}
fseek(fp, 0, SEEK_END);
int size = ftell(fp);
data = load_data(fp, 0, size);
fclose(fp);
*model_size = size;
return data;
}
static int saveFloat(const char* file_name, float* output, int element_size)
{
FILE* fp;
fp = fopen(file_name, "w");
for (int i = 0; i < element_size; i++) {
fprintf(fp, "%.6f\n", output[i]);
}
fclose(fp);
return 0;
}
/*-------------------------------------------
Main Functions
-------------------------------------------*/
int main(int argc, char** argv)
{
int status = 0;
char* model_name = NULL;
rknn_context ctx;
size_t actual_size = 0;
int img_width = 0;
int img_height = 0;
int img_channel = 0;
const float nms_threshold = NMS_THRESH;
const float box_conf_threshold = BOX_THRESH;
struct timeval start_time, stop_time;
int ret;
// init rga context
rga_buffer_t src;
rga_buffer_t dst;
im_rect src_rect;
im_rect dst_rect;
memset(&src_rect, 0, sizeof(src_rect));
memset(&dst_rect, 0, sizeof(dst_rect));
memset(&src, 0, sizeof(src));
memset(&dst, 0, sizeof(dst));
if (argc != 2) {
printf("Usage: %s <rknn model> \n", argv[0]);
return -1;
}
printf("post process config: box_conf_threshold = %.2f, nms_threshold = %.2f\n", box_conf_threshold, nms_threshold);
model_name = (char*)argv[1];
//~ char* image_name = argv[2];
/* Create the neural network */
printf("Loading mode...\n");
int model_data_size = 0;
unsigned char* model_data = load_model(model_name, &model_data_size);
ret = rknn_init(&ctx, model_data, model_data_size, 0, NULL);
if (ret < 0) {
printf("rknn_init error ret=%d\n", ret);
return -1;
}
rknn_sdk_version version;
ret = rknn_query(ctx, RKNN_QUERY_SDK_VERSION, &version, sizeof(rknn_sdk_version));
if (ret < 0) {
printf("rknn_init error ret=%d\n", ret);
return -1;
}
printf("sdk version: %s driver version: %s\n", version.api_version, version.drv_version);
rknn_input_output_num io_num;
ret = rknn_query(ctx, RKNN_QUERY_IN_OUT_NUM, &io_num, sizeof(io_num));
if (ret < 0) {
printf("rknn_init error ret=%d\n", ret);
return -1;
}
printf("model input num: %d, output num: %d\n", io_num.n_input, io_num.n_output);
rknn_tensor_attr input_attrs[io_num.n_input];
memset(input_attrs, 0, sizeof(input_attrs));
for (int i = 0; i < io_num.n_input; i++) {
input_attrs[i].index = i;
ret = rknn_query(ctx, RKNN_QUERY_INPUT_ATTR, &(input_attrs[i]), sizeof(rknn_tensor_attr));
if (ret < 0) {
printf("rknn_init error ret=%d\n", ret);
return -1;
}
dump_tensor_attr(&(input_attrs[i]));
}
rknn_tensor_attr output_attrs[io_num.n_output];
memset(output_attrs, 0, sizeof(output_attrs));
for (int i = 0; i < io_num.n_output; i++) {
output_attrs[i].index = i;
ret = rknn_query(ctx, RKNN_QUERY_OUTPUT_ATTR, &(output_attrs[i]), sizeof(rknn_tensor_attr));
dump_tensor_attr(&(output_attrs[i]));
}
int channel = 3;
int width = 0;
int height = 0;
if (input_attrs[0].fmt == RKNN_TENSOR_NCHW) {
printf("model is NCHW input fmt\n");
channel = input_attrs[0].dims[1];
height = input_attrs[0].dims[2];
width = input_attrs[0].dims[3];
} else {
printf("model is NHWC input fmt\n");
height = input_attrs[0].dims[1];
width = input_attrs[0].dims[2];
channel = input_attrs[0].dims[3];
}
printf("model input height=%d, width=%d, channel=%d\n", height, width, channel);
rknn_input inputs[1];
memset(inputs, 0, sizeof(inputs));
inputs[0].index = 0;
inputs[0].type = RKNN_TENSOR_UINT8;
inputs[0].size = width * height * channel;
inputs[0].fmt = RKNN_TENSOR_NHWC;
inputs[0].pass_through = 0;
//~ printf("Read %s ...\n", image_name);
cv::Mat orig_img;
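// open the default camera (index 0) and request 640x480 capture at 30 fps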
cv::VideoCapture cap(0);
cap.set(CAP_PROP_FRAME_WIDTH, 640);
cap.set(CAP_PROP_FRAME_HEIGHT, 480);
cap.set(CAP_PROP_FPS, 30);
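// configure GPIO pin 3 as an output via the gpio command-line tool; it is toggled further down to drive the relay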
system("sudo gpio mode 3 output");
system("orangepi");
while(1){
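// per-frame pipeline: grab a frame, convert BGR->RGB, resize with RGA to the model input size, run NPU inference, post-process, then draw and display the results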
//~ count start
auto start = std::chrono::high_resolution_clock::now();
cap >> orig_img;
//~ show end
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> elapsed = end - start;
std::cout << "total time:" << elapsed.count() << "second" << std::endl;
//~ cv::Mat orig_img = cv::imread(frame, 1);
if (!orig_img.data) {
printf("cv::imread %s fail!\n");
return -1;
}
cv::Mat img;
cv::cvtColor(orig_img, img, cv::COLOR_BGR2RGB);
img_width = img.cols;
img_height = img.rows;
printf("img width = %d, img height = %d\n", img_width, img_height);
// You may not need resize when src resulotion equals to dst resulotion
void* resize_buf = nullptr;
if (img_width != width || img_height != height) {
printf("resize with RGA!\n");
resize_buf = malloc(height * width * channel);
memset(resize_buf, 0x00, height * width * channel);
src = wrapbuffer_virtualaddr((void*)img.data, img_width, img_height, RK_FORMAT_RGB_888);
dst = wrapbuffer_virtualaddr((void*)resize_buf, width, height, RK_FORMAT_RGB_888);
ret = imcheck(src, dst, src_rect, dst_rect);
if (IM_STATUS_NOERROR != ret) {
printf("%d, check error! %s", __LINE__, imStrError((IM_STATUS)ret));
return -1;
}
IM_STATUS STATUS = imresize(src, dst);
// for debug
cv::Mat resize_img(cv::Size(width, height), CV_8UC3, resize_buf);
//~ imshow("resize_input.jpg", resize_img);
inputs[0].buf = resize_buf;
} else {
inputs[0].buf = (void*)img.data;
}
gettimeofday(&start_time, NULL);
rknn_inputs_set(ctx, io_num.n_input, inputs);
rknn_output outputs[io_num.n_output];
memset(outputs, 0, sizeof(outputs));
for (int i = 0; i < io_num.n_output; i++) {
outputs[i].want_float = 0;
}
ret = rknn_run(ctx, NULL);
ret = rknn_outputs_get(ctx, io_num.n_output, outputs, NULL);
gettimeofday(&stop_time, NULL);
printf("once run use %f ms\n", (__get_us(stop_time) - __get_us(start_time)) / 1000);
// post process
float scale_w = (float)width / img_width;
float scale_h = (float)height / img_height;
detect_result_group_t detect_result_group;
std::vector<float> out_scales;
std::vector<int32_t> out_zps;
for (int i = 0; i < io_num.n_output; ++i) {
out_scales.push_back(output_attrs[i].scale);
out_zps.push_back(output_attrs[i].zp);
}
post_process((int8_t*)outputs[0].buf, (int8_t*)outputs[1].buf, (int8_t*)outputs[2].buf, height, width,
box_conf_threshold, nms_threshold, scale_w, scale_h, out_zps, out_scales, &detect_result_group);
// Draw Objects
char text[256];
for (int i = 0; i < detect_result_group.count; i++) {
detect_result_t* det_result = &(detect_result_group.results[i]);
sprintf(text, "%s %.1f%%", det_result->name, det_result->prop * 100);
printf("%s @ (%d %d %d %d) %f\n", det_result->name, det_result->box.left, det_result->box.top,
det_result->box.right, det_result->box.bottom, det_result->prop);
char s = *det_result->name;
printf("%d here here!",s);
if(s == 48)
{
printf("detect");
system("sudo gpio write 3 0");
system("sudo gpio write 3 1");
}
if(s == 49)
{
system("sudo gpio write 3 1");
}
int x1 = det_result->box.left;
int y1 = det_result->box.top;
int x2 = det_result->box.right;
int y2 = det_result->box.bottom;
rectangle(orig_img, cv::Point(x1, y1), cv::Point(x2, y2), cv::Scalar(255, 0, 0, 255), 3);
putText(orig_img, text, cv::Point(x1, y1 + 12), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
}
imshow("jpg", orig_img);
waitKey(27);
// release the NPU output buffers and the RGA resize buffer each frame to avoid leaking memory across iterations
ret = rknn_outputs_release(ctx, io_num.n_output, outputs);
if (resize_buf) {
free(resize_buf);
}
}
}
//~ ret = rknn_outputs_release(ctx, io_num.n_output, outputs);
//~ // loop test
//~ int test_count = 10;
//~ gettimeofday(&start_time, NULL);
//~ for (int i = 0; i < test_count; ++i) {
//~ rknn_inputs_set(ctx, io_num.n_input, inputs);
//~ ret = rknn_run(ctx, NULL);
//~ ret = rknn_outputs_get(ctx, io_num.n_output, outputs, NULL);
//~ #if PERF_WITH_POST
//~ post_process((int8_t*)outputs[0].buf, (int8_t*)outputs[1].buf, (int8_t*)outputs[2].buf, height, width,
//~ box_conf_threshold, nms_threshold, scale_w, scale_h, out_zps, out_scales, &detect_result_group);
//~ #endif
//~ ret = rknn_outputs_release(ctx, io_num.n_output, outputs);
//~ }
//~ gettimeofday(&stop_time, NULL);
//~ printf("loop count = %d , average run %f ms\n", test_count,
//~ (__get_us(stop_time) - __get_us(start_time)) / 1000.0 / test_count);
//~ deinitPostProcess();
// release
//~ ret = rknn_destroy(ctx);
//~ if (model_data) {
//~ free(model_data);
//~ }
//~ if (resize_buf) {
//~ free(resize_buf);
//~ }
//~ return 0;
//~ }
CMakeLists.txt:
cmake_minimum_required(VERSION 3.4.1)
project(rknn_yolov5_demo)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--allow-shlib-undefined")
set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/install/rknn_yolov5_demo_${CMAKE_SYSTEM_NAME})
set(CMAKE_SKIP_INSTALL_RPATH FALSE)
set(CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
if (CMAKE_C_COMPILER MATCHES "aarch64")
set(LIB_ARCH aarch64)
else()
set(LIB_ARCH armhf)
endif()
include_directories(${CMAKE_SOURCE_DIR})
if(TARGET_SOC STREQUAL "rk356x")
set(RKNN_API_PATH ${CMAKE_SOURCE_DIR}/../../runtime/RK356X/${CMAKE_SYSTEM_NAME}/librknn_api)
elseif(TARGET_SOC STREQUAL "rk3588")
set(RKNN_API_PATH ${CMAKE_SOURCE_DIR}/../../runtime/RK3588/${CMAKE_SYSTEM_NAME}/librknn_api)
else()
message(FATAL_ERROR "TARGET_SOC is not set, ref value: rk356x or rk3588 or rv110x")
endif()
if (CMAKE_SYSTEM_NAME STREQUAL "Android")
set(RKNN_RT_LIB ${RKNN_API_PATH}/${CMAKE_ANDROID_ARCH_ABI}/librknnrt.so)
else()
set(RKNN_RT_LIB ${RKNN_API_PATH}/${LIB_ARCH}/librknnrt.so)
endif()
include_directories(${RKNN_API_PATH}/include)
include_directories(${CMAKE_SOURCE_DIR}/../3rdparty)
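# OpenCV is required for the camera capture and display added in main.cc; its libraries are linked into the demo target below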
find_package(OpenCV REQUIRED)
if(TARGET_SOC STREQUAL "rk356x")
set(RGA_PATH ${CMAKE_SOURCE_DIR}/../3rdparty/rga/RK356X)
elseif(TARGET_SOC STREQUAL "rk3588")
set(RGA_PATH ${CMAKE_SOURCE_DIR}/../3rdparty/rga/RK3588)
else()
message(FATAL_ERROR "TARGET_SOC is not set, ref value: rk356x or rk3588")
endif()
if (CMAKE_SYSTEM_NAME STREQUAL "Android")
set(RGA_LIB ${RGA_PATH}/lib/Android/${CMAKE_ANDROID_ARCH_ABI}/librga.so)
else()
set(RGA_LIB ${RGA_PATH}/lib/Linux//${LIB_ARCH}/librga.so)
endif()
include_directories( ${RGA_PATH}/include)
set(MPP_PATH ${CMAKE_CURRENT_SOURCE_DIR}/../3rdparty/mpp)
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(MPP_LIBS ${MPP_PATH}/${CMAKE_SYSTEM_NAME}/${LIB_ARCH}/librockchip_mpp.so)
elseif (CMAKE_SYSTEM_NAME STREQUAL "Android")
set(MPP_LIBS ${MPP_PATH}/${CMAKE_SYSTEM_NAME}/${CMAKE_ANDROID_ARCH_ABI}/libmpp.so)
endif()
include_directories(${MPP_PATH}/include)
set(ZLMEDIAKIT_PATH ${CMAKE_SOURCE_DIR}/../3rdparty/zlmediakit)
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
include_directories(${ZLMEDIAKIT_PATH}/include)
set(ZLMEDIAKIT_LIBS ${ZLMEDIAKIT_PATH}/${LIB_ARCH}/libmk_api.so)
endif()
if (ZLMEDIAKIT_LIBS)
add_definitions(-DBUILD_VIDEO_RTSP)
endif()
set(CMAKE_INSTALL_RPATH "lib")
# rknn_yolov5_demo
include_directories( ${CMAKE_SOURCE_DIR}/include)
add_executable(rknn_yolov5_demo
src/main.cpp
src/postprocess.cpp
)
target_link_libraries(rknn_yolov5_demo
${RKNN_RT_LIB}
${RGA_LIB}
)
target_link_libraries(rknn_yolov5_demo ${OpenCV_LIBS})
if (MPP_LIBS)
add_executable(rknn_yolov5_video_demo
src/main_video.cpp
src/postprocess.cpp
utils/mpp_decoder.cpp
utils/mpp_encoder.cpp
utils/drawing.cpp
)
target_link_libraries(rknn_yolov5_video_demo
${RKNN_RT_LIB}
${RGA_LIB}
${OpenCV_LIBS}
${MPP_LIBS}
${ZLMEDIAKIT_LIBS}
)
endif()
set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/install/rknn_yolov5_demo_${CMAKE_SYSTEM_NAME})
install(TARGETS rknn_yolov5_demo DESTINATION ./)
install(PROGRAMS ${RKNN_RT_LIB} DESTINATION lib)
install(PROGRAMS ${RGA_LIB} DESTINATION lib)
install(DIRECTORY model DESTINATION ./)
if (MPP_LIBS)
install(TARGETS rknn_yolov5_video_demo DESTINATION ./)
install(PROGRAMS ${MPP_LIBS} DESTINATION lib)
endif()
if (ZLMEDIAKIT_LIBS)
install(PROGRAMS ${ZLMEDIAKIT_LIBS} DESTINATION lib)
endif()
Summary
I took quite a few detours over the course of this project and went through just about every tutorial I could find. The most important lesson is to follow the official documentation: the various tutorials target different systems and different source versions, which easily leads to all kinds of errors. So start from the official docs and work out your own path from there.