
LightGlue model inference crashes on Android #10509

Open
Jverson opened this issue Apr 30, 2024 · 1 comment
Jverson commented Apr 30, 2024

  • Title: trying to run inference on the open-source LightGlue model with Paddle-Lite; it crashes as soon as Run() is called

  • Version / inference library info:
       1) Paddle Lite version: v2.13rc
       2) Host environment:
       3) Target device: Unisoc T760 (UMS9620W) / Android
       4) Inference backend: CPU/GPU

  • Inference info:
       1) Inference API: C++
       2) Inference options: arm64-v8a, 4 threads
       3) Library source: downloaded from the official site

  • Reproduction info: for an error report, please provide a repro demo stripped of business logic so the issue can be reproduced
    Open-source model: https://github.com/fabio-sim/LightGlue-ONNX.git
    Model conversion path: pytorch-->onnx-->nb
    x2paddle --framework=onnx
    --model=lightglue_1.onnx
    --save_dir=lightglue_1_paddlelite
    --to_lite=True
    --lite_valid_places=arm
    --lite_model_type=naive_buffer

  • Problem description: please describe your problem in detail and include the error message, logs, and key code snippets
    Code:

          // -------------------------step 1: create predictor------------------------------ //
          paddle::lite_api::MobileConfig mb_config;
          mb_config.set_model_from_file(config->model_path);
          mb_config.set_threads(config->num_threads);
          //mb_config.set_power_mode(paddle::lite_api::LITE_POWER_HIGH);
          predictor = CreatePaddlePredictor<paddle::lite_api::MobileConfig>(mb_config);
          ...
          // -------------------------step 2: copy input data------------------------------ //
          // GetInput() already returns a prvalue, so std::move is redundant here
          std::unique_ptr<paddle::lite_api::Tensor> input_kpts0_tensor = predictor->GetInput(0);
          input_kpts0_tensor->Resize({1, keypoints0.shape[2], 2});
          auto* input_kpts0_data = input_kpts0_tensor->mutable_data<float>();
          memcpy(input_kpts0_data, keypoints0.data, keypoints0.length * sizeof(float));

          std::unique_ptr<paddle::lite_api::Tensor> input_kpts1_tensor = predictor->GetInput(1);
          input_kpts1_tensor->Resize({1, keypoints1.shape[2], 2});
          auto* input_kpts1_data = input_kpts1_tensor->mutable_data<float>();
          memcpy(input_kpts1_data, keypoints1.data, keypoints1.length * sizeof(float));

          std::unique_ptr<paddle::lite_api::Tensor> input_desc0_tensor = predictor->GetInput(2);
          input_desc0_tensor->Resize({1, descriptors0.shape[2], descriptors0.shape[3]});
          auto* input_desc0_data = input_desc0_tensor->mutable_data<float>();
          memcpy(input_desc0_data, descriptors0.data, descriptors0.length * sizeof(float));

          std::unique_ptr<paddle::lite_api::Tensor> input_desc1_tensor = predictor->GetInput(3);
          input_desc1_tensor->Resize({1, descriptors1.shape[2], descriptors1.shape[3]});
          auto* input_desc1_data = input_desc1_tensor->mutable_data<float>();
          memcpy(input_desc1_data, descriptors1.data, descriptors1.length * sizeof(float));
    
          // -------------------------step 3: run model------------------------------ //
          timespec begin, end;
          gettime(begin);
          predictor->Run();    // crashes here
          gettime(end);
          LOGD("Inference -- step 3: Paddle-Lite, %d threads, run lightglue complete, time: %d",
                      this->config.num_threads, get_milliseconds(begin, end));
    
          // -------------------------step 4: parse output--------------------------- //
          std::vector<std::string> output_names = predictor->GetOutputNames();
          assert(output_names.size() == 1);
          ...
    

    Calling Run() crashes immediately. Error log:
    2024-04-30 14:59:17.576 18781-18781/com.abcde.pointglue A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x70d5a85514 in tid 18781 (abcde.pointglue), pid 18781 (abcde.pointglue)
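A SIGSEGV inside Run() often points at an input buffer whose Resize() shape does not match the number of floats copied in (here, the product of the Resize() dimensions vs. keypoints0.length, etc.). Below is a minimal, self-contained sketch of a pre-copy sanity check one could run before each memcpy; the helper names NumElements and CopySizeMatches are hypothetical and not part of the Paddle-Lite API:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Hypothetical helper: total element count implied by a tensor shape.
static size_t NumElements(const std::vector<int64_t>& shape) {
    return std::accumulate(shape.begin(), shape.end(), size_t{1},
                           std::multiplies<size_t>());
}

// Returns true only when the source buffer exactly fills the resized tensor,
// so the subsequent memcpy cannot write past the tensor's allocation.
static bool CopySizeMatches(const std::vector<int64_t>& tensor_shape,
                            size_t src_element_count) {
    return NumElements(tensor_shape) == src_element_count;
}
```

For example, asserting CopySizeMatches({1, keypoints0.shape[2], 2}, keypoints0.length) before the first memcpy would catch a mismatch early; a copy that over- or under-fills the tensor can otherwise surface later as exactly this kind of segmentation fault.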

MuShangCC (Collaborator) commented

Before running, set export GLOG_v=5; when executing, append >log.txt 2>&1 to the command to redirect the log into log.txt. Please attach log.txt and the nb model here~
