<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: NXP i.MX8MP EVK: NNAPI run insightface in Android in eIQ Machine Learning Software</title>
    <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1244187#M352</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/31519"&gt;@Geo&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;How did you obtain the InsightFace model? Can you share it? Did you use the 'benchmark_model' eIQ TFLite app or custom code?&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;Raluca&lt;/P&gt;</description>
    <pubDate>Thu, 11 Mar 2021 13:31:59 GMT</pubDate>
    <dc:creator>raluca_popa</dc:creator>
    <dc:date>2021-03-11T13:31:59Z</dc:date>
    <item>
      <title>NXP i.MX8MP EVK: NNAPI run insightface in Android</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1239137#M351</link>
      <description>&lt;P&gt;My environment&lt;BR /&gt;Hardware: NXP i.MX8MP EVK A01&lt;BR /&gt;Software: Android 10&lt;BR /&gt;Model: insightface_quant, Input type:&amp;nbsp;uint8[1,112,112,3], Output type:&amp;nbsp;float32[1,512]&lt;/P&gt;&lt;P&gt;I am trying to use NNAPI to load insightface and run inference on Android.&lt;BR /&gt;When I load the model, the NPU runs VsiPreparedModel::initialize() three times.&lt;BR /&gt;Then when I run prediction, the NPU computes three times.&lt;BR /&gt;So the total time is about the same as using the CPU.&lt;BR /&gt;Even when I use the smaller insightface_r32 model (34.5 MB), the issue remains.&lt;/P&gt;&lt;P&gt;Please refer to the attached file.&lt;/P&gt;
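&lt;P&gt;For reference, here is a minimal sketch (illustrative only, not my app code) of checking the quantized model's tensor interface with the TensorFlow Lite Python interpreter; the file name insightface_quant.tflite is assumed:&lt;/P&gt;
&lt;PRE&gt;
# Hypothetical sketch: inspect the I/O types of the quantized TFLite model.
# Assumes a local file named insightface_quant.tflite with the interface above.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="insightface_quant.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print(inp["dtype"], inp["shape"])   # expected: uint8, [1, 112, 112, 3]
print(out["dtype"], out["shape"])   # expected: float32, [1, 512]
&lt;/PRE&gt;</description>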
      <pubDate>Wed, 03 Mar 2021 06:47:48 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1239137#M351</guid>
      <dc:creator>Geo</dc:creator>
      <dc:date>2021-03-03T06:47:48Z</dc:date>
    </item>
    <item>
      <title>Re: NXP i.MX8MP EVK: NNAPI run insightface in Android</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1244187#M352</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/31519"&gt;@Geo&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;How did you obtain the InsightFace model? Can you share it? Did you use the 'benchmark_model' eIQ TFLite app or custom code?&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;Raluca&lt;/P&gt;</description>
      <pubDate>Thu, 11 Mar 2021 13:31:59 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1244187#M352</guid>
      <dc:creator>raluca_popa</dc:creator>
      <dc:date>2021-03-11T13:31:59Z</dc:date>
    </item>
    <item>
      <title>Re: NXP i.MX8MP EVK: NNAPI run insightface in Android</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1244626#M353</link>
      <description>&lt;P&gt;The reason you observed VsiPreparedModel::initialize() running three times is that your model was split into 3 sub-graphs, and those sub-graphs are executed separately by VsiNpu. Please refer to the following commands to enable NPU profiling.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On Target&lt;/P&gt;
&lt;P&gt;Click 10 times on the About Tablet option in Settings to become a developer.&lt;/P&gt;
&lt;P&gt;Choose Settings -&amp;gt; Developer Options -&amp;gt; OEM Unlocking to enable OEM unlocking.&lt;/P&gt;
&lt;P&gt;In Android terminal (UART terminal) enter the following command:&lt;/P&gt;
&lt;P&gt;$ reboot bootloader&lt;/P&gt;
&lt;P&gt;On Host&lt;/P&gt;
&lt;P&gt;With the device connected via USB-C:&lt;/P&gt;
&lt;P&gt;$ sudo fastboot oem unlock&lt;/P&gt;
&lt;P&gt;Disable dm-verity:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;$&amp;nbsp; adb root&lt;/P&gt;
&lt;P&gt;$&amp;nbsp; adb disable-verity&lt;/P&gt;
&lt;P&gt;$&amp;nbsp; adb reboot&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Disable SELinux by executing the command below at the U-Boot prompt:&lt;/P&gt;
&lt;P&gt;# setenv append_bootargs androidboot.selinux=permissive&lt;/P&gt;
&lt;P&gt;or, at runtime from the Android shell:&lt;/P&gt;
&lt;P&gt;$ setenforce 0&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After unlocking Android, run the following steps to enable the profiling service:&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Rename the service binary in /vendor/bin/hw/ from android.neural.network***vsi-npu*** to another name.&lt;/LI&gt;
&lt;LI&gt;Kill the current service: find it with ps -ef | grep vsi-npu, then kill it.&lt;/LI&gt;
&lt;LI&gt;Start the renamed service from /vendor/bin/hw.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;setprop VSI_NN_LOG_LEVEL 5&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Collect the log with logcat.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Mar 2021 06:24:02 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1244626#M353</guid>
      <dc:creator>xiaofengren</dc:creator>
      <dc:date>2021-03-12T06:24:02Z</dc:date>
    </item>
    <item>
      <title>Re: NXP i.MX8MP EVK: NNAPI run insightface in Android</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1245422#M354</link>
      <description>&lt;P&gt;My environment&lt;BR /&gt;Python 3.7.0&lt;BR /&gt;TensorFlow 2.4.0&lt;/P&gt;&lt;P&gt;The model is from &lt;A href="https://github.com/deepinsight/insightface" target="_blank"&gt;https://github.com/deepinsight/insightface&lt;/A&gt;; I used mmdnn to convert it to pb format and then TensorFlow to convert it to tflite.&lt;BR /&gt;I uploaded insightface_r100_quant.tflite to a WeTransfer link: &lt;A href="https://we.tl/t-Mdz4PKLYJv" target="_blank"&gt;https://we.tl/t-Mdz4PKLYJv&lt;/A&gt;&lt;/P&gt;&lt;P&gt;insightface_r100_quant.tflite&lt;BR /&gt;Input: name: data, type:&amp;nbsp;uint8[1,112,112,3]&lt;BR /&gt;Output: name: output, type:&amp;nbsp;float32[1,512]&lt;/P&gt;&lt;P&gt;The attached file is from running the benchmark with insightface_r100_quant.tflite on the NXP i.MX8MP EVK.&lt;/P&gt;
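&lt;P&gt;For illustration, a minimal sketch of this kind of conversion (not the exact script used here; it assumes the mmdnn output is a frozen graph named insightface_r100.pb with tensors named data and output):&lt;/P&gt;
&lt;PRE&gt;
# Hypothetical sketch, not the exact script behind insightface_r100_quant.tflite.
# Assumes the mmdnn output is a frozen GraphDef (insightface_r100.pb) whose
# input tensor is 'data' and output tensor is 'output', as listed above.
import numpy as np
import tensorflow as tf  # written against the TF 2.4 API

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="insightface_r100.pb",
    input_arrays=["data"],
    output_arrays=["output"],
    input_shapes={"data": [1, 112, 112, 3]},
)

# Post-training quantization needs a representative dataset for calibration.
def representative_dataset():
    for _ in range(100):
        # Replace the random data with real, preprocessed 112x112 face crops.
        yield [np.random.rand(1, 112, 112, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Quantize only the input interface to uint8; the output stays float32 (the
# default), giving the uint8[1,112,112,3] input and float32[1,512] output.
converter.inference_input_type = tf.uint8

with open("insightface_r100_quant.tflite", "wb") as f:
    f.write(converter.convert())
&lt;/PRE&gt;</description>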
      <pubDate>Mon, 15 Mar 2021 06:53:41 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1245422#M354</guid>
      <dc:creator>Geo</dc:creator>
      <dc:date>2021-03-15T06:53:41Z</dc:date>
    </item>
    <item>
      <title>Re: NXP i.MX8MP EVK: NNAPI run insightface in Android</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1253445#M355</link>
      <description>&lt;P&gt;Update on the current state of the issue.&lt;/P&gt;&lt;P&gt;The benchmark was downloaded from &lt;A href="https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex" target="_blank"&gt;https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I used the benchmark to run the model on the NXP i.MX8MP EVK A01.&lt;BR /&gt;Attached are reports with and without NNAPI.&lt;BR /&gt;insightface_r100_quant_4_1_50_nnapi_profiling.txt ==&amp;gt; ./android_aarch64_benchmark_model_plus_flex --num_threads=4 --graph=insightface_r100_quant.tflite --warmup_runs=1 --num_runs=50 --use_nnapi=true --enable_op_profiling=true &amp;gt; insightface_r100_quant_4_1_50_nnapi_profiling.txt&lt;BR /&gt;insightface_r100_quant_4_1_50_profiling.txt ===&amp;gt; ./android_aarch64_benchmark_model_plus_flex --num_threads=4 --graph=insightface_r100_quant.tflite --warmup_runs=1 --num_runs=50 --enable_op_profiling=true &amp;gt; insightface_r100_quant_4_1_50_profiling.txt&lt;/P&gt;&lt;P&gt;The inference time with NNAPI (491 ms) is faster than without NNAPI (988 ms).&lt;BR /&gt;Is this reasonable? I originally thought that with the NPU it could be within 400 ms.&lt;/P&gt;&lt;P&gt;Another question: even though the benchmark result is 491 ms, when I use TensorFlow Lite in Android the total cost is still close to 1000 ms, and the warmup time is 4950 ms.&lt;BR /&gt;Please refer to the attached file Android_TensorFlow_Lite_debug.nn.vlog==1.txt.&lt;BR /&gt;Is this reasonable? I thought the inference time in TensorFlow Lite should also be about 491 ms.&lt;/P&gt;
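&lt;P&gt;For clarity, here is a minimal host-side sketch (CPU only, purely illustrative and not the Android measurement itself) of how the warmup run and the steady-state runs referred to above can be timed separately with the TFLite Python interpreter:&lt;/P&gt;
&lt;PRE&gt;
# Hypothetical host-side sketch; it only illustrates separating the warmup run
# from the steady-state runs, it does not reproduce the on-device NNAPI timing.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="insightface_r100_quant.tflite",
                                  num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

def run_once():
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.uint8))
    start = time.perf_counter()
    interpreter.invoke()
    return (time.perf_counter() - start) * 1000.0  # milliseconds

warmup_ms = run_once()                        # first run includes one-time setup
steady_ms = [run_once() for _ in range(50)]   # comparable to --num_runs=50
print("warmup:", warmup_ms, "ms, avg:", sum(steady_ms) / len(steady_ms), "ms")
&lt;/PRE&gt;</description>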
      <pubDate>Mon, 29 Mar 2021 08:33:46 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1253445#M355</guid>
      <dc:creator>Geo</dc:creator>
      <dc:date>2021-03-29T08:33:46Z</dc:date>
    </item>
    <item>
      <title>Re: NXP i.MX8MP EVK: NNAPI run insightface in Android</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1253498#M356</link>
      <description>&lt;P&gt;Update on the current state of the issue.&lt;/P&gt;&lt;P&gt;The benchmark was downloaded from &lt;A href="https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex" target="_blank"&gt;https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Attached are reports with and without NNAPI.&lt;/P&gt;&lt;P&gt;insightface_r100_quant_4_1_50_profiling.txt ===&amp;gt; ./android_aarch64_benchmark_model_plus_flex --num_threads=4 --graph=insightface_r100_quant.tflite --warmup_runs=1 --num_runs=50 --enable_op_profiling=true &amp;gt; insightface_r100_quant_4_1_50_profiling.txt&lt;BR /&gt;insightface_r100_quant_4_1_50_nnapi_profiling.txt ===&amp;gt; ./android_aarch64_benchmark_model_plus_flex --num_threads=4 --graph=insightface_r100_quant.tflite --warmup_runs=1 --num_runs=50 --use_nnapi=true --enable_op_profiling=true &amp;gt; insightface_r100_quant_4_1_50_nnapi_profiling.txt&lt;/P&gt;&lt;P&gt;The inference time with NNAPI is 491 ms, and the inference time without NNAPI is 988 ms.&lt;BR /&gt;Is this reasonable? I originally thought that with the NPU it could be within 400 ms.&lt;/P&gt;&lt;P&gt;Another problem is that even though the benchmark's inference time is 491 ms, the inference time of TensorFlow Lite on Android is nearly 1000 ms, and the warmup time is 4950 ms.&lt;BR /&gt;Please refer to the attached file Android_TensorFlow_Lite_debug.nn.vlog==1.txt.&lt;BR /&gt;Is this reasonable? I thought the inference time with NNAPI in TensorFlow Lite should be about 491 ms, as in the benchmark.&lt;/P&gt;</description>
      <pubDate>Mon, 29 Mar 2021 09:41:58 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1253498#M356</guid>
      <dc:creator>Geo</dc:creator>
      <dc:date>2021-03-29T09:41:58Z</dc:date>
    </item>
    <item>
      <title>Re: NXP i.MX8MP EVK: NNAPI run insightface in Android</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1254166#M357</link>
      <description>&lt;P&gt;Dear&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/31519"&gt;@Geo&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;Could you please upload your TFLite file again, since the WeTransfer link has expired?&lt;/P&gt;&lt;P&gt;And would you mind sharing your Python code for converting the InsightFace model to TFLite format with a uint8 input and a float32 output? I have no idea how to convert to a TFLite model whose input and output have different data types.&lt;/P&gt;&lt;P&gt;Thank you so much!&lt;/P&gt;&lt;P&gt;Bao&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 09:26:16 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/NXP-i-XM8MP-EVK-NNAPI-run-insightface-in-Android/m-p/1254166#M357</guid>
      <dc:creator>PhamHoangBao</dc:creator>
      <dc:date>2021-03-30T09:26:16Z</dc:date>
    </item>
  </channel>
</rss>

