<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Tensorflow Lite operator refused by NNAPI Delegate in i.MX Processors</title>
    <link>https://community.nxp.com/t5/i-MX-Processors/Tensorflow-Lite-operator-refused-by-NNAPI-Delegate/m-p/1332553#M179398</link>
    <description>&lt;P&gt;Hello Creative,&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;That's from an underlying NNAPI driver. When you delegate your model to NNAPI, the NNAPI runtime asks all the drivers to decide which driver / hardware is the right one to dispatch each op to.&lt;/LI&gt;
&lt;LI&gt;You can try&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV class="snippet-clipboard-content position-relative"&gt;
&lt;PRE&gt;&lt;CODE&gt;adb shell setprop debug.nn.vlog 1
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;and then run &lt;CODE&gt;adb logcat | grep -i best&lt;/CODE&gt; to see how your model is handled by NNAPI. See &lt;A href="https://developer.android.com/ndk/guides/neuralnetworks" rel="nofollow" target="_blank"&gt;https://developer.android.com/ndk/guides/neuralnetworks&lt;/A&gt; for more NNAPI-related information.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Use either one of them, not both.&lt;/LI&gt;
&lt;LI&gt;You may want to read the related guides first: &lt;A href="https://www.tensorflow.org/lite/performance/model_optimization" rel="nofollow" target="_blank"&gt;https://www.tensorflow.org/lite/performance/model_optimization&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;</description>
    <pubDate>Tue, 31 Aug 2021 13:01:41 GMT</pubDate>
    <dc:creator>Bio_TICFSL</dc:creator>
    <dc:date>2021-08-31T13:01:41Z</dc:date>
    <item>
      <title>Tensorflow Lite operator refused by NNAPI Delegate</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Tensorflow-Lite-operator-refused-by-NNAPI-Delegate/m-p/1332272#M179370</link>
      <description>&lt;P&gt;Hello!&lt;/P&gt;&lt;P&gt;I'm trying to run real-time face detection on the i.MX 8M Plus. I trained my .tflite model from the pre-trained ssd_mobilenet_v2_320x320_coco model, then converted and quantized it with the TensorFlow Lite converter using inference_input_type=uint8 and inference_output_type=float32.&lt;/P&gt;&lt;P&gt;When I run the model I get these warnings:&lt;/P&gt;&lt;P&gt;WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.&lt;BR /&gt;WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.&lt;BR /&gt;WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.&lt;BR /&gt;WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.&lt;BR /&gt;WARNING: Operator CUSTOM (v1) refused by NNAPI delegate: Unsupported operation type.&lt;/P&gt;&lt;P&gt;Because of this, my model does not run fully on the NPU and is slower than I wanted.&lt;/P&gt;&lt;P&gt;I have tried other conversion types - none of them helped.&lt;/P&gt;&lt;P&gt;How can I get rid of these warnings and have my model run fully on the NPU?&lt;/P&gt;&lt;P&gt;I attach my model for you to examine.&lt;/P&gt;</description>
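For readers triaging the same symptom: a quick way to see which operators fell back to the CPU is to tally the delegate warnings from the runtime log. A minimal sketch in pure Python, using the warning text quoted in the post above:

```python
# Tally which operators the NNAPI delegate refused, by parsing the
# warning lines that TensorFlow Lite prints when the model is loaded.
import re
from collections import Counter

# Warning text as quoted in the post above.
log = """\
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator CUSTOM (v1) refused by NNAPI delegate: Unsupported operation type.
"""

# Match "Operator <NAME> (v<N>) refused" and count occurrences per op name.
pattern = re.compile(r"Operator (\w+) \(v\d+\) refused")
refused = Counter(pattern.findall(log))
print(refused)  # Counter({'PACK': 4, 'CUSTOM': 1})
```

Here PACK and the CUSTOM op (typically the SSD post-processing op in detection models) are the graph partitions kept on the CPU.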
      <pubDate>Tue, 31 Aug 2021 06:53:20 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Tensorflow-Lite-operator-refused-by-NNAPI-Delegate/m-p/1332272#M179370</guid>
      <dc:creator>Creative</dc:creator>
      <dc:date>2021-08-31T06:53:20Z</dc:date>
    </item>
    <item>
      <title>Re: Tensorflow Lite operator refused by NNAPI Delegate</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Tensorflow-Lite-operator-refused-by-NNAPI-Delegate/m-p/1332553#M179398</link>
      <description>&lt;P&gt;Hello Creative,&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;That's from an underlying NNAPI driver. When you delegate your model to NNAPI, the NNAPI runtime asks all the drivers to decide which driver / hardware is the right one to dispatch each op to.&lt;/LI&gt;
&lt;LI&gt;You can try&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV class="snippet-clipboard-content position-relative"&gt;
&lt;PRE&gt;&lt;CODE&gt;adb shell setprop debug.nn.vlog 1
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;and then run &lt;CODE&gt;adb logcat | grep -i best&lt;/CODE&gt; to see how your model is handled by NNAPI. See &lt;A href="https://developer.android.com/ndk/guides/neuralnetworks" rel="nofollow" target="_blank"&gt;https://developer.android.com/ndk/guides/neuralnetworks&lt;/A&gt; for more NNAPI-related information.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Use either one of them, not both.&lt;/LI&gt;
&lt;LI&gt;You may want to read the related guides first: &lt;A href="https://www.tensorflow.org/lite/performance/model_optimization" rel="nofollow" target="_blank"&gt;https://www.tensorflow.org/lite/performance/model_optimization&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;</description>
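Along the lines of the model-optimization guide linked in the reply, full-integer quantization with the TensorFlow Lite converter is typically configured as below. This is a sketch, not verified against the poster's model; `saved_model_dir` and `representative_dataset` are placeholders you must supply:

```python
import tensorflow as tf

# Placeholder: path to the exported SavedModel of the trained detector.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Placeholder: a generator yielding representative input samples,
# e.g. a few hundred preprocessed images, needed to calibrate ranges.
converter.representative_dataset = representative_dataset

# Restrict the converter to int8 builtin ops so ops are not silently
# left in float; conversion fails instead of falling back.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
```

Note the poster used a float32 output type; keeping both input and output integer (as above) avoids float dequantize ops at the graph edges, though the unsupported PACK/CUSTOM ops themselves would still run on the CPU regardless of quantization settings.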
      <pubDate>Tue, 31 Aug 2021 13:01:41 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Tensorflow-Lite-operator-refused-by-NNAPI-Delegate/m-p/1332553#M179398</guid>
      <dc:creator>Bio_TICFSL</dc:creator>
      <dc:date>2021-08-31T13:01:41Z</dc:date>
    </item>
  </channel>
</rss>

