<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Use Audio/Voice as inputs for eIQ toolkit in eIQ Machine Learning Software</title>
    <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1316045#M390</link>
    <description>&lt;P&gt;Hi Shai,&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;Currently, eIQ Portal can only be used to generate vision-based models for classification and detection. Audio/voice models would need to be created with TensorFlow or PyTorch; an example can be found here:&amp;nbsp;&lt;A href="https://www.tensorflow.org/tutorials/audio/simple_audio" target="_blank"&gt;https://www.tensorflow.org/tutorials/audio/simple_audio&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; Once the model is created, eIQ can then be used to run it, as shown in the i.MX and i.MX RT examples.&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 30 Jul 2021 14:50:38 GMT</pubDate>
    <dc:creator>anthony_huereca</dc:creator>
    <dc:date>2021-07-30T14:50:38Z</dc:date>
    <item>
      <title>Use Audio/Voice as inputs for eIQ toolkit</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1312883#M383</link>
      <description>&lt;P&gt;Dear Team,&lt;/P&gt;&lt;P&gt;My customer is planning to use i.MX RT to run a TensorFlow model for voice/audio recognition, but&lt;/P&gt;&lt;P&gt;I have noticed that eIQ Toolkit and eIQ Portal are suited to image classification/detection.&lt;/P&gt;&lt;P&gt;Could you please advise on the correct process for working with the eIQ toolkit to handle audio/voice as input?&lt;/P&gt;&lt;P&gt;Waiting for your kind responses; thanks in advance.&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Shai&lt;/P&gt;</description>
      <pubDate>Sun, 25 Jul 2021 21:14:18 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1312883#M383</guid>
      <dc:creator>shai_b</dc:creator>
      <dc:date>2021-07-25T21:14:18Z</dc:date>
    </item>
    <item>
      <title>Re: Use Audio/Voice as inputs for eIQ toolkit</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1315687#M389</link>
      <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;Is there any update regarding my question above?&lt;/P&gt;&lt;P&gt;In addition, can the eIQ toolkit import a model that has already been trained to handle audio, and can that model then be run on an MCU via TensorFlow Lite Micro inference?&lt;/P&gt;&lt;P&gt;Waiting for your kind feedback; thanks in advance.&lt;/P&gt;&lt;P&gt;BR,&lt;/P&gt;&lt;P&gt;Shai&lt;/P&gt;</description>
      <pubDate>Fri, 30 Jul 2021 07:05:51 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1315687#M389</guid>
      <dc:creator>shai_b</dc:creator>
      <dc:date>2021-07-30T07:05:51Z</dc:date>
    </item>
    <item>
      <title>Re: Use Audio/Voice as inputs for eIQ toolkit</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1316045#M390</link>
      <description>&lt;P&gt;Hi Shai,&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;Currently, eIQ Portal can only be used to generate vision-based models for classification and detection. Audio/voice models would need to be created with TensorFlow or PyTorch; an example can be found here:&amp;nbsp;&lt;A href="https://www.tensorflow.org/tutorials/audio/simple_audio" target="_blank"&gt;https://www.tensorflow.org/tutorials/audio/simple_audio&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; Once the model is created, eIQ can then be used to run it, as shown in the i.MX and i.MX RT examples.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 30 Jul 2021 14:50:38 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1316045#M390</guid>
      <dc:creator>anthony_huereca</dc:creator>
      <dc:date>2021-07-30T14:50:38Z</dc:date>
    </item>
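The simple_audio tutorial linked in the reply above feeds the model a spectrogram rather than the raw waveform, so the same front-end processing is needed wherever the model runs. Below is a minimal numpy-only sketch of that waveform-to-spectrogram step; the frame length, hop size, and test tone are assumptions chosen for illustration, not the tutorial's exact tf.signal parameters:

```python
# Minimal sketch of the waveform-to-spectrogram preprocessing used by
# audio models such as the TensorFlow simple_audio tutorial. Illustrative
# numpy-only version; frame_len, hop, and the sample tone are assumptions.
import numpy as np

def spectrogram(waveform, frame_len=256, hop=128):
    """Magnitude STFT: split the waveform into overlapping frames,
    apply a Hann window, and take the FFT magnitude of each frame."""
    n_frames = 1 + (len(waveform) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([waveform[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequency bins (frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 440 Hz tone at 16 kHz, the sample rate
# commonly used for speech-command models
sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

On an i.MX RT target this front-end would typically be reimplemented in C, often with a fixed-point FFT, before the resulting tensor is handed to the inference engine.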
    <item>
      <title>Re: Use Audio/Voice as inputs for eIQ toolkit</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1340215#M464</link>
      <description>&lt;P&gt;The MCUXpresso SDK has TensorFlow Lite for Microcontrollers (TFLM) examples. If your model does not contain operators that are unsupported by TFLM, it can in general be run for inference; note that you may also need some pre-processing and post-processing code besides the model inference itself.&lt;/P&gt;&lt;P&gt;An important note: the eIQ example configures TFLM to include only the operators needed by the example model (mobilenet) in order to save flash. It is VERY LIKELY that your model uses other operators, so the safest approach is to modify the code in "model.cpp" to use "tflite::AllOpsResolver" instead of&amp;nbsp;MODEL_GetOpsResolver() (in model_mobilenet_ops_micro.cpp). Once you know the exact operators your model requires, you can write your own version of&amp;nbsp;MODEL_GetOpsResolver() that includes only those operators, to save flash.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Sep 2021 04:13:23 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Use-Audio-Voice-as-inputs-for-eIQ-toolkit/m-p/1340215#M464</guid>
      <dc:creator>rocky_song</dc:creator>
      <dc:date>2021-09-15T04:13:23Z</dc:date>
    </item>
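Before the operator-resolver step described in the reply above, the converted .tflite model file is usually embedded in the MCUXpresso project as a C byte array, in the format produced by running xxd -i on the file. A small Python sketch of that conversion step follows; the variable and function names are illustrative, not taken from the eIQ example:

```python
# Sketch of the ".tflite file to C array" step used to embed a model in a
# TFLM/MCUXpresso project (equivalent to xxd -i output). Names such as
# tflite_to_c_array and model_data are illustrative assumptions.
def tflite_to_c_array(model_bytes, var_name="model_data", cols=12):
    """Render raw .tflite bytes as a C unsigned-char array definition."""
    lines = []
    for i in range(0, len(model_bytes), cols):
        chunk = model_bytes[i : i + cols]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (
        f"const unsigned char {var_name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {var_name}_len = {len(model_bytes)};\n"
    )

# Usage: header = tflite_to_c_array(open("model.tflite", "rb").read())
```

The generated source file can then be compiled into the firmware, and the array passed to the TFLM interpreter alongside the chosen op resolver.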
  </channel>
</rss>

