Hi,
We are trying to use the TFLite framework with the NPU for machine learning. Here are our steps:
1) train a model with PyTorch
2) export the model to ONNX format (see the sketch after this list)
3) convert the model with the eIQ tools
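For reference, a minimal sketch of how we do the export in step 2 is below; the tiny network, input shape, tensor names, and opset version are placeholders rather than our actual model:

```python
import torch
import torch.nn as nn

# Tiny placeholder network standing in for the model trained in step 1;
# in our real flow we load the trained weights from a checkpoint instead.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 224 * 224, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyNet().eval()

# Dummy input with the shape the network expects (1x3x224x224 is assumed here)
dummy_input = torch.randn(1, 3, 224, 224)

# Step 2: export to ONNX; opset 13 and the tensor names are assumptions
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```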
With this workflow we hit the following issue:

We would appreciate your help with these questions:
a) Does eIQ (version 2.7.12) support converting an ONNX model to TFLite format? (See the attached file.)
b) We cannot find a quantization option for float16. Do you know whether the TFLite framework supports NPU (NNAPI) inference with float16 precision? (PS: the original model is ONNX; a sketch of the float16 conversion we expected is shown after these questions.)
c) Any recommendation for using the NPU for float16 inference? ONNX Runtime, TensorFlow, or DeepViewRT? (PS: the original model is ONNX, and the BSP version we use is 5.10.72-2.2.2.)
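For context on question b, this is the float16 post-training quantization we expected to find, using the stock TensorFlow Lite converter; it assumes the ONNX model has already been converted to a TensorFlow SavedModel (for example with onnx-tf), and the file paths are placeholders. We could not find an equivalent option in the eIQ tooling.

```python
import tensorflow as tf

# Assumes the ONNX model has already been converted to a TensorFlow
# SavedModel (e.g. with onnx-tf); "saved_model_dir" is a placeholder path.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Post-training float16 quantization: weights are stored as float16,
# while ops fall back to float32 where float16 kernels are unavailable.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()
with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```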
We look forward to your response.
Best Regards!