Dear team!
For our bachelor's thesis, two colleagues and I have to evaluate the board for industrial ML applications.
We found this paper:
First, there is a description saying that the board uses VeriSilicon's VIP8000 chip, and below that it says a VIP9000 is used. Which one is correct? Are there any specification sheets available to us? We really want to understand in detail how the NPU computes over neural networks, and where and how we can evaluate boards that use this SoC, especially how the hardware acceleration works.
On page 9 there is a description of the different VIP types. How should we interpret this graphic, and what does the x-axis show? We read that the NPU has 3 Tensor Processing Cores and 6 Neural Network Cores, i.e. 9 cores in total.
We hope you can help us gain a better understanding!
Kind regards,
Michael
Hello,
thank you for your response!
We have read the i.MX 8M Plus reference manual, but some questions are still open.
Page 5910 says that there is a network size limit of 2048, but how is the network size defined? Nodes, layers, weights, etc.?
It also lists RNN support as "yes"; how can RNNs be accelerated with the NPU?
Regards
Hello mhafner,
You can read the i.MX 8M Plus reference manual and the Linux reference manual for more information.
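As an illustration (not official guidance), the sketch below shows how a quantized TensorFlow Lite model is typically offloaded to the NPU from Linux userspace using the eIQ tflite_runtime. The delegate path /usr/lib/libvx_delegate.so and the model file name are assumptions you should verify against your BSP and the i.MX Machine Learning User's Guide.

    # Minimal sketch: run a TFLite model on the i.MX 8M Plus NPU via the VX delegate.
    # Assumptions (check your BSP): eIQ tflite_runtime is installed, the VX delegate
    # library is at /usr/lib/libvx_delegate.so, and "model_quant.tflite" (placeholder
    # name) is an integer-quantized model; unsupported ops fall back to the CPU.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    VX_DELEGATE = "/usr/lib/libvx_delegate.so"  # path is an assumption, verify on your image

    delegate = tflite.load_delegate(VX_DELEGATE)
    interpreter = tflite.Interpreter(
        model_path="model_quant.tflite",
        experimental_delegates=[delegate],
    )
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Feed a dummy input with the expected shape and dtype, then run inference on the NPU.
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))

Without the experimental_delegates argument the same model runs on the CPU, which makes this a simple way to compare accelerated and non-accelerated inference times on the board.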
Regards