Please consider the following:
* You want to transfer image pixels to the FPGA - how many pixels per image, and what is the image size? Can you transfer only a small batch of pixels at a time, enough for the first processing step (I suppose some filtering), without needing to store the whole image?
* Do you want your system to work in real time? If not, then even a slow/old FPGA with very limited resources can handle that task, but a memory buffer is needed to store images (Camera => RAM => FPGA => RAM => Display).
* If you are interested in real-time processing, you should estimate how many resources are needed inside the FPGA to do the job.
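To make the estimate concrete, here is a back-of-the-envelope sketch. The resolution, frame rate, and pixel depth below are made-up example numbers, not figures from this thread - plug in your own:

```python
# Rough real-time budget estimate (example numbers, not from the thread):
# 640x480 8-bit grayscale at 30 fps, 3x3 filter as the first processing step.
width, height, fps = 640, 480, 30
bytes_per_pixel = 1

pixel_rate = width * height * fps          # pixels/second the FPGA must absorb
bandwidth = pixel_rate * bytes_per_pixel   # bytes/second on the input link

# A 3x3 filter only needs two full lines plus a few pixels buffered,
# not the whole frame - this is what makes streaming processing cheap.
line_buffer_bytes = 2 * width * bytes_per_pixel

print(f"pixel rate : {pixel_rate / 1e6:.2f} Mpixel/s")
print(f"bandwidth  : {bandwidth / 1e6:.2f} MB/s")
print(f"line buffer: {line_buffer_bytes} bytes (vs {width * height} for a full frame)")
```

Even these modest numbers show why buffering only a couple of lines instead of a whole frame matters: the line buffers fit easily in on-chip BRAM, while a full frame usually does not.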
PYNQ is Zynq based, which means you can exploit the synergy between the CPU and the FPGA - are you thinking about HLS or SDSoC in an acceleration context?
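The "process a few pixels at a time with line buffers" idea above can be modeled in plain software before touching HLS. This is only a sketch: the function name is hypothetical, the kernel is a simple box blur, and an HLS tool would map the two line buffers to BRAM shift registers rather than Python lists:

```python
def stream_filter_3x3(pixels, width, height):
    """Consume pixels one at a time; keep only two line buffers, never a full frame.

    Models the streaming structure HLS would synthesize: at pixel (y, x) the
    3x3 window over rows y-2..y and columns x-2..x is complete, so the output
    for the window centred at (y-1, x-1) can be emitted immediately.
    """
    line0 = [0] * width  # buffered line y-2
    line1 = [0] * width  # buffered line y-1
    out = []
    for y in range(height):
        cur = [0] * width
        for x in range(width):
            cur[x] = pixels[y * width + x]   # one pixel arrives per "cycle"
            if y >= 2 and x >= 2:
                s = (sum(line0[x - 2:x + 1])
                     + sum(line1[x - 2:x + 1])
                     + sum(cur[x - 2:x + 1]))
                out.append(s // 9)           # 3x3 box blur of the window
        line0, line1 = line1, cur            # shift the line buffers down
    return out  # (height - 2) * (width - 2) filtered pixels

# Usage: a flat 4x4 image of constant value stays constant under a box blur.
image = [9] * 16
print(stream_filter_3x3(image, 4, 4))  # -> [9, 9, 9, 9]
```

The key point for the FPGA version is the memory footprint: two lines of storage regardless of image height, which is exactly why the first bullet asks whether you can avoid storing the whole image.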
Hi
I think you should start by building the network in a known framework such as
Keras/TensorFlow, and check that you get what you wish for.
Then do the math and calculations for inferencing on the FPGA.
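That math is mostly counting multiply-accumulates (MACs) per layer and comparing against the DSP budget of the part. A minimal sketch, where the layer shapes are invented example values and the one-MAC-per-DSP-per-cycle throughput is an idealized upper bound:

```python
def conv_macs(h_out, w_out, c_in, c_out, k):
    """Multiply-accumulates for one stride-1 conv layer with a k x k kernel."""
    return h_out * w_out * c_out * (k * k * c_in)

# Hypothetical tiny CNN: (out height, out width, in channels, out channels, kernel)
layers = [
    (30, 30, 1, 8, 3),
    (13, 13, 8, 16, 3),
]
total_macs = sum(conv_macs(*layer) for layer in layers)

dsp_count = 220      # e.g. the Zynq-7020 on a PYNQ-Z1 has 220 DSP48E1 slices
clock_hz = 100e6     # a typical fabric clock rate
ideal_fps = dsp_count * clock_hz / total_macs  # best case: every DSP busy every cycle

print(f"MACs per frame : {total_macs}")
print(f"ideal frames/s : {ideal_fps:.0f}")
```

Real designs land well below the ideal figure because of memory bandwidth, control overhead, and imperfect DSP utilization, but the count tells you quickly whether a network is even in the right ballpark for the chip.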
Read this article for a better assessment:
https://hal.archives-ouvertes.fr/hal-01695375/file/hal-accelerating-cnn.pdf
Gil