Does Python run on embedded vision systems?
Yep! On many systems.
There is even a page on the Python wiki about it:
There is an interesting tutorial (hardware-independent) about optical flow in OpenCV here:
Out of the box it is quite good on any system. You want to start by trying it out on a PC with USB cameras, just to see what kind of processing power you need. Then, in principle, you would choose your embedded hardware depending on other, non-technical factors (price, reliability, ease of sourcing). These will depend on your target market. Most markets have settled on a processor these days, and OpenCV runs on all notable embedded processors.
Qualcomm and NVIDIA make good processors, and various vendors are offering their chips on boards at the moment.
Hope this helps!
you mean the Zynq range of products from Xilinx?
They're very good and come with a hardware-optimised version of OpenCV for the FPGA in their reVISION product. The ARM core itself is too underpowered to really run lots of image processing (in your case optical flow), so you would have to do the processing on the FPGA part, meaning VHDL. If you can deal with that, it's a very good choice. They have a C-to-VHDL compiler which will help (a lot). There are a lot of impressive demos and kits using the Zynq 7000s with image processing, for instance on Imaginghub itself:
Any technical info you can get directly from Xilinx (https://www.xilinx.com/products/silicon-devices/soc/zynq-7000.html). There is even a free demo to download which shows it all:
In principle, for a decent embedded processing system you just need enough processing power to pull off the optical flow. Directly in C/C++ you can pull it off on larger processors with a decent memory connection. With an interpreted language like Python, you'll need a more powerful processor to cover the interpreter overhead. FPGAs can process almost anything, limited only by the space on them: with an FPGA you are essentially building your own specialised processor. It has nothing to do with Python, though... Another possibility is to use a GPU with OpenCL or CUDA.
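To see why the interpreter overhead matters, compare a per-pixel loop in pure Python against the same frame-differencing operation vectorized in NumPy, which executes in compiled code. The frame size is an arbitrary choice for illustration:

```python
import time
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (240, 320), dtype=np.int64)  # "previous frame"
b = rng.integers(0, 256, (240, 320), dtype=np.int64)  # "current frame"

# Pure-Python per-pixel loop: every pixel access goes through the interpreter.
t0 = time.perf_counter()
sad_loop = 0
for y in range(a.shape[0]):
    for x in range(a.shape[1]):
        sad_loop += abs(int(a[y, x]) - int(b[y, x]))
t_loop = time.perf_counter() - t0

# Vectorized NumPy: the same sum of absolute differences in compiled code.
t0 = time.perf_counter()
sad_vec = int(np.abs(a - b).sum())
t_vec = time.perf_counter() - t0

print(sad_loop == sad_vec)  # True
print(t_loop > t_vec)       # the loop is typically orders of magnitude slower
```

This is why Python works fine as glue on embedded targets as long as the heavy per-pixel work stays inside OpenCV/NumPy (or on the FPGA/GPU).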
With a high-end TX2 from NVIDIA or the larger MPSoC Zynqs from Xilinx you can solve (practically) anything. But they're not cheap...
Thank you sir, but actually I am from an instrumentation background... and we got a project on real-time optical flow, which we know... but we know little about the hardware choice... Now sir, I have a CMOS sensor; can I connect it directly to embedded processors? Do I need any other hardware interfaces to connect the sensor to the embedded vision processor?
As I said, you can use normal USB cameras, or GigE cameras, also in embedded.
Classic embedded interfaces are LVDS-based (good for direct connection to an FPGA). There are many popular implementations of this.
Very popular in the mobile world is CSI from the MIPI group:
There is a Xilinx development board that runs Python:
PYNQ is an open-source project from Xilinx® that makes it easy to design embedded systems with Xilinx Zynq® All Programmable Systems on Chips (APSoCs).
Using the Python language and libraries, designers can exploit the benefits of programmable logic and microprocessors in Zynq to build more capable and exciting embedded systems.