Real-Time Embedded Computer Vision? Nvidia Jetson TX2 + VisionWorks toolkit
Computer vision is a computationally demanding task: if you work in this field, you know that reaching very high frame rates requires maximizing the parallelism of your algorithms, and that massive parallelism is only achievable by moving the processing to GPUs.
Massively parallel embedded computing was a dream for computer vision until Nvidia launched the Jetson TK1 board in 2014. If you read my previous two blog posts (unboxing, first boot), you know that Nvidia launched its new board, the Jetson TX2, on 7 March 2017, and that it represents an astonishing step forward in processing performance over its two predecessors.
Nvidia’s Jetson boards were born to bring CUDA and massive parallelism to “portable” systems, where a desktop or laptop computer would be a serious limitation.
Autonomous robots, autonomous and semi-autonomous vehicles, aerial drones, and intelligent video-surveillance cameras can now run real-time computer vision algorithms on a “simple” embedded board.
What is amazing about Nvidia’s work is that they have not focused on the hardware alone: they have built a very impressive combination of hardware and software that makes it “easy” to exploit the boards’ full computational power.
In this particular case, speaking of computer vision, my focus goes to the VisionWorks™ library, written to exploit the full power of the Jetson boards.
VisionWorks™ is a toolkit that implements and extends the Khronos OpenVX standard. It is optimized for CUDA-capable GPUs and SoCs, enabling developers to build computer vision applications on a scalable and flexible platform.
VisionWorks™ v1.6 is available in the SDK provided by Nvidia for the Jetson boards.
But you cannot fully appreciate the power of an embedded board until you see it working on real tasks… so enjoy this video:
If my work has been useful to you, buy me a coffee: it will give me the energy to keep going...