Every day the users of the Imaginghub online community, mainly engineers and developers, design complex embedded vision systems and solutions for a variety of applications. While doing this, they constantly make decisions on technologies and procedures in order to build a first prototype or even a final product.
We wanted to learn more about how developers work on embedded vision projects and we thus carried out a survey amongst our users.
The survey, conducted in October 2017, focused on the engineers' day-to-day embedded vision project challenges, product requirements and favorite Computer Vision technologies. It reached about 165 Imaginghub community members who contributed by sharing their personal experience and knowledge with us. We would like to thank all participants once again!
But now it's time to dive into the results of the Imaginghub Survey 2017.
In our view, the survey delivers excellent insights into trending technologies, the most-used software and hardware platforms for Computer Vision powered applications, and the biggest challenges embedded vision developers face during their projects.
In the following we will highlight the most interesting findings of the survey. If you are interested in the full report, just click here.
For more information about the Imaginghub Embedded Vision Survey 2017 results report, please send an e-mail to info@imaginghub.com.
Demographics
We asked our respondents about their age and found out that 67% of them are between 20 and 36 years old. So, Imaginghub's members, and thus the respondents, seem to be in the prime of their lives. In terms of gender we expected a higher share of female respondents, but only 6.1% are female. This is a bit surprising since almost 50% of the respondents come from the Asia region, and countries like India normally show the highest percentage of female developers among students and young professionals.
User Profile
About 20% of all respondents consider themselves Makers or Professional Makers and 30% are still in education. Although more than 40% of the respondents identify as professionals, the overall level of experience in the field of Computer Vision is rather low. This is due to the high share of students and Makers/Hobbyists among the respondents.
In which of the following fields are you involved?
We asked our respondents about their professional role and the field they normally work in. Most of the respondents say that they are involved in Software Development, followed by System Integration and Hardware Development. Since other roles such as Sales and Project Management are rather under-represented, it is safe to say that the majority of respondents clearly identify as developers.
What markets or industries are your applications, products or services primarily aiming at?
Which technology areas are you currently most passionate about?
Most of the developers are interested in technologies around the Internet of Things, and most of the applications and services they build seem to target that area. Although the Internet of Things is more of an enabling technology than a market, it is quite interesting how many of the companies behind the respondents deliver products and services along the Internet of Things value chain. Not surprisingly, Machine Learning, Neural Networks and Robotics are the other fields of technology that stand out. These truly belong to the worldwide industry trends at the moment and are often combined with each other when building applications.
Software Environment
What are your preferred programming languages for vision applications?
C and C++ are still the most commonly used programming languages among our respondents. Python, however, is also heavily used and even surpasses C in the ranking. This is probably due to its clear and simple syntax, which makes it quick to learn. The language is also widely supported and heavily used in the context of machine learning, which makes it the fastest-growing major programming language these days.
Which of the following frameworks & processing libraries do you use for embedded vision development?
We asked our respondents which frameworks & processing libraries they normally use in their projects. OpenCV is clearly the most used one, ahead of OpenGL, for example, by a considerable margin. This underlines the importance of OpenCV for vision application development, but also reflects the high share of Makers, Students and Hobbyists among the respondents.
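To illustrate why OpenCV dominates this ranking, here is a minimal sketch of a typical first prototyping step in Python. The file name input.jpg is just a placeholder:

```python
# Minimal OpenCV sketch: load an image, convert it to grayscale,
# and run Canny edge detection -- a common first step in many
# vision prototypes. "input.jpg" is a placeholder file name.
import cv2

img = cv2.imread("input.jpg")                 # load image from disk (BGR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
edges = cv2.Canny(gray, 100, 200)             # edge detection, thresholds 100/200
cv2.imwrite("edges.jpg", edges)               # save the result
```

A few lines like these already produce a working result, which is exactly what makes the library so attractive for Makers and students.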
Hardware Environment
What type of processing architectures do you typically use for your projects, products or services?
In general, the processors of which vendor(s) are of most interest to you when designing an embedded system?
ARM is THE processing architecture for efficient embedded systems design. However, x86 also seems to play an important role in our respondents' projects. The share of FPGA-based designs is rather low, but still remarkable considering the complexity of and skills required for such designs. The high share of x86-based projects is also reflected in the vendor ranking: the majority of respondents prefer processors from Intel, whereas most of the ARM-based designs seem to rely on processors from NVIDIA and Texas Instruments. Qualcomm, too, is starting to claim a bigger share of the industrial embedded space.
Do you normally use any additional hardware component for image processing acceleration?
The majority of respondents use the GPU for hardware-accelerated image processing. This is probably related to the high share of NVIDIA users and their focus on machine learning and associated technologies. Nonetheless, a remarkable share of respondents say that they do not use any hardware acceleration at all. In our view, this is because about 40% of our respondents have one year or less of experience in the field of Computer Vision.
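GPU acceleration does not necessarily require vendor-specific code. As a small sketch, OpenCV's transparent API can dispatch many operations to an OpenCL-capable GPU simply by wrapping images in cv2.UMat, falling back to the CPU otherwise (input.jpg is again a placeholder):

```python
# Sketch of OpenCV's "transparent API" (T-API): operations on
# cv2.UMat objects may run on an OpenCL-capable GPU if one is
# available, with an automatic CPU fallback otherwise.
import cv2

print(cv2.ocl.haveOpenCL())               # True if an OpenCL device was found
src = cv2.UMat(cv2.imread("input.jpg"))   # upload the image to the device
blurred = cv2.GaussianBlur(src, (9, 9), 0)  # may execute on the GPU
result = blurred.get()                    # download back to a numpy array
```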
We asked our respondents about their projects' image sensing requirements. In particular, we wanted to learn more about the required image sensor resolution and the number of frames per second the sensor needs to acquire. About half of the respondents say that they normally use image sensors with a resolution between 1 and 5 megapixels running at a frame rate of 15 to 60 frames per second. With that in mind, we conclude that most of the respondents build applications where the image sensor merely monitors a scene and is supposed to deliver an average-quality picture rather than excel in resolution, linearity or dynamic range. With an eye on the distribution of markets, this is probably related to the relevance of IoT and the applications and products our respondents build on top of that technology.
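These figures also determine the raw bandwidth the camera interface has to sustain, which leads directly to the interface question below. A back-of-the-envelope calculation for the upper end of the typical setup, assuming 8-bit monochrome pixels and no compression:

```python
# Raw data rate for a 5 MP sensor at 60 fps,
# assuming 8-bit monochrome pixels (1 byte) and no compression.
pixels = 5_000_000       # 5 megapixels
fps = 60                 # frames per second
bytes_per_pixel = 1      # 8-bit monochrome (assumption)

rate = pixels * bytes_per_pixel * fps / 1e6   # megabytes per second
print(f"{rate:.0f} MB/s")                     # -> 300 MB/s
# ~300 MB/s fits into USB 3.0's roughly 400 MB/s of usable bandwidth,
# but is well beyond Gigabit Ethernet's ~125 MB/s.
```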
Which type of physical interface do you prefer for connecting a camera module to your processing platform?
Camera modules and digital industrial cameras come with different interfaces, which in turn have a massive effect on the integration effort.
Most of our respondents seem to use plug-and-play interfaces such as USB 3.0 and Gigabit Ethernet, whereas only a few of them use MIPI CSI-2 and parallel interfaces. From an experience and prototyping point of view this clearly makes sense, as the Makers and students among our respondents want to focus on building an application rather than on integrating low-level embedded camera interfaces, which takes time and money. Interesting in our view is the relatively high importance of LVDS-based connections in our respondents' system designs, even though LVDS is the least standardized and most complex way to connect camera modules. However, the percentage corresponds to the share of respondents who prefer FPGAs as their processing architecture, and also to the share of students who might be involved in FPGA design as part of their studies.
What distance does a cable connection between image sensor and processing unit normally have to bridge for your applications, products or services?
Despite the fact that most of the respondents use standardized interfaces such as USB 3.0 and Gigabit Ethernet, cable length seems to be no compelling reason for choosing these interface standards. More than 56% of our respondents say that the distance between the image sensor and the processing unit is normally below one meter in their projects. In our view, this has to do on the one hand with the experience level of the majority of respondents, and on the other hand with the lack of standards, open source driver stacks and bridging technologies in the embedded space. That is why the respondents rather use standardized interfaces with broad operating system support for rapid prototyping.
Application Development
What types of sensors besides image sensors do you normally incorporate into your applications, products or services?
We asked our respondents what other sensors besides image sensors they typically use for building applications. Not surprisingly, we got a very homogeneous distribution across the different options. This reflects the fact that there are hardly any pure vision systems in the embedded industry. In fields like Industry 4.0, Smart Cities or the automotive sector in particular, systems often rely on a variety of different sensors in order to collect data, no matter whether this data is fused at the edge or by a central processing unit.
How often do you use open source hardware or software technologies in your projects, applications or services?
The Imaginghub community builds on users who freely exchange knowledge and source code with each other. That is why we wanted to once again underline the importance of open source hardware and software technologies for the embedded industry, and in particular for Makers and start-ups. The results impressively show that almost all respondents use open source software libraries or hardware documents to realize their projects.
What are the greatest technology challenges you are periodically facing in your embedded development projects?
With every project comes a new challenge a developer has to face. We asked our respondents what they consider their biggest project challenges when engaging in embedded system development. Most of the respondents say that algorithm design is the biggest hurdle to overcome, followed by real-time requirements and image optimization. Getting the hardware running by developing the required driver stack and implementing interfaces also seems to be one of the bigger challenges for our respondents. While algorithm and real-time system design have always been a matter of experience, achieving sufficient image quality and the availability of, for example, complete sensor driver stacks seem to be a real challenge in the embedded industry.
How long does the design phase of your projects normally take to finish, starting from research to final product?
We wanted to know how fast our respondents run through the different development phases, from first idea to final product. Interestingly, more than 70% of our respondents have very short design-in cycles of less than 12 months. This might have something to do either with the type of company the majority of respondents belong to, or with the high share of Makers and Hobbyists among the participants. In both cases the objective is probably to deliver a working design rather than a final product that complies with sector-specific regulations and sophisticated product requirements. In our view, this picture is also related to the extensive use of standardized interfaces, which shows the need for low-complexity sensor integration and a clear focus on software application development in the fields of IoT and Machine Learning.