Dataspeed and Autobrains had the privilege of working together to create a Drive-by-Wire and sensor-equipped demo vehicle that showcased Autobrains’ self-learning AI for CES 2023. We recently spoke with Adir Lev, Head of Tech Ops; Tal Glantz, Senior Director of Technical Business Development; and Sophia Eichler, Director of Marketing, at Autobrains to discuss the collaborative project and learn more about their solutions.
Adir, Tal, and Sophia shared their experiences and insights about the collaboration process, the challenges they faced, and the advantages of working together.
Sophia Eichler: Autobrains is an Israeli automotive AI software company. We develop technologies and solutions for advanced driver-assistance systems and autonomous driving. While today’s ADAS and AD approaches are based on supervised deep learning methods, our self-learning, signature-based technology at Autobrains is based on unsupervised deep learning.
Tal Glantz: Today, the dominant AI technology on the market is called supervised deep learning. This approach is based on modeling neural networks and training those models with labeled data, which is why it is called “supervised.” You basically have to train the system on a large number of labeled examples in order to produce high-accuracy results in real time. The problem is that this requires very deep networks that are highly inefficient: they have high power consumption and require expensive manual labeling of billions of data points.
We at Autobrains developed a different approach, and we call it a paradigm shift. We look at a biological system, the brain, at how it performs the perception task, the navigation task, and the understanding of the environment, and we try to reverse engineer it. In simple terms, our technology looks at thousands of unlabeled images, tries to understand the environment, and then creates one signature that represents the information seen in each image. After that, the system learns about common elements across all those thousands of signatures and clusters them together. You basically end up with a database of what we call concepts, each of which represents an object in the real world. You end up with a concept of a pedestrian, a concept of vehicles, bicycles, traffic signs, and you can go beyond that.
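The pipeline Tal describes, reducing unlabeled images to compact signatures and then clustering them into "concepts", can be sketched in simplified form. The sketch below is an illustrative analogy only, not Autobrains' actual algorithm: the `signature` function is a toy stand-in for a learned encoder, the "images" are synthetic pixel lists, and the clustering is a minimal k-means.

```python
import math
import random

def signature(image):
    """Toy stand-in for a learned encoder: reduce an 'image' (a list of
    pixel intensities in [0, 1)) to a short signature vector, here a
    normalized 4-bin brightness histogram."""
    bins = [0.0] * 4
    for px in image:
        bins[min(3, int(px * 4))] += 1.0
    return [b / len(image) for b in bins]

def dist(a, b):
    """Euclidean distance between two signature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, init_centers, iters=20):
    """Minimal k-means: group signatures into clusters ('concepts')."""
    centers = [list(c) for c in init_centers]
    k = len(centers)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist(p, centers[i]))
            clusters[idx].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute each center as the mean of its cluster
                centers[i] = [sum(c) / len(cl) for c in zip(*cl)]
    return centers, clusters

# Synthetic unlabeled "images": 20 dark scenes and 20 bright scenes.
rng = random.Random(1)
dark = [[rng.uniform(0.0, 0.3) for _ in range(64)] for _ in range(20)]
bright = [[rng.uniform(0.7, 0.99) for _ in range(64)] for _ in range(20)]
sigs = [signature(img) for img in dark + bright]

# Cluster the signatures; the two discovered groups play the role of
# "concepts" found without any labels. Init is deterministic for the demo.
centers, clusters = kmeans(sigs, [sigs[0], sigs[-1]])
print([len(c) for c in clusters])  # the two scene types separate cleanly
```

In the real system the encoder is learned and the concepts correspond to semantic categories such as pedestrians or traffic signs; the point of the sketch is only the shape of the pipeline, unlabeled data in, clustered representations out.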
Our self-learning technology is quite mature, and it has been tested and proven on the road through demo vehicles in various countries. Our technology also meets global regulations and requirements such as GSR and NCAP. Because we operate on these compact representations, we are highly efficient, hardware- and sensor-agnostic, and very modular and scalable in our feature set. We can actually start development even before the actual target device is ready, because we don’t have to rely on the kind of labeling effort other companies do. After we received the first vehicle in the US, a few weeks before the CES demo, we started the integration of the existing perception feature set.
TG: We wrapped up our demo vehicle in a very short time ahead of the CES convention. Thanks to the good collaboration we had with Dataspeed, we achieved great performance from the first features we implemented in the vehicle, which conveyed our quick integration capabilities. In the vehicle, we demonstrated our perception stack for Level 2 products, which included multiple object detection and lane detection, covering vehicles, VRUs, free space, traffic signs, traffic lights, and more.
AL: We got a recommendation, had a few initial talks that went very well, and then we created all the work packages and quotes needed and started working together.
AL: A few weeks ago, I visited Dataspeed to work together on the car and add a few things we needed from our side, and it went very smoothly. The team did everything they could to help us with everything we needed, so we were happy. There were some things that didn’t work well, but they fixed them right away.
TG: At Autobrains, we will continue to work on products and technologies to develop scalable solutions throughout all levels of autonomy to ensure a safe transition from ADAS to autonomous driving.
AL: So far, we are very happy with the operation and we have additional sensors that we want to add to the car, additional integrations that we want to do, so we are looking forward to working together.