Dataspeed HQ recently hosted the latest Detroit Motor City Self Driving 101 Meetup event! We were excited to host the Meetup, as it is inspiring to see local organizations bringing together like-minded individuals to share knowledge and discuss AV topics. Hannah Osborn, Director of Americas Sales and Business Development at LeddarTech, is the group's visionary leader and has been instrumental in the success of the Detroit chapter.

We had the pleasure of having Daniel Bartz, Principal Engineer for Automated Driving at Volkswagen Group of America’s Vehicle Safety Office, come and speak to us about the history of autonomy. The atmosphere was lively as we all gathered around to listen to Daniel’s engaging presentation. Before the presentation, attendees enjoyed networking and delicious pizza, which made for a great evening. After the informational presentation, attendees were able to ask Daniel questions and share in a dynamic discussion. See below for the Q&A from the event!

Daniel Bartz: There were some automated roads in Minnesota that were automated specifically for snow plows. They weren’t automated for daily drivers, but a lot of the snow plows use them during snow storms, when the operators need that extra help. Some of this technology has been deployed in limited applications, and I know that a lot of the places built to test these automated roads, such as the Transportation Research Center (TRC), were built for automated vehicles. That was TRC’s original goal, and then they realized they could use the test track for all sorts of things.

DB: The whole idea of robust systems is that if you take any of the noise factors and tweak them, the system performance should be largely insensitive to variation in any of those noise factors. If you have a super sensor that does all the heavy lifting and then it fails, your system performance takes a huge setback. Whereas, if all the sensors are of relatively equal strength, it is typically easier to build a more robust system, because when you weight your system it is not dependent on any one sensor modality.
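Daniel's point about weighting can be illustrated with a toy fusion sketch (the sensor names, numbers, and weighting scheme below are illustrative assumptions, not anything from the presentation): when several comparable sensors each contribute a similar share of the estimate, losing any one of them degrades the result only slightly.

```python
def fuse(estimates, weights):
    """Weighted average over the sensors that are still reporting.

    estimates: per-sensor readings, with None for a failed sensor.
    weights:   relative trust in each sensor.
    """
    live = [(e, w) for e, w in zip(estimates, weights) if e is not None]
    if not live:
        raise RuntimeError("no sensor data available")
    total = sum(w for _, w in live)
    return sum(e * w for e, w in live) / total

# Balanced weighting: camera, radar, and lidar contribute comparably.
balanced = fuse([10.2, 9.8, 10.0], [1.0, 1.0, 1.0])   # -> 10.0

# If the lidar drops out, the fused estimate barely moves.
degraded = fuse([10.2, 9.8, None], [1.0, 1.0, 1.0])   # -> 10.0
```

With a lopsided weighting (say, 10.0 on one "super sensor"), dropping that sensor would shift the estimate to whatever the remaining sensors report, which is exactly the fragility Daniel describes.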

Daniel Bartz presenting the History of Autonomy

We see this in AI algorithms, where they get really good at certain narrow tasks and then fail at much more general ones. For example, an AI can identify specific breeds of dogs but then sometimes fail to tell whether it is looking at a cat or a dog at all. Compared to a human, the AI will identify the breed correctly far more often, yet it will sometimes miss the obvious fact that the animal is a dog. Even with simple things, AIs sometimes have trouble telling the difference between donuts and bagels. With overoptimizing, we’re seeing a lot more of this in the AI space: a system with great performance under ideal circumstances tends to fall off a cliff in less ideal circumstances. At least, that is my personal experience.

DB: There is research being done, and right now it is kind of DARPA-level research, meaning that explainability in AI is considered an almost impossible problem. One of the things that is also really important in systems engineering is decomposability: taking a big problem, breaking it into small problems, showing that you can solve each small problem over enough of its use cases, and then putting those small pieces back into the bigger puzzle and seeing that it still fits.

A problem with AI algorithms is that sometimes an AI works a certain way as a standalone piece, and when you put it into the bigger puzzle, it works differently. It may even work better, but if it works differently, you can no longer rely on the assumptions you made when you tested it as a standalone part. There are a lot of concepts in systems engineering about decomposability and its counterpart, composability: when I take the pieces apart, can I put them back together and have the whole still work?

DB: That was what Norman Bel Geddes was designing in the 1930s. He said that if we built the infrastructure smart enough, we could have relatively dumb cars on the road. We could have done that. We had the opportunity when we were building the Eisenhower Highway System, but human nature, the way people want to experience driving, led us to miss that opportunity. Some countries don’t have the same type of infrastructure, and in other cultures it might not be the same issue that it is in the US.
