The development of autonomous vehicles (AVs) does not necessarily require artificial intelligence or deep learning; simply put, not all AVs need to be powered by AI. However, the rapid progress and improving accuracy of deep learning have attracted developers seeking to enhance their highly automated driving systems.
But validating the safety of AI-driven AVs remains difficult. Safety researchers are wary of the “black-box” nature of deep learning, just one of many thorny problems. It also remains an open question whether AV designers can verify that a continuously learning AI system or function, once deployed on dedicated hardware in the car, will perform as well as it did when developed and trained on a more powerful computing system.
Despite these issues, experts in AV autonomy and safety recognize that the adoption of artificial intelligence is inevitable.
With the release of the draft UL 4600 specification, Phil Koopman, CTO of Edge Case Research, said: “We’re after full autonomy.”
UL 4600, a safety standard currently under development by Underwriters Laboratories to evaluate autonomous products, neither assumes nor requires the deployment of deep learning in AVs. However, the standard covers the verification of any machine learning and other autonomous functions used in safety-critical applications.
An automotive-grade software toolkit for deep learning
Against this backdrop, NXP Semiconductors has launched the eIQ Auto deep learning toolkit, which the company says delivers up to a 30-fold performance improvement for automotive AI applications and helps customers develop them faster.
“Most deep learning frameworks and neural networks developed to date are used in consumer applications such as vision, speech and natural language,” said Ali Osman Ors, director of automotive AI strategy and partnerships at NXP Semiconductors. “They were not necessarily designed with safety-critical applications in mind.”
As a leading automotive chip supplier, NXP is further improving its software toolkit to comply with ASPICE (Automotive SPICE), a set of software process improvement and capability assessment guidelines developed by the German automotive industry.
Specifically designed for NXP’s S32V234 processor, the eIQ automated toolset will help AV developers optimize embedded hardware development of deep learning algorithms and accelerate time-to-market, NXP said.
When asked whether similar automotive-grade deep learning toolkits already exist, Ors said: “Some automotive OEMs may have designed their own tools in-house. But as far as I know, no other automotive chip supplier offers an automotive-grade software toolkit for deep learning like ours.”
Prune, Quantize, Compress
Today, the processes of data preparation, training (learning) and AI inference for embedded systems are well understood.
AV developers are said to collect data at 4 gigabits per second as their test vehicles drive on public roads. Cleaning, annotating and feeding such a volume of data into training is prohibitively expensive. In some cases, the cost of the data-labeling process alone can cripple an AV startup.
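To put that data rate in perspective, a quick back-of-envelope calculation (using the 4 Gbit/s figure above; the eight-hour test day is our illustrative assumption, not from the article) shows why storage and labeling costs balloon:

```python
# Back-of-envelope: data volume from a 4 Gbit/s AV sensor suite.
# The 8-hour test day is an illustrative assumption.
GBIT_PER_SEC = 4
bytes_per_sec = GBIT_PER_SEC * 1e9 / 8        # 0.5 GB per second
tb_per_hour = bytes_per_sec * 3600 / 1e12     # 1.8 TB per hour
tb_per_day = tb_per_hour * 8                  # 14.4 TB per 8-hour test day

print(tb_per_hour, tb_per_day)
```

At terabytes per vehicle per day, even a small test fleet generates more data than a human labeling team can realistically annotate in full.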
But equally challenging for AV designers is the daunting task of optimizing AI models and deploying them on inference engines. Ors explained that NXP’s tools speed up the “quantization, pruning and compression” of neural networks.
First, pruning removes redundant connections in the neural network structure, eliminating unimportant weights. The newly “pruned” model inevitably loses some accuracy, so it must be fine-tuned after pruning to recover it.
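As a rough illustration, magnitude-based unstructured pruning (a common approach; the article does not specify which pruning method eIQ Auto uses) zeroes out the smallest weights and keeps the rest:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)                  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep only weights above the cut
    return weights * mask

w = np.array([[0.1, -0.2], [0.9, -0.4]])
print(prune_by_magnitude(w, 0.5))  # the small weights 0.1 and -0.2 become 0
```

In practice the model is then retrained for a few epochs with the pruned weights held at zero, which is the fine-tuning step the article describes.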
Next, quantization creates an “efficient computational process”: weights are grouped by clustering or rounding so that less memory is needed to represent the same number of connections. Another common technique converts floating-point weights to a fixed-point representation by rounding. As with pruning, the model must be fine-tuned after quantization.
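A minimal sketch of the rounding approach is symmetric linear quantization of float weights to int8 fixed-point (illustrative only; the article does not detail eIQ Auto’s exact scheme):

```python
import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 using one shared scale (symmetric quantization)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, e.g. for accuracy evaluation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(dequantize(q, scale) - w))  # rounding error, at most ~scale/2
```

Each int8 weight occupies a quarter of the memory of a float32 weight, which is the storage saving that motivates the technique.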
AV designers then evaluate the accuracy of the converted model by running it on test data the deep learning system has not seen before, and fine-tune the model further.
In addition to this, eIQ Auto partitions the workload and selects the optimal computing engine for each part of the neural network. It speeds up the process of hand-crafting an inference engine, as the tool helps AV designers figure out which tasks run best in a CPU, DSP or GPU. Ors explained that since eIQ Auto has to be very familiar with what’s going on inside the processor, it can’t be used on non-NXP devices.
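The partitioning idea can be sketched as a per-layer cost model: pick the cheapest compute engine for each operation. This toy greedy version uses invented latency numbers and ignores the data-transfer costs a real tool such as eIQ Auto must account for; it only illustrates the concept:

```python
# Hypothetical per-layer latencies in milliseconds; not measured on any real device.
COST = {
    ("conv",     "gpu"): 1.0, ("conv",     "dsp"): 1.5, ("conv",     "cpu"): 6.0,
    ("fc",       "gpu"): 0.8, ("fc",       "dsp"): 0.7, ("fc",       "cpu"): 2.0,
    ("postproc", "gpu"): 5.0, ("postproc", "dsp"): 4.0, ("postproc", "cpu"): 0.5,
}

def partition(layers, engines=("cpu", "dsp", "gpu")):
    """Greedily assign each layer to the engine with the lowest modeled cost."""
    return {layer: min(engines, key=lambda e: COST[(layer, e)]) for layer in layers}

print(partition(["conv", "fc", "postproc"]))
# In this toy model, convolutions land on the GPU, dense layers on the DSP,
# and control-heavy post-processing on the CPU.
```

A production tool replaces the hand-written cost table with detailed knowledge of the target processor, which is why, as Ors notes, eIQ Auto is tied to NXP silicon.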
In addition to model optimization tools (scripts, compiler toolchains) and runtime libraries (C/C++, vector DSP, NEON), eIQ Auto provides interfaces to training frameworks and model formats such as TensorFlow, ONNX, Caffe and PyTorch.
In summary, the toolkit helps customers quickly move from a development environment to an AI implementation that meets stringent automotive standards.
Application of artificial intelligence in AV
Today, “vision” is the most popular AI application in the car, using neural networks to classify objects in images. Vision is also used for driver and cabin monitoring, facial recognition and occupancy detection.
Other potential AI applications in the automotive sector include radar.
Future radars are expected to use neural networks to classify road users from radar images. However, Ors pointed out that the use of AI in radar remains undeveloped: “Due to the regulations related to using radar as a sensor, the barriers to entry can be high.” He added that radar is also expensive compared with CMOS image sensors, which means radar data cannot be easily obtained, limiting the available datasets.
AI is also expected to be applied to sensor fusion, such as combining radar and vision. However, there is no industry consensus on when to fuse the two sensor streams. “Early fusion versus late fusion is still being debated,” Ors said.
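The two strategies can be sketched in a few lines (purely illustrative; the feature dimensions and blending weight are invented). Early fusion combines raw sensor features before a single shared model, while late fusion runs one model per sensor and merges their outputs:

```python
import numpy as np

def early_fusion(radar_feat, camera_feat):
    """Early fusion: concatenate sensor features, then feed one shared network."""
    return np.concatenate([radar_feat, camera_feat])

def late_fusion(radar_scores, camera_scores, w=0.5):
    """Late fusion: each sensor has its own network; blend their class scores."""
    return w * radar_scores + (1 - w) * camera_scores

fused_in = early_fusion(np.zeros(16), np.zeros(32))   # one 48-dim input vector
fused_out = late_fusion(np.array([0.2, 0.8]), np.array([0.6, 0.4]))
print(fused_in.shape, fused_out)                      # (48,) [0.4 0.6]
```

The trade-off under debate: early fusion lets the network exploit cross-sensor correlations, while late fusion keeps each sensor pipeline independent and easier to validate when one sensor fails.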
Today, most AVs tested are equipped with power-hungry hardware, which is not ideal for high-volume car production. NXP hopes its new eIQ toolkit will enable customers to deploy powerful neural networks “in an embedded processor environment with the highest level of security and reliability.”