Researchers predict more autonomous vehicles by 2030, better ADAS tech needed

2024-04-08 Source: https://www.repairerdrivennews.com/2024/02/07/researchers-predict-more-autonomy-use-by-2030-better-adas-tech-needed/

ABI Research has reviewed the current state of autonomous driving technology in the automotive industry, alongside artificial intelligence (AI), high-performance computing, mapping, and location intelligence, and found that by 2030 the majority of vehicles on the road will offer SAE Level 2+ or higher features.

 

ABI’s latest research paper, “A Scalable Approach to ADAS and Autonomous Driving,” states that 69.3% of vehicles will be so equipped.

 

“Different autonomous applications vary in features and the level of driver involvement,” said James Hodgson, ABI Research smart mobility and automotive research director and author of the whitepaper, in a news release. “Some demand constant supervision, while others permit manual, visual, or cognitive disengagement.

 

“Active safety systems offer limited support, keeping the driver fully in control. In contrast, driverless vehicles eliminate the need for human operators by handling all driving tasks autonomously. Therefore, the automotive industry should adopt a scalable approach to their active safety, semi-autonomous, and fully driverless applications. Maximizing the re-use of components between different feature/disengagement combinations will yield many benefits to the market.”

 

The whitepaper provides an overview of each SAE Level and explores the technology implications of ADAS and active safety, 360-degree perception, high-performance compute, and the redundancy in perception, processing, and software. It also discusses the core role of safety rating agencies in making cars safer and driving the adoption of active safety.
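For reference, the SAE J3016 levels the whitepaper walks through can be summarized in a simple lookup (a minimal sketch; the one-line descriptions are paraphrased from the SAE standard, not quoted from the whitepaper):

```python
# SAE J3016 driving automation levels, paraphrased one-line summaries.
SAE_LEVELS = {
    0: "No Driving Automation - driver performs all driving tasks",
    1: "Driver Assistance - steering OR speed support (e.g., adaptive cruise)",
    2: "Partial Automation - steering AND speed support, driver supervises",
    3: "Conditional Automation - system drives, driver must take over on request",
    4: "High Automation - no driver needed within a defined operating domain",
    5: "Full Automation - no driver needed under all conditions",
}

def describe(level: int) -> str:
    """Return a short description for an SAE level; raises KeyError otherwise."""
    return SAE_LEVELS[level]
```

The “Level 2+” features the report focuses on sit between levels 2 and 3: richer capability than base Level 2, but still with the driver supervising.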

 

To consumers, autonomous vehicle (AV) features and driver supervision combinations seem radically different in terms of their value, cost, and overall impact on their personal mobility experience, Hodgson wrote.

 

“However, from an architecture perspective, these applications share a common set of enabling technologies, with additional components added to enable more features and greater redundancy in the more comprehensive autonomous vehicle implementations.

 

“Therefore, the automotive industry should adopt a scalable approach to their active safety, semi-autonomous, and fully driverless applications.”

 

Maximizing the re-use of components between different feature and disengagement combinations will result in the following benefits for the market, he added:

 

Cost reduction via “a common set of enabling technologies powering active safety, supervised autonomous driving, and unsupervised autonomous driving;” and

A gradually ramped-up experience for consumers, as common components deployed across advanced driver assistance system (ADAS) technologies and supervised autonomous driving build familiarity with, and preparation for, future unsupervised autonomous driving.

The ADAS approach most widely adopted by automakers is camera-and-radar sensor fusion, which yields more robust perception, according to Hodgson’s research, but he noted that camera sensors struggle in extreme lighting and weather conditions.

 

“In contrast, radar sensors have relatively poor resolution, but continue to perform in the same circumstances that compromise camera performance,” he wrote. “Radar sensors also deliver useful inputs such as range and relative velocity.”
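The complementary strengths Hodgson describes can be illustrated with a toy fusion step (a hedged sketch with invented field names and thresholds, not ABI’s or any OEM’s actual pipeline): the camera supplies the object class, the radar supplies range and relative velocity, and a fused track keeps the best of each.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    label: str            # cameras classify well ("car", "pedestrian", ...)
    bearing_deg: float    # good angular resolution
    confidence: float     # degrades in glare, darkness, heavy rain

@dataclass
class RadarDetection:
    range_m: float            # radar measures range directly
    radial_speed_mps: float   # and relative velocity via Doppler
    bearing_deg: float        # but with relatively poor resolution

@dataclass
class FusedTrack:
    label: str
    range_m: float
    radial_speed_mps: float
    bearing_deg: float

def fuse(cam: Optional[CameraDetection], rad: RadarDetection,
         min_cam_conf: float = 0.5) -> FusedTrack:
    """Naive fusion: trust the camera for class and bearing when it is
    confident; always trust the radar for range and relative velocity."""
    if cam is not None and cam.confidence >= min_cam_conf:
        label, bearing = cam.label, cam.bearing_deg
    else:
        # Camera compromised (e.g., low sun): fall back to a radar-only track.
        label, bearing = "unknown", rad.bearing_deg
    return FusedTrack(label, rad.range_m, rad.radial_speed_mps, bearing)
```

Production systems fuse at the track level with filtering and association logic far beyond this, but the division of labor between the two sensors is the same.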

 

Alternatively, stereovision-based systems such as Subaru’s EyeSight deliver high-performance ADAS by building a 3D model of the environment around the vehicle, much as human binocular vision determines depth and range, according to Hodgson.
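The stereovision principle mentioned here, triangulating depth from the disparity between two camera views, reduces to one formula: depth = focal length × baseline / disparity. A minimal sketch (the numbers below are illustrative, not EyeSight’s actual parameters):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (m) of a point seen by a rectified stereo camera pair.
    focal_px: focal length in pixels; baseline_m: distance between the
    two cameras; disparity_px: horizontal pixel shift of the point
    between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity or a mismatch)")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 35 cm baseline, 10 px disparity -> 35 m away.
print(depth_from_disparity(1000.0, 0.35, 10.0))  # 35.0
```

Note the inverse relationship: nearby objects produce large disparities and precise depth, while distant objects produce tiny disparities, which is why stereo range accuracy falls off with distance.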

 

Increasingly, OEMs are bringing Level 2 systems to market that use compute platforms and software originally conceived for Level 3 and Level 4 systems, Hodgson noted. These enable features such as automatic lane changes, highway exits, and target-speed control on highways with or without human input, as well as hands-free city driving.

 

Higher levels of unsupervised automated driving include lidar, imaging/HD radar, and duplicate AV systems-on-a-chip (SoCs), according to Hodgson’s research.
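The duplicate SoCs mentioned here point at a simple pattern: run the same perception workload on two independent chips and cross-check the outputs before acting. A toy sketch of that comparison (the tolerance and data layout are invented for illustration, not drawn from the whitepaper):

```python
def cross_check(primary: list[float], secondary: list[float],
                tolerance: float = 0.5) -> bool:
    """Return True when two redundant channels agree within tolerance.
    Each list holds, e.g., object ranges (m) from one SoC's perception stack."""
    if len(primary) != len(secondary):
        return False  # different object counts -> disagreement
    return all(abs(a - b) <= tolerance for a, b in zip(primary, secondary))

# Agreeing channels pass; a diverging range estimate fails the check,
# which would trigger a fallback such as slowing down or handing back control.
print(cross_check([12.1, 40.0], [12.3, 39.8]))  # True
print(cross_check([12.1, 40.0], [12.3, 35.0]))  # False
```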

 

Over time, some additional sensor technologies, particularly imaging/HD radar, are expected to be incorporated into Level 2+ systems to further improve their safety. Unlike active safety systems, which tend to be shaped by safety rating agency testing protocols, the success of Level 2+ systems will depend on their real-world performance, creating an opportunity for imaging/HD radar in the future.

 

“Overall, a Level 2+ strategy takes advantage of the relatively lower costs, lower risk, and broader regulatory accommodation of supervised automation to kick-start the autonomous vehicle revolution.”

 

Hodgson added that while Level 4 vehicles carry more technology, they aren’t expected to be available for consumers to buy in the short or medium term.

 

“[F]ully driverless vehicles will be deployed in a robotaxi context, with fleet operators employing as few vehicles as possible to fulfill the mobility demand,” he wrote. “Driverless vehicle deployments in support of people transit on public roads are still highly limited, and expected to remain so until legislation evolves to accommodate the introduction of driverless vehicles at scale.”

