Leveraging data analytics in the automotive industry for R&D
Our AI-based virtual sensors and the related platform are used to replace existing hardware sensors or to measure quantities that cannot be measured directly. In development fleets specifically, automotive engineers need additional insights to understand vehicle behavior and dynamics and to validate the design. However, equipping development vehicles with additional hardware sensors is expensive (up to €500,000 per vehicle) and unreliable (sensors and equipment may fail several times during testing). Furthermore, data acquisition and management for development vehicles lack consistency. This is why only a few development vehicles, usually up to five, are equipped with additional hardware sensors. The resulting statistical basis is very small, so conclusions are limited. Additionally, with the scaled introduction of electric vehicles to the market and the limited experience of traditional OEMs in this area, the risk of design errors is now higher than ever. With our platform and virtual sensors, only one development vehicle needs to be equipped with additional sensors as a reference. This vehicle is used to train the corresponding virtual sensors, which are then automatically deployed in the cloud. This way, we can turn every standard vehicle into a fully equipped development vehicle at low cost and within seconds, and thus extend the development fleet to 10, 100 or even 10,000 vehicles. With our end-to-end platform, we can centralize the data and enable visualization as well as further big data analytics for engineers. As a result, the cost of equipping test vehicles can be reduced by up to 95% while the statistical basis for decision-making is vastly increased.
Introduction: the role of data science in automotive R&D
Automotive companies face increasingly stringent regulations regarding CO2 reduction. The fuel consumption of conventional and hybrid electric vehicles can be reduced through more lightweight mechanical components. Today, mechanical components are often oversized to spare customers any quality issues in the field. Optimizing component weight requires detailed information about actual driving profiles, such as acceleration and braking habits.
Reliable estimations of the interactions between the different parts of a given or future powertrain, on a solid methodological basis, are critical in durability engineering, within the ever shorter development processes of conventional vehicles and today’s complex electrified cars. An incorrect requirement specification for a single powertrain component can trigger unplanned R&D cycles that are likely to jeopardize the tight overall development timeline of an automaker.
Physical quantities like strains, stresses, forces, torques and temperatures are the foundation of fatigue and durability engineering in the automotive industry. However, measuring their effective levels usually requires effort- and cost-intensive, sensitive laboratory equipment. Equipping just one development vehicle with hardware sensors can incur up to €500,000 in initial purchase costs, with the Total Cost of Ownership (TCO) being higher still due to mounting, calibration and maintenance. Furthermore, many of these hardware sensors have limited reliability and long-term durability and need regular recalibration: it is not uncommon that additional hardware sensors fail and need replacement during a test campaign. Sometimes a failed hardware sensor cannot even be replaced, because interrupting the test campaign would be too costly. Because of these high costs and efforts, manufacturers usually equip only a small number of development cars with additional hardware sensors. This traditional methodology yields key data for durability engineering on a limited and unreliable statistical basis. For electrified powertrains, energy recuperation and overloads due to high torque peaks during launches, for instance, bring forth different usage profiles and load spectra. Solid statistical material is required as early as possible in the vehicle development process to reduce the risk of product recalls.
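To illustrate how a counted load spectrum feeds into durability assessment, the following sketch applies the classical Palmgren-Miner linear damage rule. The function name, the spectrum values and the S-N curve parameters are purely illustrative assumptions, not the paper's actual damage model:

```python
import numpy as np

def miner_damage(amplitudes, counts, S_ref, N_ref, k):
    """Palmgren-Miner linear damage sum for a counted load spectrum.

    The S-N (Woehler) curve is modeled as N(S) = N_ref * (S_ref / S) ** k;
    component failure is expected once the damage sum reaches 1.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    counts = np.asarray(counts, dtype=float)
    N_perm = N_ref * (S_ref / amplitudes) ** k   # permissible cycles per class
    return float(np.sum(counts / N_perm))        # accumulated damage

# Hypothetical spectrum: three stress amplitude classes with cycle counts
D = miner_damage(amplitudes=[200.0, 150.0, 100.0],
                 counts=[1e4, 1e5, 1e6],
                 S_ref=200.0, N_ref=1e5, k=5.0)
```

A damage sum below 1 indicates that the component is expected to survive the counted spectrum; comparing such sums across many vehicles is exactly where a broad statistical basis pays off.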
Another constraint in the automotive industry stems from the inherent characteristics of durability issues: if an issue with a component has “slipped through” the durability validation program before the Start Of Production (SOP), it will take months until the most demanding drivers encounter it in the field. Again, the statistical basis of these driver profiles and quality issues will be far too limited for sound conclusions. Typically, it takes 12 to 18 months after SOP until an automotive manufacturer has gathered enough field data to establish a reliable list of the top 10 or top 20 durability and reliability issues of a new vehicle model. Sticking with traditional methods in the development process is not the best option for keeping the cost of quality issues under control.
This publication describes a data analytics solution and demonstrates how AI-based virtual sensors can be used for predictive analytics of fatigue and damage when applied to fleets of development cars.
Methodology and results
What are virtual sensors based on CAN-bus signals?
The paper describes the data analytics tools and the methodology by which virtual sensors for steering forces, tie rod forces and suspension displacement can be trained for one specific vehicle model, based on our current demo vehicle, a Tesla Model 3 Performance. For the initial algorithm training, the car was equipped with hardware sensors, which are explained in detail. The overall solution architecture comprises a data logger with a proprietary software stack and a three-fold cloud architecture consisting of a data lake, the virtual sensor hub and a web interface.
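The paper does not disclose the model class behind its virtual sensors; as a minimal stand-in, the sketch below fits a ridge-regularized linear map from synthetic CAN-like features (placeholders for signals such as steering angle, speed and lateral acceleration) to a reference hardware measurement. All variable names and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for logged CAN signals and a hardware-measured
# reference quantity (e.g. a tie rod force).
n = 2000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

def train_virtual_sensor(X, y, lam=1e-3):
    """Closed-form ridge regression: CAN features -> reference signal."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # add bias column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def predict(w, X):
    """Evaluate the trained virtual sensor on new CAN data."""
    return np.hstack([X, np.ones((X.shape[0], 1))]) @ w

w = train_virtual_sensor(X, y)
y_hat = predict(w, X)
```

Once such a mapping is trained on the one instrumented reference vehicle, it can be evaluated in the cloud on CAN data from any standard vehicle of the same model, which is the core of the approach described above.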
Test results show that the virtual sensors correlate accurately with the dedicated hardware sensors for steering forces, tie rod forces and suspension displacement. Their ability to reproduce both the absolute values and the usage patterns of the corresponding physical quantities is very high.
Conditions of trust in AI-based virtual sensors for development fleets
A frequent objection against the use of data science, machine learning and artificial intelligence is their lack of explainability: it is often impossible to explain why an AI-based algorithm has yielded the result XY. Was it pure coincidence that a brake system or track rod failed when the virtual sensor indicated dangerous fatigue or an anomaly?
The present methodology overcomes these intrinsic limitations of AI through rigorous data preparation and feature selection based on traditional mechanical and electrical/electronic engineering know-how. Statistical methods demonstrate the validity of the assumptions made and of the generated insights.
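One simple statistical check that can support such engineering-driven feature selection is to screen candidate CAN signals by their correlation with the target quantity. The function, the signal names and the threshold below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def screen_features(X, y, names, min_abs_corr=0.2):
    """Rank candidate CAN signals by absolute Pearson correlation with
    the target quantity and keep those above a plausibility threshold."""
    keep = []
    for j, name in enumerate(names):
        r = float(np.corrcoef(X[:, j], y)[0, 1])
        if abs(r) >= min_abs_corr:
            keep.append((name, round(r, 3)))
    return sorted(keep, key=lambda t: -abs(t[1]))

# Hypothetical signals: one physically relevant, one irrelevant
rng = np.random.default_rng(1)
n = 500
steering_angle = rng.normal(size=n)
radio_volume = rng.normal(size=n)            # physically irrelevant signal
tie_rod_force = 2.0 * steering_angle + 0.1 * rng.normal(size=n)

X = np.column_stack([steering_angle, radio_volume])
selected = screen_features(X, tie_rod_force, ["steering_angle", "radio_volume"])
```

In practice the screening only complements domain knowledge: a signal is retained when both the mechanical reasoning and the statistics support it, which is what makes the resulting virtual sensor defensible.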
Thus, the outcome of virtual sensors following the proposed big data analytics methodology is never the fruit of random coincidence.
Good engineering practice in machine learning requires that a portion of the input data is kept aside rather than used for training the algorithms. These “reserved” data are used later to validate the already trained algorithms. This is a standard way to demonstrate the reliability of an AI-based virtual sensor.