Posted in Electronics

Advanced Materials Innovation Driving Autonomous Vehicle Development

Five years ago, Marc Andreessen famously wrote that “software is eating the world” in an op-ed for the Wall Street Journal. It’s hard to argue with the observation, as we’ve continued to see big software-driven disruptions across diverse industries, from transportation (e.g. Lyft, Uber) to media (e.g. Twitter, Facebook) to healthcare and insurance (e.g. Epic Systems, Zenefits).

Autonomous driving is no exception. The two largest companies in the world today by market cap, Apple and Alphabet (Google), are both primarily software companies, and both are working to usher in a new era of self-driving mobility. Together with Tesla, Uber, and many of the automotive OEMs, a whole lot of people are working hard to optimize the algorithms and control software: to improve safety, widen the acceptable operating conditions, and make better, faster actionable observations from the plethora of data coming off an autonomous vehicle’s suite of sensors. All of this, however, relies on the information gathered by the sensors themselves, so there’s a fundamental limit to what software innovation can do alone. Improvements to the hardware are crucial to the advancement of self-driving cars, and advanced materials play a fundamental role in hardware innovation. It’s all built up from advanced materials (and, lest we forget, by advanced materials, as Purnesh’s previous blog touches on).

Cars today already leverage a huge number of sensors, including accelerator pedal position sensors, exhaust gas oxygen sensors, tire pressure sensors, air-fuel ratio sensors, and catalyst temperature sensors, to name but a few. Sensor systems specific to enabling autonomous vehicles include, but are not limited to, cameras, radar, ultrasound, lidar, GPS, odometry, infrared, and computer vision. Furthermore, the concept of sensor fusion is being pushed forward as a way to overcome the individual shortcomings of each type of sensor alone. The idea is that the whole can be greater than the sum of its parts: when the different types of sensors seamlessly exchange information, the surroundings can be mapped more accurately, and the redundancy of information reduces the chance of malfunction or catastrophic failure. Although improved signal processing and sensor fusion are certainly important, we must remember that they are really an augmentation of imperfect data from imperfect hardware. These techniques, like improved AI/machine learning, are not substitutes for higher-fidelity raw surroundings-mapping data, and we gather better surroundings data through physical materials and hardware advancements.
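To make the fusion idea concrete, here’s a minimal sketch of inverse-variance weighting, one of the simplest ways to combine two independent, noisy range estimates. The sensor labels and noise figures below are hypothetical, chosen purely for illustration, not taken from any production system:

```python
# Minimal sensor-fusion illustration: combine two independent, noisy
# range estimates by inverse-variance weighting. Sensor names and
# noise figures are hypothetical, for illustration only.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused_est, fused_var

# Example: radar is noisier but works in fog; camera depth is sharper in clear air.
radar_range, radar_var = 41.8, 4.0    # metres, metres^2
camera_range, camera_var = 40.9, 1.0

dist, var = fuse(radar_range, radar_var, camera_range, camera_var)
print(f"fused range: {dist:.2f} m, variance: {var:.2f} m^2")
# fused range: 41.08 m, variance: 0.80 m^2
```

The fused variance comes out lower than either sensor’s alone, which is the whole-greater-than-the-parts idea expressed as arithmetic.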

Improvements to three-dimensional image sensing and depth detection for computer vision are being driven by creative use of advanced materials. From novel diffraction grating arrays and microlenses to RF-modulated time-of-flight cameras, a range of sensor approaches is being explored and optimized. Time-of-flight cameras are already finding use in some vehicles to assist drivers and increase both passenger and pedestrian safety. Pangaea has recently reviewed materials-based technology capable of inexpensively converting conventional 2D image sensors to 3D sensors without affecting 2D image quality. Instead of just measuring light intensity at each pixel, new image sensors may be able to accurately measure depth as well, in part through the directionality of the captured light.
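For a sense of the scales involved, the depth math in a pulsed time-of-flight measurement is simply distance = c × round-trip time / 2 (RF-modulated cameras recover the same delay from the phase shift of the modulated light). A back-of-the-envelope sketch, with made-up round-trip times:

```python
# Back-of-the-envelope time-of-flight depth: distance = c * round_trip_time / 2.
# The round-trip times below are made-up values for illustration.

C = 299_792_458  # speed of light, m/s

def tof_depth(round_trip_seconds):
    """Distance to target from a measured round-trip time."""
    return C * round_trip_seconds / 2

for t_ns in (10, 67, 200):  # round-trip times in nanoseconds
    print(f"{t_ns:4d} ns round trip -> {tof_depth(t_ns * 1e-9):6.2f} m")
#   10 ns -> ~1.50 m, 67 ns -> ~10.04 m, 200 ns -> ~29.98 m
```

Resolving tens of metres thus requires timing light to within nanoseconds at every pixel, which is exactly where sensor materials and device engineering earn their keep.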

Lidar is another technology being leveraged for autonomous vehicles. Traditionally, lidar fires rapid pulses of laser light and uses a sensor to measure how long each pulse takes to bounce back. System architectures for lidar modules vary widely, since there are many ways to aim the laser and photodetectors: some systems require a spinning assembly for 360° viewing, some house the scanning portion in a solid package with no external moving parts (though most of these still have internal moving mirrors), and some use beam steering and micro-mirrors to scan across the field of view. Again, materials are driving performance up and costs down. Traditional mechanical rotating lidar systems are expensive and hard to scale down. Solid-state MEMS-based technologies offer the promise of much lower cost, since they can leverage CMOS infrastructure, along with higher reliability due to the lack of moving parts and smaller system form factors, all with equivalent or superior HD sensing performance. As I’ve discussed in the past (link), this use case is just one of an already long list of applications enabled by MEMS and materials science & engineering at the micro/nanoscale.
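Whatever the steering mechanism, spinning assembly, MEMS mirror, or otherwise, each lidar return reduces to a range plus the beam’s pointing angles at fire time, which converts directly into a 3D point. A minimal sketch of that conversion, with made-up sample returns:

```python
# Convert raw lidar returns (range + beam angles) into Cartesian points.
# This geometry is common to spinning, MEMS, and other beam-steered designs;
# the sample returns below are made-up values for illustration.
import math

def polar_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Map one lidar return to (x, y, z) in the sensor frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A few hypothetical returns: (range in metres, azimuth in deg, elevation in deg)
returns = [(12.4, 0.0, -1.0), (12.6, 0.2, -1.0), (35.1, 90.0, 2.0)]
cloud = [polar_to_xyz(*r) for r in returns]
for p in cloud:
    print(f"x={p[0]:7.2f}  y={p[1]:7.2f}  z={p[2]:6.2f}")
```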

Another area where materials innovation is expected to have an impact is the eye safety of lidar systems. The majority of lidar systems today operate at 905 nm or 1064 nm wavelengths, and unfortunately these are not eye-safe at high power, because the human eye can focus and absorb these wavelengths (even though we can’t actually “see” them). 1550 nm lasers, however, are not focused by the eye, so they can be operated at much higher power levels, which translates into improved resolution and frame rates. The challenge is that the typical materials used in CMOS image sensors do not absorb light well in this regime. Exciting materials development work on strained superlattices of Group IV elements, quantum dot films, germanium photodetectors, and novel silicon surface treatments is pushing the envelope in IR image sensing and better lidar.
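The silicon limitation follows directly from its bandgap: a photon is only absorbed if its energy hc/λ exceeds the gap energy, giving an absorption cutoff wavelength λ_c = hc/E_g. A quick check with approximate room-temperature bandgap values shows why germanium photodetectors are attractive at 1550 nm:

```python
# A photon is only absorbed if its energy exceeds the detector's bandgap,
# so the absorption cutoff wavelength is lambda_c = h*c / E_g.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

bandgaps_ev = {  # approximate room-temperature bandgaps
    "Si": 1.12,
    "Ge": 0.67,
}

for material, eg in bandgaps_ev.items():
    cutoff_nm = H_C_EV_NM / eg
    print(f"{material}: cutoff ~{cutoff_nm:.0f} nm")
# Si: cutoff ~1107 nm  -> effectively blind at 1550 nm
# Ge: cutoff ~1851 nm  -> can detect 1550 nm
```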

Autonomous driving is a problem worth solving. Thousands of lives could potentially be saved every year by preventing automotive fatalities caused by human error, to say nothing of reduced traffic congestion and CO2 emissions. And the market size is not immaterial either: industry observers estimate that the sensor market for autonomous cars will grow roughly sevenfold, from $3B in 2016 to $22B in 2026 (Yole Développement, October 2015).
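As a quick sanity check on that projection, $3B to $22B over ten years is about a 7.3× increase, which implies a compound annual growth rate of roughly 22%:

```python
# Implied compound annual growth rate for the $3B (2016) -> $22B (2026) projection.
start, end, years = 3.0, 22.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{end / start:.1f}x over {years} years -> CAGR ~{cagr:.1%}")
# 7.3x over 10 years -> CAGR ~22.0%
```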

Software relies on advanced materials to run, so it will certainly rely on advanced materials to drive.

Matt Cohen, Associate, Pangaea Ventures Ltd. Matt holds an MPhil in Micro- & Nanotechnology Enterprise from the University of Cambridge and graduated summa cum laude from the University of Pennsylvania with a BSE in Materials Science & Engineering.
