28.03.2024

Apple just gave us all a look inside the AI of its secretive self-driving car project

The company is working on a new approach to help self-driving cars understand where other cars, cyclists, and pedestrians are located nearby by reading data from a LiDAR sensor, according to a publicly posted research paper that has not been peer-reviewed.

The research not only shows that autonomous car research is actually taking place at Apple, but also that the company is working with the same LiDAR technology as the rest of the field. This sets it apart from Apple’s history of relying on proprietary technologies like FaceID for a competitive edge.

Apple is beginning to show the world that it’s undertaking real, competitive research for its secretive autonomous car project.

A LiDAR sensor is basically a spinning laser that captures a 360° view of the car’s environment by bouncing a laser beam off surrounding objects, many times per second. The objects reflect the light back onto the sensor, and the distance from the sensor to the object is calculated based on the time it takes for the light to bounce back.
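To make that concrete, here is a minimal Python sketch of the time-of-flight idea behind LiDAR ranging; the pulse timing in the example is illustrative and is not drawn from Apple’s paper.

```python
# Minimal sketch of the time-of-flight principle a LiDAR sensor relies on.
# The example timing below is illustrative; it does not come from Apple's paper.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second

def distance_from_round_trip(round_trip_time_s: float) -> float:
    """Distance to an object, given how long a laser pulse takes to return.

    The pulse travels out and back, so the one-way distance is half the
    total path covered in that time.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# A return after about 66.7 nanoseconds corresponds to an object roughly 10 m away.
print(distance_from_round_trip(66.7e-9))  # ~10.0
```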

A LiDAR sensor gives the car’s computer clouds of points. Apple’s proposed AI system would take these points and find the patterns that indicate things to avoid, such as a person, a bicycle, or another vehicle, as well as the patterns of open roadway that is safe to traverse. Apple’s method, called “VoxelNet”, would use a deep neural network to group the data points into simple boxes, forgoing more complex efforts to accurately calculate every swoop and curve of the data cloud.
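The actual VoxelNet architecture goes further and learns features inside each box with a deep network, but the grouping step itself is easy to picture. The Python sketch below bins raw (x, y, z) points into coarse voxel cells; the grid origin and voxel size are assumed values chosen for illustration, not the settings in Apple’s paper.

```python
import numpy as np

# Hypothetical sketch of the voxelization step: raw LiDAR points (x, y, z)
# are binned into a coarse 3D grid instead of being modelled point by point.
# Grid extents and voxel size are made-up values, not those in Apple's paper.

VOXEL_SIZE = np.array([0.2, 0.2, 0.4])    # metres per voxel along x, y, z
RANGE_MIN = np.array([0.0, -40.0, -3.0])  # lower corner of the region we keep

def voxelize(points: np.ndarray) -> dict:
    """Group an (N, 3) array of points into voxel cells.

    Returns a dict mapping each occupied voxel index (ix, iy, iz)
    to the list of points that fall inside it.
    """
    indices = np.floor((points - RANGE_MIN) / VOXEL_SIZE).astype(int)
    voxels: dict = {}
    for point, idx in zip(points, indices):
        voxels.setdefault(tuple(idx), []).append(point)
    return voxels

# Example: three points, two of which land in the same voxel.
cloud = np.array([[10.05, 1.02, 0.3],
                  [10.10, 1.05, 0.2],
                  [25.40, -3.7, 1.1]])
print(len(voxelize(cloud)))  # 2 occupied voxels
```

Roughly speaking, working on a fixed grid of boxes like this lets a network process an irregular point cloud with the same kind of convolutional machinery used on images.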

The researchers were limited to using a publicly available dataset to demonstrate their work, likely so as not to divulge Apple’s internal data. But they did use a few tricks to amplify what the AI system could learn from that data, including scaling point clouds up and down to emulate vehicles of different sizes, and rotating the entire dataset sideways so it appears to the AI system that the car is turning.
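As a rough illustration of those two tricks, the sketch below randomly rescales a point cloud and rotates it about the vertical axis; the scale and angle ranges are assumptions chosen for the example, not the values the researchers used.

```python
import numpy as np

# Illustrative sketch of the two augmentation tricks described above:
# rescaling a point cloud to mimic vehicles of different sizes, and rotating
# it about the vertical axis so the scene looks as if the car is turning.
# The scale and angle ranges are example assumptions, not Apple's settings.

def augment(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a randomly scaled and rotated copy of an (N, 3) point cloud."""
    scale = rng.uniform(0.95, 1.05)             # slightly larger or smaller objects
    angle = rng.uniform(-np.pi / 4, np.pi / 4)  # yaw rotation, as if turning
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    rotation = np.array([[cos_a, -sin_a, 0.0],
                         [sin_a,  cos_a, 0.0],
                         [0.0,    0.0,   1.0]])
    return (points * scale) @ rotation.T

rng = np.random.default_rng(0)
cloud = np.array([[10.0, 1.0, 0.3], [25.4, -3.7, 1.1]])
print(augment(cloud, rng))
```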

Apple’s proposed “VoxelNet” method uses a deep network to group LiDAR data points into bounding boxes that a self-driving car would recognize as obstacles to avoid. (Screenshot/Apple Research Paper)

Apple has slowly started to open up about its work in artificial intelligence, likely in part because of the growing pressure of watching companies like Google, Facebook, Microsoft, and to some extent even Amazon regularly publish work in academic journals, to much fanfare.

Artificial intelligence research is still inextricably tied to academia. Facebook’s Yann LeCun founded the company’s AI lab in New York so he could continue teaching at New York University, Google’s AI whiz Geoff Hinton still holds a position at the University of Toronto, and even Apple’s head of AI research, Russ Salakhutdinov, still regularly publishes as part of his Carnegie Mellon professorship. In academia, the AI labs that publish often attract the best talent. The same is true for AI labs in the private sector.

Apple has publicly published only three other AI research papers, though the first won a top conference prize. A recent BuzzFeed report found that competing tech companies posted hundreds of papers online over the same period in which Apple published its three, leading to a perception that Apple is lagging in cutting-edge research. But it does seem to be trying to catch up.

Last year the company hired Salakhutdinov, a highly regarded figure in the field, to lead its AI research efforts. And in an interview with Bloomberg last July, Apple CEO Tim Cook confirmed long-standing rumors of the company’s work on autonomous vehicle technology for the first time, and suggested that’s where the company was focusing its AI efforts. Though he revealed little about its plans within the space, he alluded to its ambitions: “We sort of see self-driving cars as the mother of all AI projects”, he said. “It’s probably one of the most difficult AI projects to work on.”
