
As carmakers race toward autonomous driving, UC San Diego researchers are looking out for you.
At the turn of the 20th century, the idea of a “horseless carriage” was met with skepticism by those forced to share the road. Today those reservations are rekindled with the prospect of the driverless car. And though removing the human element seems as inevitable as embracing the engine was then, autonomy on the roadways comes with its own set of concerns, with safety seated at the top of the list.
Yet before any car can control itself, it has to be smart enough to do so—that’s where UC San Diego researchers are steering the conversation. In the lab of electrical engineering professor Nuno Vasconcelos, researchers are working on technologies that will help cars autonomously identify objects and react quickly enough to avoid collisions. The technology is part of broader interdisciplinary efforts at UC San Diego aimed at creating robotic and software systems that can better cooperate with humans. This requires developing systems capable of interpreting and responding to humans, other robots and the environment in real time. “We’re aiming to build vision systems that will help computers better understand the world around them,” says Vasconcelos, who is affiliated with the Contextual Robotics Institute and the Center for Visual Computing, both at UC San Diego.
The lab’s latest breakthrough shows promise behind the wheel: a new pedestrian detection system that performs with higher speed and accuracy than previous approaches. Instead of costly sensor technology, Vasconcelos’ system uses a dashboard camera and software that can process video images in near real time (two to four frames per second) with nearly half the error rate of image recognition systems developed by other academic and corporate research teams.

“It’s much cheaper to use cameras instead of sensors to identify pedestrians,” says Vasconcelos. “Some sensors cost more than the car itself. We’ve shown that the images do not need to have high resolution in order for the system to work well.”
Naturally, the secret is in the software. Vasconcelos and his team designed the new pedestrian detection system to “think” via a novel algorithm that combines a traditional computer vision classification architecture, known as cascade detection, with the more complex technology of deep learning models. While each system alone has its pros and cons, the two complement each other and the balance between them allows for maximum accuracy as well as efficiency.
Cascade detection is a well-known process that works over multiple stages to crop out areas in an image that do not contain the desired object—in this case, pedestrians. In early stages, the algorithm quickly identifies and discards areas that it can easily recognize as “lacking a person,” like the sky or an empty road. In the later stages, the algorithm processes areas that are considerably harder to classify, such as a tree, which could be recognized as having person-like features due to its shape, color and contours. While this method is fast initially, it isn’t quite powerful enough to distinguish between a pedestrian and very similar objects during the final stages.
This is where deep learning models come in. Deep learning models are capable of complex pattern recognition, which they perform after being trained with hundreds or thousands of examples. But because of their complexity, deep learning models process too slowly for real-time implementation.
Vasconcelos’ algorithm combines the best of both worlds: traditional cascade detection quickly filters out most of the non-pedestrian parts of an image during the early stages, then deep learning models are used to process the more complex parts in later stages.
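The division of labor is easy to see in miniature. The Python sketch below shows the pattern with stand-in stage functions rather than the lab’s actual classifiers: a cheap filter prunes most candidate windows before an expensive model ever runs.

```python
# A minimal sketch of the cascade idea: cheap early stages discard the
# easy negatives so the expensive deep model only scores the survivors.
# Both stage functions are illustrative stand-ins, not the lab's models.

def cheap_stage(window):
    """Fast, weak filter: reject windows that are obviously not
    pedestrians (near-uniform regions such as sky or empty road).
    Stand-in heuristic: variance of pixel intensities."""
    mean = sum(window) / len(window)
    variance = sum((p - mean) ** 2 for p in window) / len(window)
    return variance > 100.0          # keep only "busy" windows

def deep_stage(window):
    """Slow, accurate classifier: in the real system a deep network
    scores the hard survivors. Stand-in: threshold on a fake score."""
    score = sum(window) / len(window) / 255.0  # placeholder for CNN output
    return score > 0.4

def detect_pedestrians(windows):
    """Run the cascade: most windows die cheaply in the first stage,
    so the slow stage runs on only a handful of candidates."""
    survivors = [i for i, w in enumerate(windows) if cheap_stage(w)]
    return [i for i in survivors if deep_stage(windows[i])]

# Toy "image windows" as flat lists of pixel intensities (0-255).
sky = [200] * 64                                # uniform: rejected early
textured = [(i * 37) % 256 for i in range(64)]  # varied: reaches deep stage
print(detect_pedestrians([sky, textured]))      # -> [1]
```

The payoff is in the counting: if the cheap stage rejects, say, 95 percent of windows, the deep model does 95 percent less work, which is what keeps the pipeline near real time.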
According to Vasconcelos, this is the first algorithm to incorporate stages of deep learning into cascade detection. “No previous algorithms have been capable of optimizing the trade-off between detection accuracy and speed for cascades with stages of such different complexities. The results we’re obtaining with this new algorithm are substantially better for real-time, accurate pedestrian detection.”
While the algorithm currently handles only binary detection tasks (detecting a single class of object, such as pedestrians), researchers are working toward simultaneous detection of different types of objects. “We want to train just one detector to recognize, for example, five or more different objects,” says Vasconcelos. “Developing that algorithm is the next challenge.”
Algorithms are just one of the many challenges when it comes to autonomous cars. For all the excitement over the prospect of being chauffeured by a computer program, there’s an equal amount of anxiety about relinquishing the wheel. Mohan Trivedi, electrical engineering professor at the Jacobs School of Engineering and researcher in the Contextual Robotics Institute, recognizes this dichotomy and is driven to answer the most fundamental questions behind the onset of smart cars: When we talk about intelligent automobiles, what does it mean, exactly? Does it mean autonomous self-driving robots, or something else? He also encourages researchers to consider what role humans will play in the autonomous vehicles of the future. For example, would humans be required to interact with, or take control of, the vehicle, or would they trust their robotic vehicles completely?

Trivedi has a definitive answer to these questions. “Here at UC San Diego, we are working on a very different vision of the future,” says Trivedi. “One where drivers and occupants feel safe, with driver-assistance technologies to help them and vehicles make better, faster decisions. In other words, systems that support, rather than replace, the driver.”
As director of UC San Diego’s Laboratory for Intelligent and Safe Automobiles (LISA), Trivedi leads the development of intelligent technologies that can understand the driver and the surrounding environment to help navigate through chaotic situations and avoid accidents. LISA’s aim for intelligent vehicles is what Trivedi calls a “human-centered, distributed cognitive system,” in which human drivers and robotic cars cooperate as a team while driving, rather than compete with each other for control of the steering wheel. “This distributed cognitive system should be able to learn and execute perceptual, cognitive and motor functions in a synergistic manner, where humans and machines both understand the strengths and limits of one another,” says Trivedi.
Over the last 15 years, LISA researchers have undertaken projects funded by carmakers including Nissan, Toyota, Mercedes, Volkswagen and Audi, as well as various federal and California-funded programs. The team has pioneered technologies to monitor and assess what’s happening both inside and outside cars on the road—Trivedi calls this the LiLo approach, or “looking in, looking out.” The team conducts its experiments by driving a fleet of testbed vehicles equipped with computer processors, cameras, GPS systems and other sensors that record the movements of the vehicle, the areas immediately surrounding it, and the movement of the driver’s head, eyes, hands and feet.
Researchers then use the data to develop machine vision and deep learning algorithms that help a car learn the driver’s patterns—where the driver looks, how the driver steers, when the driver tends to stop, go, slow down or speed up—and then predict the driver’s intended maneuvers a few seconds before they happen. Onboard computer processors analyze this information and send a set of instructions to actuators on the steering wheel, accelerator and brakes. Armed with this technology, the car could identify which driving patterns are likely to lead to risky maneuvers and act in real time to alert the driver or assist in changing course.
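In outline, that loop looks something like the sketch below. The feature names, thresholds and rule-based “model” are hypothetical stand-ins for LISA’s learned models, but the flow is the same: fuse in-cabin and external observations, predict the maneuver, then issue an alert or a corrective command.

```python
# A minimal sketch of the predict-then-assist loop described above.
# Feature names, thresholds and the rule-based "model" are hypothetical
# stand-ins; LISA's systems use learned machine-vision models.

from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_on_road: bool     # from the "looking in" cabin cameras
    steering_angle: float  # degrees, from the vehicle bus
    lane_offset: float     # meters from lane center, from "looking out"

def predict_maneuver(state: DriverState) -> str:
    """Classify the driver's likely next maneuver from fused
    in-cabin and external observations."""
    if not state.eyes_on_road and abs(state.lane_offset) > 0.5:
        return "unintended_drift"
    if abs(state.steering_angle) > 10.0:
        return "lane_change"
    return "lane_keep"

def assist(state: DriverState) -> str:
    """Turn the prediction into an alert or a corrective command
    for the steering and brake actuators."""
    maneuver = predict_maneuver(state)
    if maneuver == "unintended_drift":
        return "correct_steering"      # nudge back toward lane center
    if maneuver == "lane_change":
        return "check_blind_spot_alert"
    return "no_action"

# Driver looking away while drifting 0.8 m off center -> corrective action.
print(assist(DriverState(eyes_on_road=False, steering_angle=2.0, lane_offset=0.8)))
```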
For example, LISA researchers are developing intelligent driver assistance systems that assess when it’s safe to merge, brake, change lanes, accelerate and decelerate. So if drivers take their eyes off the road and begin swerving, cars could momentarily take control of steering and braking to avoid obstacles and collisions. The car could also determine the best speed at which to merge into the designated lane, based on the distances and speeds of cars in surrounding traffic.
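As a back-of-the-envelope illustration, a merge decision might reduce to checking that a gap covers a safe time headway and then matching the lane’s flow. The constant-headway rule in this sketch is an assumption chosen for illustration, not LISA’s method.

```python
# A back-of-the-envelope sketch of picking a merge speed from the gap and
# the speeds of neighboring cars. The constant-headway rule below is an
# illustrative assumption, not LISA's actual method.

def safe_merge_speed(gap_m, lead_speed, follow_speed, headway_s=2.0):
    """Return a merge speed in m/s, or None if the gap is too small.
    Requires the gap to cover a `headway_s` time headway at the faster
    of the two neighbors (a crude safety margin)."""
    if gap_m < headway_s * max(lead_speed, follow_speed):
        return None                    # wait for a larger gap
    # Match the flow of the target lane by splitting the difference.
    return (lead_speed + follow_speed) / 2.0

print(safe_merge_speed(60.0, 25.0, 23.0))  # 60 m gap at ~25 m/s -> 24.0
print(safe_merge_speed(20.0, 25.0, 23.0))  # too small a gap -> None
```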
“These vehicles will have to understand various factors,” says Trivedi. “For instance, when and how to engage humans in controlling the vehicle in case of an emergency, the readiness of an occupant to take control, the gestures and intentions of humans in the car and on the streets, and also how to safely and smoothly move around vehicles which are driven in the old-fashioned way.”
To consider a human at the wheel “old-fashioned” speaks volumes about the pace of innovation and the drive behind making mere possibilities a reality on the roadways. As vehicles speed ever toward automation, new research challenges will continually emerge, a prospect that excites Trivedi. “Our goal is to better understand dangerous and critical situations,” he says. “Ultimately, that understanding will help us and others to design effective countermeasures in order to make driving safer for drivers, passengers, pedestrians … anyone who may be at risk in an automotive incident.”
______________________________
Along for the Ride
Breakthroughs in understanding driver behavior and intent are products of a truly interdisciplinary collaboration at UC San Diego. The Laboratory for Intelligent and Safe Automobiles (LISA) works closely with leading researchers in a variety of fields, such as psychology professor Harold Pashler; cognitive science professors Ed Hutchins, Revelle ’71, M.A. ’73, Ph.D. ’78, and Jim Hollan; and Scott Makeig, Ph.D. ’85, director of UC San Diego’s Swartz Center for Computational Neuroscience. Trivedi readily credits these interdisciplinary partnerships as a strong influence on the early thinking of LISA.
“Our collaborators are pioneers in the field of distributed cognition,” says Trivedi. “With their help, we’ve developed a new machine-learning-based paradigm that enables us to observe and learn the patterns which are associated with drivers’ intentions to do safe maneuvers, as well as their intentions to change or not change the course of the journey.”