As car companies large and small make steady but incremental progress towards the commercialization of autonomy in consumer vehicles, the big question is when we're going to finally see (and be able to benefit from) full, level 4 autonomy. The kind of autonomy where you don't have to pay attention at all, and your car simply takes you where you want to go. This is what's going to completely change transportation, turning time spent getting from where you are to where you want to be from a frustrating experience into a productive (or relaxing) one.

So far, we can buy cars equipped with autonomous braking, autonomous parking, and autonomous highway driving, but full urban autonomy has been demonstrated by only a few companies, and not in a form that consumers can take advantage of. An MIT spinout called nuTonomy (which closed a $3.6 million seed funding round in January) aims to change that by deploying a fully autonomous urban taxi service in downtown Singapore. Using your phone, you'll call a self-driving car to you, tell it your destination, and then sit back and let the car drive you there. This would be a massive advance for both autonomous cars and urban mobility, and we talked with nuTonomy co-founder and CEO Karl Iagnemma about how the company plans to make it happen.

nuTonomy was launched in 2013, but the company is based on robotics research at MIT that goes back almost a decade. Karl Iagnemma and Emilio Frazzoli, nuTonomy's CEO and CTO, both directed mobility-focused robotics labs at MIT, and most recently, Frazzoli was part of an MIT experiment in Singapore that set up autonomous golf carts to ferry tourists around a park for a week. Singapore and MIT have been collaborating on research projects like these since 2007, and nuTonomy is one of the results of this partnership: part of nuTonomy's 25-member core team comes directly from the team that developed those autonomous golf carts.

For nuTonomy, as with most autonomous car companies, progress toward full autonomy is incremental. Part of nuTonomy's business involves providing autonomous features to automotive OEMs and Tier 1 suppliers. For Jaguar Land Rover, for example, nuTonomy is working on a variety of autonomous features that will reach dealerships in the coming years. "There's a real opportunity for companies like ours to be providers of this technology," nuTonomy CEO Karl Iagnemma told IEEE Spectrum. "The reason for that is the technology in this area isn't primarily automotive technology—it's really being drawn from the robotics community, technology that's been developed in robotics research labs over the last 20 years. We come to this problem as natives."

The problem with incremental progression toward autonomy in personal vehicles, Iagnemma explains, is fundamentally one of cost: "You're trying to sell a feature to a customer who might only be willing to pay a couple thousand dollars, which really constrains your sensor and computer cost." Removing consumer ownership from the equation with a commercial vehicle, like a robotic taxi, completely changes things, however: "Now you're trading against the cost of a human driver, so you have a lot fewer constraints on your cost," Iagnemma says. "And it's very likely that the technology will reach the market earlier in the form of this autonomous mobility-on-demand system."

A mobility-on-demand system only really makes commercial sense in urban areas, and urban areas are the most challenging for autonomous vehicles because of the density and complexity of information that needs to be understood in order to make safe and productive decisions. "This is one of the core problems of autonomous vehicles," Iagnemma tells us, "and a problem that a lot of groups in our community are really struggling with." 

“We saw an opportunity to build on a lot of the work that Emilio [Frazzoli] and I were doing at MIT over the past 15 years, and apply it to this problem. The result is that we feel we have an approach to the planning and decision-making problem that is state of the art and robust. It's not hand-engineered if-then statements in code; it's a rigorous algorithmic process that translates specifications on how the car should behave into verifiable software. And that's something that's really been lacking in the industry.”

IEEE Spectrum: How is nuTonomy's approach to planning and decision making for autonomous vehicles unique?

Karl Iagnemma: What nuTonomy is focusing on as a company is this decision making problem: how will cars be smart enough to navigate in urban environments? And it's not sufficient to just be safe: being safe is the necessary condition. But for people who want to use the technology, you not only have to be safe, but you have to drive in some sense the way a human drives.

Sometimes, for example, human drivers actually break the rules of the road. They do it in a principled and safe way, but it's something you do almost every time you get behind the wheel of a car. So one of the really unique and differentiating things that we're doing is building into our decision-making engine the ability for cars to actually violate the rules of the road when it's necessary to do so, in a safe and reliable manner.
At some point, autonomous vehicles will have to make what are commonly called "ethical" decisions in the interest of safety. How will your cars be programmed to do this?

As of today, we don't have any procedure for what we would commonly think of as ethical decision making. I'm not aware of any other group that does either. I think the topic is a really important one. It's a question that's very important to pose, but it's going to take a while for us to converge to a technical solution for it. We'd love to be able to address that question today, but we just don't have the technology.

The other part of it, not that this is a bad thing, is that we're putting more of a burden on the autonomous car than we do on the human driver. Human drivers, when faced with emergency situations where they might have to make a difficult ethical decision, aren't always able to make a reasonable ethical decision in that short amount of time. What level of performance are we going to hold autonomous cars to? The answer is, quite probably, a higher level of performance than we would hold a human driver to, or most people won't accept the technology. That may be unfair, but it doesn't necessarily mean that it's wrong.