What Robots and AI Learned in 2015

It was the year that self-driving cars became a commercial reality; robots gained all sorts of new abilities; and some people worried about the existential threat posed by super-intelligent future AI.
December 29, 2015

The robots didn’t really take over in 2015, but at times it felt as if that might be where we’re headed.

There were signs that machines will soon take over manual work that currently requires human skill. Early in the year details emerged of a contest organized by Amazon to help robots do more work inside its vast product fulfillment centers.

The Amazon Picking Challenge, as the event was called, was held at a prominent robotics conference later in the year. Teams competed for a $25,000 prize by designing a robot to identify and grasp items from one of Amazon’s storage shelves as quickly as possible (the winner picked and packed 10 items in 20 minutes). This might seem a trivial task for human workers, but figuring out how to grasp different objects arranged haphazardly on shelves in a real warehouse is still a formidable challenge for robot-kind.

Later in the year, we also got an exclusive look inside one of Amazon’s fulfillment centers, which showed just how sophisticated and automated they already are. Inside these warehouses, robots ferry products between human workers, and people operate as part of a carefully orchestrated, finely tuned production system.

A few months later, an even more impressive robot competition, the DARPA Robotics Challenge, was held in Pomona, California. Funded by the U.S. military and created in response to the nuclear disaster at Fukushima in Japan, the event was designed to inspire the creation of humanoid robots capable of taking over in highly dangerous disaster scenarios.

The contest pushed the limits of robot sensing, locomotion, and manipulation with a series of grueling challenges, including opening doors, climbing stairs, and operating power tools. Again, these things might be easy enough for humans, but they are still extremely hard for robots, as a series of pratfalls involving several of the million-dollar robot contestants quickly highlighted. The $2 million first place prize eventually went to a robot that was able to navigate the course quickly because it could both walk and roll along on its knees.

And while robots are still inferior to us in lots of ways, the underlying technology is improving quickly. Researchers are devising new ways for robots to learn, and ways for them to share the information they have picked up, which should help accelerate progress further still. It’s hardly surprising, then, that robots are appearing in all sorts of new commercial settings, from store greeters and shopping assistants to hospital helpers and hotel concierges.

It was also a big year for automated, or “self-driving,” cars. Several new companies, including Apple, Uber, and even China’s Baidu, joined Google and many automakers in researching automated driving technology. We explored how this trend is enabled not only by cheaper sensors and better control software, but also by the increasing computerization of the automobile. The emissions scandal currently engulfing Volkswagen is another example of the growing importance of computer code in today’s vehicles.

The company that most epitomizes vehicular computerization, Tesla, also became the first to introduce advanced self-driving technology on the roads, issuing a software update that included something called Autopilot for Model S cars with the necessary sensors.

It wasn’t an entirely smooth rollout, however. Several Tesla owners posted alarming videos showing the system behaving in unexpected ways on the road, and the company was forced to backtrack, limiting the capabilities of the system until further development and testing could be done.

Google also revealed that its prototype self-driving cars have been in a number of accidents, although it blamed the crashes on the fact that its cars tend to drive in ways that can sometimes confuse other drivers on the road. Still, these incidents point to a looming ethical conundrum facing the creators of self-driving cars. As strange as it sounds, some researchers are already considering the circumstances under which these systems must be programmed to kill.

Huge progress has been made in AI over the past few years, due to the development of very large and sophisticated “deep learning” neural networks that learn by feeding on large amounts of data, and this trend continued in 2015. The world’s biggest tech companies have hired experts in the field to apply the technique to tasks such as voice recognition. We profiled the team at Facebook working on the ambitious effort to create a deep learning AI capable of parsing language and holding meaningful conversations. More recently, Facebook introduced a personal assistant service called M that relies on human workers; their responses will also be used to help train Facebook’s conversational AI.

With such rapid advances in AI and robotics it is perhaps unsurprising that some experts have started to worry about the long-term ramifications. A book written by the Oxford University philosopher Nick Bostrom fueled this worry, with many troubling hypothetical scenarios involving an artificial “super-intelligence.” We reviewed the book, however, and found that the technical progress doesn’t exactly justify our doomsday fears just yet.

For a little more perspective, then, who better to turn to than one of the fathers of artificial intelligence, Marvin Minsky? In a rare video interview, Minsky offered his thoughts on the history of AI, and some reflections on what the field still needs to achieve.

If the coming year can match some of the early optimism felt by pioneers such as Minsky, then we may well be headed for a robot revolution after all.
