
Ride-hailing firm Didi has a new dispatching algorithm that adapts to rider demand

December 12, 2018

Didi, China’s Uber equivalent, has been testing out a new algorithm for assigning drivers to riders in select cities.

The dispatching system uses reinforcement learning (RL), a subset of machine learning that relies on penalties and rewards to get “agents” to achieve a clear objective. In this case, the agents are the drivers and the rewards are their payments for completing a ride.
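To make that framing concrete, here is a minimal, hypothetical sketch of a dispatch problem cast in RL terms. The state, action, and reward definitions below are illustrative assumptions, not Didi’s actual formulation.

```python
# Hypothetical RL framing of ride dispatching; all names are illustrative,
# not taken from Didi's system.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DriverState:
    """The 'agent' view: where a driver is and when."""
    zone: int        # spatial grid cell the driver occupies
    time_slot: int   # discretized time of day

@dataclass(frozen=True)
class DispatchAction:
    """Assign the driver a specific ride request, or let them idle."""
    request_id: Optional[int]  # None means the driver stays idle this step

def reward(fare: float) -> float:
    """Per the article, the reward is the driver's payment for the ride."""
    return fare
```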

The company’s current dispatching algorithm has two parts: a forecasting system that predicts how rider demand changes over time, and a matching system that assigns drivers to jobs on the basis of those predictions.
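For illustration only, a toy version of that two-part pipeline might look like the following. The function names, the averaging forecaster, and the greedy matching rule are all assumptions rather than details Didi has disclosed.

```python
# Toy two-part dispatcher: forecast demand per zone, then match idle drivers.
from typing import Dict, List, Tuple

def forecast_demand(history: List[Dict]) -> Dict[int, float]:
    """Stage 1: predict expected ride requests per zone from past records,
    stubbed here as a per-zone average of recent request counts."""
    totals: Dict[int, float] = {}
    counts: Dict[int, int] = {}
    for record in history:
        zone = record["zone"]
        totals[zone] = totals.get(zone, 0.0) + record["requests"]
        counts[zone] = counts.get(zone, 0) + 1
    return {zone: totals[zone] / counts[zone] for zone in totals}

def match_drivers(idle_drivers: List[int],
                  demand: Dict[int, float]) -> List[Tuple[int, int]]:
    """Stage 2: greedily send idle drivers toward the highest-demand zones."""
    ranked = sorted(demand, key=demand.get, reverse=True)
    return list(zip(idle_drivers, ranked))

# Example: two idle drivers, zone 7 busier than zone 3.
demand = forecast_demand([{"zone": 7, "requests": 9},
                          {"zone": 3, "requests": 2}])
print(match_drivers([101, 102], demand))  # [(101, 7), (102, 3)]
```

Because the two stages are separate, a shift in real-world demand degrades stage 1 first, and the whole forecaster has to be retrained before stage 2’s assignments become sensible again.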

It has served the company well thus far, but it can be inefficient. If the patterns of driver supply and rider demand change, the forecasting model needs to be retrained to continue making accurate predictions.

Moving to an RL approach solves this problem by collapsing both parts into one: with every subsequent piece of data, the algorithm learns to dispatch drivers more efficiently. That allows it to keep evolving with changing supply and demand, without any need to retrain. A/B tests between the old and new algorithms in a handful of cities have confirmed that the new one is indeed more efficient.
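A generic sketch of that online-learning idea, using standard temporal-difference updates rather than anything Didi has published: each completed ride nudges the dispatcher’s value estimates in place, which is why no separate retraining pass is needed.

```python
# Generic online temporal-difference learning, standing in for the idea that
# every new piece of data refines the dispatcher in place. ALPHA, GAMMA, and
# the (zone, time_slot) state keys are assumptions for illustration.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9    # learning rate and discount factor
value = defaultdict(float)  # long-run expected earnings per state

def update(state, next_state, fare: float) -> None:
    """One TD update after a single completed ride: no batch retraining."""
    td_target = fare + GAMMA * value[next_state]
    value[state] += ALPHA * (td_target - value[state])

def score(state, next_state, fare: float) -> float:
    """Rank a candidate (driver, request) pair by immediate fare plus the
    value of the state the ride would leave the driver in."""
    return fare + GAMMA * value[next_state]

# Example: a ride from zone 7 at slot 18 to zone 3 at slot 19, paying 12.5.
update(state=(7, 18), next_state=(3, 19), fare=12.5)
print(value[(7, 18)])  # 1.25 after one update
```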

Didi is now planning a gradual rollout of the new dispatching system to cities across China, though an exact timeline hasn’t been set. Tony Qin, the AI research lead for the company’s US division, told MIT Technology Review that the company may continue to run A/B tests between its different algorithms in different locations and use whichever produces the most efficient results.

The RL algorithm may not always be the best one, Qin said; which algorithm performs best largely depends on a city’s supply and demand patterns. In the meantime, the company is also developing another RL dispatching algorithm, with different agents and rewards, to add to its arsenal.

An abridged version of this story originally appeared in our AI newsletter, The Algorithm.
