77 Mass Ave

An Optimal Optimizer

Making code more efficient.
February 22, 2017

MIT researchers have developed a compiler that makes parallel programs much more efficient, pulling off a coding feat that the industry had thought impossible. The compiler—a program that converts computer code written in a high-level language into low-level machine instructions—“optimizes parallel code better than any commercial or open-source compiler,” says professor Charles Leiserson. “And it also compiles where some of these other compilers don’t.”

A typical compiler has a “front end” tailored to a specific programming language and a “back end” tailored to a specific chip design. In between—in the so-called middle end—the compiler uses an “intermediate representation,” compatible with many different front and back ends, to describe computations.
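As a rough sketch of that pipeline (simplified; real compiler output carries extra attributes and metadata): a C front end parses a function like the one below, the middle end analyzes it in LLVM's intermediate representation, and a back end maps the result onto a particular chip's registers and instructions.

    /* A small C function as the front end sees it. */
    int scale_and_add(int x, int y) {
        return 2 * x + y;
    }

    /*
     * Roughly the LLVM intermediate representation the middle end
     * works on (simplified; actual output includes extra attributes):
     *
     *   define i32 @scale_and_add(i32 %x, i32 %y) {
     *     %t = shl i32 %x, 1    ; the optimizer turned 2*x into a shift
     *     %r = add i32 %t, %y
     *     ret i32 %r
     *   }
     *
     * The back end then translates these generic instructions into
     * machine code for a specific chip.
     */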

Optimization typically occurs in the middle end. There, the compiler extensively analyzes a program, trying to deduce the most efficient implementation of its algorithms.
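One classic middle-end optimization, for example, is loop-invariant code motion: a computation whose result never changes inside a loop is hoisted out of it. The sketch below shows the effect at the source level, though the compiler actually performs the transformation on the intermediate representation.

    /* Before: a * b is recomputed on every iteration, even though
     * neither a nor b changes inside the loop. */
    void add_scaled(float *out, const float *in, float a, float b, int n) {
        for (int i = 0; i < n; i++)
            out[i] = in[i] + a * b;
    }

    /* After loop-invariant code motion: the middle end computes
     * a * b once, before the loop begins. */
    void add_scaled_hoisted(float *out, const float *in, float a, float b, int n) {
        float ab = a * b;
        for (int i = 0; i < n; i++)
            out[i] = in[i] + ab;
    }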

But that approach generally doesn’t work for parallel computing programs. That’s because managing parallel execution requires a lot of extra code, and existing compilers add it before the optimizations occur. The optimizers aren’t sure how to interpret the new code, so they don’t try to improve its performance.
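Concretely, that extra code is typically a set of calls into a parallel runtime library. The sketch below is a minimal illustration, not any real compiler's output — the __rt_parallel_loop helper and its serial stub are invented for the example. It shows a Cilk-style parallel loop and, beneath it, the kind of lowered form a conventional compiler hands to its optimizer: the loop body has been outlined into a separate function and replaced by a runtime call the optimizer cannot see into.

    #include <cilk/cilk.h>  /* cilk_for requires a Cilk-enabled compiler, e.g. OpenCilk */

    /* What the programmer writes: a simple parallel loop. */
    void fill(double *a, int n) {
        cilk_for (int i = 0; i < n; i++)
            a[i] = i * 2.0;
    }

    /* Roughly what a conventional compiler produces *before* the
     * optimizer runs. The __rt_parallel_loop name is an illustrative
     * stand-in for a real runtime's entry point, stubbed here to run
     * serially. */
    typedef void (*loop_body_fn)(void *ctx, int i);

    static void __rt_parallel_loop(loop_body_fn body, void *ctx, int lo, int hi) {
        for (int i = lo; i < hi; i++)  /* a real runtime would split this range among worker threads */
            body(ctx, i);
    }

    static void fill_body(void *ctx, int i) {  /* the loop body, outlined into its own function */
        ((double *)ctx)[i] = i * 2.0;
    }

    void fill_lowered(double *a, int n) {
        /* The analyzable loop is gone; all the optimizer sees is an
         * opaque call, so transformations like the hoisting shown
         * earlier no longer apply. */
        __rt_parallel_loop(fill_body, a, 0, n);
    }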

Postdoc Tao Schardl and undergrad William Moses designed a new intermediate representation for the popular open-source compiler LLVM that lets it preserve a program's high-level instructions about parallel execution, without first adding all the extra management code.
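In the published version of this work, the extended representation is called Tapir. It adds just three instructions to LLVM's IR — detach, reattach, and sync — marking where a parallel task forks off and where tasks join. A heavily simplified sketch of how the parallel loop above might look in such an IR (operands abridged; real output carries more structure):

    loop:
      detach label %body, label %next   ; fork: the loop body may run as a parallel task
    body:
      ; ...store a[i]...
      reattach label %next              ; the spawned task ends, control rejoins
    next:
      ; ...advance i, branch back to %loop, or fall through to %exit...
    exit:
      sync                              ; join: wait for every detached task to finish

Because the fork-join structure is expressed in the IR itself rather than hidden behind opaque runtime calls, most of the middle end's existing analyses and optimizations continue to work; calls into the parallel runtime are generated only after optimization.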

“T.B. and Billy did it by modifying 6,000 lines of a four-million-line code base,” Leiserson says. “Everybody said it was going to be too hard, that you’d have to change the whole compiler. And these guys basically showed that conventional wisdom to be flat-out wrong.”

Illustration by Rose Wong