
The biggest technology failures of 2018

From gene-edited babies to guaranteed-fatal brain uploads, it was a bumper year for technology misfires and misuses.
December 27, 2018
Photo illustration showing a round-up of Technology Review's worst tech of 2018
John Ueland

It was the year that technology—and the people who create it—seemingly could do no right, and did much that was wrong. As one of my sources put it in a tweet reacting to a dumb tech stunt, “2018 can’t end soon enough.”

For the past few years MIT Technology Review has published a list of what we consider the most pointless or destructive uses of technology (here are 2017, 2016, 2015, and 2014). This year, though, the naughty were naughtier and the wrongs seemed wronger: technology was used to spread hate and addiction, to justify suicide, and to experiment on newborn children. Here’s our list of the very worst.

CRISPR babies

Photo of He Jiankui during National Academies presentation
The National Academies | Flickr

We all knew that gene-edited humans would one day be born, but nobody wanted it to happen so soon, and definitely not like this. In November, MIT Technology Review reported that He Jiankui, a scientist at the Southern University of Science and Technology in Shenzhen, China, had secretly launched the first attempt to create children with edited genes. He edited human embryos using the molecular tool CRISPR to disable a single gene, CCR5. He claimed that twin girls, named Lulu and Nana, had been born and that they would be immune to HIV because of how he’d altered their genomes.

The editing, though, didn’t go particularly well and wasn’t even necessary—there are cheaper and easier ways to prevent HIV infection. It now looks as though the twins were the unconsenting subjects of a reckless bid for a scientific first. He, who was hoping for a Nobel Prize, is instead under investigation in China.

More: EXCLUSIVE: Chinese scientists are creating CRISPR babies (MIT Technology Review), The CRISPR Baby Scandal Gets Worse by the Day (The Atlantic), Why Are Scientists So Upset About the First Crispr Babies? (New York Times), Editing Babies? We Need to Learn a Lot More First (New York Times)  

 

Juul

Photo of Juul device
JUUL

Give credit where it’s due: Stanford-trained product designers James Monsees and Adam Bowen are responsible for an epidemic of youth nicotine addiction.

The duo founded Juul Labs and created a slick-looking electronic vaping device designed to dispense the addictive substance. Yeah, sure, some yellow-fingered smokers accustomed to inhaling burnt leaves might benefit from a switch to huffing drug-laced liquid from pods. The problem is that Juul offered the “iPod of e-cigs” in fruity-tooty flavors like Creme and Mango and pitched it to younger folks on Instagram.

Now, the US Food and Drug Administration says there is a “youth nicotine epidemic.” The number of teen vapers doubled in the last year, in what health officials are calling the fastest-moving substance addiction they’ve ever seen. Juul, with something close to 75% of the market, is the company profiting the most from the problem.

In November, Juul said it would shut down its social-media accounts and restrict sales of some flavors.

More: The Price of Cool: A Teenager, a Juul and Nicotine Addiction (New York Times), Vaping gone viral: the astonishing surge in teens’ e-cigarette use, JUUL Labs Action Plan (Juul Labs)  

 

Censored search

Photo of Google building and Chinese flag
AP Photo/Ng Han Guan

When Google bailed out of China in 2010, shuttering Google.cn, the search giant said it could no longer abide by China’s insistence that it hide politically sensitive results. In a blog post, the chief legal officer of Google made a “promise to stop censoring search.”

So much for promises. A team of as many as 100 Googlers has been at work on “Project Dragonfly,” an effort to build a new search engine for China. It’s an Android app engineered to comply with China’s censorship regime and block sites like Wikipedia and the BBC.

Since August, Google’s own employees have been the ones trying to squash Dragonfly. Some are hoisting placards saying “Don’t be a brick in the Chinese firewall,” while others signed a letter saying the app would “make Google complicit in oppression.”

Google CEO Sundar Pichai is hedging. He told Congress in December that the project was merely exploratory and there are “no plans for us to launch a search product in China.” But Pichai didn’t renew Google’s anti-censorship promise. Instead, he said Google still felt compelled to provide search to people all over the world. That’s because “getting access to information is an important human right.”

Is it? If you searched for human rights from inside China, you might never know.

More: Google plans to launch censored search engine in China, leaked documents reveal (The Intercept), Inside Google’s effort to develop a censored search engine in China (The Intercept), Sundar Pichai testimony to House Judiciary Committee (C-SPAN)

 

Facebook-powered ethnic cleansing

Photo of Rohingya people in Myanmar
AP

Russian intelligence, political tricksters, neo-Nazis—it seems everyone with some hate to sell found that Facebook, the world’s largest social network, was a pretty friendly platform to do it on.

That was definitely the case for a large team of Myanmar military officers who systematically used Facebook to set the stage for ethnic cleansing against that country’s mostly Muslim Rohingya minority in 2016. They employed the now familiar mix of fake news and troll accounts to stoke religious hatred and public fear before they set out to rape and kill Rohingya people and burn their villages. More than 700,000 Rohingya ultimately fled their country in what the New York Times identified as “the largest forced human migration in recent history.” Facebook acknowledged that its platform was used “to covertly spread propaganda” in Myanmar.

Those events occurred two years ago. So why does Facebook make our 2018 list? It’s because Facebook hasn’t been able to stop its product from being used as a platform for organized hate crimes. Instead, it’s dabbled in fake news and propaganda of its own, admitting that it hired a PR firm to attack billionaire George Soros and other critics of the social network. In December, the Southern Poverty Law Center joined other groups in asking for a change at the top. They called for founder Mark Zuckerberg to step down as the company’s chairman (but remain as CEO) to allow more independent oversight.

More: A Genocide Incited on Facebook, With Posts From Myanmar’s Military (New York Times), Letter to Zuckerberg (Muslim Advocates), Facebook policy chief admits hiring PR firm to attack George Soros (Guardian)

 

“100% fatal” brain uploads

Luckily, a startup called Nectome never actually hooked up a dying person to a heart-lung bypass machine to be pumped full of flesh-preserving chemicals. The problem is that it wanted to. Some people had already given the company $10,000 deposits to get in line.

The brain of an elderly woman, preserved using fixative chemicals
Nectome

The eventual goal was the transhumanist aim of mind uploading. Preserve your brain perfectly today, and maybe one day your memories and personality could be extracted and loaded into a computer or robot. The catch: to prevent damage to the brain, the embalming procedure has to start before you actually die—in other words, it involves euthanizing you. (Nectome believes this would be legal under doctor-assisted suicide laws in California, at least.)

The company, which is supported by Y Combinator, has actually done a great job preserving animal brains, but its interest in suicide-by-brain-fixation proved a little too controversial for MIT, which canceled its research collaboration with the startup. Nectome isn’t dead, though: it says it is continuing basic research and is looking to hire. Better ask about the retirement plan.

More: A startup is pitching a mind-uploading service that is “100 percent fatal” (MIT Technology Review), MIT severs ties to company promoting fatal brain uploading (MIT Technology Review)
