
  • Jeremy Portje
  • Intelligent Machines

    How malevolent machine learning could derail AI

    AI security expert Dawn Song warns that “adversarial machine learning” could be used to reverse-engineer systems—including those used in defense.

    Artificial intelligence won’t revolutionize anything if hackers can mess with it.

    That’s the warning from Dawn Song, a professor at UC Berkeley who specializes in studying the security risks involved with AI and machine learning.

    Speaking at EmTech Digital, an event in San Francisco produced by MIT Technology Review, Song warned that new techniques for probing and manipulating machine-learning systems—known in the field as “adversarial machine learning” methods—could cause big problems for anyone looking to harness the power of AI in business.


    Song said adversarial machine learning could be used to attack just about any system built on the technology.

    “It’s a big problem,” she told the audience. “We need to come together to fix it.”

    Adversarial machine learning involves experimentally feeding input into an algorithm to reveal the information it has been trained on, or distorting input in a way that causes the system to misbehave. By feeding lots of images into a computer vision algorithm, for example, it is possible to reverse-engineer how it works and then craft inputs that force particular outputs, including incorrect ones.
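    The input-distortion side of this idea can be illustrated with the fast gradient sign method (FGSM), a standard adversarial-example technique (not one specific to Song's work). The sketch below is a minimal toy, assuming a hand-wired linear classifier rather than a real vision model: a small perturbation in the gradient's sign direction flips the prediction even though the input barely changes.

```python
import numpy as np

# Toy linear classifier: score = w.x + b; predict class 1 if score > 0.
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, true_label, epsilon):
    """Fast Gradient Sign Method for this linear model.

    With a logistic loss, the gradient of the loss w.r.t. x points along
    -w when the true label is 1 and along +w when it is 0, so the attack
    nudges each feature by epsilon in that sign direction.
    """
    grad_sign = np.sign(w) * (1 if true_label == 0 else -1)
    return x + epsilon * grad_sign

x = np.array([0.5, 0.2, 0.1])            # score 0.9 -> classified as 1
adv = fgsm(x, true_label=1, epsilon=0.4)  # small per-feature nudge
print(predict(x), predict(adv))           # prediction flips: 1 0
```

Real attacks apply the same idea to deep networks, where the gradient is computed by backpropagation instead of being read directly off the weights.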

    Song presented several examples of adversarial-learning trickery that her research group has explored.

    One project, conducted in collaboration with Google, involved probing machine-learning algorithms trained to generate automatic responses to e-mail messages (in this case the Enron e-mail data set). The effort showed that by crafting the right messages, it is possible to get the model to spit out sensitive data such as credit card numbers. Google used the findings to prevent Smart Compose, the tool that auto-generates text in Gmail, from being exploited this way.

    Another project involved modifying road signs with a few innocuous-looking stickers to fool the computer vision systems used in many vehicles. In a video demo, Song showed how a car's vision system could be tricked into reading a stop sign as a sign saying the speed limit is 45 miles per hour. This could be a huge problem for an automated driving system that relies on such information.

    Adversarial machine learning is an area of growing interest for machine-learning researchers. Over the past couple of years, other research groups have shown how online machine-learning APIs can be probed and exploited to devise ways to deceive them or to reveal sensitive information.

    Unsurprisingly, adversarial machine learning is also of huge interest to the defense community. With a growing number of military systems—including sensing and weapons systems—harnessing machine learning, there is huge potential for these techniques to be used both defensively and offensively.

    This year, the Pentagon’s research arm, DARPA, launched a major project called Guaranteeing AI Robustness against Deception (GARD), aimed at studying adversarial machine learning. Hava Siegelmann, director of the GARD program, told MIT Technology Review recently that the goal of this project was to develop AI models that are robust in the face of a wide range of adversarial attacks, rather than simply able to defend against specific ones.
