
Gene-Edited Skin Could Be Its Own Blood-Sugar Sensor

Diabetics might ultimately have glucose sensors built into their bodies.
Image courtesy of Xiaoyang Wu

For diabetics, the constant finger pricks to obtain a blood drop and measure its glucose level are an annoyance. But they're essential, too. Out-of-whack blood sugar can be fatal.

That’s why engineers have tried for decades to create a noninvasive glucose sensor, but developing one has proved difficult. It’s just not feasible to accurately measure sugar levels through the skin.

So why not redesign a person’s body to do the measuring instead? That’s the bright idea that Xiaoyang Wu and colleagues at the University of Chicago’s Ben May Department for Cancer Research had.

In a fascinating mashup of technologies, the Chicago team says it has genetically edited skin cells from a mouse and turned them into a glucose detector that, once grafted onto the animals, works all the time and doesn’t need a battery.

It’s the first time living skin has been turned into a sensor, says Wu, adding that “a skin-based technology would have a lot of advantages” over finger pricks or even the continuous monitors some diabetics use.

Skin is one of the largest organs in the body, Wu and his colleagues point out in their report, which appeared last week on the preprint server bioRxiv. Skin is easy to get ahold of and, they say, easy to remove later if things go wrong.

To make their biological invention, Wu and team first collected from mice some of the stem cells whose job it is to make new skin. Next, they used the gene-editing technique CRISPR to create their built-in glucose detector. That involved adding a gene from E. coli bacteria whose product is a protein that sticks to sugar molecules. 

They also added DNA that produces two fluorescent molecules. That way, when the E. coli protein sticks to sugar and changes shape, it moves the fluorescent molecules closer together or farther apart, generating a signal that Wu’s team could see using a microscope.
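To make that sensing scheme concrete, here is a minimal sketch, not drawn from the paper itself, of how a ratiometric fluorescence (FRET) readout like the one described above might be converted into a glucose estimate. The channel names, calibration values, and the single-site binding model are illustrative assumptions, not the researchers' actual analysis.

```python
# Illustrative sketch only: converting two fluorescence channels into a
# relative glucose estimate. Calibration constants and the binding model
# below are assumptions for illustration, not values from the study.

def fret_ratio(donor_intensity: float, acceptor_intensity: float) -> float:
    """Ratio of acceptor to donor emission. The ratio shifts as the
    glucose-binding protein changes shape and moves the two fluorescent
    tags closer together or farther apart."""
    return acceptor_intensity / donor_intensity

def estimate_glucose(ratio: float, r_min: float, r_max: float, kd_mm: float = 5.0) -> float:
    """Map a FRET ratio onto a glucose concentration (mM) using a
    single-site binding curve. r_min and r_max are the ratios measured
    at zero and saturating glucose during calibration; kd_mm is a
    hypothetical dissociation constant."""
    # Fraction of sensor molecules bound to glucose, clamped to [0, 1].
    bound = min(max((ratio - r_min) / (r_max - r_min), 0.0), 1.0)
    if bound >= 1.0:
        return float("inf")  # above the sensor's dynamic range
    return kd_mm * bound / (1.0 - bound)

# Example: donor reads 1.0, acceptor reads 0.7, on a sensor calibrated
# between ratios of 0.4 (no glucose) and 1.0 (saturated).
ratio = fret_ratio(donor_intensity=1.0, acceptor_intensity=0.7)
print(estimate_glucose(ratio, r_min=0.4, r_max=1.0))  # ~5 mM, mid-range
```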

All that was done in a lab dish, so next the team tested whether the glucose-sensing cells could be incorporated into a mouse’s body by grafting the engineered skin patches onto the animals’ backs. When mice that had been fasted were suddenly given a big dose of sugar, Wu says, the cells reacted within 30 seconds. Measuring glucose this way was just as accurate as a blood test, which the team also ran.

The technology points to a totally new and unexpected use for gene therapy. But before a person gets a CRISPR sensor in the arm, the technology will need some adjustments. That’s because to read out the glowing skin of his mice, Wu had to illuminate it with a laser and watch through a high-end laboratory microscope.

That would be “challenging” for humans, Wu admits. The researcher says he has ideas for an approach that might work in people, but since he might file a patent and start a company, he’s not ready to reveal it.

One thing that’s already clear is that you might not stop at turning skin into a sensor: you might engineer the same cells to also crank out drugs to treat diabetes. Wu’s team recently developed a gene-therapy treatment that causes mice to lose weight (see “Five Ways to Get CRISPR into the Body”). He says combining their skin-based glucose sensor with that gene therapy could create a closed-loop, living system for controlling diabetes.

“Our study unravels the tempting potential of cutaneous gene therapy for various clinical applications in the future,” Wu writes.
