This Startup Is Making Virtual and Augmented Reality So Crisp It Looks Real

Varjo’s prototype VR headset shows far clearer images in the center of the display, a much-needed advance that could help consumer adoption.
June 19, 2017
These images, shot by mixed-reality startup Varjo, show the clarity of images seen through the company’s prototype—a modified Oculus Rift headset—compared to those in a regular Rift display.

I’m looking around a hip city dweller’s home through the lenses of an Oculus Rift virtual-reality headset: there are posters on the walls, sweaters on an industrial clothing rack, a cool orange couch, and gray beanbag chairs.

This isn’t a typical virtual-world view, though. Rather than the whole scene appearing uniformly in focus, when I look straight ahead—through a rectangle that makes up about 5 percent of my field of view—I see much more detail. I’m able to make out striations in the fabric of the couch cushions and clear patterns on the sweaters and beanbags. I can read the words on the posters and the titles of books.

This increased sharpness is the work of a Finnish startup called Varjo (pronounced “VAR-yo”; it means “shadow” in Finnish), which is trying to massively improve the resolution of images for both virtual-reality and augmented-reality headsets—something that could help woo more users to the nascent technologies and make them more useful for professionals.

While resolution has improved greatly in the past several years with the development of VR and AR headsets like the Rift, Microsoft’s HoloLens, and HTC’s Vive, no headset on the market can show you images anywhere close to the quality of what your eyes see in real life. Magic Leap, the mysterious, massively funded Florida-based startup, has shown me augmented-reality examples that came as close as any I’ve seen, but the prototypes weren’t even in a form that I could hold up to my face, let alone wear.

Based in Helsinki, Finland, Varjo has been around for less than a year, but it has working prototypes for virtual and augmented reality and plans to make an early version of a headset using its technology for a few companies—think architects, designers, and others who work with 3-D models—to try out late this year. It’s also hoping to start selling a headset for professional users next year (the company won’t say how much this might cost, but indicated I was headed in the right direction when I asked if it would be thousands of dollars).

The company’s virtual-reality prototype, which it let me try out last week during a company visit to San Francisco, builds on an Oculus Rift with a high-resolution micro-OLED display and an angled glass plate in front of the headset’s regular display. The plate—an optical combiner—lets Varjo merge the two different displays into one image that you see when you put on the headset.

“We basically have much more pixels in a small portion than the rest of the screen has,” Varjo cofounder and CEO Urho Konttori says.

What Varjo is doing with this hack is similar to a technique known as foveated rendering, which shows you the highest-resolution images just at the spot where your eye is focused, and lower-resolution images in the periphery of your field of view (much the way the fovea, the small central region of the retina responsible for our sharpest vision, works).
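The core idea is easy to sketch in code. Below is a minimal, illustrative foveated-rendering loop in Python with NumPy; the render_fn callback, the roughly 5 percent foveal patch, and the 4x peripheral downsampling are assumptions chosen for illustration, not Varjo’s actual pipeline (which uses two physical displays and an optical combiner rather than software compositing).

```python
# Minimal sketch of foveated rendering: composite a full-resolution patch
# around the gaze point over a cheaply upscaled low-resolution frame.
# render_fn and all parameters are illustrative assumptions.
import numpy as np

def render_foveated(render_fn, width, height, gaze_x, gaze_y,
                    fovea_frac=0.05, periphery_scale=4):
    """render_fn(w, h) must return an (h, w, 3) uint8 frame at w x h."""
    # Peripheral pass: render at reduced resolution, then stretch to full size.
    lw = -(-width // periphery_scale)   # ceil division so upscaling covers the frame
    lh = -(-height // periphery_scale)
    low = render_fn(lw, lh)
    frame = low.repeat(periphery_scale, axis=0).repeat(periphery_scale, axis=1)
    frame = frame[:height, :width]

    # Foveal pass: a patch covering ~fovea_frac of the field of view, so each
    # side is sqrt(fovea_frac) of the frame, clamped to stay inside the frame.
    side = fovea_frac ** 0.5
    fw, fh = int(width * side), int(height * side)
    x0 = min(max(gaze_x - fw // 2, 0), width - fw)
    y0 = min(max(gaze_y - fh // 2, 0), height - fh)

    # A real renderer would rasterize only the foveal viewport; rendering
    # the whole frame and cropping keeps this sketch simple.
    full = render_fn(width, height)
    frame[y0:y0 + fh, x0:x0 + fw] = full[y0:y0 + fh, x0:x0 + fw]
    return frame

# Example with a dummy renderer that fills a horizontal gradient.
if __name__ == "__main__":
    def dummy(w, h):
        row = np.linspace(0, 255, w).astype(np.uint8)
        return np.broadcast_to(row, (h, w))[..., None].repeat(3, axis=2)

    out = render_foveated(dummy, 640, 480, gaze_x=320, gaze_y=240)
    print(out.shape)  # (480, 640, 3)
```

With accurate eye tracking, only the small foveal patch ever has to be rendered at full resolution, which is where the computing savings described below come from.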

In theory, foveated rendering can significantly cut down on the kind of computing power needed to show you super detailed images, and, perhaps, make it possible to pack great VR and AR experiences into less powerful devices, like a cell phone or a lightweight, untethered headset. But it’s still mainly in the research stage, as it’s thought to require extremely precise eye-tracking technology in order to work well (and to prevent nausea, or at least annoyance).

And, in fact, the prototype Varjo showed me didn’t even include eye tracking; it used just the Oculus Rift’s standard built-in tracking, which keeps tabs on the position and orientation of your head.

Varjo plans to add the ability to track your gaze, but, even so, it could prove tricky to make this work well. Emily Cooper, a research assistant professor at Dartmouth who studies 3-D vision and how we view displays, notes that eye tracking can be hard to calibrate and isn’t always consistent. One reason is that while we might look at the same object in the same spot over and over, we don’t always do it with the exact same part of our retina—which could throw off an eye tracker.

“It’s always important to keep in mind that people’s vision isn’t perfect,” Cooper says. “That can be a benefit—foveated rendering kind of exploits that in a way. But it can kind of get in the way sometimes.”
