
Oculus Chief Scientist Speaks about Virtual Reality in the Lab and on Your Face

Michael Abrash says he’s looking at “specific aspects” of how VR affects us over time.
October 13, 2017

While many of us are impatient with virtual reality—the best headsets are still too expensive, they need to be tethered to beefy computers, and there isn’t all that much cool stuff to do anyway—Michael Abrash takes a long view.

As the chief scientist for Oculus, which is owned by Facebook, he’s concentrating on how to make the technology better by conducting a range of research in the company’s labs—looking at everything from how to improve the ways we focus on virtual objects to the best way of tracking us in space while we explore VR.

Abrash, who previously worked with Oculus chief technology officer John Carmack on the video game Quake and worked at the game and gaming software company Valve, has been immersed in virtual reality for years now, but he still sounds giddy when he talks about it. He spoke with MIT Technology Review about some of the research Oculus is conducting and the sometimes tough choices content creators will have to make as technology keeps making it easier to build realistic experiences in VR.

We still know very little about how the use of virtual reality affects us over time. Is this something you’re working on at all in your research lab?

We’re working on specific aspects of it. For example, we want to know how much difference it makes that you have depth of focus. In terms of how VR affects people overall, that’s a pretty general question that I’m not sure how I would answer. Part of it is that until we get to the point where VR is something you use for extended periods, it’s really going to be hard to evaluate. Part of it is even getting it good enough that we can do the studies properly.

We don’t need to look all that much like ourselves in virtual reality for others to “recognize” us; the social app Facebook Spaces takes this route by presenting us as cartoon versions of ourselves. How are you studying what makes us, well, us, so that other people who know us in real life will want to interact with us in VR?

What makes you a unique person that other people will respond to that way? It is not about literal things. Our lab in Pittsburgh is really focused on that question of what constitutes social interaction—what are the important cues? If you ask me, my guess is there are a thousand of them, and about five to 10 of them are the things that would really matter, that would make you feel completely satisfied about an interaction between us.

You see how your hands just went? Well, it might be that if I were to meet you in virtual reality and saw your body language, I would be like, “Ah, that’s Rachel.” I wouldn’t even have to think it; I would just feel like I was with you. There are things like the way that you smile, the way your eyebrows raise, the way that you nod, the way your hands go, even the way that you sit.

Even if I’m comfortable with you being you in VR, we’re still experimenting a lot of the time, and sometimes it doesn’t go so well. For instance, Mark Zuckerberg apologized earlier this week after being criticized for a Facebook Spaces live chat he took part in that included a virtual tour of storm-ravaged Puerto Rico via 360° footage. What was your take on it?

The honest truth is I haven’t seen it. So I actually don’t have an opinion.

Virtual-reality games and other content are getting very realistic, including a combat-themed game coming from Respawn Entertainment that’s described as giving you a chance to “experience life closer to what a soldier would experience in real combat.” How careful do we have to be about the kinds of content that we’re creating in VR?

I’m trying to create the platform that will enable people to do things, and there’s an interesting debate to be had about what things it’s really valuable to do with that platform. But I could say that when they developed the Alto at Xerox PARC, which ultimately led, for example, to Quake—well, you could say, “Are first-person shooters a good or a bad thing?” And that’s a legitimate discussion to have. But also [it led to] word processors and spreadsheets, and the Internet and Twitter and Facebook and all those things, right? Technology comes out as a mix. Until you create the platform that enables creative people to explore that space, you don’t know what you’re going to get out of it.

And the thing is, the directions you can go in [with VR are] basically—what directions can you go in in reality? I mean, you wake up in the morning, and it’s like, what will I see, what can I do? And some of them are good and some of them are not great—it’s like that. It is really another space that’s potentially as big as reality. Which is the weird thing.

It’s super weird to think about it that way.

There will be those days where I just think, “Am I in a science fiction novel?” It’s sort of like the whole evolution of communication and digital technology, which you could take back to cave paintings or to the telephone or wherever you want. It’s all been this way of basically encrypting information, encoding it in a way that we can then re-expand in our brains so we understand. So you read a book and you have a sense of being places or whatever, but you don’t have a sense in the way I did when I stood on the edge of that [virtual] pit.

The richness of the experience that you can have in VR is that it involves all of you. It’s the thing your body is built to do, to experience that world, as opposed to having your brain kind of reconstruct it. So it’s just that much more powerful.
