After experiencing a video call through Google’s Project Starline, I found the most impressive part of the demo to be the ability to make meaningful eye contact.
Like many others, I’ve participated in more than my fair share of video calls in the past four years. Work moved online, which meant meetings moved online, and for a time, our friends and family were only available online too. We’re past the worst of that period, but the inherent downsides of video calls still feel fresh and obvious. “Zoom fatigue” remains very much a part of our common vocabulary.
One of the many things I’m self-conscious about during video calls is where I should be looking. The other person is often squarely in the middle of my screen, while the camera is typically at the top. My eyes are naturally drawn to the person, but then, on camera, I appear to be looking downward. If I look at the camera instead, I can only see the other person in my peripheral vision.
Can you tell I’ve thought a bit too much about this?
It all became a bit more real for me a few years ago, when my partner moved across the country to accept a job offer. If I was self-conscious about where to look in video calls before, I was doubly so in the time before I could move to rejoin her. Standard video calls were all we had, so we made do. But it was much harder to make a human connection.
In the last few years, Google has been building a better vision of what video calling could be, having first unveiled Project Starline in 2021. The premise is an interesting one: instead of seeing a flat 2D video of the other person, you’re shown a 3D render of them, reconstructed in real time.
I had the opportunity to try out Project Starline last month at Google I/O, and I thought I knew what to expect. “3D” became something of a buzzword in the early 2010s following the breakout success of Avatar in 2009. Best Buy had demos of 3D televisions, and even some phone makers were experimenting with glasses-free 3D screens (anyone remember the HTC Evo 3D?).
My contribution to that fad was preordering the Nintendo 3DS ahead of its 2011 launch (shoutout to fellow “3DS Ambassador Program” members). The handheld’s glasses-free 3D worked by splitting the screen’s output through a parallax barrier, sending a slightly different image to each of your eyes. Since the Nintendo 3DS had no way of tracking your face, you essentially had to hold it at a specific distance and angle for the effect to work correctly.
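If you’re curious how that splitting works, here’s a minimal sketch in Python. To be clear, this is my own toy illustration with a hypothetical interleave_stereo helper, not anything from Nintendo’s actual hardware: a parallax barrier is a layer of slits in front of the panel that hides alternating pixel columns from each eye, so the screen interleaves two views column by column.

```python
import numpy as np

def interleave_stereo(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two H x W x 3 images column by column.

    Behind a parallax barrier, the even columns are visible only to
    one eye and the odd columns only to the other, so each eye sees
    its own image. The catch: the geometry only lines up from one
    narrow sweet spot, which is why the 3DS had to be held just so.
    """
    assert left.shape == right.shape, "both views must match in size"
    combined = np.empty_like(left)
    combined[:, 0::2] = left[:, 0::2]   # even columns for one eye
    combined[:, 1::2] = right[:, 1::2]  # odd columns for the other
    return combined
```

That fixed column pattern is exactly why the effect falls apart when you move: the barrier can’t follow your head.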
Was it impractical? Yes. But was it very, very cool? Also yes.
From those early experiences with glasses-free 3D, I expected Project Starline to be a bit wonky but workable. I spoke briefly in person with Andrew Nartker, Google’s GM for Project Starline, then sat at an empty table across from the unassuming hardware. It’s been over a month since my demo, and I can still distinctly remember my face lighting up with amazement when the call began.
Andrew was now sitting across from me, as though a portal had opened where a flat screen once was. I shifted side to side in my swivel chair, but the illusion didn’t break down; moving to the side simply revealed a different angle of the other person. Clearly (and perhaps unsurprisingly), 3D technology has progressed significantly since the Nintendo 3DS’s release. I assume the same cameras that were capturing me in 3D were also tracking my eyes and adjusting the display to match their locations.
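To make that guess concrete (and this is purely my speculation about the general principle, not Google’s published pipeline), here’s a toy Python sketch of head-tracked rendering: every frame, project the scene onto the screen plane from wherever the tracker says each eye currently is, so the perspective shifts with you rather than demanding a fixed sweet spot.

```python
import numpy as np

def render_view(scene_points, eye, screen_z=0.0):
    """Project 3D points onto the screen plane (z = screen_z) as seen
    from one tracked eye position, using a simple pinhole projection.
    Redoing this every frame, for each eye, is what would keep the
    'portal' illusion intact as the viewer moves around."""
    eye = np.asarray(eye, dtype=float)
    projected = []
    for p in np.asarray(scene_points, dtype=float):
        # Where does the ray from the eye through this point cross the screen?
        t = (screen_z - eye[2]) / (p[2] - eye[2])
        projected.append(eye[:2] + t * (p[:2] - eye[:2]))
    return np.array(projected)

# Hypothetical eye positions from a face tracker (meters; screen at z = 0).
left_eye = [-0.032, 0.0, 0.6]
right_eye = [0.032, 0.0, 0.6]
scene = [[0.0, 0.1, -0.5], [0.1, -0.1, -0.4]]  # points "behind" the screen

frame_left = render_view(scene, left_eye)    # view steered to the left eye
frame_right = render_view(scene, right_eye)  # view steered to the right eye
```

Feed in new eye positions and the projection changes accordingly; points at different depths move by different amounts, which is the parallax that made leaning around in my chair reveal new angles of Andrew.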
