How to do a low bandwidth, retinal resolution video call

Not everybody loves video calls, but there are times when they are great. I like them with family, and I try to insist on them when negotiating, because body language is important. So I’ve watched as we’ve increased the quality and ease of use.

The ultimate goals would be “retinal” resolution — where the resolution surpasses your eye — along with high dynamic range, stereo, light field, telepresence mobility and VR/AR with headset image removal. Eventually we’ll be able to make a video call or telepresence experience so good it’s a little hard to tell from actually being there. This will affect how much we fly for business meetings, travel inside towns, life for bedridden and low mobility people and more.

Here’s a proposal for how to provide that very high or retinal resolution without needing hundreds of megabits of high quality bandwidth.

Many people have observed that the human eye is high resolution only in the center of attention, a region known as the fovea centralis. If you make a display that's sharp where a person is looking and blurry out at the edges, the eye won't notice, until of course it quickly moves to another section of the image and the tunnel vision becomes apparent.
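
To make this concrete: a common approximation (not from this article) has relative acuity falling as e2/(e2+e) with eccentricity e, where e2 is a fitted constant around 2.3 degrees. A minimal Python sketch, with that constant as an assumption:

    def relative_acuity(eccentricity_deg: float, e2: float = 2.3) -> float:
        """Approximate relative visual acuity at a given eccentricity.

        Uses the common cortical-magnification approximation
        acuity ~ e2 / (e2 + eccentricity); e2 = 2.3 degrees is a typical
        fitted value, assumed here rather than taken from the article.
        """
        return e2 / (e2 + eccentricity_deg)

    # Just 10 degrees from the fovea, the needed pixel density is already
    # below 20% of the peak; at 30 degrees it is below 10%.
    for ecc in (0, 2, 10, 30):
        print(f"{ecc:>2} deg: {relative_acuity(ecc):.0%} of foveal resolution")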

Decades ago, people designing flight simulators combined "gaze tracking," where you detect in real time where a person is looking, with the foveal concept, so that the simulator only rendered the scene in high resolution where the pilot's eyes were. In those days in particular, rendering a whole immersive scene at high resolution wasn't possible; even today it's a bit expensive. The trick is that you have to be fast: when the eye darts to a new location, you have to render it at high resolution within milliseconds, or we notice. Of course, to an outside viewer such a system looks crazy, and even with today's technology it's still challenging to make it work.

With a video call, it's even more challenging. If a person moves their eyes (or, in AR/VR, their head) and you need a high resolution stream of the new point of attention, it can take a long time, perhaps hundreds of milliseconds, to send that signal to the remote camera, have it adjust the feed, and get the new feed back to you. The user will inevitably see their new target as blurry for far too long. That would still be workable, but it would not be comfortable or seem real. For VR video conferencing it's an issue even for people turning their heads; for now, a high resolution remote VR experience would probably require sending a full-resolution half-sphere of video. The delay is probably tolerable only when a person turns their head far enough to look behind them.
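
A rough latency tally shows why this is a problem. Every number below is an illustrative assumption for a long-distance call, not a measurement:

    # Hypothetical round-trip budget for re-pointing the remote camera's
    # high-resolution stream after a saccade (all numbers assumed).
    budget_ms = {
        "gaze tracker detects saccade": 5,
        "uplink of new gaze target": 40,
        "remote encoder re-centers the foveal region": 30,
        "downlink of the first re-centered frame": 40,
        "decode and display": 10,
    }
    print(f"total: ~{sum(budget_ms.values())} ms")
    # ~125 ms, versus the few milliseconds the eye allows after a
    # saccade before the blur at the new target becomes visible.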

One opposite approach being taken for low bandwidth video is the use of "avatars": animated cartoons of the other speaker driven by motion capture on the far end. You've seen such characters in movies: Sméagol, the blue Na'vi of Avatar, and perhaps the young Jeff Bridges (acted by the older Jeff Bridges) in Tron: Legacy. Cartoon avatars are preferred because of what we call the "uncanny valley": people notice flaws in attempts at total realism but forgive them in cartoonish renderings. We are now able to do moderately decent realistic renderings, however, and this is slowly improving.

My thought is to combine foveal video with animated avatars for brief moments after saccades and then gently blend them towards the true image when it arrives. Here’s how.

  1. The remote camera will send video with increasing resolution towards the foveal attention point. It will also scan the entire scene and capture all motion of the face and body, probably with 3D scanning techniques like time-of-flight or structured light. In spare background bandwidth, it will keep updating a static model of the people in the scene and the room.
  2. Upon a saccade, the viewer’s display will immediately (within milliseconds) combine the blurry image of the new target with the motion capture data, along with the face model data received, and render a generated view of the new target. It will transmit the new target to the remote.
  3. The remote, when receiving the new target, will now switch the primary video stream to a foveal density video of it.
  4. When the new video stream starts arriving, the viewer's display will attempt to blend them, creating a plausible transition between the rendered scene and the real scene, gradually correcting any differences between them until the video is 100% real.
  5. In addition, both systems will be making predictions about what the likely target of next attention is. We tend to focus our eyes on certain places, notably the mouth and eyes, so there are some places that are more likely to be looked at next. Some portion of the spare bandwidth would be allocated to also sending those at higher resolution — either full resolution if possible, or with better resolution to improve the quality of the animated rendering.
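
Here is a minimal sketch of the viewer-side half of steps 2 and 4, treating frames as image arrays. The function names, the blend length, and the linear cross-fade are my assumptions; the proposal itself does not specify them:

    import numpy as np

    BLEND_FRAMES = 20  # how many frames to cross-fade rendered -> real

    def blend(rendered: np.ndarray, real: np.ndarray, alpha: float) -> np.ndarray:
        """Step 4: cross-fade; alpha=0 is the generated view, alpha=1 is real."""
        return (1.0 - alpha) * rendered + alpha * real

    def frames_after_saccade(rendered_stream, real_stream, arrival_frame: int):
        """Yield what the viewer sees on each frame after a saccade.

        Until `arrival_frame`, only the locally generated view (blurry wide
        video plus motion capture and the face model) exists (step 2);
        afterwards we blend gradually toward the true foveal video (step 4).
        """
        for i, (rendered, real) in enumerate(zip(rendered_stream, real_stream)):
            if i < arrival_frame:
                yield rendered
            else:
                alpha = min(1.0, (i - arrival_frame + 1) / BLEND_FRAMES)
                yield blend(rendered, real, alpha)

    # Toy usage: the real stream becomes available 8 frames after the saccade.
    rendered = (np.full((1, 1), 0.4) for _ in range(30))
    real = (np.full((1, 1), 0.5) for _ in range(30))
    for shown in frames_after_saccade(rendered, real, arrival_frame=8):
        pass  # a real client would display `shown` here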

Today, the animated rendering will be both slightly wrong and subject to the uncanny valley problem. My hope is that if it is short-lived enough, it will be less noticeable, or at least not that bothersome. It will be possible to trade off how long it takes to blend the generated video over to the real video: the longer you take, the less jarring any error correction will be, but the longer the image is "uncanny."
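
A toy calculation of that trade-off, with the rendering error, frame rate, and blend lengths all assumed for illustration:

    # Fixed error between rendered and real views: a longer blend means a
    # smaller, less jarring correction per frame, but a longer uncanny period.
    error_px = 12  # assumed position error at the saccade target
    for blend_ms in (50, 150, 400):
        frames = blend_ms / (1000 / 60)  # frame count at 60 fps
        print(f"{blend_ms:>3} ms blend: {error_px / frames:.1f} px corrected "
              f"per frame, uncanny imagery on screen for {blend_ms} ms")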

There are about 100 million photoreceptors in the whole eye, but only about a million nerve fibers going out. It would still be expensive to deliver full resolution at the attention spot and the most likely next spots, but it's much less bandwidth than sending the whole scene at that quality. Even if full resolution is not delivered, much better resolution can be offered.
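
A back-of-envelope model suggests the scale of the savings. Using the acuity falloff sketched earlier, with an assumed field of view and foveal pixel density, a foveated frame needs on the order of a hundredth of the pixels of a uniform full-resolution frame:

    import math

    FOV_DEG = 100    # assumed field of view, degrees
    PPD_FOVEAL = 60  # ~60 pixels per degree roughly matches 20/20 acuity

    def foveated_pixels(e2: float = 2.3, ring_deg: float = 1.0) -> float:
        """Sum pixels over concentric rings around the gaze point, with
        density falling as e2 / (e2 + eccentricity)."""
        total, ecc = 0.0, 0.0
        while ecc < FOV_DEG / 2:
            density = PPD_FOVEAL * e2 / (e2 + ecc)  # pixels per degree
            ring_area = math.pi * ((ecc + ring_deg) ** 2 - ecc ** 2)
            total += ring_area * density ** 2
            ecc += ring_deg
        return total

    uniform = (FOV_DEG * PPD_FOVEAL) ** 2
    print(f"uniform:  {uniform / 1e6:5.1f} Mpixels per frame")              # ~36.0
    print(f"foveated: {foveated_pixels() / 1e6:5.1f} Mpixels per frame")    # ~0.3
    # Roughly two orders of magnitude fewer pixels in this toy model, before
    # adding back the predicted next targets and background model updates.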

Stereo and simulated 3D

You can also do this in stereo to provide 3D. Another interesting approach, developed at CMU, is called pseudo-3D; I recommend you check out the video. That system captures the background and moves the flat head against it as the viewer moves their head. The result looks surprisingly good.
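
As a rough sketch of how that kind of parallax effect works: shift the flat head layer against the background in proportion to the viewer's head motion and an assumed depth separation between the layers. The parameters are illustrative, not taken from the CMU system:

    def pseudo_3d_shift_px(head_motion_m: float,
                           viewing_distance_m: float = 0.6,
                           depth_separation_m: float = 0.3,
                           pixels_per_m: float = 4000.0) -> float:
        """Horizontal shift of the head layer against the background.

        Toy parallax model: a layer depth_separation_m in front of the
        background appears to move against it by roughly
        head_motion * depth_separation / viewing_distance.
        All defaults are assumptions for illustration.
        """
        shift_m = head_motion_m * depth_separation_m / viewing_distance_m
        return shift_m * pixels_per_m

    # A 5 cm head movement shifts the head layer ~100 px against the background.
    print(f"{pseudo_3d_shift_px(0.05):.0f} px")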

