Tuesday, 8 March 2022

Rendering a video from GStreamer in VR (Oculus Quest 2)

I'm working on a robot that is controlled via a VR headset and sends a real-time video feed back to the headset.

I've chosen to go the native way on Android and now have everything I need to receive the video stream and decode it (using GStreamer), and also to send the control data to the robot via UDP.
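For reference, the receiving side is roughly this kind of pipeline (the port, payload type and codec here are just placeholders for whatever the robot actually sends):

/* Sketch: receive an H.264/RTP stream over UDP and decode it with
 * GStreamer, handing raw RGBA frames to the app via appsink.
 * Call gst_init() before this. Link against gstreamer-1.0 and
 * gstreamer-app-1.0. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data) {
    GstSample *sample = gst_app_sink_pull_sample(sink);
    GstBuffer *buf = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
        /* map.data now holds one decoded RGBA frame; copy it to
         * wherever the renderer picks it up. */
        gst_buffer_unmap(buf, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

GstElement *build_pipeline(void) {
    GstElement *pipeline = gst_parse_launch(
        "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
        "clock-rate=90000,encoding-name=H264,payload=96\" ! "
        "rtph264depay ! h264parse ! decodebin ! videoconvert ! "
        "video/x-raw,format=RGBA ! "
        "appsink name=sink emit-signals=true sync=false", NULL);
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), NULL);
    gst_object_unref(sink);
    return pipeline;
}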

The last thing to do (and the one I struggle with most, as I have no prior experience with computer graphics) is to draw the image (the decoded camera feed) to the screen. In the last few days I've been reading up on how Vulkan and OpenGL work, and I've also gone through the examples provided in the Oculus Mobile SDK (mainly VRCubeWorld_SurfaceView), but that's way too complex for what I need. I tried to simplify it so I could just draw two images, but then I thought:

Do I even need any of that? And this question might sound stupid, but I really don't have any prior experience doing this.

I mean, the example uses OpenGL to compute all the layers of the 3D scene, apply colors, and then fuse them together into a final frame that is passed to VrApi via the function:

vrapi_SubmitFrame2(appState.Ovr, &frameDesc);
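For context, this is roughly the shape of what the sample builds around that call (simplified from VRCubeWorld; names like colorSwapChain and projectionMatrix stand in for the sample's own variables):

/* One projection layer whose per-eye textures come from a swapchain,
 * wrapped in an ovrSubmitFrameDescription2. */
ovrLayerProjection2 layer = vrapi_DefaultLayerProjection2();
layer.HeadPose = tracking.HeadPose;  /* from vrapi_GetPredictedTracking2 */
for (int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++) {
    layer.Textures[eye].ColorSwapChain = colorSwapChain[eye];
    layer.Textures[eye].SwapChainIndex = swapChainIndex;
    layer.Textures[eye].TexCoordsFromTanAngles =
        ovrMatrix4f_TanAngleMatrixFromProjection(&projectionMatrix);
}

const ovrLayerHeader2 *layers[] = { &layer.Header };
ovrSubmitFrameDescription2 frameDesc = { 0 };
frameDesc.SwapInterval = 1;
frameDesc.FrameIndex = frameIndex;
frameDesc.DisplayTime = predictedDisplayTime;
frameDesc.LayerCount = 1;
frameDesc.Layers = layers;
vrapi_SubmitFrame2(appState.Ovr, &frameDesc);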

Can I just take those images and somehow force them into the frameDesc structure, skipping the whole OpenGL pipeline? If so, can anyone knowledgeable point me to a working solution?
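What I imagine (and would like confirmed) is something like the following: create a texture swapchain once, then each frame copy the decoded RGBA image into the current swapchain texture with a plain glTexSubImage2D, no shaders or scene rendering at all. The function names are from VrApi; videoWidth, videoHeight and rgbaPixels are placeholders for the frame coming out of GStreamer:

#include <GLES3/gl3.h>
#include "VrApi.h"

/* Created once, after the GL context exists. bufferCount 3 lets the
 * compositor read one texture while we write another. */
static ovrTextureSwapChain *chain;
static int chainLength;

void video_swapchain_create(int videoWidth, int videoHeight) {
    chain = vrapi_CreateTextureSwapChain3(VRAPI_TEXTURE_TYPE_2D, GL_RGBA8,
                                          videoWidth, videoHeight,
                                          /* mipLevels */ 1,
                                          /* bufferCount */ 3);
    chainLength = vrapi_GetTextureSwapChainLength(chain);
}

/* Called once per decoded frame with the RGBA pixels from GStreamer. */
void video_swapchain_upload(long long frameIndex, int w, int h,
                            const void *rgbaPixels) {
    int index = (int)(frameIndex % chainLength);
    GLuint tex = vrapi_GetTextureSwapChainHandle(chain, index);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
    glBindTexture(GL_TEXTURE_2D, 0);
}

If that's valid, both eyes' layer.Textures[eye].ColorSwapChain could point at the same chain, so one upload would serve both eyes. As far as I can tell a GL context is still needed just to own the swapchain textures, but that's far less than the full rendering pipeline.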

I don't need any kind of panning over the images, just to render them. Later I'll be using head sensor data, but it won't actually do anything with the "scene".
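For the later head-tracking part, I assume it would just be a matter of feeding the predicted pose into the layer, something like:

/* Query the predicted head pose for the upcoming frame and hand it to
 * the layer; the video itself would stay head-locked. */
double displayTime = vrapi_GetPredictedDisplayTime(appState.Ovr, frameIndex);
ovrTracking2 tracking = vrapi_GetPredictedTracking2(appState.Ovr, displayTime);
layer.HeadPose = tracking.HeadPose;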



