Methods and systems for generating a merged reality scene based on a virtual object and on a real-world object represented from different vantage points in different video data streams
Abstract:
An exemplary merged reality scene capture system (“system”) receives a first frame set of surface data frames from a plurality of three-dimensional (“3D”) capture devices disposed with respect to a real-world scene so as to have a plurality of different vantage points of the real-world scene. Based on the first frame set, the system generates a transport stream that includes color and depth video data streams for each of the 3D capture devices. Based on the transport stream, the system generates entity description data representative of a plurality of entities included within a 3D space of a merged reality scene. The plurality of entities includes a virtual object, a real-world object included within the real-world scene, and virtual viewpoints into the 3D space from which a second frame set of surface data frames is to be rendered, representing color and depth data for both the virtual and the real-world objects.
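The pipeline the abstract describes can be illustrated with a minimal sketch. All names here (`SurfaceDataFrame`, `build_transport_stream`, `generate_entity_description`, the entity kinds, and the placeholder object identifiers) are assumptions invented for illustration; they do not appear in the patent, and real implementations would use actual video codecs and 3D reconstruction rather than raw byte payloads.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical types sketching the claimed data flow; names are assumptions.

@dataclass
class SurfaceDataFrame:
    """Color and depth data captured from one vantage point at one instant."""
    color: bytes
    depth: bytes

def build_transport_stream(
    frame_set: Dict[str, SurfaceDataFrame],
) -> Dict[str, Dict[str, bytes]]:
    """Pack a first frame set into a transport stream containing one color
    video data stream and one depth video data stream per 3D capture device."""
    return {
        device_id: {"color_stream": f.color, "depth_stream": f.depth}
        for device_id, f in frame_set.items()
    }

@dataclass
class Entity:
    """One entity in the merged reality scene's 3D space."""
    kind: str  # "virtual_object" | "real_world_object" | "virtual_viewpoint"
    entity_id: str

def generate_entity_description(
    transport_stream: Dict[str, Dict[str, bytes]],
    virtual_object_ids: List[str],
    viewpoint_ids: List[str],
) -> List[Entity]:
    """Derive entity description data for the merged reality scene: the
    virtual objects, a real-world object recovered from the transport
    stream (placeholder here), and the virtual viewpoints from which the
    second frame set of surface data frames would later be rendered."""
    entities = [Entity("virtual_object", vid) for vid in virtual_object_ids]
    # Placeholder: a real system would reconstruct the object's surfaces
    # from the per-device color/depth streams in the transport stream.
    if transport_stream:
        entities.append(Entity("real_world_object", "real_object_0"))
    entities += [Entity("virtual_viewpoint", vp) for vp in viewpoint_ids]
    return entities
```

In this sketch the virtual viewpoints are plain entities alongside the objects, mirroring the claim's framing that viewpoints into the 3D space are themselves part of the entity description data rather than a separate rendering configuration.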