Submesh-based updates in an extended reality environment

    Publication Number: US12086920B1

    Publication Date: 2024-09-10

    Application Number: US17515325

    Filing Date: 2021-10-29

    Applicant: SPLUNK INC.

    CPC classification number: G06T15/04 G06T15/08 G06T17/205

    Abstract: Various implementations set forth a computer-implemented method for scanning a three-dimensional (3D) environment. The method includes generating, in a first time interval, a first extended reality (XR) stream based on a first set of meshes representing a 3D environment, transmitting, to a remote device, the first XR stream for rendering a 3D representation of a first portion of the 3D environment in a remote XR environment, determining that the 3D environment has changed based on a second set of meshes representing the 3D environment and generated subsequent to the first time interval, generating a second XR stream based on the second set of meshes, and transmitting, to the remote device, the second XR stream for rendering a 3D representation of at least a portion of the changed 3D environment in the remote XR environment.
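
    A minimal sketch of the submesh-level change detection this abstract describes, assuming a per-submesh fingerprint comparison between scan intervals; the Submesh structure, its field names, and the hashing approach are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch: detect which submeshes changed between two scans of
# the 3D environment, so only those submeshes feed the second XR stream.
import hashlib
from dataclasses import dataclass

@dataclass
class Submesh:
    submesh_id: str
    vertices: list   # list of (x, y, z) tuples
    faces: list      # list of (i, j, k) vertex-index triples

def submesh_digest(mesh: Submesh) -> str:
    """Stable fingerprint of a submesh's geometry."""
    h = hashlib.sha256()
    for v in mesh.vertices:
        h.update(repr(v).encode())
    for f in mesh.faces:
        h.update(repr(f).encode())
    return h.hexdigest()

def changed_submeshes(first_scan: list, second_scan: list) -> list:
    """Return submeshes from the second scan that are new or whose geometry
    differs from the first scan, i.e. the payload of the second XR stream."""
    previous = {m.submesh_id: submesh_digest(m) for m in first_scan}
    return [m for m in second_scan
            if previous.get(m.submesh_id) != submesh_digest(m)]
```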

    Interactions in networked remote collaboration environments

    Publication Number: US11798235B1

    Publication Date: 2023-10-24

    Application Number: US17086321

    Filing Date: 2020-10-30

    Applicant: SPLUNK INC.

    Abstract: Various implementations of the present application set forth a method comprising generating three-dimensional data and two-dimensional data representing a physical space that includes a real-world asset, generating an adaptable three-dimensional (3D) representation of the physical space based on the two-dimensional and three-dimensional data, where the adaptable 3D representation includes a plurality of coordinates representing different positions in 3D coordinate space corresponding to the physical space, transforming the adaptable 3D representation into geometry data comprising a set of vertices, faces comprising edges between pairs of vertices, and texture data, transmitting the geometry data to a remote device, wherein the remote device constructs, based on the geometry data, the adaptable 3D representation of the physical space for display at a location of the remote device in a remote environment, and modifies, based on an input, at least one of a dimension or a position of the adaptable 3D representation.
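
    The sketch below illustrates, under assumed structures and a JSON wire format that the abstract does not specify, how an adaptable 3D representation might be packed as vertex, face, and texture data for a remote device, which can then rescale or reposition it:

```python
# Illustrative sketch (field names and format are assumptions, not the
# patent's): serialize geometry data for transmission, and apply a
# remote-side edit that changes dimension (scale) or position.
import json

def pack_geometry(vertices, faces, texture_uri) -> bytes:
    """vertices: list of [x, y, z]; faces: list of vertex-index lists;
    texture_uri: reference to the associated texture data."""
    return json.dumps({
        "vertices": vertices,
        "faces": faces,        # each face is defined by edges between vertex pairs
        "texture": texture_uri,
    }).encode("utf-8")

def apply_edit(representation: dict, scale=None, position=None) -> dict:
    """Modify a dimension and/or position of the reconstructed representation."""
    if scale is not None:
        representation["vertices"] = [[c * scale for c in v]
                                      for v in representation["vertices"]]
    if position is not None:
        dx, dy, dz = position
        representation["vertices"] = [[v[0] + dx, v[1] + dy, v[2] + dz]
                                      for v in representation["vertices"]]
    return representation
```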

    Interaction tools in networked remote collaboration

    Publication Number: US11734886B1

    Publication Date: 2023-08-22

    Application Number: US17246254

    Filing Date: 2021-04-30

    Applicant: SPLUNK INC.

    CPC classification number: G06T17/10 G06T19/20 G06T2207/10028

    Abstract: In various embodiments, a method comprises generating, based on first sensor data captured by a depth sensor on a mobile device, three-dimensional data representing a physical space that includes a real-world asset, generating, based on second sensor data captured by an image sensor, two-dimensional data representing the physical space, generating an adaptable 3D representation of the physical space based on the three-dimensional and two-dimensional data, the adaptable representation including coordinates representing different positions in a 3D-coordinate space corresponding to the physical space, where the coordinates encapsulate a digital representation of the asset, transforming the adaptable representation into geometry data comprising a set of vertices and a set of faces comprising edges between vertices, applying, based on a first input, a first color along a specified path that appears on a face to generate a first paint path, and transmitting, to a remote device, data corresponding to the first input.
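
    A rough sketch of the paint-path interaction described above, using a hypothetical PaintPath data model (the field names and event format are assumptions, not the patent's): a colored stroke is recorded along points on a mesh face and serialized as the input data transmitted to a remote device:

```python
# Hypothetical paint-path annotation: record a colored stroke on a mesh
# face and emit the input event sent to remote collaborators.
from dataclasses import dataclass, field

@dataclass
class PaintPath:
    face_index: int                               # face the stroke lies on
    color: tuple                                  # (r, g, b)
    points: list = field(default_factory=list)    # 3D points along the path

    def extend(self, point):
        self.points.append(point)

    def to_event(self) -> dict:
        """Input event describing the stroke, suitable for transmission."""
        return {"type": "paint_path",
                "face": self.face_index,
                "color": self.color,
                "points": self.points}

# Example: a red stroke across face 12
stroke = PaintPath(face_index=12, color=(255, 0, 0))
stroke.extend((0.10, 0.02, 0.35))
stroke.extend((0.12, 0.02, 0.40))
event = stroke.to_event()   # payload transmitted to the remote device
```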

    Mesh updates in an extended reality environment

    Publication Number: US11544904B1

    Publication Date: 2023-01-03

    Application Number: US17086297

    Filing Date: 2020-10-30

    Applicant: SPLUNK INC.

    Abstract: Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating a 3D representation of the 3D environment that includes one or more 3D meshes. The method also includes determining at least a portion of the 3D environment that falls within a current frame captured by an image sensor. The method further includes generating one or more additional 3D meshes representing the at least a portion of the 3D environment and combining the one or more additional 3D meshes with the one or more 3D meshes into an update to the 3D representation of the 3D environment.
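
    A minimal sketch of the update step, assuming a region-keyed mesh representation and a caller-supplied visibility test (both assumptions, not the patent's data model): meshes generated for the portion of the environment in the current frame are combined with the existing meshes to form the updated 3D representation:

```python
# Hypothetical mesh-update step for the method described above.
def regions_in_current_frame(representation: dict, is_visible) -> list:
    """Return the region ids of the representation that fall within the
    current camera frame, using a caller-provided visibility predicate."""
    return [region for region in representation if is_visible(region)]

def combine_meshes(existing: dict, additional: dict) -> dict:
    """Merge newly generated meshes into the existing representation; new
    meshes replace stale meshes for the same region and add meshes for
    regions scanned for the first time."""
    updated = dict(existing)
    updated.update(additional)
    return updated
```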
