Abstract:
Systems and methods for compositing real-world video with a virtual object are disclosed. In one embodiment, the system includes a processor and a computer-readable non-transitory storage medium, the medium encoded with instructions that, when executed, cause the processor to perform operations including: receiving video of a video capture region from a camera coupled to an unmanned vehicle; obtaining a map representation of the video capture region; placing the virtual object into the map representation; rendering the video with the virtual object to generate a rendered video; displaying the rendered video; and, based on an elevation of the unmanned vehicle, updating the virtual object to transition between a top view of the virtual object and an elevation view or perspective view of the virtual object.
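The elevation-driven transition described above can be sketched as a blend factor between the two view modes. The threshold and band widths below are illustrative assumptions, not values from the disclosure:

```python
def select_view_mode(elevation_m, top_view_threshold_m=60.0, blend_band_m=10.0):
    """Return a blend factor in [0, 1]: 0 selects the elevation/perspective
    view, 1 selects the top (plan) view, with a linear cross-fade applied
    in the band just below the threshold."""
    low = top_view_threshold_m - blend_band_m
    if elevation_m <= low:
        return 0.0
    if elevation_m >= top_view_threshold_m:
        return 1.0
    return (elevation_m - low) / blend_band_m
```

A renderer could re-evaluate this factor each frame as the vehicle's telemetry updates and mix the two renderings of the virtual object accordingly.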
Abstract:
Systems and methods for correcting color of uncalibrated material are disclosed. Example embodiments include a system to correct color of uncalibrated material. The system may include a non-transitory computer-readable medium operatively coupled to one or more processors. The non-transitory computer-readable medium may store instructions that, when executed, cause the processors to perform a number of operations. One operation is to obtain a target image of a degraded target material with one or more objects. The degraded target material comprises degraded colors and light information corresponding to light sources in the degraded target material. Another operation is to obtain color reference data. Another operation is to identify an object in the target image that corresponds to the color reference data. Yet another operation is to correct the identified object in the target image. Another operation is to correct the target image.
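One simple way to realize a reference-based correction like the one above is a per-channel gain computed from a region whose true color is known. This is a minimal sketch assuming a diagonal (von Kries-style) correction; the patent's actual correction model may differ:

```python
import numpy as np

def correction_from_reference(degraded_patch, reference_patch):
    """Per-channel linear gains that map the degraded reference colors
    onto their known true values."""
    degraded = np.asarray(degraded_patch, dtype=float).reshape(-1, 3)
    reference = np.asarray(reference_patch, dtype=float).reshape(-1, 3)
    # ratio of mean true color to mean observed color per channel
    return reference.mean(axis=0) / np.maximum(degraded.mean(axis=0), 1e-6)

def apply_correction(image, gains):
    """Apply the gains to every pixel and clamp to the 8-bit range."""
    corrected = np.asarray(image, dtype=float) * gains
    return np.clip(corrected, 0, 255)
```

Here the gains estimated from the identified reference object are then applied to the whole target image, mirroring the two correction steps in the abstract.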
Abstract:
One or more embodiments of the present disclosure include a system for providing dynamic virtual reality ground effects. The system includes a user interface surface and multiple motors coupled to the user interface surface. At least one of the motors is coupled to a virtual reality component of an electronic device. A first motor of the multiple motors is driven by movement of the user interface surface and is used to generate a feedback electrical signal in response to the movement of the user interface surface. A second motor of the multiple motors is driven using the feedback electrical signal.
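The first-motor-to-second-motor coupling described above is essentially a feedback path; a control loop might scale the generated voltage into a clamped drive signal. The gain and supply limit below are purely illustrative assumptions:

```python
def second_motor_drive(back_emf_volts, gain=0.8, max_drive_volts=12.0):
    """Scale the feedback voltage produced by the first motor (driven by
    surface movement) into a drive signal for the second motor, clamped
    to the assumed supply limits."""
    drive = gain * back_emf_volts
    return max(-max_drive_volts, min(max_drive_volts, drive))
```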
Abstract:
Systems and methods are provided for presenting visual media on a structure having a plurality of unordered light sources, e.g., fiber optic light sources, light emitting diodes (LEDs), etc. Visual media can be created based on a computer model of the structure. Images of the structure can be analyzed to determine the location of each of the light sources. A lookup table can be generated based on the image analysis, and used to correlate pixels of the visual media to one or more of the actual light sources. A visual media artist or designer need not have prior knowledge of the order/layout of the light sources on the structure in order to create visual media to be presented thereon.
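The lookup-table step can be sketched as follows: once image analysis has located each light source on the structure, each light index is mapped to the media pixel at the corresponding normalized position. The coordinate convention and nearest-pixel mapping are assumptions for illustration:

```python
def build_light_lookup(detected_lights, media_width, media_height,
                       struct_width, struct_height):
    """Map each detected light (x, y on the structure, in structure units)
    to the media pixel at the same normalized position, yielding
    light_index -> (px, py)."""
    lookup = {}
    for i, (x, y) in enumerate(detected_lights):
        px = min(int(x / struct_width * media_width), media_width - 1)
        py = min(int(y / struct_height * media_height), media_height - 1)
        lookup[i] = (px, py)
    return lookup
```

At playback time, each frame of visual media would be sampled through this table so that light `i` shows the color of pixel `lookup[i]`, regardless of the physical wiring order of the lights.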
Abstract:
Features of the surface of an object of interest captured in a two-dimensional (2D) image are identified and marked for use in point matching to align multiple 2D images and to generate a point cloud representative of the surface of the object in a photogrammetry process. The features, which represent actual surface features of the object, may have their local contrast enhanced to facilitate their identification. Reflections on the surface of the object are suppressed by correlating such reflections with, e.g., light sources not associated with the object of interest, so that during photogrammetry such reflections can be ignored, resulting in the creation of a 3D model that is an accurate representation of the object of interest. Prior to local contrast enhancement and the suppression of reflection information, identification and isolation of the object of interest can be improved through one or more filtering processes.
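Local contrast enhancement of the kind mentioned above can be sketched as an unsharp-mask-style operation: each pixel's deviation from its local neighborhood mean is amplified. The window size and gain are illustrative assumptions, and a production pipeline would likely use an adaptive method instead:

```python
import numpy as np

def enhance_local_contrast(gray, window=5, gain=1.5):
    """Boost each pixel's deviation from its local mean so that small
    surface features stand out for point matching."""
    gray = np.asarray(gray, dtype=float)
    pad = window // 2
    padded = np.pad(gray, pad, mode='edge')
    h, w = gray.shape
    local_mean = np.zeros_like(gray)
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + window, j:j + window].mean()
    # amplify the detail (pixel minus local mean) and clamp to 8-bit range
    return np.clip(local_mean + gain * (gray - local_mean), 0, 255)
```

A flat region is left unchanged by this operation, while edges and textured surface features have their deviations magnified by the gain.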
Abstract:
A process generates, with a processor, a certificate of authenticity for a virtual item. Further, the process sends, with the processor, the certificate of authenticity to a decentralized network of computing devices such that two or more of the computing devices store the certificate of authenticity. The two or more of the computing devices receive, from a user device that provides a virtual reality experience in which a virtual item is purchased, a request for authentication of the certificate of authenticity. In addition, the two or more computing devices authenticate the certificate of authenticity based on one or more consistency criteria for the certificate of authenticity being met by the two or more computing devices.
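One plausible consistency criterion is that a quorum of nodes hold byte-identical copies of the certificate. The hash-and-count sketch below is an assumed realization, not the disclosure's actual protocol:

```python
import hashlib
from collections import Counter

def authenticate(certificate_copies, quorum=2):
    """Treat the certificate as authentic if at least `quorum` nodes hold
    byte-identical copies (compared via SHA-256 digests)."""
    digests = Counter(hashlib.sha256(copy).hexdigest()
                      for copy in certificate_copies)
    _, count = digests.most_common(1)[0]
    return count >= quorum
```

A tampered copy on one node simply fails to join the majority digest, so the authentication still succeeds as long as the quorum agrees.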
Abstract:
A user control apparatus has a laser emitter that emits a laser beam in a real-world environment. Further, the user control apparatus has an optical element that receives the laser beam and generates a plurality of laser beams such that a starting point and a plurality of endpoints, each corresponding to one of the plurality of laser beams, form a laser frustum. In addition, the user control apparatus has an image capture device that captures an image of a shape of the laser frustum based on a reflection of the plurality of laser beams from an object in the real-world environment so that a spatial position of the object in the real-world environment is determined for an augmented reality or virtual reality user experience.
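If the optical element emits near-parallel beams with a known fixed separation, the object's distance can be estimated from how wide the reflected pattern appears in the captured image, by similar triangles in a pinhole-camera model. Both that assumption and the parameter names below are illustrative:

```python
def distance_from_frustum(baseline_m, image_span_px, focal_px):
    """Pinhole-camera estimate: a known physical spread of the beam
    endpoints (baseline_m) that appears as image_span_px pixels implies
    the reflecting object is focal_px * baseline_m / image_span_px
    metres from the camera."""
    if image_span_px <= 0:
        raise ValueError("endpoints not resolved in the image")
    return focal_px * baseline_m / image_span_px
```

Repeating this for each captured frame yields a per-object range that an AR/VR runtime could use to place virtual content at the correct depth.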
Abstract:
There is provided a system for delivery of personalized audio including a memory and a processor configured to receive a plurality of audio contents, receive a first playback request from a first user device for playing a first audio content of the plurality of audio contents using a plurality of speakers, obtain a first position of a first user of the first user device with respect to each of the plurality of speakers, and play, using the plurality of speakers and object-based audio, the first audio content of the plurality of audio contents based on the first position of the first user of the first user device with respect to each of the plurality of speakers.
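Position-dependent object-based playback can be sketched as distance-weighted speaker gains. The inverse-distance weighting and constant-power normalization below are assumed choices, not the disclosure's rendering algorithm:

```python
import math

def speaker_gains(user_pos, speaker_positions):
    """Distance-based gains for one audio object: closer speakers receive
    more of the signal, and gains are normalized so their squares sum to 1
    (constant perceived power)."""
    dists = [math.dist(user_pos, s) for s in speaker_positions]
    raw = [1.0 / max(d, 1e-6) for d in dists]
    norm = math.sqrt(sum(g * g for g in raw))
    return [g / norm for g in raw]
```

Recomputing the gains as the user's tracked position changes keeps the first audio content localized to the first user.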
Abstract:
A window in a multi-window display configuration is provided. A gaze of one or more users is directed at the window. The multi-window display configuration has a plurality of windows that are each configured to display corresponding content. Further, a window attribute of the window is modified based upon the gaze. In addition, a request for the content corresponding to the window is sent to a server. The content corresponding to the window is received from the server. The content corresponding to the window is then displayed according to the modified window attribute at the window.
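The gaze-to-window mapping implied above reduces to a point-in-rectangle test over the window layout. The rectangle representation and attribute choice here are assumptions for illustration:

```python
def window_under_gaze(gaze_point, windows):
    """Return the id of the first window whose rectangle (x, y, w, h)
    contains the gaze point, or None if the gaze falls outside all
    windows; the caller can then modify that window's attribute
    (e.g., raise its streaming quality)."""
    gx, gy = gaze_point
    for win_id, (x, y, w, h) in windows.items():
        if x <= gx < x + w and y <= gy < y + h:
            return win_id
    return None
```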
Abstract:
Systems and methods for compositing real-world video with a virtual object are disclosed. In one embodiment, a method is disclosed that includes: generating, via a processor, a virtual representation on a display of a location of interest based in part on unmanned vehicle information collected by an unmanned vehicle; positioning, via the processor, a virtual object in the virtual representation on the display of the location of interest based on a user input, where the virtual representation of the location of interest includes a rendered view of the virtual object; and updating, via the processor, the rendered view of the virtual object on the display as the unmanned vehicle moves between a first aerial position and a second aerial position, where the rendered view of the virtual object changes from a first perspective view corresponding to the first aerial position to a second perspective view corresponding to the second aerial position.
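The perspective change between the two aerial positions can be characterized by the line-of-sight angle from the vehicle to the virtual object's anchor point. This geometric sketch assumes a simple (x, y, z) world frame with z as altitude:

```python
import math

def view_angle_deg(vehicle_pos, object_pos):
    """Angle between the vehicle's line of sight to the object and the
    ground plane: near 0 degrees implies an elevation-style view of the
    object, near 90 degrees a top (plan) view."""
    dx = object_pos[0] - vehicle_pos[0]
    dy = object_pos[1] - vehicle_pos[1]
    horizontal = math.hypot(dx, dy)
    height = vehicle_pos[2] - object_pos[2]
    return math.degrees(math.atan2(height, horizontal))
```

Re-evaluating this angle as the vehicle flies from the first aerial position to the second gives the renderer a continuous parameter for interpolating the virtual object's perspective.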