Abstract:
An electronic device is described. The electronic device includes a processor. The processor is configured to render a first zone of an image. The processor is also configured to render a second zone of the image. The first zone has a higher tessellated level of detail than the second zone. The processor is further configured to present the first zone and the second zone on at least one vehicle window.
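As a minimal sketch of zone-based tessellation, the Python below subdivides a quad grid at a per-zone level of detail; the zone names, level values, and grid representation are illustrative assumptions, not details taken from the abstract.

```python
# Hypothetical sketch: a quad grid stands in for the tessellated
# geometry; the subdivision level is the "tessellated level of detail".

def tessellate_quad(width, height, level):
    """Subdivide a width x height quad into (2**level)**2 cells and
    return the grid vertices; a higher level means finer detail."""
    n = 2 ** level
    return [(x * width / n, y * height / n)
            for y in range(n + 1) for x in range(n + 1)]

def render_zones(zones):
    """Render each zone of the image at its own tessellation level."""
    for name, level in zones:
        verts = tessellate_quad(1.0, 1.0, level)
        print(f"{name}: level {level} -> {len(verts)} vertices")

# The first zone gets a higher tessellated level of detail than the
# second; both would then be presented on the vehicle window.
render_zones([("first_zone", 5), ("second_zone", 2)])
```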
Abstract:
The techniques disclosed herein include a first device including one or more processors configured to detect a selection of at least one target object external to the first device, and initiate a channel of communication between the first device and a second device associated with the at least one target object external to the first device. The one or more processors may be configured to receive audio packets from the second device in response to the selection of the at least one target object external to the first device, and decode the audio packets received from the second device to generate an audio signal. The one or more processors may be configured to output the audio signal based on the selection of the at least one target object external to the first device. The first device includes a memory, coupled to the one or more processors, configured to store the audio packets.
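The flow described above (select a target, open a channel, receive and decode packets, store them, output audio) can be sketched as below; every class and method name here is a hypothetical stand-in, not a real radio or codec API.

```python
# Hypothetical sketch of the selection -> channel -> decode -> output flow.

class AudioChannel:
    """Stand-in for the channel of communication with the second device."""
    def __init__(self, target_id):
        self.target_id = target_id

    def receive_packets(self):
        # A real system receives these over the air; two fabricated
        # payloads keep the sketch runnable.
        return [b"\x01\x02", b"\x03\x04"]

def decode(packets):
    """Concatenate packet payloads into a PCM-like audio signal."""
    return b"".join(packets)

def on_target_selected(target_id, packet_store):
    channel = AudioChannel(target_id)     # initiate the channel
    packets = channel.receive_packets()   # receive audio packets
    packet_store.extend(packets)          # the memory stores the packets
    signal = decode(packets)              # decode to an audio signal
    print(f"outputting {len(signal)} bytes of audio for {target_id}")

memory = []  # memory coupled to the one or more processors
on_target_selected("target_object_1", memory)
```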
Abstract:
Provided are systems, methods, and computer-readable media for including parameters that describe fisheye images in a 360-degree video with the 360-degree video. The 360-degree video can then be stored and/or transmitted as captured by an omnidirectional camera, without transforming the fisheye images into some other format. The parameters can later be used to map the fisheye images to an intermediate format, such as an equirectangular format. The intermediate format can be used to store, transmit, and/or display the 360-degree video. The parameters can alternatively or additionally be used to map the fisheye images directly to a format that can be displayed in a 360-degree video presentation, such as a spherical format.
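One common way such parameters are used is an inverse mapping from equirectangular pixels back into the fisheye image; the sketch below assumes an equidistant fisheye model and an illustrative parameter set (image center, circle radius, field of view), which the abstract does not specify.

```python
# Hypothetical sketch: equidistant fisheye -> equirectangular remap
# driven by per-image parameters of the kind carried with the video.
import math

def fisheye_to_equirect(src_px, params, out_w, out_h):
    cx, cy = params["cx"], params["cy"]
    radius, fov = params["radius"], params["fov"]
    out = [[None] * out_w for _ in range(out_h)]
    for j in range(out_h):
        lat = math.pi * (0.5 - (j + 0.5) / out_h)        # +pi/2 .. -pi/2
        for i in range(out_w):
            lon = 2 * math.pi * ((i + 0.5) / out_w - 0.5)
            # Direction on the unit sphere; the lens looks along +z.
            x = math.cos(lat) * math.sin(lon)
            y = math.sin(lat)
            z = math.cos(lat) * math.cos(lon)
            theta = math.acos(max(-1.0, min(1.0, z)))    # angle off axis
            if theta > fov / 2:
                continue                                 # outside the lens
            r = radius * theta / (fov / 2)               # equidistant model
            phi = math.atan2(y, x)
            u, v = int(cx + r * math.cos(phi)), int(cy + r * math.sin(phi))
            if 0 <= v < len(src_px) and 0 <= u < len(src_px[0]):
                out[j][i] = src_px[v][u]
    return out

src = [[v * 16 + u for u in range(16)] for v in range(16)]  # toy image
params = {"cx": 8, "cy": 8, "radius": 8, "fov": math.pi}    # illustrative
eq = fisheye_to_equirect(src, params, 32, 16)
print(sum(p is not None for row in eq for p in row), "pixels mapped")
```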
Abstract:
An electronic device is described. The electronic device includes a memory. The electronic device also includes a very long instruction word (VLIW) circuit. The VLIW circuit includes an asynchronous memory controller. The asynchronous memory controller is configured to asynchronously access the memory to render different levels of detail. The electronic device may include a non-uniform frame buffer controller configured to dynamically access different subsets of a frame buffer. The different subsets may correspond to the different levels of detail.
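Hardware is hard to show directly, so the sketch below is a software analogy only: asynchronous tasks stand in for the asynchronous memory controller, and dict-keyed slices stand in for the non-uniform frame buffer subsets; none of these names come from the abstract.

```python
# Software analogy (not hardware): each level of detail owns its own
# subset of the frame buffer and is fetched asynchronously.
import asyncio

FRAME_BUFFER = bytearray(4096)
# Different subsets of the frame buffer correspond to different LODs.
LOD_SUBSETS = {0: slice(0, 2048), 1: slice(2048, 3072), 2: slice(3072, 4096)}

async def access_lod(lod):
    await asyncio.sleep(0)                 # yield, like an async request
    subset = FRAME_BUFFER[LOD_SUBSETS[lod]]
    print(f"LOD {lod}: fetched {len(subset)} bytes")

async def main():
    # Issue all accesses concurrently rather than in lockstep.
    await asyncio.gather(*(access_lod(lod) for lod in LOD_SUBSETS))

asyncio.run(main())
```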
Abstract:
The techniques disclosed herein include a first device for reading one or more tags in metadata, the first device including one or more processors configured to receive metadata from a second device wirelessly connected via a sidelink channel to the first device. The one or more processors may also be configured to read the metadata received from the second device to extract one or more tags representative of audio content, identify audio content based on the one or more tags, and output the audio content. The first device may also include a memory, coupled to the one or more processors, configured to store the metadata.
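A minimal sketch of the tag-reading step follows, assuming the metadata arrives as a small JSON record and that tags index into a local catalog; both the tag vocabulary and the catalog are hypothetical.

```python
# Hypothetical sketch: extract tags from received metadata, identify
# audio content by tag, store the metadata, output the content.
import json

CATALOG = {"ambient/rain": b"...", "speech/tour": b"..."}  # illustrative

def read_tags(metadata_bytes):
    """Extract the tags representative of audio content."""
    return json.loads(metadata_bytes).get("tags", [])

def identify_and_output(metadata_bytes, store):
    store.append(metadata_bytes)          # the memory stores the metadata
    for tag in read_tags(metadata_bytes):
        content = CATALOG.get(tag)
        if content is not None:
            print(f"outputting audio content for tag {tag!r}")

memory = []
identify_and_output(b'{"tags": ["ambient/rain"]}', memory)
```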
Abstract:
The techniques disclosed herein include a first device for receiving a communication signal from a second device, the first device including one or more processors configured to receive, in the communication signal, packets that represent a virtual image as part of a virtual teleportation of one or more visual objects embedded in the virtual image. The one or more processors may be configured to decode the packets that represent the virtual image, and output the virtual image at a physical location within a fixed environment. The first device may also include a memory configured to store the packets that represent the virtual image as part of the virtual teleportation of one or more visual objects embedded in the virtual image.
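The receive, decode, and place sequence can be sketched as below; the packet layout (sequence number plus payload) and the anchor coordinates are invented for illustration.

```python
# Hypothetical sketch: reassemble stored packets into a virtual image
# and output it at a fixed physical location.

def decode_packets(packets):
    """Reorder by sequence number and reassemble the virtual image."""
    return b"".join(payload for _, payload in sorted(packets))

def output_at(image, location):
    x, y, z = location
    print(f"rendering {len(image)}-byte virtual image at ({x}, {y}, {z})")

memory = [(1, b"\xaa\xbb"), (0, b"\xcc\xdd")]  # stored, possibly out of order
image = decode_packets(memory)
output_at(image, (2.0, 0.0, 1.5))              # fixed spot in the environment
```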
Abstract:
An electronic device is described. The electronic device includes a processor. The processor is configured to obtain images from a plurality of cameras. The processor is also configured to project each image to a respective 3-dimensional (3D) shape for each camera. The processor is further configured to generate a combined view from the images.
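A per-camera projection onto a 3D shape, followed by a naive combination, might look like the sketch below; the bowl-shaped surface and the pooling step are assumptions, since the abstract does not name a specific shape or blending rule.

```python
# Hypothetical sketch: project each camera's samples onto a bowl-shaped
# 3D surface oriented by that camera's yaw, then pool the results.
import math

def project_to_bowl(uv, cam_yaw):
    """Map a normalized image coordinate onto a bowl that rises with v."""
    u, v = uv
    theta = cam_yaw + (u - 0.5) * math.pi / 2   # horizontal spread
    r = 1.0 + v                                 # bowl widens with height
    return (r * math.cos(theta), r * math.sin(theta), v)

def combined_view(cameras):
    """Project every camera's samples and combine them into one set."""
    points = []
    for yaw, samples in cameras:
        points.extend(project_to_bowl(uv, yaw) for uv in samples)
    return points

cams = [(0.0, [(0.5, 0.0), (0.5, 1.0)]),        # front camera
        (math.pi, [(0.5, 0.0), (0.5, 1.0)])]    # rear camera
print(len(combined_view(cams)), "projected points")
```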
Abstract:
An apparatus is described. The apparatus includes an electronic device. The electronic device is configured to provide a surround view based on a combination of at least one stereoscopic view range and at least one monoscopic view range. A method is also described. The method includes obtaining a plurality of images from a respective plurality of lenses. The method also includes avoiding an obstructing lens based on rendering a stereoscopic surround view including a first rendering ellipsoid and a second rendering ellipsoid. Rendering the stereoscopic surround view includes natively mapping a first image of the plurality of images to a first range of the first rendering ellipsoid and natively mapping the first image to a second range of the second rendering ellipsoid.
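The native mapping of one image onto ranges of two rendering ellipsoids could be sketched as follows; the semi-axes, the angular ranges, and the per-ellipsoid offset are all illustrative values, not figures from the abstract.

```python
# Hypothetical sketch: sample one image across an angular range of each
# of two rendering ellipsoids (roughly, one per eye).
import math

def ellipsoid_point(a, b, c, lon, lat):
    """Point on an axis-aligned ellipsoid with semi-axes a, b, c."""
    return (a * math.cos(lat) * math.cos(lon),
            b * math.cos(lat) * math.sin(lon),
            c * math.sin(lat))

def map_image_to_range(axes, lon_range, n=4):
    """Natively map the image's columns across one angular range."""
    lo, hi = lon_range
    return [ellipsoid_point(*axes, lo + (hi - lo) * k / (n - 1), 0.0)
            for k in range(n)]

first = map_image_to_range((1.0, 1.0, 0.8), (0.0, math.pi / 2))
second = map_image_to_range((1.0, 1.0, 0.8), (0.1, math.pi / 2 + 0.1))
print(len(first) + len(second), "samples across the two ellipsoids")
```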