Abstract:
Systems and methods for reducing the amount of texture cache memory needed to store a texture atlas by using uniquely grouped refined triangles to create each texture atlas.
Abstract:
An optical sensing method capable of changing a sensing direction of an optical sensing module is applied to a portable device, which includes a housing, an optical sensing module and an optical diverting mechanism. The optical sensing module is disposed inside the housing and includes an optical emitter adapted to emit an optical sensing signal out of the housing and an optical receiver adapted to receive an optical modulated signal reflected from an external object. The optical diverting mechanism is adjacent to the optical sensing module. The optical sensing signal is projected directly in a first direction when it is not diverted by the optical diverting mechanism, and is transmitted in a second direction different from the first direction when it is diverted by the optical diverting mechanism.
Abstract:
A scene comprising a set of visual elements may allow a user to perform “zoom” operations in order to navigate the depth of the scene. The “zoom” semantic is often applied to simulate optical visual depth, wherein the visual elements are presented with different visual dimensions and visual resolution to simulate physical proximity or distance. However, the “zoom” semantic may be alternatively applied to other aspects of the visual elements of a scene, such as a user selection of a zoomed-in visual element, a “drill-down” operation on a data set, or navigation through a portal in a first data set to view a second data set. These alternative “zoom” semantics may be achieved by presenting the effects of a “zoom” operation within the scene on the visual presentation of the visual element in a manner other than an adjustment of the visual dimensions and resolution of the visual element.
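A minimal sketch of the alternative "zoom" semantic described above, where zooming into a visual element triggers a drill-down on the underlying data set rather than an optical scale change (the data and function names here are hypothetical, purely for illustration):

```python
# Illustrative data set: year-level aggregates backed by month-level detail.
sales = {
    "2023": {"Jan": 10, "Feb": 12},
    "2024": {"Jan": 14, "Feb": 9},
}

def zoom_in(data, selected_element):
    """Interpret a 'zoom' gesture as a drill-down: instead of enlarging
    the selected visual element, return the finer-grained data set
    behind it for presentation in the scene."""
    return data[selected_element]

# Zooming into the "2024" element reveals its month-level records.
detail = zoom_in(sales, "2024")
```

The same gesture could equally be routed to a selection action or to a portal into a second data set; the point is that the zoom input drives a semantic change rather than a change of visual dimensions.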
Abstract:
Introduced herein are various techniques for displaying virtual and augmented reality content via a head-mounted display (HMD). The techniques can be used to improve the effectiveness of the HMD, as well as the general experience and comfort of users of the HMD. A binocular HMD system may present visual stabilizers to each eye that allow users to more easily fuse the digital content seen by each eye. In some embodiments the visual stabilizers are positioned within the digital content so that they converge to a shared location when viewed by a user, while in other embodiments the visual stabilizers are mapped to different locations within the user's field of view (e.g., peripheral areas) and are visually distinct from one another. These techniques allow the user to more easily fuse the digital content, thereby decreasing the eye fatigue and strain typically experienced when viewing virtual or augmented reality content.
Abstract:
A system and method for scaling an image includes receiving raw image data comprising input pixel values which correspond to pixels of an image sensor; and filtering pixels according to a Bayer-consistent ruleset. The system and method may also include outputting scaled image data as output pixel values, which correspond to subgroups of the input pixel values. The Bayer-consistent ruleset includes a set of filter weights and a series of scaling rules. The Bayer-consistent ruleset results in a scaled image having a high degree of Bayer-consistency.
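A sketch of what a Bayer-consistent 2x downscale might look like. The uniform weights below are a stand-in for the abstract's unspecified filter weights; the essential property shown is that each output sample averages only same-color input sites, so the output remains a valid RGGB mosaic:

```python
import numpy as np

def bayer_downscale_2x(raw):
    """Halve each dimension of an RGGB mosaic while keeping the output
    Bayer-consistent: each output site is a weighted average of
    same-color input sites only (uniform weights here, hypothetically
    standing in for a designed filter-weight set)."""
    h, w = raw.shape  # both assumed divisible by 4
    out = np.empty((h // 2, w // 2), dtype=raw.dtype)
    for dy in (0, 1):          # row phase within the 2x2 Bayer quad
        for dx in (0, 1):      # column phase within the quad
            plane = raw[dy::2, dx::2]  # one color plane of the mosaic
            # Average 2x2 neighborhoods of same-color samples.
            out[dy::2, dx::2] = (
                plane[0::2, 0::2] + plane[0::2, 1::2]
                + plane[1::2, 0::2] + plane[1::2, 1::2]
            ) // 4
    return out
```

Because each output phase (dy, dx) is fed exclusively from the matching input phase, the scaled mosaic preserves the RGGB layout, which is the Bayer-consistency property the abstract refers to.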
Abstract:
In accordance with some embodiments, jitter accompanying video resizing can be reduced or even eliminated by analyzing the content to be depicted and resizing based on the nature of that content. As a result, dominant objects in one frame can be handled in a way that reduces or eliminates video jitter or sliding.
Abstract:
A system and method is provided for displaying a transition between a map and a street level image. In one aspect, the display on a mobile device transitions from a top-down view of a map to a street-level view of an image, such as a panoramic image, such that the mobile device uses the currently stored map image to perform a tilt and zoom transition.
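One plausible reading of the transition is a simple parameter interpolation applied to the cached map image; the sketch below assumes linear easing and hypothetical parameter names (tilt in degrees from top-down, zoom as a scale factor):

```python
def transition_frames(steps=5, zoom_start=1.0, zoom_end=4.0,
                      tilt_start=90.0, tilt_end=0.0):
    """Sketch of a map-to-street-level transition: linearly interpolate
    the zoom and tilt applied to the currently stored map image, from a
    top-down view (90 degrees) toward street level (0 degrees)."""
    frames = []
    for i in range(steps + 1):
        t = i / steps
        frames.append((zoom_start + t * (zoom_end - zoom_start),
                       tilt_start + t * (tilt_end - tilt_start)))
    return frames
```

Each (zoom, tilt) pair would be applied as a transform to the already-downloaded map raster, so the transition needs no additional imagery until the street-level panorama is ready.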
Abstract:
A method and system for high-resolution and parallelizable data processing, reconstruction, and deconstruction, uses arbitrary frequency-space (FS) or inverse frequency-space (IFS, such as image, audio, or video space) sample points in N dimensions. According to a preferred embodiment of the invention, a subset of optionally pre-processed and/or pre-conditioned N-dimensional FS data (or IFS data) is acquired (102) by a processing device (360), the data is optionally transformed (115) by “region scaling factors”, and the data is optionally reduced (116) in numerical significant digits. A “horizontal key” of data elements is calculated (120) on a processor (361), preferably in parallel, for each of an arbitrary set of x-coordinates in IFS (or FS). IFS “color” data (or FS data) are calculated (130) on a processor (361), preferably in parallel, at the x-coordinates corresponding to the horizontal keys. The IFS coordinates (or the FS coordinates) are arbitrary, and the reconstruction's calculated IFS data (or the deconstruction's calculated FS data) are optionally rotated or transposed (141) (such as for display purposes), and are thus formed (150) in a memory (363) or on an output device (365). The method can be applied to other subsets, such as in the N-dimensional case.
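The two-stage, per-column structure described above (a "horizontal key" per arbitrary x-coordinate, then color values evaluated down that column) can be sketched with a plain non-uniform inverse DFT; this is an assumed simplification in 2D, not the patented method itself:

```python
import numpy as np

def reconstruct_columns(F, xs):
    """Sketch of two-stage reconstruction from frequency-space data.
    F: (M, N) array of frequency samples. xs: arbitrary fractional
    x-coordinates in [0, 1). Stage 1 collapses the horizontal frequency
    axis into one 'horizontal key' per requested x; stage 2 evaluates
    image values down each column. Both stages are independent per
    column and hence trivially parallelizable."""
    M, N = F.shape
    u = np.arange(N)
    # Stage 1: one horizontal key per x-coordinate (shape (M, len(xs))).
    keys = F @ np.exp(2j * np.pi * np.outer(u, xs)) / N
    # Stage 2: evaluate column values from each key; the y-grid here is
    # regular, but it could be arbitrary in the same way as xs.
    v = np.arange(M)
    cols = np.exp(2j * np.pi * np.outer(v / M, v)) @ keys / M
    return cols.real
```

On a regular grid of xs this reduces to the ordinary 2D inverse DFT, which is a useful sanity check; the appeal of the arbitrary-coordinate form is that any subset of output columns can be computed independently.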
Abstract:
A depth estimation apparatus including: an imaging device which generates a first image signal and a second image signal by imaging an object at different phases; a storage unit configured to store model data defining a relationship between (i) lens blur and phase difference of the object in images and (ii) position of the object in the images along the depth axis; and a detecting unit configured to detect a position of the object along the depth axis from the first image signal and the second image signal, using the model data, wherein a phase difference between the first image signal and the second image signal is smaller than or equal to 15% in terms of baseline length.
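The detecting unit's role, matching measured blur and phase-difference cues against stored model data, might be sketched as a nearest-neighbour lookup; the table values below are purely illustrative, not from the abstract:

```python
# Hypothetical model data: (blur_px, phase_diff_px, depth_m) triples
# relating the two image cues to position along the depth axis.
MODEL = [
    (1.0, 0.2, 0.5),
    (2.0, 0.5, 1.0),
    (4.0, 1.1, 2.0),
]

def estimate_depth(blur, phase_diff):
    """Return the depth whose model (blur, phase-difference) pair best
    matches the measurement -- a nearest-neighbour stand-in for the
    detecting unit described in the abstract."""
    return min(MODEL,
               key=lambda m: (m[0] - blur) ** 2 + (m[1] - phase_diff) ** 2)[2]
```

A real detector would interpolate a continuous model rather than snap to discrete entries, but the lookup direction (cues in, depth-axis position out) is the same.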
Abstract:
A three-dimensional map is displayed in a bird's eye view with a stereoscopic effect of feature polygons by providing shading in an appropriate direction according to the gaze direction in a simulative manner. Shading wall polygons are set in addition to feature polygons in three-dimensional map data. The shading wall polygon is a virtual plate-like polygon provided vertically, for example, along a boundary of a feature polygon. When provided around a water system, the shading wall polygon is specified to be opaque on the surface viewed from the water-system side and to be transparent on the opposite surface. The shading wall polygons are drawn along with the feature polygons in the process of displaying a map. The shading wall polygon is drawn in black, gray or the like only at a location where the surface specified to be opaque faces the gaze direction.
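The one-sided visibility rule for a shading wall polygon reduces to a sign test between the opaque face's normal and the gaze direction; a minimal sketch under that assumption (all names hypothetical):

```python
def draw_shading_wall(opaque_normal, gaze):
    """opaque_normal: outward normal of the wall's opaque face.
    gaze: the viewing direction. The wall is drawn (in black/gray) only
    when the opaque face points back toward the viewer, i.e. the normal
    opposes the gaze; otherwise the transparent face is seen and the
    wall is skipped."""
    dot = sum(n * g for n, g in zip(opaque_normal, gaze))
    return "draw gray wall" if dot < 0 else "skip (transparent face)"
```

This is the same facing test used for back-face culling, applied in reverse to a single designated face, which is what makes the shading appear only on the intended side of the water system.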