Abstract:
A mechanism is described for facilitating cinematic space-time view synthesis in computing environments according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more cameras, multiple images at multiple positions or multiple points in time, where the multiple images represent multiple views of an object or a scene, and where the one or more cameras are coupled to one or more processors of a computing device. The method further includes synthesizing, by a neural network, the multiple images into a single middle image representing an intermediary view of the multiple views.
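The abstract gives no implementation detail, so the synthesis step can only be sketched. In the toy illustration below, a plain per-pixel average of aligned views stands in for the trained neural network, and the small NumPy arrays are hypothetical stand-ins for captured camera frames.

```python
import numpy as np

def synthesize_middle_view(views):
    """Toy stand-in for the neural synthesis step: blend multiple
    captured views of a scene into a single intermediary ("middle")
    view. A real embodiment would use a trained network; a simple
    average of the aligned view stack merely illustrates the idea."""
    stack = np.stack(views, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Two hypothetical 2x2 single-channel "views" of the same scene,
# captured at two positions.
left = np.array([[0.0, 0.2], [0.4, 0.6]])
right = np.array([[0.2, 0.4], [0.6, 0.8]])
middle = synthesize_middle_view([left, right])
```

The averaged result lands between the two input views, which is the intermediary-view behavior the abstract describes, albeit without any learned occlusion or motion handling.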
Abstract:
An apparatus, method, and machine-readable medium for health monitoring and response are described herein. The apparatus includes a processor and a number of sensors configured to collect data corresponding to a user of the apparatus. The apparatus also includes a health monitoring and response application, at least partially including hardware logic. The hardware logic of the health monitoring and response application is to test the data collected by any of the sensors to match the collected data with a predetermined health condition, determine a current health condition of the user based on the predetermined health condition that matches the collected data, and automatically perform an action based on the current health condition of the user.
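The match-then-act flow described above can be sketched as a lookup over predetermined conditions. The condition names, thresholds, and actions below are entirely illustrative (not clinical values and not from the abstract).

```python
def classify(sensor_data, conditions):
    """Match collected sensor data against a list of predetermined
    health conditions; the first matching condition determines the
    user's current health condition."""
    for name, predicate in conditions:
        if predicate(sensor_data):
            return name
    return "normal"

# Hypothetical predetermined conditions with illustrative thresholds.
CONDITIONS = [
    ("elevated_heart_rate", lambda d: d.get("heart_rate", 0) > 120),
    ("low_spo2", lambda d: d.get("spo2", 100) < 90),
]

# Hypothetical automatic responses keyed by current health condition.
ACTIONS = {
    "elevated_heart_rate": "notify_user",
    "low_spo2": "alert_contact",
    "normal": "log_only",
}

state = classify({"heart_rate": 130, "spo2": 97}, CONDITIONS)
action = ACTIONS[state]
```

In an actual embodiment this matching would live partly in hardware logic, per the abstract; the table-driven structure is just one way to frame it.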
Abstract:
Image scene labeling with 3D image data. A plurality of pixels of an image frame may be labeled based at least on a function of pixel color and pixel depth over the spatial positions within the image frame. A graph-cut technique may be utilized to optimize a data cost and a neighborhood cost, in which at least the data cost function includes a component that is dependent on a depth associated with a given pixel in the frame. In some embodiments, in the MRF formulation, pixels are adaptively merged into pixel groups based on the constructed data cost(s) and neighborhood cost(s). These pixel groups then become nodes in the directed graph. In some embodiments, a hierarchical expansion is performed, with the hierarchy set up within the label space.
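The depth-dependent data cost at the heart of this formulation can be sketched independently of the graph-cut machinery. The weights, label models, and values below are assumptions for illustration; a real embodiment would feed such costs into an MRF solver rather than compare them directly.

```python
import numpy as np

def data_cost(pixel_color, pixel_depth, label_color, label_depth,
              w_color=1.0, w_depth=1.0):
    """Per-pixel data cost combining a color term with a
    depth-dependent term (weights are illustrative)."""
    color_term = float(np.sum((np.asarray(pixel_color, dtype=float)
                               - np.asarray(label_color, dtype=float)) ** 2))
    depth_term = (pixel_depth - label_depth) ** 2
    return w_color * color_term + w_depth * depth_term

# A pixel whose color is ambiguous between two hypothetical label
# models is disambiguated by its depth.
pixel = ([0.5, 0.5, 0.5], 1.0)
fg = ([0.5, 0.5, 0.5], 1.2)   # "foreground" model: similar depth
bg = ([0.5, 0.5, 0.5], 8.0)   # "background" model: far depth
cost_fg = data_cost(*pixel, *fg)
cost_bg = data_cost(*pixel, *bg)
```

Because the color terms are identical here, only the depth component separates the two labels, which is exactly the benefit of adding 3D data to the labeling cost.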
Abstract:
Techniques are provided for perception enhancement of light fields (LFs) for use in integral display applications. A methodology implementing the techniques according to an embodiment includes receiving one or more LF views and a disparity map associated with each LF view. The method also includes quantizing the disparity map into planes, where each plane is associated with a selected range of depth values. The method further includes slicing the LF view into layers, where each layer comprises pixels of the LF view associated with one of the planes. The method further includes shifting each of the layers in a lateral direction by an offset distance. The offset distance is based on a viewing angle associated with the LF view and further based on the depth values of the associated plane. The method also includes merging the shifted layers to generate a synthesized LF view with increased parallax.
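The quantize/slice/shift/merge pipeline is concrete enough to sketch. In the rough illustration below, a scalar `gain` stands in for the viewing-angle-dependent scale factor, and a wrap-around roll stands in for the lateral shift; a real embodiment would handle disocclusions and merge layers by depth order.

```python
import numpy as np

def enhance_parallax(view, disparity, num_planes, gain):
    """Sketch of the abstract's pipeline: quantize the disparity map
    into planes, slice the view into per-plane layers, shift each
    layer laterally by a depth-dependent offset, then merge the
    shifted layers into a synthesized view."""
    d_min, d_max = disparity.min(), disparity.max()
    # Quantize disparity into num_planes depth planes.
    planes = np.clip(((disparity - d_min) / max(d_max - d_min, 1e-9)
                      * num_planes).astype(int), 0, num_planes - 1)
    out = np.zeros_like(view)
    for p in range(num_planes):
        layer = np.where(planes == p, view, 0)   # slice one layer
        offset = int(round(gain * p))            # larger shift for nearer planes
        shifted = np.roll(layer, offset, axis=1) # lateral shift
        mask = np.roll(planes == p, offset, axis=1)
        out[mask] = shifted[mask]                # merge
    return out

view = np.arange(12.0).reshape(3, 4)
flat = np.zeros((3, 4))   # constant disparity: nothing should move
same = enhance_parallax(view, flat, num_planes=4, gain=2.0)
```

With a flat disparity map every pixel lands in the same plane with zero offset, so the synthesized view equals the input, a useful sanity check on the layering logic.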
Abstract:
Image segmentation utilizing 3D image data. A plurality of pixels of an image frame may be segmented based at least on a function of pixel color and pixel depth over the spatial positions within the image frame. A graph-cut technique may be utilized to optimize a data cost and a smoothness cost, in which at least the data cost function includes a component that is dependent on a depth associated with a given pixel in the frame. In further embodiments, both the data cost and smoothness functions are dependent on a color and a depth associated with each pixel. Components of at least the data cost function may be weighted for each pixel to arrive at the most likely segments. Segmentation may be further predicated on a pre-segmentation label assigned based at least on 3D spatial position clusters.
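The final step, pre-segmentation from 3D spatial position clusters, can be sketched with a plain k-means loop over (x, y, depth) points. The abstract does not name a clustering method, so k-means here is purely an assumed stand-in.

```python
import numpy as np

def presegment(depth, k=2, iters=10):
    """Illustrative pre-segmentation: cluster pixels by their 3D
    spatial position (x, y, depth) and assign an initial label per
    cluster. A basic k-means loop stands in for whatever clustering
    an actual embodiment would use."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), depth.ravel()],
                   axis=1).astype(float)
    # Seed centers from evenly spaced points (arbitrary choice).
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)]
    for _ in range(iters):
        dists = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(h, w)

# A toy frame with two depth planes splits into two clusters.
depth = np.zeros((2, 4))
depth[:, 2:] = 10.0
labels = presegment(depth, k=2)
```

These cluster labels would then serve as the pre-segmentation prior feeding the graph-cut optimization described above, not as the final segmentation.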