Abstract:
A method for forming augmented image data, the method comprising: forming a primary image feed showing a subject illuminated by a lighting unit; estimating the location of the lighting unit; receiving overlay data defining an overlay of three-dimensional appearance; rendering the overlay data in dependence on the estimated location to form an augmentation image feed; and overlaying the augmentation image feed on the primary image feed to form a secondary image feed.
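The abstract does not specify a shading model, but the core steps (shade the overlay consistently with the estimated light location, then composite it over the primary feed) can be sketched as follows. This is an illustrative NumPy sketch assuming simple Lambertian diffuse shading and per-pixel alpha blending; all function and parameter names are hypothetical.

```python
import numpy as np

def render_overlay(normals, albedo, light_pos, points):
    """Shade overlay geometry with diffuse (Lambertian) lighting from the
    estimated lighting-unit position so it matches the primary feed.
    normals/points: (N, 3) per-vertex arrays; light_pos: (3,); albedo: (3,)."""
    light_dirs = light_pos - points                      # vectors toward the light
    light_dirs = light_dirs / np.linalg.norm(light_dirs, axis=-1, keepdims=True)
    intensity = np.clip(np.sum(normals * light_dirs, axis=-1), 0.0, 1.0)
    return albedo * intensity[..., None]                 # shaded overlay colors

def composite(primary, overlay, alpha):
    """Overlay the augmentation image feed on the primary image feed,
    producing the secondary image feed. alpha: (H, W) coverage mask."""
    a = alpha[..., None]
    return a * overlay + (1.0 - a) * primary
```

Rendering "in dependence on the estimated location" here simply means the diffuse term is computed from the vector between each overlay point and the estimated light position.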
Abstract:
Examples are disclosed herein that relate to the display of mixed reality imagery. One example provides a mixed reality computing device comprising an image sensor, a display device, a storage device comprising instructions, and a processor. The instructions are executable to receive an image of a physical environment, store the image, render a three-dimensional virtual model, form a mixed reality thumbnail image by compositing a view of the three-dimensional virtual model and the image, and display the mixed reality thumbnail image. The instructions are further executable to receive a user input updating the three-dimensional virtual model, render the updated three-dimensional virtual model, update the mixed reality thumbnail image by compositing a view of the updated three-dimensional virtual model and the image of the physical environment, and display the mixed reality thumbnail image including the updated three-dimensional virtual model composited with the image of the physical environment.
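The key structural point is that the camera image of the physical environment is stored once and re-composited with each new render of the virtual model. A minimal sketch of that flow, with hypothetical names and a simple mask-based composite standing in for whatever blending the device actually uses:

```python
import numpy as np

class MixedRealityThumbnail:
    """Holds one stored camera frame and re-composites it with each
    new render of the virtual model (illustrative sketch)."""

    def __init__(self, camera_image):
        self.camera_image = camera_image     # stored image of the physical environment

    def composite(self, model_render, model_mask):
        """Pixels covered by the model come from the render; all other
        pixels come from the stored camera image."""
        mask = model_mask[..., None].astype(float)
        return mask * model_render + (1.0 - mask) * self.camera_image
```

When the user edits the model, only `model_render` and `model_mask` are regenerated; the stored camera image is reused unchanged, which is what lets the thumbnail update without re-capturing the environment.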
Abstract:
Systems, apparatuses, and/or methods may provide for identifying a face of a user by extracting contour information from images of shadows cast on the face by facial features illuminated by a controllable source of illumination. The source of illumination may be left, center, and right portions of the light emitting diode (LED) display on a smart phone, tablet, or notebook that has a forward-facing two-dimensional (2D) camera for obtaining the images. In one embodiment, the user is successively photographed under illumination provided using the left, the center, and the right portions of the LED display, providing shadows on the face from which identifying contour information may be extracted and/or determined.
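The abstract describes capturing the face under left, center, and right display illumination and deriving identifying information from the resulting shadows. One crude way to expose those shadows is to difference the side-lit captures against the center-lit capture; shadowed regions are darker under side lighting, so the differences trace the contours that cast them. A hedged sketch, with illustrative names, standing in for whatever contour extraction the real system performs:

```python
import numpy as np

def shadow_signature(img_left, img_center, img_right):
    """Build a raw identifying vector from shadow differences between
    captures lit by different portions of the display. A real system
    would extract contour features, not use raw pixel differences."""
    d_left = img_center.astype(float) - img_left    # shadows cast under left lighting
    d_right = img_center.astype(float) - img_right  # shadows cast under right lighting
    return np.concatenate([d_left.ravel(), d_right.ravel()])
```

Identification would then compare this signature (or contour features derived from it) against enrolled signatures for the user.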
Abstract:
Embodiments provide for a graphics processing apparatus comprising render logic to detect rendering operations that will result in the framebuffer having the same data as the initial clear color value, and to morph such rendering operations into the optimizations that are typically applied when initially clearing the framebuffer.
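The decision the render logic makes can be reduced to: if a write would reproduce exactly the clear color over an already-cleared framebuffer, treat it like a fast clear instead of a full draw. A toy sketch of that check in Python (real render logic operates in hardware on framebuffer state, not on tuples; names and return values are illustrative):

```python
def morph_redundant_draw(draw_color, clear_color, framebuffer_cleared):
    """Detect a rendering operation whose result equals the initial
    clear color and morph it into a fast-clear-style optimization."""
    if framebuffer_cleared and draw_color == clear_color:
        return "skip"   # framebuffer already holds this data; reuse fast-clear state
    return "draw"       # operation actually changes the framebuffer
```

The benefit is that the framebuffer can stay in its compressed "cleared" representation instead of being fully written and read back.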
Abstract:
The present application discloses a 3D model rendering method and apparatus and a terminal device. The method includes: calculating, in a diffuse reflection illumination situation simulated by hardware, dot product operation results of a light vector and a normal vector of each vertex on a surface of a 3D model; converting the dot product operation results of each vertex into corresponding UV coordinate values; drawing, according to a preset correspondence between UV coordinate values and a color value of a 3D model basic texture after receiving light, a gradient map having a color value corresponding to the UV coordinate values of each vertex; and covering the surface of the 3D model with the gradient map. The rendering method according to the present application transfers the conventional coloration process of a three-dimensional model to drawing a gradient map in a two-dimensional plane and then covering the 3D model with the drawn gradient map, which simplifies processing and reduces the performance requirement on an electronic device when performing 3D model rendering.
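The pipeline described (per-vertex N·L, remapped to a UV coordinate, then used to look up a color in a pre-drawn gradient map) is essentially ramp shading. A minimal NumPy sketch under that reading, with illustrative names; the patent's actual correspondence between dot products and UV values may differ:

```python
import numpy as np

def ramp_shade(normals, light_dir, gradient):
    """Per-vertex diffuse term mapped through a 1-D gradient map.
    normals: (N, 3) unit vertex normals; gradient: (K, 3) color ramp."""
    light_dir = np.asarray(light_dir, float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    ndotl = normals @ light_dir                     # dot product per vertex
    u = (ndotl + 1.0) * 0.5                         # remap [-1, 1] to a U coordinate
    idx = np.clip((u * (len(gradient) - 1)).astype(int), 0, len(gradient) - 1)
    return gradient[idx]                            # color looked up in the gradient map
```

Moving the coloration into the 2-D gradient lookup is what makes the method cheap: the per-vertex work reduces to one dot product and one texture fetch, with no per-pixel lighting evaluation.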
Abstract:
There is provided an image processing device including circuitry configured to generate an image of a subject under an illumination environment, based on subject information associated with illumination of the subject and on illumination information, wherein the illumination information is acquired on the basis of a virtual illumination body within a real space.
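The abstract leaves the subject information unspecified; a common choice for relighting is per-pixel albedo and normals, with the illumination information reduced to a light direction and color derived from the virtual illumination body. A hedged sketch under those assumptions (all names illustrative):

```python
import numpy as np

def relight(albedo, normals, light_dir, light_color, ambient=0.1):
    """Generate an image of the subject under a new illumination
    environment from per-pixel subject information (albedo, normals)
    and illumination information (direction, color)."""
    light_dir = np.asarray(light_dir, float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    diffuse = np.clip(normals @ light_dir, 0.0, 1.0)        # per-pixel N.L
    shading = ambient + (1.0 - ambient) * diffuse           # ambient plus diffuse
    return albedo * shading[..., None] * np.asarray(light_color, float)
```

Acquiring `light_dir` and `light_color` "on the basis of a virtual illumination body" would mean deriving them from the pose and properties of a virtual light the user places in the real space.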
Abstract:
Disclosed are various embodiments for methods and systems for three-dimensional imaging of subject particles in media through use of dark-field microscopy. Some examples, among others, include a method for obtaining a three-dimensional (3D) volume image of a sample, a method for determining a 3D location of at least one subject particle within a sample, a method for determining at least one spatial correlation between a location of at least one subject particle and a location of at least one cell structure within a cell and/or other similar biological or nonbiological structure, a method of displaying a location of at least one subject particle, a method for increasing the dynamic range of a 3D image acquired from samples containing weak and strong sources of light, and a method for sharpening a 3D image in a vertical direction.
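Of the listed methods, the dynamic-range one admits a simple sketch: when a sample contains both weak and strong light sources, a long exposure saturates on the strong sources, so saturated voxels can be replaced with values from a short exposure rescaled by the exposure ratio. This is a generic HDR-merge sketch, not necessarily the patent's procedure; the threshold and gain are illustrative:

```python
import numpy as np

def merge_dynamic_range(short_exp, long_exp, saturation=0.95, gain=10.0):
    """Extend the dynamic range of a 3D volume: keep the long exposure
    where it is unsaturated, and substitute the short exposure (scaled
    by the exposure ratio `gain`) where the long exposure clips."""
    saturated = long_exp >= saturation          # voxels clipped by strong sources
    merged = long_exp.copy()
    merged[saturated] = short_exp[saturated] * gain
    return merged
```

The merged volume then represents weak sources with the long exposure's sensitivity while recovering the true intensity of strong sources from the short exposure.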