Abstract:
A corneal-reflection-based gaze detection section (28) calculates a time series of a three-dimensional gaze vector in a camera coordinate system from a time series of facial images. A face position-and-orientation estimation section (24) estimates a time series of a three-dimensional position and orientation of the face. An eyeball-center-coordinates transformation section (32) calculates a time series of a three-dimensional position of the eyeball center in the coordinate system of a three-dimensional facial model. A fixed parameter calculation section (33) calculates, as a fixed parameter, a three-dimensional position of the eyeball center in the facial-model coordinate system. An eyeball-center-based gaze detection section (36) uses the three-dimensional position of the eyeball center calculated by the fixed parameter calculation section (33) to calculate a three-dimensional gaze vector from the three-dimensional position of the eyeball center to the three-dimensional position of the pupil center in the camera coordinate system. This enables accurate gaze tracking to be performed with a simple configuration and without calibration.
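As an illustration of the final step, the sketch below maps the fixed eyeball-center position from facial-model coordinates into the camera frame using the estimated face pose and forms the gaze vector toward the detected pupil center. The variable names, the rotation-plus-translation face pose, and the use of simple averaging for the fixed parameter are assumptions for illustration, not details taken from the abstract.

```python
# Hedged sketch of the eyeball-center-based gaze computation: the fixed
# eyeball-center position in facial-model coordinates is mapped into the camera
# frame with the estimated face pose, and the gaze is the unit vector from that
# point to the detected pupil center. All names are assumptions.
import numpy as np

def fixed_eye_center(eye_centers_model_series):
    """Fixed-parameter eyeball center; plain averaging over the time series is
    an assumption, the abstract only states that a fixed parameter is calculated."""
    return np.mean(np.asarray(eye_centers_model_series), axis=0)

def gaze_vector(R_face, t_face, eye_center_model, pupil_center_cam):
    """3-D gaze direction in camera coordinates."""
    eye_center_cam = R_face @ eye_center_model + t_face   # facial model -> camera frame
    g = pupil_center_cam - eye_center_cam                 # eyeball center -> pupil center
    return g / np.linalg.norm(g)
```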
Abstract:
Disclosed are methods and digital tools for deriving tooth condition information for a patient's teeth, for populating a digital dental chart with derived tooth condition information, and for generating an electronic data record containing such information.
Abstract:
A method of image processing of magnetic resonance (MR) images for creating de-noised MR images comprises the steps of:
• providing image data sets including multiple complex MR images (S7),
• subjecting the MR images to a wavelet decomposition (S12) for creating coefficient data sets of wavelet coefficients (Sn,m) representing the MR images in a wavelet frequency domain,
• calculating normalized coefficient data sets of wavelet coefficients Formula (I) (S17), wherein the coefficient data sets are normalized with a quantitative amount of variation, in particular the standard deviation Formula (II), of the noise contributions included in the coefficient data sets (Sn,m),
• averaging the wavelet coefficients of each coefficient data set (S18) for providing averaged wavelet coefficients Formula (III) of the coefficient data sets,
• calculating phase difference maps (Δφn,m) for all coefficient data sets (S20), wherein the phase difference maps provide the phase differences between the phase of each wavelet coefficient and the phase of the averaged wavelet coefficients Formula (III),
• calculating scaled averaged coefficient data sets of wavelet coefficients by scaling the averaged wavelet coefficients Formula (III) with scaling factors (Cn,m), which are obtained by comparing the parts of the normalized wavelet coefficients of the normalized coefficient data sets Formula (I) that are in phase with the averaged wavelet coefficients Formula (III) (S22),
• calculating rescaled coefficient data sets of wavelet coefficients Formula (IV) (S24) by applying a transfer function Formula (V) to the coefficient data sets (Sn,m) and to the scaled averaged coefficient data sets, wherein the transfer function includes combined amplitude and phase filters, each depending on the normalized coefficient data sets Formula (I) and the phase difference maps (Δφn,m), respectively, and
• subjecting the rescaled coefficient data sets Formula (IV) to a wavelet reconstruction (S25) for providing the de-noised MR images.
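A rough sketch of how these steps might look in code is given below, assuming the complex images have already been loaded and that PyWavelets is used for the decomposition. The in-phase criterion, the scaling rule, and the amplitude/phase transfer function shown here are illustrative stand-ins for Formulas (I)-(V), which are not reproduced in the abstract; sigma denotes the noise standard deviation of a subband.

```python
# Illustrative sketch only: the in-phase test, scaling rule and transfer
# function below stand in for Formulas (I)-(V). PyWavelets is assumed.
import numpy as np
import pywt

def complex_wavedec2(img, wavelet="db4", level=3):
    """Complex 2-D wavelet decomposition via separate real/imaginary transforms (S12)."""
    re = pywt.wavedec2(img.real, wavelet, level=level)
    im = pywt.wavedec2(img.imag, wavelet, level=level)
    out = [re[0] + 1j * im[0]]                     # approximation coefficients
    for r, i in zip(re[1:], im[1:]):
        out.append(tuple(rr + 1j * ii for rr, ii in zip(r, i)))
    return out

def rescale_subband(coeffs, sigma, beta=2.0):
    """coeffs: (N, H, W) array holding one subband from N complex MR images."""
    s_norm = coeffs / sigma                        # normalize with the noise std (S17)
    s_avg = coeffs.mean(axis=0)                    # average over the image set (S18)
    dphi = np.angle(coeffs) - np.angle(s_avg)      # phase difference maps (S20)
    in_phase = np.cos(dphi) > 0                    # crude in-phase criterion (assumption)
    c = np.where(in_phase, np.abs(s_norm), 0).mean(axis=0) / (np.abs(s_avg) / sigma + 1e-12)
    s_scaled = c * s_avg                           # scaled averaged coefficients (S22)
    amp_w = 1.0 - np.exp(-(np.abs(s_norm) / beta) ** 2)   # amplitude filter (assumption)
    phase_w = np.cos(dphi) ** 2                            # phase filter (assumption)
    w = amp_w * phase_w                            # combined transfer function (S24)
    return w * coeffs + (1.0 - w) * s_scaled       # rescaled coefficients
```

Reconstruction (S25) would then apply pywt.waverec2 separately to the real and imaginary parts of the rescaled coefficient structure.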
Abstract:
Embodiments disclose systems and methods that aid in screening, diagnosis and/or monitoring of medical conditions. The systems and methods may allow, for example, for automated identification and localization of lesions and other anatomical structures from medical data obtained from medical imaging devices, computation of image-based biomarkers including quantification of dynamics of lesions, and/or integration with telemedicine services, programs, or software.
Abstract:
Various embodiments relate to the field of image signal processing, specifically to the generation of a depth-view image of a scene from a set of input images taken by different cameras of a multi-view imaging system. A method comprises obtaining a frame of an image of a scene and a frame of a depth map associated with that image frame. A minimum depth and a maximum depth of the scene and a number of depth layers for the depth map are determined. Pixels of the image are projected to the depth layers to obtain projected pixels on the depth layers, and cost values for the projected pixels are determined. The cost values are filtered, and a filtered cost value is selected from a layer to obtain a depth value of a pixel of an estimated depth map.
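The per-pixel layer selection reads like a plane-sweep scheme; the sketch below illustrates that interpretation. The warp_to_depth() helper, the absolute-difference cost, the box filter, and the inverse-depth sampling of the layers are assumptions for illustration, not details taken from the abstract.

```python
# Plane-sweep-style sketch of the layer selection described above.
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_depth(ref, neighbours, warp_to_depth, z_min, z_max, n_layers=64):
    """ref: (H, W) reference frame; neighbours: list of (image, pose) pairs."""
    depths = 1.0 / np.linspace(1.0 / z_max, 1.0 / z_min, n_layers)  # depth layers
    cost = np.zeros((n_layers,) + ref.shape, dtype=np.float32)
    for k, d in enumerate(depths):
        for view, pose in neighbours:
            warped = warp_to_depth(view, pose, d)      # project pixels onto layer k
            cost[k] += np.abs(ref - warped)            # photometric cost values
    cost = uniform_filter(cost, size=(1, 9, 9))        # filter the cost values spatially
    best = np.argmin(cost, axis=0)                     # layer with the lowest filtered cost
    return depths[best]                                # per-pixel depth of the estimated map
```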
Abstract:
The invention relates to a method of registering first image data of a first stream with second image data of a second stream, the first image data and the second image data comprising image information about a person or animal including image information about at least one organ (Or) of the person or animal, wherein a shape and/or position of the organ (Or) is changing thereby performing a cyclic change of the shape and/or position, and wherein:
• each of the first stream and the second stream comprises a series of images (F) of the at least one organ (Or) at a plurality of different discrete stages of the cyclic change of the shape and/or position so that each of the images (F) corresponds to one of the discrete stages of the cyclic change of the shape and/or position,
• a phase variable (p) is used to define the stage of the cyclic change of the shape and/or position and a phase value of the phase variable (p) is assigned to each of the images (F) within each of the first and second stream thereby defining unambiguously the discrete stage of the cyclic change of the shape and/or position represented by the image (F) instead of the time during a specific cycle when the image (F) was taken,
• a registration, i.e. location-to-location associations of image values, is determined for a pair of images (F) consisting of a first image of the first stream and a second image of the second stream, wherein the first image and the second image of the pair are selected by choosing or interpolating an image from the first stream and by choosing or interpolating an image from the second stream to which the same phase value (pr) is assigned.
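To make the phase-matched pair selection concrete, the sketch below picks, for a given reference phase value, the nearest frame of each stream or blends the two nearest frames; the linear intensity blend and the register() callable are simplifying assumptions, since the abstract only requires that an image be chosen or interpolated at the same phase value in both streams.

```python
# Sketch of phase-matched pair selection, assuming each stream is a list of
# (phase, image) tuples with phase values in [0, 1).
import numpy as np

def frame_at_phase(stream, p_ref):
    """Choose, or linearly interpolate, the frame of a stream at phase value p_ref."""
    phases = np.array([p for p, _ in stream])
    d = np.abs((phases - p_ref + 0.5) % 1.0 - 0.5)     # cyclic phase distance
    i, j = np.argsort(d)[:2]                           # two nearest frames in phase
    if d[i] < 1e-6:                                    # exact match: choose the frame
        return stream[i][1]
    wi = d[j] / (d[i] + d[j])                          # closer frame gets the larger weight
    return wi * stream[i][1] + (1.0 - wi) * stream[j][1]

def register_pair_at_phase(stream1, stream2, p_ref, register):
    """Register the pair of images to which the same phase value p_ref is assigned."""
    return register(frame_at_phase(stream1, p_ref), frame_at_phase(stream2, p_ref))
```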
Abstract:
The present disclosure displays registration information for a real-world registration information setting point and updates the display when an event corresponding to a change in context is detected as a trigger. A control is performed for displaying a virtual object that includes registration information on a real-world registration information setting point. The data processing unit performs a process for displaying the registration information described in control data and a process for updating the display in accordance with a change in a predefined context. Further, the data processing unit displays the virtual object indicative of the registration information setting point in conjunction with the display of the registration information. The data processing unit calculates, in a virtual three-dimensional space, the positional relationship between the information processing apparatus and the registration information setting point on the basis of the position, direction, and the like of a reference image (anchor) detected from the shot image, and displays the virtual object indicative of the registration information setting point on the basis of the calculated positional relationship and a change in the context.
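The pose chain implied by the last sentence can be sketched as follows: the setting point's position, known relative to the detected reference image (anchor), is carried into the camera frame through the anchor's estimated pose, and the result drives where the virtual object is drawn. The 4x4 homogeneous-matrix convention and all names here are assumptions for illustration.

```python
# Illustrative pose chain: anchor pose in the camera frame plus the setting
# point's position relative to the anchor give the point's camera-frame position.
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def point_in_camera(T_cam_anchor, p_point_in_anchor):
    """Position of the registration-information setting point in camera coordinates."""
    p = np.append(p_point_in_anchor, 1.0)              # homogeneous coordinates
    return (T_cam_anchor @ p)[:3]

# Example: anchor detected 0.5 m in front of the camera, point 0.2 m to its right.
T = pose_matrix(np.eye(3), np.array([0.0, 0.0, 0.5]))
print(point_in_camera(T, np.array([0.2, 0.0, 0.0])))   # -> [0.2, 0.0, 0.5]
```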
Abstract:
Multiple cameras are arranged in an array at a pitch, roll, and yaw that allow the cameras to have adjacent fields of view such that each camera is pointed inward relative to the array. The read window of an image sensor of each camera in a multi-camera array can be adjusted to minimize the overlap between adjacent fields of view, to maximize the correlation within the overlapping portions of the fields of view, and to correct for manufacturing and assembly tolerances. Images from cameras in a multi-camera array with adjacent fields of view can be manipulated using low-power warping and cropping techniques, and can be stitched together to form a final image.
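One way to realise the read-window adjustment described above is to search for the sensor-window offset whose overlap region best correlates with the neighbouring camera's image, as in the hedged sketch below. The 1-D horizontal search, the nominal overlap position, and the correlation score are illustrative assumptions; actual sensors expose read-window registers whose names and granularity vary by part.

```python
# Hedged sketch: find the read-window offset of the right camera whose overlap
# region best correlates with the left camera's image.
import numpy as np

def overlap_shift(left_img, right_img, overlap_px, nominal=32, max_shift=16):
    """Pixel offset of right_img's read window that maximizes overlap correlation."""
    ref = left_img[:, -overlap_px:].astype(np.float64)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        start = nominal + s
        if start < 0 or start + overlap_px > right_img.shape[1]:
            continue                                   # candidate window out of bounds
        cand = right_img[:, start:start + overlap_px].astype(np.float64)
        score = np.corrcoef(ref.ravel(), cand.ravel())[0, 1]   # correlation in the overlap
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```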
Abstract:
A method, apparatus, and computer program product are provided for image registration in the gradient domain. A method is provided that includes receiving three or more input images and simultaneously registering the three or more input images in the gradient domain by applying an energy minimization function.
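A minimal sketch of the idea is shown below, restricted to per-image 2-D translations for illustration: the energy is the sum of squared gradient differences over all image pairs, and all images are registered simultaneously by minimizing it. The translation-only motion model, the pairwise energy, and the Powell optimizer are assumptions, not details taken from the abstract.

```python
# Minimal sketch of simultaneous gradient-domain registration by energy
# minimization, restricted to per-image 2-D translations.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def grad(img):
    gy, gx = np.gradient(img.astype(np.float64))       # image gradients
    return np.stack([gy, gx])

def energy(offsets, images):
    """Sum of squared gradient differences over all pairs of shifted images."""
    grads = [grad(nd_shift(im, off, order=1)) for im, off in zip(images, offsets)]
    e = 0.0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            e += np.sum((grads[i] - grads[j]) ** 2)
    return e

def register(images):
    """Simultaneously estimate translations of images[1:] relative to images[0]."""
    n = len(images)
    obj = lambda p: energy(np.vstack([np.zeros(2), p.reshape(n - 1, 2)]), images)
    res = minimize(obj, np.zeros(2 * (n - 1)), method="Powell")
    return np.vstack([np.zeros(2), res.x.reshape(n - 1, 2)])
```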