Abstract:
Described herein is a monitoring system (100) including one or more infrared light sources (108, 110) for illuminating a subject (102) in a sequenced manner. System (100) also includes a camera (106) for capturing images of the subject (102) during periods in which the subject (102) is illuminated by one of the light sources (108, 110). A processor (118) processes the captured images to determine a brightness measure of the images and a controller (120) controls the output power of the infrared light sources (108, 110) in response to the brightness measure. In response to the processor (118) detecting a brightness measure below a predetermined brightness threshold, the controller (120) is configured to switch off or reduce an output illumination intensity of one of the infrared light sources (108, 110).
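As a rough illustration of the described control loop, the following Python sketch dims one IR source when the brightness measure falls below the threshold; the `LedDriver` class, the threshold value, and the mean-intensity brightness measure are illustrative assumptions, not details from the abstract.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 60.0  # hypothetical 8-bit mean-intensity threshold


class LedDriver:
    """Stand-in for one IR LED driver channel (hypothetical interface)."""

    def __init__(self, power: float = 1.0):
        self.power = power  # normalised output power in [0, 1]

    def set_power(self, power: float) -> None:
        self.power = max(0.0, min(1.0, power))


def brightness_measure(image: np.ndarray) -> float:
    """Brightness measure used here: mean pixel intensity of the frame."""
    return float(image.mean())


def control_step(image: np.ndarray, led_a: LedDriver, led_b: LedDriver) -> None:
    """Per the abstract: a brightness measure below the predetermined
    threshold causes one IR source to be switched off or reduced."""
    if brightness_measure(image) < BRIGHTNESS_THRESHOLD:
        led_b.set_power(0.0)  # switch off (or dim) one of the sources
```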
Abstract:
Face tracking is used to display a live image of the tracked person with a synthetically generated 3D model of eyewear (for example, sunglasses or spectacles) overlaid on the person's face, providing a virtual mirror, for example to improve the sales and marketing of eyewear over the internet. The person need not remove their existing glasses: by tracking the outline of the glasses and using 2D image recovery, any real glasses are removed and replaced with virtual ones, ensuring the synthetically rendered image is convincing. Optionally, since facial features are tracked in 3D with a high degree of accuracy and precision, the user can also select eyewear based on an appropriate fit to their facial proportions.
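The glasses-removal step could plausibly be realised with inpainting followed by alpha compositing, as in the Python sketch below; the abstract does not name a specific technique, and the mask and rendered-eyewear inputs are hypothetical.

```python
import cv2
import numpy as np


def remove_and_replace_glasses(frame, glasses_mask, rendered_eyewear, eyewear_alpha):
    """Sketch of 2D image recovery plus virtual overlay (assumed approach).

    frame            : HxWx3 uint8 live camera image
    glasses_mask     : HxW uint8 mask, 255 where the tracked real glasses are
    rendered_eyewear : HxWx3 uint8 render of the virtual 3D eyewear model
    eyewear_alpha    : HxW float alpha matte in [0, 1] for the render
    """
    # Recover the face pixels hidden behind the real glasses.
    recovered = cv2.inpaint(frame, glasses_mask, 5, cv2.INPAINT_TELEA)

    # Alpha-composite the synthetically rendered eyewear on top.
    a = eyewear_alpha[..., None]
    out = (1.0 - a) * recovered + a * rendered_eyewear
    return out.astype(np.uint8)
```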
Abstract:
Described herein is a subject monitoring system (100). The system (100) includes a near infrared illumination source (108) configured to illuminate a scene with infrared light having a spatial beam characteristic to generate a spatial pattern, and an image sensor (106) configured to capture one or more images of the scene when illuminated by the illumination source (108). System (100) also includes a processor (112) configured to process the captured one or more images by determining a degree of presence or modification of the spatial pattern by objects in the scene within pixel sub-regions of an image. Processor (112) also classifies one or more pixel sub-regions of the image as including human skin or other material based on the degree of modification of the spatial pattern identified in that pixel sub-region.
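One plausible way to score the degree of pattern modification in each pixel sub-region is a spectral-energy measure, sketched below in Python; the block size, expected pattern frequency, and skin threshold are illustrative assumptions only.

```python
import numpy as np

BLOCK = 32            # pixel sub-region size (assumed)
PATTERN_FREQ = 8      # expected pattern cycles per block (assumed)
SKIN_THRESHOLD = 0.2  # pattern strength below this => skin (assumed)


def pattern_strength(block: np.ndarray) -> float:
    """Fraction of the block's spectral energy near the projected pattern's
    spatial frequency. Sub-surface scattering in skin blurs the pattern, so
    skin regions should score low (assumed behaviour)."""
    spectrum = np.abs(np.fft.rfft2(block - block.mean()))
    band = spectrum[PATTERN_FREQ - 1:PATTERN_FREQ + 2, :].sum()
    return float(band / (spectrum.sum() + 1e-9))


def classify_regions(image: np.ndarray) -> np.ndarray:
    """Label each BLOCK x BLOCK sub-region: True = skin, False = other."""
    h, w = image.shape
    labels = np.zeros((h // BLOCK, w // BLOCK), dtype=bool)
    for i in range(h // BLOCK):
        for j in range(w // BLOCK):
            block = image[i * BLOCK:(i + 1) * BLOCK,
                          j * BLOCK:(j + 1) * BLOCK].astype(float)
            labels[i, j] = pattern_strength(block) < SKIN_THRESHOLD
    return labels
```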
Abstract:
Described herein is a system and method for performing eye tracking. One embodiment provides a system (100) including a camera (106) for capturing images of an eye of a vehicle driver (102) and light emitting diodes (LEDs) (108 and 110) configured to selectively illuminate the driver's eye during image capture by the camera (106). A processor (118) is configured to process at least a subset of the captured images to determine one or more eye tracking parameters of the subject's eye and to determine one or more illumination characteristics of the images. A controller (120) is configured to send an LED control signal to the LEDs (108 and 110) to control the drive current amplitude and pulse time of the LEDs (108 and 110). The controller (120) selectively adjusts the drive current amplitude and/or pulse time based on the determined illumination characteristics of a previously captured image or images.
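A minimal sketch of the feedback step, assuming simple proportional control on the mean eye-region intensity; the target level, gain, and limits are hypothetical values, and the abstract does not prescribe a control law.

```python
import numpy as np

TARGET_MEAN = 110.0   # desired mean eye-region intensity, 8-bit (assumed)
MAX_CURRENT = 1.0     # normalised maximum drive current amplitude
MAX_PULSE_US = 500.0  # maximum pulse time in microseconds (assumed)
GAIN = 0.002          # proportional feedback gain (assumed)


def next_led_settings(prev_image: np.ndarray, current: float, pulse_us: float):
    """Update drive current amplitude and pulse time from the illumination
    characteristics of a previously captured image."""
    error = TARGET_MEAN - float(prev_image.mean())
    current = float(np.clip(current * (1.0 + GAIN * error), 0.0, MAX_CURRENT))
    # If the current limit is hit and more light is needed, lengthen the pulse.
    if current >= MAX_CURRENT and error > 0:
        pulse_us = min(pulse_us * (1.0 + GAIN * error), MAX_PULSE_US)
    return current, pulse_us
```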
Abstract:
Described herein is a method (500) of registering the position and orientation of one or more cameras (312-315) in a camera imaging system (100, 400). The method includes: at step (502), receiving, from a depth imaging device (330), data indicative of a three dimensional image of the scene. At step (503), the three dimensional image is calibrated with a reference frame relative to the scene, the reference frame including a reference position and a reference orientation. At step (504), a three dimensional position of each of the cameras within the three dimensional image is determined in the reference frame. At step (505), an orientation of each camera is determined in at least one dimension in the reference frame. At step (506), the position and orientation for each camera are combined to determine a camera pose in the reference frame.
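Steps (504)-(506) amount to packing a position and an orientation into a single rigid transform; a minimal sketch, assuming orientation is recovered as a yaw angle about the vertical axis (the abstract only requires at least one dimension):

```python
import numpy as np


def yaw_to_rotation(yaw_rad: float) -> np.ndarray:
    """Rotation about the vertical axis; orientation in one dimension."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])


def camera_pose(position: np.ndarray, rotation: np.ndarray) -> np.ndarray:
    """Combine position and orientation into a 4x4 pose in the scene
    reference frame (step 506)."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = position
    return pose


# Example: a camera 2 m above the reference position, yawed 90 degrees.
print(camera_pose(np.array([0.0, 0.0, 2.0]), yaw_to_rotation(np.pi / 2)))
```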
Abstract:
A computerized method of determining a camera pose of a forward facing camera in a vehicle scene, including: capturing images of a vehicle driver's face from a driver facing camera and images of the forward road scene from a forward facing camera; processing the images of the driver's face to derive gaze direction data in a vehicle frame of reference; statistically collating the gaze direction data into a frequency distribution of gaze angles; identifying peaks in the frequency distribution and associating them with reference points in the images of the forward road scene to determine one or more reference gaze positions in the vehicle frame of reference; and correlating the one or more reference gaze positions with positions of the reference points in the forward facing camera frame of reference to determine the camera pose of the forward facing camera in the vehicle frame of reference.
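The collation and peak-finding steps might look like the following Python sketch (a single gaze angle is treated for brevity; the bin width and peak height are assumed parameters):

```python
import numpy as np
from scipy.signal import find_peaks


def gaze_peaks(yaw_deg: np.ndarray, bin_width: float = 1.0, min_count: int = 50):
    """Collate gaze yaw samples (degrees, vehicle frame of reference) into a
    frequency distribution and return the peak angles. Drivers fixate the
    road ahead, mirrors, and instruments, so peaks mark candidate reference
    gaze directions to associate with road-scene reference points."""
    bins = np.arange(yaw_deg.min(), yaw_deg.max() + bin_width, bin_width)
    counts, edges = np.histogram(yaw_deg, bins=bins)
    idx, _ = find_peaks(counts, height=min_count)
    return (edges[idx] + edges[idx + 1]) / 2.0  # bin centres of the peaks
```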
Abstract:
Described herein is a method (800) and system for controlling one or more illumination devices in an eye tracker system (100) such that a measured pupil/iris contrast exceeds a predefined minimum pupil/iris contrast. The method (800) includes: a. capturing images of a subject (102), including one or both of the subject's eyes, during predefined image capture periods; b. illuminating, from one or more illumination devices (108 and 110), one or both of the subject's eyes during the predefined image capture periods, wherein at least one of the illumination devices (108 and 110) is located sufficiently close to a lens of the camera to generate bright pupil effects; and c. selectively varying the output power of at least one of the illumination devices (108 and 110) to generate a bright pupil reflection intensity such that a measured pupil/iris contrast in a captured image exceeds a predefined minimum pupil/iris contrast.
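A minimal sketch of steps b and c, assuming a Michelson-style contrast measure over segmented pupil and iris pixels (the abstract does not fix the measure) and hypothetical threshold and step values:

```python
import numpy as np

MIN_CONTRAST = 0.3  # predefined minimum pupil/iris contrast (assumed scale)
POWER_STEP = 0.05   # output-power increment per frame (assumed)


def pupil_iris_contrast(pupil_px: np.ndarray, iris_px: np.ndarray) -> float:
    """Michelson-style contrast between mean pupil and iris intensities."""
    p, i = float(pupil_px.mean()), float(iris_px.mean())
    return abs(p - i) / (p + i + 1e-9)


def adjust_power(power: float, pupil_px, iris_px) -> float:
    """Raise the near-axis illuminator's output power until the bright-pupil
    reflection lifts the measured contrast above the minimum."""
    if pupil_iris_contrast(pupil_px, iris_px) < MIN_CONTRAST:
        power = min(1.0, power + POWER_STEP)
    return power
```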
Abstract:
Described herein is an image pre-processing system and method. One embodiment provides a method (500) including: at step (501), receiving a plurality of images captured at a first frame rate, the plurality of images captured under at least two different image conditions; pre-processing the plurality of images by: at step (502), identifying one or more regions of interest within the images; at step (503), calculating a visibility measure for the one or more regions of interest; and, at step (504), selecting a subset of the plurality of images based on the visibility measure; and, at step (505), feeding the subset of images to an image processing pipeline for subsequent processing at a second frame rate that is lower than the first frame rate.
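Steps (503)-(504) could be realised as below; RMS contrast as the visibility measure and the keep ratio are assumptions, since the abstract leaves both open:

```python
import numpy as np


def visibility_measure(roi: np.ndarray) -> float:
    """One simple visibility proxy: RMS contrast of the region of interest."""
    roi = roi.astype(float)
    return float(roi.std() / (roi.mean() + 1e-9))


def select_subset(frames, rois, keep_ratio=0.25):
    """Keep the best-visibility fraction of the high-rate frames so that the
    downstream pipeline runs at the lower second frame rate."""
    scores = [visibility_measure(r) for r in rois]
    keep = max(1, int(len(frames) * keep_ratio))
    best = np.argsort(scores)[::-1][:keep]
    return [frames[k] for k in sorted(best)]  # preserve temporal order
```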
Abstract:
Described herein is an imaging system (200) for a driver monitoring system (100). The imaging system (200) includes a light source (108) for generating an input light beam (202) and projecting the input light beam (202) along a path towards a driver (102) of a vehicle. System (200) also includes a dielectric metasurface (201) positioned within the path of the input light beam (202). The metasurface (201) has a two dimensional array of surface elements configured to impose predetermined phase, polarization and/or intensity changes on the input light beam (202) to generate an output light beam (204) for illuminating the driver (102). System (200) further includes an image sensor (106) configured to image reflected light (208), being light from the output light beam (204) that is reflected from the driver (102).
Abstract:
Described herein are systems and methods of determining a pose of a camera within a vehicle scene. In one embodiment, a method (400) includes the initial step (401) of capturing an image of the vehicle scene from the camera. At step (402), reference data indicative of the vehicle scene is loaded, the reference data including positions and orientations of known features within the vehicle scene. Next, at step (403), the geometric appearance of one or more of the known features is identified within the image. Finally, at step (404), the three dimensional position and orientation of the camera relative to the known features identified at step (403) are determined from the geometric appearance, and a pose of the camera within the vehicle scene is calculated.
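Step (404) is a classic perspective-n-point problem; a minimal sketch using OpenCV's solver, assuming a pre-calibrated intrinsic matrix and at least four known-feature correspondences:

```python
import cv2
import numpy as np


def camera_pose_from_features(object_points, image_points, camera_matrix):
    """Recover camera pose from known vehicle-scene features via PnP.

    object_points : Nx3 known 3D feature positions in the vehicle frame
    image_points  : Nx2 detected 2D positions of those features (N >= 4)
    camera_matrix : 3x3 intrinsic matrix (assumed pre-calibrated)
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        camera_matrix, None)  # None = no lens distortion modelled
    if not ok:
        raise RuntimeError("PnP failed to converge")
    R, _ = cv2.Rodrigues(rvec)        # rotation: vehicle frame -> camera frame
    position = (-R.T @ tvec).ravel()  # camera position in the vehicle frame
    return position, R
```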