Abstract:
Described herein is a monitoring system (100) including one or more infrared light sources (108, 110) for illuminating a subject (102) in a sequenced manner. System (100) also includes a camera (106) for capturing images of the subject (102) during periods in which the subject (102) is illuminated by one of the light sources (108, 110). A processor (118) processes the captured images to determine a brightness measure of the images and a controller (120) controls the output power of the infrared light sources (108, 110) in response to the brightness measure. In response to the processor (118) detecting a brightness measure below a predetermined brightness threshold, the controller (120) is configured to switch off or reduce an output illumination intensity of one of the infrared light sources (108, 110).
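The control behaviour this abstract describes can be sketched as a simple feedback step. This is an illustrative sketch only, not the patented implementation; the function name, the choice to dim the brightest source first, and the `step` parameter are assumptions.

```python
def control_ir_sources(brightness, threshold, powers, step=0.25):
    """Return updated output powers for the infrared light sources.

    Per the abstract: when the measured brightness falls below the
    predetermined threshold, reduce the output illumination intensity
    of one of the sources (here, illustratively, the brightest one),
    switching it off once its power reaches zero.
    """
    if brightness >= threshold:
        return list(powers)  # brightness acceptable: leave sources unchanged
    new_powers = list(powers)
    # Pick one source to dim; choosing the brightest is an assumption.
    idx = max(range(len(powers)), key=lambda i: powers[i])
    new_powers[idx] = max(0.0, new_powers[idx] - step)
    return new_powers
```

A real controller would run this once per captured frame, with `brightness` supplied by the image processor.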
Abstract:
Described herein are systems and methods of determining a pose of a camera within a vehicle scene. In one embodiment, a method (400) includes the initial step (401) of capturing an image of the vehicle scene from the camera. At step (402), reference data indicative of the vehicle scene is loaded, the reference data including positions and orientations of known features within the vehicle scene. Next, at step (403), the geometric appearance of one or more of the known features is identified within the image. Finally, at step (404), the three dimensional position and orientation of the camera relative to the known features identified in step (403) is determined from the geometric appearance, and a pose of the camera within the vehicle scene is calculated.
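The geometric-appearance idea in step (404) can be illustrated under a strong simplifying assumption: a single known feature of known physical width, viewed by a pinhole camera, yields the camera's distance from the feature's apparent width in the image. Full pose recovery would use several features and a perspective-n-point solver; this sketch (names and the pinhole model are assumptions, not the patent's method) shows only the size-to-distance relation.

```python
def distance_from_feature(apparent_width_px, real_width_m, focal_px):
    """Pinhole-camera distance estimate from a known feature's apparent size.

    distance = focal_length * real_width / apparent_width
    """
    return focal_px * real_width_m / apparent_width_px
```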
Abstract:
Described herein is a method (500) of registering the position and orientation of one or more cameras (312-315) in a camera imaging system (100, 400). The method includes: at step (502), receiving, from a depth imaging device (330), data indicative of a three dimensional image of the scene. At step (503), the three dimensional image is calibrated with a reference frame relative to the scene, the reference frame including a reference position and a reference orientation. At step (504), a three dimensional position of each of the cameras within the three dimensional image is determined in the reference frame. At step (505), an orientation of each camera is determined in at least one dimension in the reference frame. At step (506), the position and orientation for each camera are combined to determine a camera pose in the reference frame.
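Step (506), combining a camera's position and orientation into a pose in the scene reference frame, can be sketched in two dimensions of rotation for brevity. Representing orientation as a single yaw angle is an assumption made purely for illustration; a real system would use a full 3-D rotation.

```python
import math

def camera_pose(position, yaw, ref_origin, ref_yaw):
    """Express a camera's position and orientation in the reference frame.

    The camera position is translated to the reference origin and
    rotated by -ref_yaw about the vertical axis; the camera yaw is
    expressed relative to the reference yaw.
    """
    dx = position[0] - ref_origin[0]
    dy = position[1] - ref_origin[1]
    c, s = math.cos(-ref_yaw), math.sin(-ref_yaw)
    # Rotate the planar offset into the reference frame.
    x = c * dx - s * dy
    y = s * dx + c * dy
    return (x, y, position[2] - ref_origin[2]), yaw - ref_yaw
```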
Abstract:
A computerized method of determining a camera pose of a forward facing camera in a vehicle scene, including: capturing images of a vehicle driver's face from a driver facing camera and images of the forward road scene from a forward facing camera; processing the images of the driver's face to derive gaze direction data in a vehicle frame of reference; statistically collating the gaze direction data into a frequency distribution of gaze angles; identifying peaks in the frequency distribution and associating them with reference points in the images of the forward road scene to determine one or more reference gaze positions in the vehicle reference frame; and correlating the one or more reference gaze positions with positions of the reference points in the forward facing camera reference frame to determine the camera pose of the forward facing camera in the vehicle frame of reference.
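The collation step, building a frequency distribution of gaze angles and identifying its peaks, can be sketched with a simple histogram. The bin width, minimum count, and function names here are illustrative assumptions, not values from the abstract.

```python
from collections import Counter

def gaze_angle_peaks(gaze_angles_deg, bin_width=5, min_count=2):
    """Histogram gaze angles and return the centres of peak bins.

    A bin is a peak if its count meets `min_count` and is at least as
    large as both neighbouring bins' counts.
    """
    bins = Counter(int(a // bin_width) for a in gaze_angles_deg)
    peaks = []
    for b, n in bins.items():
        if n >= min_count and n >= bins.get(b - 1, 0) and n >= bins.get(b + 1, 0):
            peaks.append(b * bin_width + bin_width / 2)  # bin centre
    return sorted(peaks)
```

In the method above, such peaks would correspond to frequently viewed reference points (e.g. mirrors or the road ahead) used to anchor the camera pose.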
Abstract:
Systems / methods for capturing true gaze position data of subject (102) located within monitoring environment (104). System (100) includes: plurality of light sources (106-114) positioned at known three dimensional locations within monitoring environment (104); one or more cameras (117) positioned to capture image data corresponding to images of the eyes of subject (102); and system controller (123) that: (i) processes the captured images to determine gaze position of subject (102) within monitoring environment (104); (ii) selectively illuminates respective ones of light sources (106-114) at respective time intervals to temporarily attract driver's gaze position towards currently activated light source; (iii) detects a look event where subject's gaze position is determined to be at the currently activated light source; (iv) during each look event, records gaze position as being the known three dimensional location of the currently activated light source; (v) stores gaze position data in a database with image data.
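Steps (iii) and (iv), detecting a look event and recording the active source's known location as ground-truth gaze data, can be sketched as follows. The distance threshold and all names are illustrative assumptions.

```python
import math

def record_look_event(gaze_estimate, active_source_pos, database,
                      threshold=0.05):
    """Record a look event if the gaze lands on the active light source.

    If the estimated 3-D gaze position is within `threshold` (metres,
    assumed) of the currently activated light source, store the
    source's known location as the true gaze position.
    """
    if math.dist(gaze_estimate, active_source_pos) <= threshold:
        database.append(active_source_pos)
        return True
    return False
```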
Abstract:
Described herein is a mount (1) for supporting a mobile device (3) having a wireless transceiver (60) within a vehicle (5) having a driver. The mount (1) includes a body (7) having a supporting formation (9) adapted to releasably support the mobile device (3) in a supported operative position within the vehicle (5). The mount (1) also includes an electrical interface (25) for connecting to an external power source, the interface (25) being positioned to engage a complementary electrical port (27) of the mobile device (3) when the mobile device (3) is in the operative position to supply power to the mobile device (3). The mount (1) and the mobile device (3) collectively define an illumination device (29, 31) and a first camera (33) that cooperate to obtain predetermined performance information about the driver.
Abstract:
A method of reducing the illumination power requirements for an object tracking system, the method including the steps of: (a) determining a current location of the object within a scene; (b) for a future frame: determining a band around the object of interest; determining start and stop times for when the rolling shutter detector will be sampling the band; and illuminating the object only whilst the rolling shutter detector is sampling the band; (c) for a future frame: predicting the location of the object relative to the tracking system; determining the ambient light levels; and illuminating the object with the minimum optical power required for the object to be imaged suitably for tracking.
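The timing computation in step (b) follows directly from how a rolling shutter works: rows are read out sequentially at a fixed line time, so the band's top and bottom rows define the illumination window. This sketch is illustrative; the parameter names are assumptions, and a real system would also account for exposure time per row.

```python
def band_illumination_window(band_top_row, band_bottom_row,
                             frame_start_time, line_time):
    """Return (start, stop) times for illuminating only while the
    rolling shutter reads the rows of the band around the object.

    `line_time` is the sensor's per-row readout time; the stop time
    covers through the end of the band's last row.
    """
    start = frame_start_time + band_top_row * line_time
    stop = frame_start_time + (band_bottom_row + 1) * line_time
    return start, stop
```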
Abstract:
Described herein is a method (1100) of measuring a distance from a camera (106) to a face of a vehicle driver (102) in a driver monitoring system (100). The camera (106) includes a digital image sensor having a plurality of phase detecting pixels. The phase detecting pixels are configured to generate first and second image data corresponding to light received along two optical paths through the camera's imaging system. The method (1100) includes, at step (1101) positioning the camera (106) at an imaging position to capture an image of the driver (102) including the driver's face. At step (1102), the image is processed to identify a face region being a region of pixels corresponding to the driver's face or head. At step (1103), a first subset of the phase detecting pixels representing those which correspond with the face region is determined. At step (1104), the first and second image data obtained by the first subset of the phase detecting pixels is compared to determine a spatial image offset. Finally, at step (1105), a first distance estimate of the distance between a region of the driver's face and the image sensor is determined from the spatial image offset.
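Step (1105), converting the spatial offset between the two phase-detect images into a distance estimate, can be sketched with a stereo-style inverse relation between disparity and distance. The single calibration constant is a deliberate simplification; actual phase-detect sensors use per-device calibration curves, and the function name is an assumption.

```python
def distance_from_phase_offset(offset_px, calib_constant):
    """Estimate subject distance from the phase-detect pixel disparity.

    Modelled here (as an assumption) like stereo disparity:
    distance = calib_constant / |offset|. A zero offset indicates the
    subject lies at the calibrated focus distance.
    """
    if offset_px == 0:
        raise ValueError("zero offset: subject at calibrated focus distance")
    return calib_constant / abs(offset_px)
```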
Abstract:
Described herein are systems and methods for performing eye gaze tracking. The method includes capturing, from one or more imaging devices, a sequence of time separated images of the subject's face including one or both of the subject's eyes; processing the images to detect specular reflections present in the images; characterizing the detected specular reflections into corneal reflections and non-corneal reflections; upon detection of at least one corneal reflection, performing a first eye gaze tracking procedure based on the relative positions of the at least one corneal reflection and at least one reference eye feature; upon detection of no corneal reflections, performing a second eye gaze tracking procedure on one or both eyes of the subject based on the estimation of head pose of the subject and outputting eye gaze vectors of one or both eyes from the first or second eye gaze tracking procedure.
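The dispatch logic described above, using a glint-based procedure when at least one corneal reflection is detected and falling back to a head-pose-based procedure otherwise, can be sketched with the two procedures as stand-in callables. All names here are illustrative.

```python
def track_gaze(reflections, glint_method, head_pose_method):
    """Select the eye gaze tracking procedure based on detected reflections.

    `reflections` is a list of (kind, position) tuples, where kind is
    'corneal' or 'non-corneal'. With at least one corneal reflection,
    the first (glint-based) procedure runs on the corneal reflections;
    otherwise the second (head-pose-based) procedure is used.
    """
    corneal = [r for r in reflections if r[0] == "corneal"]
    if corneal:
        return glint_method(corneal)
    return head_pose_method()
```

Either branch returns the eye gaze vectors, matching the abstract's requirement that output comes from whichever procedure ran.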