Abstract:
Systems, methods, and computer readable media for calibrating two cameras (image capture units) using a non-standard, and initially unknown, calibration object are described. More particularly, an iterative approach to determining the structure and pose of a target object in an unconstrained environment is disclosed. The target object may be any of a number of predetermined objects such as a specific three-dimensional (3D) shape, a specific type of animal (e.g., dogs), or the face of an arbitrary human. Virtually any object whose structure may be expressed in terms of a relatively low-dimensional parametrized model may be used as a target object. The identified object (i.e., its pose and shape) may be used as input to a bundle adjustment operation, resulting in camera calibration.
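The iterative structure-and-pose fit described in this abstract can be illustrated with a toy sketch. Here the "parametrized model" is reduced to a single shape parameter (a scale) and a single pose parameter (a shift), and `fit_structure_and_pose` alternates closed-form 1-D least-squares updates; the function names and the simplification to one dimension are hypothetical, not the patented method. The fitted parameters are what would then feed a bundle adjustment.

```python
def reprojection_error(observed, model_pts, scale, shift):
    # Toy stand-in for projecting a parametrized model into the image:
    # the "model" maps each canonical point p to scale * p + shift.
    return sum((o - (scale * p + shift)) ** 2
               for o, p in zip(observed, model_pts))

def fit_structure_and_pose(observed, model_pts, iters=50):
    # Alternate between refining "shape" (scale) and "pose" (shift),
    # a hypothetical simplification of the iterative structure-and-pose
    # estimation; each step is the 1-D least-squares optimum.
    scale, shift = 1.0, 0.0
    for _ in range(iters):
        # Best scale given the current shift.
        num = sum((o - shift) * p for o, p in zip(observed, model_pts))
        den = sum(p * p for p in model_pts)
        scale = num / den
        # Best shift given the current scale.
        shift = sum(o - scale * p
                    for o, p in zip(observed, model_pts)) / len(observed)
    return scale, shift
```

Because each update is the exact coordinate-wise minimizer of a convex quadratic, the alternation converges to the jointly optimal (scale, shift) pair.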
Abstract:
Camera calibration includes capturing a first image of an object by a first camera, determining spatial parameters between the first camera and the object using the first image, obtaining a first estimate for an optical center, iteratively calculating a best set of optical characteristics and test setup parameters based on the first estimate for the optical center until the difference between the most recently calculated set of optical characteristics and the previously calculated set satisfies a predetermined threshold, and calibrating the first camera based on the best set of optical characteristics. Multi-camera system calibration may include calibrating, based on a detected misalignment of features in multiple images, the multi-camera system using a context of the multi-camera system and one or more prior stored contexts.
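The convergence loop in this abstract (iterate until successive estimates of the optical characteristics agree to within a threshold) can be sketched as follows. `refine` is a hypothetical placeholder for one pass of computing optical characteristics and test-setup parameters from the current estimate; the loop structure, not the physics, is what is illustrated.

```python
def iterative_calibration(initial_estimate, refine, threshold=1e-6, max_iters=100):
    # refine(x) stands in for one pass of recomputing the optical
    # characteristics from the current optical-center estimate x.
    prev = initial_estimate
    for _ in range(max_iters):
        cur = refine(prev)
        # Stop when successive estimates differ by less than the
        # predetermined threshold.
        if abs(cur - prev) < threshold:
            return cur
        prev = cur
    return prev
```

Any contractive refinement step converges under this loop; for instance, a Newton step for the square root of 2 reaches the threshold in a handful of iterations.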
Abstract:
The present disclosure relates to image processing and analysis, and in particular to the automatic segmentation of identifiable items in an image, for example the segmentation and identification of characters or symbols in an image. Upon user indication, multiple images of a subject are captured, with variations between the images created using lighting, spectral content, angles, and other factors. The images are processed together so that characters and symbols may be recognized from the surface of the imaged subject.
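One simple way to "process the images together," sketched below in pure Python, is to keep, per pixel, the capture that deviates most from its own image's mean, so that surface detail revealed by any of the lighting variations survives into the fused image handed to a recognizer. The function name and the max-contrast rule are illustrative assumptions, not the disclosed algorithm.

```python
def fuse_captures(captures):
    # captures: list of 2D grayscale images (lists of rows) of the same
    # subject taken under varied lighting, spectral content, or angle.
    means = [sum(sum(row) for row in img) / (len(img) * len(img[0]))
             for img in captures]
    h, w = len(captures[0]), len(captures[0][0])
    fused = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Keep the capture whose pixel deviates most from its own
            # image mean, i.e. carries the strongest local contrast.
            best = max(range(len(captures)),
                       key=lambda i: abs(captures[i][y][x] - means[i]))
            fused[y][x] = captures[best][y][x]
    return fused
```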
Abstract:
A device with a touch-sensitive display and a plurality of applications, including a camera application, while the device is in a locked, passcode-protected state: displays a lock screen interface, the lock screen interface including a camera access indicia; detects a gesture; in response to a determination that the gesture starts on the camera access indicia: ceases to display the lock screen interface; starts a restricted session for the camera application; displays an interface for the camera application, without displaying a passcode entry interface; and maintains the device in the locked, passcode-protected state for the applications other than the camera application; and in response to a determination that the gesture starts at a location other than the camera access indicia: displays a passcode entry interface, wherein in response to entry of a correct passcode in the passcode entry interface, the device enters an unlocked state.
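The branching in this abstract is a two-way dispatch on where the gesture starts. The sketch below models it with a hypothetical `handle_lock_screen_gesture` that returns the resulting UI state; the dictionary keys and region representation are illustrative assumptions, not the claimed interface.

```python
def handle_lock_screen_gesture(start_location, camera_indicia_region):
    # Dispatch on whether the gesture starts on the camera access indicia.
    if start_location in camera_indicia_region:
        # Restricted camera session: no passcode entry, other apps
        # remain in the locked, passcode-protected state.
        return {"show_passcode_entry": False,
                "session": "restricted_camera",
                "locked_for_other_apps": True}
    # Gesture started elsewhere: show the passcode entry interface.
    return {"show_passcode_entry": True,
            "session": None,
            "locked_for_other_apps": True}
```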
Abstract:
Systems, methods, and computer readable media to capture and process high dynamic range (HDR) images when appropriate for a scene are disclosed. When appropriate, multiple images at a single, slightly underexposed exposure value are captured (forming a constant-bracket HDR capture sequence) and local tone mapping (LTM) is applied to each image. Local tone map and histogram information can be used to generate a noise-amplification mask, which can be used during fusion operations. Images obtained and fused in the disclosed manner provide high dynamic range with improved noise and de-ghosting characteristics.
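A minimal pure-Python sketch of the constant-bracket pipeline: tone-map each same-exposure frame, then blend the noise-reducing multi-frame average against a single reference frame according to a per-pixel mask. The global gain used as the "local" tone map, and the names `local_tone_map` and `fuse_constant_bracket`, are simplifying assumptions, not the disclosed implementation.

```python
def local_tone_map(pixel, gain=1.25, max_val=255):
    # Stand-in for per-region local tone mapping: brighten the slightly
    # underexposed capture, clamped to the valid pixel range.
    return min(int(pixel * gain), max_val)

def fuse_constant_bracket(frames, noise_mask):
    # frames: list of 2D pixel arrays captured at one (underexposed)
    # exposure value. noise_mask: per-pixel weights in [0, 1] derived
    # from tone map / histogram information; 1 means trust the
    # multi-frame average fully, 0 means fall back to the reference.
    h, w = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            mapped = [local_tone_map(f[y][x]) for f in frames]
            avg = sum(mapped) / len(mapped)   # noise-reducing average
            ref = mapped[0]                   # reference frame (de-ghost fallback)
            m = noise_mask[y][x]
            fused[y][x] = m * avg + (1 - m) * ref
    return fused
```

Because every frame shares one exposure value, no exposure normalization is needed before averaging, which is the practical appeal of a constant bracket.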