Abstract:
A method and device for determining the depth of field of a lens system at a specific distance by means of measuring optics (10), comprising the steps of:
- manually focusing (12) the measuring optics (10) by eye (11), without projection onto a focusing screen;
- monitoring (13) the movement of the mechanical elements of the measuring optics (10) during focusing (12);
- determining and indicating (5) the set distance from that movement,
wherein the depth-of-field range for the set distance is computed and indicated (5), taking into account the focal length, aperture, hyperfocal distance, and circle-of-confusion diameter.
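The computation named in the last step follows from standard thin-lens optics. A minimal sketch in Python, assuming the usual hyperfocal-distance formulas; the function and parameter names are illustrative, not taken from the patent:

```python
import math

def depth_of_field(focal_mm: float, f_number: float,
                   coc_mm: float, subject_mm: float):
    """Depth-of-field limits for a thin lens.

    focal_mm   -- focal length f
    f_number   -- aperture N
    coc_mm     -- circle-of-confusion diameter c
    subject_mm -- focused distance s
    Returns (near_limit_mm, far_limit_mm); far is inf beyond the
    hyperfocal distance.
    """
    f, N, c, s = focal_mm, f_number, coc_mm, subject_mm
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = math.inf if s >= H else s * (H - f) / (H - s)
    return near, far

# Example: 50 mm lens at f/2.8, c = 0.03 mm, focused at 3 m
# -> roughly 2.73 m to 3.33 m of acceptable sharpness
print(depth_of_field(50.0, 2.8, 0.03, 3000.0))
```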
Abstract:
An improved system and method for providing information about the level of focus of objects appearing in the viewfinder of a digital camera system. The invention analyzes a plurality of objects or regions of an image being shown to a user through the viewfinder of an electronic device. The system creates an overview map that matches the objects within the image and applies colors to the objects in the overview map to represent their respective focus-quality levels. The overview map is superimposed over the image, giving the user information about the relative focus quality of the objects.
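One plausible way to build such a color-coded overlay (the abstract does not specify the sharpness metric, so this is an assumption) is to score each image region by local gradient energy and map the score to a color. A minimal numpy sketch with illustrative tile size and thresholds:

```python
import numpy as np

def focus_overlay(gray: np.ndarray, tile: int = 32) -> np.ndarray:
    """Return an RGB overview map: green = sharp, yellow = soft, red = blurry.

    Sharpness per tile is estimated by gradient energy; the thresholds
    below are illustrative and would be tuned per device.
    """
    h, w = gray.shape
    overlay = np.zeros((h, w, 3), dtype=np.uint8)
    gy, gx = np.gradient(gray.astype(np.float64))
    energy = gx * gx + gy * gy
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            score = energy[y:y + tile, x:x + tile].mean()
            if score > 200.0:      # sharp
                color = (0, 255, 0)
            elif score > 50.0:     # moderately focused
                color = (255, 255, 0)
            else:                  # out of focus
                color = (255, 0, 0)
            overlay[y:y + tile, x:x + tile] = color
    return overlay

# Blend with the live viewfinder frame, e.g.:
# preview = (0.7 * frame + 0.3 * focus_overlay(gray)).astype(np.uint8)
```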
Abstract:
Aspects of the disclosure relate to an apparatus including multiple image sensors sharing one or more optical paths for imaging. An example method includes identifying whether a device including a first aperture, a first image sensor, a second image sensor, and an optical element is to be in a first device mode or a second device mode. The method also includes controlling the optical element based on the identified device mode. The optical element directs light from the first aperture to the first image sensor in the first optical element mode, and directs light from the first aperture to the second image sensor in the second optical element mode.
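The mode logic described here amounts to a small state machine: an identified device mode selects which sensor the optical element feeds. A hypothetical sketch of that control flow; the class and enum names are illustrative, not from the disclosure:

```python
from enum import Enum

class DeviceMode(Enum):
    FIRST = 1    # e.g. wide-angle capture
    SECOND = 2   # e.g. telephoto capture

class OpticalElement:
    """Routes light from the first aperture to one of two image sensors."""

    def __init__(self):
        self.mode = 1  # OE mode 1: first aperture -> first image sensor

    def set_oe_mode(self, mode: int):
        # In hardware this might rotate a prism or move a mirror;
        # here we only record the requested light path.
        self.mode = mode

def configure(device_mode: DeviceMode, oe: OpticalElement):
    """Map the identified device mode to an optical element mode."""
    if device_mode is DeviceMode.FIRST:
        oe.set_oe_mode(1)   # first aperture -> first image sensor
    else:
        oe.set_oe_mode(2)   # first aperture -> second image sensor
```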
Abstract:
Embodiments of the invention provide a camera-array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array, in order to increase the accuracy of the depth maps. Some embodiments provide a near-IR illumination source for use in computing depth maps.
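The baseline choice matters because stereo depth error grows with the square of distance and shrinks with baseline: for depth Z = f·B/d, a disparity error Δd yields a depth error of roughly Z²·Δd/(f·B). A short sketch with illustrative numbers:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float,
                depth_m: float, disp_err_px: float = 1.0) -> float:
    """Approximate depth error for a given disparity error:
    dZ ~= Z^2 * delta_d / (f * B)."""
    return depth_m ** 2 * disp_err_px / (focal_px * baseline_m)

f = 1000.0  # focal length in pixels (illustrative)
# Short baseline suffices near the camera; a longer baseline is
# needed to keep far-field depth error acceptable.
print(depth_error(f, 0.02, 0.5))   # 2 cm baseline at 0.5 m -> ~1.3 cm error
print(depth_error(f, 0.10, 5.0))   # 10 cm baseline at 5 m  -> ~25 cm error
```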
Abstract:
Aspects of the disclosure relate to an emitter for active depth sensing shared by multiple apertures. An example method for active depth sensing by a device including a first aperture, a second aperture, a first emitter, and an optical element includes identifying whether the optical element is to be in a first optical element (OE) mode or a second OE mode, and controlling the optical element based on the identified OE mode. The optical element directs light from the first emitter towards the first aperture in the first OE mode. Light is directed from the first emitter towards the second aperture in the second OE mode.
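This mirrors the sensor-side routing above, but on the emission path. A hypothetical sketch of the control flow, with a simple time-of-flight depth computation added for context; the disclosure does not specify the depth-sensing technique, so ToF is purely an assumption here:

```python
C = 299_792_458.0  # speed of light, m/s

def route_emitter(oe_mode: int) -> str:
    """OE mode 1 sends the first emitter's light out the first aperture;
    OE mode 2 sends it out the second aperture."""
    return "first_aperture" if oe_mode == 1 else "second_aperture"

def tof_depth(round_trip_s: float) -> float:
    """Depth from pulse round-trip time: d = c * t / 2."""
    return C * round_trip_s / 2.0

# Emit through the aperture facing the scene, then measure the return:
aperture = route_emitter(oe_mode=2)
print(aperture, tof_depth(6.67e-9))  # ~1 m for a 6.67 ns round trip
```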
Abstract:
An optical module includes a first optics group, a second optics group, and an image sensor, wherein the first and second optics groups are configured to provide an image having a focus and a magnification to the image sensor. In some embodiments of the present invention, a first optics assembly includes a first optics group coupled to a threaded portion of a first lead screw so that rotation of the first lead screw results in translation of the first optics group along an axis of the first lead screw, a first actuator for rotating the first lead screw, and a first sensing target configured to permit detection of rotation of the first lead screw. In some embodiments of the present invention, a second optics assembly includes a second optics group coupled to a threaded portion of a second lead screw so that rotation of the second lead screw results in translation of the second optics group along an axis of the second lead screw, a second actuator for rotating the second lead screw, and a second sensing target configured to detect rotation of the second lead screw.
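Sensing lead-screw rotation lets the module infer the optics group's position without a dedicated linear encoder: axial travel equals revolutions times the screw lead. A minimal sketch, assuming illustrative values for the lead and the counts per revolution delivered by the sensing target:

```python
def optics_travel_mm(sensor_counts: int, counts_per_rev: int = 12,
                     lead_mm: float = 0.5) -> float:
    """Linear travel of an optics group driven by a rotating lead screw.

    sensor_counts  -- pulses observed from the rotation-sensing target
    counts_per_rev -- sensor pulses per full screw revolution (illustrative)
    lead_mm        -- axial advance per revolution, i.e. the screw lead
    """
    revolutions = sensor_counts / counts_per_rev
    return revolutions * lead_mm

# 30 counts at 12 counts/rev with a 0.5 mm lead -> 1.25 mm of travel
print(optics_travel_mm(30))
```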