Abstract:
A camera device with a sensor mounted at an angle relative to a mounting surface or other reference surface is described. Camera modules which include mirrors for light redirection and which are mounted at different angles in the camera device have sensors with different amounts of rotation. In some embodiments camera modules without mirrors use sensors which are not rotated, while camera modules with mirrors may or may not use rotated sensors, depending on the angle at which the modules are mounted in the camera device. By rotating the sensors of some camera modules, rotation that may be introduced by the angle at which a camera module is mounted can be offset. Thanks to the rotation of the sensor in the camera module used to capture an image, the images captured by different camera modules are combined in some embodiments without the need to computationally rotate an image.
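The rotation-offset idea above can be sketched as a simple rule: mirrored modules cancel the rotation their mount angle introduces by rotating the sensor the opposite way. This is a minimal sketch; the function name, the 45-degree example, and the sign convention are illustrative assumptions, not values from any actual camera device.

```python
def sensor_rotation_for_module(mount_angle_deg, has_mirror):
    """Return the sensor rotation (degrees) needed so the captured image
    is upright despite the angle at which the module is mounted.

    Modules without mirrors are assumed to be mounted without image
    rotation and so use unrotated sensors; mirrored modules rotate the
    sensor by the negative of the mount angle to offset the rotation
    the mount introduces.
    """
    if not has_mirror:
        return 0.0
    return -mount_angle_deg

# Example: a mirrored module mounted at 45 degrees would use a sensor
# rotated -45 degrees, so no computational rotation is needed when its
# image is combined with images from other modules.
print(sensor_rotation_for_module(45.0, has_mirror=True))   # -45.0
print(sensor_rotation_for_module(10.0, has_mirror=False))  # 0.0
```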
Abstract:
The present application relates to image capture and generation methods and apparatus and, more particularly, to methods and apparatus which detect and/or indicate a dirty lens condition. One embodiment of the present invention includes a method of operating a camera including the steps of capturing a first image using a first lens of the camera; determining, based on at least the first image, if a dirty camera lens condition exists; and in response to determining that a dirty lens condition exists, generating a dirty lens condition notification or initiating an automatic camera lens cleaning operation. In some embodiments multiple captured images with overlapping image regions are compared to determine if a dirty lens condition exists.
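The comparison of overlapping image regions described above can be sketched as follows. The threshold, the toy pixel values, and the mean-absolute-difference metric are illustrative assumptions; an actual implementation would use a more robust comparison.

```python
def mean_abs_diff(region_a, region_b):
    """Mean absolute pixel difference between two same-sized regions."""
    return sum(abs(a - b) for a, b in zip(region_a, region_b)) / len(region_a)

def dirty_lens_condition(region_a, region_b, threshold=10.0):
    """If two captures of the same overlapping scene region disagree by
    more than the threshold, treat it as a dirty lens condition (one of
    the lenses is likely obscured)."""
    return mean_abs_diff(region_a, region_b) > threshold

clean_a = [100, 102, 98, 101]
clean_b = [101, 101, 99, 100]
smudged = [60, 65, 58, 62]        # attenuated by dirt on one lens
assert not dirty_lens_condition(clean_a, clean_b)
assert dirty_lens_condition(clean_a, smudged)
```

On detection, the camera could then generate the notification or trigger the lens cleaning operation described above.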
Abstract:
Methods and apparatus for processing images captured by a camera device including multiple optical chains, e.g., camera modules, are described. Three, 4, 5 or more optical chains may be used. Different optical chains capture different images due to different perspectives. Multiple images, e.g., corresponding to different perspectives, are captured during a time period and are combined to generate a composite image. In some embodiments one of the captured images or a synthesized image is used as a reference image during composite image generation. The image used as the reference image is selected to keep the perspective of sequentially generated composite images consistent despite unintentional camera movement and/or in accordance with an expected path of travel. Thus, which camera module provides the reference image may vary over time, taking into consideration unintended camera movement. Composite image generation may be performed external to the camera device or in the camera device.
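The reference-image selection above can be sketched as picking the module whose capture is closest to the perspective of the previously generated composite. The module names and per-module displacement estimates are hypothetical inputs for illustration only.

```python
def choose_reference(module_offsets, target_perspective=(0.0, 0.0)):
    """Pick the module whose captured perspective is closest to the
    perspective of the prior composite, so sequential composites stay
    consistent despite unintended camera movement.

    module_offsets: dict mapping module id -> (dx, dy) estimated
    displacement of that module's image from the target perspective.
    """
    def distance(module_id):
        dx, dy = module_offsets[module_id]
        tx, ty = target_perspective
        return ((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5
    return min(module_offsets, key=distance)

offsets = {"mod_a": (4.0, 1.0), "mod_b": (0.5, -0.2), "mod_c": (2.0, 2.0)}
assert choose_reference(offsets) == "mod_b"
```

Because the offsets change as the camera moves unintentionally, which module wins this selection varies over time, as the abstract notes.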
Abstract:
In various embodiments light redirection device positions of one or more optical chains are moved between image capture times while the position of the camera device including the multiple optical chains remains fixed. Some features relate to performing a zoom operation using multiple optical chains of the camera. Zoom settings may be, and in some embodiments are, used in determining the angle to which the light redirection device should be set. In some embodiments different zoom focal length settings correspond to different scene capture areas for the optical chain with the moveable light redirection device. In some embodiments multiple optical chains are used in parallel, each with its own image sensor and light redirection device. Depth information is generated and used in some but not all embodiments to combine the images in a manner intended to minimize or avoid parallax distortions in the composite image which is produced.
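The mapping from a zoom setting to a light redirection device angle can be sketched as an interpolation between calibrated endpoints. All numeric values here (focal-length range, angle range) are illustrative assumptions, not calibration data from any actual device.

```python
def redirection_angle(zoom_mm, zoom_min=70.0, zoom_max=140.0,
                      angle_min=42.0, angle_max=48.0):
    """Interpolate the light redirection device (e.g., mirror) angle in
    degrees for a zoom focal-length setting. Different zoom settings
    correspond to different scene capture areas, hence different angles
    for the moveable light redirection device."""
    zoom_mm = max(zoom_min, min(zoom_max, zoom_mm))   # clamp to range
    t = (zoom_mm - zoom_min) / (zoom_max - zoom_min)
    return angle_min + t * (angle_max - angle_min)

assert redirection_angle(70.0) == 42.0
assert redirection_angle(140.0) == 48.0
assert redirection_angle(105.0) == 45.0    # midpoint of the range
```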
Abstract:
In various embodiments a camera with multiple optical chains, e.g., camera modules, is controlled to operate in one of a variety of supported modes of operation. The modes include a non-motion mode, a motion mode, a normal burst mode and/or a reduced data burst mode. Motion mode is well suited for capturing an image including motion, e.g., moving object(s), with some modules being used to capture scene areas using a shorter exposure time than other modules and the captured images then being combined taking into consideration locations of motion. A reduced data burst mode is supported in some embodiments in which camera modules with different focal lengths capture images at different rates. While the camera modules of different focal length operate at different image capture rates in the reduced data burst mode, images are combined to support a desired composite image output rate, e.g., a desired frame rate.
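The reduced data burst idea above can be sketched as pairing each frame from a fast module with the most recent frame from a slower module, so composites can still be produced at the desired output rate. The capture rates and timestamps below are illustrative assumptions.

```python
def pair_frames(fast_times, slow_times):
    """For each fast-module capture time, pick the latest slow-module
    frame captured at or before it, so images from modules running at
    different rates can still be combined at the fast module's rate."""
    pairs = []
    j = 0
    for t in fast_times:
        while j + 1 < len(slow_times) and slow_times[j + 1] <= t:
            j += 1
        pairs.append((t, slow_times[j]))
    return pairs

fast = [0.0, 33.3, 66.6, 100.0]   # e.g., ~30 fps module (times in ms)
slow = [0.0, 100.0]               # longer-focal-length module at ~10 fps
assert pair_frames(fast, slow) == [(0.0, 0.0), (33.3, 0.0),
                                   (66.6, 0.0), (100.0, 100.0)]
```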
Abstract:
Handheld camera related methods and apparatus are described. Intuitive zoom control and/or focus control based on camera acceleration and/or motion are described. Also described are handheld camera holder device methods and apparatus. The camera holder can be held in a hand and used to automatically stabilize a camera, e.g., a camera designed to be held in one hand. The stabilization is provided by gyroscopes alone or in combination with closed loop camera orientation control which can use information from one or more sensors included in the camera. The camera holder is designed so that, at least for some but not necessarily all cameras which may be placed in the camera holder, the controllable axes of rotation intersect with the center of mass of the camera when the camera is present in the holder, thereby requiring little power to control and/or maintain camera orientation.
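The closed-loop orientation control mentioned above can be sketched as a simple proportional correction driving the measured orientation toward a target. The gain, step count, and angle values are illustrative assumptions, not parameters of any actual holder.

```python
def orientation_steps(target, measured, gain=0.5, steps=5):
    """Apply a proportional correction each control step. Because the
    controllable axes of rotation pass through the camera's center of
    mass, small corrections (modeled here as small angle updates)
    suffice to maintain orientation."""
    history = [measured]
    for _ in range(steps):
        measured = measured + gain * (target - measured)
        history.append(measured)
    return history

# Starting 8 degrees off target, the error shrinks every step.
hist = orientation_steps(target=0.0, measured=8.0)
assert abs(hist[-1]) < abs(hist[0])
assert all(abs(b) < abs(a) for a, b in zip(hist, hist[1:]))
```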
Abstract:
Methods and apparatus that facilitate or implement focus control in a camera and/or can be used to set the camera focus distance, e.g., the distance between the camera and an object which will appear in focus when the object's picture is taken by the camera, are described. A depth map is generated for an image area, e.g., an area corresponding to an image which is captured by the camera. Based on said depth map, in various exemplary embodiments, a visual indication of which portions of an image captured by the camera device are in focus is generated. A user may indicate a change in the desired focus distance by touching a portion of the screen corresponding to an object at the desired focus distance or by varying a slider or other focus distance control.
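The touch-to-focus flow above can be sketched as a depth-map lookup plus an in-focus mask for the visual indication. The depth map, touch coordinates, and tolerance are illustrative; a real depth map would be generated from the captured image area.

```python
def focus_distance_from_touch(depth_map, touch_row, touch_col):
    """Set the focus distance to the depth of the object under the
    touched screen location."""
    return depth_map[touch_row][touch_col]

def in_focus_mask(depth_map, focus_distance, tolerance=0.5):
    """Visual-indication sketch: mark which portions of the image lie
    within a tolerance of the focus distance (a crude depth of field)."""
    return [[abs(d - focus_distance) <= tolerance for d in row]
            for row in depth_map]

depth = [[1.0, 1.1],
         [3.0, 5.0]]                       # depths in meters
fd = focus_distance_from_touch(depth, 0, 1)
assert fd == 1.1
assert in_focus_mask(depth, fd) == [[True, True], [False, False]]
```

Dragging a slider would simply replace the touched-object depth with the slider's focus-distance value in the same mask computation.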
Abstract:
Features relating to reducing and/or eliminating noise from images are described. In some embodiments depth based denoising is used on images captured by one or more camera modules based on depth information of a scene area and optical characteristics of the one or more camera modules used to capture the images. By taking into consideration the camera module optics and the depth of the object or objects included in the image portion being processed, a maximum expected frequency can be determined and the image portion is then filtered to reduce or remove frequencies above the maximum expected frequency. Noise can be reduced or eliminated from image portions captured by one or more camera modules (131,..., 133). In some embodiments a maximum expected frequency is determined on a per camera module and depth basis. A composite image is generated based on, e.g., from, filtered portions of one or more images.
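The depth-based denoising described above can be sketched in two steps: derive a maximum expected frequency from depth for a given module's optics, then low-pass filter the image portion to remove frequencies above it. The cutoff model and the moving-average filter below are illustrative assumptions, not the optics model of any particular camera module.

```python
def max_expected_freq(depth_m, module_constant=2.0):
    """Toy per-module model: the maximum spatial frequency the optics
    can deliver falls off with the depth of the imaged object."""
    return module_constant / depth_m

def low_pass(samples, window):
    """Simple moving-average low-pass filter over a 1-D image portion;
    a wider window corresponds to a lower frequency cutoff."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# A more distant portion has a lower expected maximum frequency, so it
# is filtered more aggressively; content above that frequency is noise.
assert max_expected_freq(1.0) > max_expected_freq(4.0)
smoothed = low_pass([10, 50, 10, 50, 10], window=3)
assert max(smoothed) < 50        # high-frequency swing reduced
```

A composite image would then be generated from the filtered portions, as the abstract notes.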
Abstract:
Methods and apparatus for reading out pixel values from sensors in a synchronized manner are described. Readout of rows of pixel values from different sensors is controlled so that pixel values of different sensors corresponding to the same portion of a scene are read out in a way that the same portions of a scene are captured at the same or nearly the same time by different sensors. In one embodiment a first sensor which captures a large scene area alternates between reading out rows of pixel values from a top portion and a bottom portion of the first sensor, while sensors corresponding to smaller areas of the scene read out rows of pixel values in a consecutive manner. Sensors may read out rows of pixel values at the same rate despite corresponding to optical chains with different focal lengths. The image captured by the first sensor facilitates image combining.
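The alternating readout pattern described for the first sensor can be sketched as interleaving rows from the sensor's top half with rows from its bottom half. The row count is illustrative.

```python
def alternating_readout(num_rows):
    """Readout order for the first (large scene area) sensor: top-half
    row 0, then a bottom-half row, then top-half row 1, and so on, so
    its readout of each scene portion roughly coincides with the
    consecutive readout of the sensors that cover only that portion."""
    top = list(range(num_rows // 2))
    bottom = list(range(num_rows // 2, num_rows))
    order = []
    for t, b in zip(top, bottom):
        order.extend([t, b])
    return order

# A 6-row large-area sensor alternates top and bottom rows, while a
# smaller-area sensor would simply read its rows 0..N-1 consecutively
# during the same interval.
assert alternating_readout(6) == [0, 3, 1, 4, 2, 5]
```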
Abstract:
Methods and apparatus for supporting zoom operations using a plurality of optical chain modules, e.g., camera modules, are described. Switching between use of groups of optical chains with different focal lengths is used to support zoom operations. Digital zoom is used in some cases to support zoom levels between the zoom levels of different optical chain groups or the discrete focal lengths to which optical chains may be switched. In some embodiments optical chains have adjustable focal lengths and are switched between different focal lengths. In other embodiments optical chains have fixed focal lengths, with different optical chain groups corresponding to different fixed focal lengths. Composite images are generated from images captured by multiple optical chains of the same group and/or different groups. The composite image is generated in accordance with a user zoom control setting. Individual composite images and/or a video sequence of composite images may be generated.
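Supporting a continuous zoom setting with discrete focal lengths, as described above, can be sketched as: switch to the group with the largest focal length not exceeding the requested one, then apply digital zoom for the remainder. The focal-length values are illustrative assumptions.

```python
def select_zoom(requested_f, group_focal_lengths=(35.0, 70.0, 140.0)):
    """Return (chosen optical chain group focal length, digital zoom
    factor) for a requested zoom focal length. Digital zoom covers the
    levels between the discrete group focal lengths."""
    candidates = [f for f in group_focal_lengths if f <= requested_f]
    base = max(candidates) if candidates else min(group_focal_lengths)
    return base, requested_f / base

assert select_zoom(70.0) == (70.0, 1.0)          # exact group match
assert select_zoom(100.0) == (70.0, 100.0 / 70.0)  # digital fill-in
assert select_zoom(140.0) == (140.0, 1.0)
```

With adjustable-focal-length chains, the same selection would instead pick the nearest discrete focal length to which the chains can be switched.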