Abstract:
Generating an image with a selected level of background blur includes capturing, by a first image capture device, a focus stack, i.e., a plurality of frames of a scene wherein each frame has a different focus depth; obtaining a depth map of the scene; determining a target object and a background in the scene based on the depth map; determining a goal blur for the background; and selecting, for each pixel in an output image, a corresponding pixel from the focus stack.
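As a rough illustration of the selection step, the sketch below picks, for each output pixel, one frame from the focus stack: pixels on the target object come from the frame focused nearest their scene depth, while background pixels come from the frame whose defocus best matches the goal blur. The names (`target_depth`, the tolerance, and the use of focus-depth distance as a blur proxy) are illustrative assumptions, not the patented method.

```python
import numpy as np

def compose_from_focus_stack(stack, focus_depths, depth_map, target_depth, goal_blur):
    """stack: (N, H, W) grayscale frames; focus_depths: (N,) depth each frame
    is focused at; depth_map: (H, W) scene depth; goal_blur: desired defocus
    (in the same depth units) for the background."""
    focus_depths = np.asarray(focus_depths, dtype=float)
    # Pixels near the target depth are treated as the target object.
    is_target = np.abs(depth_map - target_depth) < 0.1 * target_depth
    # Distance between a pixel's depth and each frame's focus depth serves
    # as a crude proxy for how defocused that pixel is in that frame.
    defocus = np.abs(focus_depths[:, None, None] - depth_map[None, :, :])
    sharp_idx = defocus.argmin(axis=0)                        # sharpest frame per pixel
    blurred_idx = np.abs(defocus - goal_blur).argmin(axis=0)  # closest to goal blur
    chosen = np.where(is_target, sharp_idx, blurred_idx)
    rows, cols = np.indices(depth_map.shape)
    return stack[chosen, rows, cols]
```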
Abstract:
A graphical user interface (GUI) element permits a user to control an application in both a coarse manner and a fine manner. When a cursor is moved to coincide with or overlap the displayed GUI element, parameter adjustment is made at a first (coarse) granularity so that rapid changes to the target parameter (e.g., displayed zoom level, image rotation, or playback volume) can be made. As the cursor is moved away from the displayed GUI element, parameter adjustment is made at a second (fine) granularity so that small changes to the target parameter can be made. In one embodiment, the further the cursor is moved from the displayed GUI element, the finer the control.
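A minimal sketch of that distance-dependent granularity: the per-tick adjustment is coarse while the cursor overlaps the element and shrinks as the cursor moves away. The inverse-distance falloff and all constants are assumptions for illustration.

```python
def adjustment_step(cursor_pos, element_rect, coarse_step=10.0, falloff=50.0):
    """Per-tick change to the target parameter (zoom, rotation, volume, ...).
    element_rect: (left, top, right, bottom) of the displayed GUI element."""
    x, y = cursor_pos
    left, top, right, bottom = element_rect
    # Distance from the cursor to the element; zero while overlapping it.
    dx = max(left - x, 0.0, x - right)
    dy = max(top - y, 0.0, y - bottom)
    dist = (dx * dx + dy * dy) ** 0.5
    # Overlap -> coarse steps; the farther away, the finer the control.
    return coarse_step / (1.0 + dist / falloff)
```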
Abstract:
Lens flare mitigation techniques determine which pixels in images of a sequence of images are likely to be pixels affected by lens flare. Once the lens flare areas of the images are determined, unwanted lens flare effects may be mitigated by various approaches, including reducing border artifacts along a seam between successive images, discarding entire images of the sequence that contain lens flare areas, and using tone-mapping to reduce the visibility of lens flare.
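One way to picture the detection and tone-mapping steps, as a hedged sketch rather than the patented detector: flag bright, nearly colorless pixels as likely flare and tone them down. The brightness and saturation thresholds are illustrative assumptions.

```python
import numpy as np

def mitigate_flare(rgb, bright_thresh=0.9, sat_thresh=0.15, gain=0.7):
    """rgb: float image in [0, 1], shape (H, W, 3). Returns the tone-mapped
    image and the boolean mask of pixels treated as lens flare."""
    brightness = rgb.max(axis=2)
    saturation = brightness - rgb.min(axis=2)   # crude chroma measure
    # Flare tends to be very bright and nearly colorless.
    flare = (brightness > bright_thresh) & (saturation < sat_thresh)
    out = rgb.copy()
    out[flare] *= gain                          # simple tone-down of flare areas
    return out, flare
```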
Abstract:
Traditionally, time-lapse videos are constructed from images captured at given time intervals, called “temporal points of interest” or “temporal POIs.” Disclosed herein are intelligent systems and methods for capturing and selecting better images around temporal points of interest for the construction of improved time-lapse videos. According to some embodiments, a small “burst” of images may be captured, centered around the aforementioned temporal points of interest. Each burst sequence of images may then be analyzed, e.g., by performing a similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest. Selecting the image from a given burst that is most similar to the previously selected image allows the intelligent systems and methods described herein to improve the quality of the resultant time-lapse video by discarding “outlier” or otherwise undesirable images captured in the burst sequence around a particular temporal point of interest.
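The selection step might look like the following sketch, which keeps, from each burst, the frame most similar to the previously selected frame. Mean absolute pixel difference stands in for whatever similarity metric an embodiment would use, and the seeding from the first burst's center frame is an assumption.

```python
import numpy as np

def select_timelapse_frames(bursts):
    """bursts: list of bursts, one per temporal point of interest; each
    burst is a list of (H, W) float frames."""
    # Seed with the center frame of the first burst (an arbitrary choice).
    selected = [bursts[0][len(bursts[0]) // 2]]
    for burst in bursts[1:]:
        prev = selected[-1]
        # Lower mean absolute difference -> more similar to the previous
        # pick, which rejects outlier frames (occlusions, flashes, blur).
        best = min(burst, key=lambda f: float(np.abs(f - prev).mean()))
        selected.append(best)
    return selected
```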
Abstract:
The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. With various inertial cues from the accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time, providing a continuous 3D frame of reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame of reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
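A minimal sketch of the real-time frame tracking, assuming a standard complementary filter (not necessarily the patent's algorithm): the gyrometer rate is integrated each step, and the accelerometer's gravity measurement corrects the drift so that the columns of R remain the display's X-, Y-, and Z-vectors.

```python
import numpy as np

def skew(v):
    """Cross-product matrix, so skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def update_frame(R, gyro, accel, dt, alpha=0.02):
    """R: 3x3 matrix whose columns are the device X-, Y-, and Z-vectors in
    world coordinates. gyro: rad/s in the device frame. accel: m/s^2."""
    # Integrate the angular rate (first-order small-angle update).
    R = R @ (np.eye(3) + skew(gyro * dt))
    # Drift correction: at rest the accelerometer measures gravity, i.e.
    # the world up-vector expressed in device coordinates.
    up_meas = accel / np.linalg.norm(accel)
    up_est = R.T @ np.array([0.0, 0.0, 1.0])
    # Nudge the estimate toward the measurement by a small rotation.
    R = R @ (np.eye(3) + skew(alpha * np.cross(up_meas, up_est)))
    # Re-orthonormalize so the columns remain a valid orthonormal frame.
    u, _, vt = np.linalg.svd(R)
    return u @ vt
```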
Abstract:
Disclosed herein are methods and systems for providing a user interface (UI) having a selector controllable by a physical input device. The response of the selector is adaptively adjusted to facilitate executing desired operations within the UI. A response factor defines how far the selector moves for a given movement of the physical input device. The response factor is increased so the selector can be moved a large distance, but is dynamically decreased to provide fine-tuned control of the selector when selecting densely grouped screen elements. Screen elements can be endowed with gravity, making them easier to select, or with anti-gravity, making them more difficult to select. The disclosed methods also provide tactile feedback, such as vibration or braking of the physical input device, to assist a user in executing desired operations.
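As a hedged illustration, the sketch below scales raw input motion by a speed-dependent response factor and then lets nearby elements with positive gravity pull the selector in (negative gravity pushes it away). The speed-based gain, the inverse-square pull, and all constants are assumptions.

```python
def move_selector(selector, device_delta, speed, elements, base=2.0):
    """selector: current (x, y); device_delta: raw input-device motion;
    speed: recent input speed; elements: dicts like
    {"pos": (x, y), "gravity": g}, where negative g means anti-gravity."""
    # Fast motion -> large response factor for crossing the screen quickly;
    # slow, deliberate motion -> small factor for fine-tuned control.
    factor = base * min(max(speed, 0.2), 3.0)
    x = selector[0] + factor * device_delta[0]
    y = selector[1] + factor * device_delta[1]
    # Gravity: nearby elements nudge the selector toward themselves
    # (or away, for anti-gravity), falling off with squared distance.
    for e in elements:
        dx, dy = e["pos"][0] - x, e["pos"][1] - y
        d2 = dx * dx + dy * dy + 1.0
        x += e["gravity"] * dx / d2
        y += e["gravity"] * dy / d2
    return x, y
```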
Abstract:
Special blend operations for wide area-of-view image generation utilizing a “floating auto exposure” scheme are described. Pixel values in the two images being stitched together are blended within a transition band around a “seam” identified in the overlap region between the images, after changes in exposure and/or color saturation are accounted for. In some embodiments, changes in exposure and/or color saturation are accounted for through the use of one or more exposure mapping curves, the selection and use of which are based, at least in part, on a determined “Exposure Ratio” value, i.e., the amount that the camera's exposure settings have deviated from their initial capture settings. In other embodiments, the Exposure Ratio value is also used to determine regions along the seam where alpha blending, Poisson blending, or a combination of the two should be used to blend the transitional areas on each side of the seam.
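A simplified sketch of the seam blend after exposure compensation: a single linear gain stands in for the exposure mapping curves, and a plain alpha ramp is used inside the transition band (the Poisson option is omitted). The names and band geometry are illustrative assumptions.

```python
import numpy as np

def blend_across_seam(left, right, seam_col, band, exposure_ratio):
    """left, right: (H, W) float images registered in the overlap region.
    exposure_ratio: how far the camera's exposure has drifted from its
    initial capture settings while shooting the right image."""
    right_comp = right / exposure_ratio          # undo the exposure drift
    # Start from a hard cut at the seam, then feather the transition band.
    out = np.where(np.arange(left.shape[1]) < seam_col, left, right_comp)
    for c in range(seam_col - band, seam_col + band):
        a = (c - (seam_col - band)) / (2.0 * band)   # 0 -> left, 1 -> right
        out[:, c] = (1 - a) * left[:, c] + a * right_comp[:, c]
    return out
```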
Abstract:
The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. With various inertial cues from the accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame of reference. In addition to, or in place of, calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, or laser. With knowledge of the 3D frame of reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created, and interacted with, by the user.
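As an illustration of the final step, the sketch below uses similar triangles to shift a rendered object according to an estimated head position so that it appears anchored in virtual space behind the display. The pinhole geometry and all names and constants are assumptions rather than the patent's rendering method.

```python
def parallax_shift(point_depth_mm, head_offset_mm, view_distance_mm=300.0):
    """point_depth_mm: virtual depth of the object behind the display.
    head_offset_mm: (x, y) head displacement from the display center.
    Returns the (dx, dy) on-screen shift for the rendered object."""
    # The ray from the eye to a world-fixed point behind the screen crosses
    # the display plane at a fraction depth / (viewing distance + depth) of
    # the head's offset, so the drawn object tracks head motion by that factor.
    scale = point_depth_mm / (view_distance_mm + point_depth_mm)
    return head_offset_mm[0] * scale, head_offset_mm[1] * scale
```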