Abstract:
A barcode decoding system and method are disclosed that use a data-driven classifier for transforming a potentially degraded barcode signal into a digit sequence. The disclosed implementations are robust to signal degradation through incorporation of a noise model into the classifier construction phase. The run-time computational cost is low, allowing for efficient implementations on portable devices. Implementations are disclosed for intelligent preview scaling, barcode-aware autofocus augmentation and multi-scale signal feature extraction.
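As a rough illustration of the data-driven classification idea described above, the sketch below trains per-digit templates on synthetically degraded scanline segments (the noise model is folded into classifier construction) and decodes by nearest-centroid matching. The digit patterns are the standard EAN-13 left-hand (odd parity) codes, but the nearest-centroid model, the blur/noise parameters, and all function names are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch (not the patented classifier): build digit templates from
# synthetically degraded barcode scanline segments, then decode by
# nearest-centroid matching.
import numpy as np

# EAN-13 left-hand (odd parity) 7-module patterns for digits 0-9.
DIGIT_PATTERNS = [
    [0,0,0,1,1,0,1], [0,0,1,1,0,0,1], [0,0,1,0,0,1,1], [0,1,1,1,1,0,1],
    [0,1,0,0,0,1,1], [0,1,1,0,0,0,1], [0,1,0,1,1,1,1], [0,1,1,1,0,1,1],
    [0,1,1,0,1,1,1], [0,0,0,1,0,1,1],
]

SAMPLES_PER_MODULE = 8  # assumed oversampling of the scanline

def render_digit(digit):
    """Render one digit's bar pattern as an idealized 1-D intensity signal."""
    return np.repeat(np.array(DIGIT_PATTERNS[digit], float), SAMPLES_PER_MODULE)

def degrade(signal, rng, blur=3, noise_sigma=0.2):
    """Assumed noise model: box blur (optics) plus Gaussian sensor noise."""
    kernel = np.ones(blur) / blur
    blurred = np.convolve(signal, kernel, mode="same")
    return blurred + rng.normal(0.0, noise_sigma, size=signal.shape)

def train_centroids(samples_per_digit=200, seed=0):
    """Average many degraded renderings per digit to build class templates."""
    rng = np.random.default_rng(seed)
    return np.stack([
        np.mean([degrade(render_digit(d), rng) for _ in range(samples_per_digit)], axis=0)
        for d in range(10)
    ])

def classify_segment(segment, centroids):
    """Assign a scanline segment to the nearest class template."""
    return int(np.argmin(np.linalg.norm(centroids - segment, axis=1)))

if __name__ == "__main__":
    centroids = train_centroids()
    rng = np.random.default_rng(1)
    noisy = degrade(render_digit(7), rng)
    print("decoded digit:", classify_segment(noisy, centroids))
```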
Abstract:
Methods, devices, and systems for continuous image capturing are described herein. In one embodiment, a method includes continuously capturing a sequence of images with an image capturing device. The method may further include storing a predetermined number of the sequence of images in a buffer. The method may further include receiving a user request to capture an image. In response to the user request, the method may further include automatically selecting one of the buffered images based on its exposure time. The sequence of images is captured prior to or concurrently with receiving the user request.
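A minimal sketch of the buffering and selection flow described above: frames captured before the shutter press go into a fixed-capacity ring buffer, and the shutter press picks a buffered frame by exposure time. The "shortest exposure wins" heuristic, class names, and fields are assumptions for illustration only.

```python
# Minimal sketch of continuous capture with frame selection on shutter press.
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: bytes            # stand-in for image data
    exposure_time_ms: float
    timestamp_ms: float

class ContinuousCaptureBuffer:
    def __init__(self, capacity=8):
        # Ring buffer holding only the most recent `capacity` frames.
        self.frames = deque(maxlen=capacity)

    def on_frame_captured(self, frame: Frame):
        # Called continuously by the capture pipeline, before any user request.
        self.frames.append(frame)

    def on_shutter_pressed(self) -> Frame:
        # Select a buffered frame based on exposure time; here a shorter
        # exposure is assumed to mean less motion blur.
        return min(self.frames, key=lambda f: f.exposure_time_ms)
```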
Abstract:
A user interface can have one or more spaces presented therein. A space is a grouping of one or more program windows, set apart from the windows of other application programs, such that only the windows of a single space's programs are visible when that space is active. A view can be generated of all spaces and their contents.
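A minimal sketch of the grouping behavior described above, assuming a simple manager that tracks the active space, toggles window visibility accordingly, and can produce an overview of all spaces; all class and method names are illustrative.

```python
# Minimal sketch of grouping program windows into spaces.
class Window:
    def __init__(self, title, program):
        self.title, self.program, self.visible = title, program, False

class Space:
    def __init__(self, name):
        self.name, self.windows = name, []

class SpaceManager:
    def __init__(self):
        self.spaces, self.active = [], None

    def activate(self, space):
        # Only windows belonging to the active space are visible.
        self.active = space
        for s in self.spaces:
            for w in s.windows:
                w.visible = (s is space)

    def overview(self):
        # A view of all spaces and their contents.
        return {s.name: [w.title for w in s.windows] for s in self.spaces}
```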
Abstract:
Disclosed is a system for producing images including techniques for reducing the memory and processing power required for such operations. The system provides techniques for programmatically representing a graphics problem. The system further provides techniques for reducing and optimizing graphics problems for rendering with consideration of the system resources, such as the availability of a compatible GPU.
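A minimal sketch of representing a graphics problem programmatically and reducing it before rendering, assuming a node graph, a single fusion rule that collapses chained color gains, and a backend choice keyed on GPU availability; none of these particulars come from the disclosure.

```python
# Minimal sketch: a graphics problem as a node graph, reduced before rendering.
class Node:
    def __init__(self, op, inputs=(), params=None):
        self.op, self.inputs, self.params = op, list(inputs), params or {}

def fuse_color_ops(root):
    """Collapse chains of per-pixel color ops (two gains) into one node."""
    root.inputs = [fuse_color_ops(i) for i in root.inputs]
    if (root.op == "gain" and len(root.inputs) == 1
            and root.inputs[0].op == "gain"):
        child = root.inputs[0]
        return Node("gain", child.inputs,
                    {"k": root.params["k"] * child.params["k"]})
    return root

def choose_renderer(gpu_available: bool):
    # Pick an execution path with consideration of system resources.
    return "gpu_fragment_program" if gpu_available else "cpu_scanline"

# Example: gain(0.5) applied after gain(2.0) folds into a single gain of 1.0.
graph = Node("gain", [Node("gain", [Node("source")], {"k": 2.0})], {"k": 0.5})
optimized = fuse_color_ops(graph)
print(optimized.op, optimized.params, "->", choose_renderer(gpu_available=True))
```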
Abstract:
A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or other types of objects for an application's user interface. The application commits state changes of the layers of the layer tree; it does not need to include explicit code for animating those changes. Instead, after a synchronization threshold has been met, the framework, which can define a set of predetermined animations based on motion, visibility, and transition, determines an animation for the change in state. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer, synchronized with the display. Portions of the render tree that change relative to prior versions can be tracked to improve resource management.
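A minimal sketch of the layer-tree/render-tree split described above, assuming a framework that diffs committed layer state against the previous render tree after a synchronization threshold and records a predetermined animation per kind of change; this is not the Core Animation API, and all names and the diffing rule are illustrative.

```python
# Minimal sketch of a layer tree (app-facing) vs. render tree (engine-facing).
import copy
import time

class Layer:
    def __init__(self, name, opacity=1.0, position=(0, 0)):
        self.name, self.opacity, self.position = name, opacity, position
        self.sublayers = []

class Framework:
    def __init__(self, root_layer, sync_threshold_s=1 / 60):
        self.layer_tree = root_layer                    # app-facing model tree
        self.render_tree = copy.deepcopy(root_layer)    # render-engine tree
        self.sync_threshold_s = sync_threshold_s
        self.last_commit = 0.0
        self.pending_animations = []

    def commit(self):
        # The application only commits state; it writes no animation code.
        now = time.monotonic()
        if now - self.last_commit < self.sync_threshold_s:
            return
        self.last_commit = now
        self._diff(self.layer_tree, self.render_tree)
        self.render_tree = copy.deepcopy(self.layer_tree)

    def _diff(self, new, old):
        # Pick a predetermined animation per kind of change (motion/visibility).
        if new.position != old.position:
            self.pending_animations.append((new.name, "move", old.position, new.position))
        if new.opacity != old.opacity:
            self.pending_animations.append((new.name, "fade", old.opacity, new.opacity))
        for n, o in zip(new.sublayers, old.sublayers):
            self._diff(n, o)

root = Layer("root")
fw = Framework(root)
root.opacity = 0.0          # the app changes layer state...
fw.commit()                 # ...and the framework infers a "fade" animation
print(fw.pending_animations)
```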
Abstract:
Systems, methods, and computer readable media to provide improved autofocus operations are described. In general, techniques are disclosed that show how to improve contrast-based autofocus operations by applying a novel threshold-and-select action to window-specific focus scores. More particularly, techniques disclosed herein may evaluate a multi-window autofocus area over a burst-collected group of images. For each captured image, focus scores for each window within an autofocus area may be collected, aggregated, and then consolidated to identify a single focus metric and its associated lens position for each window. The window-specific focus scores may then be reviewed and a “best” autofocus lens position selected using specified selection criteria. The criteria may be used to bias the selection toward either a front-of-plane (macro) or back-of-plane (infinity) focus position.
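A minimal sketch of a threshold-and-select pass over window-specific focus scores collected across a burst, assuming a gradient-based contrast metric, a relative threshold against the best-scoring window, and the convention that smaller lens positions mean closer (macro) focus; all of these are assumptions rather than the disclosed criteria.

```python
# Minimal sketch: per-window focus scoring over a burst, then threshold-and-select.
import numpy as np

def window_focus_score(window_pixels):
    """Contrast-style focus score: mean gradient magnitude of the window."""
    gy, gx = np.gradient(window_pixels.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def best_lens_position_per_window(burst):
    """burst: {lens_position: [window arrays]} -> per-window (score, lens_pos)."""
    n_windows = len(next(iter(burst.values())))
    best = [(-1.0, None)] * n_windows
    for lens_pos, windows in burst.items():
        for i, w in enumerate(windows):
            score = window_focus_score(w)
            if score > best[i][0]:
                best[i] = (score, lens_pos)
    return best

def select_af_position(per_window, rel_threshold=0.8, bias="macro"):
    """Keep windows whose peak score is near the global peak, then bias the
    choice toward the nearest (macro) or farthest (infinity) lens position."""
    top = max(s for s, _ in per_window)
    candidates = [p for s, p in per_window if s >= rel_threshold * top]
    return min(candidates) if bias == "macro" else max(candidates)
```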
Abstract:
Several methods and apparatuses for implementing automatic exposure mechanisms for image capturing devices are described. In one embodiment, an orientation detector located in the device determines orientation data for the device. The automatic exposure mechanism projects an orientation vector into an image plane of an image sensor. Next, the automatic exposure mechanism adjusts an initial position of a metering area used for automatic exposure towards a target position based on the projected orientation vector. The automatic exposure mechanism optionally dampens the adjustment of the metering area.
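A minimal sketch of orientation-driven metering adjustment, assuming the orientation vector is projected into the image plane by removing its component along the sensor normal, and the metering-area center is moved toward a target with a fixed damping factor; the projection convention, gains, and names are illustrative assumptions, not the disclosed mechanism.

```python
# Minimal sketch: project a device orientation vector into the image plane and
# use it to nudge the automatic-exposure metering area, with damping.
import numpy as np

def project_to_image_plane(orientation_vec, image_plane_normal=(0.0, 0.0, 1.0)):
    """Drop the component of the orientation vector along the sensor normal,
    leaving its 2-D footprint in the image plane."""
    v = np.asarray(orientation_vec, float)
    n = np.asarray(image_plane_normal, float)
    n /= np.linalg.norm(n)
    planar = v - np.dot(v, n) * n
    return planar[:2]

def adjust_metering_center(current_xy, projected_xy, damping=0.25, shift_gain=0.3):
    """Move the metering area's center toward a target offset derived from the
    projected orientation, damped so it does not jump from frame to frame."""
    current = np.asarray(current_xy, float)
    target = current + shift_gain * np.asarray(projected_xy, float)
    return current + damping * (target - current)

# Example: the device is tilted so "up" projects to (0.4, -0.1) in the image plane.
print(adjust_metering_center((0.5, 0.5), project_to_image_plane((0.4, -0.1, 0.9))))
```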