Abstract:
A hybrid camera sensor offers efficient image sensing by periodically evaluating pixels at a certain frame rate and disregarding values of pixels that have changed less than a threshold amount. Because only a fraction of pixels in the camera sensor may change values from frame to frame, this can result in faster readout times, which can optionally enable increased frame rates.
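The thresholded-readout idea above can be modeled in software. The sketch below is an illustrative NumPy model, not the patented circuit; the function name, threshold value, and sparse output format are assumptions for demonstration only.

```python
import numpy as np

def hybrid_readout(prev_frame, curr_frame, threshold=8):
    """Return only the pixels whose value changed by at least `threshold`
    since the previous frame, as (row, col, value) tuples. Pixels below
    the threshold are disregarded, mimicking the reduced readout."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    rows, cols = np.nonzero(diff >= threshold)
    return [(r, c, int(curr_frame[r, c])) for r, c in zip(rows, cols)]

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 50   # a significant change: read out
curr[3, 3] = 3    # below threshold: disregarded
changed = hybrid_readout(prev, curr)   # [(1, 2, 50)]
```

Because only the changed pixels cross the readout interface, a mostly static scene produces a short list, which is what enables the faster readout times described above.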
Abstract:
Techniques describe an apparatus and method for generating local binary pattern (LBP) labels based on sensor readings from sensor elements. The sensor apparatus may include a sensor element array that includes a plurality of sensor elements, and may also include in-pixel circuitry coupled directly to the sensor elements, peripheral circuitry coupled to the sensor element array and configured to receive output from one or more of the sensor elements, and digital circuitry. The in-pixel circuitry and/or peripheral circuitry may include an analog and/or digital computation structure configured to generate an LBP label for each of the sensor element readings by concurrently comparing the sensor readings for the referenced sensor element with the sensor readings of four or fewer neighboring sensor elements, and reusing previously, or subsequently, generated comparisons for the remaining neighboring sensor elements.
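The comparison-reuse idea can be sketched in software: only four of the eight neighbor comparisons are computed directly, and each remaining bit is recovered as the complement of the comparison already stored at the opposite pixel. This is an illustrative sketch under that reading of the abstract, not the patented circuit; ties resolve asymmetrically under this scheme.

```python
import numpy as np

def lbp_labels(img):
    """8-bit LBP labels for interior pixels (software sketch).

    Only four neighbor comparisons (E, SE, S, SW) are computed directly;
    the other four bits (W, NW, N, NE) reuse the complement of the
    comparison stored at the opposite neighbor, since comparing A with B
    determines the comparison of B with A."""
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    offsets = {"E": (0, 1), "SE": (1, 1), "S": (1, 0), "SW": (1, -1)}
    fwd = {}
    for name, (dr, dc) in offsets.items():
        plane = np.zeros((h, w), dtype=np.uint8)
        for r in range(h):
            for c in range(w):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    plane[r, c] = img[nr, nc] > img[r, c]
        fwd[name] = plane

    order = ["E", "SE", "S", "SW", "W", "NW", "N", "NE"]
    opposite = {"W": "E", "NW": "SE", "N": "S", "NE": "SW"}
    labels = np.zeros((h, w), dtype=np.uint8)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            bits = 0
            for i, name in enumerate(order):
                if name in fwd:                        # direct comparison
                    bit = int(fwd[name][r, c])
                else:                                  # reuse the opposite
                    dr, dc = offsets[opposite[name]]   # pixel's stored result
                    bit = 1 - int(fwd[opposite[name]][r - dr, c - dc])
                bits |= bit << i
            labels[r, c] = bits
    return labels
```

Halving the direct comparisons this way is what makes a four-comparator (or fewer) structure per sensor element sufficient for a full eight-bit label.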
Abstract:
Techniques describe computing computer vision (CV) features based on sensor readings from a sensor and detecting macro-features based on the CV features. The sensor may include a sensor element array that includes a plurality of sensor elements. The sensor may also include in-pixel circuitry coupled to the sensor elements, peripheral circuitry and/or a dedicated microprocessor coupled to the sensor element array. The in-pixel circuitry, the peripheral circuitry or the dedicated microprocessor may include computation structures configured to perform analog or digital operations representative of a multi-pixel computation for a sensor element (or block of sensor elements), based on sensor readings generated by neighboring sensor elements in proximity to the sensor element, and to generate CV features. The dedicated microprocessor may process the CV features and detect macro-features. Furthermore, in certain embodiments, the dedicated microprocessor may be coupled to a second microprocessor through a wired or wireless interface.
Abstract:
Methods, systems, computer-readable storage media, and apparatuses for efficiently computing a computer vision operation are presented. In certain aspects, techniques are disclosed for receiving values from one or more pixels of a pixel array and representing those values for facilitating computer vision operations, such as Haar-like feature computations. In one implementation, the pixel values are represented as a hierarchy of computations; each level above the lowest level of the hierarchy comprises one or more values representing a computation of the values from the level below. The computation can be a simple sum of the values from the lower level of the hierarchy, a weighted sum of those values, an average of those values, a local binary pattern based on those values, a histogram of oriented gradients based on those values, or another computer vision feature computation based on those values. In such an implementation, the computer vision operations, such as Haar-like feature computations, may be performed using the hierarchy of computations.
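A minimal software sketch of such a hierarchy, assuming the simple-sum variant: each level stores 2x2 block sums of the level below, so a two-rectangle Haar-like feature can be read from two coarse cells instead of re-summing raw pixels. Function names, the 2x2 branching factor, and the feature layout are assumptions for illustration.

```python
import numpy as np

def build_sum_pyramid(img, levels):
    """Hierarchy of computations: level 0 holds raw pixel values; each
    higher level holds the sum of 2x2 blocks of the level below.
    Dimensions must be divisible by 2 at every level."""
    pyramid = [np.asarray(img, dtype=int)]
    for _ in range(levels - 1):
        p = pyramid[-1]
        h, w = p.shape
        pyramid.append(p.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3)))
    return pyramid

def haar_two_rect(pyramid, level, r, c):
    """Two-rectangle Haar-like feature: sum of the block at (r, c) minus
    the sum of the block to its right, each read from one coarse cell."""
    p = pyramid[level]
    return int(p[r, c] - p[r, c + 1])

# A 4x4 frame whose left half is bright (1) and right half dark (0):
img = np.zeros((4, 4), dtype=int)
img[:, :2] = 1
pyr = build_sum_pyramid(img, levels=2)
feature = haar_two_rect(pyr, 1, 0, 0)   # 4 - 0 = 4
```

The same pyramid structure works for averages or weighted sums by changing the per-block reduction.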
Abstract:
An apparatus includes a hardware sensor array including a plurality of pixels arranged along at least a first dimension and a second dimension of the array, each of the pixels capable of generating a sensor reading. A hardware scanning window array includes a plurality of storage elements arranged along at least a first dimension and a second dimension of the hardware scanning window array, each of the storage elements capable of storing a pixel value based on one or more sensor readings. Peripheral circuitry systematically transfers pixel values, based on sensor readings, into the hardware scanning window array, to cause different windows of pixel values to be stored in the hardware scanning window array at different times. Control logic, coupled to the hardware sensor array, the hardware scanning window array, and the peripheral circuitry, provides control signals to the peripheral circuitry to control the transfer of pixel values.
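The transfer behavior can be modeled as a generator: a fixed scanning-window buffer is refilled with a different window of pixel values at each step. This is a hypothetical software model of the hardware described; the window size, step, and names are assumptions.

```python
import numpy as np

def scan_windows(sensor, win=3, step=1):
    """Software model of the peripheral-circuitry transfer: the same
    scanning window array holds a different window of pixel values at
    each time step."""
    h, w = sensor.shape
    window = np.zeros((win, win), dtype=sensor.dtype)  # scanning window array
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            window[:, :] = sensor[r:r + win, c:c + win]  # transfer pixel values
            yield r, c, window.copy()

sensor = np.arange(16).reshape(4, 4)
positions = [(r, c) for r, c, _ in scan_windows(sensor)]
# a 3x3 window visits four positions over a 4x4 array
```

In hardware, the control logic plays the role of the loop indices here, sequencing which pixel values are shifted into the window at each step.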
Abstract:
Apparatuses, methods, and systems are presented for sensing scene-based occurrences. Such an apparatus may comprise a vision sensor system comprising a first processing unit and dedicated computer vision (CV) computation hardware configured to receive sensor data from at least one sensor array comprising a plurality of sensor pixels and capable of computing one or more CV features using readings from neighboring sensor pixels. The vision sensor system may be configured to send an event to be received by a second processing unit in response to processing of the one or more computed CV features by the first processing unit. The event may indicate possible presence of one or more irises within a scene.
Abstract:
Methods, systems, computer-readable media, and apparatuses for incremental object detection using a staged process and a band-pass feature extractor are presented. At each stage of the staged process, a different band of features from a plurality of bands of features in image data can be extracted using a dual-threshold local binary pattern operator, and compared with features of a target object within the band for a partial decision. The staged process exits if a rejection decision is made at any stage of the staged process. If no rejection decision is made in each stage of the staged process, the target object is detected. Features extracted at each stage may be from a different image for some applications.
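One plausible reading of the dual-threshold local binary pattern operator is sketched below: a neighbor contributes a 1-bit only when its difference from the center pixel falls inside a band bounded by the two thresholds, so each stage can extract a different band of features. The band semantics, threshold values, and function name are assumptions for illustration, not the patented operator.

```python
import numpy as np

def dual_threshold_lbp(img, r, c, t_low, t_high):
    """Dual-threshold LBP bit pattern at pixel (r, c): a neighbor sets
    its bit only when (neighbor - center) lies in [t_low, t_high),
    selecting one contrast band out of the full range."""
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]
    center = int(img[r, c])
    bits = 0
    for i, (dr, dc) in enumerate(neighbors):
        diff = int(img[r + dr, c + dc]) - center
        if t_low <= diff < t_high:
            bits |= 1 << i
    return bits

img = np.array([[10, 25, 10],
                [10, 10, 13],
                [10, 10, 10]])
# The +15 edge falls outside the [2, 8) band; only the +3 edge is kept.
label = dual_threshold_lbp(img, 1, 1, t_low=2, t_high=8)
```

A staged detector under this reading would run the operator once per stage with a different (t_low, t_high) pair and exit early on a rejection.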
Abstract:
Techniques describe an apparatus and method for generating computed results based on sensor readings for detecting features, such as edges and corners. The sensor apparatus may include a sensor element array that includes a plurality of sensor elements. The sensor elements may be arranged in a 2-dimensional array, such as columns and rows. The sensor elements may be capable of generating sensor readings based on environmental conditions. The sensor apparatus may include dedicated computer vision (CV) computation hardware in in-pixel circuitry, peripheral circuitry, or a dedicated microprocessor coupled to the sensor element array and configured to receive output from one or more of the sensor elements. The dedicated CV computation hardware may include configurable blocks for detecting features using CV operations, wherein the configurable blocks may be configured to switch between multiple CV operations, such as local binary pattern (LBP) and/or histogram of signed gradients (HSG) computer vision operations.
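A configurable block that switches between two per-pixel CV operations can be sketched as below. The HSG branch here merely records the signs of the horizontal and vertical gradients, a deliberate simplification of histogram-of-signed-gradients used only to show the mode switch; all names and the mode interface are assumptions.

```python
import numpy as np

def cv_block(img, r, c, mode="LBP"):
    """Sketch of a configurable CV computation block: the same block is
    switched between an LBP operation and a (simplified) signed-gradient
    operation on the pixel at (r, c)."""
    center = int(img[r, c])
    if mode == "LBP":
        neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
        bits = 0
        for i, (dr, dc) in enumerate(neighbors):
            bits |= (int(img[r + dr, c + dc]) >= center) << i
        return bits
    elif mode == "HSG":
        # signs of the central-difference gradients, as a stand-in for
        # the full histogram-of-signed-gradients computation
        gx = int(img[r, c + 1]) - int(img[r, c - 1])
        gy = int(img[r + 1, c]) - int(img[r - 1, c])
        return (int(np.sign(gx)), int(np.sign(gy)))

img = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
lbp = cv_block(img, 1, 1, "LBP")
hsg = cv_block(img, 1, 1, "HSG")
```

In hardware, the same comparator and adder resources would be re-routed by configuration bits rather than by a Python branch.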
Abstract:
Techniques disclosed herein utilize a vision sensor that integrates a special-purpose camera with dedicated computer vision (CV) computation hardware and a dedicated low-power microprocessor for the purposes of detecting, tracking, recognizing, and/or analyzing subjects, objects, and scenes in the view of the camera. The vision sensor processes the information retrieved from the camera using the included low-power microprocessor and sends “events” (or indications that one or more reference occurrences have occurred, and, possibly, associated data) to the main processor only when needed or as defined and configured by the application. This allows the general-purpose microprocessor (which is typically relatively high-speed and high-power to support a variety of applications) to stay in a low-power mode (e.g., a sleep mode) most of the time, becoming active only when events are received from the vision sensor.