Abstract:
The mass of an object may be estimated based on intersection points of a representation of a surface in an image space with cubes defining the image space, the surface representing a surface of an object. The representation may be, for example, based on marching cubes. The mass may be estimated by estimating a mass contribution of a first set of cubes contained entirely within the representation of the surface, estimating a mass contribution of a second set of cubes having intersection points with the representation of the surface, and summing the estimated mass contribution of the first set of cubes and the estimated mass contribution of the second set of cubes. The object may be segmented from other portions of an image prior to estimating the mass of the object.
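A minimal sketch of the two-part summation, assuming a per-voxel density grid and an inside/outside mask derived from the segmented surface; the function estimate_mass and the corner-fraction approximation for intersected cubes are illustrative, not the patented method:

```python
import numpy as np

def estimate_mass(density, inside, cell_volume):
    """Illustrative sketch: sum the mass of cubes fully inside the surface
    plus an approximate contribution from cubes the surface intersects.

    density     -- per-voxel density values (3D array)
    inside      -- boolean 3D array marking voxels inside the segmented surface
    cell_volume -- volume of one cube of the image space
    """
    mass = 0.0
    nx, ny, nz = inside.shape
    for x in range(nx - 1):
        for y in range(ny - 1):
            for z in range(nz - 1):
                corners = inside[x:x + 2, y:y + 2, z:z + 2]
                rho = density[x:x + 2, y:y + 2, z:z + 2].mean()
                if corners.all():
                    # First set: cube contained entirely within the surface.
                    mass += rho * cell_volume
                elif corners.any():
                    # Second set: cube intersected by the surface; approximate
                    # the enclosed fraction from the corners (a fuller
                    # implementation would use the marching-cubes
                    # intersection points instead).
                    mass += rho * cell_volume * corners.mean()
    return mass
```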
Abstract:
In an embodiment, digital video frames in a stream are subjected to a feature-extraction method that includes: extracting from the video frames respective sequences of keypoint/descriptor pairs, limiting the number of pairs extracted for each frame to a threshold value; sending the extracted sequences from an extractor module to a server for processing at a bitrate value that varies over time; receiving that time-varying bitrate value at the extractor as a target bitrate for extraction; and limiting the number of pairs extracted by the extractor to a threshold value that varies over time as a function of the target bitrate.
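A sketch of the extractor-side limiting, using OpenCV's ORB as a stand-in for whatever detector/descriptor the extractor module actually implements; the helper descriptors_per_frame and the bits_per_pair estimate are assumptions:

```python
import cv2

def descriptors_per_frame(target_bitrate_bps, frame_rate, bits_per_pair):
    """Hypothetical helper: convert the time-varying target bitrate received
    from the server into a per-frame cap on keypoint/descriptor pairs."""
    return max(1, int(target_bitrate_bps / (frame_rate * bits_per_pair)))

def extract_limited(frame, max_pairs):
    """Extract at most max_pairs keypoint/descriptor pairs from one frame."""
    orb = cv2.ORB_create(nfeatures=max_pairs)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```

For example, with an ORB-like descriptor of 256 bits plus some coordinate overhead per pair, a lower target bitrate from the server directly shrinks max_pairs for subsequent frames.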
Abstract:
Digital image processing circuitry clusters a set of images into a set of first clusters of images and a set of unclustered images. The set of first clusters is merged, generating a set of second clusters of images. Each image in the set of unclustered images is assigned either to a cluster of the set of second clusters of images or to an outlier image cluster. The clustered images may be partitioned into subclusters based on detection of objects in the images.
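A rough sketch of that three-stage flow, using scikit-learn's DBSCAN over per-image feature vectors as a stand-in for the first clustering step; all thresholds and the feature representation are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_images(features, eps=0.5, merge_thresh=1.0, assign_thresh=1.5):
    """Illustrative pipeline: first clusters, merged second clusters,
    then assignment of unclustered images or relegation to an outlier set."""
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(features)
    clusters = {k: np.where(labels == k)[0] for k in set(labels) if k != -1}
    unclustered = np.where(labels == -1)[0]

    # Merge first clusters whose centroids are closer than merge_thresh.
    centroids = {k: features[idx].mean(axis=0) for k, idx in clusters.items()}
    merged = []
    for k, c in centroids.items():
        for group in merged:
            if np.linalg.norm(c - group["centroid"]) < merge_thresh:
                group["members"] = np.concatenate([group["members"], clusters[k]])
                group["centroid"] = features[group["members"]].mean(axis=0)
                break
        else:
            merged.append({"centroid": c, "members": clusters[k]})

    # Assign each unclustered image to the nearest second cluster,
    # or to the outlier cluster if no centroid is close enough.
    outliers = []
    for i in unclustered:
        dists = [np.linalg.norm(features[i] - g["centroid"]) for g in merged]
        if dists and min(dists) < assign_thresh:
            j = int(np.argmin(dists))
            merged[j]["members"] = np.append(merged[j]["members"], i)
        else:
            outliers.append(i)
    return merged, outliers
```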
Abstract:
An image processing system includes a first processor that acquires frames of image data. For each frame of data, the first processor generates a Gaussian pyramid for the frame, extracts histogram of oriented gradients (HOG) descriptors for each level of the Gaussian pyramid, compresses the HOG descriptors, and sends the compressed HOG descriptors. A second processor is coupled to the first processor and is configured to receive the compressed HOG descriptors, aggregate the compressed HOG descriptors into windows, compare data of each window to at least one stored model, and generate output based upon the comparison.
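A sketch of the per-frame work attributed to the first processor, using scikit-image's pyramid_gaussian and hog as stand-ins for the actual pyramid and descriptor implementations; compression, transmission, and the second processor's window aggregation are omitted:

```python
import numpy as np
from skimage.transform import pyramid_gaussian
from skimage.feature import hog

def frame_to_hog_levels(frame, num_levels=4):
    """Build a Gaussian pyramid from one grayscale frame and extract HOG
    descriptors for each level (illustrative parameter choices)."""
    descriptors = []
    for level in pyramid_gaussian(frame, max_layer=num_levels - 1, downscale=2):
        d = hog(level, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2), feature_vector=True)
        descriptors.append(d.astype(np.float32))
    return descriptors
```

Here frame is assumed to be a 2-D grayscale array; very small pyramid levels would need a lower num_levels or larger input frames.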
Abstract:
An embodiment of a visual search system includes at least one imaging device, each imaging device operable to capture a corresponding image, and a feature extraction device coupled to each imaging device, each feature extraction device operable to generate feature descriptors from the image received from the corresponding imaging device. A descriptor encoding device is coupled to each feature extraction device and is operable to generate compressed feature descriptors from the received feature descriptors. An application processor is coupled to each descriptor encoding device and is operable to process the received compressed feature descriptors to generate output information as a function of the processed compressed feature descriptors.
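A minimal dataflow sketch of one imaging chain, under the assumption that each stage can be modeled as a callable; the Camera and VisualSearchPipeline names and the run_once method are illustrative, not part of the described hardware:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Camera:
    capture: Callable[[], Any]                 # returns one captured image

@dataclass
class VisualSearchPipeline:
    """Illustrative chain: capture -> feature extraction ->
    descriptor encoding -> application processing."""
    cameras: List[Camera]
    extract: Callable[[Any], Any]              # image -> feature descriptors
    encode: Callable[[Any], bytes]             # descriptors -> compressed descriptors
    process: Callable[[List[bytes]], Any]      # compressed descriptors -> output info

    def run_once(self) -> Any:
        compressed = [self.encode(self.extract(cam.capture()))
                      for cam in self.cameras]
        return self.process(compressed)
```

The sketch mirrors the described partition of work: only compressed descriptors, not raw images, reach the application processor.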
Abstract:
A classification device receives sensor data from a set of sensors and generates, using a context classifier having a set of classifier model parameters, a set of raw predictions based on the received sensor data. Temporal filtering and heuristic filtering are applied to the raw predictions, producing filtered predictions. A prediction error is generated from the filtered predictions, and model parameters of the set of classifier model parameters are updated based on said prediction error. The classification device may be a wearable device.
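A sketch of that loop, assuming a linear classifier over sensor feature vectors, majority-vote temporal filtering, a simple "allowed contexts" heuristic, and a gradient-style update driven by the prediction error; ContextClassifier and its methods are illustrative:

```python
import numpy as np
from collections import deque

class ContextClassifier:
    """Illustrative context classifier with temporal and heuristic filtering
    and an on-line update of the classifier model parameters."""

    def __init__(self, n_features, n_classes, window=5, lr=0.01):
        self.W = np.zeros((n_classes, n_features))  # classifier model parameters
        self.window = deque(maxlen=window)
        self.lr = lr

    def raw_predict(self, x):
        scores = self.W @ x
        return int(np.argmax(scores)), scores

    def filtered_predict(self, x, allowed=None):
        raw, scores = self.raw_predict(x)
        # Temporal filtering: majority vote over the last few raw predictions.
        self.window.append(raw)
        counts = np.bincount(list(self.window))
        filtered = int(np.argmax(counts))
        # Heuristic filtering: suppress contexts the heuristics rule out.
        if allowed is not None and filtered not in allowed:
            filtered = raw if raw in allowed else next(iter(allowed))
        return filtered, scores

    def update(self, x, scores, true_label):
        # Prediction error drives a softmax-style gradient update of W.
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        target = np.zeros_like(probs)
        target[true_label] = 1.0
        error = probs - target
        self.W -= self.lr * np.outer(error, x)
```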
Abstract:
Methods, microprocessors, and systems are provided for implementing an artificial neural network. Data buffers in virtual memory are coupled to respective processing layers in the artificial neural network. An ordered visiting sequence of layers of the artificial neural network is obtained. A virtual memory allocation schedule is produced as a function of the ordered visiting sequence of layers of the artificial neural network, the schedule including a set of instructions for memory allocation and deallocation operations applicable to the data buffers. A physical memory configuration dataset is computed as a function of the virtual memory allocation schedule for the artificial neural network, the dataset including sizes and addresses of physical memory locations for the artificial neural network.
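A rough sketch of the two steps, assuming each layer owns one output buffer and that a buffer may be freed once its last consumer has run; allocation_schedule, physical_layout, and the first-fit placement policy are assumptions, not the claimed method:

```python
def allocation_schedule(layer_order, buffer_sizes, consumers):
    """Turn an ordered visiting sequence of layers into an alloc/free
    schedule for the layers' data buffers (illustrative).

    layer_order  -- list of layer names in visiting order
    buffer_sizes -- dict: layer name -> size of its output buffer (bytes)
    consumers    -- dict: layer name -> list of layers that read its output
    """
    schedule = []
    last_use = {layer: max(layer_order.index(c)
                           for c in consumers.get(layer, [layer]))
                for layer in layer_order}
    for step, layer in enumerate(layer_order):
        schedule.append(("alloc", layer, buffer_sizes[layer]))
        for other in layer_order[:step + 1]:
            if last_use[other] == step:
                schedule.append(("free", other))
    return schedule

def physical_layout(schedule):
    """Greedy first-fit assignment of physical addresses from the schedule;
    returns {buffer: (address, size)} and the total memory needed."""
    free, layout, top = [], {}, 0
    for op in schedule:
        if op[0] == "alloc":
            _, name, size = op
            slot = next((s for s in free if s[1] >= size), None)
            if slot:
                free.remove(slot)
                layout[name] = (slot[0], size)
            else:
                layout[name] = (top, size)
                top += size
        else:
            addr, size = layout[op[1]]
            free.append((addr, size))
    return layout, top
```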
Abstract:
Apparatus and methods to unwarp at least portions of distorted, electronically-captured images are described. Keypoints, instead of an entire image, may be unwarped and used in various machine-vision algorithms, such as object recognition, image matching, and 3D reconstruction algorithms. When using unwarped keypoints, the machine-vision algorithms may perform reliably irrespective of distortions that may be introduced by one or more image capture systems.
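A minimal sketch, assuming a calibrated pinhole-plus-distortion camera model and OpenCV's point-wise undistortion; the function name unwarp_keypoints is hypothetical:

```python
import numpy as np
import cv2

def unwarp_keypoints(keypoints, camera_matrix, dist_coeffs):
    """Instead of unwarping the whole image, undistort only the detected
    keypoint coordinates so downstream matching, recognition, or 3D
    reconstruction sees distortion-free positions.

    keypoints     -- list of cv2.KeyPoint detected in the distorted image
    camera_matrix -- 3x3 intrinsic matrix of the capturing camera
    dist_coeffs   -- lens distortion coefficients (e.g., k1, k2, p1, p2, k3)
    """
    pts = np.array([kp.pt for kp in keypoints],
                   dtype=np.float32).reshape(-1, 1, 2)
    # Passing P=camera_matrix keeps the result in pixel coordinates rather
    # than normalized camera coordinates.
    undistorted = cv2.undistortPoints(pts, camera_matrix, dist_coeffs,
                                      P=camera_matrix)
    return undistorted.reshape(-1, 2)
```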