Abstract:
Systems provided herein include a learning environment and an agent. The learning environment includes an avatar and an object. A state signal corresponding to a state of the learning environment includes a location and orientation of the avatar and the object. The agent is adapted to receive the state signal, to issue an action capable of generating at least one change in the state of the learning environment, to produce a set of observations relevant to a task, to hypothesize a set of action models configured to explain the observations, and to vet the set of action models to identify a learned model for the task.
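The abstract does not specify how hypothesized action models are vetted against the observations; a minimal sketch of the hypothesize-and-vet loop, with hypothetical model names and a toy one-dimensional "move" action, might look like:

```python
# Hypothetical candidate action models: each maps (state, action) to a
# predicted next state. The names and dynamics are illustrative only.
candidates = {
    "step_plus_1": lambda state, action: state + 1,
    "step_plus_2": lambda state, action: state + 2,
}

# Observations gathered while performing the task: (state, action, next_state).
observations = [(0, "move", 1), (3, "move", 4), (7, "move", 8)]

def vet_action_models(candidates, observations):
    """Keep only the models whose predictions match every observation."""
    return [name for name, model in candidates.items()
            if all(model(s, a) == s_next for s, a, s_next in observations)]

learned = vet_action_models(candidates, observations)
```

Only the model consistent with all observed transitions survives vetting and becomes the learned model for the task.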
Abstract:
A novel technique for unsupervised feature selection is disclosed. The disclosed methods include automatically selecting a subset of the features of an image. Additionally, the selection of the subset of features may be incorporated into a congealing algorithm, such as a least-square-based congealing algorithm. By selecting a subset of the feature representation of an image, redundant and/or irrelevant features may be reduced or removed, and the efficiency and accuracy of least-square-based congealing may be improved.
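The abstract does not name the selection criterion; a minimal sketch using per-feature variance across an image stack as a stand-in unsupervised score (constant, irrelevant features score zero and are dropped before congealing) might look like:

```python
import numpy as np

def select_top_variance_features(images, k):
    """Pick the k pixel features with highest variance across the stack.

    images: (n_images, n_features) array of flattened images. Variance is a
    stand-in unsupervised score; a real system may use a different criterion.
    """
    variances = images.var(axis=0)
    return np.sort(np.argsort(variances)[-k:])

rng = np.random.default_rng(0)
stack = rng.normal(size=(6, 5))     # six flattened five-pixel "images"
stack[:, 2] = 1.0                   # feature 2 is constant -> irrelevant
selected = select_top_variance_features(stack, 3)
```

The constant feature is excluded from the selected subset, shrinking the representation the congealing step must optimize over.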
Abstract:
There is provided a discriminative framework for image alignment. Image alignment is generally the process of moving and deforming a template to minimize the distance between the template and an image. There are essentially three elements to image alignment, namely template representation, distance metric, and optimization method. For template representation, given a face dataset with ground truth landmarks, a boosting-based classifier is trained that is able to learn the decision boundary between two classes—the warped images from ground truth landmarks (e.g., positive class) and those from perturbed landmarks (e.g., negative class). A set of trained weak classifiers based on Haar-like rectangular features forms a boosted appearance model. The distance metric is the score of the strong classifier, and image alignment is the process of optimizing (e.g., maximizing) the classification score. On the generic face alignment problem, the proposed framework greatly improves the robustness, accuracy, and efficiency of alignment.
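The score-maximization idea can be sketched on a toy 1-D signal, assuming a single hand-set weak classifier (real weak classifiers, thresholds, and weights would come from boosting on the face dataset):

```python
import numpy as np

def haar_feature(patch):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    half = len(patch) // 2
    return patch[:half].sum() - patch[half:].sum()

def strong_score(patch, weak_classifiers):
    """Boosted appearance score: weighted vote of thresholded weak features."""
    return sum(alpha * (1.0 if haar_feature(patch) > thresh else -1.0)
               for alpha, thresh in weak_classifiers)

def align(image, width, weak_classifiers):
    """Alignment as optimization: pick the placement maximizing the score."""
    scores = [strong_score(image[x:x + width], weak_classifiers)
              for x in range(len(image) - width + 1)]
    return int(np.argmax(scores))

# A 1-D "image" with a bright-left/dark-right pattern starting at index 3,
# and one weak classifier (alpha=1.0, threshold=1.5) that fires on it.
image = np.array([0, 0, 0, 1, 1, 0, 0, 0, 0], dtype=float)
best_x = align(image, width=4, weak_classifiers=[(1.0, 1.5)])
```

Here the optimizer is exhaustive search over placements; the framework described above would instead optimize warp parameters, but the objective—maximizing the classification score—is the same.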
Abstract:
Embodiments of the invention include a system and a method for determining whether a person is carrying concealed contraband, such as an improvised explosives device or other weapon. The system includes a people tracking video subsystem, a people tracking decisioning subsystem, a concealed contraband detection aiming subsystem, and a concealed contraband detection decisioning subsystem.
Abstract:
The present invention aims at providing a method for detecting a signal structure from a moving vehicle. The method includes capturing an image from a camera mounted on the moving vehicle. The method further includes restricting the search space by predefining candidate regions in the image, extracting a set of features of the image within each candidate region, and detecting the signal structure accordingly.
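The candidate-region restriction can be sketched as follows, assuming hypothetical box coordinates and using mean gradient magnitude as a stand-in feature (the patent does not specify the features):

```python
import numpy as np

def detect_in_candidate_regions(image, candidate_boxes, threshold):
    """Score only predefined candidate boxes instead of scanning the image.

    candidate_boxes: list of (row0, row1, col0, col1) half-open boxes.
    The feature is a stand-in: mean gradient magnitude as edge density.
    """
    detections = []
    for box in candidate_boxes:
        r0, r1, c0, c1 = box
        region = image[r0:r1, c0:c1].astype(float)
        gy, gx = np.gradient(region)
        score = float(np.hypot(gx, gy).mean())
        if score > threshold:
            detections.append((box, score))
    return detections

image = np.zeros((20, 20))
image[2:6, 2:6] = 1.0              # an edge-rich structure in the upper band
boxes = [(0, 10, 0, 10), (10, 20, 10, 20)]
hits = detect_in_candidate_regions(image, boxes, threshold=0.01)
```

Only regions that actually contain structure fire, and feature extraction never touches pixels outside the predefined candidate regions.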
Abstract:
A method for calibrating a projective camera is provided. The method includes acquiring information by detecting at least one object on a substantially flat ground plane within a field of view. A projective camera calibration is performed. A measurement uncertainty is considered to yield a plurality of camera parameters from the projective camera calibration.
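One common way to fold measurement uncertainty into a calibration fit is weighted least squares; the sketch below assumes a simplified model in which the pixel height of a constant-height object on a flat ground plane is roughly linear in the image row of its foot point (the patent's actual parameterization is not given in the abstract):

```python
import numpy as np

def weighted_height_fit(foot_rows, pixel_heights, variances):
    """Weighted least squares for h = a * row + b.

    Per-detection variances down-weight uncertain measurements, so noisy
    detections pull less on the recovered camera-related parameters.
    """
    rows = np.asarray(foot_rows, dtype=float)
    heights = np.asarray(pixel_heights, dtype=float)
    sw = 1.0 / np.sqrt(np.asarray(variances, dtype=float))
    A = np.stack([rows, np.ones_like(rows)], axis=1)
    params, *_ = np.linalg.lstsq(A * sw[:, None], heights * sw, rcond=None)
    return params  # (slope a, intercept b)

# Noise-free detections on the line h = 0.5 * row + 10, unequal uncertainties.
rows = [100, 200, 300, 400]
heights = [60.0, 110.0, 160.0, 210.0]
a, b = weighted_height_fit(rows, heights, variances=[1.0, 4.0, 1.0, 9.0])
```

Scaling each equation by the inverse standard deviation before the solve is the standard reduction of weighted to ordinary least squares.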
Abstract:
A method and system for measuring disease relevant tissue changes for use in quantifying, diagnosing and predicting a given disease are provided. The method comprises applying at least one segmenting process to the image data to generate a plurality of segmented regions of interest, extracting features relevant for a given disease from the segmented regions to generate extracted features, and mathematically modeling the features for use in one of diagnosing, quantifying, and predicting changes indicative of the given disease. The system comprises an imaging device for acquiring the image data and an image processor configured to segment, extract and mathematically model disease relevant features.
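The segment-then-extract pipeline can be sketched minimally, assuming simple threshold segmentation and two illustrative region features (the abstract does not name the segmentation process or the features):

```python
import numpy as np

def segment_and_extract(image, threshold):
    """Threshold segmentation followed by simple region features.

    Returns the binary mask and a feature dictionary; a real pipeline would
    feed such features into a statistical model of disease progression.
    """
    mask = image > threshold
    area = int(mask.sum())
    mean_intensity = float(image[mask].mean()) if area else 0.0
    return mask, {"area": area, "mean_intensity": mean_intensity}

image = np.zeros((16, 16))
image[4:8, 4:8] = 3.0              # a hypothetical bright lesion-like region
mask, features = segment_and_extract(image, threshold=1.0)
```

Tracking such features over serial scans is what would let a downstream mathematical model quantify change over time.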
Abstract:
A system and method for non-contact measurement of a complex part is provided. The method comprises acquiring an image of the complex part, including laser lines imposed on the complex part, using at least one imaging device; determining a span of interest of the complex part, the span of interest being representative of at least a portion of the complex part and comprising information related to a plurality of dimensions of a surface of the complex part; extracting information corresponding to the laser lines from the span of interest to reduce computation; and further extracting a plurality of unique points from the information corresponding to the laser lines, the plurality of unique points representing the plurality of dimensions of the surface. The plurality of unique points is used for reconstructing a three-dimensional (3D) representation of the surface of the complex part.
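Extraction of unique points from a laser line is commonly done as one sub-pixel centroid per image column; the sketch below assumes that approach (the patent's exact extraction method is not stated in the abstract):

```python
import numpy as np

def extract_laser_points(image, intensity_threshold):
    """Sub-pixel laser-line points: one intensity-weighted centroid per column.

    Columns whose peak response falls below the threshold are skipped, which
    limits computation to the span actually covered by the laser lines.
    """
    points = []
    rows = np.arange(image.shape[0], dtype=float)
    for col in range(image.shape[1]):
        column = image[:, col].astype(float)
        if column.max() < intensity_threshold:
            continue
        weights = np.where(column >= intensity_threshold, column, 0.0)
        points.append((float(rows @ weights / weights.sum()), col))
    return points

image = np.zeros((10, 5))
image[4, :] = 10.0                 # a horizontal laser stripe at row 4
points = extract_laser_points(image, intensity_threshold=1.0)
```

Each (row, column) point, combined with the known laser-camera geometry, triangulates one surface point for the 3D reconstruction.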
Abstract:
Homography-based imaging apparatus and method are provided. The apparatus may include a processor (44) coupled to process respective sequences of sky images respectively acquired by physical image acquisition devices (181, 182) at respective spaced apart locations (e.g., P1, P2). The processor may include an image alignment module (32) configured to spatially relate respective views of at least one object (e.g., clouds, aerial vehicles) visible in the respective sequences of the sky images based on homography (42) of at least one astronomical image acquired at each spaced apart location. The astronomical image may include a number of spatial references corresponding to respective astronomical body positions located practically at infinity relative to a respective distance between the spaced apart locations. Further views (synthetic views) may be generated at selectable new locations (e.g., P3, P4, P5, P6), without actually having any physical image acquisition devices at such selectable locations.
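Because astronomical references sit practically at infinity, the two views of them differ only by a homography, which can be fit from point correspondences by the standard direct linear transform; a sketch with hypothetical star positions:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: 3x3 homography from >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map pixel coordinates through the homography (homogeneous divide)."""
    pts_h = np.column_stack([np.asarray(pts, dtype=float), np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical star positions in two views, related here by a pure image
# shift; real astronomical correspondences would define a general homography.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(x + 5.0, y - 3.0) for x, y in src]
H = fit_homography(src, dst)
mapped = apply_homography(H, [(0.5, 0.5)])
```

Once fitted, the same homography spatially relates the two devices' views of nearer objects such as clouds, which is the alignment the module (32) performs.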