Abstract:
In one embodiment, an image processor is configured to obtain a plurality of phase images for each of first and second depth frames. For each of a plurality of pixels of a given one of the phase images of the first depth frame, the image processor determines an amount of movement of a point of an imaged scene between the pixel of the given phase image and a pixel of a corresponding phase image of the second depth frame, and adjusts pixel values of respective other phase images of the first depth frame based on the determined amount of movement. A motion compensated first depth image is generated utilizing the given phase image and the adjusted other phase images of the first depth frame. Movement of a point of the imaged scene is determined, for example, between pixels of respective n-th phase images of the first and second depth frames.
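A minimal sketch of the general idea, not the patented implementation: estimate per-block motion between the n-th phase images of two depth frames, shift the remaining phase images of the first frame accordingly, and then compute depth with the standard four-phase time-of-flight formula. All function names, the block-matching scheme, and the modulation frequency are assumptions for illustration.

```python
import numpy as np

def estimate_motion(phase_a, phase_b, block=8, search=4):
    """Block-matching estimate of integer (dy, dx) motion between the n-th
    phase image of the first frame (phase_a) and of the second frame (phase_b)."""
    a = np.asarray(phase_a, dtype=float)
    b = np.asarray(phase_b, dtype=float)
    h, w = a.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = a[y:y + block, x:x + block]
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        err = np.sum((ref - b[yy:yy + block, xx:xx + block]) ** 2)
                        if err < best:
                            best, best_dy, best_dx = err, dy, dx
            flow[by, bx] = (best_dy, best_dx)
    return flow

def warp_phase(phase, flow, block=8):
    """Shift each block of another phase image of the first frame by the motion
    estimated above, so the phase images are aligned before depth is computed."""
    p = np.asarray(phase, dtype=float)
    out = p.copy()
    h, w = p.shape
    for by in range(flow.shape[0]):
        for bx in range(flow.shape[1]):
            dy, dx = flow[by, bx]
            y, x = by * block, bx * block
            yy, xx = y + dy, x + dx
            if 0 <= yy <= h - block and 0 <= xx <= w - block:
                out[y:y + block, x:x + block] = p[yy:yy + block, xx:xx + block]
    return out

def depth_from_phases(p0, p90, p180, p270, mod_freq_hz=20e6, c=3.0e8):
    """Standard four-phase ToF conversion of phase offset to distance
    (phase images assumed to be float arrays)."""
    phase = np.mod(np.arctan2(p270 - p90, p0 - p180), 2 * np.pi)
    return c * phase / (4 * np.pi * mod_freq_hz)
```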
Abstract:
A method for extracting keyframes from a sequence of frames for a computer vision application using structure from motion, the keyframes being a subset of representative frames from the complete sequence of frames, and an apparatus configured to perform the method are described. A subset selector (22) selects (10), from the already available keyframes, a subset of keyframes that closely match the current camera position. A determination unit (23) then determines (11) whether the current frame should be included in a bundle adjustment keyframe set and/or a triangulation keyframe set.
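A hedged sketch of the two-step idea described above: pick the keyframes whose camera centers are nearest the current camera position, then decide set membership from the baseline to the closest selected keyframe. The thresholds and the baseline-based criteria are illustrative assumptions, not the patented rules.

```python
import numpy as np

def select_nearby_keyframes(keyframe_positions, current_position, k=5):
    """Return indices of the k keyframes whose camera centers are closest
    to the current camera position."""
    d = np.linalg.norm(np.asarray(keyframe_positions) - np.asarray(current_position), axis=1)
    return np.argsort(d)[:k]

def classify_current_frame(current_position, nearby_positions,
                           min_baseline=0.05, max_baseline=0.5):
    """Decide membership from the baseline to the nearest selected keyframe:
    a sufficient baseline helps triangulation, a moderate one suits bundle adjustment."""
    baselines = np.linalg.norm(np.asarray(nearby_positions) - np.asarray(current_position), axis=1)
    nearest = baselines.min()
    add_to_triangulation = nearest >= min_baseline
    add_to_bundle_adjustment = min_baseline <= nearest <= max_baseline
    return add_to_bundle_adjustment, add_to_triangulation
```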
Abstract:
A system and method for tracking association of two or more objects over time, according to various embodiments, is configured to determine the association based at least in part on an image. The system may be configured to capture the image, identify two or more objects of interest within the image, determine whether the two or more objects are associated in the image, and store image association data for the two or more objects. In various embodiments, the system is configured to create a timeline of object association over time for display to a user.
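An illustrative sketch only: one simple way to decide whether two detected objects are "associated" in an image (spatial proximity of their bounding boxes) and to accumulate timestamped association records from which a timeline can be built. The proximity rule, detection format, and record layout are assumptions for the example.

```python
from dataclasses import dataclass
import time

@dataclass
class Detection:
    label: str
    box: tuple  # (x_min, y_min, x_max, y_max) in pixels

def are_associated(a: Detection, b: Detection, max_gap: float = 20.0) -> bool:
    """Treat two objects as associated when their boxes overlap or the gap
    between them is below a pixel threshold."""
    gap_x = max(a.box[0] - b.box[2], b.box[0] - a.box[2], 0)
    gap_y = max(a.box[1] - b.box[3], b.box[1] - a.box[3], 0)
    return max(gap_x, gap_y) <= max_gap

def record_associations(detections, store):
    """Append timestamped association records so a timeline of which objects
    appeared together can be assembled later."""
    now = time.time()
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            if are_associated(detections[i], detections[j]):
                store.append({"time": now,
                              "objects": (detections[i].label, detections[j].label)})
```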
Abstract:
The present invention relates to computer capture of object motion. More specifically, embodiments of the present invention relate to capturing the facial movement or performance of an actor. Embodiments of the present invention provide a head-mounted camera system that allows the movements of an actor's face to be captured separately from, but simultaneously with, the movements of the actor's body. In some embodiments of the present invention, a method of motion capture of an actor's performance is provided. A self-contained system is provided for recording the data; it is free of tethers or other hard-wiring and is remotely operated by a motion-capture team without any intervention by the actor wearing the device. Embodiments of the present invention also provide a method of validating that usable data is being acquired and recorded by the remote system.
Abstract:
Techniques described herein determine a center of mass state vector based on a body model. The body model may be formed by analyzing a depth image of a user who is performing some motion. The center of mass state vector may include, for example, center-of-mass position, center-of-mass velocity, center-of-mass acceleration, orientation, angular velocity, angular acceleration, inertia tensor, and angular momentum. A center of mass state vector may be determined for an individual body part or for the body as a whole. The center of mass state vector(s) may be used to analyze the user's motion.
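A minimal sketch, assuming a body model given as per-part masses, positions, and velocities: compute a composite center-of-mass state (position, velocity, and angular momentum about the center of mass). The field names and body-model format are assumptions; the full state vector described above also carries acceleration, orientation, angular velocity, inertia tensor, and related quantities.

```python
import numpy as np

def center_of_mass_state(masses, positions, velocities):
    """Composite center-of-mass state for a set of body parts."""
    m = np.asarray(masses, dtype=float)        # (N,) part masses
    p = np.asarray(positions, dtype=float)     # (N, 3) part positions
    v = np.asarray(velocities, dtype=float)    # (N, 3) part velocities
    total_mass = m.sum()
    com_pos = (m[:, None] * p).sum(axis=0) / total_mass
    com_vel = (m[:, None] * v).sum(axis=0) / total_mass
    # Angular momentum about the center of mass: sum of m_i * (r_i x v_i),
    # with positions and velocities taken relative to the center of mass.
    r = p - com_pos
    ang_momentum = (m[:, None] * np.cross(r, v - com_vel)).sum(axis=0)
    return {"position": com_pos, "velocity": com_vel, "angular_momentum": ang_momentum}
```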
Abstract:
Techniques described herein use signal analysis to detect and analyze repetitive user motion that is captured in a 3D image. The repetitive motion could be the user exercising. One embodiment includes analyzing image data that tracks a user performing a repetitive motion to determine data points for a parameter that is associated with the repetitive motion. The different data points are for different points in time. A parameter signal of the parameter versus time that tracks the repetitive motion is formed. The parameter signal is divided into brackets that delineate one repetition of the repetitive motion from other repetitions of the repetitive motion. A repetition in the parameter signal is analyzed using a signal processing technique. Curve fitting and/or auto-correlation may be used to analyze the repetition.
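A hedged sketch of the signal-analysis idea: track one parameter over time (a noisy sinusoid stands in for, say, hand height during an exercise), estimate the repetition period by auto-correlation, and split the parameter signal into per-repetition brackets. The parameter choice and the peak-picking rule are assumptions for illustration.

```python
import numpy as np

def repetition_period(signal, min_lag=5):
    """Estimate the dominant repetition period (in samples) from the strongest
    autocorrelation peak of the zero-mean parameter signal beyond a minimum lag."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. N-1
    return min_lag + int(np.argmax(ac[min_lag:]))

def bracket_repetitions(signal, period):
    """Divide the parameter signal into brackets, one repetition per bracket."""
    return [signal[i:i + period] for i in range(0, len(signal) - period + 1, period)]

# Example usage with a synthetic parameter signal (period of 40 samples).
t = np.arange(300)
sig = np.sin(2 * np.pi * t / 40) + 0.1 * np.random.randn(300)
period = repetition_period(sig)
reps = bracket_repetitions(sig, period)
```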
Abstract:
A video surveillance system is disclosed. The system includes a model database storing a plurality of models and a vector database storing a plurality of vectors of recently observed trajectories. The system generates a current trajectory data structure having motion data and abnormality scores, and includes a model building module that builds a new motion model corresponding to the motion data of the current trajectory data structure. The system also includes a database purging module configured to determine a subset of vectors that is most similar to the current trajectory data structure based on a measure of similarity between the subset of vectors and the current trajectory data structure. The database purging module is further configured to replace one of the motion models in the model database with the new motion model based on the number of vectors in the subset of vectors and the recentness of the subset of vectors.
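An illustrative sketch (assumed data layout, not the disclosed design): find the stored trajectory vectors most similar to the current trajectory, then use the size and recentness of that subset to decide whether an existing motion model should be replaced by the newly built one. The similarity measure, thresholds, and timestamp format are assumptions.

```python
import numpy as np

def most_similar_subset(vectors, current, threshold=0.9):
    """Return indices of recently observed trajectory vectors whose cosine
    similarity to the current trajectory exceeds a threshold."""
    V = np.asarray(vectors, dtype=float)
    c = np.asarray(current, dtype=float)
    sims = V @ c / (np.linalg.norm(V, axis=1) * np.linalg.norm(c) + 1e-12)
    return [i for i, s in enumerate(sims) if s >= threshold]

def should_replace_model(subset_indices, timestamps, min_count=10,
                         max_age_s=3600.0, now=None):
    """Replace an existing motion model when the similar subset is both large
    enough and recent enough (both cutoffs are assumed values)."""
    if now is None:
        now = max(timestamps) if timestamps else 0.0
    recent = [i for i in subset_indices if now - timestamps[i] <= max_age_s]
    return len(recent) >= min_count
```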
Abstract:
Robot positioning is facilitated by obtaining, for each time of a first sampling schedule, a respective indication of a pose of a camera system of a robot relative to a reference coordinate frame, the respective indication of the pose of the camera system being based on a comparison of multiple three-dimensional images of a scene of an environment, the obtaining providing a plurality of indications of poses of the camera system; obtaining, for each time of a second sampling schedule, a respective indication of a pose of the robot, the obtaining providing a plurality of indications of poses of the robot; and determining, using the plurality of indications of poses of the camera system and the plurality of indications of poses of the robot, an indication of the reference coordinate frame and an indication of a reference point of the camera system relative to the pose of the robot.
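A hedged sketch of one standard way to relate the two pose streams the abstract describes: pair camera-pose and robot-pose samples from their two sampling schedules by nearest timestamp, then fit a rigid transform between the paired positions (a Kabsch/Procrustes fit). This stands in for the determination step; the pairing rule and names are assumptions, and a full solution would also handle orientations.

```python
import numpy as np

def pair_by_time(cam_times, cam_positions, robot_times, robot_positions):
    """For each camera-pose sample, take the robot-pose sample nearest in time."""
    robot_times = np.asarray(robot_times, dtype=float)
    pairs = []
    for t, p in zip(cam_times, cam_positions):
        j = int(np.argmin(np.abs(robot_times - t)))
        pairs.append((np.asarray(p, dtype=float), np.asarray(robot_positions[j], dtype=float)))
    return pairs

def fit_rigid_transform(pairs):
    """Least-squares rotation R and translation t with R @ cam + t ~= robot."""
    A = np.stack([c for c, _ in pairs])   # camera-frame positions
    B = np.stack([r for _, r in pairs])   # robot-frame positions
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```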
Abstract:
A system (100) includes a head mounted display (HMD) device (102) comprising at least one display (112, 114) and at least one sensor (210) to provide pose information for the HMD device. The system further includes a sensor integrator module (304) coupled to the at least one sensor, the sensor integrator module to determine a motion vector for the HMD device based on the pose information, and an application processor (204) to render a first texture (122, 322) based on a pose of the HMD device determined from the pose information. The system further includes a motion analysis module (302) to determine a first velocity field (128, 328) having a pixel velocity for at least a subset of pixels of the first texture, and a compositor (224) to render a second texture (140, 334) based on the first texture, the first velocity field and the motion vector for the HMD device, and to provide the second texture to the display of the HMD device.
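A minimal sketch, assuming a simple interpretation of the compositor step above: each pixel of the rendered texture is displaced by its per-pixel velocity plus a global motion vector derived from the HMD pose, scaled by the time elapsed since the texture was rendered. The warp rule and names are illustrative, not the disclosed rendering pipeline.

```python
import numpy as np

def reproject_texture(texture, velocity_field, hmd_motion, dt):
    """texture: (H, W, C) rendered image; velocity_field: (H, W, 2) pixel
    velocities (vx, vy) in px/s; hmd_motion: (2,) global pixel velocity from
    the HMD motion vector; dt: seconds since the texture was rendered."""
    h, w = texture.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward warp: sample each output pixel from where its content came from.
    src_y = np.clip(np.round(ys - dt * (velocity_field[..., 1] + hmd_motion[1])), 0, h - 1).astype(int)
    src_x = np.clip(np.round(xs - dt * (velocity_field[..., 0] + hmd_motion[0])), 0, w - 1).astype(int)
    return texture[src_y, src_x]
```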
Abstract:
Systems and methods for tracking and compensating for motion of an object are disclosed. One such method includes receiving a series of images captured by each of a plurality of image capture devices. The image capture devices are arranged in an orthogonal configuration of two opposing pairs. The method further includes computing a series of positions and orientations of the object by processing the images captured by each of the plurality of image capture devices.
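An illustrative sketch with simplified geometry, not the disclosed processing chain: with cameras arranged as two opposing orthogonal pairs, a view along the x axis and a view along the y axis each constrain two coordinates of a tracked point, so its 3D position can be recovered by combining their pixel measurements. The pinhole model, focal length, and camera distances below are assumed for the example.

```python
import numpy as np

def position_from_orthogonal_views(px_cam_x, px_cam_y, f=800.0, dist_x=2.0, dist_y=2.0):
    """px_cam_x: (u, v) pixel offset of the tracked point in the camera looking
    along +x; px_cam_y: (u, v) in the camera looking along +y. Axes aligned,
    simple pinhole projection with focal length f (pixels)."""
    # The camera on the x axis observes the point's (y, z); the camera on the
    # y axis observes its (x, z). Depth along each optical axis is assumed known.
    y = px_cam_x[0] * dist_x / f
    z1 = px_cam_x[1] * dist_x / f
    x = px_cam_y[0] * dist_y / f
    z2 = px_cam_y[1] * dist_y / f
    return np.array([x, y, (z1 + z2) / 2.0])
```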