Abstract:
Geometries of structures and objects deviate from their idealized models, though the deviations are not always visible to the naked eye. Embodiments of the present invention reveal and visualize such subtle geometric deviations, which can contain useful, surprising information. In an embodiment of the present invention, a method can include fitting a model of a geometry to an input image, matting a region of the input image according to the model based on a sampling function, generating a deviation function based on the matted region, extrapolating the deviation function to an image-wide warping field, and generating an output image by warping the input image according to the warping field. In an embodiment of the present invention, Deviation Magnification takes a still image or frame as input, fits parametric models to objects of interest, and generates an output image exaggerating departures from ideal geometries.
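The fit-then-amplify idea above can be illustrated with a minimal 1D sketch (hypothetical names and a line model; the claimed method operates on images with matting and a dense warping field): fit an idealized model to samples, form the deviation function, and exaggerate it.

```python
import numpy as np

def amplify_deviation(xs, ys, alpha=5.0):
    """Fit an idealized line y = a*x + b to sampled edge points,
    then exaggerate each sample's departure from the fit by alpha."""
    a, b = np.polyfit(xs, ys, 1)        # parametric model of the geometry
    ideal = a * xs + b
    deviation = ys - ideal              # sampled deviation function
    return ideal + alpha * deviation    # exaggerated geometry

# A nearly straight edge with a subtle sinusoidal wobble.
xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs + 0.01 * np.sin(2.0 * np.pi * 3.0 * xs)
out = amplify_deviation(xs, ys, alpha=3.0)
```

With alpha = 1 the output reproduces the input exactly; larger alpha magnifies the wobble around the fitted line.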
Abstract:
Some embodiments are directed to a method, corresponding system, and corresponding apparatus for rendering a video and/or image display to amplify small motions through video magnification. Some embodiments include a new compact image pyramid representation, the Riesz pyramid, that may be used for real-time, high-quality phase-based video magnification. Some embodiments are less overcomplete than even the smallest two-orientation, octave-bandwidth complex steerable pyramid. Some embodiments are implemented using compact, efficient linear filters in the spatial domain. Some embodiments produce motion-magnified videos that are of comparable quality to those using the complex steerable pyramid. In some embodiments, the Riesz pyramid is used with phase-based video magnification. The Riesz pyramid may phase-shift image features along their dominant orientation, rather than along every orientation as the complex steerable pyramid does.
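A sketch of the core per-level computation, assuming the compact spatial-domain approximation of the Riesz transform with 3-tap filters [0.5, 0, -0.5] (an assumption for illustration; boundary handling and pyramid construction are omitted):

```python
import numpy as np

def riesz_triple(band):
    """Apply approximate Riesz-transform filters [0.5, 0, -0.5]
    along x and y to one bandpassed pyramid level."""
    rx = np.zeros_like(band)
    ry = np.zeros_like(band)
    rx[:, 1:-1] = 0.5 * (band[:, 2:] - band[:, :-2])
    ry[1:-1, :] = 0.5 * (band[2:, :] - band[:-2, :])
    return band, rx, ry

def local_amplitude_phase(band):
    """Local amplitude and phase along the dominant orientation,
    derived from the Riesz triple (I, Rx, Ry)."""
    i, rx, ry = riesz_triple(band)
    quadrature = np.hypot(rx, ry)       # magnitude of the quadrature pair
    amplitude = np.hypot(i, quadrature)
    phase = np.arctan2(quadrature, i)
    return amplitude, phase
```

Phase-based magnification would then temporally filter and amplify this phase before reconstructing the level.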
Abstract:
Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they preferably use correctly filtered content from multiple viewpoints. Such content, however, may not be easily obtained with current stereoscopic production pipelines. The proposed method and system takes a stereoscopic video as an input and converts it to multi-view and filtered video streams that may be used to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and interperspective antialiasing into a single filtering process. The whole algorithm is simple and may be implemented efficiently on current GPUs to yield real-time performance. Furthermore, the ability to retarget disparity is naturally supported. The method is robust and works with transparent materials and specularities. The method provides superior results when compared to state-of-the-art depth-based rendering methods. The method is showcased in the context of a real-time 3D videoconferencing system.
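The phase-based view synthesis underlying such a conversion can be sketched for a single subband coefficient (a hypothetical simplification; the real pipeline also performs interperspective antialiasing and operates over a full pyramid):

```python
import numpy as np

def interpolate_view(amplitude, phase_left, phase_right, t):
    """One-subband sketch of phase-based novel-view synthesis:
    blend local phase between the left view (t = 0) and the right
    view (t = 1), keeping a shared amplitude."""
    dphi = np.angle(np.exp(1j * (phase_right - phase_left)))  # wrap to (-pi, pi]
    return amplitude * np.exp(1j * (phase_left + t * dphi))
```

Intermediate values of t yield the in-between viewpoints needed to drive a multi-view display; t outside [0, 1] extrapolates, which also supports disparity retargeting.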
Abstract:
Structural health monitoring (SHM) is essential but can be expensive to perform. In an embodiment, a method includes sensing vibrations at a plurality of locations of a structure by a plurality of time-synchronized sensors. The method further includes determining a first set of dependencies of all sensors of the time-synchronized sensors at a first sample time to any sensors at a second sample time, and determining a second set of dependencies of all sensors of the time-synchronized sensors at the second sample time to any sensors at a third sample time. The second sample time is later than the first sample time, and the third sample time is later than the second sample time. The method then determines that the structure has changed if the first set of dependencies is different from the second set of dependencies. Therefore, automated SHM can ensure safety at a lower cost to building owners.
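One way to read the "dependencies" above is as a linear map relating sensor readings at one sample time to the next; a minimal sketch under that assumption (the names and the least-squares estimator are illustrative, not the claimed implementation):

```python
import numpy as np

def dependency_matrix(x_t, x_next):
    """Least-squares linear map A with x_next ~ A @ x_t, where rows
    are sensors and columns are time-synchronized samples."""
    return x_next @ np.linalg.pinv(x_t)

def structure_changed(deps_a, deps_b, tol=1e-2):
    """Flag a change when two dependency estimates differ."""
    return bool(np.linalg.norm(deps_a - deps_b) > tol)

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical structural dynamics
x_t = rng.normal(size=(2, 100))          # 2 sensors, 100 samples
deps = dependency_matrix(x_t, A @ x_t)
```

If the structure's dynamics drift (e.g. a stiffness change alters A), successive dependency estimates diverge and the change is flagged.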
Abstract:
In one embodiment, a method comprises projecting, from a projector, a diffused speckle pattern on an object. The method further includes capturing, with a first camera in a particular location, a reference image of the object while the diffused speckle pattern is projected on the object. The method further includes capturing, with a second camera positioned in the particular location, a test image of the object while the diffused speckle pattern is projected on the object. The method further includes comparing speckles in the reference image to the test image. The projector, first camera, and second camera are removably provided to and positioned at a site of the object.
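A simple way to compare speckles between the reference and test images is a normalized correlation score (a hypothetical stand-in for the claimed comparison; real systems typically compare local windows rather than whole images):

```python
import numpy as np

def speckle_similarity(reference, test):
    """Zero-mean normalized correlation between reference and test
    speckle images; values near 1 indicate an unchanged object."""
    r = reference - reference.mean()
    t = test - test.mean()
    return float((r * t).sum() / (np.linalg.norm(r) * np.linalg.norm(t)))
```

Because speckle decorrelates rapidly with surface displacement, a drop in this score between the reference and test images signals that the object has moved or deformed.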
Abstract:
An apparatus according to an embodiment of the present invention enables measurement and visualization of a refractive field such as a fluid. An embodiment device obtains video captured by a video camera with an imaging plane. Representations of apparent motions in the video are correlated to determine actual motions of the refractive field. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Embodiments can render refractive flow visualizations for augmented reality, wearable devices, and video microscopes.
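The correlation step can be illustrated in 1D (a hypothetical reduction; the actual method correlates dense apparent-motion fields): two points observing the same translating refractive field see similar wiggle signals offset in time, and the offset recovers the field's motion.

```python
import numpy as np

def apparent_motion_lag(sig_a, sig_b):
    """Lag (in samples) at which sig_b best matches sig_a, found by
    cross-correlation -- a 1D stand-in for correlating apparent
    motions at different points to recover refractive-field motion."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)
```

Dividing the spatial separation of the two observation points by the recovered lag gives the apparent speed of the refractive field.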
Abstract:
A method and corresponding apparatus for measuring object motion using camera images may include measuring a global optical flow field of a scene. The scene may include target and reference objects captured in an image sequence. Motion of a camera used to capture the image sequence may be determined relative to the scene by measuring an apparent, sub-pixel motion of the reference object with respect to an imaging plane of the camera. Motion of the target object corrected for the camera motion may be calculated based on the optical flow field of the scene and on the apparent, sub-pixel motion of the reference object with respect to the imaging plane of the camera. Embodiments may enable measuring vibration of structures and objects from long distances in relatively uncontrolled settings, with or without accelerometers, with high signal-to-noise ratios.
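The correction step reduces to estimating the camera-induced motion from the reference object and subtracting it from the target's flow; a minimal sketch under that assumption (hypothetical array layout: frames x points x 2 flow components):

```python
import numpy as np

def correct_for_camera_motion(target_flow, reference_flow):
    """Estimate per-frame camera motion as the mean apparent motion
    of the (assumed stationary) reference region, then subtract it
    from the target's measured optical flow."""
    camera_motion = reference_flow.mean(axis=1, keepdims=True)
    return target_flow - camera_motion
```

Averaging over many reference points suppresses per-point flow noise, which is what makes sub-pixel camera-motion estimates usable at long range.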
Abstract:
An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
Abstract:
In one embodiment, a method of amplifying temporal variation in at least two images comprises examining pixel values of the at least two images. The temporal variation of the pixel values between the at least two images can be below a particular threshold. The method can further include applying signal processing to the pixel values.
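A minimal Eulerian-style sketch of the signal processing step (illustrative only; practical systems apply a temporal bandpass filter and spatial decomposition rather than a simple mean):

```python
import numpy as np

def amplify_temporal_variation(frames, alpha=10.0):
    """Amplify each pixel's deviation from its temporal mean over
    the frame stack by alpha (Eulerian per-pixel processing)."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0, keepdims=True)
    return mean + alpha * (frames - mean)
```

Sub-threshold variations between frames come out magnified by exactly alpha, making otherwise invisible changes visible.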
Abstract:
A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions, and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between the left wavelet and the right wavelet using a phase difference between the corresponding wavelets. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
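The phase-based refinement step can be sketched for one matched wavelet pair (a hypothetical simplification assuming a wavelet of known spatial frequency omega): the wrapped phase difference, divided by omega, is the sub-sample correction to the coarse disparity.

```python
import numpy as np

def refine_disparity(d_init, phase_left, phase_right, omega):
    """Refine a coarse disparity using the phase difference between
    matched left/right wavelets of spatial frequency omega."""
    dphi = np.angle(np.exp(1j * (phase_right - phase_left)))  # wrap to (-pi, pi]
    return d_init + dphi / omega
```

Phase wrapping means the correction is only unambiguous within half a wavelength, which is why a disparity-map-based initial correspondence (the Lagrangian part) is established first.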