Abstract:
A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between each left wavelet and its corresponding right wavelet using the phase difference between the corresponding wavelets. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
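To make the phase-refinement step concrete, the Python sketch below shows one way a coarse disparity taken from the disparity map could be refined using the phase difference of complex Gabor wavelet responses on corresponding scanlines. The helper names (gabor_responses, refine_disparity), the filter parameters, and the sign convention are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def gabor_responses(scanline, freqs, sigma=8.0):
    """Convolve a 1-D scanline with complex Gabor filters, one per frequency."""
    n = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    out = {}
    for f in freqs:
        kernel = np.exp(-n**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * f * n)
        out[f] = np.convolve(scanline, kernel, mode="same")
    return out

def refine_disparity(left, right, x, d0, freq, sigma=8.0):
    """Refine a coarse disparity d0 at left-view pixel x using the phase
    difference between corresponding complex Gabor responses.
    Assumes the convention x_right = x_left + d (an assumption here)."""
    L = gabor_responses(left, [freq], sigma)[freq]
    R = gabor_responses(right, [freq], sigma)[freq]
    xr = int(round(x + d0))                  # coarse correspondence from the disparity map
    dphi = np.angle(L[x] * np.conj(R[xr]))   # wrapped phase difference in (-pi, pi]
    return d0 + dphi / (2 * np.pi * freq)    # sub-pixel correction from phase
```

Under this sketch, a novel view at an intermediate position could then be reconstructed by resynthesizing each wavelet shifted by the appropriate fraction of its refined disparity.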
Abstract:
Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they preferably use correctly filtered content from multiple viewpoints. Such content, however, may not be easily obtained with current stereoscopic production pipelines. The proposed method and system takes a stereoscopic video as input and converts it to multi-view and filtered video streams that may be used to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and interperspective antialiasing into a single filtering process. The whole algorithm is simple and may be efficiently implemented on current GPUs to yield real-time performance. Furthermore, the ability to retarget disparity is naturally supported. The method is robust and works with transparent materials and specularities. The method provides superior results when compared to state-of-the-art depth-based rendering methods. The method is showcased in the context of a real-time 3D videoconferencing system.
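As an illustration of how phase-based view synthesis and interperspective antialiasing can share a single filtering step, the sketch below blends one complex subband of the left and right views into an intermediate view by phase shifting. The function name, the boolean antialiasing mask, and the phase_limit parameter are assumptions for illustration, not the method as claimed.

```python
import numpy as np

def synthesize_view(band_l, band_r, alpha, phase_limit=np.pi):
    """Synthesize one complex subband of an intermediate view at position
    alpha (0 = left view, 1 = right view) by interpolating phase between
    the two input subbands. Coefficients whose inter-view phase difference
    exceeds phase_limit are replaced by the view average, which attenuates
    high-disparity detail and acts as crude interperspective antialiasing."""
    dphi = np.angle(band_l * np.conj(band_r))             # per-coefficient phase difference
    keep = (np.abs(dphi) <= phase_limit).astype(float)    # antialiasing mask
    shifted = band_l * np.exp(-1j * alpha * dphi)         # phase moves toward the right view
    return keep * shifted + (1 - keep) * 0.5 * (band_l + band_r)
```

In a view-expansion setting, alpha values outside [0, 1] would extrapolate beyond the input stereo pair, which is exactly where the wrapped-phase ambiguity and the antialiasing limit become important.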
Abstract:
According to some aspects, a method of designing an object based on a three-dimensional model representing a shape of the object is provided. The object may be fabricated from a plurality of materials having one or more known physical properties, wherein the object is designed to exhibit one or more target properties. The method may comprise determining a first composition of the object by providing the three-dimensional model as input to a reducer tree, determining one or more physical properties of the object with the first composition by simulating the object with the first composition, comparing the determined one or more physical properties with the one or more target properties, and determining a second composition of the object based on a result of comparing the determined one or more physical properties with the one or more target properties.
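The loop the abstract describes (determine a composition, simulate it, compare against the targets, determine a new composition) could look like the following Python sketch. The objects reducer_tree, simulate, and target, and all of their methods, are hypothetical duck-typed stand-ins for the patent's components, not its API.

```python
def design_object(model, reducer_tree, simulate, target, tol=1e-3, max_iters=50):
    """Iteratively refine an object's material composition (illustrative sketch).
    reducer_tree maps a 3-D model plus tunable parameters to a composition;
    simulate predicts the physical properties of a candidate composition;
    target.distance measures deviation from the target properties."""
    params = reducer_tree.initial_parameters()
    for _ in range(max_iters):
        composition = reducer_tree.evaluate(model, params)  # first/next composition
        properties = simulate(composition)                  # simulated physical properties
        error = target.distance(properties)                 # compare with target properties
        if error < tol:
            break
        params = reducer_tree.update(params, properties, target)  # steer toward target
    return composition
```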
Abstract:
The present application relates generally to systems and methods for using machine vision to provide information on one or more aspects of an additive fabrication device, such as calibration parameters and/or an object formed by the device or in the process of being formed by the device. According to some aspects, a method is provided for calibrating an additive fabrication device. According to some aspects, a method is provided for assessing at least a portion of an object formed using an additive fabrication device. According to some aspects, a method is provided for fabricating a second object in contact with a first object using an additive fabrication device. According to some aspects, an additive fabrication device configured to perform one or more of the above methods may be provided.
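As one hedged illustration of the calibration aspect, the sketch below compares a machine-vision depth scan of the build surface against the expected layer geometry and extracts a global height offset and a platform tilt. The helper name and the particular parameters recovered are assumptions for illustration, not the claimed calibration procedure.

```python
import numpy as np

def calibrate_layer(scanned_depth, expected_depth):
    """Estimate simple per-layer calibration corrections by comparing a depth
    scan of the build surface (2-D array) with the expected layer geometry.
    A real system might also solve for scale and per-material offsets."""
    residual = scanned_depth - expected_depth      # positive = over-deposition
    z_offset = float(np.median(residual))          # robust global height error
    # Fit a plane a*x + b*y + c to the residual to capture platform tilt.
    h, w = residual.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    (a, b, c), *_ = np.linalg.lstsq(A, residual.ravel(), rcond=None)
    return {"z_offset": z_offset, "tilt_x": float(a), "tilt_y": float(b)}
```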