-
Publication number: US10762702B1
Publication date: 2020-09-01
Application number: US16015997
Filing date: 2018-06-22
Applicant: A9.com, Inc.
Inventor: Arnab Dhua , Xing Zhang , Karl Hillesland , Himanshu Arora , Nicholas Corso , Brian Graham , Jesse Chang , Jason Canada
Abstract: A complex three-dimensional virtual representation of an object can be rendered. Virtual images can be captured representing a plurality of views of the complex virtual representation. The virtual images can be converted into binary masks depicting the object pixels and non-object pixels in the virtual images. The binary masks can be used to create a three-dimensional representation of the object having lower complexity than the complex three-dimensional virtual representation of the object. In embodiments, the low-complexity three-dimensional virtual representation of the object and the virtual images are sent to a mobile device to render a low-payload representation of the object on the mobile device.
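A minimal sketch of the mask-conversion step this abstract describes, assuming the virtual images are rendered with an alpha channel (the function name and threshold are illustrative, not from the patent):

```python
import numpy as np

def to_binary_mask(rgba_view, alpha_threshold=0):
    """Convert one rendered RGBA view into a binary mask:
    1 where a pixel belongs to the object, 0 elsewhere."""
    alpha = rgba_view[..., 3]
    return (alpha > alpha_threshold).astype(np.uint8)

# A toy 4x4 render: opaque object pixels in the centre, transparent elsewhere.
view = np.zeros((4, 4, 4), dtype=np.uint8)
view[1:3, 1:3, 3] = 255          # alpha channel marks object coverage
mask = to_binary_mask(view)
print(mask.sum())  # 4 object pixels
```

One such mask per captured viewpoint is what a silhouette-based reconstruction would consume downstream.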
-
Publication number: US11521376B1
Publication date: 2022-12-06
Application number: US17449049
Filing date: 2021-09-27
Applicant: A9.com, Inc.
Inventor: Arnab Dhua , Divyansh Agarwal
IPC: G06V20/10 , G10L15/08 , G06T17/00 , G06T7/70 , G06T19/20 , G06F16/432 , G10L15/22 , G06V20/40 , G06V20/64
Abstract: Systems and methods are provided that generate a three-dimensional model from a physical space. While a user is scanning and/or recording the physical space with a user computing device, user speech describing the physical space is recorded. A transcript is generated from the audio captured during the scan and/or image recording of the physical space. Keywords from the transcript are used to improve computer-vision object identification, which is incorporated into the three-dimensional model.
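The abstract does not say how transcript keywords "improve" identification; one plausible sketch is a simple confidence boost for detector labels that the user also spoke aloud (the function, scores, and boost value are all illustrative assumptions):

```python
def boost_detections(detections, transcript_keywords, boost=0.15):
    """Raise the confidence of detections whose label appears in the
    spoken transcript, as a simple form of speech-guided recognition."""
    keywords = {k.lower() for k in transcript_keywords}
    boosted = []
    for label, score in detections:
        if label.lower() in keywords:
            score = min(1.0, score + boost)
        boosted.append((label, score))
    return boosted

dets = [("sofa", 0.55), ("bench", 0.52)]
print(boost_detections(dets, ["sofa", "window"]))
# the spoken "sofa" is boosted; "bench" is unchanged
```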
-
Publication number: US11138789B1
Publication date: 2021-10-05
Application number: US16452067
Filing date: 2019-06-25
Applicant: A9.com, Inc.
Inventor: Himanshu Arora , Divyansh Agarwal , Arnab Dhua , Chun Kai Wang
Abstract: Approaches described and suggested herein relate to generating an enhanced point cloud representation of an object and generating a surface mesh from the enhanced point cloud. The surface mesh can be used to render three-dimensional representations of objects on personal devices such as smartphones and personal computers, for example. Generating an enhanced point cloud of an object includes capturing a plurality of images of the object from a plurality of viewpoints about the object, generating an initial point cloud representation of the object from the plurality of images, generating a preliminary surface mesh from the point cloud using a Delaunay-based meshing algorithm, and sampling points from the preliminary surface mesh. The sampled points are then added to the point cloud to form the enhanced point cloud. A final surface mesh can then be generated from the enhanced point cloud using a Poisson-based meshing algorithm.
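The "sampling points from the preliminary surface mesh" step could be done with standard area-weighted barycentric sampling; the patent does not specify the sampler, so this sketch is one common choice:

```python
import numpy as np

def sample_mesh_points(vertices, faces, n_samples, seed=0):
    """Uniformly sample points on a triangle mesh's surface: pick faces
    with probability proportional to area, then draw uniform
    barycentric coordinates inside each chosen triangle."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = rng.choice(len(faces), size=n_samples, p=areas / areas.sum())
    u, v = rng.random(n_samples), rng.random(n_samples)
    flip = u + v > 1.0                      # fold samples into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    a, b, c = v0[face_idx], v1[face_idx], v2[face_idx]
    return a + u[:, None] * (b - a) + v[:, None] * (c - a)

# Enhance a point cloud: append surface samples from a preliminary mesh.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
tris = np.array([[0, 1, 2]])
cloud = np.vstack([verts, sample_mesh_points(verts, tris, 100)])
print(cloud.shape)  # (103, 3)
```

The densified cloud is then what a Poisson-style surface reconstruction would take as input.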
-
Publication number: US10991160B1
Publication date: 2021-04-27
Application number: US16452050
Filing date: 2019-06-25
Applicant: A9.com, Inc.
Inventor: Himanshu Arora , Divyansh Agarwal , Arnab Dhua , Chun Kai Wang
Abstract: Approaches described and suggested herein relate to generating three-dimensional representations of objects to be used to render virtual reality and augmented reality effects on personal devices such as smartphones and personal computers, for example. An initial surface mesh of an object is obtained. A plurality of silhouette masks of the object taken from a plurality of viewpoints is also obtained. A plurality of depth maps are generated from the initial surface mesh. Specifically, the plurality of depth maps are taken from the same plurality of viewpoints from which the silhouette masks are taken. A volume including the object is discretized into a plurality of voxels. Each voxel is then determined to be either inside the object or outside of the object based on the silhouette masks and the depth data. A final mesh is then generated from the voxels that are determined to be inside the object.
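The inside/outside test resembles classic visual-hull carving: a voxel survives only if it projects into the silhouette in every view. A minimal sketch (depth-map consistency is omitted for brevity, and the projection functions are toy orthographic cameras):

```python
import numpy as np

def carve_voxels(voxel_centers, views):
    """Visual-hull style carving: keep a voxel as 'inside' only if it
    projects inside the silhouette mask in every view.  Each view is a
    (project_fn, mask) pair; project_fn maps a 3D point to (row, col)."""
    inside = np.ones(len(voxel_centers), dtype=bool)
    for project, mask in views:
        for i, center in enumerate(voxel_centers):
            r, c = project(center)
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]) \
               or not mask[r, c]:
                inside[i] = False
    return inside

# Two orthographic silhouettes of a one-voxel-wide column at (1, 1, z).
top = np.zeros((3, 3), bool); top[1, 1] = True      # looking down z
front = np.zeros((3, 3), bool); front[:, 1] = True  # looking along y
views = [(lambda p: (int(p[0]), int(p[1])), top),
         (lambda p: (int(p[2]), int(p[0])), front)]
centers = np.array([[x, y, z] for x in range(3)
                    for y in range(3) for z in range(3)])
print(carve_voxels(centers, views).sum())  # 3 voxels survive: the column
```

A marching-cubes pass over the surviving voxels would then yield the final mesh the abstract mentions.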
-
Publication number: US11210863B1
Publication date: 2021-12-28
Application number: US17001182
Filing date: 2020-08-24
Applicant: A9.com, Inc.
Inventor: Geng Yan , Xing Zhang , Amit Kumar K C , Arnab Dhua , Yu Lou
Abstract: Devices, systems, and methods are provided for real-time object placement guidance in an augmented reality experience. An example method may include receiving, by a device having a sensor, an indication of an object to be viewed in a physical environment of the device. The example method may also include determining a 3D model of the physical environment using data of the physical environment captured by the sensor. The example method may also include determining that a first surface in the 3D model of the environment is a first floor space, and a second surface in the 3D model of the environment is a first wall space. The example method may also include determining that a portion of the first surface is unoccupied and sized to fit the object. The example method may also include determining a first location in the 3D model of the physical environment for placement of a virtual representation of the object based on a 3D model of the object, wherein the first location corresponds to the portion of the first floor space. The example method may also include generating the virtual representation of the object for display at the first location, the virtual representation of the object having a first orientation, wherein the first orientation is based on a second orientation of the second surface. The example method may also include generating a first real-time view of the physical environment comprising the virtual representation of the object at the first location and in the first orientation. In some cases, a real-time virtual overlay may also be generated in the physical environment, the real-time virtual overlay indicating a location of a floor space in the physical environment.
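The "portion of the first surface is unoccupied and sized to fit the object" determination can be pictured as a search over a floor-occupancy grid; the grid representation and footprint shape here are illustrative assumptions, not the patent's method:

```python
import numpy as np

def find_placement(floor_free, footprint):
    """Scan a boolean floor-occupancy grid (True = unoccupied) for the
    first position where a rectangular object footprint fits entirely."""
    fh, fw = footprint
    H, W = floor_free.shape
    for r in range(H - fh + 1):
        for c in range(W - fw + 1):
            if floor_free[r:r + fh, c:c + fw].all():
                return (r, c)
    return None  # no unoccupied patch is large enough

free = np.ones((4, 6), bool)
free[:, :3] = False                  # left half of the floor is occupied
print(find_placement(free, (2, 2)))  # (0, 3): first free 2x2 patch
```

The returned grid cell would map back to a 3D location on the detected floor plane, with the object's orientation then aligned to the nearest wall surface.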
-
Publication number: US11055910B1
Publication date: 2021-07-06
Application number: US16707862
Filing date: 2019-12-09
Applicant: A9.com, Inc.
Inventor: Kenan Deng , Xi Zhang , Arnab Dhua , Himanshu Arora , Ting-Hsiang Hwang , Tomas Francisco Yago Vicente , Sundar Vedula
Abstract: A machine learning system receives a reference image and generates a series of projected view images of a physical object represented in the images. Parallel neural networks may receive the reference image and the series of projected view images for analysis to determine one or more features of the physical object. By pooling the results from the parallel networks, a single output may be provided to a set of decoders that are trained to identify a material property of the object. As a result, a three-dimensional model may be generated that includes a graphical representation of the object as a function of its material properties to enable improved rendering.
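The parallel-encoders-plus-pooling structure resembles multi-view (MVCNN-style) view pooling. A toy numpy sketch of just the pooling idea, with a stand-in one-layer encoder (the weights, sizes, and ReLU choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(view, W):
    """Stand-in for one branch of the parallel encoder networks."""
    return np.maximum(W @ view, 0.0)      # single ReLU layer

def pooled_features(views, W):
    """Run every view through the shared encoder, then element-wise
    max-pool across views to get one feature vector for the decoders."""
    feats = np.stack([encode(v, W) for v in views])
    return feats.max(axis=0)              # view pooling

W = rng.standard_normal((8, 16))          # toy shared encoder weights
views = [rng.standard_normal(16) for _ in range(5)]  # reference + projections
f = pooled_features(views, W)
print(f.shape)  # (8,)
```

The single pooled vector is what would feed the material-property decoders described in the abstract.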
-
Publication number: US11922575B2
Publication date: 2024-03-05
Application number: US17200400
Filing date: 2021-03-12
Applicant: A9.com, Inc.
Inventor: Himanshu Arora , Divyansh Agarwal , Arnab Dhua , Chun Kai Wang
CPC classification number: G06T17/20 , G06T7/55 , G06T2200/08 , G06T2207/10028
Abstract: Approaches described and suggested herein relate to generating three-dimensional representations of objects to be used to render virtual reality and augmented reality effects on personal devices such as smartphones and personal computers, for example. An initial surface mesh of an object is obtained. A plurality of silhouette masks of the object taken from a plurality of viewpoints is also obtained. A plurality of depth maps are generated from the initial surface mesh. Specifically, the plurality of depth maps are taken from the same plurality of viewpoints from which the silhouette masks are taken. A volume including the object is discretized into a plurality of voxels. Each voxel is then determined to be either inside the object or outside of the object based on the silhouette masks and the depth data. A final mesh is then generated from the voxels that are determined to be inside the object.
-
Publication number: US11734900B2
Publication date: 2023-08-22
Application number: US17510094
Filing date: 2021-10-25
Applicant: A9.com, Inc.
Inventor: Geng Yan , Xing Zhang , Amit Kumar K C , Arnab Dhua , Yu Lou
CPC classification number: G06T19/006 , G06T19/20 , G06T2200/24 , G06T2210/04 , G06T2219/2004
Abstract: Devices, systems, and methods are provided for real-time object placement guidance in an augmented reality experience. An example method may include receiving, by a device having a sensor, an indication of an object to be viewed in a physical environment of the device. The example method may also include determining a 3D model of the physical environment using data of the physical environment captured by the sensor. The example method may also include determining that a first surface in the 3D model of the environment is a first floor space, and a second surface in the 3D model of the environment is a first wall space. The example method may also include determining that a portion of the first surface is unoccupied and sized to fit the object. The example method may also include determining a first location in the 3D model of the physical environment for placement of a virtual representation of the object based on a 3D model of the object, wherein the first location corresponds to the portion of the first floor space. The example method may also include generating the virtual representation of the object for display at the first location, the virtual representation of the object having a first orientation, wherein the first orientation is based on a second orientation of the second surface. The example method may also include generating a first real-time view of the physical environment comprising the virtual representation of the object at the first location and in the first orientation. In some cases, a real-time virtual overlay may also be generated in the physical environment, the real-time virtual overlay indicating a location of a floor space in the physical environment.
-
Publication number: US20220058883A1
Publication date: 2022-02-24
Application number: US17510094
Filing date: 2021-10-25
Applicant: A9.com, Inc.
Inventor: Geng Yan , Xing Zhang , Amit Kumar K. C. , Arnab Dhua , Yu Lou
Abstract: Devices, systems, and methods are provided for real-time object placement guidance in an augmented reality experience. An example method may include receiving, by a device having a sensor, an indication of an object to be viewed in a physical environment of the device. The example method may also include determining a 3D model of the physical environment using data of the physical environment captured by the sensor. The example method may also include determining that a first surface in the 3D model of the environment is a first floor space, and a second surface in the 3D model of the environment is a first wall space. The example method may also include determining that a portion of the first surface is unoccupied and sized to fit the object. The example method may also include determining a first location in the 3D model of the physical environment for placement of a virtual representation of the object based on a 3D model of the object, wherein the first location corresponds to the portion of the first floor space. The example method may also include generating the virtual representation of the object for display at the first location, the virtual representation of the object having a first orientation, wherein the first orientation is based on a second orientation of the second surface. The example method may also include generating a first real-time view of the physical environment comprising the virtual representation of the object at the first location and in the first orientation. In some cases, a real-time virtual overlay may also be generated in the physical environment, the real-time virtual overlay indicating a location of a floor space in the physical environment.
-
Publication number: US20210248798A1
Publication date: 2021-08-12
Application number: US17243273
Filing date: 2021-04-28
Applicant: A9.com, Inc.
Inventor: Jesse Chang , Jared Corso , Xing Zhang , Arnab Dhua , Yu Lou , Jason Freund
Abstract: Approaches in accordance with various embodiments provide for the presentation of augmented reality (AR) content with respect to optically challenging surfaces. Such surfaces can be difficult to locate using conventional optical-based approaches that rely on visible features. Embodiments can utilize the fact that horizontal surfaces can be located relatively easily, and can determine intersections or boundaries of those horizontal surfaces that likely indicate the presence of another surface, such as a vertical wall. This boundary can be determined automatically, through user input, or using a combination of such approaches. Once such an intersection is located, a virtual plane can be determined whose relative location to a device displaying AR content can be tracked and used as a reference for displaying AR content.
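The "virtual plane" derived from a floor-boundary intersection can be constructed with a cross product: the vertical wall plane contains the boundary line, so its normal is perpendicular to both the boundary direction and the floor's up vector. A minimal sketch, assuming a y-up coordinate convention (the function name is illustrative):

```python
import numpy as np

def wall_plane_from_boundary(p0, p1, floor_normal=(0.0, 1.0, 0.0)):
    """Given two points on a floor-boundary line, build the vertical
    plane containing that line: return (unit normal n, offset d)
    such that every point x on the plane satisfies n . x = d."""
    edge = np.asarray(p1, float) - np.asarray(p0, float)
    n = np.cross(edge, np.asarray(floor_normal, float))
    n /= np.linalg.norm(n)        # horizontal, perpendicular to the edge
    return n, float(n @ np.asarray(p0, float))

# A boundary running along the x-axis at depth z=2 implies a wall there.
n, d = wall_plane_from_boundary([0, 0, 2], [1, 0, 2])
print(n, d)  # normal along z; wall points satisfy z = 2
```

Tracking the device pose relative to this plane then gives a stable anchor for rendering AR content on an otherwise featureless wall.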