-
1.
Publication No.: US20210101286A1
Publication Date: 2021-04-08
Application No.: US17053335
Filing Date: 2020-02-28
Applicant: Google LLC
Inventor: Honglak Lee , Xinchen Yan , Soeren Pirk , Yunfei Bai , Seyed Mohammad Khansari Zadeh , Yuanzheng Gong , Jasmine Hsu
Abstract: Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representation to train a robotic manipulation policy model using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of simulated objects to be manipulated. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in controlling a robot, based on output generated by processing the generated domain-invariant 3D representations with the robotic manipulation policy model.
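The patent text includes no code; the sketch below is a minimal, hypothetical illustration of the kind of model the abstract describes: a convolutional encoder that maps a single-view RGB-D (2.5D) observation to a fixed-size 3D point cloud. The class name, layer sizes, input resolution, and point count are all assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch only: maps a single-view 2.5D observation (4-channel
# RGB-D image) to an N x 3 point cloud. All architecture choices here
# (64x64 input, channel counts, NUM_POINTS) are illustrative assumptions.
import torch
import torch.nn as nn

NUM_POINTS = 1024  # size of the predicted point cloud (assumed)

class PointCloudPredictor(nn.Module):
    def __init__(self, num_points: int = NUM_POINTS):
        super().__init__()
        # Each stride-2 convolution halves the 64x64 input: 64 -> 32 -> 16 -> 8.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(128 * 8 * 8, num_points * 3)
        self.num_points = num_points

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        """rgbd: (batch, 4, 64, 64) -> (batch, num_points, 3) point cloud."""
        return self.head(self.encoder(rgbd)).view(-1, self.num_points, 3)

# Usage: a batch of two single-view RGB-D observations.
obs = torch.randn(2, 4, 64, 64)
points = PointCloudPredictor()(obs)  # shape (2, 1024, 3)
```

Training such a model on paired simulated and real observations against a shared point-cloud target is one plausible route to the domain invariance the abstract mentions; the predicted point cloud would then serve as (part of) the input to the manipulation policy model. The patent's actual training setup may differ.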
-
2.
Publication No.: US20230289988A1
Publication Date: 2023-09-14
Application No.: US18199303
Filing Date: 2023-05-18
Applicant: GOOGLE LLC
Inventor: Yunfei Bai , Yuanzheng Gong
CPC classification number: G06T7/593 , G06T7/12 , G06T7/521 , G06T7/13 , B25J9/1661 , B25J9/1697 , G06T5/008 , G06F18/24 , H04N23/90 , G06T2207/10024 , G06T2207/10028 , G06T2207/10048
Abstract: Generating edge-depth values for an object, utilizing the edge-depth values in generating a 3D point cloud for the object, and utilizing the generated 3D point cloud for generating a 3D bounding shape (e.g., a 3D bounding box) for the object. Edge-depth values for an object are depth values that are determined from frame(s) of vision data (e.g., left/right images) that capture the object, and that are determined to correspond to an edge of the object (an edge from the perspective of the frame(s) of vision data). Techniques that utilize edge-depth values for an object (exclusively, or in combination with other depth values for the object) in generating 3D bounding shapes can enable accurate 3D bounding shapes to be generated for partially or fully transparent objects. Such increased-accuracy 3D bounding shapes directly improve performance of a robot that utilizes the 3D bounding shapes in performing various tasks.
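As a rough, hypothetical sketch of the pipeline the abstract outlines: detect edges in an image, keep depth values only at edge pixels, back-project those pixels through a pinhole camera model into a 3D point cloud, and fit a 3D bounding shape to it. The Canny detector, the single depth map (the patent derives edge-depth values from stereo left/right frames), the intrinsics fx, fy, cx, cy, and the axis-aligned box are all illustrative substitutions, not the claimed method.

```python
# Hypothetical sketch: edge-depth point cloud and a 3D bounding box.
import cv2
import numpy as np

def edge_depth_point_cloud(gray, depth, fx, fy, cx, cy):
    """Back-project depth values at detected image edges into a 3D point cloud.

    gray:  (H, W) uint8 intensity image.
    depth: (H, W) float32 depth map in meters (0 where depth is missing).
    """
    edges = cv2.Canny(gray, 50, 150)   # edge pixels (thresholds are arbitrary)
    v, u = np.nonzero(edges)           # row/column coordinates of edge pixels
    z = depth[v, u]
    valid = z > 0                      # drop edge pixels with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx              # pinhole back-projection to camera frame
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) edge point cloud

def bounding_box_3d(points):
    """Axis-aligned 3D bounding box as (min corner, max corner)."""
    return points.min(axis=0), points.max(axis=0)
```

One intuition for why restricting depth to edge pixels can help with transparent objects: intensity edges at an object's silhouette tend to yield reliable depth even where dense stereo matching across a transparent surface does not.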
-
3.
Publication No.: US12165347B2
Publication Date: 2024-12-10
Application No.: US18199303
Filing Date: 2023-05-18
Applicant: GOOGLE LLC
Inventor: Yunfei Bai , Yuanzheng Gong
IPC: G06T7/593 , B25J9/16 , G06F18/24 , G06T5/00 , G06T5/94 , G06T7/12 , G06T7/13 , G06T7/521 , H04N23/90
Abstract: Generating edge-depth values for an object, utilizing the edge-depth values in generating a 3D point cloud for the object, and utilizing the generated 3D point cloud for generating a 3D bounding shape (e.g., a 3D bounding box) for the object. Edge-depth values for an object are depth values that are determined from frame(s) of vision data (e.g., left/right images) that capture the object, and that are determined to correspond to an edge of the object (an edge from the perspective of the frame(s) of vision data). Techniques that utilize edge-depth values for an object (exclusively, or in combination with other depth values for the object) in generating 3D bounding shapes can enable accurate 3D bounding shapes to be generated for partially or fully transparent objects. Such increased-accuracy 3D bounding shapes directly improve performance of a robot that utilizes the 3D bounding shapes in performing various tasks.
-
4.
Publication No.: US12112494B2
Publication Date: 2024-10-08
Application No.: US17053335
Filing Date: 2020-02-28
Applicant: Google LLC
Inventor: Honglak Lee , Xinchen Yan , Soeren Pirk , Yunfei Bai , Seyed Mohammad Khansari Zadeh , Yuanzheng Gong , Jasmine Hsu
CPC classification number: G06T7/55 , B25J9/1605 , B25J9/163 , B25J9/1669 , B25J9/1697 , B25J13/08 , G06F18/2163 , G06T7/50 , G06V20/10 , G06V20/64 , G06T2207/10024 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/20132
Abstract: Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representation to train a robotic manipulation policy model using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of simulated objects to be manipulated. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in controlling a robot, based on output generated by processing the generated domain-invariant 3D representations with the robotic manipulation policy model.