-
91.
Publication Number: US11740075B2
Publication Date: 2023-08-29
Application Number: US17504004
Filing Date: 2021-10-18
Applicant: META PLATFORMS TECHNOLOGIES, LLC
Inventor: Michael Hall , Xinqiao Liu , Zhaoming Zhu , Rajesh Lachhmandas Chhabria , Huixuan Tang , Shuochen Su , Zihe Gao
IPC: G06T7/521 , G01B11/22 , G01B11/25 , G06T7/529 , G02B27/01 , G06T7/11 , G06T7/174 , G06T7/55 , G06T7/00 , G06T7/90 , H04N13/106 , H04N13/204 , G06T7/593 , H04N13/128 , H04N13/271 , H04N13/239 , H04N23/56 , G06V10/22 , G06V10/145 , H04N13/00
CPC classification number: G01B11/2513 , G01B11/22 , G02B27/0172 , G06T7/11 , G06T7/174 , G06T7/521 , G06T7/529 , G06T7/55 , G06T7/593 , G06T7/90 , G06T7/97 , G06V10/145 , G06V10/22 , H04N13/106 , H04N13/128 , H04N13/204 , H04N13/239 , H04N13/271 , H04N23/56 , G02B2027/014 , G02B2027/0138 , G02B2027/0178 , G06T2207/10028 , H04N2013/0081
Abstract: A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease an amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on the parameters received from a mapping server including a virtual model of the local area. The DCA may selectively increase or decrease an amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements.
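The contrast-driven region-of-interest selection described in this abstract can be sketched as follows. The tiling, the RMS-contrast score, and the two thresholds are illustrative assumptions, not the patent's actual implementation:

```python
# Hypothetical sketch: tile the captured image, score each tile's
# contrast, and flag tiles where the structured-light projector should
# add or remove texture. All names and thresholds are invented.

def tile_contrast(tile):
    """RMS contrast of one tile (a list of pixel intensities, 0-255)."""
    mean = sum(tile) / len(tile)
    return (sum((p - mean) ** 2 for p in tile) / len(tile)) ** 0.5

def texture_plan(tiles, low=10.0, high=60.0):
    """Return a per-tile action for the dynamic structured-light pattern."""
    plan = []
    for tile in tiles:
        c = tile_contrast(tile)
        if c < low:
            plan.append("increase")   # texture-poor region: add pattern density
        elif c > high:
            plan.append("decrease")   # already well textured: save projector power
        else:
            plan.append("keep")
    return plan
```

A flat tile (constant intensity) would be flagged for more texture, while a high-contrast tile would let the projector dim that region, matching the power/accuracy trade-off the abstract describes.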
-
92.
Publication Number: US20230259131A1
Publication Date: 2023-08-17
Application Number: US18147632
Filing Date: 2022-12-28
Applicant: KACHE.AI
Inventor: Eli Riggs , Catherine Culkin
CPC classification number: G05D1/0088 , G06N20/00 , G05D1/0287 , G06T7/80 , H04N13/246 , G06T7/292 , G06T7/593 , G06T7/85 , G05D2201/0213 , G06T2207/10012
Abstract: Systems and methods for implementing one or more autonomous features for autonomous and semi-autonomous control of one or more vehicles are provided. More specifically, image data may be obtained from an image acquisition device and processed utilizing one or more machine learning models to identify, track, and extract one or more features of the image utilized in decision making processes for providing steering angle and/or acceleration/deceleration input to one or more vehicle controllers. In some instances, techniques may be employed such that the autonomous and semi-autonomous control of a vehicle may change between vehicle follow and lane follow modes. In some instances, at least a portion of the machine learning model may be updated based on one or more conditions.
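The switch between vehicle-follow and lane-follow modes mentioned above might reduce, in its simplest form, to a gating decision on whether a trackable lead vehicle is in range. The function name, the distance threshold, and the `None` convention for "no lead vehicle detected" are all assumptions for illustration:

```python
# Minimal sketch of a vehicle-follow / lane-follow mode selector.
# Not the applicant's implementation; purely illustrative.

def select_mode(lead_vehicle_distance_m, follow_range_m=60.0):
    """Track a detected lead vehicle when it is close enough;
    otherwise fall back to lane keeping."""
    if lead_vehicle_distance_m is not None and lead_vehicle_distance_m <= follow_range_m:
        return "vehicle_follow"
    return "lane_follow"
```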
-
93.
Publication Number: US11729364B2
Publication Date: 2023-08-15
Application Number: US17018679
Filing Date: 2020-09-11
Applicant: GoPro, Inc.
Inventor: Bruno César Douady , Alexis Lefebvre
IPC: H04N13/156 , H04N13/239 , G06T7/593 , G06T3/40 , H04N23/45
CPC classification number: H04N13/156 , G06T3/4038 , G06T7/593 , H04N13/239 , H04N23/45 , G06T2207/20221 , G06T2207/20224
Abstract: Systems and methods are disclosed for circular stitching of images. For example, methods may include accessing a first image captured using a first image sensor; accessing a second image captured using a second image sensor; determining a cost table for a circular stitching boundary that includes overlapping regions of the first image and the second image; determining an extended disparity profile based on a periodic extension of the cost table and a smoothness criterion, wherein the extended disparity profile has a length greater than the width of the cost table; determining a binocular disparity profile of a length equal to the width of the cost table based on a contiguous subsequence of the extended disparity profile; and stitching the first image and the second image using the binocular disparity profile to obtain a combined image.
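The two ingredients of the abstract, periodic extension of the cost table and a smoothness-constrained disparity profile, can be sketched generically. The dynamic-programming path search below is a standard Viterbi-style formulation under an assumed |Δd| penalty, not GoPro's actual algorithm:

```python
# cost[d][x] = matching cost of disparity d at stitch-boundary column x.

def extend_periodic(cost, extra):
    """Append the first `extra` columns again so the circular
    boundary wraps around, as the abstract's periodic extension does."""
    return [row + row[:extra] for row in cost]

def smooth_profile(cost, penalty=1.0):
    """Per-column disparity path minimizing cost plus a |Δd|
    smoothness penalty between neighboring columns."""
    ndisp, width = len(cost), len(cost[0])
    best = [cost[d][0] for d in range(ndisp)]
    back = []
    for x in range(1, width):
        prev, best = best, []
        back.append([])
        for d in range(ndisp):
            d0 = min(range(ndisp), key=lambda k: prev[k] + penalty * abs(k - d))
            back[-1].append(d0)
            best.append(prev[d0] + penalty * abs(d0 - d) + cost[d][x])
    d = min(range(ndisp), key=lambda k: best[k])
    path = [d]
    for ptr in reversed(back):
        d = ptr[d]
        path.append(d)
    return path[::-1]
```

In the claimed method the profile is solved over the extended table and a contiguous window equal to the original width is then cut back out, which lets the seam close on itself smoothly.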
-
94.
Publication Number: US11727626B2
Publication Date: 2023-08-15
Application Number: US17174250
Filing Date: 2021-02-11
Applicant: Fyusion, Inc.
Inventor: Stefan Johannes Josef Holzer , Abhishek Kar , Matteo Munaro , Pavel Hanchar , Radu Bogdan Rusu , Santi Arano
IPC: G06F30/15 , G06N3/02 , G06T15/10 , G06T17/00 , G06F17/18 , G06T7/593 , H04N13/271 , H04N13/243 , G06F16/29 , G01C21/32 , G06T15/20 , G06T7/00 , G06T7/70 , G06T19/00 , G06Q30/02 , G06F9/451 , H04N23/63 , H04N13/00
CPC classification number: G06T15/205 , G01C21/32 , G06F9/453 , G06F16/29 , G06F17/18 , G06F30/15 , G06N3/02 , G06Q30/0278 , G06T7/0002 , G06T7/0004 , G06T7/593 , G06T7/70 , G06T15/10 , G06T17/00 , G06T19/003 , G06T19/006 , H04N13/243 , H04N13/271 , H04N23/633 , G06T2200/08 , G06T2200/24 , G06T2207/10016 , G06T2207/10028 , G06T2207/20076 , G06T2207/20081 , G06T2207/20084 , G06T2207/20224 , G06T2207/30108 , G06T2207/30244 , G06T2207/30248 , H04N2013/0081
Abstract: A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
-
95.
Publication Number: US20230252664A1
Publication Date: 2023-08-10
Application Number: US18012584
Filing Date: 2021-06-30
Inventor: Hui WEI , Ruyang LI , Yaqian ZHAO , Xingchen CUI , Rengang LI
CPC classification number: G06T7/593 , G06T7/564 , G06T7/33 , G06T2207/10012
Abstract: An image registration method and apparatus, an electronic apparatus, and a storage medium are provided. The image registration method comprises: extracting edge pixels of a binocular image, and determining high-confidence parallax points among the edge pixels together with their parallax values; projecting each non-high-confidence parallax point onto a triangular mesh, in a direction parallel to the parallax dimension, to obtain a triangular face; determining a parallax search range for each non-high-confidence parallax point, calculating the matching cost for every parallax in the search range, and taking the parallax with the smallest matching cost as that point's parallax; and determining depth boundary points among the edge pixels, determining a parallax boundary point for each depth boundary point in a target direction, and setting the parallax of pixels between each depth boundary point and its parallax boundary point to a target value.
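The winner-take-all step of the abstract, searching a per-point parallax range and keeping the parallax with the smallest matching cost, is easy to sketch. How the range is centered (here, on a value interpolated from the triangular face) and the margin of ±3 are assumptions for illustration:

```python
# Illustrative sketch of the per-point parallax search; not the
# patent's implementation.

def parallax_search_range(interpolated, margin=3):
    """Candidate parallaxes around the value interpolated from the
    triangular face the point projects onto (margin is an assumption)."""
    return range(interpolated - margin, interpolated + margin + 1)

def best_parallax(match_cost, candidates):
    """The parallax with the smallest matching cost wins.
    `match_cost` maps a candidate parallax to its cost."""
    return min(candidates, key=match_cost)
```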
-
96.
Publication Number: US11720766B2
Publication Date: 2023-08-08
Application Number: US16730920
Filing Date: 2019-12-30
Applicant: PACKSIZE LLC
Inventor: Francesco Peruch , Carlo Dal Mutto , Jason Trachewsky
CPC classification number: G06K7/1447 , G06K7/1417 , G06K7/1443 , G06T7/521 , G06T7/593
Abstract: A method for automatically recognizing content of labels on objects includes: capturing visual information of an object using a scanning system including one or more cameras, the object having one or more labels on one or more exterior surfaces; detecting, by a computing system, one or more surfaces of the object having labels; rectifying, by the computing system, the visual information of the one or more surfaces of the object to compute one or more rectified images; and decoding, by the computing system, content of a label depicted in at least one of the one or more rectified images.
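The claimed steps form a straightforward detect-rectify-decode pipeline, which can be sketched as a skeleton. The helper callables are placeholders standing in for the computing system's surface detector, homography rectifier, and label decoder; none of this is the patent's actual code:

```python
# Pipeline skeleton mirroring the claim: capture -> detect label-bearing
# surfaces -> rectify -> decode. Helpers are injected as callables.

def read_labels(frames, detect_surfaces, rectify, decode):
    """Collect whatever label content decodes successfully across
    all captured frames. `decode` returns None on failure."""
    results = []
    for frame in frames:
        for surface in detect_surfaces(frame):
            content = decode(rectify(frame, surface))
            if content is not None:
                results.append(content)
    return results
```

Rectification matters here because barcode and text decoders generally assume a fronto-parallel view; warping the detected surface first is what makes decoding from oblique camera angles feasible.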
-
97.
Publication Number: US11710272B2
Publication Date: 2023-07-25
Application Number: US17211724
Filing Date: 2021-03-24
Applicant: Disney Enterprises, Inc.
Inventor: Dane M. Coffey , Siroberto Scerbo , Daniel L. Baker , Mark R. Mine , Evan M. Goldberg
Abstract: An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware executes the software code to receive a digital object, surround the digital object with virtual cameras oriented toward the digital object, render, using each one of the virtual cameras, a depth map identifying a distance of that one of the virtual cameras from the digital object, and generate, using the depth map, a volumetric perspective of the digital object from a perspective of that one of the virtual cameras, resulting in multiple volumetric perspectives of the digital object. The processing hardware further executes the software code to merge the multiple volumetric perspectives of the digital object to form a volumetric representation of the digital object, and to convert the volumetric representation of the digital object to a renderable form.
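The merge step above, combining per-camera volumetric perspectives into one representation, can be illustrated with a toy model in which each perspective is just a set of occupied voxel indices. Representing perspectives as sets and merging by intersection (keeping only voxels consistent with every view) is an assumption about the merge, not the disclosed method:

```python
# Toy sketch: each virtual camera yields a set of voxels its depth map
# marks as occupied; the merged volume keeps voxels all views agree on.

def merge_perspectives(perspectives):
    """Intersect per-camera occupied-voxel sets into one volume."""
    merged = set(perspectives[0])
    for occupied in perspectives[1:]:
        merged &= set(occupied)
    return merged
```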
-
98.
Publication Number: US11710256B2
Publication Date: 2023-07-25
Application Number: US17007174
Filing Date: 2020-08-31
Applicant: Sony Interactive Entertainment Inc.
Inventor: Nigel John Williams , Andrew William Walker
IPC: G06T7/80 , G06T7/593 , H04N13/106 , G06F1/03 , G06T5/50 , H04N13/111 , H04N13/00
CPC classification number: G06T7/80 , G06F1/03 , G06T5/50 , G06T7/593 , H04N13/106 , H04N13/111 , G06T2207/20021 , G06T2207/20216 , H04N2013/0081
Abstract: A method of generating a 3D reconstruction of a scene, the scene comprising a plurality of cameras positioned around the scene, comprises: obtaining the extrinsics and intrinsics of a virtual camera within a scene; accessing a data structure so as to determine a camera pair that is to be used in reconstructing the scene from the viewpoint of the virtual camera; wherein the data structure defines a voxel representation of the scene, the voxel representation comprising a plurality of voxels, at least some of the voxel surfaces being associated with respective camera pair identifiers; wherein each camera pair identifier associated with a respective voxel surface corresponds to a camera pair that has been identified as being suitable for obtaining depth data for the part of the scene within that voxel and for which the averaged pose of the camera pair is oriented towards the voxel surface; identifying, based on the obtained extrinsics and intrinsics of the virtual camera, at least one voxel that is within the field of view of the virtual camera and a corresponding voxel surface that is oriented towards the virtual camera; identifying, based on the accessed data structure, at least one camera pair that is suitable for reconstructing the scene from the viewpoint of the virtual camera, and generating a reconstruction of the scene from the viewpoint of the virtual camera based on the images captured by the cameras in the identified at least one camera pair.
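The data structure in this abstract amounts to a lookup from (voxel, facing surface) to a precomputed camera-pair identifier. A minimal sketch, with the table as a plain dict and the visibility/orientation tests abstracted into inputs (both simplifying assumptions):

```python
# Minimal sketch of the voxel-surface -> camera-pair lookup.
# pair_table: {(voxel, surface): camera_pair_id}, built offline.

def pairs_for_view(pair_table, visible_voxels, facing):
    """For each voxel in the virtual camera's frustum, look up the
    camera pair registered on the surface oriented toward the camera.
    `facing(voxel)` returns that surface's label."""
    pairs = set()
    for voxel in visible_voxels:
        surface = facing(voxel)
        if (voxel, surface) in pair_table:
            pairs.add(pair_table[(voxel, surface)])
    return pairs
```

Precomputing the table is the point of the claim: at render time, choosing suitable stereo pairs for a novel viewpoint reduces to dictionary lookups rather than per-frame pose reasoning over all cameras.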
-
99.
Publication Number: US20230232130A1
Publication Date: 2023-07-20
Application Number: US18186151
Filing Date: 2023-03-18
Applicant: Corephotonics Ltd.
Inventor: Nadav Geva , Michael Scherer , Ephraim Goldenberg , Gal Shabtay
IPC: H04N25/705 , H04N13/271 , G06T7/593 , H04N13/207 , H04N25/75
CPC classification number: H04N25/705 , G06T7/593 , H04N13/207 , H04N13/271 , H04N25/75
Abstract: Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
-
100.
Publication Number: US20230230313A1
Publication Date: 2023-07-20
Application Number: US18008418
Filing Date: 2021-06-03
Applicant: Condense Reality Ltd.
Inventor: Nicholas Fellingham , Oliver Moolan-Feroze
CPC classification number: G06T17/00 , G06T7/593 , G06T2200/04 , G06T2200/08 , G06T2207/20081 , G06T2207/20084
Abstract: A first aspect of the invention provides a method of training a neural network for capturing volumetric video, comprising: generating a 3D model of a scene; using the 3D model to generate a high fidelity depth map; capturing a perceived depth map of the scene, having a field of view that is aligned with a field of view of the high fidelity depth map; and training the neural network based on the high fidelity depth map and the perceived depth map, wherein the high fidelity depth map has a higher fidelity to the scene than the perceived depth map has.
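Since the two depth maps share a field of view, the training signal reduces to a per-pixel error between the network's refined perceived depth and the high-fidelity target. The choice of mean absolute (L1) error is an assumption; the abstract does not specify the loss:

```python
# Hedged sketch of the supervision signal: per-pixel error between
# aligned depth maps drives training. L1 is an assumed loss choice.

def depth_loss(predicted, high_fidelity):
    """Mean absolute depth error over aligned pixels
    (both maps given as flat lists of depths in meters)."""
    assert len(predicted) == len(high_fidelity)
    return sum(abs(p - h) for p, h in zip(predicted, high_fidelity)) / len(predicted)
```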
-