-
1.
Publication No.: US20220230376A1
Publication Date: 2022-07-21
Application No.: US17611763
Filing Date: 2020-05-15
Applicant: Nvidia Corporation
Inventor: Artem Rozantsev , Marco Foco , Gavriel State
Abstract: Animation with high perceptual quality can be generated by utilizing a trained neural network that takes as input the current state of a virtual character to be animated and predicts how that character will appear in one or more subsequent frames. Such a process can be performed recursively to generate the data for these frames. During training, each frame of a generated sequence can be predicted from the result for the previous frame, and the generated sequence can be compared with a ground truth sequence using a generative network. Differences between the ground truth and generated animation sequences can be minimized, whereby a specific objective function does not need to be manually defined. Minimizing differences between the generated and ground truth sequences during training improves the quality of the network's predictions for single frames at inference time.
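The recursive rollout the abstract describes can be sketched in a few lines. The network here is a hypothetical stand-in (a fixed tanh layer with random weights), not the patented model; only the feed-each-prediction-back-in structure is taken from the abstract:

```python
import math
import random

def predict_next_state(state, weights):
    # Stand-in for the trained network: maps the current character
    # state (a feature vector) to the predicted state one frame later.
    return [math.tanh(sum(w * s for w, s in zip(row, state))) for row in weights]

def generate_sequence(initial_state, weights, num_frames):
    # Recursive rollout: each predicted frame is fed back in as the
    # input for the next prediction.
    states = [initial_state]
    for _ in range(num_frames):
        states.append(predict_next_state(states[-1], weights))
    return states

random.seed(0)
dim = 4
weights = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(dim)]
sequence = generate_sequence([1.0] * dim, weights, num_frames=10)
```

During training, per the abstract, a whole rolled-out sequence like `sequence` would be compared against a ground-truth sequence rather than scoring each frame against a hand-designed objective.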
-
2.
Publication No.: US20240221288A1
Publication Date: 2024-07-04
Application No.: US18147426
Filing Date: 2022-12-28
Applicant: Nvidia Corporation
Inventor: Marco Foco , Michael Kass , Gavriel State , Artem Rozantsev
IPC: G06T15/20 , G06T7/55 , G06T11/00 , G06V10/764
CPC classification number: G06T15/205 , G06T7/55 , G06T11/001 , G06V10/764 , G06T2207/20081
Abstract: Approaches presented herein provide for the automatic generation of representative two-dimensional (2D) images for three-dimensional (3D) objects or assets. To generate these 2D images, a set of options is determined, such as may relate to the viewpoint or other parameters of a virtual camera. A set of sample points is determined from which to generate 2D images of a 3D model, and these images are processed using a classifier to determine which image produces a classification with the highest confidence or probability, individually or relative to other classifications. The sample point for the selected image can then be used to select nearby sample points as part of a refinement or optimization process, in which 2D images are again generated and processed using a classifier to identify the 2D image with the highest classification probability or confidence, which can be selected as representative of the 3D object or asset.
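The coarse-sample-then-refine viewpoint search can be illustrated with a toy sketch. `classifier_confidence` is a hypothetical stand-in for "render the asset from this viewpoint and score it with a classifier"; the single azimuth parameter and the location of its peak are invented for illustration:

```python
def classifier_confidence(azimuth_deg):
    # Hypothetical stand-in for rendering the 3D asset at this viewpoint
    # and taking the classifier's top-class confidence. A smooth function
    # peaking at 130 degrees plays that role here.
    return 1.0 / (1.0 + abs(azimuth_deg - 130) / 45.0)

def best_viewpoint(step_coarse=45, step_fine=5, window=45):
    # Coarse pass: score a sparse set of sample viewpoints and keep the
    # one whose rendered image classifies with the highest confidence.
    coarse = max(range(0, 360, step_coarse), key=classifier_confidence)
    # Refinement pass: re-sample viewpoints near the best coarse sample
    # and again keep the highest-confidence image.
    candidates = range(coarse - window, coarse + window + 1, step_fine)
    return max(candidates, key=classifier_confidence)

best = best_viewpoint()
```

A real implementation would search over the full virtual-camera parameter set (elevation, distance, and so on) rather than a single angle, but the two-stage structure is the same.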
-
3.
Publication No.: US20240203052A1
Publication Date: 2024-06-20
Application No.: US18066135
Filing Date: 2022-12-14
Applicant: Nvidia Corporation
Inventor: Marco Foco , András Bódis-Szomorú , Isaac Deutsch , Artem Rozantsev , Michael Shelley , Gavriel State , Jiehan Wang , Anita Hu , Jean-Francois Lafleche
CPC classification number: G06T17/20 , G06T7/33 , G06V10/82 , G06V2201/07
Abstract: Approaches presented herein can provide for the automatic generation of a digital representation of an environment that may include multiple objects of various types. An initial representation (e.g., a point cloud) of the environment can be generated from registered image or scan data, and objects in the environment can be segmented and identified based at least on that initial representation. For objects recognized from these segmentations, stored accurate representations can be substituted into the representation of the environment; if no such model is available, a mesh or other representation of that object can be generated and positioned in the environment. The result can include a 3D representation of a scene or environment in which objects are identified and segmented as individual objects, and which can be viewed and interacted with through various viewports, positions, and perspectives.
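The substitution step — use a stored accurate model when a segmented object is recognized, otherwise fall back to a generated mesh — can be sketched as a lookup with a fallback. The asset names, labels, and dictionary layout here are illustrative, not taken from the patent:

```python
# Hypothetical asset library mapping recognized object classes to
# stored, accurate 3D models.
ASSET_LIBRARY = {"chair": "chair_model_v2", "table": "table_model_v1"}

def build_scene(segmented_objects):
    # For each segmented object, substitute the stored model when one
    # is available; otherwise fall back to a generated mesh, then place
    # the chosen representation at the object's pose in the environment.
    scene = []
    for obj in segmented_objects:
        model = ASSET_LIBRARY.get(obj["label"])
        if model is None:
            model = f"generated_mesh({obj['label']})"
        scene.append({"model": model, "pose": obj["pose"]})
    return scene

scene = build_scene([
    {"label": "chair", "pose": (1.0, 0.0, 0.0)},
    {"label": "plant", "pose": (0.0, 2.0, 0.0)},
])
```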
-
4.
Publication No.: US20230334697A1
Publication Date: 2023-10-19
Application No.: US17659032
Filing Date: 2022-04-13
Applicant: NVIDIA Corporation
Inventor: Siddha Ganju , Elad Mentovich , Marco Foco , Elena Oleynikova
CPC classification number: G06T7/75 , G06T7/11 , G06T17/00 , G01C21/3484 , G05D1/0246
Abstract: In various examples, a 3D representation of an environment may be generated from sensor data, with objects being detected in the environment using the sensor data and stored as items that can be tracked and located within the 3D representation. The 3D representation of the environment and item information may be used to determine (e.g., identify or predict) a location or position of an item within the 3D representation and/or recommend a storage location for the item within the 3D representation. Using a determined location or position, one or more routes to the location through the 3D representation may be determined. Data corresponding to a determined route may be provided to a user and/or device. User preferences, permissions, roles, feedback, historical item data, and/or other data associated with a user may be used to further enhance various aspects of the disclosure.
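The route-determination step can be illustrated with a breadth-first search over a coarse occupancy grid standing in for the 3D representation; the grid, cell coordinates, and start/goal locations are invented for illustration:

```python
from collections import deque

def shortest_route(grid, start, goal):
    # Breadth-first search over an occupancy grid (0 = free, 1 = blocked);
    # returns the shortest list of cells from start to goal, or None.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        r, c = path[-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
route = shortest_route(grid, (0, 0), (2, 0))
```

In the patent's setting the route would run through the 3D representation to a determined or recommended item location, with user preferences and permissions shaping which routes are offered.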