View synthesis robust to unconstrained image data
Abstract:
Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
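To illustrate the general idea (not the patented implementation), the following is a minimal sketch of a multilayer perceptron that maps a 3D position and viewing direction to a volumetric density and RGB radiance, in the spirit of NeRF-style scene representations. All layer sizes, class names, and parameters here are illustrative assumptions rather than details from the disclosure.

```python
# Illustrative sketch only: a tiny MLP mapping (position, view direction)
# to (density, RGB radiance). Layer sizes and names are assumptions.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RadianceFieldMLP:
    """Tiny MLP: (3D position, view direction) -> (density, RGB radiance)."""

    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        # Position branch: 3-D point -> hidden features + scalar density.
        self.w1 = rng.normal(0.0, 0.1, (3, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, hidden + 1))
        # Radiance head conditioned on the 3-D viewing direction.
        self.w3 = rng.normal(0.0, 0.1, (hidden + 3, 3))

    def __call__(self, position, view_dir):
        h = relu(position @ self.w1)
        out = h @ self.w2
        density = relu(out[..., :1])        # non-negative volume density
        features = out[..., 1:]
        rgb = sigmoid(np.concatenate([features, view_dir], axis=-1) @ self.w3)
        return density, rgb

# Example: query the field at one sample point along a camera ray.
model = RadianceFieldMLP()
sigma, color = model(np.array([0.1, -0.2, 0.5]), np.array([0.0, 0.0, 1.0]))
print(sigma.shape, color.shape)  # (1,) (3,)
```

In a full pipeline, such a field would be queried at many sample points along each camera ray and composited via volume rendering to synthesize a novel view; that rendering step is omitted here for brevity.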