-
Publication No.: US20230177116A1
Publication Date: 2023-06-08
Application No.: US17649277
Filing Date: 2022-01-28
Applicant: Arm Limited
Inventor: Vasileios LAGANAKOS , Mark Richard NUTTER
CPC classification number: G06K9/6262 , G06K9/6201 , G06N3/08 , G06N3/0454
Abstract: A computer-implemented method includes obtaining trained neural networks for performing a common task and test data for evaluating the performance of the trained neural networks, and inspecting the trained neural networks to identify functional blocks common to a plurality of the trained neural networks. For each identified functional block, a respective network component implementing the functional block is extracted from each of at least some of the trained neural networks; the performance of each extracted network component is evaluated, and performance data indicating that performance is stored. The method further includes storing configuration data indicating a configuration of the identified functional blocks, receiving a request to synthesize a neural network for performing said task subject to a given set of constraints, and composing a plurality of network components in accordance with the configuration data and in dependence on the performance data and the given set of constraints.
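The composition step described in the abstract can be sketched as a simple selection problem: for each functional block in the stored configuration, pick the extracted component with the best recorded performance whose cost still fits the constraint budget. Everything below (the `Component` type, accuracy as the performance metric, a latency budget as the constraint, and the greedy strategy) is an illustrative assumption, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Component:
    block: str        # functional block this component implements
    accuracy: float   # performance measured on the test data
    latency_ms: float # cost checked against the constraint budget

def compose(config, components, latency_budget_ms):
    """Pick one component per functional block, greedily maximising
    accuracy while keeping total latency within the budget (sketch)."""
    chosen = []
    remaining = latency_budget_ms
    for block in config:  # configuration data: ordered functional blocks
        candidates = [c for c in components
                      if c.block == block and c.latency_ms <= remaining]
        if not candidates:
            raise ValueError(f"no component for block {block!r} fits the budget")
        best = max(candidates, key=lambda c: c.accuracy)
        chosen.append(best)
        remaining -= best.latency_ms
    return chosen
```

A greedy pass like this is only one way to honour the constraints; a real synthesiser might search over combinations instead.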
-
Publication No.: US20210168348A1
Publication Date: 2021-06-03
Application No.: US16700457
Filing Date: 2019-12-02
Applicant: Arm Limited
Inventor: Vasileios LAGANAKOS , Irenéus Johannes DE JONG
IPC: H04N13/271 , H04N5/232
Abstract: A method comprises obtaining image data captured at an image sensor using a focus configuration. A distance to one or more objects in the image data is determined based on the focus configuration and a sharpness characteristic of the image data of each object. A depth map is then generated based on the determined distances.
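One way to realise the distance determination the abstract describes is a thin-lens depth-from-defocus model: the focus configuration fixes the focus distance, and the sharpness characteristic is mapped to a blur-circle diameter that the model inverts. The thin-lens model, the "objects beyond the focus plane" branch, and all parameter names below are illustrative assumptions, not taken from the patent.

```python
def blur_circle(d, s, f, aperture):
    """Blur-circle diameter for an object at distance d when a thin lens
    (focal length f, aperture diameter) is focused at distance s."""
    k = aperture * f / (s - f)
    return k * abs(d - s) / d

def distance_from_blur(c, s, f, aperture):
    """Invert the blur model, assuming the object lies beyond the
    focus plane (d > s)."""
    k = aperture * f / (s - f)
    return s / (1.0 - c / k)

def depth_map(blur_map, s, f, aperture):
    """Per-pixel distances from a map of estimated blur diameters."""
    return [[distance_from_blur(c, s, f, aperture) for c in row]
            for row in blur_map]
```

In practice the sharpness-to-blur mapping must be calibrated per lens, and the near/far ambiguity resolved, e.g. with a second focus configuration.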
-
Publication No.: US20210279964A1
Publication Date: 2021-09-09
Application No.: US16823003
Filing Date: 2020-03-18
Applicant: Arm Limited
Inventor: Vasileios LAGANAKOS , Irenéus Johannes DE JONG , Gary Dale CARPENTER
Abstract: A computer-implemented method includes generating, using a scene generator, first candidate scene data; obtaining reference scene data corresponding to a predetermined reference scene; processing the first candidate scene data and the reference scene data, using a scene discriminator, to generate first discrimination data for estimating whether each of the first candidate scene data and the reference scene data corresponds to a predetermined reference scene; updating a set of parameter values for the scene discriminator using the first discrimination data; generating, using the scene generator, second candidate scene data; processing the second candidate scene data, using the scene discriminator with the updated set of parameter values for the scene discriminator, to generate second discrimination data for estimating whether the second candidate scene data corresponds to a predetermined reference scene; and updating a set of parameter values for the scene generator using the second discrimination data.
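The alternation the abstract describes (update the scene discriminator on a first candidate scene plus the reference scene, then update the scene generator against the refreshed discriminator using a second candidate scene) can be sketched in miniature. Here "scenes" are reduced to single numbers and both models to a couple of parameters; this toy logistic setup is purely illustrative and not the patent's implementation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(gen, disc, reference, rng, lr=0.1):
    # Phase 1: first candidate scene + reference scene -> discriminator update.
    z = rng.gauss(0, 1)
    fake = gen["a"] * z + gen["m"]         # first candidate scene data
    real = rng.choice(reference)           # reference scene data
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(disc["w"] * x + disc["b"])   # discrimination data
        disc["w"] += lr * (label - p) * x        # ascend log-likelihood
        disc["b"] += lr * (label - p)
    # Phase 2: second candidate scene, scored by the updated discriminator,
    # drives the generator update.
    z = rng.gauss(0, 1)
    fake = gen["a"] * z + gen["m"]         # second candidate scene data
    p = sigmoid(disc["w"] * fake + disc["b"])
    grad = (1.0 - p) * disc["w"]           # push the score towards "real"
    gen["a"] += lr * grad * z
    gen["m"] += lr * grad
```

Repeating `train_step` drives the generator's output distribution towards the reference scenes, which is the adversarial dynamic the two update phases set up.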