DATA PROCESSING SYSTEM AND METHOD
    Invention Application

    Publication No.: US20190287022A1

    Publication Date: 2019-09-19

    Application No.: US16432617

    Application Date: 2019-06-05

    Abstract: Embodiments of the present invention disclose a data processing apparatus. The apparatus is configured to: after calculating a set of gradient information of each parameter by using a sample data subset, delete the sample data subset, read a next sample data subset, calculate another set of gradient information of each parameter by using the next sample data subset, and accumulate a plurality of sets of calculated gradient information of each parameter, to obtain an update gradient of each parameter.
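
    A minimal PyTorch-style sketch of the subset-by-subset gradient accumulation the abstract describes. The names `read_next_subset`, `loss_fn`, and `num_subsets` are hypothetical stand-ins, not part of the patent; this is an illustration under those assumptions, not the disclosed implementation.

```python
import torch

def accumulate_update_gradient(model, loss_fn, read_next_subset, num_subsets):
    """Accumulate per-parameter gradients over several sample data subsets."""
    for p in model.parameters():
        p.grad = None                        # reset per-parameter accumulators
    for _ in range(num_subsets):
        inputs, targets = read_next_subset()  # read the next sample data subset
        loss = loss_fn(model(inputs), targets)
        loss.backward()                      # adds this subset's gradients to p.grad
        del inputs, targets                  # delete the subset to free memory
    # Each p.grad now holds the accumulated update gradient of that parameter.
    return {name: p.grad for name, p in model.named_parameters()}
```

    Because `backward()` adds into `.grad` rather than overwriting it, only one subset needs to reside in memory at a time, which matches the delete-then-read-next flow of the abstract.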

    IMAGE PROCESSING METHOD, APPARATUS, AND SYSTEM

    Publication No.: US20220156931A1

    Publication Date: 2022-05-19

    Application No.: US17590005

    Application Date: 2022-02-01

    Abstract: This application relates to the artificial intelligence field, and provides an image processing method, an apparatus, and a system. The image processing method includes: obtaining a plurality of image blocks by segmenting a to-be-analyzed pathological image; inputting the plurality of image blocks to a first analysis model to obtain a first analysis result, where the first analysis model classifies each of the plurality of image blocks based on a quantity or an area of suspicious lesion components; inputting at least one second-type image block in the first analysis result to a second analysis model to obtain a second analysis result, where the second analysis model analyzes a location of a suspicious lesion component of each input second-type image block; and obtaining a final analysis result of the pathological image based on the first analysis result and the second analysis result.
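
    A hypothetical sketch of the two-stage pipeline the abstract outlines: a first model coarsely classifies image blocks, and only the flagged ("second-type") blocks are passed to a second model for lesion localization. The callables `segment`, `first_model`, and `second_model` are illustrative stand-ins, not the patented components.

```python
def analyze_pathological_image(image, segment, first_model, second_model):
    """Coarse classification of blocks, then fine localization on flagged ones."""
    blocks = segment(image)                  # split the pathological image into blocks
    first_result = [(block, first_model(block)) for block in blocks]
    # The first model classifies each block by the quantity or area of
    # suspicious lesion components; only second-type blocks go further.
    second_type = [b for b, label in first_result if label == "second_type"]
    second_result = [second_model(b) for b in second_type]  # lesion locations
    # The final analysis combines the coarse labels with the localizations.
    return {"first": first_result, "second": second_result}
```

    Filtering between the stages is the design point: the expensive localization model runs only on the subset of blocks the cheap classifier deems suspicious.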

    TRAINING METHOD, APPARATUS, CHIP, AND SYSTEM FOR NEURAL NETWORK MODEL

    Publication No.: US20190279088A1

    Publication Date: 2019-09-12

    Application No.: US16424760

    Application Date: 2019-05-29

    Abstract: A method for training a neural network model is disclosed. Each training period includes K iterations, and for an ith iteration of one of N worker modules within each training period, each worker module performs the following steps in parallel: calculating a model parameter of an (i+1)th iteration based on a local gradient of the ith iteration and a model parameter of the ith iteration, and if i is less than K, calculating a local gradient of the (i+1)th iteration based on the model parameter of the (i+1)th iteration and sample data of the (i+1)th iteration; and pulling, by the worker module, a global gradient of an rth iteration from a server module and/or pushing, by the worker module, a local gradient of an fth iteration to the server module. In this way, the time windows of the calculation process and the communication process overlap, thereby reducing the time delay.
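
    A minimal sketch of how a worker can overlap computation with parameter-server communication, assuming a `server` object with a `push_and_pull` method and hypothetical `compute_local_gradient` and `apply_update` callables; none of these names come from the patent, and a thread stands in for whatever asynchronous transport the system actually uses.

```python
import threading

def worker_loop(server, compute_local_gradient, apply_update, params, K):
    """One worker's training period of K iterations with overlapped
    computation and parameter-server communication."""
    grad = compute_local_gradient(params, iteration=1)
    for i in range(1, K + 1):
        # Start the push/pull in the background so its time window
        # overlaps the computation below.
        comm = threading.Thread(target=server.push_and_pull, args=(grad, i))
        comm.start()
        params = apply_update(params, grad)        # model parameters for i+1
        if i < K:
            grad = compute_local_gradient(params, iteration=i + 1)
        comm.join()                                # end of the overlap window
    return params
```

    The thread captures the current gradient before the worker moves on to the next iteration's computation, so communication for iteration i proceeds while the parameters and gradient for iteration i+1 are being computed.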
