METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR RUNNING INFERENCE SERVICE PLATFORM

    Publication (Announcement) Number: EP4060496A3

    Publication (Announcement) Date: 2023-01-04

    Application Number: EP22188822.5

    Application Date: 2022-08-04

    IPC Classification: G06F9/50

    Abstract: A method for running an inference service platform includes: determining inference tasks to be allocated for the inference service platform, in which the inference service platform includes two or more inference service groups, the inference service groups have different versions, and the inference service groups are configured to perform the same type of inference services; determining a flow weight of each of the inference service groups, in which the flow weight indicates the proportion that the number of inference tasks to be allocated to the corresponding inference service group occupies in the total number of inference tasks; allocating, to each of the inference service groups, the corresponding number of the inference tasks to be allocated, based on the flow weight of that inference service group; and performing the allocated inference tasks by each inference service group.
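
    A minimal sketch of the flow-weight allocation described in the abstract, assuming weights that may not sum to one and using largest-remainder rounding so the per-group counts sum exactly to the total; the group names and the allocate_by_flow_weight helper are illustrative assumptions, not taken from the patent itself.

        from math import floor

        def allocate_by_flow_weight(tasks, flow_weights):
            """Split a list of inference tasks across service groups.

            flow_weights maps a group id (e.g. a version tag) to its flow
            weight, the proportion of the total tasks that group should get.
            Largest-remainder rounding keeps the counts summing to len(tasks).
            """
            total = len(tasks)
            weight_sum = sum(flow_weights.values())
            # Ideal (fractional) share for each group, after normalizing weights.
            shares = {g: total * w / weight_sum for g, w in flow_weights.items()}
            counts = {g: floor(s) for g, s in shares.items()}
            # Hand out remaining tasks to the largest fractional remainders.
            leftover = total - sum(counts.values())
            by_remainder = sorted(shares, key=lambda g: shares[g] - counts[g], reverse=True)
            for g in by_remainder[:leftover]:
                counts[g] += 1
            # Slice the task list into per-group batches.
            allocation, start = {}, 0
            for g, n in counts.items():
                allocation[g] = tasks[start:start + n]
                start += n
            return allocation

        # Example: route 90% of traffic to the stable version, 10% to a canary.
        tasks = [f"task-{i}" for i in range(10)]
        batches = allocate_by_flow_weight(tasks, {"v1-stable": 0.9, "v2-canary": 0.1})
        for group, batch in batches.items():
            print(group, len(batch))  # v1-stable 9, v2-canary 1

    Adjusting the two weights (say, 0.5 and 0.5) would shift the split accordingly, which matches the abstract's use of flow weights to move traffic between differently versioned groups performing the same type of inference service.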

    METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR RUNNING INFERENCE SERVICE PLATFORM

    Publication (Announcement) Number: EP4060496A2

    Publication (Announcement) Date: 2022-09-21

    Application Number: EP22188822.5

    Application Date: 2022-08-04

    IPC Classification: G06F9/50

    Abstract: A method for running an inference service platform includes: determining inference tasks to be allocated for the inference service platform, in which the inference service platform includes two or more inference service groups, the inference service groups have different versions, and the inference service groups are configured to perform the same type of inference services; determining a flow weight of each of the inference service groups, in which the flow weight indicates the proportion that the number of inference tasks to be allocated to the corresponding inference service group occupies in the total number of inference tasks; allocating, to each of the inference service groups, the corresponding number of the inference tasks to be allocated, based on the flow weight of that inference service group; and performing the allocated inference tasks by each inference service group.