Publication No.: US20220374742A1
Publication Date: 2022-11-24
Application No.: US17817015
Filing Date: 2022-08-03
Inventors: Zhengxiong Yuan , Zhengyu Qian , En Shi , Mingren Hu , Jinqi Li , Zhenfang Chu , Runqing Li , Yue Huang
Abstract: A method for running an inference service platform includes: determining inference tasks to be allocated for the inference service platform, in which the inference service platform includes two or more inference service groups, the inference service groups have different versions, and the inference service groups are configured to perform a same type of inference services; determining a flow weight of each of the inference service groups, in which the flow weight indicates the proportion of the total number of inference tasks that is to be allocated to the corresponding inference service group; allocating, to each of the inference service groups, the corresponding number of inference tasks from the inference tasks to be allocated, based on the flow weight of that inference service group; and performing the allocated inference tasks by each inference service group.
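The weight-based allocation described in the abstract can be illustrated with a minimal sketch. This is not the patented implementation: the function name `allocate_tasks`, the dictionary-based group representation, and the rule of giving the remainder to the last group are assumptions for illustration; the abstract only specifies that each inference service group receives a share of the total task count proportional to its flow weight.

```python
from typing import Dict, List

def allocate_tasks(tasks: List[str], flow_weights: Dict[str, float]) -> Dict[str, List[str]]:
    """Split a batch of inference tasks across service groups by flow weight.

    `flow_weights` maps a group name (e.g. a version label) to the fraction
    of the total task count that group should receive; the weights are
    assumed to sum to 1.0.
    """
    total = len(tasks)
    allocation: Dict[str, List[str]] = {}
    cursor = 0
    groups = list(flow_weights.items())
    for i, (group, weight) in enumerate(groups):
        if i == len(groups) - 1:
            # Assumed convention: the last group takes the remainder so
            # every task is assigned despite integer rounding.
            count = total - cursor
        else:
            count = int(total * weight)
        allocation[group] = tasks[cursor:cursor + count]
        cursor += count
    return allocation

if __name__ == "__main__":
    tasks = [f"task-{i}" for i in range(10)]
    # Hypothetical example: two versions of the same inference service,
    # where the newer version receives 30% of the traffic and the older 70%.
    weights = {"group-v1": 0.7, "group-v2": 0.3}
    for group, assigned in allocate_tasks(tasks, weights).items():
        print(group, assigned)
```

In this reading, adjusting the flow weights shifts the task split between versions of the same service, which is how such a platform could phase a new version in gradually while both groups keep serving the same type of inference requests.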