Publication No.: US20230244966A1
Publication Date: 2023-08-03
Application No.: US17649912
Filing Date: 2022-02-03
Applicant: Xilinx, Inc.
Inventor: Varun Sharma , Aaron Ng
Abstract: An inference server is capable of receiving a plurality of inference requests from one or more client systems. Each inference request specifies one of a plurality of different endpoints. The inference server can generate a plurality of batches, each including one or more of the plurality of inference requests directed to a same endpoint. The inference server can also process the plurality of batches using a plurality of workers executing in an execution layer of the inference server. Each batch is processed by a worker of the plurality of workers indicated by the endpoint of the batch.
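The flow the abstract describes, grouping requests by endpoint into batches and dispatching each batch to the worker registered for that endpoint, could be sketched roughly as follows. This is a minimal illustrative sketch, not the patented implementation: the class names, MAX_BATCH_SIZE, and the endpoint names ("resnet50", "yolov5") are assumptions introduced here for clarity.

```python
import threading
import queue
from collections import defaultdict

MAX_BATCH_SIZE = 4  # hypothetical flush threshold for a batch


class InferenceServer:
    """Toy server: groups incoming requests by endpoint and hands each
    completed batch to the worker registered for that endpoint."""

    def __init__(self):
        self._pending = defaultdict(list)  # endpoint -> requests not yet batched
        self._workers = {}                 # endpoint -> worker function
        self._batches = queue.Queue()      # (endpoint, batch) ready to process
        self._lock = threading.Lock()

    def register_worker(self, endpoint, worker_fn):
        """Associate an endpoint with the worker that will process its batches."""
        self._workers[endpoint] = worker_fn

    def submit(self, endpoint, request):
        """Accept a request; emit a batch once the endpoint's queue is full."""
        with self._lock:
            self._pending[endpoint].append(request)
            if len(self._pending[endpoint]) >= MAX_BATCH_SIZE:
                self._batches.put((endpoint, self._pending.pop(endpoint)))

    def run_once(self):
        """Pull one ready batch and run it on its endpoint's worker."""
        endpoint, batch = self._batches.get()
        return self._workers[endpoint](batch)


# Usage: two endpoints, each backed by its own stand-in model.
server = InferenceServer()
server.register_worker("resnet50", lambda batch: [f"resnet50({r})" for r in batch])
server.register_worker("yolov5", lambda batch: [f"yolov5({r})" for r in batch])

for i in range(MAX_BATCH_SIZE):
    server.submit("resnet50", f"img{i}")
print(server.run_once())  # the whole batch runs on the resnet50 worker
```

The key property mirrored from the abstract is that a batch never mixes endpoints: batching keys on the endpoint, so each batch reaches exactly the worker that the endpoint indicates.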