ARTIFICIAL INTELLIGENCE INFERENCE ARCHITECTURE WITH HARDWARE ACCELERATION
Abstract:
Various systems and methods of artificial intelligence (AI) processing using hardware acceleration within edge computing settings are described herein. In an example, processing performed at an edge computing device includes: obtaining a request for an AI operation using an AI model; identifying, based on the request, an AI hardware platform for execution of an instance of the AI model; and causing execution of the AI model instance using the AI hardware platform. Further operations to analyze input data, perform an inference operation with the AI model, and coordinate selection and operation of the hardware platform for execution of the AI model are also described.
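The following is a minimal illustrative sketch, not taken from the patent, of the flow the abstract describes: an edge node receives a request naming an AI model, identifies a hardware platform able to host an instance of that model, and causes the inference to execute on that platform. All class, function, and platform names here (InferenceRequest, HardwarePlatform, EdgeInferenceCoordinator) are assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class InferenceRequest:
    model_name: str          # AI model requested by the caller
    input_data: List[float]  # input data to be analyzed


@dataclass
class HardwarePlatform:
    name: str                                       # e.g. "gpu-0", "npu-accel"
    supported_models: List[str]                     # models this accelerator can host
    run: Callable[[str, List[float]], List[float]]  # executes a model instance


class EdgeInferenceCoordinator:
    """Coordinates selection of a hardware platform and execution of the model."""

    def __init__(self, platforms: List[HardwarePlatform]):
        self.platforms = platforms

    def select_platform(self, request: InferenceRequest) -> HardwarePlatform:
        # Identify, based on the request, a platform able to execute the model.
        for platform in self.platforms:
            if request.model_name in platform.supported_models:
                return platform
        raise RuntimeError(f"No platform available for model {request.model_name!r}")

    def infer(self, request: InferenceRequest) -> List[float]:
        platform = self.select_platform(request)
        # Cause execution of the AI model instance on the selected platform.
        return platform.run(request.model_name, request.input_data)


if __name__ == "__main__":
    # Stand-in accelerator that simply scales its input; a real platform would
    # dispatch to a GPU/NPU inference runtime.
    cpu = HardwarePlatform(
        name="cpu-fallback",
        supported_models=["tiny-classifier"],
        run=lambda model, x: [v * 0.5 for v in x],
    )
    coordinator = EdgeInferenceCoordinator([cpu])
    result = coordinator.infer(InferenceRequest("tiny-classifier", [1.0, 2.0, 3.0]))
    print(result)  # [0.5, 1.0, 1.5]
```

In practice, platform selection at the edge might also weigh load, latency, and power constraints rather than a simple capability match as sketched here.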