STORAGE CLASS MEMORY DEVICE INCLUDING A NETWORK

    Publication No.: US20220113914A1

    Publication Date: 2022-04-14

    Application No.: US17560945

    Application Date: 2021-12-23

    Abstract: Systems and techniques for a storage-class memory device that includes a network interface are described herein. A write for a network communication is received by the host interface of the memory device. Here, the network communication includes a header. The header is written to a non-volatile storage array managed by a memory controller. A network command is detected by the memory device. Here, the network command includes a pointer to the header in the non-volatile storage array. The header is retrieved from the non-volatile storage array, and a packet based on the header is transmitted via a network interface of the memory controller.
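
    The flow described in the abstract can be pictured with a short sketch. The Python model below is illustrative only (the class, method, and address names are assumptions, not taken from the application): the host interface stages a header in the non-volatile storage array, and a later network command refers to that header by pointer, so the memory controller can retrieve it and emit a packet on its own network interface.

```python
# Hypothetical sketch of the data path in the abstract: the host writes a
# packet header into the device's non-volatile storage array, then issues a
# network command that points at it; the controller reads the header back and
# transmits a packet via its network interface. All names are illustrative.

class StorageClassMemoryDevice:
    def __init__(self, array_size: int):
        self.array = bytearray(array_size)   # non-volatile storage array (modeled in RAM)
        self.tx_log = []                     # stands in for the device's network interface

    # Host interface: write data (e.g., a network header) at a given address.
    def host_write(self, addr: int, data: bytes) -> None:
        self.array[addr:addr + len(data)] = data

    # Network command: carries a pointer (addr, length) to a header already
    # stored in the array; the controller retrieves it and transmits a packet.
    def network_command(self, header_addr: int, header_len: int, payload: bytes = b"") -> None:
        header = bytes(self.array[header_addr:header_addr + header_len])
        self._transmit(header + payload)

    def _transmit(self, packet: bytes) -> None:
        # Placeholder for the memory controller's network interface.
        self.tx_log.append(packet)


# Usage: the host stages a header in the array, then triggers transmission by
# reference instead of re-sending the header bytes over the host interface.
dev = StorageClassMemoryDevice(array_size=4096)
dev.host_write(0x100, b"\x45\x00\x00\x28" + b"\x00" * 16)  # fake header bytes
dev.network_command(header_addr=0x100, header_len=20, payload=b"hello")
print(len(dev.tx_log), "packet(s) queued")
```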

    GEOFENCE-BASED EDGE SERVICE CONTROL AND AUTHENTICATION

    Publication No.: US20210006972A1

    Publication Date: 2021-01-07

    Application No.: US17025519

    Application Date: 2020-09-18

    Abstract: Methods, systems, and use cases for geofence-based edge service control and authentication are discussed, including an orchestration system with memory and at least one processing circuitry coupled to the memory. The processing circuitry is configured to perform operations to obtain, from a plurality of connectivity nodes providing edge services, physical location information and resource availability information associated with each of the plurality of connectivity nodes. An edge-to-edge location graph (ELG) is generated based on the physical location information and the resource availability information, the ELG indicating a subset of the plurality of connectivity nodes that are available for executing a plurality of services associated with an edge workload. The connectivity nodes are provisioned with the ELG and a workflow execution plan to execute the plurality of services, the workflow execution plan including metadata with a geofence policy. The geofence policy specifies geofence restrictions associated with each of the plurality of services.
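
    As a rough illustration of the selection logic described above, the sketch below builds a simplified ELG (here reduced to the subset of nodes with sufficient resources) and applies a circular geofence policy to pick the nodes allowed to run one service. The data shapes, node names, and selection rule are assumptions for illustration, not details from the application.

```python
# Illustrative sketch: derive an ELG-like node subset from location and
# resource availability, then filter it by a per-service geofence policy.
from dataclasses import dataclass
from math import dist

@dataclass
class ConnectivityNode:
    name: str
    location: tuple[float, float]   # physical location (projected x, y)
    free_cpu: float                 # resource availability (arbitrary units)

@dataclass
class GeofencePolicy:
    center: tuple[float, float]
    radius: float                   # service may only run within this radius

def build_elg(nodes: list[ConnectivityNode], min_cpu: float) -> list[ConnectivityNode]:
    """Subset of connectivity nodes available to execute the workload's services."""
    return [n for n in nodes if n.free_cpu >= min_cpu]

def nodes_for_service(elg: list[ConnectivityNode], policy: GeofencePolicy) -> list[str]:
    """Apply the geofence restriction attached to one service."""
    return [n.name for n in elg if dist(n.location, policy.center) <= policy.radius]

nodes = [
    ConnectivityNode("edge-a", (0.0, 0.0), free_cpu=4.0),
    ConnectivityNode("edge-b", (5.0, 1.0), free_cpu=0.5),
    ConnectivityNode("edge-c", (1.5, 2.0), free_cpu=2.0),
]
elg = build_elg(nodes, min_cpu=1.0)
policy = GeofencePolicy(center=(0.0, 0.0), radius=3.0)
print(nodes_for_service(elg, policy))   # ['edge-a', 'edge-c']
```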

    ARTIFICIAL INTELLIGENCE INFERENCE ARCHITECTURE WITH HARDWARE ACCELERATION

    Publication No.: US20190138908A1

    Publication Date: 2019-05-09

    Application No.: US16235100

    Application Date: 2018-12-28

    IPC Classification: G06N3/10; H04L12/24

    Abstract: Various systems and methods of artificial intelligence (AI) processing using hardware acceleration within edge computing settings are described herein. In an example, processing performed at an edge computing device includes: obtaining a request for an AI operation using an AI model; identifying, based on the request, an AI hardware platform for execution of an instance of the AI model; and causing execution of the AI model instance using the AI hardware platform. Further operations to analyze input data, perform an inference operation with the AI model, and coordinate selection and operation of the hardware platform for execution of the AI model are also described.
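
    The dispatch steps in the abstract (obtain a request naming an AI model, identify an AI hardware platform able to execute an instance of that model, and cause execution there) can be sketched as follows. The platform registry, model names, and priority rule are hypothetical placeholders, not details from the application.

```python
# Hypothetical dispatch sketch: pick a hardware platform that can host the
# requested AI model and cause the inference to execute there.
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model_name: str
    input_data: list[float]

# Registry of available AI hardware platforms and the models each can host,
# listed in priority order (accelerators first, CPU as fallback).
PLATFORMS: dict[str, dict] = {
    "gpu-0": {"supports": {"resnet50", "bert-base"},
              "run": lambda m, x: f"{m} executed on gpu-0"},
    "vpu-1": {"supports": {"mobilenet-v2"},
              "run": lambda m, x: f"{m} executed on vpu-1"},
    "cpu":   {"supports": {"resnet50", "mobilenet-v2", "bert-base"},
              "run": lambda m, x: f"{m} executed on cpu"},
}

def identify_platform(request: InferenceRequest) -> str:
    """Pick the first platform (in priority order) that supports the model."""
    for name, spec in PLATFORMS.items():
        if request.model_name in spec["supports"]:
            return name
    raise LookupError(f"no platform for {request.model_name}")

def execute(request: InferenceRequest) -> str:
    platform = identify_platform(request)
    return PLATFORMS[platform]["run"](request.model_name, request.input_data)

print(execute(InferenceRequest("mobilenet-v2", [0.1, 0.2])))  # mobilenet-v2 executed on vpu-1
```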