STORAGE CLASS MEMORY DEVICE INCLUDING A NETWORK

    Publication number: US20220113914A1

    Publication date: 2022-04-14

    Application number: US17560945

    Filing date: 2021-12-23

    Abstract: Systems and techniques for a storage-class memory device including a network interface are described herein. A write for a network communication is received by the host interface of the memory device. Here, the network communication includes a header. The header is written to a non-volatile storage array managed by a memory controller. A network command is detected by the memory device. Here, the network command includes a pointer to the header in the non-volatile storage array. The header is retrieved from the non-volatile storage array, and a packet based on the header is transmitted via a network interface of the memory controller.
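The write/command/transmit flow described in the abstract can be sketched as follows. This is a minimal illustration only: the class, method names, and the in-memory dict standing in for the non-volatile storage array are assumptions, not the patent's actual design.

```python
# Sketch of the flow: host writes a header to the storage array, a
# network command carrying a pointer to that header arrives, and a
# packet built from the header is transmitted. Names are illustrative.
class ScmNetworkDevice:
    def __init__(self):
        self.storage = {}   # non-volatile storage array: address -> bytes
        self.tx_log = []    # stands in for the NIC's transmit queue

    def host_write(self, addr, header):
        """Host interface: persist a network header to the storage array."""
        self.storage[addr] = header

    def network_command(self, header_ptr, payload=b""):
        """Handle a network command holding a pointer to the stored header."""
        header = self.storage[header_ptr]   # retrieve header from the array
        packet = header + payload           # build the packet from the header
        self.tx_log.append(packet)          # "transmit" via the network interface
        return packet

dev = ScmNetworkDevice()
dev.host_write(0x10, b"ETH|IP|UDP|")
packet = dev.network_command(0x10, b"payload")
```

The key point the sketch captures is that the host only hands the device a pointer at send time; the header bytes already reside in the storage array.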

    END-TO-END DEVICE ATTESTATION
    Invention application

    Publication number: US20210314365A1

    Publication date: 2021-10-07

    Application number: US17351004

    Filing date: 2021-06-17

    IPC classification: H04L29/06 G06F11/34

    Abstract: Various examples of device and system implementations and methods for performing end-to-end attestation operations for multi-layer hardware devices are disclosed. In an example, attestation operations are performed by a verifier, including: obtaining layered attestation evidence regarding a state of a compute device, with the layered attestation evidence including attesting evidence provided from a second hardware layer of the compute device, such that the attesting evidence provided from the second hardware layer is generated from attesting evidence provided from a first hardware layer of the compute device to the second hardware layer of the compute device; obtaining endorsement information relating to the layered attestation evidence for the state of the compute device; determining an appraisal policy for performing attestation of the compute device from the layered attestation evidence; and applying the appraisal policy and the endorsement information to the layered attestation evidence, to perform attestation of the compute device.
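The layered-evidence chain and verifier appraisal described above can be sketched as below. The hash-chain construction, function names, and the equality-check policy are illustrative assumptions; the abstract does not specify how evidence is generated or appraised.

```python
# Sketch: the second layer's evidence is generated from the first
# layer's evidence, and the verifier applies an appraisal policy plus
# endorsement (reference value) to the layered evidence.
import hashlib

def layer_evidence(prev_evidence, measurement):
    """A layer's attesting evidence extends the lower layer's evidence."""
    return hashlib.sha256(prev_evidence + measurement).digest()

def appraise(evidence, endorsement, policy):
    """Verifier: apply the appraisal policy and endorsement to evidence."""
    return policy(evidence, endorsement)

# First hardware layer's evidence feeds the second layer's evidence.
l1 = layer_evidence(b"\x00" * 32, b"bootloader-measurement")
l2 = layer_evidence(l1, b"firmware-measurement")

# Endorsement: the reference value expected for an unmodified device.
endorsement = layer_evidence(
    layer_evidence(b"\x00" * 32, b"bootloader-measurement"),
    b"firmware-measurement")

attested = appraise(l2, endorsement, lambda ev, ref: ev == ref)
```

Because each layer's evidence is derived from the layer below, a verifier checking only the top-layer value transitively attests the whole stack, which is the end-to-end property the abstract describes.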

    GEOFENCE-BASED EDGE SERVICE CONTROL AND AUTHENTICATION

    Publication number: US20210006972A1

    Publication date: 2021-01-07

    Application number: US17025519

    Filing date: 2020-09-18

    Abstract: Methods, systems, and use cases for geofence-based edge service control and authentication are discussed, including an orchestration system with memory and at least one processing circuitry coupled to the memory. The processing circuitry is configured to perform operations to obtain, from a plurality of connectivity nodes providing edge services, physical location information and resource availability information associated with each of the plurality of connectivity nodes. An edge-to-edge location graph (ELG) is generated based on the physical location information and the resource availability information, the ELG indicating a subset of the plurality of connectivity nodes that are available for executing a plurality of services associated with an edge workload. The connectivity nodes are provisioned with the ELG and a workflow execution plan to execute the plurality of services, the workflow execution plan including metadata with a geofence policy. The geofence policy specifies geofence restrictions associated with each of the plurality of services.
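The ELG-construction step described above can be sketched as a filter over node records. The field names, the bounding-box representation of a geofence, and the CPU-count resource metric are assumptions made for illustration; the patent does not tie the ELG to any particular data layout.

```python
# Sketch: build the edge-to-edge location graph (ELG) subset by keeping
# only connectivity nodes whose reported resources satisfy the workload
# and whose physical location falls inside the geofence restriction.
def build_elg(nodes, required_cpu, geofence):
    """Return the subset of nodes able to run the workload in the fence."""
    lat_min, lat_max, lon_min, lon_max = geofence
    return [
        n["id"] for n in nodes
        if n["free_cpu"] >= required_cpu          # resource availability
        and lat_min <= n["lat"] <= lat_max        # geofence: latitude
        and lon_min <= n["lon"] <= lon_max        # geofence: longitude
    ]

nodes = [
    {"id": "edge-a", "free_cpu": 4, "lat": 45.0, "lon": -122.0},
    {"id": "edge-b", "free_cpu": 1, "lat": 45.1, "lon": -122.1},  # too few CPUs
    {"id": "edge-c", "free_cpu": 8, "lat": 50.0, "lon": -100.0},  # outside fence
]
elg = build_elg(nodes, required_cpu=2, geofence=(44.0, 46.0, -123.0, -121.0))
```

In the abstract's terms, `elg` is the subset of connectivity nodes that the orchestrator would then provision with the workflow execution plan and its geofence policy metadata.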

    DISTRIBUTED MACHINE LEARNING IN AN INFORMATION CENTRIC NETWORK

    Publication number: US20200027022A1

    Publication date: 2020-01-23

    Application number: US16586593

    Filing date: 2019-09-27

    IPC classification: G06N20/00 H04L29/08

    Abstract: Systems and techniques for distributed machine learning (DML) in an information centric network (ICN) are described herein. Finite message exchanges, such as those used in many DML exercises, may be efficiently implemented by treating certain data packets as interest packets to reduce overall network overhead when performing the finite message exchange. Further, network efficiency in DML may be improved by using local coordinating nodes to manage devices participating in a distributed machine learning exercise. Additionally, modifying a round of DML training to accommodate available participant devices, such as by using a group quality of service metric to select the devices, or extending the round execution parameters to include additional devices, may have an impact on DML performance.
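One round-setup step mentioned above, selecting participant devices by a group quality of service metric, can be sketched as follows. The per-device `qos` score and the top-k selection rule are illustrative assumptions; the abstract does not define the metric.

```python
# Sketch: choose participants for a DML training round by ranking the
# available devices on a quality-of-service score and taking the best k.
def select_round_participants(devices, k):
    """Pick the k devices with the highest QoS scores for this round."""
    ranked = sorted(devices, key=lambda d: d["qos"], reverse=True)
    return [d["id"] for d in ranked[:k]]

devices = [
    {"id": "sensor-1", "qos": 0.9},
    {"id": "sensor-2", "qos": 0.4},
    {"id": "sensor-3", "qos": 0.7},
]
selected = select_round_participants(devices, k=2)  # best two by QoS
```

A coordinating node could run this selection locally, then extend the round parameters (e.g., a larger `k` or a longer deadline) when additional devices become available, as the abstract suggests.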

    ARTIFICIAL INTELLIGENCE INFERENCE ARCHITECTURE WITH HARDWARE ACCELERATION

    Publication number: US20190138908A1

    Publication date: 2019-05-09

    Application number: US16235100

    Filing date: 2018-12-28

    IPC classification: G06N3/10 H04L12/24

    Abstract: Various systems and methods of artificial intelligence (AI) processing using hardware acceleration within edge computing settings are described herein. In an example, processing performed at an edge computing device includes: obtaining a request for an AI operation using an AI model; identifying, based on the request, an AI hardware platform for execution of an instance of the AI model; and causing execution of the AI model instance using the AI hardware platform. Further operations to analyze input data, perform an inference operation with the AI model, and coordinate selection and operation of the hardware platform for execution of the AI model are also described.
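The request, platform-identification, and execution steps listed above can be sketched as below. The platform registry, its precision/throughput attributes, and the selection rule are illustrative assumptions, not the patent's API.

```python
# Sketch: obtain a request for an AI operation, identify a hardware
# platform able to execute the model instance, and cause execution
# there. The registry entries below are made up for illustration.
PLATFORMS = {
    "cpu": {"supports": {"int8", "fp32"}, "throughput": 1},
    "gpu": {"supports": {"fp16", "fp32"}, "throughput": 10},
    "vpu": {"supports": {"int8"}, "throughput": 5},
}

def select_platform(precision):
    """Identify the fastest platform supporting the model's precision."""
    candidates = [(name, attrs["throughput"])
                  for name, attrs in PLATFORMS.items()
                  if precision in attrs["supports"]]
    return max(candidates, key=lambda c: c[1])[0]

def handle_request(request):
    """Obtain the request, pick a platform, and dispatch the instance."""
    platform = select_platform(request["precision"])
    # In a real system this would launch the model instance on `platform`.
    return {"platform": platform, "model": request["model"]}

result = handle_request({"model": "resnet50", "precision": "int8"})
```

An int8 request lands on the accelerator with the highest throughput among those that support int8, which mirrors the abstract's "identifying, based on the request, an AI hardware platform" step.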