-
Publication No.: US11016832B2
Publication Date: 2021-05-25
Application No.: US16344582
Filing Date: 2017-11-29
Applicant: INTEL CORPORATION
Inventor: Mohan J. Kumar , Murugasamy K. Nachimuthu , Krishna Bhuyan
Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a compute sled that includes a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
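The flow the abstract describes can be summarized as: configure the sled with the set of peer sleds in the managed node, track which portions of the local working data the sled's processors modify, and forward only that modified (coherence) data to the peer sled. Below is a minimal sketch of that flow; all class, field, and method names (NodeConfigRequest, CoherenceLogic, transport, etc.) are hypothetical illustrations, not the patented implementation.

```python
# Hypothetical sketch of the coherence flow described in the abstract.
from dataclasses import dataclass, field


@dataclass
class NodeConfigRequest:
    workload_id: str
    sled_ids: list          # compute sleds composed into the managed node


@dataclass
class CoherenceLogic:
    sled_id: str
    memory: dict = field(default_factory=dict)   # local working data
    dirty: set = field(default_factory=set)      # keys modified since last sync

    def configure(self, request: NodeConfigRequest) -> None:
        # Remember which peer sleds share this workload's working data.
        self.peers = [s for s in request.sled_ids if s != self.sled_id]

    def modify(self, key: str, value) -> None:
        # Local processors update their portion of the working data.
        self.memory[key] = value
        self.dirty.add(key)

    def coherence_data(self) -> dict:
        # Only the modified portion needs to be propagated.
        delta = {k: self.memory[k] for k in self.dirty}
        self.dirty.clear()
        return delta

    def send_to_peers(self, transport) -> None:
        # Ship the coherence data to the second sled(s) of the managed node.
        delta = self.coherence_data()
        for peer in self.peers:
            transport.send(peer, delta)
```

The delta-based design here is only one plausible reading of "coherence data indicative of the modification"; the abstract does not specify the encoding.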
-
Publication No.: US11792113B2
Publication Date: 2023-10-17
Application No.: US17598115
Filing Date: 2020-07-02
Applicant: Intel Corporation
Inventor: Nageen Himayat , Srikathyayani Srikanteswara , Krishna Bhuyan , Daojing Guo , Rustam Pirmagomedov , Gabriel Arrobo Vidal , Yi Zhang , Dmitri Moltchanov
IPC: H04L45/00 , H04L45/745 , H04L47/28 , H04L47/31
CPC classification number: H04L45/26 , H04L45/745 , H04L47/28 , H04L47/31
Abstract: Systems and methods for dynamic compute orchestration include receiving, at a network node of an information centric network, a first interest packet comprising a name field indicating a named function and one or more constraints specifying compute requirements for a computing node to execute the named function, the first interest packet received from a client node. A plurality of computing nodes are identified that satisfy the compute requirements for executing the named function. The first interest packet is forwarded to at least some of the plurality of computing nodes. Data packets are received from at least some of the plurality of computing nodes in response to the first interest packet. One of the plurality of computing nodes is selected based on the received data packets, and a second interest packet is sent to the selected computing node instructing it to execute the named function.
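In outline, the orchestration step is: filter candidate nodes by the constraints carried in the interest, probe the candidates, pick one based on the returned data packets, and direct a second interest at the winner. The sketch below illustrates this under stated assumptions; the packet fields, the probe mechanism, and the least-loaded selection rule are hypothetical simplifications of the ICN exchange, not the claimed method.

```python
# Hypothetical sketch of named-function orchestration over candidate nodes.
from dataclasses import dataclass


@dataclass
class Interest:
    name: str                 # named function, e.g. "/fn/resize-image"
    constraints: dict         # compute requirements, e.g. {"gpu": True}


@dataclass
class ComputeNode:
    node_id: str
    capabilities: dict
    load: float               # reported utilization, 0.0..1.0

    def satisfies(self, constraints: dict) -> bool:
        return all(self.capabilities.get(k) == v for k, v in constraints.items())

    def probe(self, interest: Interest) -> dict:
        # Data packet answering the first interest with current status.
        return {"node_id": self.node_id, "load": self.load}


def orchestrate(interest: Interest, nodes: list) -> str:
    # 1. Identify nodes that satisfy the compute requirements.
    candidates = [n for n in nodes if n.satisfies(interest.constraints)]
    # 2. Forward the first interest and collect the returned data packets.
    replies = [n.probe(interest) for n in candidates]
    # 3. Select a node based on the replies (here: least loaded).
    chosen = min(replies, key=lambda r: r["load"])["node_id"]
    # 4. A second interest would then instruct the chosen node to execute
    #    the named function.
    return chosen


nodes = [ComputeNode("edge-1", {"gpu": True}, 0.7),
         ComputeNode("edge-2", {"gpu": True}, 0.2)]
print(orchestrate(Interest("/fn/resize-image", {"gpu": True}), nodes))  # edge-2
```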
-
Publication No.: US10860709B2
Publication Date: 2020-12-08
Application No.: US16024547
Filing Date: 2018-06-29
Applicant: Intel Corporation
Inventor: Michael Lemay , David M. Durham , Michael E. Kounavis , Barry E. Huntley , Vedvyas Shanbhogue , Jason W. Brandt , Josh Triplett , Gilbert Neiger , Karanvir Grewal , Baiju V. Patel , Ye Zhuang , Jr-Shian Tsai , Vadim Sukhomlinov , Ravi Sahita , Mingwei Zhang , James C. Farwell , Amitabh Das , Krishna Bhuyan
Abstract: Disclosed embodiments relate to encoded inline capabilities. In one example, a system includes a trusted execution environment (TEE) to partition an address space within a memory into a plurality of compartments each associated with code to execute a function, the TEE further to assign a message object in a heap to each compartment, receive a request from a first compartment to send a message block to a specified destination compartment, respond to the request by authenticating the request, generating a corresponding encoded capability, conveying the encoded capability to the destination compartment, and scheduling the destination compartment to respond to the request, and subsequently, respond to a check capability request from the destination compartment by checking the encoded capability and, when the check passes, providing a memory address to access the message block, and, otherwise, generating a fault, wherein each compartment is isolated from other compartments.
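The abstract describes a send/check protocol: the trusted execution environment authenticates a send request from a source compartment, generates an encoded capability, conveys it to the destination compartment, and later reveals the message-block address only if a check-capability request presents a valid capability, otherwise raising a fault. The sketch below is a toy illustration under stated assumptions; the ToyTEE class, the HMAC-based encoding, and the address map are hypothetical stand-ins, not the patented encoded inline capability format.

```python
# Hypothetical sketch of the capability exchange described in the abstract.
import hmac, hashlib, os


class CapabilityFault(Exception):
    pass


class ToyTEE:
    def __init__(self, compartments):
        self._key = os.urandom(16)                       # TEE-private secret
        # One heap message slot (represented here by an address) per compartment.
        self.message_addr = {c: 0x1000 + i * 0x100
                             for i, c in enumerate(compartments)}
        self.pending = {}                                # dest -> (capability, src)

    def _encode(self, src, dest):
        # Encode a capability bound to the (source, destination) pair.
        msg = f"{src}->{dest}".encode()
        return hmac.new(self._key, msg, hashlib.sha256).hexdigest()

    def send(self, src, dest):
        # Authenticate the request (trivially here), generate the encoded
        # capability, and convey it to the destination compartment.
        cap = self._encode(src, dest)
        self.pending[dest] = (cap, src)
        return cap

    def check_capability(self, dest, cap):
        expected, src = self.pending.get(dest, (None, None))
        if expected is None or not hmac.compare_digest(cap, expected):
            raise CapabilityFault("capability check failed")
        # Check passed: reveal the address of the sender's message block.
        return self.message_addr[src]


tee = ToyTEE(["net", "crypto"])
cap = tee.send("net", "crypto")
print(hex(tee.check_capability("crypto", cap)))          # address of "net"'s block
```

Compartment isolation and scheduling of the destination compartment are outside the scope of this sketch; it only shows the capability issue/verify handshake.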
-