-
Publication No.: US20190380171A1
Publication Date: 2019-12-12
Application No.: US16369420
Application Date: 2019-03-29
Abstract: Technologies for providing hardware resources as a service with direct resource addressability are disclosed. According to one embodiment of the present disclosure, a device receives a request to access a destination accelerator device in an edge network, the request specifying a destination address assigned to the destination accelerator device. The device determines, as a function of the destination address, a location of the destination accelerator device and sends the request to the destination accelerator device.
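A minimal Python sketch of the flow this abstract describes, under assumed names (AddressDirectory, AccessRequest, and the location strings are illustrative, not taken from the patent): a destination address assigned to an accelerator is resolved to a location, and the request is forwarded there.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        destination_address: str   # address assigned to the destination accelerator
        payload: bytes

    class AddressDirectory:
        """Maps accelerator addresses to edge-network locations (assumed structure)."""
        def __init__(self):
            self._table = {}

        def register(self, address: str, location: str) -> None:
            self._table[address] = location

        def resolve(self, address: str) -> str:
            return self._table[address]

    def route_request(request: AccessRequest, directory: AddressDirectory) -> str:
        # Determine the accelerator's location as a function of the destination
        # address, then "send" the request there (here we just report the choice).
        location = directory.resolve(request.destination_address)
        print(f"forwarding {len(request.payload)} bytes to accelerator at {location}")
        return location

    directory = AddressDirectory()
    directory.register("accel-0x2a", "edge-node-3/pcie-slot-1")
    route_request(AccessRequest("accel-0x2a", b"kernel-args"), directory)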
-
Publication No.: US20210021485A1
Publication Date: 2021-01-21
Application No.: US17063991
Application Date: 2020-10-06
Abstract: Methods and apparatus for jitter-less distributed Function as a Service (FaaS) using flavor clustering. A set of FaaS functions clustered by flavor chaining is implemented to deploy one or more FaaS flavor clusters on one or more edge nodes, wherein each flavor is defined by a set of resource requirements mapped into a jitter Quality of Service (QoS) and is executed on at least one hardware computing component on the one or more edge nodes. One or more jitter controllers are implemented to control and monitor execution of FaaS functions in the one or more FaaS flavor clusters such that the functions are executed to meet jitter-less QoS requirements. Jitter controllers include platform jitter-less function controllers in edge nodes and a data center FaaS jitter-less controller. A jitter-less Software Defined Wide Area Network (SD-WAN) network controller is also provided to supply network resources used by FaaS flavor clusters and to satisfy connectivity requirements between the edge nodes.
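A hedged sketch of the flavor/jitter-QoS relationship described above: a flavor bundles resource requirements with a jitter bound, and a jitter controller checks observed invocation latencies against that bound. The class names, fields, and the use of latency spread as the jitter estimate are assumptions for illustration only.

    from dataclasses import dataclass, field
    from statistics import pstdev

    @dataclass
    class Flavor:
        name: str
        cpu_cores: int
        memory_mb: int
        max_jitter_ms: float          # jitter QoS the flavor maps to

    @dataclass
    class JitterController:
        flavor: Flavor
        samples_ms: list = field(default_factory=list)

        def record(self, latency_ms: float) -> None:
            self.samples_ms.append(latency_ms)

        def within_qos(self) -> bool:
            # Treat the spread of observed latencies as the jitter estimate.
            if len(self.samples_ms) < 2:
                return True
            return pstdev(self.samples_ms) <= self.flavor.max_jitter_ms

    ctrl = JitterController(Flavor("low-latency", cpu_cores=2, memory_mb=512, max_jitter_ms=1.5))
    for latency in (4.9, 5.1, 5.0, 5.2):
        ctrl.record(latency)
    print("jitter QoS met:", ctrl.within_qos())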
-
Publication No.: US20240236017A1
Publication Date: 2024-07-11
Application No.: US18278517
Application Date: 2021-06-25
IPC Classification: H04L47/70, H04L41/0816, H04L47/80
CPC Classification: H04L47/822, H04L41/0816, H04L47/805
Abstract: A computing node includes a NIC and processing circuitry configured to select a subset of computing resources from a set of available computing resources to initiate a parameter sweep associated with a received parameter sweep request. A plurality of settings is applied to each computing resource of the subset to generate a plurality of resource mappings during the parameter sweep. Each resource mapping of the plurality of resource mappings indicates at least one computing resource of the subset and a corresponding at least one setting of the plurality of settings. Telemetry information for the subset of computing resources, generated during the parameter sweep, is retrieved. A resource mapping of the plurality of resource mappings is selected based on a comparison of the telemetry information with a service level objective (SLO). A reconfiguration of the available computing resources is performed based on the selected resource mapping.
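The abstract outlines a sweep-measure-compare loop; the sketch below illustrates one possible reading of it, with a stand-in telemetry function and a latency SLO. The setting names and the scoring rule are assumptions, not the claimed method.

    import random

    def measure_latency_ms(resource: str, setting: dict) -> float:
        # Placeholder telemetry: pretend a higher frequency lowers latency.
        return 10.0 / setting["freq_ghz"] + random.uniform(0.0, 0.5)

    def parameter_sweep(resources, settings, slo_latency_ms):
        mappings = []
        for resource in resources:
            for setting in settings:
                telemetry = measure_latency_ms(resource, setting)
                mappings.append({"resource": resource, "setting": setting,
                                 "latency_ms": telemetry})
        # Keep only mappings that satisfy the SLO, then pick the best one.
        feasible = [m for m in mappings if m["latency_ms"] <= slo_latency_ms]
        return min(feasible, key=lambda m: m["latency_ms"]) if feasible else None

    chosen = parameter_sweep(
        resources=["accel-0", "accel-1"],
        settings=[{"freq_ghz": 1.2}, {"freq_ghz": 2.4}],
        slo_latency_ms=6.0,
    )
    print("selected mapping:", chosen)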
-
Publication No.: US20230136615A1
Publication Date: 2023-05-04
Application No.: US18090701
Application Date: 2022-12-29
Applicant: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Kshitij Arun Doshi
Inventor: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Kshitij Arun Doshi
IPC Classification: G06F9/50
Abstract: Various approaches for deploying and using virtual pools of compute resources with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A host computing system may be configured to operate a virtual pool of resources, with operations including: identifying, at the host computing system, availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool that is provided on behalf of a client computing system, based on the request being coordinated via the network infrastructure device and including at least one quality of service (QoS) requirement; and servicing the request for the resource based on the at least one QoS requirement.
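An illustrative sketch of the advertise/request/service loop, with a VirtualResourcePool class standing in for the coordinating network infrastructure device and a minimum bandwidth standing in for the QoS requirement; all names are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ResourceOffer:
        host: str
        resource: str
        capacity_gbps: float

    class VirtualResourcePool:
        """Stands in for the network infrastructure device coordinating the pool."""
        def __init__(self):
            self.offers = []

        def advertise(self, offer: ResourceOffer) -> None:
            # A host notifies the pool that one of its resources is available.
            self.offers.append(offer)

        def request(self, resource: str, min_gbps: float) -> Optional[ResourceOffer]:
            # Match the client's QoS requirement (minimum bandwidth, assumed) to an offer.
            for offer in self.offers:
                if offer.resource == resource and offer.capacity_gbps >= min_gbps:
                    return offer
            return None

    pool = VirtualResourcePool()
    pool.advertise(ResourceOffer(host="edge-host-7", resource="smart-nic", capacity_gbps=25.0))
    match = pool.request("smart-nic", min_gbps=10.0)
    print("servicing request on:", match.host if match else "no host meets QoS")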
-
Publication No.: US20220116455A1
Publication Date: 2022-04-14
Application No.: US17561334
Application Date: 2021-12-23
Applicant: Arun Raghunath, Mohammad Chowdhury, Michael Mesnier, Ravishankar R. Iyer, Ian Adams, Thijs Metsch, John J. Browne, Adrian Hoban, Veeraraghavan Ramamurthy, Patrick Koeberl, Francesc Guim Bernat, Kshitij Arun Doshi, Susanne M. Balle, Bin Li
Inventor: Arun Raghunath, Mohammad Chowdhury, Michael Mesnier, Ravishankar R. Iyer, Ian Adams, Thijs Metsch, John J. Browne, Adrian Hoban, Veeraraghavan Ramamurthy, Patrick Koeberl, Francesc Guim Bernat, Kshitij Arun Doshi, Susanne M. Balle, Bin Li
IPC Classification: H04L67/1097, H04L67/146, H04L67/52, H04L9/40
Abstract: Various systems and methods for implementing computational storage are described herein. An orchestrator system is configured to: receive, at the orchestrator system, a registration package, the registration package including function code, a logical location of input data for the function code, and an event trigger for the function code, the event trigger set to trigger when the input data is modified; interface with a storage service, the storage service to monitor the logical location of the input data and notify a location service when the input data is modified; interface with the location service to obtain a physical location of the input data, the location service to resolve the physical location from the logical location of the input data; and configure the function code to execute near the input data.
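A toy, in-memory sketch of the register/trigger/resolve flow in this abstract; the Orchestrator and LocationService interfaces below are assumptions for illustration, not the patent's APIs.

    class LocationService:
        def __init__(self, mapping):
            self._logical_to_physical = dict(mapping)

        def resolve(self, logical: str) -> str:
            return self._logical_to_physical[logical]

    class Orchestrator:
        def __init__(self, location_service: LocationService):
            self._location_service = location_service
            self._registrations = {}

        def register(self, logical_location: str, function_code) -> None:
            # Registration package: function code + logical location of its input data.
            # The event trigger ("input data modified") arrives via notify_modified().
            self._registrations[logical_location] = function_code

        def notify_modified(self, logical_location: str) -> None:
            # Resolve logical -> physical, then run the function near the data.
            physical = self._location_service.resolve(logical_location)
            func = self._registrations[logical_location]
            print(f"running function near data at {physical}")
            func(physical)

    svc = LocationService({"bucket/logs": "rack2/ssd5/block-900"})
    orch = Orchestrator(svc)
    orch.register("bucket/logs", lambda loc: print("processed", loc))
    orch.notify_modified("bucket/logs")   # the storage service would call this on change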
-
Publication No.: US20220012149A1
Publication Date: 2022-01-13
Application No.: US17484253
Application Date: 2021-09-24
Abstract: Various methods, systems, and use cases for a stable and automated transformation of a networked computing system are provided, to enable a transformation to the configuration of the computing system (e.g., a software or firmware upgrade, hardware change, etc.). In an example, automated operations include: identifying a transformation to apply to a configuration of the computing system, for a transformation that affects a network service provided by the computing system; identifying operational conditions used to evaluate results of the transformation; attempting to apply the transformation, using a series of stages that have rollback positions for when the identified operational conditions are not satisfied; and determining a successful or unsuccessful result of the attempt to apply the transformation. For an unsuccessful result, remediation may be performed on the configuration using one or more rollback positions; for a successful result, a new restore state is established from the completion state.
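A minimal sketch, under assumed names, of staged transformation with rollback positions: each stage records a restore point, the operational conditions are checked after every stage, and a failed check rolls the configuration back to the last good point.

    def apply_transformation(config: dict, stages, conditions):
        rollback_points = [dict(config)]          # initial restore state
        for stage in stages:
            stage(config)                         # attempt this stage of the change
            if not all(check(config) for check in conditions):
                config.clear()
                config.update(rollback_points[-1])   # remediate via rollback position
                return False, config
            rollback_points.append(dict(config))  # new rollback position
        return True, config                       # success: completion state becomes the new restore state

    config = {"firmware": "v1", "service_up": True}
    stages = [lambda c: c.update(firmware="v2")]
    conditions = [lambda c: c["service_up"]]
    ok, final = apply_transformation(config, stages, conditions)
    print("transformation succeeded:", ok, final)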
-
Publication No.: US20230135938A1
Publication Date: 2023-05-04
Application No.: US18090813
Application Date: 2022-12-29
Applicant: Marcos E. Carranza, Francesc Guim Bernat, Kshitij Arun Doshi, Karthik Kumar, Srikathyayani Srikanteswara, Mateo Guzman
Inventor: Marcos E. Carranza, Francesc Guim Bernat, Kshitij Arun Doshi, Karthik Kumar, Srikathyayani Srikanteswara, Mateo Guzman
IPC Classification: H04L67/63, H04L67/1087
Abstract: Various approaches for service mesh switching, including the use of infrastructure processing units (IPUs) and similar networked processing units, are disclosed. For example, a packet that includes a service request for a service may be received at a networking infrastructure device. The service may include an application that spans multiple nodes in a network. An outbound interface of the networking infrastructure device may be selected through which to route the packet. The selection of the outbound interface may be based on a service component of the service request in the packet and network metrics that correspond to the service. The packet may then be transmitted using the outbound interface.
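An illustrative sketch of outbound-interface selection that combines the service component named in the packet with per-interface network metrics; the data structures and the lowest-latency scoring rule are assumptions for clarity.

    def select_interface(packet: dict, routes: dict, metrics: dict) -> str:
        service = packet["service_component"]
        candidates = routes[service]                 # interfaces that can reach the service
        # Prefer the candidate with the lowest observed latency for this service.
        return min(candidates, key=lambda iface: metrics[(iface, service)])

    packet = {"service_component": "checkout", "payload": b"..."}
    routes = {"checkout": ["eth0", "eth1"]}
    metrics = {("eth0", "checkout"): 4.2, ("eth1", "checkout"): 2.7}   # latency in ms
    print("transmit via:", select_interface(packet, routes, metrics))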
-
Publication No.: US20230134683A1
Publication Date: 2023-05-04
Application No.: US18090720
Application Date: 2022-12-29
IPC Classification: G06F12/0846, G06F12/0873
Abstract: Various approaches for configuring interleaving in a memory pool used in an edge computing arrangement, including with the use of infrastructure processing units (IPUs) and similar networked processing units, are disclosed. An example system may discover and map disaggregated memory resources at respective compute locations connected to one another via at least one interconnect. The system may identify workload requirements for use of the compute locations by respective workloads, for workloads provided by client devices to the compute locations. The system may determine an interleaving arrangement for a memory pool that fulfills the workload requirements and use the interleaving arrangement to distribute data for the respective workloads among the disaggregated memory resources. The system may configure the memory pool for use by the client devices of the network, such that the memory pool causes the disaggregated memory resources to host data based on the interleaving arrangement.
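A hedged sketch of one way data could be distributed across disaggregated memory resources under an interleaving arrangement (round-robin here, chosen only for illustration; the abstract does not specify a policy, and the resource names are invented).

    from itertools import cycle

    def interleave(workload_blocks, memory_resources):
        placement = {resource: [] for resource in memory_resources}
        # Simple interleaving: spread consecutive blocks across the pool members.
        for block, resource in zip(workload_blocks, cycle(memory_resources)):
            placement[resource].append(block)
        return placement

    blocks = [f"block-{i}" for i in range(6)]
    pool = ["cxl-mem-nodeA", "cxl-mem-nodeB", "local-dram"]
    for resource, hosted in interleave(blocks, pool).items():
        print(resource, "hosts", hosted)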
-
Publication No.: US20230134643A1
Publication Date: 2023-05-04
Application No.: US18148335
Application Date: 2022-12-29
Abstract: Methods and apparatus for distributing coolant between server racks are disclosed herein. An example apparatus described herein includes a compute node including a sensor and a first volume of coolant, a coolant storage, memory, and at least one processor to execute instructions to: determine, based on an output of the sensor, whether the first volume is effective to maintain a temperature of the compute node at a target temperature; in response to determining the first volume is not effective, reduce a computation load on the compute node; and pump, from the coolant storage, a second volume of coolant to the compute node. In some examples, the coolant storage can be disposed underground.
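A toy control-loop sketch of the cooling logic described above: when the present coolant volume cannot hold the node at its target temperature, the node's computation load is reduced and additional coolant is pumped from storage. The thresholds, volumes, and field names are placeholders, not values from the patent.

    def cooling_step(sensor_temp_c: float, target_temp_c: float, node: dict) -> None:
        coolant_effective = sensor_temp_c <= target_temp_c
        if not coolant_effective:
            node["load_pct"] = max(0, node["load_pct"] - 20)   # shed some computation load
            node["coolant_l"] += 5.0                           # pump a second volume of coolant
        print(f"temp={sensor_temp_c}C load={node['load_pct']}% coolant={node['coolant_l']}L")

    node = {"load_pct": 90, "coolant_l": 10.0}
    for reading in (68.0, 74.5, 71.0):          # simulated sensor outputs
        cooling_step(reading, target_temp_c=70.0, node=node)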
-
Publication No.: US20220113790A1
Publication Date: 2022-04-14
Application No.: US17561301
Application Date: 2021-12-23
Applicant: Kshitij Arun Doshi, John J. Browne, Christopher MacNamara, Francesc Guim Bernat, Adrian Hoban, Thijs Metsch
Inventor: Kshitij Arun Doshi, John J. Browne, Christopher MacNamara, Francesc Guim Bernat, Adrian Hoban, Thijs Metsch
IPC Classification: G06F1/3296, G06F1/3228, H04L47/70
Abstract: Various systems and methods for implementing intent-driven power management are described herein. A system includes: a power monitoring unit to collect real-time telemetry of a processor on a compute node; and a power level controller to: receive a power intent for execution of an application on the compute node; configure a power level of the processor of the compute node based on the power intent, the processor to execute the application; set an initial execution priority of the application on the compute node based on the power intent; and modify the initial execution priority based on the power intent and the real-time telemetry of the compute node.
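A small sketch, with assumed names and thresholds, of the intent-driven loop: a power intent sets an initial power level and execution priority, and real-time telemetry adjusts the priority afterwards.

    from dataclasses import dataclass

    @dataclass
    class PowerIntent:
        profile: str            # e.g. "efficiency" or "performance" (illustrative values)

    class PowerLevelController:
        def __init__(self, intent: PowerIntent):
            # Configure power level and initial execution priority from the intent.
            self.power_level_w = 65 if intent.profile == "efficiency" else 125
            self.priority = 10 if intent.profile == "efficiency" else 90

        def update(self, telemetry: dict) -> None:
            # If the processor runs hot, back the execution priority off.
            if telemetry["package_temp_c"] > 85:
                self.priority = max(1, self.priority - 10)

    ctrl = PowerLevelController(PowerIntent("performance"))
    ctrl.update({"package_temp_c": 92})     # real-time telemetry sample
    print("power level:", ctrl.power_level_w, "W, priority:", ctrl.priority)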