TECHNOLOGIES FOR FABRIC SUPPORTED SEQUENCERS IN DISTRIBUTED ARCHITECTURES

    Publication number: US20180077270A1

    Publication date: 2018-03-15

    Application number: US15260613

    Filing date: 2016-09-09

    CPC classification number: H04L69/324 G06F15/173 H04L1/1642 H04L12/50 H04L49/00

    Abstract: Technologies for using fabric supported sequencers in fabric architectures include a network switch communicatively coupled to a plurality of computing nodes. The network switch is configured to receive a sequencer access message from one of the plurality of computing nodes that includes an identifier of a sequencing counter corresponding to a sequencer session and one or more operation parameters. The network switch is additionally configured to perform an operation on a value associated with the identifier of the sequencing counter as a function of the one or more operation parameters, increment the identifier of the sequencing counter, and associate a result of the operation with the incremented identifier of the sequencing counter. The network switch is further configured to transmit an acknowledgment of successful access to the computing node that includes the result of the operation and the incremented identifier of the sequencing counter. Other embodiments are described herein.
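The access flow the abstract describes can be sketched in a few lines. This is a minimal, hypothetical model: the class name `FabricSequencer`, the `access` method, and the supported operations are illustrative assumptions, not the patented implementation.

```python
class FabricSequencer:
    """Models a network switch holding per-session sequencing counters."""

    def __init__(self):
        # counter identifier -> current value
        self._values = {}

    def access(self, counter_id, operand, op="add"):
        """Apply an operation to the counter's value, increment the counter
        identifier, and return an acknowledgment carrying the result and
        the incremented identifier."""
        value = self._values.pop(counter_id, 0)
        if op == "add":
            result = value + operand
        elif op == "set":
            result = operand
        else:
            raise ValueError(f"unsupported operation: {op}")
        new_id = counter_id + 1          # increment the counter identifier
        self._values[new_id] = result    # associate result with new identifier
        return {"result": result, "counter_id": new_id}

seq = FabricSequencer()
ack = seq.access(counter_id=7, operand=5)       # result 5, counter_id 8
ack = seq.access(ack["counter_id"], operand=3)  # result 8, counter_id 9
```

A node that reuses a stale counter identifier simply sees the default value, which mirrors how the acknowledgment hands the caller the only identifier valid for the next access.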

    DATA ACCESS BETWEEN COMPUTING NODES

    Publication number: US20170344283A1

    Publication date: 2017-11-30

    Application number: US15167953

    Filing date: 2016-05-27

    Abstract: Technology for an apparatus is described. The apparatus can receive a command to copy data. The command can indicate a first address, a second address and an offset value. The apparatus can determine a first non-uniform memory access (NUMA) domain ID for the first address and a second NUMA domain ID for the second address. The apparatus can identify a first computing node with memory that corresponds to the first NUMA domain ID and a second computing node with memory that corresponds to the second NUMA domain ID. The apparatus can generate an instruction for copying data in a first memory range of the first computing node to a second memory range of the second computing node. The first memory range can be defined by the first address and the offset value and the second memory range can be defined by the second address and the offset value.
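The command resolution above can be sketched as follows. The domain mapping (high address bits select the NUMA domain) and the node table are invented for illustration; the patent does not specify them.

```python
def build_copy_instruction(src_addr, dst_addr, offset, domain_of, node_of):
    """Resolve NUMA domain IDs for both addresses and emit a copy
    instruction spanning [addr, addr + offset) on each side."""
    src_domain = domain_of(src_addr)
    dst_domain = domain_of(dst_addr)
    return {
        "src_node": node_of[src_domain],
        "dst_node": node_of[dst_domain],
        "src_range": (src_addr, src_addr + offset),
        "dst_range": (dst_addr, dst_addr + offset),
    }

# Toy mapping: bits above bit 31 of the address select the NUMA domain.
domain_of = lambda addr: addr >> 32
node_of = {0: "node-A", 1: "node-B"}

instr = build_copy_instruction(0x1000, (1 << 32) | 0x2000, 0x400,
                               domain_of, node_of)
```

Because both ranges share the single offset value, the generated instruction always moves equal-sized regions between the two resolved nodes.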

    SYSTEMS, APPARATUSES, AND METHODS FOR PERFORMING VECTOR PACKED UNARY ENCODING USING MASKS

    Publication number: US20140223140A1

    Publication date: 2014-08-07

    Application number: US13994505

    Filing date: 2011-12-23

    CPC classification number: G06F9/30018 G06F9/30036 G06F9/30149 G06F9/3867

    Abstract: Described are embodiments of systems, apparatuses, and methods for performing, in a computer processor, vector packed unary encoding using masks in response to a single such instruction that includes a source vector register operand, a destination writemask register operand, and an opcode.
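A speculative sketch of the per-lane operation under a writemask. The abstract does not define the encoding precisely, so this assumes "unary encode" maps a lane value n to a field of n one-bits, with zero-masking for lanes whose mask bit is clear; both assumptions are illustrative.

```python
def packed_unary_encode(src, writemask):
    """For each lane whose writemask bit is set, encode src[i] as
    (1 << src[i]) - 1 (a run of src[i] one-bits); masked-off lanes
    are zeroed (zero-masking)."""
    return [(1 << v) - 1 if (writemask >> i) & 1 else 0
            for i, v in enumerate(src)]

out = packed_unary_encode([3, 1, 4, 2], writemask=0b1011)
# lanes 0, 1, 3 are encoded; lane 2 is masked off -> [7, 1, 0, 3]
```

The writemask lets a single instruction skip lanes without a separate blend step, which is the usual motivation for mask operands in packed SIMD instructions.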


    METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS TO ESTIMATE WORKLOAD COMPLEXITY

    Publication number: US20240385884A1

    Publication date: 2024-11-21

    Application number: US18571092

    Filing date: 2021-12-23

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to estimate workload complexity. An example apparatus includes processor circuitry to perform at least one of first, second, or third operations to instantiate: payload interface circuitry to extract workload objective information and service level agreement (SLA) criteria corresponding to a workload; and acceleration circuitry to select a pre-processing model based on (a) the workload objective information and (b) feedback corresponding to workload performance metrics of at least one prior workload execution iteration, execute the pre-processing model to calculate a complexity metric corresponding to the workload, and select candidate resources based on the complexity metric.
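The selection flow can be illustrated with a toy pipeline. The weighting "model", the SLA field, and the resource table below are invented placeholders; a real system would substitute a trained pre-processing model and live telemetry.

```python
def estimate_and_place(objective, sla_latency_ms, feedback, resources):
    """Pick a pre-processing model from the objective and prior-iteration
    feedback, compute a complexity metric, then keep only resources that
    can satisfy both the metric and the SLA latency bound."""
    # "Model selection" here is just a weighting choice for illustration.
    weight = 2.0 if objective == "throughput" else 1.0
    weight *= 1.5 if feedback.get("missed_sla") else 1.0
    complexity = weight * feedback.get("avg_runtime_ms", 10.0)
    candidates = [r for r in resources
                  if r["capacity"] >= complexity
                  and r["latency_ms"] <= sla_latency_ms]
    return complexity, candidates

resources = [
    {"name": "cpu-0", "capacity": 20.0, "latency_ms": 5.0},
    {"name": "gpu-0", "capacity": 80.0, "latency_ms": 2.0},
]
complexity, picks = estimate_and_place(
    "throughput", sla_latency_ms=4.0,
    feedback={"missed_sla": True, "avg_runtime_ms": 12.0},
    resources=resources)
```

Feeding the prior iteration's metrics back into the weighting is what makes the loop adaptive: a missed SLA inflates the complexity estimate, steering the next placement toward higher-capacity resources.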

    Platform ambient data management schemes for tiered architectures

    Publication number: US11994932B2

    Publication date: 2024-05-28

    Application number: US16907264

    Filing date: 2020-06-21

    CPC classification number: G06F1/3275 G06F1/3287

    Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated. In one aspect, machine learning models trained on historical data are employed to project harvested energy levels that are used in detecting energy threshold conditions.
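The threshold-driven reconfiguration loop can be sketched roughly as below. The threshold value, the victim-selection rule, and the action names are illustrative assumptions, not the patented policy, and the projected-energy input stands in for the machine-learning projection the abstract mentions.

```python
def reconfigure(projected_energy_wh, tiers):
    """When projected harvested energy drops below a threshold, flush and
    migrate data off the most power-hungry tier, then power it down."""
    LOW_ENERGY_WH = 50.0  # assumed threshold for illustration
    actions = []
    if projected_energy_wh < LOW_ENERGY_WH:
        # Pick the tier drawing the most power as the one to vacate.
        victim = max(tiers, key=lambda t: t["power_w"])
        actions.append(("flush_dirty_lines", victim["name"]))
        actions.append(("copy_data", victim["name"], "far-tier"))
        actions.append(("power_down", victim["name"]))
    return actions

tiers = [{"name": "dram-tier", "power_w": 12.0},
         {"name": "pmem-tier", "power_w": 5.0}]
plan = reconfigure(projected_energy_wh=30.0, tiers=tiers)
```

Ordering matters: dirty lines must be flushed and data copied before the tier is deactivated, which is why the sketch emits the actions as an ordered plan rather than a set.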

    TECHNOLOGIES FOR ACCELERATED HIERARCHICAL KEY CACHING IN EDGE SYSTEMS

    Publication number: US20220200788A1

    Publication date: 2022-06-23

    Application number: US17561558

    Filing date: 2021-12-23

    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
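The tiered lookup path can be sketched with a simple chained cache. The wiring below and the eviction rule (plain LRU) are assumptions for illustration; the abstract describes per-tenant accelerated logic, possibly on an FPGA, for those decisions.

```python
from collections import OrderedDict

class EdgeAppliance:
    """One tier in the edge hierarchy, holding a bounded local key cache."""

    def __init__(self, name, inner=None, capacity=2):
        self.name = name
        self.inner = inner                  # next appliance toward the core
        self.cache = OrderedDict()          # key id -> key material
        self.capacity = capacity

    def get_key(self, key_id):
        if key_id in self.cache:
            self.cache.move_to_end(key_id)  # refresh LRU position on a hit
            return self.cache[key_id]
        if self.inner is None:
            raise KeyError(key_id)
        key = self.inner.get_key(key_id)    # miss: ask the inner tier
        self.cache[key_id] = key            # cache the fetched key locally
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return key

core = EdgeAppliance("core")
core.cache["k1"] = b"secret-1"
edge = EdgeAppliance("edge", inner=core)
key = edge.get_key("k1")  # miss locally, fetched from the inner tier
```

A peer-tier lookup or a pre-fetch step would slot in at the miss branch before the inner-tier request; the sketch keeps only the inner-tier fallback for brevity.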
