MECHANISM TO AUTONOMOUSLY MANAGE SSDS IN AN ARRAY

    Publication number: US20220327068A1

    Publication date: 2022-10-13

    Application number: US17734908

    Application date: 2022-05-02

    Abstract: Embodiments of the present invention include a drive-to-drive storage system comprising a host server having a host CPU and a host storage drive, one or more remote storage drives, and a peer-to-peer link connecting the host storage drive to the one or more remote storage drives. The host storage drive includes a processor and a memory, wherein the memory has stored thereon instructions that, when executed by the processor, cause the processor to transfer data from the host storage drive via the peer-to-peer link to the one or more remote storage drives when the host CPU issues a write command.
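
    The claimed data path can be illustrated with a minimal sketch: the host CPU issues one write, and the host drive's own processor forwards the data over the peer-to-peer link. All class and method names here are hypothetical, not taken from the patent:

```python
class RemoteDrive:
    """Hypothetical remote SSD reachable over a peer-to-peer link."""
    def __init__(self):
        self.blocks = {}

    def p2p_write(self, lba, data):
        self.blocks[lba] = data


class HostDrive:
    """Hypothetical host SSD whose controller autonomously mirrors
    written data to its peers, without further host-CPU involvement."""
    def __init__(self, peers):
        self.blocks = {}
        self.peers = peers  # drives reachable over the peer-to-peer link

    def write(self, lba, data):
        # Triggered by the host CPU's write command; the copy to the
        # remote drives is driven by the drive's own processor.
        self.blocks[lba] = data
        for peer in self.peers:
            peer.p2p_write(lba, data)


remote = RemoteDrive()
host = HostDrive(peers=[remote])
host.write(lba=0x10, data=b"payload")
```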

    COMPUTING ACCELERATOR USING A LOOKUP TABLE
    Invention patent application

    Publication number: US20200334012A1

    Publication date: 2020-10-22

    Application number: US16919043

    Application date: 2020-07-01

    Abstract: A computing accelerator using a lookup table. The accelerator may accelerate floating-point multiplications by retrieving the fraction portion of the product of two floating-point operands from a lookup table, or by retrieving the full product of two floating-point operands from a lookup table, or it may retrieve dot products of floating-point vectors from a lookup table. The accelerator may be implemented in a three-dimensional memory assembly. It may use approximation, the symmetry of a multiplication lookup table, and zero-skipping to improve performance.
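
    The table symmetry and zero-skipping mentioned in the abstract can be shown with a toy model. This is not the patented hardware: 4-bit integer operands stand in for the fraction bits of the floating-point operands, and the table width is an assumption for illustration:

```python
# Toy lookup-table multiplier: 4-bit operands stand in for the
# fraction bits of the floating-point operands.
WIDTH = 4

# Symmetry: a*b == b*a, so store each unordered pair only once,
# roughly halving the table.
TABLE = {}
for a in range(1 << WIDTH):
    for b in range(a, 1 << WIDTH):
        TABLE[(a, b)] = a * b

def lut_multiply(a, b):
    if a == 0 or b == 0:          # zero-skipping: no table access at all
        return 0
    key = (a, b) if a <= b else (b, a)   # exploit symmetry
    return TABLE[key]

def lut_dot(xs, ys):
    # A dot product reduces to a sum of table lookups.
    return sum(lut_multiply(x, y) for x, y in zip(xs, ys))
```

Storing only ordered pairs keeps 16·17/2 = 136 entries instead of 256.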

    Computing accelerator using a lookup table

    Publication number: US10732929B2

    Publication date: 2020-08-04

    Application number: US15916196

    Application date: 2018-03-08

    Abstract: A computing accelerator using a lookup table. The accelerator may accelerate floating-point multiplications by retrieving the fraction portion of the product of two floating-point operands from a lookup table, or by retrieving the full product of two floating-point operands from a lookup table, or it may retrieve dot products of floating-point vectors from a lookup table. The accelerator may be implemented in a three-dimensional memory assembly. It may use approximation, the symmetry of a multiplication lookup table, and zero-skipping to improve performance.

    Intelligent high bandwidth memory appliance

    Publication number: US10545860B2

    Publication date: 2020-01-28

    Application number: US15796743

    Application date: 2017-10-27

    Abstract: Inventive aspects include an HBM+ system comprising a host including at least one of a CPU, a GPU, an ASIC, or an FPGA; and an HBM+ stack including a plurality of HBM modules arranged one atop another, and a logic die disposed beneath the plurality of HBM modules. The logic die is configured to offload processing operations from the host. A system architecture is disclosed that provides specific compute capabilities in the logic die of high bandwidth memory, along with the supporting hardware and software architectures, logic die microarchitecture, and memory interface signaling options. Various new methods are provided for using the in-memory processing abilities of the logic die beneath an HBM memory stack. In addition, various new signaling protocols are disclosed that use an HBM interface. The logic die microarchitecture and supporting system framework are also described.
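
    The offload idea can be sketched in a few lines: a reduction runs on a logic die next to the memory, so only the scalar result crosses the HBM interface back to the host. The class and method names are hypothetical illustrations, not the patent's microarchitecture:

```python
class LogicDie:
    """Hypothetical logic die beneath the HBM stack that executes
    simple reductions in place instead of on the host processor."""
    def __init__(self, memory):
        self.memory = memory  # stands in for the stacked HBM modules

    def reduce_sum(self, start, length):
        # Executes inside the stack: the operands never cross the
        # memory interface, only the scalar result does.
        return sum(self.memory[start:start + length])


mem = list(range(10))          # toy contents of the HBM stack
die = LogicDie(mem)
result = die.reduce_sum(0, 10)
```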

    Unified addressing and hierarchical heterogeneous storage and memory

    Publication number: US10437479B2

    Publication date: 2019-10-08

    Application number: US14561204

    Application date: 2014-12-04

    Abstract: According to one general aspect, an apparatus may include a processor, a heterogeneous memory system, and a memory interconnect. The processor may be configured to perform a data access on data stored in a memory system. The heterogeneous memory system may include a plurality of types of storage mediums. Each type of storage medium may be based upon a respective memory technology and may be associated with one or more performance characteristics. The heterogeneous memory system may include both volatile and non-volatile storage mediums. The memory interconnect may be configured to route the data access from the processor to at least one of the storage mediums based, at least in part, upon the one or more performance characteristics associated with the respective memory technologies of the storage mediums.
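
    A minimal sketch of routing by performance characteristics, assuming an invented catalogue of three medium types with made-up latency figures (the medium names, numbers, and function are illustrative, not from the patent):

```python
# Hypothetical media catalogue: each storage medium type is tagged
# with performance characteristics and volatility, as in the abstract.
MEDIUMS = {
    "DRAM": {"latency_ns": 80,     "volatile": True},
    "PMEM": {"latency_ns": 300,    "volatile": False},
    "NAND": {"latency_ns": 80_000, "volatile": False},
}

def route_access(persistent_required, max_latency_ns):
    """Pick the fastest medium satisfying the request's constraints."""
    candidates = [
        name for name, perf in MEDIUMS.items()
        if perf["latency_ns"] <= max_latency_ns
        and (not perf["volatile"] or not persistent_required)
    ]
    if not candidates:
        raise LookupError("no medium satisfies the request")
    return min(candidates, key=lambda n: MEDIUMS[n]["latency_ns"])
```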

    Dynamic thermal budget allocation for multi-processor systems
    Granted invention patent (in force)

    Publication number: US09342136B2

    Publication date: 2016-05-17

    Application number: US14292785

    Application date: 2014-05-30

    CPC classification number: G06F1/329 G06F1/206 Y02D10/16 Y02D10/24

    Abstract: Embodiments of the present inventive concept relate to systems and methods for dynamically allocating and/or redistributing thermal budget to each processor from a total processor thermal budget based on the workload of each processor. In this manner, the processor(s) having a higher workload can receive a higher thermal budget. The allocation can be dynamically adjusted over time. Individual and overall processor performance increases while the total thermal budget is allocated efficiently. By dynamically sharing the total thermal budget of the system, the performance of the system as a whole is increased, thereby lowering, for example, the total cost of ownership (TCO) of datacenters.
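
    One simple policy consistent with the abstract is proportional sharing: each processor's slice of the total budget tracks its share of the total workload, re-computed whenever workloads change. This is a hypothetical policy sketch, not the claimed method:

```python
def allocate_thermal_budget(total_watts, workloads):
    """Split a total thermal budget across processors in proportion
    to each processor's current workload (hypothetical policy)."""
    total_load = sum(workloads)
    if total_load == 0:
        # Idle system: share the budget evenly.
        return [total_watts / len(workloads)] * len(workloads)
    return [total_watts * w / total_load for w in workloads]


# Re-running the function over time implements the dynamic adjustment:
budgets = allocate_thermal_budget(100.0, [3, 1, 0])
```

The busiest processor receives the largest slice, and the full budget is always handed out.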

    DISAGGREGATED MEMORY APPLIANCE
    Invention patent application (under examination, published)

    Publication number: US20160124872A1

    Publication date: 2016-05-05

    Application number: US14867988

    Application date: 2015-09-28

    CPC classification number: G06F13/161 G06F13/4022 G06F13/4068 G06F13/42

    Abstract: Exemplary embodiments provide a disaggregated memory appliance, comprising: a plurality of leaf memory switches that manage one or more memory channels of one or more leaf memory modules; a low-latency memory switch that arbitrarily connects one or more external processors to the plurality of leaf memory modules over a host link; and a low-latency routing protocol used by both the low-latency memory switch and the leaf memory switches that encapsulates memory technology specific semantics by use of tags that uniquely identify specific types of memory technology used in the memory appliance during provisioning, monitoring and operation.
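
    The tag encapsulation can be pictured as a one-byte technology tag prepended to each request, letting a switch route a frame without understanding the technology-specific payload. The tag values, technology names, and functions below are invented for illustration:

```python
# Hypothetical tag scheme: each tag value uniquely identifies a
# memory technology used in the appliance.
TECH_TAGS = {0x1: "DDR4", 0x2: "HBM", 0x3: "NVDIMM"}

def encapsulate(tag, payload):
    """Prefix a one-byte technology tag onto the request payload."""
    return bytes([tag]) + payload

def route(frame, leaf_modules):
    """Leaf switch: strip the tag and deliver the payload to a
    module of the matching technology; return the technology name."""
    tag, payload = frame[0], frame[1:]
    tech = TECH_TAGS[tag]
    leaf_modules.setdefault(tech, []).append(payload)
    return tech


modules = {}
frame = encapsulate(0x2, b"read:0x100")
routed_to = route(frame, modules)
```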

    DISAGGREGATED MEMORY APPLIANCE
    Invention patent application (under examination, published)

    Publication number: US20160117129A1

    Publication date: 2016-04-28

    Application number: US14867961

    Application date: 2015-09-28

    Abstract: Example embodiments provide a disaggregated memory appliance, comprising: a plurality of leaf memory switches that manage one or more memory channels of one or more leaf memory modules; a low-latency memory switch that arbitrarily connects one or more external processors to the plurality of leaf memory modules over a host link; and a management processor that responds to requests from one or more external processors for management, maintenance, configuration and provisioning of the leaf memory modules within the memory appliance.

    Mechanism to autonomously manage SSDs in an array

    Publication number: US12174762B2

    Publication date: 2024-12-24

    Application number: US18373711

    Application date: 2023-09-27

    Abstract: Embodiments of the present invention include a drive-to-drive storage system comprising a host server having a host CPU and a host storage drive, one or more remote storage drives, and a peer-to-peer link connecting the host storage drive to the one or more remote storage drives. The host storage drive includes a processor and a memory, wherein the memory has stored thereon instructions that, when executed by the processor, cause the processor to transfer data from the host storage drive via the peer-to-peer link to the one or more remote storage drives when the host CPU issues a write command.

    Data management scheme in virtualized hyperscale environments

    Publication number: US10725663B2

    Publication date: 2020-07-28

    Application number: US16231229

    Application date: 2018-12-21

    Abstract: According to one general aspect, a memory management unit (MMU) may be configured to interface with a heterogeneous memory system that comprises a plurality of types of storage mediums. Each type of storage medium may be based upon a respective memory technology and may be associated with one or more performance characteristics. The MMU may receive a data access for the heterogeneous memory system. The MMU may also determine at least one of the storage mediums of the heterogeneous memory system to service the data access. The target storage medium may be selected based upon at least one performance characteristic associated with the target storage medium and a quality of service tag that is associated with the virtual machine and that indicates one or more performance characteristics. The MMU may route the data access by the virtual machine to the at least one of the storage mediums.
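
    The quality-of-service tag can be illustrated by mapping each tag to a latency bound and routing a VM's access to the slowest medium that still meets it, keeping faster media free for more demanding tenants. The tag names, latency figures, and selection rule are assumptions made for this sketch:

```python
# Hypothetical QoS classes: each virtual machine carries a tag that
# implies the performance it expects, as in the abstract.
QOS_LATENCY_NS = {"gold": 100, "silver": 1_000, "bronze": 100_000}

MEDIUMS_NS = {"DRAM": 80, "PMEM": 300, "NAND": 80_000}  # latency per medium

def route_vm_access(vm_qos_tag):
    """Route a VM's data access to the slowest medium that still
    meets the latency implied by its QoS tag."""
    limit = QOS_LATENCY_NS[vm_qos_tag]
    candidates = [m for m, lat in MEDIUMS_NS.items() if lat <= limit]
    return max(candidates, key=lambda m: MEDIUMS_NS[m])
```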
