Solid state drive multi-card adapter with integrated processing

    Publication Number: US10996896B2

    Publication Date: 2021-05-04

    Application Number: US17088571

    Filing Date: 2020-11-03

    Abstract: Embodiments of the inventive concept include solid state drive (SSD) multi-card adapters that can include multiple solid state drive cards, which can be incorporated into existing enterprise servers without major architectural changes, thereby enabling the server industry ecosystem to easily integrate evolving solid state drive technologies into servers. The SSD multi-card adapters can include an interface section between various solid state drive cards and drive connector types. The interface section can perform protocol translation, packet switching and routing, data encryption, data compression, management information aggregation, virtualization, and other functions.
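
    The interface-section behavior described above can be pictured as a thin routing layer between one host-facing drive connector and several SSD cards. The sketch below is illustrative only and not taken from the patent: it models just the address-remapping and routing step, and every class and method name in it is hypothetical.

```python
# Illustrative sketch (not from the patent): a toy "interface section" that
# fans one host-facing drive connector out to several SSD cards, remapping
# host block addresses onto card-local addresses. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class SsdCard:
    """One SSD card behind the adapter, owning a slice of the address space."""
    card_id: int
    capacity_blocks: int

    def read_block(self, lba: int) -> bytes:
        # Placeholder for a real transaction (e.g. NVMe over PCIe) to the card.
        return f"card{self.card_id}:block{lba}".encode()


class InterfaceSection:
    """Routes host requests arriving on one connector to multiple cards."""

    def __init__(self, cards):
        self.ranges = []
        start = 0
        for card in cards:
            # Consecutive host LBA range served by this card.
            self.ranges.append((start, start + card.capacity_blocks, card))
            start += card.capacity_blocks

    def translate_and_route(self, host_lba: int) -> bytes:
        # "Protocol translation" is reduced here to remapping the host LBA
        # into a card-local LBA before forwarding the request.
        for lo, hi, card in self.ranges:
            if lo <= host_lba < hi:
                return card.read_block(host_lba - lo)
        raise ValueError(f"host LBA {host_lba} is out of range")


adapter = InterfaceSection([SsdCard(0, 1000), SsdCard(1, 1000)])
print(adapter.translate_and_route(1500))  # served by card 1 as local block 500
```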

    Solid state drive multi-card adapter with integrated processing

    Publication Number: US10747473B2

    Publication Date: 2020-08-18

    Application Number: US16149034

    Filing Date: 2018-10-01

    Abstract: Embodiments of the inventive concept include solid state drive (SSD) multi-card adapters that can include multiple solid state drive cards, which can be incorporated into existing enterprise servers without major architectural changes, thereby enabling the server industry ecosystem to easily integrate evolving solid state drive technologies into servers. The SSD multi-card adapters can include an interface section between various solid state drive cards and drive connector types. The interface section can perform protocol translation, packet switching and routing, data encryption, data compression, management information aggregation, virtualization, and other functions.

    COMPUTING SYSTEM WITH TIERED FETCH MECHANISM AND METHOD OF OPERATION THEREOF
    Invention Application, Pending (Published)

    Publication Number: US20160124859A1

    Publication Date: 2016-05-05

    Application Number: US14724057

    Filing Date: 2015-05-28

    Abstract: A computing system includes: a fetch block configured to provide an initial destination and a way prediction associated with the initial destination for accessing a retrieval target; a way block, coupled to the fetch block, configured to determine a way-fetch result based on the way prediction; a parallel circuit, coupled to the fetch block, configured to determine an access destination based on the initial destination in parallel and concurrently with the way block; and an access block, coupled to the way block and the parallel circuit, configured to access the retrieval target based on comparing the access destination and the way-fetch result.

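    As a rough illustration of the claimed flow, the sketch below models way-predicted cache access in software: the predicted way is probed while the full access destination is computed in parallel, and comparing the two decides whether a fallback lookup is needed. This is a toy model under stated assumptions, not the claimed circuit, and all names in it are hypothetical.

```python
# Illustrative toy model (not the claimed circuit) of way-predicted cache
# access: the predicted way is probed while the full access destination
# (tag and set index) is computed, and the comparison decides whether a
# fallback lookup over every way is needed. All names are hypothetical.

NUM_SETS = 4
NUM_WAYS = 2

# cache[set_index][way] holds (tag, data) or None.
cache = [[None] * NUM_WAYS for _ in range(NUM_SETS)]
cache[1][1] = (42, "payload")


def split_address(addr):
    """Parallel circuit: derive the access destination (tag, set index)."""
    return addr // NUM_SETS, addr % NUM_SETS


def fetch(addr, way_prediction):
    tag, set_index = split_address(addr)       # access destination
    entry = cache[set_index][way_prediction]   # way block: probe one way only
    if entry is not None and entry[0] == tag:  # access block: compare results
        return entry[1]                        # way-prediction hit
    # Prediction missed: fall back to checking every way in the set.
    for way in range(NUM_WAYS):
        entry = cache[set_index][way]
        if entry is not None and entry[0] == tag:
            return entry[1]
    return None  # cache miss


print(fetch(42 * NUM_SETS + 1, way_prediction=1))  # -> "payload"
print(fetch(42 * NUM_SETS + 1, way_prediction=0))  # mispredicted, still found
```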

    System and method for distributed caching

    Publication Number: US11290535B2

    Publication Date: 2022-03-29

    Application Number: US16904448

    Filing Date: 2020-06-17

    Abstract: A system and method for distributed caching, the system having at least one network-connected storage device, a content server, and a control server. The control server is configured to discover the at least one network-connected storage device, collect device information from the at least one network-connected storage device, where the device information comprises a device location, assign each of the at least one network-connected storage device to a device domain based on each device location, and provide the content server with the device information for the one or more network-connected storage devices.
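
    The control-server role described in the abstract can be sketched as a discovery-and-grouping step. The following illustrative snippet is not from the patent; the device records and domain names are invented. It groups discovered devices into domains keyed by their reported location.

```python
# Illustrative sketch (not from the patent): a control server groups the
# devices it has discovered into device domains keyed by each device's
# reported location. Device records and domain names are invented.

from collections import defaultdict

# Device information as the control server might collect it during discovery.
discovered_devices = [
    {"id": "storage-dev-01", "location": "rack-a"},
    {"id": "storage-dev-02", "location": "rack-a"},
    {"id": "storage-dev-03", "location": "rack-b"},
]


def assign_domains(devices):
    """Assign each device to a domain based on its device location."""
    domains = defaultdict(list)
    for device in devices:
        domains["domain-" + device["location"]].append(device)
    return dict(domains)


# The control server would then provide this mapping (the device information
# per domain) to the content server, which directs cache traffic accordingly.
print(assign_domains(discovered_devices))
```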

    System and method for optimizing performance of a solid-state drive using a deep neural network

    Publication Number: US10963394B2

    Publication Date: 2021-03-30

    Application Number: US16012470

    Filing Date: 2018-06-19

    Abstract: A controller of a data storage device includes: a host interface providing an interface to a host computer; a flash translation layer (FTL) translating a logical block address (LBA) to a physical block address (PBA) associated with an input/output (I/O) request; a flash interface providing an interface to flash media to access data stored on the flash media; and one or more deep neural network (DNN) modules for predicting an I/O access pattern of the host computer. The one or more DNN modules provide one or more prediction outputs to the FTL that are associated with one or more past I/O requests and a current I/O request received from the host computer, and the one or more prediction outputs include at least one predicted I/O request following the current I/O request. The FTL prefetches data stored in the flash media that is associated with the at least one predicted I/O request.
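
    The interaction between the DNN predictor and the FTL prefetch path can be pictured with the toy read path below. It is a sketch under the assumption of a simple sequential predictor standing in for the trained network; all function and variable names are hypothetical and not from the patent.

```python
# Illustrative toy read path (not the patented controller): the FTL asks a
# predictor for the LBA likely to follow the current request and prefetches
# it. A real implementation would back predict_next_lba() with inference on a
# trained deep neural network over recent I/O history; here it is a stub that
# assumes sequential access. All names are hypothetical.

from collections import deque

prefetch_cache = {}                 # LBA -> data prefetched ahead of time
io_history = deque(maxlen=8)        # past requests, the predictor's input


def read_flash(lba):
    return "flash-data@%d" % lba    # placeholder for a flash media read


def predict_next_lba(history, current_lba):
    return current_lba + 1          # stand-in for the DNN prediction output


def ftl_read(lba):
    io_history.append(lba)
    data = prefetch_cache.pop(lba, None)
    if data is None:
        data = read_flash(lba)      # not prefetched: go to the media
    # Prefetch the predicted next request so a later ftl_read() can hit it.
    predicted = predict_next_lba(io_history, lba)
    prefetch_cache[predicted] = read_flash(predicted)
    return data


print(ftl_read(100))  # media read; LBA 101 is prefetched
print(ftl_read(101))  # served from the prefetch cache
```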

    SYSTEM AND METHOD FOR DISTRIBUTED CACHING
    Invention Application

    Publication Number: US20200322433A1

    Publication Date: 2020-10-08

    Application Number: US16904448

    Filing Date: 2020-06-17

    Abstract: A system and method for distributed caching, the system having at least one network-connected storage device, a content server, and a control server. The control server is configured to discover the at least one network-connected storage device, collect device information from the at least one network-connected storage device, where the device information comprises a device location, assign each of the at least one network-connected storage device to a device domain based on each device location, and provide the content server with the device information for the one or more network-connected storage devices.

    System and method for distributed caching

    Publication Number: US10728332B2

    Publication Date: 2020-07-28

    Application Number: US15921568

    Filing Date: 2018-03-14

    Abstract: A system and method for distributed caching, the system having at least one network-connected storage device, a content server, and a control server. The control server is configured to discover the at least one network-connected storage device, collect device information from the at least one network-connected storage device, where the device information comprises a device location, assign each of the at least one network-connected storage device to a device domain based on each device location, and provide the content server with the device information for the one or more network-connected storage devices.

    System and method for distributed caching

    Publication Number: US12294626B2

    Publication Date: 2025-05-06

    Application Number: US17706373

    Filing Date: 2022-03-28

    Abstract: A system and method for distributed caching, the system having at least one network-connected storage device, a content server, and a control server. The control server is configured to discover the at least one network-connected storage device, collect device information from the at least one network-connected storage device, where the device information comprises a device location, assign each of the at least one network-connected storage device to a device domain based on each device location, and provide the content server with the device information for the one or more network-connected storage devices.

    Systems and methods for predicting storage device failure using machine learning

    Publication Number: US12260347B2

    Publication Date: 2025-03-25

    Application Number: US18197717

    Filing Date: 2023-05-15

    Abstract: A method for predicting a time-to-failure of a target storage device may include training a machine learning scheme with a time-series dataset, and applying the telemetry data from the target storage device to the machine learning scheme which may output a time-window based time-to-failure prediction. A method for training a machine learning scheme for predicting a time-to-failure of a storage device may include applying a data quality improvement framework to a time-series dataset of operational and failure data from multiple storage devices, and training the scheme with the pre-processed dataset. A method for training a machine learning scheme for predicting a time-to-failure of a storage device may include training the scheme with a first portion of a time-series dataset of operational and failure data from multiple storage devices, testing the machine learning scheme with a second portion of the time-series dataset, and evaluating the machine learning scheme.
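
    One way to picture the time-window formulation is shown below: telemetry observations are labeled by which window their remaining time-to-failure falls into, then split chronologically into training and test portions. This is an illustrative sketch with invented window boundaries and field names, not the patented method.

```python
# Illustrative sketch (not the patented method): telemetry observations are
# labeled by the time window their remaining time-to-failure falls into, then
# split chronologically into training and test portions. Window boundaries
# and field names are invented for the example.

# Hours until failure mapped to a class label.
WINDOWS = [(0, 24, "fails within 0-24h"),
           (24, 72, "fails within 24-72h"),
           (72, 168, "fails within 72-168h")]


def label_time_to_failure(hours_to_failure):
    for lo, hi, name in WINDOWS:
        if lo <= hours_to_failure < hi:
            return name
    return "healthy"  # more than a week from failure, or never fails


# One (telemetry snapshot, hours-to-failure) pair per observation, in time order.
observations = [
    ({"reallocated_sectors": 5, "temperature_c": 41}, 12.0),
    ({"reallocated_sectors": 0, "temperature_c": 35}, 500.0),
    ({"reallocated_sectors": 9, "temperature_c": 44}, 30.0),
    ({"reallocated_sectors": 1, "temperature_c": 36}, 900.0),
]

labeled = [(features, label_time_to_failure(ttf)) for features, ttf in observations]

# Train on the earlier portion of the time series, evaluate on the later one.
split = int(0.75 * len(labeled))
train_set, test_set = labeled[:split], labeled[split:]
print(len(train_set), "training samples;", len(test_set), "test samples")
```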
