STORAGE TRANSACTIONS WITH PREDICTABLE LATENCY

    Publication No.: US20200241927A1

    Publication Date: 2020-07-30

    Application No.: US16849915

    Filing Date: 2020-04-15

    Abstract: Examples described herein relate to at least one processor that can execute a polling group to poll for storage transactions associated with a first group of one or more particular queue identifiers, wherein the one or more particular queue identifiers are associated with one or more queues that can be accessed using the polling group and no other polling group. In some examples, the polling group is to execute on a processor that runs no other polling group. In some examples, the at least one processor is configured to: execute a second polling group on a second processor, wherein the second polling group is to poll for storage transactions for a second group of one or more particular queue identifiers that are different than the one or more particular queue identifiers of the first group, wherein the second group of one or more particular queue identifiers is associated with one or more queues that can be accessed using the second polling group and not the first polling group.
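
    The exclusive queue-to-polling-group assignment described above can be illustrated with a minimal C sketch. The structure and function names (polling_group, queue, poll_group) are illustrative assumptions, not the patented implementation: each group owns a disjoint set of queue identifiers and is pinned to its own processor, so no queue is polled by more than one group.

        /* Minimal sketch, not the patent's implementation: each polling group
         * owns a disjoint set of queues and is the only group that polls them. */
        #include <stdio.h>

        #define QUEUES_PER_GROUP 2

        struct queue {
            int id;
            int pending;                      /* storage transactions waiting */
        };

        struct polling_group {
            int cpu;                          /* processor the group is pinned to */
            struct queue *queues[QUEUES_PER_GROUP];   /* exclusively owned queues */
        };

        /* Poll only the queues assigned to this group; no other group touches them. */
        static void poll_group(struct polling_group *pg)
        {
            for (int i = 0; i < QUEUES_PER_GROUP; i++) {
                struct queue *q = pg->queues[i];
                while (q->pending > 0) {
                    printf("cpu %d: completed transaction on queue %d\n", pg->cpu, q->id);
                    q->pending--;
                }
            }
        }

        int main(void)
        {
            struct queue q[4] = { {0, 1}, {1, 2}, {2, 1}, {3, 3} };
            /* Two groups, each on its own core, with non-overlapping queue ids. */
            struct polling_group g0 = { .cpu = 0, .queues = { &q[0], &q[1] } };
            struct polling_group g1 = { .cpu = 1, .queues = { &q[2], &q[3] } };
            poll_group(&g0);
            poll_group(&g1);
            return 0;
        }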

    A Concept for Providing Access to Offloading Circuitry

    Publication No.: US20240296083A1

    Publication Date: 2024-09-05

    Application No.: US18568328

    Filing Date: 2021-11-25

    CPC classification number: G06F9/545

    Abstract: Examples relate to an apparatus, device, method, and computer program for providing access to offloading circuitry of a computer system, to a method and computer program for setting up access to offloading circuitry of a computer system, and to corresponding computer systems. The apparatus comprises circuitry configured to provide a common interface for accessing offloading circuitry of the computer system from one or more software applications. The circuitry is configured to select one of a kernel-space driver and a user-space driver for accessing the offloading circuitry. The circuitry is configured to provide the access to the offloading circuitry for the one or more software applications via the selected driver at runtime.
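
    A minimal C sketch of the runtime driver selection described above, under assumed names (offload_interface, offload_select); it is not the actual interface. Applications call one common entry point, and the layer beneath it binds that entry point to either a kernel-space or a user-space driver at runtime.

        /* Minimal sketch, not the actual interface: a common access layer that
         * picks a kernel-space or user-space driver at runtime and routes
         * offload requests through whichever was selected. */
        #include <stdio.h>

        typedef int (*offload_submit_fn)(const char *job);

        static int kernel_driver_submit(const char *job)
        {
            printf("kernel-space driver: submitting '%s'\n", job);
            return 0;
        }

        static int user_driver_submit(const char *job)
        {
            printf("user-space driver: submitting '%s'\n", job);
            return 0;
        }

        struct offload_interface {
            offload_submit_fn submit;     /* selected driver entry point */
        };

        /* Select a driver at runtime, e.g. based on privileges or availability. */
        static void offload_select(struct offload_interface *ifc, int use_user_space)
        {
            ifc->submit = use_user_space ? user_driver_submit : kernel_driver_submit;
        }

        int main(void)
        {
            struct offload_interface ifc;
            offload_select(&ifc, 1);      /* application is unaware of the choice */
            ifc.submit("compress-block-42");
            offload_select(&ifc, 0);
            ifc.submit("compress-block-43");
            return 0;
        }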

    ACCELERATION FRAMEWORK TO CHAIN IPU ASIC BLOCKS

    Publication No.: US20230205715A1

    Publication Date: 2023-06-29

    Application No.: US18069088

    Filing Date: 2022-12-20

    CPC classification number: G06F13/28 G06F2213/28

    Abstract: A method is described. The method includes receiving a first invocation for a first ASIC block on a semiconductor chip. The first invocation provides a value. The method includes receiving a second invocation for a second ASIC block on the semiconductor chip. The second invocation also provides the value. The method includes determining, because the first and second invocations both provided the value, that the second ASIC block is to operate on output from the first ASIC block. The method includes using a first device driver for the first ASIC block and a second device driver for the second ASIC block to cause the second ASIC block to operate on the output from the first ASIC block.
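
    The chaining decision can be sketched in C: when two invocations carry the same value, the second block is fed the first block's output. The names (invocation, tag, run_block) are illustrative assumptions rather than the framework's API.

        /* Minimal sketch of the chaining idea, not the framework itself: two
         * invocations that carry the same tag are interpreted as a chain, so
         * the second block consumes the first block's output. */
        #include <stdio.h>

        struct invocation {
            const char *block;   /* target ASIC block, e.g. "compress" or "crypto" */
            int tag;             /* shared value that links invocations into a chain */
            char data[64];
        };

        /* Stand-in for the per-block device driver call. */
        static void run_block(struct invocation *inv, const char *input)
        {
            snprintf(inv->data, sizeof(inv->data), "%s(%s)", inv->block, input);
        }

        int main(void)
        {
            struct invocation first  = { .block = "compress", .tag = 7 };
            struct invocation second = { .block = "crypto",   .tag = 7 };

            run_block(&first, "payload");
            /* Same tag on both invocations => chain: feed first's output to second. */
            if (first.tag == second.tag)
                run_block(&second, first.data);
            else
                run_block(&second, "payload");

            printf("result: %s\n", second.data);   /* crypto(compress(payload)) */
            return 0;
        }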

    FAST LBA/PBA TABLE REBUILD
    Invention Application

    Publication No.: US20230076365A1

    Publication Date: 2023-03-09

    Application No.: US17987553

    Filing Date: 2022-11-15

    Abstract: A method is described. The method includes constructing a bitmap having a first dimension organized into bins of logical block addresses (LBA bins) and a second dimension organized into bins of physical block addresses (PBA bins). Coordinates of the bitmap indicate whether respective physical blocks of non-volatile memory within one or more SSDs that fall within a particular PBA bin are mapped to by an LBA that falls within a particular one of the LBA bins. The method includes using the bitmap during a rebuild of an LBA bin of an LBA/PBA table to avoid reading metadata for physical blocks that are not mapped to by an LBA that falls within the LBA bin.
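
    A minimal C sketch of the bin bitmap, with assumed bin sizes and names (record_mapping, rebuild_lba_bin); it is not SSD firmware. A set bit at (LBA bin, PBA bin) means that some LBA in that LBA bin maps into that PBA bin, so a rebuild of one LBA bin reads metadata only from PBA bins whose bit is set.

        /* Minimal sketch of the bin bitmap idea: a bit at (lba_bin, pba_bin)
         * records that some LBA in that LBA bin maps to some PBA in that PBA
         * bin; the rebuild skips PBA bins with no recorded mapping. */
        #include <stdint.h>
        #include <stdio.h>

        #define LBA_BINS 8
        #define PBA_BINS 8

        static uint8_t bitmap[LBA_BINS][PBA_BINS];   /* 1 = bin pair has a mapping */

        static void record_mapping(int lba, int pba, int lbas_per_bin, int pbas_per_bin)
        {
            bitmap[lba / lbas_per_bin][pba / pbas_per_bin] = 1;
        }

        /* Rebuild one LBA bin: skip every PBA bin with no recorded mapping. */
        static void rebuild_lba_bin(int lba_bin)
        {
            for (int pba_bin = 0; pba_bin < PBA_BINS; pba_bin++) {
                if (!bitmap[lba_bin][pba_bin]) {
                    printf("LBA bin %d: skip PBA bin %d (no metadata read)\n",
                           lba_bin, pba_bin);
                    continue;
                }
                printf("LBA bin %d: read metadata of physical blocks in PBA bin %d\n",
                       lba_bin, pba_bin);
            }
        }

        int main(void)
        {
            /* Example mappings, assuming 100 LBAs and 100 PBAs per bin. */
            record_mapping(150, 420, 100, 100);   /* LBA bin 1 -> PBA bin 4 */
            record_mapping(180, 710, 100, 100);   /* LBA bin 1 -> PBA bin 7 */
            rebuild_lba_bin(1);
            return 0;
        }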

    ADAPTIVE PROCESSOR RESOURCE UTILIZATION
    Invention Application

    Publication No.: US20200218676A1

    Publication Date: 2020-07-09

    Application No.: US16825538

    Filing Date: 2020-03-20

    Abstract: Examples herein relate to polling for input/output transactions of a network interface, a storage device, or any peripheral device. Some examples monitor clock cycles spent checking for a presence of input/output (I/O) events and processing I/O events, and monitor clock cycles spent checking for the presence of I/O events without completing an I/O event. Central processing unit (CPU) core utilization can be based on clock cycles spent checking for a presence of I/O events and processing I/O events and clock cycles spent checking for presence of I/O events without completion of an I/O event. For example, if core utilization is below a threshold, the frequency of the core used to perform polling of I/O events can be reduced. If core utilization is at or above the threshold, the frequency of the core used to perform polling of I/O events can be increased.
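
    The utilization metric and threshold-based frequency adjustment can be sketched in C as follows. The constants, step size, and names (poll_stats, adjust_frequency_mhz) are illustrative assumptions, not the described implementation.

        /* Minimal sketch of the utilization metric: utilization is the share
         * of polling cycles that found and processed I/O events, and the
         * polling core's frequency is lowered or raised around a threshold. */
        #include <stdint.h>
        #include <stdio.h>

        struct poll_stats {
            uint64_t busy_cycles;   /* cycles spent finding and processing I/O events */
            uint64_t idle_cycles;   /* cycles spent checking with no I/O event found  */
        };

        static double utilization(const struct poll_stats *s)
        {
            uint64_t total = s->busy_cycles + s->idle_cycles;
            return total ? (double)s->busy_cycles / (double)total : 0.0;
        }

        static int adjust_frequency_mhz(int current_mhz, double util, double threshold)
        {
            if (util < threshold)
                return current_mhz - 100;   /* polling core mostly idle: slow it down */
            return current_mhz + 100;       /* polling core busy: speed it up */
        }

        int main(void)
        {
            struct poll_stats s = { .busy_cycles = 2000, .idle_cycles = 8000 };
            double util = utilization(&s);                   /* 0.20 */
            int freq = adjust_frequency_mhz(2000, util, 0.5);
            printf("utilization %.2f -> polling core frequency %d MHz\n", util, freq);
            return 0;
        }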

    DIRECT ACCESS TO HARDWARE QUEUES OF A STORAGE DEVICE BY SOFTWARE THREADS

    Publication No.: US20180060256A1

    Publication Date: 2018-03-01

    Application No.: US15253849

    Filing Date: 2016-08-31

    CPC classification number: G06F13/1642 G06F9/3009 G06F16/13 G06F16/1774

    Abstract: Methods of accessing hardware input/output (I/O) queues by software threads performing operations on a storage system, such as a filesystem, are described herein. In one embodiment, a method for performing I/O operations on a filesystem stored at least in part on a storage device involves creating a channel to map exclusively to one hardware I/O queue of the storage device. The channel includes an instance of a software primitive in the filesystem to route I/O requests to access objects in the filesystem from an application executing on one or more threads to the one hardware I/O queue to which the channel maps. The method also involves submitting the I/O requests to access the objects in the filesystem from at most one thread of the application at a given time to the one hardware I/O queue using the channel.
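
    A minimal C sketch of the channel abstraction, assuming illustrative names (channel, hw_queue, channel_submit); it is not the filesystem's API. Each channel maps to exactly one hardware I/O queue, and each thread submits its filesystem I/O requests through its own channel.

        /* Minimal sketch of the channel idea: a channel is a software handle
         * bound to exactly one hardware I/O queue, and each thread submits
         * filesystem I/O requests only through its own channel. */
        #include <stdio.h>

        struct hw_queue {
            int id;
            int depth;                  /* requests currently queued */
        };

        struct channel {
            struct hw_queue *queue;     /* the single hardware queue this channel maps to */
        };

        /* Route a request from the calling thread's channel to its dedicated queue. */
        static void channel_submit(struct channel *ch, const char *object, const char *op)
        {
            ch->queue->depth++;
            printf("queue %d: %s '%s' (depth now %d)\n",
                   ch->queue->id, op, object, ch->queue->depth);
        }

        int main(void)
        {
            struct hw_queue q0 = { .id = 0 }, q1 = { .id = 1 };
            /* One channel per thread, each mapped exclusively to one hardware queue. */
            struct channel thread0_channel = { .queue = &q0 };
            struct channel thread1_channel = { .queue = &q1 };

            channel_submit(&thread0_channel, "/fs/fileA", "read");
            channel_submit(&thread1_channel, "/fs/fileB", "write");
            return 0;
        }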
