-
Publication No.: US11644978B2
Publication Date: 2023-05-09
Application No.: US17509523
Filing Date: 2021-10-25
Applicant: NETAPP, INC.
Inventor: Randolph Sterns , Charles Binford , Joseph Blount , Joseph Moore , William P. Delaney
IPC: G06F3/06 , G06F12/0868
CPC classification number: G06F3/0617 , G06F3/061 , G06F3/0665 , G06F3/0689 , G06F12/0868 , G06F2212/1024 , G06F2212/1032 , G06F2212/263 , G06F2212/282 , G06F2212/286 , G06F2212/312
Abstract: A system shares I/O load between controllers in a high availability system. For writes, a controller determines based on one or more factors which controller will flush batches of data from write-back cache to better distribute the I/O burden. The determination occurs after the local storage controller caches the data, mirrors it, and confirms write complete to the host. Once it is determined which storage controller will flush the cache, the flush occurs and the corresponding metadata at a second layer of indirection is updated by that determined storage controller, whether or not it is identified to the host as the owner of the corresponding volume; the volume owner updates the metadata at a first layer of indirection. For a host read, the controller that owns the volume accesses the metadata from whichever controller flushed the data and reads the data, regardless of which controller performed the flush.
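The flush-assignment step described above can be sketched as follows. This is a toy illustration, not the patented method: the `Controller` class, the `pending_flushes` load metric, and the tie-breaking rule are all assumptions standing in for the "one or more factors" the abstract mentions.

```python
# Hypothetical sketch: after a write is cached, mirrored, and acknowledged,
# either controller in the HA pair may be chosen to flush the batch, based
# on relative load. The load metric here is an invented stand-in.

class Controller:
    def __init__(self, name):
        self.name = name
        self.pending_flushes = 0   # crude proxy for current I/O burden

def choose_flusher(owner, peer):
    """Pick the less-loaded controller to flush a batch of dirty cache."""
    return owner if owner.pending_flushes <= peer.pending_flushes else peer

a, b = Controller("A"), Controller("B")
a.pending_flushes = 5
b.pending_flushes = 2

flusher = choose_flusher(a, b)   # B is less loaded, so B takes the flush
flusher.pending_flushes += 1
```

The point of the sketch is only that the flusher is chosen after the host acknowledgment, so the decision can consider current load rather than static volume ownership.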
-
Publication No.: US20180203609A1
Publication Date: 2018-07-19
Application No.: US15405661
Filing Date: 2017-01-13
Applicant: ARM Limited
Inventor: Steven Douglas KRUEGER
IPC: G06F3/06 , G06F9/455 , G06F12/0846
CPC classification number: G06F3/0608 , G06F3/0644 , G06F3/0653 , G06F3/0685 , G06F9/45558 , G06F9/467 , G06F9/5016 , G06F9/528 , G06F12/0848 , G06F2009/45583 , G06F2212/282
Abstract: Memory transactions are issued to a memory system component specifying a partition identifier allocated to a software execution environment associated with said memory transaction. The memory system component selects one of a plurality of sets of memory system component parameters in dependence on the partition identifier specified by a memory transaction to be handled. The memory system component controls allocation of resources for handling the memory transaction or manages contention for the resources in dependence on the selected set of parameters, or updates performance monitoring data specified by the selected set of parameters in response to handling of said memory transaction. Partition identifier remapping circuitry is provided to remap a virtual partition identifier specified for a memory transaction by a first software execution environment to a physical partition identifier to be specified with the memory transaction issued to the memory system component.
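The remapping and parameter-selection flow above reduces to two table lookups. The sketch below is illustrative only: the table contents, the `max_cache_ways` parameter name, and the function names are assumptions, not the patented circuitry.

```python
# Illustrative sketch: a virtual partition ID chosen by a guest software
# environment is remapped to a physical partition ID, which then selects
# the memory-system component's parameter set for the transaction.

remap_table = {0: 7, 1: 3}            # virtual -> physical partition ID
params_by_partition = {               # per-physical-ID resource parameters
    7: {"max_cache_ways": 4},
    3: {"max_cache_ways": 12},
}

def handle_transaction(virtual_id):
    """Remap the ID, then select the parameter set for resource control."""
    physical_id = remap_table[virtual_id]
    return params_by_partition[physical_id]
```

The indirection lets a hypervisor give each guest its own dense virtual ID space while the memory system only ever sees physical IDs.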
-
Publication No.: US20180181303A1
Publication Date: 2018-06-28
Application No.: US15392760
Filing Date: 2016-12-28
Applicant: Western Digital Technologies, Inc.
Inventor: David Robison Hall
IPC: G06F3/06 , G06F12/0868 , G06F12/0897
CPC classification number: G06F3/061 , G06F3/064 , G06F3/0676 , G06F12/0868 , G06F12/0897 , G06F2212/1016 , G06F2212/152 , G06F2212/214 , G06F2212/282 , G06F2212/305
Abstract: A data storage device may include non-volatile storage media that includes a long-term storage region divided into a plurality of physical regions and a temporary storage region that includes at least two first tier bins. Each logical block address (LBA) span of a plurality of LBA spans may be associated with at least one physical region. Each first tier bin may be associated with a respective LBA subset of the plurality of LBA spans that includes at least two LBA spans and less than all LBA spans. The data storage device may also include a processor configured to receive first data having an LBA from a first LBA subset and second data having an LBA from a second LBA subset, and to write the first data to a first bin associated with the first LBA subset and the second data to a second bin associated with the second LBA subset.
-
Publication No.: US20180157308A1
Publication Date: 2018-06-07
Application No.: US15804785
Filing Date: 2017-11-06
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Mehrdad Nourani
IPC: G06F1/32 , G06F12/0811 , G06F12/0846
CPC classification number: G06F1/3275 , G06F12/0811 , G06F12/0848 , G06F12/0895 , G06F2212/1028 , G06F2212/282 , G06F2212/283 , Y02D10/13
Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to the full-power state before they are accessed, ensuring that no additional delays are incurred waking up drowsy lines. Only cache lines that the DMC determines will be accessed in the immediate future are fully powered, while the others are put in drowsy mode. As a result, leakage power is significantly reduced with no cache performance degradation and minimal hardware overhead, especially at higher associativities: up to 92% static/leakage power savings are achieved.
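The wake-ahead idea can be illustrated with a toy model. The one-line next-line "predictor" below is an invented stand-in for the DMC's access-prediction logic; the real mechanism is hardware, not software.

```python
# Toy sketch: lines predicted to be accessed next are moved to full power
# before the access arrives, so no wake-up delay is ever paid on the
# access path. Everything else stays drowsy to save leakage power.

power = {line: "drowsy" for line in range(4)}   # all lines start drowsy

def predict_next(history):
    """Invented stand-in for the DMC's prediction of the next access."""
    return history[-1] + 1 if history else 0

def access(line, history):
    # the architecture guarantees the line is already at full power
    assert power[line] == "full", "would incur a wake-up delay"
    history.append(line)
    nxt = predict_next(history)
    for l in power:                              # wake only the predicted line
        power[l] = "full" if l == nxt else "drowsy"

power[0] = "full"
history = []
access(0, history)   # predicts line 1 and wakes it; line 0 goes drowsy
```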
-
Publication No.: US09990400B2
Publication Date: 2018-06-05
Application No.: US14922733
Filing Date: 2015-10-26
Applicant: salesforce.com, inc.
Inventor: Barathkumar Sundaravaradan , Christopher James Wall , Lawrence Thomas Lopez , Paul Sydell , Sreeram Duvur , Vijayanth Devadhar
IPC: G06F12/08 , G06F17/30 , G06F12/0842 , G06F12/0846 , G06F12/123
CPC classification number: G06F17/3048 , G06F12/0842 , G06F12/0848 , G06F12/123 , G06F2212/282
Abstract: Techniques are disclosed relating to an in-memory cache. In some embodiments, in response to determining that data for a requested entry is not present in the cache (e.g., because it has been evicted), a computing system is configured to invoke cached program code associated with the entry. In some embodiments, the computing system is configured to provide data generated by the program code in response to requests that indicate the entry. In some embodiments, the computing system is configured to store the generated data in the cache. In various embodiments, this may avoid cache misses and provide configurability in responding to requests to access the cache.
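The miss-handling idea above is essentially a cache whose entries carry their own regeneration code. The sketch below is a minimal stand-in, not the disclosed system; the `register`/`get` names and dictionary storage are assumptions.

```python
# Minimal sketch: each cache entry is registered with program code that can
# regenerate its data, so a miss (e.g. after eviction) invokes that code
# instead of failing.

cache = {}
generators = {}      # entry key -> callable that regenerates the data

def register(key, generator):
    generators[key] = generator

def get(key):
    if key not in cache:                  # miss: entry absent or evicted
        cache[key] = generators[key]()    # invoke the cached program code
    return cache[key]

register("report", lambda: "generated-report")
first = get("report")     # miss: generator runs, result is cached
cache.clear()             # simulate eviction
second = get("report")    # miss again: data regenerated transparently
```

Callers never observe a miss: eviction simply costs a regeneration rather than an error, which is the configurability the abstract refers to.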
-
Publication No.: US09977678B2
Publication Date: 2018-05-22
Application No.: US14594716
Filing Date: 2015-01-12
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Lee Evan Eisen , Hung Qui Le , Jentje Leenstra , Jose Eduardo Moreira , Bruce Joseph Ronchetti , Brian William Thompto , Albert James Van Norstrand, Jr.
IPC: G06F9/38 , G06F9/30 , G06F12/0875 , G06F12/0846
CPC classification number: G06F9/3851 , G06F9/30145 , G06F9/30189 , G06F9/3836 , G06F9/3887 , G06F12/0848 , G06F12/0875 , G06F2212/1048 , G06F2212/282 , G06F2212/452
Abstract: A processor core having multiple parallel instruction execution slices and coupled to multiple dispatch queues by a dispatch routing network provides flexible and efficient use of internal resources. The configuration of the execution slices is selectable so that capabilities of the processor core can be adjusted according to execution requirements for the instruction streams. Two or more execution slices can be combined as super-slices to handle wider data, wider operands and/or vector operations, according to one or more mode control signals that also serve as configuration control signals. The mode control signal is also used to partition clusters of the execution slices within the processor core according to whether single-threaded or multi-threaded operation is selected, and additionally according to a number of hardware threads that are active.
-
Publication No.: US20180121304A1
Publication Date: 2018-05-03
Application No.: US15783537
Filing Date: 2017-10-13
Applicant: Machine Zone, Inc.
Inventor: Eric Liaw , Kevin Xiao , Glen Wong
IPC: G06F11/20 , G06F12/0846 , G06F12/128
CPC classification number: G06F11/2094 , G06F12/0848 , G06F12/128 , G06F17/3048 , G06F2201/805 , G06F2201/82 , G06F2212/282 , G06F2212/621
Abstract: Implementations of this disclosure are directed to systems, devices and methods for implementing a cache data management system. Webserver computers receive cache data requests for data stored at a computer cluster comprising a plurality of master cache data server computers that do not have corresponding slave cache data server computers to store reserve cache data. Proxy computers in communication with the plurality of webserver computers and the computer cluster route the cache data requests from the webserver computers to the computer cluster. Each proxy computer includes a sentinel module to monitor the health of the computer cluster by detecting failures of master cache data server computers and a trask monitor agent to manage the computer cluster. In response to the sentinel module detecting a failed master cache data server computer, the trask monitor agent replaces the failed master cache data server computer with a substantially empty reserve master cache data server computer, which is subsequently populated with the reserve cache data from a master database.
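The failover flow above can be sketched as detect, swap, repopulate. This is a simplified illustration: the data structures, function names, and the single-failure handling are assumptions, and the real sentinel and monitor agent run on the proxy computers.

```python
# Illustrative failover sketch: a sentinel detects a failed master cache
# server, and a monitor agent swaps in an empty reserve server, which is
# then repopulated from the master database (not from a slave, since the
# masters here have no slaves).

masters = {"m1": "healthy", "m2": "failed"}
reserves = ["r1"]                        # substantially empty reserve servers
master_db = {"m2": {"k": "v"}}           # authoritative copy of m2's data

def detect_failures():
    """Sentinel role: report failed master cache servers."""
    return [m for m, state in masters.items() if state == "failed"]

def replace(failed):
    """Monitor-agent role: promote a reserve and repopulate it."""
    reserve = reserves.pop(0)
    masters.pop(failed)
    masters[reserve] = "healthy"
    return reserve, dict(master_db[failed])   # data reloaded from the database

failed = detect_failures()[0]
new_master, data = replace(failed)
```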
-
Publication No.: US20180067859A1
Publication Date: 2018-03-08
Application No.: US15810442
Filing Date: 2017-11-13
Applicant: SAP SE
Inventor: Ivan Schreter
IPC: G06F12/0846 , G06F9/54 , G06F9/52
CPC classification number: G06F12/0848 , G06F9/52 , G06F9/544 , G06F12/0895 , G06F2212/1021 , G06F2212/282 , G06F2212/604 , G06F2212/608
Abstract: A central processing unit (CPU) forming part of a computing device, initiates execution of code associated with each of a plurality of objects used by a worker thread. The CPU has an associated cache that is split into a plurality of slices. It is determined, by a cache slice allocation algorithm for each object, whether any of the slices will be exclusive to or shared by the object. Thereafter, for each object, any slices determined to be exclusive to the object are activated such that the object exclusively uses such slices and any slices determined to be shared by the object are activated such that the object shares or is configured to share such slices.
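A rough sketch of the exclusive-versus-shared slice decision follows. The allocation policy here (first free slice wins, everyone else uses the shared pool) is invented for illustration; the patent leaves the cache slice allocation algorithm's criteria open.

```python
# Illustrative sketch: per object, decide whether a cache slice is
# exclusive to it or shared, then "activate" slices accordingly.

slices = {0: None, 1: None, 2: "shared", 3: "shared"}  # slice -> assignment

def allocate(obj, want_exclusive):
    """Give the object one exclusive slice if requested and available;
    otherwise it uses the shared slices."""
    if want_exclusive:
        for s, owner in slices.items():
            if owner is None:
                slices[s] = obj          # slice now exclusive to this object
                return [s]
    return [s for s, owner in slices.items() if owner == "shared"]

a_slices = allocate("objA", want_exclusive=True)    # gets its own slice
b_slices = allocate("objB", want_exclusive=False)   # uses the shared pool
```

Exclusive slices isolate hot objects from cross-thread interference, while shared slices keep total cache capacity available to everything else.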
-
Publication No.: US09842061B2
Publication Date: 2017-12-12
Application No.: US15452678
Filing Date: 2017-03-07
Applicant: Facebook, Inc.
Inventor: Wyatt Andrew Lloyd , Linpeng Tang , Qi Huang
IPC: G06F12/128 , G06F12/0871
CPC classification number: G06F12/128 , G06F3/06 , G06F3/061 , G06F3/0659 , G06F3/0688 , G06F12/0871 , G06F12/122 , G06F2212/1024 , G06F2212/222 , G06F2212/282 , G06F2212/604 , G06F2212/69 , G06F2212/70
Abstract: Embodiments are disclosed for implementing a priority queue in a storage device, e.g., a solid state drive. At least some of the embodiments can use an in-memory set of blocks to store items until the block is full, and commit the full block to the storage device. Upon storing a full block, a block having a lowest priority can be deleted. An index storing correspondences between items and blocks can be used to update priorities and indicate deleted items. By using the in-memory blocks and index, operations transmitted to the storage device can be reduced.
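The batch-and-evict behavior can be sketched with a list standing in for the SSD. Block size, capacity, and the use of a block's minimum priority as its eviction key are all invented illustrative choices.

```python
# Rough sketch: items accumulate in an in-memory block; a full block is
# committed to the "device" (a list here), and when the device is over
# capacity the committed block with the lowest priority is deleted. This
# batches many item-level operations into a few block-level device writes.

BLOCK_SIZE = 2
MAX_BLOCKS = 2

device = []          # committed blocks, each (min_priority, items)
current = []         # the in-memory block being filled

def insert(item, priority):
    current.append((priority, item))
    if len(current) == BLOCK_SIZE:                    # block full: commit it
        device.append((min(p for p, _ in current), list(current)))
        current.clear()
        if len(device) > MAX_BLOCKS:                  # evict lowest-priority block
            device.remove(min(device, key=lambda b: b[0]))

for item, prio in [("a", 5), ("b", 1), ("c", 9), ("d", 8), ("e", 2), ("f", 3)]:
    insert(item, prio)
# three blocks were committed; the block containing priority-1 "b" was evicted
```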
-
Publication No.: US09842054B2
Publication Date: 2017-12-12
Application No.: US14793972
Filing Date: 2015-07-08
Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
Inventor: Chun-Hsieh Chiu , Hsiang-Ting Cheng
IPC: G06F12/08 , G06F12/0871
CPC classification number: G06F12/0871 , G06F2212/282
Abstract: In a method for processing cache data of a computing device, the storage space of the storage device is divided into sections, and a section number is assigned to each data block according to the section of the storage device to which the block belongs. A field is added for each data block in the storage device to record its section number. When cache data in the cache memory needs to be written back to the storage device, all cache data with a given section number is located among the cached data and written back to the corresponding section of the storage device.
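The section-number scheme can be sketched as a grouped write-back. The section size and the dictionary cache below are illustrative assumptions; the point is only that tagging cached blocks with a section number lets write-back gather one section's blocks together.

```python
# Sketch: each cached block carries the section number of its home region
# on the storage device, so write-back can collect all cached blocks for
# one section and flush them together (fewer, more sequential device writes).

SECTION_SIZE = 100                        # blocks per section (invented value)

def section_of(block_addr):
    return block_addr // SECTION_SIZE     # the recorded section-number field

cache = {10: b"x", 150: b"y", 42: b"z"}   # block address -> cached data

def write_back(section):
    """Collect and remove all cached blocks belonging to one section."""
    hits = {a: d for a, d in cache.items() if section_of(a) == section}
    for a in hits:
        del cache[a]
    return hits

flushed = write_back(0)   # gathers blocks 10 and 42, both in section 0
```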