Abstract:
A method for restricting peripheral device protocols in confidential compute architectures includes: receiving a first address translation request from a peripheral device supporting a first protocol, wherein the first protocol supports cache coherency between the peripheral device and a processor cache; determining that a confidential compute architecture is enabled; and providing, in response to the first address translation request, a response including an indication to the peripheral device not to use the first protocol.
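A minimal sketch of the decision logic described above, assuming hypothetical names such as TranslationRequest, TranslationResponse, and CACHE_COHERENT (standing in for a protocol that keeps the device and the processor cache coherent); none of these identifiers come from the abstract.

```python
from dataclasses import dataclass

# Hypothetical protocol identifiers; CACHE_COHERENT stands in for the
# "first protocol" that supports cache coherency between device and CPU cache.
CACHE_COHERENT = "cache_coherent"
NON_COHERENT = "non_coherent"

@dataclass
class TranslationRequest:
    device_id: int
    virtual_address: int
    requested_protocol: str

@dataclass
class TranslationResponse:
    physical_address: int
    allow_requested_protocol: bool  # indication to the device whether it may use the protocol

def handle_translation_request(req: TranslationRequest,
                               confidential_compute_enabled: bool,
                               translate) -> TranslationResponse:
    """Translate the address; if confidential compute is enabled and the device
    asked for the cache-coherent protocol, indicate that it must not use it."""
    pa = translate(req.virtual_address)
    disallow = (confidential_compute_enabled
                and req.requested_protocol == CACHE_COHERENT)
    return TranslationResponse(physical_address=pa,
                               allow_requested_protocol=not disallow)

# Usage: a trivial identity translation with confidential compute enabled.
resp = handle_translation_request(
    TranslationRequest(device_id=1, virtual_address=0x1000,
                       requested_protocol=CACHE_COHERENT),
    confidential_compute_enabled=True,
    translate=lambda va: va)
print(resp.allow_requested_protocol)  # False: device is told not to use the coherent protocol
```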
Abstract:
A system includes a memory, a processor in communication with the memory, and an operating system (“OS”) executing on the processor. The processor belongs to a processor socket. The OS is configured to pin a workload of a plurality of workloads to the processor belonging to the processor socket. Each respective processor belonging to the processor socket shares a common last-level cache (“LLC”). The OS is also configured to measure an LLC occupancy for the workload, reserve the LLC occupancy for the workload thereby isolating the workload from other respective workloads of the plurality of workloads sharing the processor socket, and maintain isolation by monitoring the LLC occupancy for the workload.
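The operating-system steps can be sketched as follows; pin_to_socket, measure_llc_occupancy, and reserve_llc are hypothetical placeholders for platform mechanisms (CPU affinity and cache-monitoring/allocation features), not APIs named in the abstract.

```python
import time

def pin_to_socket(workload, socket_id):
    """Placeholder: bind the workload's threads to processors in one socket."""
    workload["socket"] = socket_id

def measure_llc_occupancy(workload):
    """Placeholder: return the workload's last-level-cache occupancy in bytes
    (a real system might read a cache-monitoring counter here)."""
    return workload.get("llc_bytes", 0)

def reserve_llc(workload, occupancy_bytes):
    """Placeholder: carve out occupancy_bytes of LLC for the workload so other
    workloads sharing the socket cannot evict it."""
    workload["reserved_llc"] = occupancy_bytes

def isolate_workload(workload, socket_id, monitor_iterations=3, interval_s=0.0):
    pin_to_socket(workload, socket_id)              # 1. pin the workload to one socket
    occupancy = measure_llc_occupancy(workload)     # 2. measure its LLC occupancy
    reserve_llc(workload, occupancy)                # 3. reserve that occupancy
    for _ in range(monitor_iterations):             # 4. maintain isolation by monitoring
        current = measure_llc_occupancy(workload)
        if current > workload["reserved_llc"]:
            reserve_llc(workload, current)          # grow the reservation if needed
        time.sleep(interval_s)
    return workload

print(isolate_workload({"name": "db", "llc_bytes": 4 * 1024 * 1024}, socket_id=0))
```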
Abstract:
A system includes a first control device and a second control device. The first control device is configured to: transmit second data to the second control device at a first time; receive a first notification from the second control device at a second time; transmit the second data to a storage device at a third time; receive a second notification from the storage device at a fourth time; and select a first mode or a second mode based on a comparison of a first period from the first time to the second time with a second period from the third time to the fourth time. In the first mode, the first control device transmits first data to the second control device and the storage device, receives the first notification, and transmits a processing completion notification. In the second mode, the first control device transmits the first data to the storage device, receives the second notification, and transmits the processing completion notification.
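A rough sketch of the mode selection based on the two measured round-trip periods; the callback names and the tie-breaking rule (prefer the first mode when the periods are equal) are illustrative assumptions.

```python
import time

def measure_period(send, wait_for_ack):
    """Return the elapsed time between issuing the transmission and receiving the ack."""
    start = time.monotonic()
    send()
    wait_for_ack()
    return time.monotonic() - start

def select_mode(send_to_second_ctrl, ack_from_second_ctrl,
                send_to_storage, ack_from_storage):
    """Compare the round trip to the second control device (first period) with
    the round trip to the storage device (second period) and pick a mode."""
    first_period = measure_period(send_to_second_ctrl, ack_from_second_ctrl)
    second_period = measure_period(send_to_storage, ack_from_storage)
    # First mode: send the data to both the second control device and storage.
    # Second mode: send the data to the storage device only.
    return "first" if first_period <= second_period else "second"

def process_write(first_data, mode, send_to_second_ctrl, send_to_storage, notify_completion):
    if mode == "first":
        send_to_second_ctrl(first_data)  # waits for the first notification in a real system
        send_to_storage(first_data)
    else:
        send_to_storage(first_data)      # waits for the second notification in a real system
    notify_completion()

# Usage with dummy callbacks: the second-control-device path is slower here,
# so the second mode (storage only) is selected.
mode = select_mode(send_to_second_ctrl=lambda: time.sleep(0.002),
                   ack_from_second_ctrl=lambda: None,
                   send_to_storage=lambda: time.sleep(0.001),
                   ack_from_storage=lambda: None)
print(mode)  # "second"
process_write(b"payload", mode,
              send_to_second_ctrl=lambda d: None,
              send_to_storage=lambda d: None,
              notify_completion=lambda: print("processing completion notification"))
```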
Abstract:
In a transactional memory environment including a first processor and one or more additional processors, a computer-implemented method includes, by the first processor, initializing a time record, listening for zero or more probes from the one or more additional processors, responding to each probe of the zero or more probes, and logging each probe of the zero or more probes to yield a probe log. The computer-implemented method further includes, by the first processor, receiving a probe report directive and, responsive to the probe report directive, generating a probe report indication based on the probe log. The probe report indication denotes whether, since the time record, the first processor has received any of the zero or more probes. The computer-implemented method further includes ending the time record. A corresponding computer program product and computer system are also disclosed.
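A small sketch of the per-processor bookkeeping this implies; ProbeMonitor, on_probe, and probe_report are illustrative names not taken from the abstract.

```python
import time

class ProbeMonitor:
    """Per-processor bookkeeping: start a time record, log incoming probes,
    and answer a probe report directive by indicating whether any probe has
    arrived since the time record was initialized."""

    def __init__(self):
        self.time_record = time.monotonic()  # initialize the time record
        self.probe_log = []                  # logged probes

    def on_probe(self, source_cpu, address):
        # Respond to the probe (response elided) and log it.
        self.probe_log.append((time.monotonic(), source_cpu, address))

    def probe_report(self):
        # Generate the probe report indication from the probe log:
        # True if any probe has been received since the time record.
        return any(ts >= self.time_record for ts, _, _ in self.probe_log)

    def end_time_record(self):
        self.time_record = None

monitor = ProbeMonitor()
print(monitor.probe_report())                 # False: no probes yet
monitor.on_probe(source_cpu=2, address=0x2000)
print(monitor.probe_report())                 # True: a probe arrived since the time record
monitor.end_time_record()
```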
Abstract:
A particular method includes selecting between a first cache access mode and a second cache access mode based on a number of instructions stored at an issue queue, a number of active threads of an execution unit, or both. The method further includes performing a first cache access. When the first cache access mode is selected, performing the first cache access includes performing a tag access and performing a data array access after performing the tag access. When the second cache access mode is selected, performing the first cache access includes performing the tag access in parallel with the data array access.
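A sketch of the two access modes and the selection rule; the thresholds and helper names are assumptions, since the abstract only says the choice depends on the issue-queue depth and/or the number of active threads.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical thresholds for "busy enough to favor the serial mode".
ISSUE_QUEUE_THRESHOLD = 8
ACTIVE_THREAD_THRESHOLD = 2

def select_access_mode(issue_queue_depth, active_threads):
    """Pick the serial mode (tag access first, then data array access) when the
    core is busy, and the parallel mode (tag and data together) otherwise."""
    if issue_queue_depth >= ISSUE_QUEUE_THRESHOLD or active_threads >= ACTIVE_THREAD_THRESHOLD:
        return "serial"
    return "parallel"

def cache_access(mode, tag_access, data_array_access):
    if mode == "serial":
        way = tag_access()                    # tag access first...
        return data_array_access(way)         # ...then a single data-array access
    with ThreadPoolExecutor(max_workers=2) as pool:
        tag_future = pool.submit(tag_access)                 # tag access...
        data_future = pool.submit(data_array_access, None)   # ...in parallel with the data access
        return tag_future.result(), data_future.result()

# Usage with stub lookups: a deep issue queue selects the serial mode.
mode = select_access_mode(issue_queue_depth=12, active_threads=1)
print(mode)  # "serial": defer the data access until the tag result selects a way
print(cache_access(mode, tag_access=lambda: 3,
                   data_array_access=lambda way: f"line from way {way}"))
```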
Abstract:
Various systems and methods for caching and tiering in cloud storage are described herein. A system for managing storage allocation comprises a storage device management system to: maintain an access history of a plurality of storage blocks of solid state drives (SSDs) managed by the storage device management system; and automatically configure each of the plurality of storage blocks to operate in cache mode or tier mode, wherein a ratio of storage blocks operating in cache mode to storage blocks operating in tier mode is based on the access history.
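One way the access history could drive the cache/tier ratio is sketched below; the access-count representation and the "hottest fraction goes to cache mode" policy are illustrative assumptions.

```python
def configure_block_modes(access_history, cache_fraction_for_hot=0.25):
    """Given per-block access counts, put the most frequently accessed fraction
    of blocks in cache mode and the rest in tier mode, so the cache/tier ratio
    follows the access history."""
    ranked = sorted(access_history, key=access_history.get, reverse=True)
    cache_count = max(1, int(len(ranked) * cache_fraction_for_hot))
    return {block: ("cache" if i < cache_count else "tier")
            for i, block in enumerate(ranked)}

# Usage: four blocks; block "b2" is the hottest and ends up in cache mode.
history = {"b0": 3, "b1": 10, "b2": 42, "b3": 1}
print(configure_block_modes(history, cache_fraction_for_hot=0.25))
# {'b2': 'cache', 'b1': 'tier', 'b0': 'tier', 'b3': 'tier'}
```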
Abstract:
A method is described that includes performing the following by a device driver of a non-volatile storage device: caching information targeted for the storage device into a non-volatile region of a system memory without writing the information through to the storage device.
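A toy model of such a driver, assuming a dictionary stands in for the non-volatile system-memory region and another for the backing device; the class and method names are hypothetical.

```python
class NvWriteCacheDriver:
    """Toy driver model: writes aimed at the storage device are cached in a
    non-volatile region of system memory and are NOT written through to the
    device; they can be flushed to the device later (e.g. at eviction time)."""

    def __init__(self):
        self.nv_region = {}       # stands in for the non-volatile system-memory region
        self.storage_device = {}  # stands in for the backing storage device

    def write(self, lba, data):
        # Cache only: the data is already durable in the NV region,
        # so no write-through to the device is needed.
        self.nv_region[lba] = data

    def read(self, lba):
        # Serve from the NV cache first, then fall back to the device.
        return self.nv_region.get(lba, self.storage_device.get(lba))

    def flush(self):
        # Lazily move cached blocks to the storage device.
        self.storage_device.update(self.nv_region)
        self.nv_region.clear()

drv = NvWriteCacheDriver()
drv.write(7, b"hello")
print(drv.read(7), drv.storage_device)  # b'hello' {}  -> not yet written through
drv.flush()
print(drv.storage_device)               # {7: b'hello'}
```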
Abstract:
Embodiments of the disclosure relate to optimizing a memory nest for a workload. Aspects include an operating system determining the cache/memory footprint of each work unit of the workload and assigning a time slice to each work unit of the workload based on the cache/memory footprint of each work unit. Aspects further include executing the workload on a processor by providing each work unit access to the processor for the time slice assigned to each work unit.
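The footprint-to-time-slice mapping might look like the sketch below; the linear scaling rule, the millisecond bounds, and the function names are illustrative choices, not taken from the abstract.

```python
def assign_time_slices(footprints, base_slice_ms=1.0, max_slice_ms=20.0):
    """Give work units with a larger cache/memory footprint a longer time slice,
    so the cost of refilling the memory nest is amortized over more execution
    time. The scaling rule here is an illustrative choice."""
    largest = max(footprints.values())
    return {unit: min(max_slice_ms, base_slice_ms * (1 + 9 * fp / largest))
            for unit, fp in footprints.items()}

def run_workload(footprints, execute):
    slices = assign_time_slices(footprints)
    for unit, slice_ms in slices.items():
        execute(unit, slice_ms)  # give each work unit the processor for its assigned slice

# Usage: footprints in MiB; the 256 MiB work unit gets the longest slice.
footprints = {"wu0": 8, "wu1": 64, "wu2": 256}
run_workload(footprints, execute=lambda unit, ms: print(f"{unit}: {ms:.1f} ms"))
```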
Abstract:
A method for processing a data stream may comprise retrieving a first window from a distributed cache platform based on a first window key, executing a first task and a second task in parallel on a processor resource, and merging a first result and a second result into a stream result based on a relationship between a first task key and a second task key.
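A compact sketch of that pipeline; the dictionary standing in for the distributed cache platform, the thread pool for the parallel tasks, and the "identical task keys" merge relationship are all illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# A dict stands in for the distributed cache platform: windows are stored
# and retrieved by window key.
distributed_cache = {"window:42": [3, 1, 4, 1, 5, 9]}

def process_stream(window_key, first_task, second_task):
    window = distributed_cache[window_key]            # retrieve the first window by its key
    with ThreadPoolExecutor(max_workers=2) as pool:   # execute both tasks in parallel
        first_future = pool.submit(first_task, window)
        second_future = pool.submit(second_task, window)
        first_key, first_result = first_future.result()
        second_key, second_result = second_future.result()
    # Merge the two results into one stream result when the task keys are
    # related (here: identical keys, an illustrative relationship).
    if first_key == second_key:
        return {first_key: (first_result, second_result)}
    return {first_key: first_result, second_key: second_result}

print(process_stream("window:42",
                     first_task=lambda w: ("w42", sum(w)),
                     second_task=lambda w: ("w42", max(w))))
# {'w42': (23, 9)}
```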
Abstract:
An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways, including a cache data way that contains the data block. The data block is fetched and it is determined whether in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, the fetched data block is returned to the requestor and an ECC process is performed for the fetched data block subsequent to returning it. Based on determining that in-line ECC checking and correcting should not be bypassed, the ECC process is performed for the fetched data block and the fetched data block is returned to the requestor subsequent to performing the ECC process.
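The bypass decision can be sketched as below; the callback names, the deferred-check list, and the error handling are hypothetical, chosen only to show the ordering of the return versus the ECC process.

```python
def fetch_block(way_data, way_ecc_ok, bypass_indicator, deferred_checks):
    """Return the data block from a cache data way, bypassing in-line ECC
    checking/correcting when that way's bypass indicator is set."""
    if bypass_indicator:
        # Bypass: return the block immediately and queue the ECC check to run
        # after the data has been delivered to the requestor.
        deferred_checks.append(way_ecc_ok)
        return way_data
    # No bypass: run the ECC process first, then return the (checked) block.
    if not way_ecc_ok():
        raise RuntimeError("uncorrectable ECC error")
    return way_data

deferred = []
block = fetch_block(way_data=b"\x00" * 64,
                    way_ecc_ok=lambda: True,
                    bypass_indicator=True,
                    deferred_checks=deferred)
print(len(block), len(deferred))  # 64 1 -> data returned first, ECC check deferred
for check in deferred:
    check()                       # ECC process performed after returning the block
```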