SYNCHRONIZING ACCESS TO DATA IN SHARED MEMORY
    33. Invention application (In force)

    Publication number: US20150242320A1

    Publication date: 2015-08-27

    Application number: US14192227

    Filing date: 2014-02-27

    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation. Responsive to not canceling the reservation during the core reservation tracking interval, the processor core determines whether a store-conditional operation succeeds by reference to a pass/fail indication provided by the store-in lower level cache.

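    To make the sequencing concrete, here is a minimal C sketch of the reservation flow the abstract describes: a core-side flag set by load-reserve, a hand-off point where the lower-level cache assumes tracking, cancellation on a conflicting snoop, and a store-conditional that otherwise defers to the cache's pass/fail indication. All names (core_rsv_flag, l2_assumes_tracking, and so on) are illustrative assumptions, not the patented hardware design.

    /* Software sketch only; models the core-side reservation tracking idea. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        bool     core_rsv_flag;   /* reservation flag set by load-reserve    */
        bool     l2_tracking;     /* lower-level cache has assumed tracking  */
        uint64_t rsv_addr;        /* reserved load target address            */
    } core_state_t;

    /* Core executes load-reserve: set the flag and begin the core
     * reservation tracking interval (the lower-level cache has not yet
     * acknowledged). */
    static void load_reserve(core_state_t *c, uint64_t addr) {
        c->core_rsv_flag = true;
        c->l2_tracking   = false;
        c->rsv_addr      = addr;
    }

    /* Lower-level cache signals it has assumed responsibility for tracking. */
    static void l2_assumes_tracking(core_state_t *c) {
        c->l2_tracking = true;
    }

    /* Conflicting snooped operation during the core tracking interval:
     * cancel the reservation by resetting the core reservation flag. */
    static void snoop_invalidate(core_state_t *c, uint64_t addr) {
        if (!c->l2_tracking && c->core_rsv_flag && addr == c->rsv_addr)
            c->core_rsv_flag = false;
    }

    /* Store-conditional: fails if the core-side reservation was cancelled;
     * otherwise the lower-level cache's pass/fail indication (modelled here
     * as a boolean argument) decides the outcome. */
    static bool store_conditional(core_state_t *c, uint64_t addr, bool l2_pass) {
        if (!c->core_rsv_flag || addr != c->rsv_addr)
            return false;
        c->core_rsv_flag = false;
        return l2_pass;
    }

    int main(void) {
        core_state_t c = {0};

        /* Case 1: conflict arrives before L2 takes over -> SC fails. */
        load_reserve(&c, 0x1000);
        snoop_invalidate(&c, 0x1000);
        printf("SC after conflict: %d\n", store_conditional(&c, 0x1000, true));

        /* Case 2: L2 assumes tracking -> SC outcome follows L2's indication. */
        load_reserve(&c, 0x2000);
        l2_assumes_tracking(&c);
        printf("SC with L2 pass:   %d\n", store_conditional(&c, 0x2000, true));
        return 0;
    }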

    COHERENCY OVERCOMMIT
    34. Invention application (In force)

    Publication number: US20150178205A1

    Publication date: 2015-06-25

    Application number: US14311499

    Filing date: 2014-06-23

    CPC classification number: G06F13/4031 G06F13/362 Y02D10/14 Y02D10/151

    Abstract: One or more systems, devices, methods, and/or processes described can receive, via an interconnect, messages from processing nodes, and a first portion of the messages can displace a second portion of the messages based on priorities of the first portion of messages or based on expiration times of the second portion of messages. In one example, the second portion of messages can be stored via a buffer of a fabric controller (FBC) of the interconnect, and the first portion of messages, associated with higher priorities than the second portion of messages, can displace the second portion of messages in the buffer. For instance, the second portion of messages can include speculative commands. In another example, the second portion of messages can be stored via the buffer, and the first portion of messages can displace the second portion of messages, associated with expiration times, based on those expiration times.

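    As a rough illustration of the displacement policy described above, the following C sketch models a fixed-size message buffer in which an incoming message can displace an expired entry or a lower-priority (e.g. speculative) entry. The buffer size, field names, and victim-selection order are assumptions made for the example, not the FBC design itself.

    /* Toy model of priority/expiration-based displacement in a small buffer. */
    #include <stdbool.h>
    #include <stdio.h>

    #define BUF_SLOTS 4

    typedef struct {
        bool valid;
        int  priority;     /* higher value = higher priority                */
        int  expires_at;   /* cycle after which the entry may be displaced  */
        int  id;
    } msg_t;

    static msg_t buf[BUF_SLOTS];

    /* Try to insert a message at the given cycle.  Prefer an empty slot,
     * then an expired entry, then any entry of strictly lower priority. */
    static bool insert_msg(msg_t m, int now) {
        int victim = -1;
        for (int i = 0; i < BUF_SLOTS; i++) {
            if (!buf[i].valid) { victim = i; break; }
            if (now > buf[i].expires_at)
                victim = i;                                   /* expired    */
            else if (victim < 0 && buf[i].priority < m.priority)
                victim = i;                                   /* lower prio */
        }
        if (victim < 0)
            return false;              /* nothing displaceable: reject      */
        buf[victim] = m;
        return true;
    }

    int main(void) {
        /* Fill the buffer with low-priority speculative commands... */
        for (int i = 0; i < BUF_SLOTS; i++)
            insert_msg((msg_t){ .valid = true, .priority = 1,
                                .expires_at = 10, .id = i }, 0);
        /* ...then a higher-priority message displaces one of them. */
        bool ok = insert_msg((msg_t){ .valid = true, .priority = 5,
                                      .expires_at = 20, .id = 99 }, 5);
        printf("high-priority insert: %s\n",
               ok ? "displaced an entry" : "rejected");
        return 0;
    }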

    TRANSIENT CONDITION MANAGEMENT UTILIZING A POSTED ERROR DETECTION PROCESSING PROTOCOL
    36. Invention application (In force)

    Publication number: US20140304558A1

    Publication date: 2014-10-09

    Application number: US14037792

    Filing date: 2013-09-26

    CPC classification number: G06F11/0751 G06F11/073 G06F11/1004

    Abstract: In a data processing system, a memory subsystem detects whether or not at least one potentially transient condition is present that would prevent timely servicing of one or more memory access requests directed to the associated system memory. In response to detecting at least one such potentially transient condition, the memory subsystem identifies a first read request affected by the at least one potentially transient condition. In response to identifying the first read request, the memory subsystem signals the request source to issue a second read request for the same target address by transmitting to the request source dummy data and a data error indicator.

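    The following C sketch illustrates the posted error detection idea in the abstract: when a transient condition prevents timely servicing, the memory side returns dummy data with a data error indicator, and the requester responds by reissuing the read. The function and flag names are hypothetical.

    /* Minimal model of "dummy data + data error indicator => reissue read". */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t data;
        bool     data_error;  /* set => data is dummy, please reissue request */
    } read_resp_t;

    static bool transient_condition_present = true;  /* e.g. DRAM recalibration */

    static uint64_t real_memory_read(uint64_t addr) { return addr * 2; /* stub */ }

    /* Memory subsystem: post an error response instead of stalling the read. */
    static read_resp_t service_read(uint64_t addr) {
        if (transient_condition_present)
            return (read_resp_t){ .data = 0, .data_error = true };
        return (read_resp_t){ .data = real_memory_read(addr), .data_error = false };
    }

    int main(void) {
        uint64_t addr = 0x80;
        read_resp_t r = service_read(addr);
        while (r.data_error) {                  /* requester reissues the read */
            transient_condition_present = false;    /* condition clears        */
            r = service_read(addr);
        }
        printf("data = %llu\n", (unsigned long long)r.data);
        return 0;
    }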

    TRANSACTION CHECK INSTRUCTION FOR MEMORY TRANSACTIONS

    Publication number: US20140047196A1

    Publication date: 2014-02-13

    Application number: US13777511

    Filing date: 2013-02-26

    CPC classification number: G06F3/0668 G06F9/467 G06F12/0811

    Abstract: A processing unit of a data processing system having a shared memory system executes a memory transaction including a transactional store instruction that causes the processing unit to make a conditional update to a target memory block of the shared memory system, conditioned on successful commitment of the memory transaction. The memory transaction further includes a transaction check instruction. In response to executing the transaction check instruction, the processing unit determines, prior to conclusion of the memory transaction, whether the target memory block of the shared memory system was modified after the conditional update caused by execution of the transactional store instruction. In response to determining that the target memory block has been modified, a condition register within the processing unit is set to indicate a conflict for the memory transaction.
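    A rough software analogy of the transaction check step is sketched below in C: after a conditional (transactional) store, a check detects whether another agent modified the target block before commit and records a conflict in a flag standing in for the condition register. The version-counter scheme is an assumption for illustration, not the patented hardware mechanism.

    /* Software analogy of a transaction-check over one memory block. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t value;
        uint64_t version;     /* bumped on every committed modification      */
    } mem_block_t;

    typedef struct {
        mem_block_t *target;
        uint64_t     pending_value;   /* conditional update, not yet committed */
        uint64_t     version_at_store;
        bool         conflict;        /* stands in for the condition register  */
    } txn_t;

    /* Transactional store: record the conditional update and the version seen. */
    static void txn_store(txn_t *t, mem_block_t *blk, uint64_t v) {
        t->target           = blk;
        t->pending_value    = v;
        t->version_at_store = blk->version;
        t->conflict         = false;
    }

    /* Transaction check: was the block modified after the conditional update? */
    static void txn_check(txn_t *t) {
        if (t->target->version != t->version_at_store)
            t->conflict = true;
    }

    static bool txn_commit(txn_t *t) {
        txn_check(t);
        if (t->conflict)
            return false;              /* abort on detected conflict          */
        t->target->value = t->pending_value;
        t->target->version++;
        return true;
    }

    int main(void) {
        mem_block_t blk = { .value = 0, .version = 0 };
        txn_t t;
        txn_store(&t, &blk, 42);
        blk.value = 7; blk.version++;   /* conflicting write by another agent */
        txn_check(&t);
        printf("commit: %s\n", txn_commit(&t) ? "ok" : "conflict, aborted");
        return 0;
    }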

    MEMORY MIGRATION WITHIN A MULTI-HOST DATA PROCESSING ENVIRONMENT

    Publication number: US20230036054A1

    Publication date: 2023-02-02

    Application number: US17388993

    Filing date: 2021-07-29

    Abstract: A destination host includes a processor core, a system fabric, a memory system, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link, to a source host with which the destination host is non-coherent. The destination host migrates, via the communication link, a state of a logical partition from the source host to the destination host and page table entries for translating addresses of a dataset of the logical partition from the source host to the destination host. After migrating the state and page table entries, the destination host initiates execution of the logical partition on the processor core while at least a portion of the dataset of the logical partition resides in the memory system of the source host and migrates, via the communication link, the dataset of the logical partition to the memory system of the destination host.
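    The following C sketch gives a simplified, post-copy-style picture of the behavior described: execution proceeds on the destination host while pages still reside on the source and are pulled across the link on demand. The page table, link controller, and page sizes here are illustrative assumptions.

    /* Simplified demand migration of pages from a non-coherent source host. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NPAGES    4
    #define PAGE_SIZE 16

    static uint8_t source_mem[NPAGES][PAGE_SIZE];   /* source host memory      */
    static uint8_t dest_mem[NPAGES][PAGE_SIZE];     /* destination host memory */
    static bool    page_local[NPAGES];              /* has the page migrated?  */

    /* "Link controller": copy one page from the source host over the link. */
    static void migrate_page(int p) {
        memcpy(dest_mem[p], source_mem[p], PAGE_SIZE);
        page_local[p] = true;
    }

    /* Access on the destination: pull the page over the link if still remote. */
    static uint8_t read_byte(int page, int off) {
        if (!page_local[page])
            migrate_page(page);
        return dest_mem[page][off];
    }

    int main(void) {
        for (int p = 0; p < NPAGES; p++)
            memset(source_mem[p], p + 1, PAGE_SIZE);   /* dataset on source    */
        /* Logical partition already executing on the destination core: */
        printf("byte from page 2: %d\n", read_byte(2, 0));
        printf("page 2 local: %d, page 3 local: %d\n",
               page_local[2], page_local[3]);
        return 0;
    }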

    ACCELERATED PROCESSING OF STREAMS OF LOAD-RESERVE REQUESTS

    Publication number: US20220156194A1

    Publication date: 2022-05-19

    Application number: US16950511

    Filing date: 2020-11-17

    Abstract: A processing unit for a data processing system includes a processor core that issues memory access requests and a cache memory coupled to the processor core. The cache memory includes a reservation circuit that tracks reservations established by the processor core via load-reserve requests and a plurality of read-claim (RC) state machines for servicing memory access requests of the processor core. The cache memory, responsive to receipt from the processor core of a store-conditional request specifying a store target address, allocates an RC state machine among the plurality of RC state machines to process the store-conditional request and transfers responsibility for tracking a reservation for the store target address from the reservation circuit to the RC state machine.
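    Below is a small C sketch of the hand-off the abstract describes: a reservation circuit tracks a reservation established by a load-reserve, and when a store-conditional arrives an RC machine is allocated and takes over responsibility for tracking it. The structure and field names are assumptions for illustration.

    /* Reservation circuit to RC-machine tracking hand-off, modelled in C. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_RC 4

    typedef struct { bool valid; uint64_t addr; } reservation_t;

    typedef struct {
        bool     busy;
        bool     tracking_rsv;    /* RC machine now owns reservation tracking */
        uint64_t addr;
    } rc_machine_t;

    static reservation_t rsv;               /* reservation circuit (one core)  */
    static rc_machine_t  rc[NUM_RC];

    static void load_reserve(uint64_t addr) {
        rsv = (reservation_t){ true, addr };
    }

    /* Store-conditional: allocate an idle RC machine and transfer tracking. */
    static int dispatch_store_conditional(uint64_t addr) {
        for (int i = 0; i < NUM_RC; i++) {
            if (!rc[i].busy) {
                rc[i] = (rc_machine_t){
                    .busy         = true,
                    .tracking_rsv = rsv.valid && rsv.addr == addr,
                    .addr         = addr };
                rsv.valid = false;          /* circuit no longer tracks it     */
                return i;
            }
        }
        return -1;                          /* no RC machine free: retry later */
    }

    int main(void) {
        load_reserve(0x40);
        int m = dispatch_store_conditional(0x40);
        printf("RC %d tracking reservation: %d\n",
               m, m >= 0 && rc[m].tracking_rsv);
        return 0;
    }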

    CACHE SNOOPING MODE EXTENDING COHERENCE PROTECTION FOR CERTAIN REQUESTS

    Publication number: US20210182198A1

    Publication date: 2021-06-17

    Application number: US16717868

    Filing date: 2019-12-17

    Abstract: A cache memory includes a data array, a directory of contents of the data array that specifies coherence state information, and snoop logic that processes operations snooped from a system fabric by reference to the data array and the directory. The snoop logic, responsive to snooping on the system fabric a request of a first flush/clean memory access operation that specifies a target address, determines whether or not the cache memory has coherence ownership of the target address. Based on determining that the cache memory has coherence ownership of the target address, the snoop logic services the request and thereafter enters a referee mode. While in the referee mode, the snoop logic protects a memory block identified by the target address against conflicting memory access requests by a plurality of processor cores until conclusion of a second flush/clean memory access operation that specifies the target address.
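    As a concrete illustration of the snooping behavior described, the C sketch below models a snooper that, after servicing a flush/clean request for a line it owns, enters a referee mode and forces conflicting requests for that address to retry until a second flush/clean for the address concludes. The enumerators and the retry convention are illustrative assumptions.

    /* Toy snooper state machine with a "referee mode" protection window. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { SNOOP_NORMAL, SNOOP_REFEREE } snoop_mode_t;
    typedef enum { REQ_FLUSH_CLEAN, REQ_OTHER } req_type_t;
    typedef enum { RESP_SERVICED, RESP_RETRY, RESP_IGNORED } resp_t;

    static snoop_mode_t mode = SNOOP_NORMAL;
    static uint64_t     protected_addr;
    static bool         owns_line = true;   /* coherence ownership (stubbed)  */

    static resp_t snoop(req_type_t type, uint64_t addr) {
        if (mode == SNOOP_REFEREE && addr == protected_addr) {
            if (type == REQ_FLUSH_CLEAN) {  /* second flush/clean concludes   */
                mode = SNOOP_NORMAL;
                return RESP_SERVICED;
            }
            return RESP_RETRY;              /* protect against conflicts      */
        }
        if (type == REQ_FLUSH_CLEAN && owns_line) {
            protected_addr = addr;          /* service, then enter referee    */
            mode = SNOOP_REFEREE;
            return RESP_SERVICED;
        }
        return RESP_IGNORED;
    }

    int main(void) {
        printf("%d\n", snoop(REQ_FLUSH_CLEAN, 0x100));  /* serviced, referee  */
        printf("%d\n", snoop(REQ_OTHER, 0x100));        /* retried            */
        printf("%d\n", snoop(REQ_FLUSH_CLEAN, 0x100));  /* concludes, normal  */
        return 0;
    }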
