Silent store detection and recording in memory storage

    Publication No.: US09588768B2

    Publication Date: 2017-03-07

    Application No.: US14948725

    Filing Date: 2015-11-23

    CPC classification number: G06F9/30043 G06F9/3863 G06F11/00 G06F11/30

    Abstract: An aspect includes receiving a write request that includes a memory address and write data. Stored data is read from a memory location at the memory address. Based on determining that the memory location was not previously modified, the stored data is compared to the write data. Based on the stored data matching the write data, the write request is completed without writing the write data to the memory, and a corresponding silent store bit in a silent store bitmap is set. Based on the stored data not matching the write data, the write data is written to the memory location, the silent store bit is reset, and a corresponding modified bit is set. At least one of an application and an operating system is provided access to the silent store bitmap.
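
    For orientation, the write-path behavior described in this abstract can be modeled with a short sketch. The following C fragment is a simplified software model, not the patented hardware: the word-granularity memory array, the bitmap names, and the sizes are assumptions made purely for illustration.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_WORDS 64  /* hypothetical memory size: one tracking bit per word */

        static uint64_t mem[NUM_WORDS];
        static bool silent_bitmap[NUM_WORDS];    /* set when a write matched the stored data */
        static bool modified_bitmap[NUM_WORDS];  /* set when a write actually changed memory */

        /* Handle one write request, detecting silent stores as the abstract describes. */
        void handle_write(size_t addr, uint64_t write_data)
        {
            uint64_t stored = mem[addr];  /* read the stored data at the address */

            if (!modified_bitmap[addr] && stored == write_data) {
                /* Silent store: complete the request without writing, and record it. */
                silent_bitmap[addr] = true;
                return;
            }

            /* Data differs (or the location was already modified): perform the write. */
            mem[addr] = write_data;
            silent_bitmap[addr] = false;
            modified_bitmap[addr] = true;
        }

        int main(void)
        {
            handle_write(3, 0);   /* matches the zero-initialized contents: silent store */
            handle_write(3, 42);  /* differs: real write, modified bit set */
            printf("silent=%d modified=%d value=%llu\n",
                   silent_bitmap[3], modified_bitmap[3], (unsigned long long)mem[3]);
            return 0;
        }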

    Silent store detection and recording in memory storage

    Publication No.: US09588767B2

    Publication Date: 2017-03-07

    Application No.: US14749680

    Filing Date: 2015-06-25

    CPC classification number: G06F9/30043 G06F9/3863 G06F11/00 G06F11/30

    Abstract: An aspect includes receiving a write request that includes a memory address and write data. Stored data is read from a memory location at the memory address. Based on determining that the memory location was not previously modified, the stored data is compared to the write data. Based on the stored data matching the write data, the write request is completed without writing the write data to the memory, and a corresponding silent store bit in a silent store bitmap is set. Based on the stored data not matching the write data, the write data is written to the memory location, the silent store bit is reset, and a corresponding modified bit is set. At least one of an application and an operating system is provided access to the silent store bitmap.
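
    The last sentence of the abstract notes that an application or operating system can be given access to the silent store bitmap. One hedged illustration of why that access is useful: software could consult the tracking bits to skip regions whose writes never changed memory, for example when deciding what to checkpoint or copy. The bitmap layout, the region size, and the helper below are hypothetical, not an interface defined by the patent.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        #define NUM_WORDS 64
        #define WORDS_PER_REGION 16

        /* Assumed software-visible view of the tracking bitmaps (illustrative only). */
        static bool silent_bitmap[NUM_WORDS];
        static bool modified_bitmap[NUM_WORDS];

        /* A region needs copying only if some word in it was genuinely modified;
         * words that saw only silent stores (or no stores at all) can be skipped. */
        static bool region_was_actually_modified(size_t region)
        {
            size_t base = region * WORDS_PER_REGION;
            for (size_t i = 0; i < WORDS_PER_REGION; i++) {
                if (modified_bitmap[base + i])
                    return true;
            }
            return false;
        }

        int main(void)
        {
            silent_bitmap[5] = true;    /* region 0 saw only a silent store */
            modified_bitmap[20] = true; /* region 1 saw a real modification */

            for (size_t r = 0; r < NUM_WORDS / WORDS_PER_REGION; r++)
                printf("region %zu: %s\n", r,
                       region_was_actually_modified(r) ? "copy" : "skip");
            return 0;
        }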

    Implementing selective cache injection
    Granted Patent (In Force)

    Publication No.: US09582427B2

    Publication Date: 2017-02-28

    Application No.: US14841610

    Filing Date: 2015-08-31

    Abstract: A method, system, and memory controller for implementing memory hierarchy placement decisions in a memory system, including direct routing of arriving data into a main memory system and selective injection of the data or computed results into a processor cache in a computer system. A memory controller, or a processing element in a memory system, selectively drives placement of data into other levels of the memory hierarchy. The decision to inject into the hierarchy can be triggered by the arrival of data from an input/output (IO) device, by computation, or by a directive of an in-memory processing element.

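    As a rough software model of the placement decision described above: arriving data is either injected into the processor cache or routed directly to main memory, depending on its source. The enum names, the policy in should_inject_into_cache, and the 4 KiB threshold are assumptions chosen for this sketch, not criteria taken from the patent.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Possible origins of arriving data, per the abstract. */
        typedef enum {
            SRC_IO_DEVICE,     /* data arriving from an I/O device */
            SRC_COMPUTATION,   /* results produced by computation */
            SRC_PE_DIRECTIVE   /* explicit directive from an in-memory processing element */
        } data_source_t;

        /* Hypothetical policy: always honor an explicit directive, inject small
         * payloads that a waiting consumer will read soon, and send the rest
         * straight to main memory. */
        static bool should_inject_into_cache(data_source_t src, size_t bytes,
                                             bool consumer_waiting)
        {
            if (src == SRC_PE_DIRECTIVE)
                return true;
            if (consumer_waiting && bytes <= 4096)
                return true;
            return false;
        }

        static void place_data(data_source_t src, size_t bytes, bool consumer_waiting)
        {
            if (should_inject_into_cache(src, bytes, consumer_waiting))
                printf("inject %zu bytes into the processor cache\n", bytes);
            else
                printf("route %zu bytes directly to main memory\n", bytes);
        }

        int main(void)
        {
            place_data(SRC_IO_DEVICE, 1024, true);      /* small, consumer waiting: inject */
            place_data(SRC_IO_DEVICE, 1 << 20, false);  /* large bulk transfer: main memory */
            place_data(SRC_PE_DIRECTIVE, 256, false);   /* directive: inject */
            return 0;
        }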

    STORE OPERATIONS TO MAINTAIN CACHE COHERENCE
    Patent Application (In Force)

    Publication No.: US20160283377A1

    Publication Date: 2016-09-29

    Application No.: US14671050

    Filing Date: 2015-03-27

    Abstract: In one embodiment, a computer-implemented method includes encountering a store operation during compilation of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

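    A hedged sketch of the compile-time decision the abstract describes: if analysis can show that no coherence action is needed for a store, the compiler selects a store-without-coherence-action instruction instead of an ordinary store. The analysis flag, the structure, and the mnemonics "st" / "st.nc" are invented for illustration; the patent does not name them.

        #include <stdbool.h>
        #include <stdio.h>

        /* Minimal stand-in for a store operation encountered during compilation. */
        typedef struct {
            const char *target;          /* symbolic name of the memory line */
            bool known_private_to_core;  /* analysis result: no other cache can hold the line */
        } store_op_t;

        /* Hypothetical instruction selection: when analysis shows that no coherence
         * action is needed, emit the store-without-coherence-action form. */
        static const char *select_store_instruction(const store_op_t *op)
        {
            return op->known_private_to_core ? "st.nc"  /* store, no coherence action */
                                             : "st";    /* ordinary coherent store */
        }

        int main(void)
        {
            store_op_t shared_op = { "shared_counter", false };
            store_op_t local_op  = { "stack_local",    true  };

            printf("%s -> %s\n", shared_op.target, select_store_instruction(&shared_op));
            printf("%s -> %s\n", local_op.target,  select_store_instruction(&local_op));
            return 0;
        }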

    On-chip traffic prioritization in memory
    Granted Patent (In Force)

    Publication No.: US09405712B2

    Publication Date: 2016-08-02

    Application No.: US13761252

    Filing Date: 2013-02-07

    Abstract: According to one embodiment, a memory device is provided. The memory device includes a processing element coupled to a crossbar interconnect. The processing element is configured to send a memory access request, including a priority value, to the crossbar interconnect. The crossbar interconnect is configured to route the memory access request to a memory controller associated with the memory access request. The memory controller is coupled to memory and to the crossbar interconnect. The memory controller includes a queue and is configured to compare the priority value of the memory access request to priority values of a plurality of memory access requests stored in the queue of the memory controller to determine a highest priority memory access request and perform a next memory access request based on the highest priority memory access request.

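    The queue-selection step can be sketched as a scan for the highest-priority pending request. The C model below uses invented field names, a fixed queue depth, and an assumed oldest-entry tie-break; real controller behavior (aging, bank arbitration, and so on) is not covered by the abstract and is not modeled here.

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        #define QUEUE_DEPTH 8

        typedef struct {
            uint64_t address;
            unsigned priority;   /* larger value = more urgent (assumed convention) */
        } mem_request_t;

        /* Return the index of the highest-priority pending request, or -1 if the
         * queue is empty. Ties go to the oldest entry, an assumed tie-break rule. */
        static int pick_next_request(const mem_request_t *queue, size_t count)
        {
            if (count == 0)
                return -1;
            size_t best = 0;
            for (size_t i = 1; i < count; i++) {
                if (queue[i].priority > queue[best].priority)
                    best = i;
            }
            return (int)best;
        }

        int main(void)
        {
            mem_request_t queue[QUEUE_DEPTH] = {
                { 0x1000, 1 },   /* background prefetch */
                { 0x2000, 5 },   /* demand access from a processing element */
                { 0x3000, 3 },
            };
            int next = pick_next_request(queue, 3);
            printf("servicing the request to 0x%llx first\n",
                   (unsigned long long)queue[next].address);
            return 0;
        }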

    Local bypass in memory computing
    Granted Patent (In Force)

    Publication No.: US09298654B2

    Publication Date: 2016-03-29

    Application No.: US13837909

    Filing Date: 2013-03-15

    Abstract: Embodiments include a method for bypassing data in an active memory device. The method includes a requestor determining a number of transfers to a grantor that have not been communicated to the grantor, requesting to the interconnect network that the bypass path be used for the transfers based on the number of transfers meeting a threshold, and communicating the transfers via the bypass path to the grantor based on the request, the interconnect network granting control of the grantor in response to the request. The method also includes the interconnect network requesting control of the grantor based on an event and communicating delayed transfers via the interconnect network from other requestors, the delayed transfers being delayed because the grantor was previously controlled by the requestor, and the communicating being based on control of the grantor being changed back to the interconnect network.

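    The threshold-triggered switch between the interconnect path and the bypass path might look roughly like the following. The threshold value, the function names, and the single-requestor simplification are all assumptions made for this sketch.

        #include <stdbool.h>
        #include <stdio.h>

        #define BYPASS_THRESHOLD 4   /* assumed: request the bypass once this many transfers are pending */

        typedef struct {
            unsigned pending_transfers;  /* transfers queued for one grantor, not yet communicated */
            bool     owns_bypass;        /* whether the interconnect has granted the bypass path */
        } requestor_t;

        /* Queue one more transfer destined for the grantor; once the pending count
         * meets the threshold, ask the interconnect network for the bypass path. */
        static void queue_transfer(requestor_t *r)
        {
            r->pending_transfers++;
            if (!r->owns_bypass && r->pending_transfers >= BYPASS_THRESHOLD) {
                r->owns_bypass = true;
                printf("bypass path granted at %u pending transfers\n", r->pending_transfers);
            }
        }

        /* Communicate all queued transfers over whichever path is currently in use. */
        static void drain_transfers(requestor_t *r)
        {
            while (r->pending_transfers > 0) {
                printf("transfer sent via %s\n",
                       r->owns_bypass ? "bypass path" : "interconnect network");
                r->pending_transfers--;
            }
        }

        /* The interconnect reclaims the grantor (e.g. another requestor needs it);
         * transfers delayed at other requestors can then be communicated. */
        static void interconnect_reclaim(requestor_t *r)
        {
            r->owns_bypass = false;
            printf("grantor control returned to the interconnect network\n");
        }

        int main(void)
        {
            requestor_t r = { 0, false };
            for (int i = 0; i < 5; i++)
                queue_transfer(&r);
            drain_transfers(&r);
            interconnect_reclaim(&r);
            return 0;
        }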

    Chaining between exposed vector pipelines

    Publication No.: US09250916B2

    Publication Date: 2016-02-02

    Application No.: US13795435

    Filing Date: 2013-03-12

    Abstract: Embodiments include a method for chaining data in an exposed-pipeline processing element. The method includes separating a multiple instruction word into a first sub-instruction and a second sub-instruction, and receiving the first sub-instruction and the second sub-instruction in the exposed-pipeline processing element. The method also includes issuing the first sub-instruction at a first time and issuing the second sub-instruction at a second time different than the first time, the second time being offset to account for a dependency of the second sub-instruction on a first result from the first sub-instruction. The first pipeline performs the first sub-instruction at a first clock cycle and communicates the first result to a chaining bus coupled to the first pipeline and a second pipeline, the communicating occurring at a second clock cycle, subsequent to the first clock cycle, that corresponds to a total number of latch pipeline stages in the first pipeline.
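
    The issue-time offset described above amounts to scheduling the dependent sub-instruction so that it reads the chaining bus in the cycle the first pipeline drives its result onto it. A toy cycle calculation with an invented pipeline depth and an assumed one-cycle operand-read offset is shown below.

        #include <stdio.h>

        /* Assumed depth of the first pipeline: its result reaches the chaining bus
         * this many cycles after the first sub-instruction issues. */
        #define PIPE_A_LATCH_STAGES 6

        int main(void)
        {
            int issue_a       = 0;                              /* cycle at which sub-instruction A issues */
            int result_on_bus = issue_a + PIPE_A_LATCH_STAGES;  /* cycle the result is driven on the bus */

            /* Offset the issue of dependent sub-instruction B so that its operand-read
             * stage lines up with the cycle the result appears on the chaining bus.
             * The one-cycle operand-read offset is an assumption for this sketch. */
            int operand_read_offset = 1;
            int issue_b = result_on_bus - operand_read_offset;

            printf("issue A at cycle %d, result on chaining bus at cycle %d, issue B at cycle %d\n",
                   issue_a, result_on_bus, issue_b);
            return 0;
        }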
