METHOD AND APPARATUS FOR MASKING AND TRANSMITTING DATA
    2.
    Invention Application (Pending, Published)

    Publication No.: WO2018052718A1

    Publication Date: 2018-03-22

    Application No.: PCT/US2017/049478

    Filing Date: 2017-08-30

    IPC Classification: G06F13/40

    Abstract: A method and apparatus for transmitting data includes determining, based upon a first criterion, whether to apply a mask to a cache line that includes a first type of data and a second type of data for transmission. The second type of data is filtered from the cache line, and the first type of data is transmitted along with an identifier of the applied mask. The first type of data and the identifier are received, and the second type of data is combined with the first type of data to recreate the cache line based upon the received identifier.

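    A minimal software sketch of the masked-transfer idea in the abstract: a known mask identifies the reconstructable (type-2) byte positions in a line, so the sender strips them and transmits only the remaining payload plus a short mask identifier, and the receiver rebuilds the full line. The mask table, the fixed metadata pattern, and all names are illustrative assumptions, not the patented implementation.

```python
MASKS = {
    # mask_id -> byte offsets that hold type-2 (reconstructable) data
    1: {0, 4, 8, 12},
}
METADATA_BYTE = 0x00  # assume type-2 bytes follow a known constant pattern

def transmit(line: bytes, mask_id: int):
    """Filter type-2 bytes out and send the payload plus the mask ID."""
    mask = MASKS[mask_id]
    payload = bytes(b for i, b in enumerate(line) if i not in mask)
    return payload, mask_id, len(line)

def receive(payload: bytes, mask_id: int, line_len: int) -> bytes:
    """Recreate the original cache line by reinserting type-2 bytes."""
    mask = MASKS[mask_id]
    out, it = [], iter(payload)
    for i in range(line_len):
        out.append(METADATA_BYTE if i in mask else next(it))
    return bytes(out)

line = bytes([0, 1, 2, 3, 0, 5, 6, 7, 0, 9, 10, 11, 0, 13, 14, 15])
payload, mid, n = transmit(line, mask_id=1)
assert len(payload) == 12               # four bytes filtered out
assert receive(payload, mid, n) == line  # line recreated from payload + ID
```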

PROVIDING MEMORY BANDWIDTH COMPRESSION USING ADAPTIVE COMPRESSION IN CENTRAL PROCESSING UNIT (CPU)-BASED SYSTEMS
    3.
    Invention Application (Pending, Published)

    Publication No.: WO2018052653A1

    Publication Date: 2018-03-22

    Application No.: PCT/US2017/047532

    Filing Date: 2017-08-18

    Abstract: Providing memory bandwidth compression using adaptive compression in central processing unit (CPU)-based systems is disclosed. In one aspect, a compressed memory controller (CMC) is configured to implement two compression mechanisms: a first compression mechanism for compressing small amounts of data (e.g., a single memory line), and a second compression mechanism for compressing large amounts of data (e.g., multiple associated memory lines). When performing a memory write operation using write data that includes multiple associated memory lines, the CMC compresses each of the memory lines separately using the first compression mechanism, and also compresses the memory lines together using the second compression mechanism. If the result of the second compression is smaller than the result of the first compression, the CMC stores the second compression result in the system memory. Otherwise, the first compression result is stored.

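    The adaptive choice between the two mechanisms can be sketched in software, with `zlib` standing in for the hardware compressors; the function names and line sizes are illustrative assumptions.

```python
import zlib

def write_compressed(lines):
    """Compress per line and jointly; store whichever result is smaller."""
    per_line = [zlib.compress(l) for l in lines]   # first mechanism
    together = zlib.compress(b"".join(lines))      # second mechanism
    if len(together) < sum(len(c) for c in per_line):
        return "block", together                   # joint result wins
    return "per-line", per_line                    # per-line results win

# Four zero-filled 64-byte "memory lines": redundancy across the lines
# makes the joint compression far smaller, so the second mechanism wins.
lines = [bytes(64)] * 4
kind, stored = write_compressed(lines)
assert kind == "block"
assert zlib.decompress(stored) == b"".join(lines)
```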

TECHNIQUES FOR WRITE COMMANDS TO A STORAGE DEVICE
    4.
    Invention Application (Pending, Published)

    Publication No.: WO2018004928A1

    Publication Date: 2018-01-04

    Application No.: PCT/US2017/035037

    Filing Date: 2017-05-30

    Applicant: INTEL CORPORATION

    IPC Classification: G06F3/06

    Abstract: Examples include techniques for write commands to one or more storage devices coupled with a host computing platform. In some examples, the write commands may be responsive to write requests from applications hosted or supported by the host computing platform. A tracking table is utilized by elements of the host computing platform and the one or more storage devices such that the write commands are completed by the one or more storage devices without the need for an interrupt response to elements of the host computing platform.

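    A toy model of the interrupt-free completion path: the host posts a tagged write into a shared tracking table, the device marks the tag done in that table after writing, and the host learns of completion by polling the table rather than taking an interrupt. The class and field names are illustrative assumptions.

```python
class TrackingTable:
    """Shared between host and device; one entry per in-flight write."""
    def __init__(self):
        self.entries = {}            # tag -> "pending" | "done"

    def post(self, tag, data):
        self.entries[tag] = "pending"
        return tag

class StorageDevice:
    def __init__(self, table):
        self.table = table
        self.media = {}              # lba -> data

    def service(self, tag, lba, data):
        self.media[lba] = data               # perform the write
        self.table.entries[tag] = "done"     # completion via the table, no IRQ

table = TrackingTable()
dev = StorageDevice(table)
table.post(7, b"abc")
dev.service(7, lba=0, data=b"abc")
assert table.entries[7] == "done"    # host observes completion by polling
assert dev.media[0] == b"abc"
```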

PRE-FETCH MECHANISM FOR COMPRESSED MEMORY LINES IN A PROCESSOR-BASED SYSTEM
    5.
    Invention Application (Pending, Published)

    Publication No.: WO2017222801A1

    Publication Date: 2017-12-28

    Application No.: PCT/US2017/036070

    Filing Date: 2017-06-06

    Abstract: Some aspects of the disclosure relate to a pre-fetch mechanism for a cache line compression system that increases RAM capacity and optimizes overflow area reads. For example, a pre-fetch mechanism may allow the memory controller to pipeline the reads from an area with fixed-size slots (the main compressed area) and the reads from an overflow area. The overflow area is arranged so that the cache line most likely containing the overflow data for a particular line may be calculated by a decompression engine. In this manner, the cache line decompression engine may fetch the overflow area in advance, before finding the actual location of the overflow data.

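    The speculative overflow read can be sketched as follows: the overflow layout is deterministic, so the likely overflow address is computable from the line index alone and the controller can issue both reads back to back instead of parsing the slot first. The layout function and all names are illustrative assumptions.

```python
OVERFLOW_BASE = 0x10000
OVERFLOW_STRIDE = 32

def predicted_overflow_addr(line_index: int) -> int:
    # Deterministic layout: the overflow for line i is computable up front.
    return OVERFLOW_BASE + line_index * OVERFLOW_STRIDE

def read_line(slots, overflow_mem, line_index):
    slot = slots[line_index]
    # Speculative read, issued without waiting for the slot to be parsed.
    prefetched = overflow_mem.get(predicted_overflow_addr(line_index), b"")
    if slot["overflows"]:
        return slot["data"] + prefetched   # overflow data already in flight
    return slot["data"]

slots = [{"data": b"head", "overflows": True},
         {"data": b"fits", "overflows": False}]
overflow_mem = {predicted_overflow_addr(0): b"+tail"}
assert read_line(slots, overflow_mem, 0) == b"head+tail"
assert read_line(slots, overflow_mem, 1) == b"fits"
```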

CACHE MEMORY ACCESS
    6.
    Invention Application (Pending, Published)

    Publication No.: WO2017178925A1

    Publication Date: 2017-10-19

    Application No.: PCT/IB2017/051944

    Filing Date: 2017-04-05

    IPC Classification: G06F12/00

    Abstract: A multiprocessor data processing system includes multiple vertical cache hierarchies supporting a plurality of processor cores, a system memory, and a system interconnect. In response to a load-and-reserve request from a first processor core, a first cache memory supporting the first processor core issues on the system interconnect a memory access request for a target cache line of the load-and-reserve request. Responsive to the memory access request and prior to receiving a system-wide coherence response for the memory access request, the first cache memory receives from a second cache memory in a second vertical cache hierarchy, by cache-to-cache intervention, the target cache line and an early indication of the system-wide coherence response for the memory access request. In response to the early indication and prior to receiving the system-wide coherence response, the first cache memory initiates processing to update the target cache line in the first cache memory.

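    A toy ordering model of the early-intervention path: the requesting cache receives the target line plus an early coherence hint directly from a peer cache and begins updating the line before the slower system-wide combined response arrives. The event names are illustrative assumptions; this only demonstrates the ordering, not a coherence protocol.

```python
events = []

def on_cache_intervention(line, early_hint):
    """Peer cache delivers the line and an early coherence indication."""
    events.append("line+early_hint received")
    if early_hint == "grant":
        events.append("update started")   # do not wait for combined response

def on_combined_response(resp):
    """System-wide coherence response arrives later over the interconnect."""
    events.append("combined response received")

on_cache_intervention(b"<target line>", "grant")
on_combined_response("grant")
# The update begins strictly before the system-wide response is seen.
assert events.index("update started") < events.index("combined response received")
```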

MEASURING ADDRESS TRANSLATION LATENCY
    7.
    Invention Application (Pending, Published)

    Publication No.: WO2017125701A1

    Publication Date: 2017-07-27

    Application No.: PCT/GB2016/051667

    Filing Date: 2016-06-07

    Applicant: ARM LIMITED

    IPC Classification: G06F12/1009 G06F11/34

    Abstract: An apparatus includes processing circuitry to process instructions, some of which may require addresses to be translated. The apparatus also includes address translation circuitry to translate addresses in response to instructions processed by the processing circuitry. Furthermore, the apparatus includes translation latency measuring circuitry to measure the latency of at least part of an address translation process performed by the address translation circuitry in response to a given instruction.

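    A software analogue of the latency-measuring circuitry: wrap the translation step (here, a one-level page-table lookup) with timestamps on entry and exit and record the elapsed time per translation. The page-table contents and names are illustrative assumptions.

```python
import time

PAGE_SHIFT = 12
page_table = {0x1: 0x80}   # virtual page number -> physical frame number

latencies = []             # one measurement per translation

def translate(vaddr: int) -> int:
    start = time.perf_counter_ns()            # measurement begins
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    paddr = (page_table[vpn] << PAGE_SHIFT) | offset
    latencies.append(time.perf_counter_ns() - start)  # measurement ends
    return paddr

assert translate(0x1ABC) == (0x80 << PAGE_SHIFT) | 0xABC
assert len(latencies) == 1 and latencies[0] >= 0
```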

NON-UNIFORM MEMORY ACCESS LATENCY ADAPTATIONS TO ACHIEVE BANDWIDTH QUALITY OF SERVICE
    8.
    Invention Application (Pending, Published)

    Publication No.: WO2017112283A1

    Publication Date: 2017-06-29

    Application No.: PCT/US2016/063546

    Filing Date: 2016-11-23

    Applicant: INTEL CORPORATION

    IPC Classification: G06F13/16

    Abstract: Systems, apparatuses and methods may provide for detecting an issued request in a queue that is shared by a plurality of domains in a memory architecture, wherein the plurality of domains are associated with non-uniform access latencies. Additionally, a destination domain associated with the issued request may be determined. Moreover, a first set of additional requests may be prevented from being issued to the queue if the issued request satisfies an overrepresentation condition with respect to the destination domain and the first set of additional requests are associated with the destination domain. In one example, a second set of additional requests are permitted to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.

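    The per-domain throttle can be sketched as a shared queue that counts outstanding requests per destination domain: a domain that exceeds its share is blocked from issuing further requests while the other domains continue to issue. The fixed-limit overrepresentation condition is an illustrative assumption.

```python
from collections import Counter

class SharedQueue:
    """One issue queue shared by domains with non-uniform access latencies."""
    def __init__(self, limit_per_domain):
        self.limit = limit_per_domain
        self.outstanding = Counter()    # destination domain -> in-flight count

    def try_issue(self, domain: str) -> bool:
        if self.outstanding[domain] >= self.limit:
            return False                # overrepresented: block this domain only
        self.outstanding[domain] += 1
        return True

    def complete(self, domain: str):
        self.outstanding[domain] -= 1   # frees capacity for the domain

q = SharedQueue(limit_per_domain=2)
assert q.try_issue("near") and q.try_issue("near")
assert not q.try_issue("near")   # near-memory domain throttled
assert q.try_issue("far")        # remaining domains keep issuing
```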

SINGLE-STAGE ARBITER/SCHEDULER FOR A MEMORY SYSTEM COMPRISING A VOLATILE MEMORY AND A SHARED CACHE
    9.
    Invention Application (Pending, Published)

    Publication No.: WO2017105741A1

    Publication Date: 2017-06-22

    Application No.: PCT/US2016/062346

    Filing Date: 2016-11-16

    Inventor: ALAVOINE, Olivier

    IPC Classification: G06F13/16

    Abstract: Systems, methods, and computer programs are disclosed for scheduling memory transactions. An embodiment of a method comprises determining future memory state data of a dynamic random access memory (DRAM) for a predetermined number of future clock cycles. The DRAM is electrically coupled to a system on chip (SoC). Based on the future memory state data, one of a plurality of pending memory transactions is selected that speculatively optimizes DRAM efficiency. The selected memory transaction is sent to a shared cache controller. If the selected memory transaction results in a cache miss, the selected memory transaction is sent to a DRAM controller.

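    A sketch of the single-stage speculative pick: look ahead at predicted DRAM state (here, which row each bank has open), choose the pending transaction most likely to be efficient, try the shared cache first, and route to the DRAM controller only on a miss. The open-row heuristic and data shapes are illustrative assumptions.

```python
def pick_transaction(pending, open_rows):
    """Prefer a request whose target DRAM row is already open (row hit)."""
    for t in pending:
        if open_rows.get(t["bank"]) == t["row"]:
            return t
    return pending[0]            # no predicted row hit: fall back to oldest

def schedule(pending, open_rows, cache):
    t = pick_transaction(pending, open_rows)
    if t["addr"] in cache:
        return "cache", t        # satisfied by the shared cache controller
    return "dram", t             # cache miss: forward to the DRAM controller

pending = [{"addr": 0xA0, "bank": 0, "row": 5},
           {"addr": 0xB0, "bank": 1, "row": 9}]
target, chosen = schedule(pending, open_rows={1: 9}, cache=set())
assert chosen["addr"] == 0xB0    # row-buffer hit preferred over oldest
assert target == "dram"          # empty cache, so the pick misses
```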

CACHE MANAGER-CONTROLLED MEMORY ARRAY
    10.
    Invention Application (Pending, Published)

    Publication No.: WO2017091197A1

    Publication Date: 2017-06-01

    Application No.: PCT/US2015/062119

    Filing Date: 2015-11-23

    IPC Classification: G06F12/08

    Abstract: In an example, an apparatus is described that includes a memory array. The memory array includes a volatile memory, a first non-volatile memory, and a second non-volatile memory. The memory array further includes a cache manager that controls access by a computer system to the memory array. For instance, the cache manager may carry out memory operations, including read operations, write operations, and cache evictions, in conjunction with at least one of the volatile memory, the first non-volatile memory, or the second non-volatile memory.

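    A toy cache manager over such a hybrid array: a small volatile tier serves as the cache, and when it fills, the least-recently-used entry is evicted to the first non-volatile tier; misses are filled from either non-volatile tier. The tier roles and the LRU policy are illustrative assumptions.

```python
from collections import OrderedDict

class CacheManager:
    def __init__(self, capacity=2):
        self.volatile = OrderedDict()   # fast volatile tier, LRU-ordered
        self.nvm1 = {}                  # first non-volatile memory
        self.nvm2 = {}                  # second non-volatile memory (backing)
        self.capacity = capacity

    def write(self, addr, value):
        self.volatile[addr] = value
        self.volatile.move_to_end(addr)
        self._maybe_evict()

    def read(self, addr):
        if addr in self.volatile:                  # cache hit
            self.volatile.move_to_end(addr)
            return self.volatile[addr]
        value = self.nvm1.get(addr, self.nvm2.get(addr))
        self.write(addr, value)                    # fill the volatile tier
        return value

    def _maybe_evict(self):
        while len(self.volatile) > self.capacity:
            addr, value = self.volatile.popitem(last=False)  # LRU entry
            self.nvm1[addr] = value                # evict to non-volatile

mgr = CacheManager(capacity=2)
mgr.write(1, "a"); mgr.write(2, "b"); mgr.write(3, "c")
assert 1 in mgr.nvm1              # oldest line evicted to non-volatile tier
assert mgr.read(1) == "a"         # miss served from nvm1, refilled to cache
```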