INTEGRATED LEVEL TWO CACHE AND MEMORY CONTROLLER WITH MULTIPLE DATA PORTS
    1.
    Invention Application
    INTEGRATED LEVEL TWO CACHE AND MEMORY CONTROLLER WITH MULTIPLE DATA PORTS (Pending, Published)

    Publication No.: WO1995032472A1

    Publication Date: 1995-11-30

    Application No.: PCT/EP1994004315

    Filing Date: 1994-12-27

    CPC classification number: G06F12/0884 G06F12/0897

    Abstract: A memory system wherein data retrieval is simultaneously initiated in both L2 cache and main memory, which allows memory latency associated with arbitration, memory DRAM address translation, and the like to be minimized in the event that the data sought by the processor is not in the L2 cache (miss). The invention allows for any memory access to be interrupted in the storage control unit prior to any memory signals being activated. The L2 and memory access controls are in a single component, i.e. the storage control unit (SCU). Both the L2 and the memory have a unique port into the CPU which allows data to be directly transferred. This eliminates the overhead associated with storing the data in an intermediate device, such as a cache or memory controller.

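    The "initiate both, cancel early on a hit" behavior this abstract describes can be sketched in a few lines. This is a minimal software model, not the patented hardware; the class and field names (StorageControlUnit, l2, memory) are invented for illustration.

```python
class StorageControlUnit:
    """Models the SCU: an access is initiated toward both the L2 and main
    memory, and the memory side is cancelled before any memory signals
    activate when the L2 hits."""

    def __init__(self, l2, memory):
        self.l2 = l2          # dict: address -> data, models the L2 cache
        self.memory = memory  # dict: address -> data, models main memory

    def read(self, addr):
        # In hardware both lookups start in parallel; here the early-cancel
        # is modeled by consulting the L2 before any "memory signal" fires.
        if addr in self.l2:
            return self.l2[addr], "hit"    # memory access cancelled in the SCU
        data = self.memory[addr]
        self.l2[addr] = data               # fill the L2 for later accesses
        return data, "miss"

scu = StorageControlUnit(l2={0x10: "cached"}, memory={0x10: "cached", 0x20: "dram"})
print(scu.read(0x10))  # ('cached', 'hit')
print(scu.read(0x20))  # ('dram', 'miss')
```

    The point of starting both accesses at once is that DRAM arbitration and address translation overlap the L2 lookup, so a miss pays little extra latency.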

    METHOD AND APPARATUS FOR MEMORY CONSISTENCY USING CACHE COHERENCY PROTOCOLS

    Publication No.: WO2018111511A1

    Publication Date: 2018-06-21

    Application No.: PCT/US2017/062732

    Filing Date: 2017-11-21

    CPC classification number: G06F12/0884 G06F12/0822 G06F2212/6042

    Abstract: A request is received from a first node over a communication fabric, the request to acquire an access right of a cache line for accessing data stored in a memory location of a memory, the first node being one of a plurality of nodes sharing the memory. In response to the request, a second node is determined based on the cache line that has cached a copy of the data of the cache line in its local memory. A first message is transmitted to the second node over the communication fabric requesting the second node to invalidate the cache line. In response to a response received from the second node indicating that the cache line has been invalidated, a second message is transmitted to the first node over the communication fabric to grant the access right of the cache line to the first node.
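    The invalidate-then-grant message flow in this abstract resembles a tiny directory protocol, which can be sketched as below. All names (HomeNode, Node, request_access) are invented for illustration, and the real communication-fabric messaging is reduced to direct method calls.

```python
class Node:
    """A node that may hold a cached copy of a line in its local memory."""
    def __init__(self, name):
        self.name = name
        self.cached = set()

    def invalidate(self, line):
        self.cached.discard(line)   # drop the local copy when asked

class HomeNode:
    """Tracks which node holds each cache line (a minimal directory)."""
    def __init__(self):
        self.owner = {}  # line address -> Node holding a valid copy

    def request_access(self, line, requester):
        holder = self.owner.get(line)
        if holder is not None and holder is not requester:
            holder.invalidate(line)   # first message: ask holder to invalidate
        self.owner[line] = requester  # second message: grant the access right
        requester.cached.add(line)
        return "granted"

a, b = Node("A"), Node("B")
home = HomeNode()
home.request_access(0x40, b)                    # B caches the line first
print(home.request_access(0x40, a))             # granted
print(0x40 in a.cached, 0x40 in b.cached)       # True False
```

    The grant is only sent after the holder's invalidation response, which is what keeps the two nodes from holding conflicting copies.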

    TECHNIQUES FOR BLOCK-BASED INDEXING
    4.
    Invention Application
    TECHNIQUES FOR BLOCK-BASED INDEXING (Pending, Published)

    Publication No.: WO2015077951A1

    Publication Date: 2015-06-04

    Application No.: PCT/CN2013/088014

    Filing Date: 2013-11-28

    Abstract: Techniques for block-based indexing are described. In one embodiment, for example, an apparatus may comprise a multicore processor element, an assignment component for execution by the multicore processor element to generate a plurality of block-attribute pairs, each block- attribute pair corresponding to an attribute value and one of a plurality of data blocks, and an indexing component for execution by the multicore processor element to generate an index block for the plurality of data blocks based on the plurality of block-attribute pairs, the indexing component to perform parallel indexing of the plurality of block-attribute pairs using multiple indexing instances. Other embodiments are described and claimed.

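    The two components the abstract names (an assignment component producing block-attribute pairs, and an indexing component indexing them in parallel) can be sketched as follows. This is an illustrative model only: a thread pool stands in for "multiple indexing instances," and the function names are invented.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def assign(blocks):
    """Assignment component: emit (attribute value, block id) pairs."""
    return [(value, block_id)
            for block_id, block in enumerate(blocks)
            for value in block]

def build_index(pairs, workers=4):
    """Indexing component: build attribute -> block ids by splitting the
    pairs across several indexing instances and merging partial results."""
    chunks = [pairs[i::workers] for i in range(workers)]

    def index_chunk(chunk):
        partial = defaultdict(set)
        for value, block_id in chunk:
            partial[value].add(block_id)
        return partial

    index = defaultdict(set)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(index_chunk, chunks):
            for value, ids in partial.items():
                index[value] |= ids
    return index

blocks = [["red", "blue"], ["blue"], ["green", "red"]]
index = build_index(assign(blocks))
print(sorted(index["red"]))  # [0, 2]
```

    Because each partial index is built independently, the merge step is the only point of coordination between the indexing instances.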

    SYSTEM AND METHOD FOR IMPROVING MEMORY ACCESS
    5.
    Invention Application
    SYSTEM AND METHOD FOR IMPROVING MEMORY ACCESS (Pending, Published)

    Publication No.: WO00022531A1

    Publication Date: 2000-04-20

    Application No.: PCT/SE1999/001857

    Filing Date: 1999-10-14

    CPC classification number: G06F12/0884 G06F12/0888

    Abstract: A method and system are described for improving memory access. The invention improves memory access (400, 500) in systems where the program code and data stored in memory (150, 160) have low locality. The invention builds on the fact that access to at least some memory addresses takes longer than access to other addresses, as is the case, for example, with page-type memory.


    COMPUTER SYSTEM WITH TRANSPARENT WRITE CACHE MEMORY POLICY
    6.
    Invention Application
    COMPUTER SYSTEM WITH TRANSPARENT WRITE CACHE MEMORY POLICY (Pending, Published)

    Publication No.: WO98058318A1

    Publication Date: 1998-12-23

    Application No.: PCT/US1998/012363

    Filing Date: 1998-06-12

    CPC classification number: G06F12/0804 G06F12/0884

    Abstract: Efficient use of a cache memory in a computer system is achieved, the system comprising a processor (12), a local bus comprising local address (110) and local data buses (111) coupled to the processor, a cache memory (16) coupled to the local bus, a bus interface (20) coupled to the local bus for coupling the processor to a main memory via an external bus (141, 142) and a transparent write cache policy (TWCP) controller (14) functionally coupled between the processor and bus interface. The TWCP controller looks for a data write operation initiated by the processor, and signals the processor that the data write is complete before actual completion, to free the processor to engage in one or more subsequent operations that do not require the external bus. The TWCP controller causes the bus interface to complete the data write to main memory in parallel with the one or more subsequent operations.

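    The TWCP behavior (acknowledge the write before it completes, finish it in the background so the processor can overlap other work) can be modeled with a background thread. This is a software analogy with invented names, not the patented controller; the sleep stands in for slow external-bus traffic.

```python
import threading
import time

class TWCPController:
    """Acknowledges a write immediately and completes it to main memory in
    the background, overlapping the processor's subsequent operations."""

    def __init__(self, main_memory):
        self.main_memory = main_memory
        self._pending = []

    def write(self, addr, value):
        t = threading.Thread(target=self._write_through, args=(addr, value))
        t.start()
        self._pending.append(t)
        return "complete"            # early acknowledgement to the processor

    def _write_through(self, addr, value):
        time.sleep(0.01)             # models the slower external-bus write
        self.main_memory[addr] = value

    def drain(self):
        for t in self._pending:      # wait for all background writes to land
            t.join()

memory = {}
twcp = TWCPController(memory)
status = twcp.write(0x100, 42)   # returns before main memory is updated
local_work = sum(range(10))      # subsequent work overlaps the write
twcp.drain()
print(status, memory[0x100])     # complete 42
```

    The early acknowledgement is only safe for work that does not need the external bus, which is exactly the condition the abstract states.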

    SYSTEMS AND METHODS FOR ADDRESSING A CACHE WITH SPLIT-INDEXES
    7.
    Invention Application
    SYSTEMS AND METHODS FOR ADDRESSING A CACHE WITH SPLIT-INDEXES (Pending, Published)

    Publication No.: WO2016185272A1

    Publication Date: 2016-11-24

    Application No.: PCT/IB2016/000709

    Filing Date: 2016-05-11

    Abstract: Cache memory mapping techniques are presented. A cache may contain an index configuration register. The register may configure the locations of an upper index portion and a lower index portion of a memory address. The portions may be combined to create a combined index. The configurable split-index addressing structure may be used, among other applications, to reduce the rate of cache conflicts occurring between multiple processors decoding the video frame in parallel.

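    Combining a configurable upper and lower index portion of an address into one cache index is a bit-manipulation exercise, sketched below. The concrete bit positions are invented example values standing in for the index configuration register's contents, not the patent's.

```python
def extract_bits(addr, lo, width):
    """Return `width` bits of `addr` starting at bit position `lo`."""
    return (addr >> lo) & ((1 << width) - 1)

def split_index(addr, lower=(6, 4), upper=(16, 4)):
    """Combine a lower and an upper index portion, each given as
    (start_bit, width) as an index configuration register might hold them,
    into one combined cache index."""
    lo_start, lo_width = lower
    hi_start, hi_width = upper
    low = extract_bits(addr, lo_start, lo_width)
    high = extract_bits(addr, hi_start, hi_width)
    return (high << lo_width) | low

# Two addresses that collide under a contiguous low-order index can map to
# different sets once high-order bits (e.g. frame-row bits) join the index.
a = 0x0001_0040
b = 0x0002_0040
print(split_index(a) != split_index(b))  # True
```

    Folding high-order bits into the index is what reduces conflicts between processors that decode different regions of the same video frame in parallel.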

    SPECULATIVE READS IN BUFFERED MEMORY
    8.
    Invention Application
    SPECULATIVE READS IN BUFFERED MEMORY (Pending, Published)

    Publication No.: WO2016105852A1

    Publication Date: 2016-06-30

    Application No.: PCT/US2015/062813

    Filing Date: 2015-11-26

    Abstract: A speculative read request is received from a host device over a buffered memory access link for data associated with a particular address. A read request for the data is sent to a memory device. The data is received from the memory device in response to the read request, and the received data is sent to the host device as the response to a demand read request received subsequent to the speculative read request.

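    The speculative-then-demand sequence can be sketched as a buffer that starts the device read on the speculative request and answers the later demand read from the already-fetched data. Class and method names are invented for illustration.

```python
class MemoryBuffer:
    """Sits on the buffered memory access link between host and memory
    device; speculative reads pre-fetch data for a later demand read."""

    def __init__(self, memory):
        self.memory = memory
        self.speculative = {}   # addr -> data fetched ahead of demand

    def speculative_read(self, addr):
        # Start the device read early and hold the result.
        self.speculative[addr] = self.memory[addr]

    def demand_read(self, addr):
        # A matching speculative result answers the demand read directly.
        if addr in self.speculative:
            return self.speculative.pop(addr)
        return self.memory[addr]   # no speculation ran; read the device now

buf = MemoryBuffer({0x80: "payload"})
buf.speculative_read(0x80)     # host sends this over the link first
print(buf.demand_read(0x80))   # payload — served from the early fetch
```

    The benefit is that the device-side latency is paid while the demand request is still in flight, so the demand response can be returned sooner.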

    A NEW CACHE AND MEMORY ARCHITECTURE FOR FAST PROGRAM SPACE ACCESS
    9.
    Invention Application
    A NEW CACHE AND MEMORY ARCHITECTURE FOR FAST PROGRAM SPACE ACCESS (Pending, Published)

    Publication No.: WO2005017691A3

    Publication Date: 2005-12-29

    Application No.: PCT/US2004026189

    Filing Date: 2004-08-11

    Applicant: CHEN CHAO-WU

    Inventor: CHEN CHAO-WU

    CPC classification number: G06F12/0884

    Abstract: A data handling system includes a memory that includes a cache memory (120) and a main memory (130). The memory further includes a controller (140) for simultaneously initiating two data access operations to the cache memory and to the main memory, by providing a main memory access address (M_Add) with a time-delay increment added relative to a cache memory access address (C_Add), based on the access time delay of an initial data access to the main memory relative to the cache memory. The main memory further includes a plurality of data access paths divided into a plurality of propagation stages interconnected between a plurality of memory arrays in the main memory, wherein each of the propagation stages further implements a local clock for asynchronously propagating a plurality of data access signals to access data stored in a plurality of memory cells in each of the main memory arrays. The data handling system further requests a plurality of sets of data from the memory, wherein the cache memory is provided with a capacity for storing only the first few data of each set, with the remainder of each set stored in the main memory, and the main memory and the cache memory have substantially the same cycle time for completing a data access operation.

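    The timing idea (the cache holds only the first few words of each set, covering main memory's longer initial access so the remainder arrives with no gap) can be worked through numerically. The latency numbers below are invented examples, not values from the patent.

```python
CACHE_WORDS = 2      # first few words of each data set held in the cache
CACHE_LATENCY = 1    # cycles until the cache's first word appears
MEMORY_LATENCY = 3   # cycles until main memory's first word appears

def arrival_cycles(total_words):
    """Cycle on which each word of a data set arrives when both accesses
    start at cycle 0: head words come from the cache, the rest from main
    memory once its longer initial access completes."""
    cycles = []
    for i in range(total_words):
        if i < CACHE_WORDS:
            cycles.append(CACHE_LATENCY + i)              # streamed by cache
        else:
            cycles.append(MEMORY_LATENCY + (i - CACHE_WORDS))
    return cycles

# With MEMORY_LATENCY = CACHE_LATENCY + CACHE_WORDS, the stream is gapless:
print(arrival_cycles(5))  # [1, 2, 3, 4, 5]
```

    Sizing the cached head to exactly cover the memory's extra initial latency is what lets the two devices present "substantially the same cycle time" to the requester.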

    METHOD AND APPARATUS FOR REDUCING LATENCY IN A MEMORY SYSTEM
    10.
    Invention Application
    METHOD AND APPARATUS FOR REDUCING LATENCY IN A MEMORY SYSTEM (Pending, Published)

    Publication No.: WO0244904A2

    Publication Date: 2002-06-06

    Application No.: PCT/CA0101686

    Filing Date: 2001-11-28

    CPC classification number: G06F12/0893 G06F12/0884

    Abstract: A memory controller controls a buffer which stores the most recently used addresses and associated data, but the data stored in the buffer is only a portion of a row of data (termed row head data) stored in main memory. In a memory access initiated by the CPU, both the buffer and main memory are accessed simultaneously. If the buffer contains the address requested, the buffer immediately begins to provide the associated row head data in a burst to the cache memory. Meanwhile, the same row address is activated in the main memory bank corresponding to the requested address found in the buffer. After the buffer provides the row head data, the remainder of the burst of requested data is provided by the main memory to the CPU.

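    The row-head scheme can be sketched as a small buffer that keeps only the first portion of each recently used row and streams it immediately, while main memory supplies the remainder of the burst. This is a behavioral model with invented names, ignoring the real timing overlap.

```python
ROW_HEAD = 2   # how many words of each row the buffer retains

class RowHeadBuffer:
    """Keeps the head of recently used rows; on a hit the head streams at
    once while main memory opens the row and delivers the rest."""

    def __init__(self, main_memory):
        self.main_memory = main_memory   # row address -> list of words
        self.heads = {}                  # row address -> first ROW_HEAD words

    def read_row(self, row_addr):
        row = self.main_memory[row_addr]
        if row_addr in self.heads:
            # Head comes from the buffer with no row-activate delay; main
            # memory supplies the remainder of the burst in parallel.
            return self.heads[row_addr] + row[ROW_HEAD:]
        self.heads[row_addr] = row[:ROW_HEAD]   # cache the head for next time
        return row

mem = {0x200: ["w0", "w1", "w2", "w3"]}
buf = RowHeadBuffer(mem)
buf.read_row(0x200)            # first access fills the row-head buffer
print(buf.read_row(0x200))     # ['w0', 'w1', 'w2', 'w3'] — head from buffer
```

    Storing only row heads keeps the buffer small while still hiding the DRAM row-activation latency on a hit, since the head covers the cycles the main memory needs to start its burst.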
