System and method for optimizing neighboring cache usage in a multiprocessor environment
    23.
    Invention Grant
    System and method for optimizing neighboring cache usage in a multiprocessor environment (In Force)

    Publication Number: US08296520B2

    Publication Date: 2012-10-23

    Application Number: US11959652

    Filing Date: 2007-12-19

    CPC classification number: G06F12/0831

    Abstract: A method for managing data operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache comprising a plurality of cache lines, each cache line having one of a plurality of coherency states, and each PU coupled to at least one other of the plurality of PUs. A first PU selects a castout cache line from the plurality of cache lines in a first cache of the first PU to be cast out of the first cache. The first PU sends a request to a second PU, wherein the second PU is a neighboring PU of the first PU, and the request comprises a first address and first coherency state of the selected castout cache line. The second PU determines whether the first address matches an address of any cache line in the second PU. The second PU sends a response to the first PU based on the coherency state of each of a plurality of cache lines in a second cache of the second PU and on whether there is an address hit. The first PU determines, based on the response, whether to transmit the castout cache line to the second PU. If the first PU determines to transmit the castout cache line, it transmits the castout cache line to the second PU.
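
    As an illustration of the castout hand-off the abstract describes, here is a minimal behavioral sketch in Python. The class names, the MESI-style states, and the acceptance policy (accept only on an address miss when an invalid line is available) are assumptions made for the example, not details taken from the patent.

```python
from enum import Enum

class Coherency(Enum):
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"

class CacheLine:
    def __init__(self, address, data, state):
        self.address, self.data, self.state = address, data, state

class PU:
    def __init__(self, name, lines):
        self.name = name
        self.lines = {ln.address: ln for ln in lines}  # this PU's cache

    def handle_castout_request(self, address, state):
        """Second PU: decide whether to accept a neighbor's castout line.

        Hypothetical policy: accept only when there is no address hit and
        at least one local line is INVALID (can be overwritten safely)."""
        if address in self.lines:          # address hit: already cached here
            return False
        return any(ln.state is Coherency.INVALID for ln in self.lines.values())

    def castout(self, victim_address, neighbor):
        """First PU: offer the victim line to a neighboring PU and transmit
        it there only if the neighbor's response says to."""
        victim = self.lines.pop(victim_address)
        if neighbor.handle_castout_request(victim.address, victim.state):
            neighbor.lines[victim.address] = victim   # lateral castout
            return True
        write_back_to_memory(victim)                  # conventional castout path
        return False

def write_back_to_memory(line):
    print(f"write-back of line {line.address:#x} to system memory")
```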


    Structure for piggybacking multiple data tenures on a single data bus grant to achieve higher bus utilization
    24.
    Invention Grant
    Structure for piggybacking multiple data tenures on a single data bus grant to achieve higher bus utilization (Expired)

    Publication Number: US07987437B2

    Publication Date: 2011-07-26

    Application Number: US12112818

    Filing Date: 2008-04-30

    CPC classification number: G06F13/364

    Abstract: A design structure for piggybacking multiple data tenures on a single data bus grant to achieve higher bus utilization is disclosed. In one embodiment of the design structure, a method in a computer-aided design system includes a source device sending a request for a bus grant to deliver data to a data bus connecting the source device and a destination device. The source device receives the bus grant, and logic within the device determines whether the data will fill the bandwidth of the data bus allocated to the bus grant. If the data will not fill the allocated bandwidth, the device appends additional data to this first data and delivers the combined data to the data bus during the bus grant for the first data. When the first data alone will fill the allocated bandwidth, the device delivers only the first data to the data bus during the bus grant.
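
    A short sketch of the piggybacking decision may help: when the first data does not fill the bandwidth allocated to the grant, further pending tenures are appended until the grant is full. The function name, the beat size, and the queue representation are illustrative assumptions, not taken from the patent.

```python
def fill_bus_grant(pending, grant_beats, beat_bytes=16):
    """Choose the data tenures to deliver on one bus grant.

    pending     -- list of byte strings awaiting delivery; pending[0] is the
                   data the grant was requested for (the "first data")
    grant_beats -- number of data beats allocated to this grant (assumed)
    """
    capacity = grant_beats * beat_bytes
    tenures = [pending.pop(0)]            # the first data always goes out
    used = len(tenures[0])
    if used >= capacity:                  # first data alone fills the grant
        return tenures
    # Otherwise, piggyback additional tenures that still fit within the grant.
    while pending and used + len(pending[0]) <= capacity:
        nxt = pending.pop(0)
        tenures.append(nxt)
        used += len(nxt)
    return tenures
```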


    Method for performing a direct memory access block move in a direct memory access device
    25.
    Invention Grant
    Method for performing a direct memory access block move in a direct memory access device (Expired)

    Publication Number: US07523228B2

    Publication Date: 2009-04-21

    Application Number: US11532562

    Filing Date: 2006-09-18

    CPC classification number: G06F13/28

    Abstract: A direct memory access (DMA) device is structured as a loosely coupled DMA engine (DE) and a bus engine (BE). The DE breaks programmed data block moves into separate transactions, interprets the scatter/gather descriptors, and arbitrates among channels. The DE and BE use a combined read-write (RW) command that can be queued between the DE and the BE. The BE has two read queues and a write queue. The first read queue is for “new reads” and the second read queue is for “old reads,” which are reads that have been retried on the bus at least once. The BE gives absolute priority to new reads while still avoiding deadlock situations.
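
    The DE side of the split can be pictured with a small sketch: the DE walks the scatter/gather descriptors, slices each block move into bus-sized transactions, and queues them as combined read-write commands for the BE. The descriptor format, the transaction size, and the dictionary layout of a command are assumptions made for the example.

```python
from collections import deque

MAX_XFER = 128  # bytes per bus transaction (illustrative value)

def build_rw_commands(descriptors):
    """DE side: turn scatter/gather descriptors (src, dst, length) into a
    queue of combined read-write (RW) commands for the BE to execute."""
    commands = deque()
    for src, dst, length in descriptors:
        offset = 0
        while offset < length:
            chunk = min(MAX_XFER, length - offset)
            # One command carries both the read (source) and the write
            # (destination) side, so the BE never has to pair them up.
            commands.append({"read": src + offset,
                             "write": dst + offset,
                             "len": chunk})
            offset += chunk
    return commands
```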


    DMA Controller with Support for High Latency Devices
    26.
    Invention Application
    DMA Controller with Support for High Latency Devices (Expired)

    Publication Number: US20080126602A1

    Publication Date: 2008-05-29

    Application Number: US11532562

    Filing Date: 2006-09-18

    CPC classification number: G06F13/28

    Abstract: A direct memory access (DMA) device is structured as a loosely coupled DMA engine (DE) and a bus engine (BE). The DE breaks programmed data block moves into separate transactions, interprets the scatter/gather descriptors, and arbitrates among channels. The DE and BE use a combined read-write (RW) command that can be queued between the DE and the BE. The BE has two read queues and a write queue. The first read queue is for “new reads” and the second read queue is for “old reads,” which are reads that have been retried on the bus at least once. The BE gives absolute priority to new reads while still avoiding deadlock situations.
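
    Since this application shares its abstract with the grant above, the sketch below focuses on the other half of the mechanism: the BE's three queues and the absolute priority given to new reads, with retried reads demoted to the old-read queue. The class layout and the RETRY return value are assumptions made for illustration.

```python
from collections import deque

class BusEngine:
    """Sketch of the BE queueing policy described in the abstract."""

    def __init__(self):
        self.new_reads = deque()   # reads never yet attempted on the bus
        self.old_reads = deque()   # reads already retried at least once
        self.writes = deque()

    def issue_next(self, bus):
        """Issue one command: new reads take absolute priority."""
        if self.new_reads:
            cmd = self.new_reads.popleft()
        elif self.old_reads:
            cmd = self.old_reads.popleft()
        elif self.writes:
            bus.write(self.writes.popleft())
            return
        else:
            return
        if bus.read(cmd) == "RETRY":
            # A read retried on the bus becomes an "old read", so it can no
            # longer hold up reads that arrive after it.
            self.old_reads.append(cmd)
```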


    System and Method for Improved Logic Simulation Using a Negative Unknown Boolean State
    27.
    Invention Application
    System and Method for Improved Logic Simulation Using a Negative Unknown Boolean State (Expired)

    Publication Number: US20080126065A1

    Publication Date: 2008-05-29

    Application Number: US11531708

    Filing Date: 2006-09-14

    Inventor: Richard Nicholas

    CPC classification number: G06F17/5022

    Abstract: A system and method for simulating a circuit design using both an unknown Boolean state and a negative unknown Boolean state is provided. When the circuit is simulated, one or more initial simulated logic elements are initialized to the unknown Boolean state. The initialized unknown Boolean states are then fed to one or more simulated logic elements, and the simulator simulates the handling of the unknown Boolean state by those elements. Examples of simulated logic elements include gates and latches, such as inverters, basic logic gates, and flip-flops. The processing results in at least one negative unknown Boolean state; for example, a negative unknown Boolean state results when the unknown Boolean state is inverted by an inverter. The resulting negative unknown Boolean state is then fed to other simulated logic elements that generate further simulation results based on processing the negative unknown Boolean state.
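
    A tiny four-valued evaluator makes the benefit concrete: by tracking the complement of the unknown state (-X), a simulator can resolve expressions such as X AND (NOT X) to 0 and X OR (NOT X) to 1 instead of leaving them unknown. The encoding below assumes a single unknown source and is only an illustration, not the patent's representation.

```python
ZERO, ONE, X, NEG_X = "0", "1", "X", "-X"   # -X is the negative unknown state

def sim_not(a):
    # Inverting the unknown state yields the negative unknown state.
    return {ZERO: ONE, ONE: ZERO, X: NEG_X, NEG_X: X}[a]

def sim_and(a, b):
    if ZERO in (a, b):
        return ZERO
    if a == ONE:
        return b
    if b == ONE:
        return a
    if {a, b} == {X, NEG_X}:   # a signal ANDed with its own complement
        return ZERO
    return a                   # X AND X -> X, -X AND -X -> -X

def sim_or(a, b):
    # Derived from AND and NOT via De Morgan's law.
    return sim_not(sim_and(sim_not(a), sim_not(b)))

assert sim_and(X, sim_not(X)) == ZERO
assert sim_or(X, sim_not(X)) == ONE
```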


    Method and apparatus for a modified parity check
    29.
    Invention Grant
    Method and apparatus for a modified parity check (Expired)

    Publication Number: US07275199B2

    Publication Date: 2007-09-25

    Application Number: US10912483

    Filing Date: 2004-08-05

    CPC classification number: G06F11/1032

    Abstract: A method, an apparatus, and a computer program are provided for sequentially determining the parity of stored data. Because of the inherent instabilities that exist in most memory arrays, data corruption can be a substantial problem. Parity checking and other techniques are typically employed to counteract the problem; however, these techniques involve tradeoffs. The time required to perform the parity check, for example, can cause system latencies. Therefore, to reduce latencies, a trusted register can be included in a memory system to allow immediate access to one piece of trusted data. By being able to read one piece of trusted data, the system can overlap the parity checking and delivery of one location of data with the reading of the next location of data from the memory array. Hence, a full cycle of latency can be eliminated without reducing the clock frequency.
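
    The overlap can be modeled behaviorally, as a rough sketch: each loop iteration below stands for one cycle in which the next array location is read while the word fetched in the previous cycle is parity-checked and delivered. The even-parity convention, the data layout, and the function names are assumptions made for the example.

```python
def parity_ok(word):
    data, parity_bit = word
    return (bin(data).count("1") & 1) == parity_bit   # even parity assumed

def sequential_read(array, addresses):
    """Yield verified data words from array (a mapping of address to
    (data, parity_bit)).  Each iteration models one cycle: the next location
    is read while the previously fetched word is checked and delivered."""
    pending = None                        # word awaiting its parity check
    for addr in addresses:
        fetched = array[addr]             # read the next location
        if pending is not None:
            word, word_addr = pending     # check and deliver last cycle's word
            if not parity_ok(word):
                raise ValueError(f"parity error at address {word_addr:#x}")
            yield word[0]
        pending = (fetched, addr)
    if pending is not None:               # drain the final fetched word
        word, word_addr = pending
        if not parity_ok(word):
            raise ValueError(f"parity error at address {word_addr:#x}")
        yield word[0]
```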


    Method and bus prefetching mechanism for implementing enhanced buffer control

    Publication Number: US20060174068A1

    Publication Date: 2006-08-03

    Application Number: US11050295

    Filing Date: 2005-02-03

    CPC classification number: G06F13/28

    Abstract: A method and bus prefetching mechanism are provided for implementing enhanced buffer control. A computer system includes a plurality of masters and at least one slave exchanging data over a system bus, and the slave prefetches read data under the control of a master. The master generates a continue bus signal that indicates a new or a continued request. The master generates a prefetch bus signal that indicates an amount to prefetch, including no prefetching. The master includes a mechanism for continuing a sequence of reads that allows prefetching until a request is made indicating a prefetch amount of zero.
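
    A compact model of the protocol described above may be useful: each read carries a continue flag and a prefetch amount, the slave prefetches ahead only as far as the master asks, and a prefetch amount of zero ends the prefetched sequence. The class names, the line size, and the prefetch depth of four are assumptions made for the sketch.

```python
class PrefetchingSlave:
    """Slave that prefetches read data under the master's control."""

    def __init__(self, memory, line=16):
        self.memory = memory              # bytes-like backing store
        self.line = line
        self.buffer = {}                  # address -> prefetched line

    def read(self, addr, continued, prefetch_count):
        # A continued request may be served from already-prefetched data.
        data = self.buffer.pop(addr, None) if continued else None
        if data is None:
            data = self.memory[addr:addr + self.line]
        # Prefetch ahead exactly as much as the master indicated.
        for i in range(1, prefetch_count + 1):
            nxt = addr + i * self.line
            self.buffer[nxt] = self.memory[nxt:nxt + self.line]
        return data

def master_read_sequence(slave, start, count, line=16, depth=4):
    """Master side: a sequence of continued reads; the final request carries
    a prefetch amount of zero, which ends the prefetching."""
    data = b""
    for i in range(count):
        remaining = count - i - 1
        data += slave.read(start + i * line,
                           continued=(i > 0),
                           prefetch_count=min(remaining, depth))
    return data
```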
