MULTI-MODE MULTI-PARALLELISM DATA EXCHANGE METHOD AND DEVICE THEREOF
    1.
    Invention application
    MULTI-MODE MULTI-PARALLELISM DATA EXCHANGE METHOD AND DEVICE THEREOF (In force)

    Publication number: US20090146849A1

    Publication date: 2009-06-11

    Application number: US12048101

    Filing date: 2008-03-13

    IPC classification: H03M7/00

    Abstract: A multi-mode multi-parallelism data exchange method and a device thereof are proposed for application to a check node operator or a bit node operator. The proposed method comprises the steps of: duplicating part or all of original shift data as duplicated shift data; combining the original shift data and the duplicated shift data to form a data block; and shifting this data block as a unit so that the shifted data can conveniently be retrieved from the shifted block. With a maximum z-factor circuit and duplication of part of the data, specifications with different shift sizes can be supported, so the functions of shifters of several sizes can be accomplished with minimum complexity.

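    As a rough behavioural illustration of the idea in the abstract (not the patented circuit), the Python sketch below duplicates the wrap-around portion of the shift data, pads the block up to an assumed fixed maximum width z_max, shifts the whole block as one unit, and then reads the cyclically shifted data back out. The function name and the value of z_max are assumptions made for the example.

```python
def cyclic_shift_via_block_shift(data, shift, z_max=96):
    """Cyclic left shift of `data` realised as one plain shift of a fixed-width block."""
    z = len(data)
    assert 0 <= shift < z and z + shift <= z_max, "block must fit the fixed-width shifter"
    # Duplicate the first `shift` elements (the wrap-around portion) and pad the
    # block up to the fixed width z_max, so one maximum-size shifter serves every
    # supported shift size z ("multi-mode, multi-parallelism").
    block = list(data) + list(data[:shift]) + [None] * (z_max - z - shift)
    shifted_block = block[shift:]      # shift the whole block as one unit
    return shifted_block[:z]           # retrieve the shifted data from the block

# Example: a cyclic left shift by 3 of a length-5 vector on a 96-wide block.
print(cyclic_shift_via_block_shift([0, 1, 2, 3, 4], 3))   # [3, 4, 0, 1, 2]
```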

    Multi-mode multi-parallelism data exchange method and device thereof
    2.
    Invention grant
    Multi-mode multi-parallelism data exchange method and device thereof (In force)

    Publication number: US07719442B2

    Publication date: 2010-05-18

    Application number: US12048101

    Filing date: 2008-03-13

    IPC classification: H03M7/34

    Abstract: A multi-mode multi-parallelism data exchange method and a device thereof are proposed for application to a check node operator or a bit node operator. The proposed method comprises the steps of: duplicating part or all of original shift data as duplicated shift data; combining the original shift data and the duplicated shift data to form a data block; and shifting this data block as a unit so that the shifted data can conveniently be retrieved from the shifted block. With a maximum z-factor circuit and duplication of part of the data, specifications with different shift sizes can be supported, so the functions of shifters of several sizes can be accomplished with minimum complexity.


    OPERATING METHOD APPLIED TO LOW DENSITY PARITY CHECK (LDPC) DECODER AND CIRCUIT THEREOF
    3.
    Invention application
    OPERATING METHOD APPLIED TO LOW DENSITY PARITY CHECK (LDPC) DECODER AND CIRCUIT THEREOF (In force)

    Publication number: US20090037799A1

    Publication date: 2009-02-05

    Application number: US11939119

    Filing date: 2007-11-13

    IPC classification: H03M13/47 G06F11/00

    Abstract: An operating method applied to low density parity check (LDPC) decoders and the circuit thereof are proposed, in which original bit nodes are incorporated into check nodes for simultaneous operation. The bit node messages are generated according to the difference between the newly generated check messages and the previous check node messages. The bit node messages can be updated immediately, and the decoder throughput can be improved. In addition, the required memory of LDPC decoders can be effectively reduced, and the decoding speed can also be enhanced.

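    The abstract reads like a layered (check-node-centric) decoding schedule. Purely as an illustration, the hedged Python sketch below performs one such iteration with a min-sum check-node update, refreshing each posterior LLR from the difference between the new and the previous check-node messages so that no separate bit-node pass or bit-node message memory is needed. The function names and the min-sum approximation are assumptions, not taken from the patent text.

```python
import numpy as np

def layered_minsum_iteration(llr_app, check_msgs, rows):
    """One layered min-sum iteration over all check nodes.

    llr_app    -- posterior LLR per bit (updated in place)
    check_msgs -- check_msgs[c][j]: previous message from check c to its j-th bit
    rows       -- rows[c]: list of bit indices participating in check c
    """
    for c, bits in enumerate(rows):
        # Bit-to-check messages: remove this check's previous contribution.
        q = np.array([llr_app[b] - check_msgs[c][j] for j, b in enumerate(bits)])
        for j, b in enumerate(bits):
            others = np.delete(q, j)
            sign = np.prod(np.where(others >= 0, 1.0, -1.0))
            new_msg = sign * np.min(np.abs(others))      # min-sum check-node update
            # Posterior refreshed with the (new - old) check-message difference,
            # which folds the bit-node update into the check-node operation.
            llr_app[b] = q[j] + new_msg
            check_msgs[c][j] = new_msg
    return llr_app

# Toy example: 4 bits, 2 checks.
llr = np.array([1.5, -0.4, 0.8, -2.0])
msgs = [np.zeros(3), np.zeros(3)]
print(layered_minsum_iteration(llr, msgs, [[0, 1, 2], [1, 2, 3]]))
```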

    Operating method and circuit for low density parity check (LDPC) decoder
    4.
    Invention grant
    Operating method and circuit for low density parity check (LDPC) decoder (In force)

    Publication number: US08108762B2

    Publication date: 2012-01-31

    Application number: US11939119

    Filing date: 2007-11-13

    IPC classification: G06F11/00 H03M13/00

    Abstract: An operating method and a circuit for low density parity check (LDPC) decoders are provided, in which original bit nodes are incorporated into check nodes for simultaneous operation. The bit node messages are generated according to the difference between the newly generated check messages and the previous check node messages. The bit node messages can be updated immediately, and the decoder throughput can be improved. The required memory of LDPC decoders can be effectively reduced, and the decoding speed can also be enhanced.


    Invalidation bus optimization for multiprocessors using directory-based cache coherence protocols in which an address of a line to be modified is placed on the invalidation bus simultaneously with sending a modify request to the directory
    5.
    Invention grant
    Invalidation bus optimization for multiprocessors using directory-based cache coherence protocols in which an address of a line to be modified is placed on the invalidation bus simultaneously with sending a modify request to the directory (Expired)

    Publication number: US5778437A

    Publication date: 1998-07-07

    Application number: US533044

    Filing date: 1995-09-25

    IPC classification: G06F12/08 G06F12/00

    CPC classification: G06F12/0826 G06F12/0813

    Abstract: An optimization scheme for a directory-based cache coherence protocol for multistage interconnection network-based multiprocessors improves system performance by reducing network latency. The optimization scheme is scalable, targeting multiprocessor systems having a moderate number of processors. The modification of shared data is the dominant contributor to performance degradation in these systems. The directory-based cache coherence scheme uses an invalidation bus on the processor side of the network. The invalidation bus connects all the private caches in the system and processes the invalidation requests, thereby eliminating the need to send invalidations across the network. In operation, a processor which attempts to modify data places the address of the data to be modified on the invalidation bus simultaneously with sending a store request for the data modification to the global directory, and the global directory sends to the processor attempting to modify the data, in addition to the permission signal, a count of the number of invalidation acknowledgments the processor should receive.

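    Purely as an illustration of the protocol flow described above, the small Python model below (the data structures are invented for the example, and the hardware itself is not modelled) broadcasts the address on an invalidation bus shared by the private caches while the store request goes to the global directory, and has the writer check that the number of snooped invalidations matches the acknowledgment count returned by the directory.

```python
# caches[p]: addresses held by processor p's private cache;
# directory[addr]: processors the global directory records as sharing addr.
caches = {0: {"A"}, 1: {"A"}, 2: {"A"}}
directory = {"A": {0, 1, 2}}

def store(proc, addr):
    # (1) Place the address on the invalidation bus at the same time as the
    #     modify (store) request is sent to the directory: every other private
    #     cache snoops the bus, invalidates its copy, and acknowledges.
    acks_received = 0
    for p, lines in caches.items():
        if p != proc and addr in lines:
            lines.discard(addr)      # snooping cache invalidates its copy
            acks_received += 1       # ... and acknowledges over the bus
    # (2) The directory grants permission and tells the writer how many
    #     invalidation acknowledgments it should expect.
    acks_expected = len(directory[addr] - {proc})
    directory[addr] = {proc}
    # (3) The store completes once the two counts agree.
    assert acks_received == acks_expected
    return f"store to {addr} completed by processor {proc}"

print(store(0, "A"))   # both other sharers invalidated; 2 acknowledgments
```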

    Single-FIFO high speed combining switch
    6.
    Invention grant
    Single-FIFO high speed combining switch (Expired)

    Publication number: US5046000A

    Publication date: 1991-09-03

    Application number: US303699

    Filing date: 1989-01-27

    Applicant: Yarsun Hsu

    Inventor: Yarsun Hsu

    CPC classification: G06F7/22 G06F13/1631 G06F5/06

    Abstract: A combining switch 10 includes a two-input multiplexer 12 which receives I and J inputs from data processors and directs one of the incoming messages, if there is no contention or congestion at a switch output port 14 and a Queue FIFO 16 is empty, directly to the output port 14 for transmission to one of a plurality of memory modules. If the output port 14 is busy and the Queue 16 is empty, the incoming message is routed to the Queue FIFO 16 for storage. If the Queue FIFO 16 is not empty, the incoming message is first compared by a comparator 20 to all existing messages stored in the Queue FIFO 16 to determine if the incoming message is destined for a memory address which already has a queued message. If no match is determined by comparator 20, the incoming message is routed to the Queue FIFO 16 for storage. If comparator 20 determines that the memory address and operation type of the incoming message match those of a message already stored in the Queue FIFO 16, both the incoming message and the queued message are applied to a message-combining ALU 26. The ALU 26 generates a combined message which is stored at the same Queue 16 location as the queued message which generated a comparison match with the incoming message.
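    A rough behavioural sketch of the queueing and combining decision described above is given below (Python; the message format and the fetch-and-add combining rule are illustrative assumptions, and the numbered hardware elements of the abstract are not modelled).

```python
from collections import deque

class CombiningSwitch:
    def __init__(self):
        self.queue = deque()       # the single Queue FIFO
        self.output_busy = False   # contention / congestion at the output port

    def accept(self, msg):
        """msg is a dict such as {'addr': ..., 'op': ..., 'value': ...}."""
        if not self.output_busy and not self.queue:
            return ("forward", msg)            # bypass straight to the output port
        for queued in self.queue:
            if queued["addr"] == msg["addr"] and queued["op"] == msg["op"]:
                # Combining ALU: two requests of the same type to the same memory
                # address are merged; the combined message replaces the matching
                # queued message at the same FIFO location.
                queued["value"] += msg["value"]
                return ("combined", queued)
        self.queue.append(msg)                 # no match: store in the Queue FIFO
        return ("queued", msg)

sw = CombiningSwitch()
sw.output_busy = True
print(sw.accept({"addr": 0x100, "op": "fetch_add", "value": 1}))   # queued
print(sw.accept({"addr": 0x100, "op": "fetch_add", "value": 2}))   # combined, value 3
```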

    Cache coherence for lazy entry consistency in lockup-free caches
    7.
    Invention grant
    Cache coherence for lazy entry consistency in lockup-free caches (Expired)

    Publication number: US6094709A

    Publication date: 2000-07-25

    Application number: US886222

    Filing date: 1997-07-01

    IPC classification: G06F12/08 G06F12/00

    CPC classification: G06F12/0808 G06F12/0828

    Abstract: A method of reducing false sharing in a shared memory system by enabling two caches to modify the same line at the same time. More specifically, with this invention a lock associated with a segment of shared memory is acquired, and the segment will then be used exclusively by the processor of the shared memory system that has acquired the lock. For each line of the segment, an invalidation request is sent to a number of caches of the system. When a cache receives the invalidation request, it invalidates each line of the segment that is in the cache. When each line of the segment has been invalidated, an invalidation acknowledgement is sent to the global directory. For each line of the segment that has been updated or modified, the updated data are written back to main memory. Then, an acquire signal is sent to the requesting processor, which then has exclusive use of the segment.

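    The acquire path described above can be pictured with the hedged Python sketch below; the cache and memory layouts are invented for the example, and the lock acquisition itself is left implicit.

```python
def acquire_segment(requester, segment_lines, caches, memory):
    """Invalidate every line of a locked segment in the other caches, write dirty
    copies back to main memory, and then signal the requester.

    caches: {proc_id: {addr: (data, dirty)}}; memory: {addr: data}.
    """
    acks = 0
    for addr in segment_lines:
        for proc, lines in caches.items():
            if proc == requester or addr not in lines:
                continue
            data, dirty = lines.pop(addr)     # invalidate the line in that cache
            if dirty:
                memory[addr] = data           # write modified data back to memory
            acks += 1                         # invalidation acknowledgment to the directory
    return ("acquire", requester, acks)       # requester now has exclusive use of the segment

caches = {0: {}, 1: {100: ("new", True)}, 2: {104: ("old", False)}}
memory = {100: "stale", 104: "old"}
print(acquire_segment(0, [100, 104], caches, memory))   # ('acquire', 0, 2); memory[100] == 'new'
```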

    Network rearrangement method and system
    8.
    Invention grant
    Network rearrangement method and system (Expired)

    Publication number: US5287491A

    Publication date: 1994-02-15

    Application number: US335916

    Filing date: 1989-04-10

    Applicant: Yarsun Hsu

    Inventor: Yarsun Hsu

    Abstract: A system and method for a fault-tolerant arrangement of parallel networks that interconnect processors, with the first of the parallel networks distributed in an Omega configuration and the second of the parallel networks distributed in a reversed Omega configuration.

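    For readers unfamiliar with the topology named in the abstract, the short sketch below (illustrative only, not taken from the patent) traces destination-tag routing through an Omega network built from perfect-shuffle stages; a reversed Omega network wires the stages in the opposite order, so a pair of such networks offers two differently wired paths between the same processors.

```python
def omega_route(src, dst, n_bits):
    """Trace the ports visited when routing src -> dst through an n_bits-stage
    Omega network on 2**n_bits ports (destination-tag routing)."""
    mask = (1 << n_bits) - 1
    port, path = src, [src]
    for stage in range(n_bits):
        port = ((port << 1) | (port >> (n_bits - 1))) & mask   # perfect shuffle
        bit = (dst >> (n_bits - 1 - stage)) & 1                # next destination bit
        port = (port & ~1) | bit                               # straight or exchange setting
        path.append(port)
    return path

print(omega_route(src=5, dst=2, n_bits=3))   # [5, 2, 5, 2]: arrives at port 2
```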

    Method for providing virtual atomicity in multi processor environment having access to multilevel caches
    9.
    Invention grant
    Method for providing virtual atomicity in multi processor environment having access to multilevel caches (Expired)

    Publication number: US06175899B1

    Publication date: 2001-01-16

    Application number: US08858135

    Filing date: 1997-05-19

    IPC classification: G06F12/00

    CPC classification: G06F12/0811 G06F12/0808

    Abstract: A method for assuring virtually atomic invalidation in a multilevel cache system wherein lower-level cache locations store portions of a line stored at a higher-level cache location. Upon receipt of an invalidation signal, the higher-level cache location invalidates the line and sets a HOLD bit on the invalidated line. Thereafter, the higher-level cache sends invalidation signals to all lower-level caches which store portions of the invalidated line. Each lower-level cache invalidates its portion of the line and sets a HOLD bit on its portion of the line. The HOLD bits are reset after all line portion invalidations have been completed.

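    As a toy model of the HOLD-bit sequence described above (Python; the mapping of four lower-level sub-lines per higher-level line is an assumption made for the example), the higher-level line and all of its lower-level portions are invalidated and held, and every HOLD bit is released only after the last portion has been invalidated, which is what makes the multi-step invalidation appear atomic.

```python
def invalidate_line(l2, l1_caches, line_addr, subs_per_line=4):
    """l2: {addr: {"valid": bool, "hold": bool}}; each L1 cache maps sub-line
    addresses to the same kind of entry."""
    # Step 1: invalidate the higher-level (L2) line and set its HOLD bit.
    l2[line_addr].update(valid=False, hold=True)
    held = [l2[line_addr]]
    # Step 2: invalidate every lower-level (L1) portion of the line, setting HOLD.
    for l1 in l1_caches:
        for sub_addr, entry in l1.items():
            if sub_addr // subs_per_line == line_addr:
                entry.update(valid=False, hold=True)
                held.append(entry)
    # Step 3: only after all portions have been invalidated, reset every HOLD bit.
    for entry in held:
        entry["hold"] = False

l2 = {7: {"valid": True, "hold": False}}
l1s = [{28: {"valid": True, "hold": False}}, {29: {"valid": True, "hold": False}}]
invalidate_line(l2, l1s, 7)
print(l2[7], l1s[0][28])   # all entries now invalid with HOLD cleared
```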

    Hierarchical bus simple COMA architecture for shared memory multiprocessors having a bus directly interconnecting caches between nodes
    10.
    Invention grant
    Hierarchical bus simple COMA architecture for shared memory multiprocessors having a bus directly interconnecting caches between nodes (Expired)

    Publication number: US6148375A

    Publication date: 2000-11-14

    Application number: US023754

    Filing date: 1998-02-13

    CPC classification: G06F12/0811 G06F12/0831

    Abstract: A method of maintaining cache coherency in a shared memory multiprocessor system having a plurality of nodes, where each node is itself a shared memory multiprocessor. With this invention, an additional shared-owner state is maintained so that if a cache at the highest level of cache memory in the system issues a read or write request for a cache line and the request misses at that highest cache level, the owner of the cache line places the line on the bus interconnecting the highest-level cache memories.

    摘要翻译: 在具有多个节点的共享存储器多处理器系统中维持高速缓存一致性的方法,其中每个节点本身是共享存储器多处理器。 利用本发明,维护附加的共享所有者状态,使得如果系统中的高速缓冲存储器的最高级别的高速缓存向错过系统的最高高速缓存级别的高速缓存行发出读取或写入请求,则所有者 高速缓存行将高速缓存行放置在互连最高级别的高速缓冲存储器的总线上。