2. System and method for assigning memory access transfers between communication channels
    Invention Grant (In Force)

    Publication No.: US09195621B2

    Publication Date: 2015-11-24

    Application No.: US13838133

    Filing Date: 2013-03-15

    Abstract: A communication channel controller includes a queue, a memory map, and a scheduler. The queue stores a first memory transfer request received at the communication channel controller. The memory map stores information identifying a memory address range associated with a memory. The scheduler compares the source address of the first memory transfer request in the queue to the memory address range in the memory map to determine whether the source address targets the memory, and in response allocates the first memory transfer request to a first communication channel of a plurality of communication channels when all of the first communication channel's outstanding memory transactions target the same source address bank and source address page as the first memory transfer request.
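
    As a rough illustration of the allocation rule described in this abstract, the C sketch below assigns a new transfer to a channel only when every outstanding transaction on that channel already targets the same source bank and page. The structure layout, the bank/page bit positions, and the fixed channel count are assumptions made for the example, not details taken from the patent.

        /* Hypothetical sketch of the bank/page-matching channel assignment. */
        #include <stdbool.h>
        #include <stdint.h>

        #define NUM_CHANNELS 4

        typedef struct {
            bool     busy;       /* channel has outstanding transactions        */
            uint32_t open_bank;  /* bank shared by all outstanding transactions */
            uint32_t open_page;  /* page shared by all outstanding transactions */
        } channel_t;

        /* Assumed address layout: bank and page fields at fixed bit positions. */
        static uint32_t bank_of(uint64_t addr) { return (uint32_t)(addr >> 13) & 0x7u; }
        static uint32_t page_of(uint64_t addr) { return (uint32_t)(addr >> 16) & 0xFFFFu; }

        /* Return a channel whose outstanding transfers all share the request's source
         * bank and page, or -1 so the caller can fall back to another policy. */
        int pick_channel(const channel_t ch[NUM_CHANNELS], uint64_t src_addr)
        {
            uint32_t bank = bank_of(src_addr);
            uint32_t page = page_of(src_addr);
            for (int i = 0; i < NUM_CHANNELS; i++) {
                if (ch[i].busy && ch[i].open_bank == bank && ch[i].open_page == page)
                    return i;   /* keeps the already-open row busy on this channel */
            }
            return -1;
        }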


3. Controller for managing a reset of a subset of threads in a multi-thread system
    Invention Grant (In Force)

    Publication No.: US08924784B2

    Publication Date: 2014-12-30

    Application No.: US13445582

    Filing Date: 2012-04-12

    CPC classification number: G06F9/5022 G06F11/1438 G06F11/1479

    Abstract: An integrated circuit device includes a processor core and a controller. The processor core issues a command intended for a first thread of a plurality of threads. During a thread reset process for the first thread, the controller de-allocates the controller's hardware resources that are allocated to the first thread, returns a specified value to the processor core in response to the first command intended for the first thread, and drops responses intended for the first thread from other devices. The controller completes the thread reset process in response to a determination that all expected responses intended for the first thread have been either received or dropped, and during the thread reset process it continues to issue requests to other devices in response to commands from other threads of the plurality of threads and to process the corresponding responses.
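
    A minimal sketch of the response-handling behavior during a per-thread reset is given below, assuming a simple per-thread table with an outstanding-response counter; the structure names and the delivery path to the core are placeholders, not details from the patent.

        /* Hypothetical sketch: drop responses for a thread under reset and finish
         * the reset once every expected response has been received or dropped. */
        #include <stdbool.h>
        #include <stdint.h>

        #define MAX_THREADS 8

        typedef struct {
            bool     in_reset;     /* thread reset process active             */
            uint32_t outstanding;  /* responses still expected for the thread */
        } thread_state_t;

        typedef struct { uint32_t thread_id; uint32_t data; } response_t;

        static thread_state_t threads[MAX_THREADS];

        void handle_response(const response_t *rsp)
        {
            thread_state_t *t = &threads[rsp->thread_id];
            if (t->outstanding > 0)
                t->outstanding--;
            if (t->in_reset) {
                if (t->outstanding == 0)
                    t->in_reset = false;  /* all expected responses received or dropped */
                return;                   /* dropped: never forwarded to the core       */
            }
            /* normal path: deliver rsp->data to the processor core (not shown) */
        }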


5. Virtualized Interrupt Delay Mechanism
    Invention Application (In Force)

    Publication No.: US20130326102A1

    Publication Date: 2013-12-05

    Application No.: US13485120

    Filing Date: 2012-05-31

    CPC classification number: G06F13/24

    Abstract: A method and circuit for a data processing system provide a partitioned interrupt controller with an efficient deferral mechanism for processing partitioned interrupt requests. A control instruction is executed to encode and store a delay command (e.g., DEFER or SUSPEND) in a data payload with a hardware-inserted partition attribute (LPID), which is stored to a command register (25) at a physical address (PA) retrieved from a special purpose register (46), so that the partitioned interrupt controller (14) can determine whether the delay command can be performed based on local access control information.
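
    The payload encoding can be pictured with the short C sketch below. The field widths, the DEFER/SUSPEND command codes, and the idea of passing the LPID as an explicit parameter are assumptions made so the example is self-contained; in the mechanism described, the partition attribute is inserted by hardware rather than by software.

        /* Hypothetical encoding of a delay command plus partition attribute. */
        #include <stdint.h>

        enum { CMD_DEFER = 0x1, CMD_SUSPEND = 0x2 };   /* assumed command codes */

        uint64_t encode_delay_cmd(uint8_t cmd, uint8_t lpid, uint16_t intr_id)
        {
            return ((uint64_t)lpid << 48) |  /* partition attribute (hardware-inserted) */
                   ((uint64_t)cmd  << 32) |  /* DEFER or SUSPEND                        */
                   intr_id;                  /* interrupt the command applies to        */
        }

        /* Store the payload to the command register whose physical address was
         * retrieved from a special purpose register; the interrupt controller then
         * checks the LPID against its local access control information. */
        void issue_delay_cmd(volatile uint64_t *cmd_reg, uint8_t lpid, uint16_t intr_id)
        {
            *cmd_reg = encode_delay_cmd(CMD_DEFER, lpid, intr_id);
        }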


6. Virtualized Instruction Extensions for System Partitioning
    Invention Application (In Force)

    Publication No.: US20130290585A1

    Publication Date: 2013-10-31

    Application No.: US13460287

    Filing Date: 2012-04-30

    CPC classification number: G06F13/14

    Abstract: A method and circuit for a data processing system provide virtualized instructions for accessing a partitioned device (e.g., 14, 61). A control instruction (47, 48) is executed to encode and store an access command (CMD) in a data payload with a hardware-inserted partition attribute (LPID), which is stored to a command register (25) at a physical address (PA) retrieved from a special purpose register (46), so that the partitioned device (14, 61) can determine whether the access command can be performed based on local access control information.
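
    To complement the write-side sketch above, the fragment below illustrates the receiving side: the partitioned device compares the hardware-inserted LPID in the payload against its local access control state before performing the command. The payload layout and the single-owner access rule are assumptions, not details from the patent.

        /* Hypothetical access check performed by the partitioned device. */
        #include <stdbool.h>
        #include <stdint.h>

        typedef struct {
            uint32_t owner_lpid;   /* partition allowed to command this device */
        } device_acl_t;

        bool accept_command(const device_acl_t *acl, uint64_t payload)
        {
            uint8_t lpid = (uint8_t)(payload >> 48);  /* same assumed layout as above */
            if (lpid != acl->owner_lpid)
                return false;   /* reject: command came from the wrong partition */
            /* decode and perform the access command (not shown) */
            return true;
        }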


7. Asynchronously scheduling memory access requests
    Invention Grant (In Force)

    Publication No.: US08572322B2

    Publication Date: 2013-10-29

    Application No.: US12748600

    Filing Date: 2010-03-29

    CPC classification number: G06F13/1689 G06F12/0215

    Abstract: A data processing system employs a scheduler to schedule pending memory access requests and a memory controller to service the scheduled requests. The memory access requests are scheduled asynchronously with respect to the clocking of the memory. The scheduler is operated using a clock signal whose frequency differs from that of the clock signal used to operate the memory controller; in particular, the scheduler's clock can have a lower frequency than the memory controller's clock. As a result, the scheduler is able to consider a greater number of pending memory access requests when selecting the next request to submit to the memory for servicing, and the resulting sequence of selected memory access requests is therefore more likely to be optimized for memory access throughput.
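
    One way to picture the benefit of the slower scheduler clock is that each scheduler tick can scan a deeper queue of pending requests, as in the hedged sketch below. The queue layout, the row-hit criterion, and the oldest-first fallback are illustrative assumptions rather than details from the patent.

        /* Hypothetical request selection over a deep pending queue. */
        #include <stdint.h>

        #define QUEUE_DEPTH 32

        typedef struct { uint64_t addr; int valid; } mem_req_t;

        /* Prefer a request that hits the currently open row; otherwise return the
         * first valid entry (assumed to be the oldest).  Returns -1 if empty. */
        int select_request(const mem_req_t q[QUEUE_DEPTH], uint64_t open_row)
        {
            int oldest = -1;
            for (int i = 0; i < QUEUE_DEPTH; i++) {
                if (!q[i].valid)
                    continue;
                if ((q[i].addr >> 16) == open_row)  /* assumed row field position */
                    return i;                       /* row hit: pick immediately  */
                if (oldest < 0)
                    oldest = i;
            }
            return oldest;
        }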


8. Explicit barrier scheduling mechanism for pipelining of stream processing algorithms
    Invention Grant (In Force)

    Publication No.: US09207979B1

    Publication Date: 2015-12-08

    Application No.: US14288541

    Filing Date: 2014-05-28

    CPC classification number: H04L49/00

    Abstract: A method for pipelined data stream processing of packets includes determining a task to be performed on each packet of a data stream, the task having a plurality of task portions including a first task portion. The method determines that the first task portion is to process a first packet and, in response to determining that a first storage location stores a first barrier indicator, enables the first task portion to process the first packet and stores a second barrier indicator at the first location. The method then determines that the first task portion is to process a second, next-in-order packet and, in response to determining that the first location stores the second barrier indicator, prevents the first task portion from processing the second packet. In response to a first barrier-clear indicator, the first barrier indicator is stored at the first location, which in turn enables the first task portion to process the second packet.
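
    The barrier handshake can be sketched as a pair of small operations on a per-task-portion barrier word, as below. The two indicator values and the single-word storage location are assumptions chosen for the illustration.

        /* Hypothetical barrier check gating a task portion between packets. */
        #include <stdbool.h>
        #include <stdint.h>

        #define BARRIER_CLEAR 0u   /* "first barrier indicator": portion may proceed */
        #define BARRIER_SET   1u   /* "second barrier indicator": portion must wait  */

        /* Try to start the next in-order packet on this task portion.  On success
         * the barrier is re-armed so the following packet is held back. */
        bool try_enter(uint32_t *barrier_loc)
        {
            if (*barrier_loc != BARRIER_CLEAR)
                return false;            /* previous packet not yet released    */
            *barrier_loc = BARRIER_SET;  /* block the next packet until cleared */
            return true;
        }

        /* Called on a barrier-clear indicator, e.g. once the downstream portion has
         * consumed the packet; re-enables this portion for the next packet. */
        void barrier_clear(uint32_t *barrier_loc)
        {
            *barrier_loc = BARRIER_CLEAR;
        }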


9. System and method for direct memory access buffer utilization by setting DMA controller with plurality of arbitration weights associated with different DMA engines
    Invention Grant (In Force)

    Publication No.: US09128925B2

    Publication Date: 2015-09-08

    Application No.: US13454505

    Filing Date: 2012-04-24

    CPC classification number: G06F13/28

    Abstract: A DMA controller allocates space at a buffer to different DMA engines based on the length of time data segments have been stored at the buffer. This allocation ensures that DMA engines associated with a destination experiencing higher congestion are assigned less buffer space than those associated with a destination experiencing lower congestion. Further, the DMA controller is able to adapt to changing congestion conditions at the transfer destinations.
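
    A rough sketch of the weighting idea is shown below: engines whose segments have been resident in the shared buffer longer (a sign of downstream congestion) receive proportionally fewer buffer slots. The aging metric, the slot counts, and the inverse-proportional split are assumptions made for the example, not the patent's arbitration scheme.

        /* Hypothetical buffer-slot allocation weighted against segment residency time. */
        #include <stdint.h>

        #define NUM_ENGINES 4
        #define TOTAL_SLOTS 64

        /* avg_age[i]: average residency time (ticks) of segments queued for engine i.
         * slots[i]:   resulting share of the shared buffer for engine i. */
        void allocate_slots(const uint32_t avg_age[NUM_ENGINES], uint32_t slots[NUM_ENGINES])
        {
            uint64_t weight[NUM_ENGINES], sum = 0;
            for (int i = 0; i < NUM_ENGINES; i++) {
                /* older data => smaller weight; +1 keeps every weight (and sum) nonzero */
                weight[i] = 1000u / (avg_age[i] + 1u) + 1u;
                sum += weight[i];
            }
            for (int i = 0; i < NUM_ENGINES; i++)
                slots[i] = (uint32_t)(TOTAL_SLOTS * weight[i] / sum);
        }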


10. System and method for maintaining packet order in an ordered data stream
    Invention Grant (In Force)

    Publication No.: US09054998B2

    Publication Date: 2015-06-09

    Application No.: US13760109

    Filing Date: 2013-02-06

    CPC classification number: H04L47/62

    Abstract: A source processor can divide each packet of a data stream into multiple segments prior to communication of the packet, allowing a packet to be transmitted in smaller chunks. The source processor can process the segments of two or more packets for a given data stream concurrently, and provides appropriate context information in each segment's header to facilitate in-order transmission and reception of the packets represented by the individual segments. Similarly, a destination processor can receive the packet segments for an ordered data stream from a source processor and can assign different contexts based upon the context information in each segment's header. When the last segment is received for a particular packet, the context for that packet is closed and a descriptor for the packet is sent to a queue. The order in which the last segments of the packets are transmitted maintains order amongst the packets.
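
    The receive-side bookkeeping can be sketched as below: each segment header carries a context identifier and a last-segment flag, and the destination closes the context and queues a packet descriptor when the final segment arrives. The header layout, the context table, and the enqueue_descriptor() helper are placeholders invented for the illustration.

        /* Hypothetical per-context reassembly on the destination processor. */
        #include <stdbool.h>
        #include <stdint.h>

        #define MAX_CONTEXTS 16

        typedef struct {
            uint16_t context_id;    /* which in-flight packet this segment belongs to */
            bool     last_segment;  /* set on the final segment of the packet         */
            uint32_t length;
        } segment_hdr_t;

        typedef struct { bool open; uint32_t bytes; } context_t;

        static context_t ctx[MAX_CONTEXTS];

        /* Placeholder: push a completed packet's descriptor to the output queue. */
        static void enqueue_descriptor(uint16_t context_id, uint32_t total_bytes)
        {
            (void)context_id; (void)total_bytes;
        }

        /* Because last segments are transmitted in packet order, descriptors are
         * enqueued in the same order, preserving packet order in the stream. */
        void on_segment(const segment_hdr_t *hdr)
        {
            context_t *c = &ctx[hdr->context_id];
            c->open   = true;
            c->bytes += hdr->length;
            if (hdr->last_segment) {
                enqueue_descriptor(hdr->context_id, c->bytes);
                c->open  = false;
                c->bytes = 0;
            }
        }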

