ASSIST THREAD FOR INJECTING CACHE MEMORY IN A MICROPROCESSOR
    1.
    Invention Application
    ASSIST THREAD FOR INJECTING CACHE MEMORY IN A MICROPROCESSOR (In Force)

    Publication No.: US20120198459A1

    Publication Date: 2012-08-02

    Application No.: US13434423

    Filing Date: 2012-03-29

    IPC Classes: G06F9/46 G06F12/08

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memory. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread contains the memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory references. A scheduler schedules the assist thread in conjunction with the corresponding execution thread and runs the assist thread ahead of the execution thread by a determinable threshold, such as a number of main processor cycles or code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower-level cache memories.

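
    The run-ahead idea in this abstract can be sketched as a toy simulation: an assist thread executes only the memory-reference pattern of the main loop and "injects" lines into a simulated cache, kept at most a fixed threshold ahead of the main thread. All names here (AHEAD, cache, the doubling arithmetic) are illustrative, not taken from the patent.

```python
import threading

# Toy model of the patent's run-ahead scheme: the assist thread keeps
# only the memory-reference pattern of the main loop (here, the index
# sequence) and warms a simulated cache, staying no more than AHEAD
# iterations ahead of the main thread -- the "determinable threshold".

AHEAD = 8                             # run-ahead distance (threshold)
data = list(range(100))               # backing "memory"
cache = set()                         # lines already injected
progress = threading.Semaphore(AHEAD) # main releases as it advances
hits = 0

def assist():
    # The arithmetic on the values is stripped out; only the address
    # computation (the loop index) survives in the assist thread.
    for i in range(len(data)):
        progress.acquire()            # stay no more than AHEAD ahead
        cache.add(i)                  # "inject" the line into the cache

def main_thread():
    global hits
    total = 0
    for i in range(len(data)):
        if i in cache:
            hits += 1                 # line was prefetched in time
        total += data[i] * 2          # the real arithmetic work
        progress.release()            # let the assist thread advance
    return total

t = threading.Thread(target=assist)
t.start()
result = main_thread()
t.join()
```

    The semaphore enforces the threshold: the assist thread blocks once it is AHEAD iterations in front, mirroring the scheduler keeping the two threads a fixed distance apart.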

    Assist thread for injecting cache memory in a microprocessor
    2.
    Invention Grant
    Assist thread for injecting cache memory in a microprocessor (In Force)

    Publication No.: US08230422B2

    Publication Date: 2012-07-24

    Application No.: US11034546

    Filing Date: 2005-01-13

    IPC Classes: G06F9/46 G06F9/40 G06F13/28

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memory. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread contains the memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory references. A scheduler schedules the assist thread in conjunction with the corresponding execution thread and runs the assist thread ahead of the execution thread by a determinable threshold, such as a number of main processor cycles or code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower-level cache memories.


    Assist thread for injecting cache memory in a microprocessor
    3.
    Invention Grant
    Assist thread for injecting cache memory in a microprocessor (In Force)

    Publication No.: US08949837B2

    Publication Date: 2015-02-03

    Application No.: US13434423

    Filing Date: 2012-03-29

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memory. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread contains the memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory references. A scheduler schedules the assist thread in conjunction with the corresponding execution thread and runs the assist thread ahead of the execution thread by a determinable threshold, such as a number of main processor cycles or code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower-level cache memories.


    Method and system for managing cache injection in a multiprocessor system
    4.
    Invention Grant
    Method and system for managing cache injection in a multiprocessor system (In Force)

    Publication No.: US08255591B2

    Publication Date: 2012-08-28

    Application No.: US10948407

    Filing Date: 2004-09-23

    IPC Classes: G06F13/28

    CPC Classes: G06F13/28

    Abstract: A method and apparatus for managing cache injection in a multiprocessor system reduce the processing time associated with direct memory access (DMA) transfers in a symmetrical multiprocessor (SMP) or non-uniform memory access (NUMA) environment. The method and apparatus either detect the target processor for DMA completion or direct DMA-completion processing to a particular processor, thereby enabling injection into the cache coupled to the processor that executes the DMA completion routine and processes the injected data. The target processor may be identified by determining which processor handles the interrupt that occurs on completion of the DMA transfer. Alternatively, or in conjunction with target processor identification, an interrupt handler may queue a deferred procedure call to the target processor to process the transferred data. In NUMA multiprocessor systems, the completing processor and target memory are chosen so that the target memory is accessible to the processor and its associated cache.

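
    The deferred-procedure-call variant described above can be sketched as a toy SMP model: DMA completion injects the transferred line into one CPU's private cache and queues a DPC to that same CPU, so the completion routine finds the data already cached. The CPU class, four-processor layout, and addresses are invented for illustration.

```python
from collections import deque

# Toy SMP model: each CPU has a private cache. When a DMA transfer
# completes, the data is injected into the target CPU's cache and a
# deferred procedure call (DPC) is queued to THAT cpu, so the
# completion routine runs where the data already resides.

class CPU:
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.cache = set()         # addresses currently cached
        self.dpc_queue = deque()   # deferred procedure calls
        self.cache_hits = 0

    def run_dpcs(self):
        # Process queued completions; count how many find their
        # data already injected into the local cache.
        while self.dpc_queue:
            addr = self.dpc_queue.popleft()
            if addr in self.cache:
                self.cache_hits += 1

cpus = [CPU(i) for i in range(4)]

def dma_complete(addr, target_cpu):
    # Inject the transferred line into the target CPU's cache ...
    cpus[target_cpu].cache.add(addr)
    # ... and direct completion processing to the same CPU.
    cpus[target_cpu].dpc_queue.append(addr)

for addr in range(16):
    dma_complete(addr, target_cpu=addr % 4)
for cpu in cpus:
    cpu.run_dpcs()
```

    Because the DPC is always queued to the CPU that received the injection, every completion routine in this sketch hits in its local cache; queuing to an arbitrary CPU would instead miss and fetch across the interconnect.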

    Method and system for managing cache injection in a multiprocessor system
    6.
    Invention Application
    Method and system for managing cache injection in a multiprocessor system (In Force)

    Publication No.: US20060064518A1

    Publication Date: 2006-03-23

    Application No.: US10948407

    Filing Date: 2004-09-23

    IPC Classes: G06F13/28

    CPC Classes: G06F13/28

    Abstract: A method and apparatus for managing cache injection in a multiprocessor system reduce the processing time associated with direct memory access (DMA) transfers in a symmetrical multiprocessor (SMP) or non-uniform memory access (NUMA) environment. The method and apparatus either detect the target processor for DMA completion or direct DMA-completion processing to a particular processor, thereby enabling injection into the cache coupled to the processor that executes the DMA completion routine and processes the injected data. The target processor may be identified by determining which processor handles the interrupt that occurs on completion of the DMA transfer. Alternatively, or in conjunction with target processor identification, an interrupt handler may queue a deferred procedure call to the target processor to process the transferred data. In NUMA multiprocessor systems, the completing processor and target memory are chosen so that the target memory is accessible to the processor and its associated cache.


    Method and apparatus for accelerating input/output processing using cache injections
    7.
    Invention Grant
    Method and apparatus for accelerating input/output processing using cache injections (Expired)

    Publication No.: US06711650B1

    Publication Date: 2004-03-23

    Application No.: US10289817

    Filing Date: 2002-11-07

    IPC Classes: G06F12/02

    CPC Classes: G06F12/0835

    Abstract: A method for accelerating input/output operations within a data processing system is disclosed. Initially, a cache controller determines whether a bus operation is a data transfer from a first memory to a second memory without intervening communications through a processor, such as a direct memory access (DMA) transfer. If the bus operation is such a transfer, the cache memory determines whether it holds a copy of the data from the transfer. If it does not, a cache line is allocated within the cache memory to store a copy of the transferred data.

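
    The decision procedure in this abstract (snoop the bus, check for an existing copy, allocate a line on a miss) can be sketched as a toy cache controller. The operation names and addresses below are hypothetical, chosen only to exercise the three branches.

```python
# Toy cache controller that snoops bus operations. A DMA-style
# memory-to-memory transfer either refreshes an existing copy or
# allocates a new line for the transferred data (the "injection");
# ordinary processor operations are ignored here.

class SnoopingCache:
    def __init__(self):
        self.lines = {}        # address -> data ("cache lines")
        self.allocations = 0   # lines newly allocated by injection
        self.updates = 0       # existing copies refreshed in place

    def snoop(self, op, addr, data):
        # Only memory-to-memory transfers trigger injection.
        if op != "dma_transfer":
            return
        if addr in self.lines:
            self.updates += 1      # cache already holds a copy
        else:
            self.allocations += 1  # allocate a line on the miss
        self.lines[addr] = data    # store the transferred data

cache = SnoopingCache()
cache.snoop("dma_transfer", 0x100, b"aa")  # miss: allocate a line
cache.snoop("dma_transfer", 0x100, b"bb")  # hit: update in place
cache.snoop("cpu_read", 0x200, None)       # ignored: not a DMA transfer
```

    Allocating on the miss is what distinguishes injection from ordinary snooping, which would only update lines the cache already holds.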

    OPTIMAL INTERCONNECT UTILIZATION IN A DATA PROCESSING NETWORK
    8.
    Invention Application
    OPTIMAL INTERCONNECT UTILIZATION IN A DATA PROCESSING NETWORK (Expired)

    Publication No.: US20080181111A1

    Publication Date: 2008-07-31

    Application No.: US12059762

    Filing Date: 2008-03-31

    IPC Classes: H04L12/56

    Abstract: A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each link in the network's interconnect. The collected data includes source and destination information for the corresponding packets. A heavily used link is then identified from the collected data. Packet data associated with the heavily used link is analyzed to identify a packet source and destination combination that is a significant contributor to the traffic on that link. In response, a process associated with the identified source and destination combination is migrated, for example to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for the links connected to that switch.

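
    The three analysis steps in this abstract (collect per-link counts, find the hot link, find the dominant source/destination pair) can be sketched as follows. The link and node names are invented sample data standing in for what a switch-resident agent would report.

```python
from collections import Counter

# Per-link packet records as an interconnect-switch agent might
# collect them: (link, source_node, dest_node, n_packets).
records = [
    ("link-A", "n1", "n3", 500),
    ("link-A", "n2", "n3", 90),
    ("link-B", "n1", "n2", 120),
    ("link-B", "n4", "n2", 40),
]

# 1. Find the most heavily used link.
link_load = Counter()
for link, src, dst, n in records:
    link_load[link] += n
hot_link = link_load.most_common(1)[0][0]    # -> "link-A"

# 2. Identify the (source, destination) pair contributing most
#    to the hot link's traffic.
pair_load = Counter()
for link, src, dst, n in records:
    if link == hot_link:
        pair_load[(src, dst)] += n
top_pair = pair_load.most_common(1)[0][0]    # -> ("n1", "n3")

# 3. A scheduler would then migrate the process behind top_pair
#    to a node whose path avoids hot_link (not modeled here).
```

    The migration step itself depends on the network topology and scheduler, so this sketch stops at identifying the candidate process.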

    Optimal interconnect utilization in a data processing network
    9.
    Invention Grant
    Optimal interconnect utilization in a data processing network (Expired)

    Publication No.: US07400585B2

    Publication Date: 2008-07-15

    Application No.: US10948414

    Filing Date: 2004-09-23

    IPC Classes: H04L12/56

    Abstract: A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each link in the network's interconnect. The collected data includes source and destination information for the corresponding packets. A heavily used link is then identified from the collected data. Packet data associated with the heavily used link is analyzed to identify a packet source and destination combination that is a significant contributor to the traffic on that link. In response, a process associated with the identified source and destination combination is migrated, for example to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for the links connected to that switch.


    Optimal interconnect utilization in a data processing network
    10.
    Invention Grant
    Optimal interconnect utilization in a data processing network (Expired)

    Publication No.: US07821944B2

    Publication Date: 2010-10-26

    Application No.: US12059762

    Filing Date: 2008-03-31

    IPC Classes: H04L12/56

    Abstract: A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each link in the network's interconnect. The collected data includes source and destination information for the corresponding packets. A heavily used link is then identified from the collected data. Packet data associated with the heavily used link is analyzed to identify a packet source and destination combination that is a significant contributor to the traffic on that link. In response, a process associated with the identified source and destination combination is migrated, for example to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for the links connected to that switch.
