Method and apparatus for changing microcode to be executed in a processor
    11.
    Invention Grant
    Method and apparatus for changing microcode to be executed in a processor (In Force)

    Publication No.: US07444630B2

    Publication Date: 2008-10-28

    Application No.: US10774994

    Filing Date: 2004-02-09

    CPC classification number: G06F9/3017 G06F9/268 G06F9/328

    Abstract: A Central Processing Unit (CPU) hotpatch circuit compares the run-time instruction stream against an internal cache. The internal cache stores embedded memory addresses with associated control flags, executable instruction codes, and tag information. If a comparison against the current program counter succeeds, execution is altered as required by the control flags. If no match is found, the instruction fetched by the program counter executes unmodified.
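The lookup the abstract describes can be sketched as follows; this is a minimal illustrative model, not the patented circuit, and the flag name, addresses, and instruction strings are invented for the example:

```python
# Minimal sketch of a hotpatch lookup: a patch cache maps program-counter
# addresses to (control flags, replacement instruction). On a hit, execution
# is altered per the flags; on a miss, the original instruction runs.
FLAG_REPLACE = 0x1  # illustrative control flag: substitute the cached instruction

def fetch(pc, memory, patch_cache):
    entry = patch_cache.get(pc)            # compare PC against the internal cache
    if entry is not None:
        flags, patched_insn = entry
        if flags & FLAG_REPLACE:
            return patched_insn            # execution altered per control flags
    return memory[pc]                      # no match: run the original instruction

rom = {0x100: "add r1, r2", 0x104: "buggy_op"}
cache = {0x104: (FLAG_REPLACE, "fixed_op")}
print(fetch(0x100, rom, cache))  # add r1, r2 (cache miss)
print(fetch(0x104, rom, cache))  # fixed_op (cache hit, instruction replaced)
```

In hardware the comparison happens in parallel with the fetch; the dictionary lookup here stands in for that associative match.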


    Method and system of routing network-based data using frame address notification
    12.
    Invention Grant
    Method and system of routing network-based data using frame address notification (In Force)

    Publication No.: US07337253B2

    Publication Date: 2008-02-26

    Application No.: US11386323

    Filing Date: 2006-03-22

    Abstract: A method and system for routing network-based data arranged in frames is disclosed. A host processor analyzes transferred bursts of data and initiates an address lookup algorithm to dispatch each frame to its desired destination. A shared system memory between the host processor and a network device, e.g., an HDLC controller, receives data, including any preselected address fields. The network device includes a plurality of ports, each with a FIFO receive memory for receiving at least a first portion of a frame. The first portion of the frame contains the data with the preselected address fields. A direct memory access (DMA) unit transfers a burst of data from the FIFO receive memory to the shared system memory. A communications processor selects the amount of data to be transferred from the FIFO receive memory based on the address fields to be analyzed by the host processor.
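The burst-then-lookup flow above can be modeled in a few lines; the field size, frame bytes, and routing table are assumptions for illustration only:

```python
# Sketch: a port's FIFO receives a frame; the DMA unit copies only an initial
# burst (sized to cover the preselected address field) into shared system
# memory, where the host looks up the destination. Sizes and names are
# illustrative, not taken from the patent.
ADDR_FIELD_BYTES = 4  # assumed size of the preselected address field

def dma_burst(fifo_bytes, burst_size):
    """Transfer only the first burst_size bytes from the FIFO to shared memory."""
    return fifo_bytes[:burst_size]

def host_route(shared_mem, routing_table):
    """Host inspects the address field now in shared memory and picks a destination."""
    addr = shared_mem[:ADDR_FIELD_BYTES]
    return routing_table.get(addr, "drop")

frame = b"\xde\xad\xbe\xef" + b"payload..."
table = {b"\xde\xad\xbe\xef": "port2"}
shared = dma_burst(frame, ADDR_FIELD_BYTES)
print(host_route(shared, table))  # port2
```

The point of the scheme is that the host can begin routing after only the address-bearing burst arrives, without waiting for the full frame.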


    Method and apparatus for controlling network data congestion
    13.
    Invention Grant
    Method and apparatus for controlling network data congestion (In Force)

    Publication No.: US07072294B2

    Publication Date: 2006-07-04

    Application No.: US10785372

    Filing Date: 2004-02-24

    Abstract: A method, apparatus and network device for controlling the flow of network data arranged in frames and minimizing congestion is disclosed. A status error indicator is generated within a receive FIFO memory, indicating a frame overflow within that memory. In response to the status error indicator, an early congestion interrupt is issued to a host processor to signal that a frame overflow has occurred. The incoming frame is discarded, and servicing of received frames is enhanced either by increasing the number of words in a direct memory access (DMA) unit burst or by modifying the time-slice of other active processes.
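The overflow-interrupt-response sequence can be sketched as below; the class, capacities, and doubling policy are illustrative assumptions (the patent also allows adjusting process time-slices instead):

```python
# Sketch of the early-congestion response: a receive-FIFO overflow sets a
# status error indicator, an early congestion interrupt reaches the host,
# the overflowing frame is discarded, and the host widens the DMA burst
# size (in words) to drain the FIFO faster. All numbers are illustrative.
class ReceiveFifo:
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []
        self.status_error = False

    def receive(self, frame):
        if len(self.frames) >= self.capacity:
            self.status_error = True   # frame overflow: set status error indicator
            return False               # incoming frame is discarded
        self.frames.append(frame)
        return True

def early_congestion_interrupt(fifo, dma_burst_words):
    """Host handler: acknowledge the overflow and increase words per DMA burst."""
    if fifo.status_error:
        fifo.status_error = False
        return dma_burst_words * 2     # illustrative policy: double the burst size
    return dma_burst_words

fifo = ReceiveFifo(capacity=2)
for f in ("f1", "f2", "f3"):           # third frame overflows the FIFO
    fifo.receive(f)
print(early_congestion_interrupt(fifo, dma_burst_words=8))  # 16
```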


    Fencepost descriptor caching mechanism and method therefor
    14.
    Invention Grant
    Fencepost descriptor caching mechanism and method therefor (In Force)

    Publication No.: US06691178B1

    Publication Date: 2004-02-10

    Application No.: US09510387

    Filing Date: 2000-02-22

    CPC classification number: H04L49/254 G06F13/28 H04L49/103 H04L49/90 H04L49/901

    Abstract: A system and method for reducing transfer latencies in fencepost buffering provides a cache between a host and a network controller having shared memory. The cache is divided into a dual cache having a top cache and a bottom cache. First and second descriptor address locations are fetched from shared memory; the two are distinguished in that the first is the location of an active descriptor and the second is the location of a reserve/lookahead descriptor. The active descriptor is copied to the top cache, and a command is issued to the DMA unit to transfer it. The second descriptor address location is then copied into the first, and the next descriptor address location is fetched from external memory and placed in the second descriptor address location.
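One service step of the active/lookahead rotation can be sketched as follows; the slot layout, addresses, and descriptor names are hypothetical, chosen only to make the promotion-and-prefetch order concrete:

```python
# Sketch of the fencepost descriptor rotation: slot 0 holds the active
# descriptor's address, slot 1 the reserve/lookahead descriptor's address.
# After the active descriptor is copied to the top cache and handed to DMA,
# the lookahead address is promoted to slot 0 and the next address is
# prefetched from external memory into slot 1. Layout is illustrative.
def service_descriptor(slots, shared_mem, next_addrs, top_cache, dma_queue):
    active_addr, lookahead_addr = slots
    top_cache.append(shared_mem[active_addr])  # copy active descriptor to top cache
    dma_queue.append(active_addr)              # issue DMA command for the transfer
    promoted = lookahead_addr                  # lookahead becomes the new active
    prefetched = next(next_addrs)              # fetch next address from external memory
    return [promoted, prefetched]

shared = {0x10: "desc-A", 0x20: "desc-B", 0x30: "desc-C"}
top, dma = [], []
slots = service_descriptor([0x10, 0x20], shared, iter([0x30]), top, dma)
print(top, dma, slots)
```

Keeping a lookahead descriptor resident means the next service step never waits on the initial fetch from shared memory, which is where the latency reduction comes from.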

