6. Co-processor including a media access controller (In force)
    Invention grant

    Publication number: US06898673B2

    Publication date: 2005-05-24

    Application number: US10105973

    Filing date: 2002-03-25

    Abstract: A compute engine includes a central processing unit coupled to a coprocessor. The coprocessor includes a media access controller engine and a data transfer engine. The media access controller engine couples the compute engine to a communications network. The data transfer engine couples the media access controller engine to a set of cache memory. In further embodiments, a compute engine includes two media access controller engines. A reception media access controller engine receives data from the communications network. A transmission media access controller engine transmits data to the communications network. The compute engine also includes two data transfer engines. A streaming output engine stores network data from the reception media access controller engine in cache memory. A streaming input engine transfers data from cache memory to the transmission media access controller engine. In one implementation, the compute engine performs different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.

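The receive/transmit data path this abstract describes can be sketched as a minimal Python model. All class and method names below are illustrative assumptions, not terms from the patent; the sketch only shows the flow: reception MAC engine, then streaming output engine into cache, then streaming input engine out to the transmission MAC engine.

```python
from collections import deque


class CacheMemory:
    """Illustrative stand-in for the compute engine's set of cache memory."""
    def __init__(self):
        self.lines = deque()


class ReceptionMAC:
    """Receives frames from the communications network (simulated as a deque)."""
    def __init__(self, network):
        self.network = network

    def receive(self):
        return self.network.popleft() if self.network else None


class TransmissionMAC:
    """Transmits frames to the communications network (simulated as a deque)."""
    def __init__(self, network):
        self.network = network

    def transmit(self, frame):
        self.network.append(frame)


class StreamingOutputEngine:
    """Data transfer engine: stores frames from the reception MAC in cache."""
    def __init__(self, rx_mac, cache):
        self.rx_mac, self.cache = rx_mac, cache

    def pump(self):
        frame = self.rx_mac.receive()
        if frame is not None:
            self.cache.lines.append(frame)
        return frame is not None


class StreamingInputEngine:
    """Data transfer engine: moves frames from cache to the transmission MAC."""
    def __init__(self, cache, tx_mac):
        self.cache, self.tx_mac = cache, tx_mac

    def pump(self):
        if self.cache.lines:
            self.tx_mac.transmit(self.cache.lines.popleft())
            return True
        return False
```

Wiring a receive-side network, a cache, and a transmit-side network through these four engines moves frames end to end in order, which is all this sketch is meant to demonstrate.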

7. Application processing employing a coprocessor (In force)
    Invention grant

    Publication number: US06920542B2

    Publication date: 2005-07-19

    Application number: US10105979

    Filing date: 2002-03-25

    Abstract: A compute engine's central processing unit is coupled to a coprocessor that includes application engines. The central processing unit initializes the coprocessor to perform an application, and the coprocessor initializes an application engine to perform the application. The application engine responds by carrying out the application. In performing some applications, the application engine accesses cache memory, obtaining a physical memory address that corresponds to a virtual address and providing the physical address to the cache memory. In some instances, the coprocessor employs multiple application engines to carry out an application. In one implementation, the application engines facilitate different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.

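The virtual-to-physical translation step in this abstract can be illustrated with a small Python sketch. The page-table layout, the 4 KB page size, and every name here are assumptions made for illustration, not details from the patent.

```python
PAGE_SIZE = 4096  # assumed page size for the sketch


class ApplicationEngine:
    """Illustrative application engine: translates a virtual address to a
    physical address, then presents the physical address to the cache."""
    def __init__(self, page_table, cache):
        self.page_table = page_table  # virtual page number -> physical page number
        self.cache = cache            # physical address -> data

    def load(self, vaddr):
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        paddr = self.page_table[vpage] * PAGE_SIZE + offset
        return self.cache.get(paddr)


class Coprocessor:
    """The CPU initializes the coprocessor, and the coprocessor in turn
    initializes an application engine to carry out the application."""
    def __init__(self, cache):
        self.cache = cache

    def run(self, app, page_table, vaddr):
        engine = ApplicationEngine(page_table, self.cache)
        return app(engine, vaddr)
```

For example, with virtual page 1 mapped to physical page 2, a load of virtual address 4104 (page 1, offset 8) resolves to physical address 8200 before the cache is consulted.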

8. Managing ownership of a full cache line using a store-create operation (In force)
    Invention grant

    Publication number: US06901482B2

    Publication date: 2005-05-31

    Application number: US10106925

    Filing date: 2002-03-25

    Abstract: A system includes a plurality of processing clusters and a snoop controller. A first processing cluster in the plurality of processing clusters includes a first tier cache memory coupled to a second tier cache memory. The system employs a store-create operation to obtain sole ownership of a full cache line memory location for the first processing cluster, without retrieving the memory location from other processing clusters. The system issues the store-create operation for the memory location to the first tier cache. The first tier cache forwards a memory request including the store-create operation command to the second tier cache. The second tier cache determines whether it has sole ownership of the memory location. If the second tier cache does not have sole ownership, any other processing clusters holding ownership of the memory location relinquish it.

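A hypothetical sketch of the store-create flow follows, assuming a simple per-cluster set of solely-owned line addresses. The class names and the snoop-controller interface are invented for illustration; the point is that ownership transfers without fetching the line's current contents.

```python
class SnoopController:
    """Tells every other cluster's cache to relinquish a line."""
    def __init__(self):
        self.caches = []

    def invalidate_others(self, line_addr, requester_id):
        for cache in self.caches:
            if cache.cluster_id != requester_id:
                cache.owned.discard(line_addr)


class SecondTierCache:
    """Second-tier cache: decides whether sole ownership is already held."""
    def __init__(self, cluster_id, snoop):
        self.cluster_id = cluster_id
        self.snoop = snoop
        self.owned = set()  # line addresses this cluster solely owns

    def store_create(self, line_addr):
        # Obtain sole ownership WITHOUT retrieving the line from other
        # clusters: they simply relinquish any ownership they hold.
        if line_addr not in self.owned:
            self.snoop.invalidate_others(line_addr, self.cluster_id)
            self.owned.add(line_addr)


class FirstTierCache:
    """First-tier cache: forwards the store-create command downward."""
    def __init__(self, second_tier):
        self.second_tier = second_tier

    def store_create(self, line_addr):
        self.second_tier.store_create(line_addr)
```

In this model, issuing a store-create from one cluster for a line another cluster owns leaves exactly one owner afterward, and no data ever moves between clusters.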

10. Bandwidth allocation for a data path (Under examination, published)
    Invention application

    Publication number: US20050262263A1

    Publication date: 2005-11-24

    Application number: US11189595

    Filing date: 2005-07-26

    IPC classes: G06F12/00 G06F12/08 H04L12/56

    Abstract: A compute engine allocates data path bandwidth among different classes of packets. The compute engine identifies a packet's class and determines whether to transmit the packet based on the class's available bandwidth. If the class has available bandwidth, the compute engine grants the packet access to the data path. Otherwise, the compute engine only grants the packet access to the data path if none of the other packets waiting for data path access have a class with available bandwidth. After a packet is provided to the data path, the compute engine decrements a bandwidth allocation count for the packet's class. Once the bandwidth count for each class is exhausted, the compute engine sets each count to a respective starting value, reflecting the amount of bandwidth available to a class relative to the other classes. A compute engine employing the above-described bandwidth allocation can be employed to perform different networking services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.

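The per-class counting scheme in this abstract can be sketched as follows. This is a simplified Python model, not the patented implementation: the class names, starting counts, and first-eligible tie-breaking rule are all assumptions for illustration.

```python
class BandwidthAllocator:
    """Per-class bandwidth counters, refilled to weighted starting values
    once every class's count is exhausted (illustrative sketch)."""
    def __init__(self, allocations):
        self.allocations = dict(allocations)  # class -> starting count
        self.counts = dict(allocations)       # class -> remaining count

    def grant(self, waiting):
        """Choose which waiting packet class gets the data path next."""
        # Prefer a class that still has available bandwidth; if none of
        # the waiting classes has any, grant the first waiter anyway.
        eligible = [c for c in waiting if self.counts.get(c, 0) > 0]
        chosen = eligible[0] if eligible else waiting[0]
        # Decrement the chosen class's bandwidth allocation count.
        self.counts[chosen] = max(0, self.counts.get(chosen, 0) - 1)
        # Once every class's count is exhausted, reset each count to its
        # starting value, i.e. its bandwidth share relative to the others.
        if all(v == 0 for v in self.counts.values()):
            self.counts = dict(self.allocations)
        return chosen
```

With starting counts of 2 for class A and 1 for class B, class A can win twice per refill cycle and class B once, after which the counts reset, which is the weighted-sharing behavior the abstract describes.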