    91.
    Invention Grant
    Autonomous system topology based auxiliary network for peer-to-peer overlay network (In Force)

    Publication No.: US07379428B2

    Publication Date: 2008-05-27

    Application No.: US10284100

    Filing Date: 2002-10-31

    IPC Class: H04L12/28

    Abstract: A system and method for an auxiliary network for a peer-to-peer overlay network using autonomous system level topology. Using information available through the auxiliary network, expressway connections are established amongst expressway nodes and ordinary connections are established between ordinary and expressway nodes. The connections established are unconstrained and arbitrary. After the connections are established, efficient routing of information may take place.
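
    To make the connection structure concrete, here is a minimal Python sketch of one way to read the abstract. It is illustrative only, not the patented method; the class, function names and AS numbers are assumptions.

```python
# Illustrative sketch, not the patented implementation: expressway nodes link
# across autonomous systems, ordinary nodes attach to an expressway node in
# their own AS, and the resulting links form the auxiliary network.

class Node:
    def __init__(self, name, asn, expressway=False):
        self.name, self.asn, self.expressway = name, asn, expressway

def build_auxiliary_network(nodes):
    """Return (expressway_links, ordinary_links) using AS-level topology info."""
    expressways = [n for n in nodes if n.expressway]
    ordinary = [n for n in nodes if not n.expressway]
    # Expressway connections: a link between expressway nodes in different ASes
    # (the abstract allows these connections to be arbitrary/unconstrained).
    expressway_links = {(a.name, b.name)
                        for a in expressways for b in expressways
                        if a.name < b.name and a.asn != b.asn}
    # Ordinary connections: each ordinary node attaches to an expressway node
    # of its own AS when one exists.
    ordinary_links = {n.name: e.name
                      for n in ordinary
                      for e in expressways if e.asn == n.asn}
    return expressway_links, ordinary_links

if __name__ == "__main__":
    nodes = [Node("e1", asn=100, expressway=True),
             Node("e2", asn=200, expressway=True),
             Node("o1", asn=100), Node("o2", asn=200)]
    print(build_auxiliary_network(nodes))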


    System and method for determining correct sign of response of an adaptive controller

    Publication No.: US20060287737A1

    Publication Date: 2006-12-21

    Application No.: US11149991

    Filing Date: 2005-06-10

    IPC Class: G05B13/02

    CPC Class: G05B13/042

    Abstract: According to one embodiment, a method comprises receiving, by an adaptive controller, a performance measurement for a computing system. The method further comprises estimating a performance model for use by the adaptive controller, and determining whether the estimated performance model has a correct sign for approaching the performance desired for the computing system. When it is determined that the estimated performance model has an incorrect sign, the adaptive controller takes action to determine a performance model having a correct sign for approaching the performance desired for the computing system.
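
    A hedged sketch of the sign check follows. It assumes the performance model is a single gain estimated by least squares from recent measurements, which is only one plausible reading of the abstract; the function names and data are invented for illustration.

```python
# Hedged sketch: estimate the sign of the gain relating a control input to a
# performance metric and flag it if it contradicts the direction needed to
# approach the target. The least-squares estimator and all names are
# assumptions, not the patented algorithm.

def estimated_gain(inputs, outputs):
    """Least-squares slope of output changes versus input changes."""
    num = den = 0.0
    for i in range(1, len(inputs)):
        du = inputs[i] - inputs[i - 1]
        dy = outputs[i] - outputs[i - 1]
        num += du * dy
        den += du * du
    return num / den if den else 0.0

def has_correct_sign(gain, expected_sign):
    """True when the estimated gain points in the expected direction."""
    return gain != 0.0 and (gain > 0) == (expected_sign > 0)

if __name__ == "__main__":
    cpu_share = [0.2, 0.3, 0.4, 0.5]      # assumed control input
    resp_time = [90.0, 70.0, 55.0, 45.0]  # assumed measurement: falls as share rises
    g = estimated_gain(cpu_share, resp_time)
    # More CPU should lower response time, so the expected sign is negative;
    # a positive estimate would trigger re-estimation by the controller.
    print(g, has_correct_sign(g, expected_sign=-1))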

    System and method for workload-aware request distribution in cluster-based network servers

    Publication No.: US20060080388A1

    Publication Date: 2006-04-13

    Application No.: US11242303

    Filing Date: 2005-10-03

    IPC Class: G06F15/16

    Abstract: A method and system for workload-aware request distribution in cluster-based network servers. The present invention provides a web server cluster having a plurality of nodes wherein each node comprises a distributor component, a dispatcher component and a server component. In another embodiment, the present invention provides a method for managing request distribution to a set of files stored on a web server cluster. A request for a file is received at a first node of a plurality of nodes, each node comprising a distributor component, a dispatcher component and a server component. If the request is for a core file, the request is processed at the first node (e.g., processed locally). If the request is for a partitioned file, it is determined whether the request is assigned to be processed locally at the first node or at another node (e.g., processed remotely). If the request is for neither a core file nor a partitioned file, the request is processed at the first node. In one embodiment, the present invention provides a method for identifying a set of frequently accessed files on a server cluster comprising a number of nodes. Embodiments of the present invention operate to maximize the number of requests served from the total cluster memory of a web server cluster and to minimize the forwarding overhead and disk access overhead by identifying the subset of core files to be processed at any node and by identifying the subset of partitioned files to be processed by different nodes in the cluster.
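
    The dispatch decision in this abstract can be sketched in a few lines of Python. The core/partitioned sets and the hash-based assignment of partitioned files to nodes below are assumptions for illustration, not the method claimed.

```python
# Illustrative dispatch sketch: a request for a "core" file is served locally,
# a "partitioned" file is served by the node it is assigned to, and anything
# else falls back to local service. Assignment by hashing is an assumption.

import hashlib

def assigned_node(path, nodes):
    """Deterministically map a partitioned file to one node of the cluster."""
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def dispatch(path, local_node, nodes, core_files, partitioned_files):
    if path in core_files:
        return local_node                  # hot file, cached on every node
    if path in partitioned_files:
        return assigned_node(path, nodes)  # single owner within the cluster
    return local_node                      # cold file: serve locally from disk

if __name__ == "__main__":
    nodes = ["node0", "node1", "node2"]
    core = {"/index.html"}
    partitioned = {"/video/a.mpg", "/video/b.mpg"}
    for p in ["/index.html", "/video/a.mpg", "/misc/readme.txt"]:
        print(p, "->", dispatch(p, "node0", nodes, core, partitioned))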

    94.
    Invention Grant
    Data placement for fault tolerance (In Force)

    Publication No.: US07000141B1

    Publication Date: 2006-02-14

    Application No.: US10295554

    Filing Date: 2001-11-14

    IPC Class: G06F11/00

    Abstract: A technique for data placement in a distributed system that takes fault tolerance into account. Data placement is performed in which data objects, and possibly replicas thereof, are assigned to nodes within the distributed system. The resulting placement is then tested to determine whether the system provides the desired performance under various fault scenarios. If not, the distributed system is altered, such as by altering its capacity or its capacity allocations. Data placement, fault-tolerance testing, and capacity or allocation changes are performed repetitively, thereby increasing the system's ability to provide the desired performance under the fault scenarios. Preferably, a system and placement are eventually determined that provide the desired performance under the given fault scenarios.
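
    The place / test / alter loop lends itself to a small sketch. The code below is a simplified, assumed reading: round-robin replica placement, single-node failure scenarios only, and capacity grown uniformly until every scenario passes; the real technique is more general.

```python
# Rough sketch of a place / test / adjust loop: spread objects and replicas
# over nodes, simulate each single-node failure, and grow node capacity until
# demand is still met under every failure. All structures are assumptions.

def place(objects, nodes, replicas=2):
    placement = {}
    names = list(nodes)
    for i, obj in enumerate(objects):
        placement[obj] = [names[(i + r) % len(names)] for r in range(replicas)]
    return placement

def survives_any_single_failure(placement, nodes, load_per_object):
    for failed in nodes:
        load = {n: 0 for n in nodes if n != failed}
        for obj, homes in placement.items():
            alive = [n for n in homes if n != failed]
            if not alive:
                return False              # object lost outright
            load[alive[0]] += load_per_object  # shift load to a surviving replica
        if any(load[n] > nodes[n] for n in load):
            return False                  # some node overloaded in this scenario
    return True

if __name__ == "__main__":
    nodes = {"n1": 2, "n2": 2, "n3": 2}   # capacity in requests/sec (assumed)
    objects = ["a", "b", "c", "d"]
    placement = place(objects, nodes)
    while not survives_any_single_failure(placement, nodes, load_per_object=1):
        for n in nodes:                   # alter capacity and test again
            nodes[n] += 1
    print(placement, nodes)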


    95.
    Invention Grant
    AAL2 receiver for filtering signaling/management packets in an ATM system (In Force)

    Publication No.: US06961340B2

    Publication Date: 2005-11-01

    Application No.: US09827660

    Filing Date: 2001-04-06

    IPC Class: H04L12/56 H04Q11/04 H04L12/28

    Abstract: The present invention provides an apparatus, system and method for receiving asynchronous transfer mode (ATM) data cells on an ATM adaptation layer (AAL) configured connection within an ATM system comprising a digital signal processor (DSP) sub-system (412) and a host processor (414). The receiver interfaces directly with the DSP sub-system (412), which converts the digitized voice samples into voice signals, and the host processor (414), which performs AAL2 signaling and layer management functions. The receiver filters the AAL2 signaling and management packets from the AAL2 voice packets using a host-programmable CID filter (550) and UUI filter (560). A match from either filter (550, 560) causes the packet to be forwarded to the host processor (414). If no match is made in either filter (550, 560), a look-up is performed in a receive CID look-up table and the packet is forwarded to the DSP sub-system (412) on a look-up match.
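
    The forwarding decision can be illustrated with a short sketch. The specific CID and UUI values below are placeholders chosen for the example, not values taken from the patent or from the AAL2 standard tables.

```python
# Hedged sketch of the forwarding decision: a packet whose CID or UUI matches
# a host-programmed filter goes to the host processor; otherwise a hit in the
# receive CID look-up table sends it to the DSP sub-system. Values are assumed.

HOST, DSP, DROP = "host", "dsp", "drop"

def route_aal2_packet(cid, uui, cid_filter, uui_filter, rx_cid_table):
    if cid in cid_filter or uui in uui_filter:
        return HOST          # signaling / layer-management packet
    if cid in rx_cid_table:
        return DSP           # voice packet on a configured connection
    return DROP              # unknown connection

if __name__ == "__main__":
    cid_filter = {8}                 # CIDs routed to the host (assumed)
    uui_filter = set(range(27, 32))  # UUI codepoints routed to the host (assumed)
    rx_cid_table = {16, 17, 18}      # active voice connections (assumed)
    print(route_aal2_packet(16, 0, cid_filter, uui_filter, rx_cid_table))   # dsp
    print(route_aal2_packet(8, 0, cid_filter, uui_filter, rx_cid_table))    # host
    print(route_aal2_packet(40, 30, cid_filter, uui_filter, rx_cid_table))  # host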


    97.
    Invention Grant
    ATM processor for switching in an ATM system (In Force)

    Publication No.: US06931012B2

    Publication Date: 2005-08-16

    Application No.: US09827648

    Filing Date: 2001-04-06

    IPC Class: H04L12/56 H04L12/433

    Abstract: The present invention provides an apparatus and system for high speed end-to-end telecommunication traffic using an Asynchronous Transfer Mode (ATM) architecture for convergence of video, data and voice in a SOHO application using a DSL router. An ATM processor (120) enables traffic shaping and operation and maintenance (OAM) processing within a single module. The ATM processor (120) further includes a processor (114) which executes firmware from a program memory (110). A register block (116) communicates setup and teardown notifications and OAM configuration to the processor (114), and a connection state RAM (112) communicates the connection configuration; the processor (114) uses this information when performing the switching, QoS, and OAM functions. Transmit scheduler hardware (118) schedules ATM cell transmission and is configured by the processor (114).
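
    As a very loose software analogy to the traffic-shaping role of the transmit scheduler (the patent describes hardware, not this code), the sketch below spaces cells on each virtual connection according to a configured rate; all names, rates and data are assumed.

```python
# Toy shaping sketch: interleave cells from several VCs so each VC respects
# its configured cell rate. Assumed structures, not the hardware scheduler.

import heapq

def shape(cells_per_vc, rate_cells_per_sec):
    """Return a (time, vc, cell) transmission order that paces each VC."""
    heap = [(0.0, vc) for vc in cells_per_vc]   # (next allowed transmit time, vc)
    heapq.heapify(heap)
    pending = {vc: list(cells) for vc, cells in cells_per_vc.items()}
    schedule = []
    while heap:
        t, vc = heapq.heappop(heap)
        if not pending[vc]:
            continue                             # this VC has nothing left to send
        schedule.append((round(t, 4), vc, pending[vc].pop(0)))
        heapq.heappush(heap, (t + 1.0 / rate_cells_per_sec[vc], vc))
    return schedule

if __name__ == "__main__":
    cells = {"vc1": ["c0", "c1", "c2"], "vc2": ["d0", "d1"]}
    rates = {"vc1": 1000.0, "vc2": 500.0}        # cells per second (assumed)
    for entry in shape(cells, rates):
        print(entry)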


    98.
    Invention Grant
    Cell buffering system with priority cache in an ATM system (In Force)

    Publication No.: US06915360B2

    Publication Date: 2005-07-05

    Application No.: US09827808

    Filing Date: 2001-04-06

    IPC Class: H04L12/56 G06F13/18 G06F12/00

    Abstract: The present invention provides an apparatus and system for buffering data in a communication network with arranged priorities, which enables traffic shaping. A cell buffer unit (600) is arranged with a plurality of queues (614) configured to store PDUs on-chip and off-chip. For each priority there are associated queues both on-chip and off-chip. A cell buffer controller (620) forwards PDUs to a predetermined priority queue and manages the transfer of PDUs off-chip when an on-chip priority queue is fully occupied. The controller (620) also manages the transfer of PDUs back from the off-chip queue when the on-chip priority queue becomes less than fully occupied.
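
    A minimal sketch of the on-chip/off-chip queue arrangement follows, with assumed class and method names; it models only the spill-and-refill behavior described in the abstract.

```python
# Minimal sketch (structure assumed, not taken from the patent): each priority
# level has a small on-chip queue backed by an off-chip queue; PDUs overflow
# off-chip when the on-chip queue is full and are pulled back as space frees.

from collections import deque

class PriorityCellBuffer:
    def __init__(self, priorities, on_chip_depth):
        self.on_chip = {p: deque() for p in priorities}
        self.off_chip = {p: deque() for p in priorities}
        self.depth = on_chip_depth

    def enqueue(self, priority, pdu):
        q = self.on_chip[priority]
        if len(q) < self.depth:
            q.append(pdu)
        else:
            self.off_chip[priority].append(pdu)   # spill when on-chip queue is full

    def dequeue(self):
        for p in sorted(self.on_chip):            # serve the highest priority first
            if self.on_chip[p]:
                pdu = self.on_chip[p].popleft()
                if self.off_chip[p]:              # refill from off-chip storage
                    self.on_chip[p].append(self.off_chip[p].popleft())
                return pdu
        return None

if __name__ == "__main__":
    buf = PriorityCellBuffer(priorities=[0, 1], on_chip_depth=2)
    for i in range(4):
        buf.enqueue(1, f"low{i}")
    buf.enqueue(0, "high0")
    print([buf.dequeue() for _ in range(6)])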


    99.
    Invention Grant
    Delay cache method and apparatus (Expired)

    Publication No.: US06836827B2

    Publication Date: 2004-12-28

    Application No.: US10217698

    Filing Date: 2002-08-13

    IPC Class: G06F12/00

    CPC Class: G06F17/30902

    Abstract: Delayed caching receives an evaluation interval to delay updating the objects stored in a delayed cache, delays for a time period corresponding to the evaluation interval, and updates the objects stored in the delayed cache when the time period has elapsed. The configuration operation for the delayed cache selects a time interval to sample a trace having object access frequencies for objects stored in a cache, creates a first working set of objects accessed during the time interval and a second working set of objects accessed during a subsequent time interval based on the historical trace, determines that the difference between the objects contained in the first and second working sets does not exceed a maximum threshold for the selected time interval, and sets an evaluation interval for evaluating the contents of the cache to the selected time interval.
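
    The configuration step can be sketched as follows. The working-set change metric, the candidate intervals, and the threshold below are assumptions chosen for illustration, not the patented algorithm.

```python
# Sketch with assumed names: pick a sampling interval such that the working
# set of accessed objects changes little between consecutive intervals, and
# use it as the cache's evaluation (update-delay) interval.

def working_sets(trace, interval):
    """trace: list of (timestamp, object). Returns per-interval sets of objects."""
    if not trace:
        return []
    end = max(t for t, _ in trace)
    buckets = [set() for _ in range(int(end // interval) + 1)]
    for t, obj in trace:
        buckets[int(t // interval)].add(obj)
    return buckets

def change_fraction(a, b):
    """Fraction of objects that differ between two consecutive working sets."""
    return len(a ^ b) / max(len(a | b), 1)

def pick_evaluation_interval(trace, candidates, max_change=0.3):
    for interval in candidates:
        sets = working_sets(trace, interval)
        pairs = list(zip(sets, sets[1:]))
        if pairs and all(change_fraction(a, b) <= max_change for a, b in pairs):
            return interval                 # working set is stable at this interval
    return candidates[-1]                   # fall back to the coarsest candidate

if __name__ == "__main__":
    trace = [(0, "a"), (1, "b"), (3, "c"), (11, "a"), (12, "b"), (13, "c"),
             (21, "a"), (22, "b"), (23, "c")]   # assumed access trace
    print(pick_evaluation_interval(trace, candidates=[5, 10, 20]))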
