Method and system for a timing based logic entry
    61.
    Granted Patent
    Method and system for a timing based logic entry (Expired)

    Publication No.: US06789234B2

    Publication Date: 2004-09-07

    Application No.: US10328355

    Filing Date: 2002-12-23

    IPC Class: G06F17/50

    CPC Class: G06F17/5031

    Abstract: A method and system for creating on a computer a timing based representation of an integrated circuit using a graphical editor operating on the computer. The method consists first in creating timing diagrams identifying the elements of the circuit and their time based interconnections. The method further comprises a translation of the timing based diagram editor files into HDL statements. A preferred embodiment is described; it comprises the use of an ASCII editor and a translation program to VHDL statements. A system implementing the steps of the method in a computer is also described. In order to avoid having different tools to translate timing based diagram editor files into HDL statements, a first step translating the graphical editor output file into a PostScript file is performed by executing the “print to file” command of the printing driver of the computer. The PostScript file is then translated into a bitmap file using a RIP. The translation is then performed from the bitmap file into the HDL statements. This translation is “universal” as it can be used for any type of initial graphical file containing the timing diagram.

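    The abstract describes a three-stage conversion pipeline (graphical editor file to PostScript via “print to file”, PostScript to bitmap via a RIP, bitmap to HDL statements). The Python sketch below strings hypothetical stages together purely for illustration: the "timing-editor" export command and the pixel-row-to-signal mapping are assumptions, and Ghostscript stands in as one possible RIP; none of these choices are prescribed by the patent.

        import subprocess
        from PIL import Image

        def print_to_postscript(editor_file, ps_file):
            # Stands in for the printer driver's "print to file" command;
            # "timing-editor" is a hypothetical CLI used only for illustration.
            subprocess.run(["timing-editor", "--print-to-file", editor_file, ps_file],
                           check=True)

        def rasterize(ps_file, bitmap_file, dpi=300):
            # Any RIP may be used; Ghostscript is one common example.
            subprocess.run(["gs", "-dNOPAUSE", "-dBATCH", "-sDEVICE=pngmono",
                            f"-r{dpi}", f"-sOutputFile={bitmap_file}", ps_file],
                           check=True)

        def bitmap_to_vhdl(bitmap_file, signal_rows, ns_per_pixel):
            # Read one pixel row per signal; a dark pixel is taken as logic '1'
            # (an assumption of this sketch) and a VHDL stimulus process is
            # emitted with one assignment per transition.
            img = Image.open(bitmap_file).convert("1")
            width, _ = img.size
            out = []
            for name, row in signal_rows.items():
                out.append(f"{name}_stim: process begin")
                last_x, level = 0, None
                for x in range(width):
                    bit = "1" if img.getpixel((x, row)) == 0 else "0"
                    if bit != level:
                        if level is not None:
                            out.append(f"  wait for {(x - last_x) * ns_per_pixel:.1f} ns;")
                        out.append(f"  {name} <= '{bit}';")
                        last_x, level = x, bit
                out.append("  wait;")
                out.append("end process;")
            return "\n".join(out)

        # Hypothetical usage:
        #   print_to_postscript("counter.tde", "counter.ps")
        #   rasterize("counter.ps", "counter.png")
        #   print(bitmap_to_vhdl("counter.png", {"clk": 40, "reset": 80}, ns_per_pixel=2.0))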

    Method and apparatus to share resources while processing multiple priority data flows
    62.
    Granted Patent
    Method and apparatus to share resources while processing multiple priority data flows (Expired)

    Publication No.: US6003060A

    Publication Date: 1999-12-14

    Application No.: US993695

    Filing Date: 1997-12-18

    IPC Class: G06F9/48 G06F9/00

    CPC Class: G06F9/4881

    Abstract: The invention discloses a method and an apparatus for use in high speed networks such as Asynchronous Transfer Mode (ATM) networks, providing support for processing multipriority data flows at media speed, the major constraint being to share the storage and the ALU between all the tasks. The invention consists first in grouping the tasks into processes and the processes into sets of processes, all organized in decreasing order of their priority; “on the fly” interruption of a lower priority process or set of processes by a higher priority process or set of processes is possible, as well as reuse of the shared resources during task void states, whether inactive within a process or between processes. In the preferred embodiment of the invention, the support of the reserved bandwidth and non-reserved bandwidth ATM service data flows requires two different groups of processes, the highest priority being for the group of processes serving the reserved bandwidth service. With the principle of the invention, when used in network equipment, the media speed is sustained and many different network traffic types can be supported simultaneously. The apparatus implementing the solution of the invention, allowing sharing of resources, saves space and cost through a reduced number of sophisticated hardware components such as static memories and programmable logic circuits.

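    A minimal software analogue of the grouping-and-preemption idea in the abstract (the patent describes a hardware mechanism sharing storage and an ALU; the Task/Scheduler classes, the heap-based ready list and the step-level preemption check below are assumptions made purely for illustration):

        import heapq

        class Task:
            def __init__(self, name, priority, steps):
                # Lower priority number = higher priority, mirroring "decreasing order".
                self.name, self.priority, self.steps = name, priority, steps

        class Scheduler:
            """Tasks are grouped by priority; a newly arrived higher-priority task
            interrupts the running one 'on the fly' at step granularity, and the
            single shared execution resource is reused whenever a task goes idle."""
            def __init__(self):
                self.ready = []          # the shared ready list
                self._seq = 0            # tie-breaker so equal priorities stay FIFO

            def submit(self, task):
                heapq.heappush(self.ready, (task.priority, self._seq, task))
                self._seq += 1

            def run(self):
                while self.ready:
                    prio, _, task = heapq.heappop(self.ready)
                    while task.steps > 0:
                        task.steps -= 1                      # one unit of work on the shared "ALU"
                        print(f"{task.name} (prio {prio}) executed one step")
                        if self.ready and self.ready[0][0] < prio:
                            self.submit(task)                # preempted by a higher priority task
                            break

        sched = Scheduler()
        sched.submit(Task("reserved-bandwidth flow", priority=0, steps=2))
        sched.submit(Task("non-reserved flow", priority=1, steps=3))
        sched.run()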

    Scheduling method and apparatus for supporting ATM connections having a guaranteed minimum bandwidth
    63.
    Granted Patent
    Scheduling method and apparatus for supporting ATM connections having a guaranteed minimum bandwidth (Expired)

    Publication No.: US5946297A

    Publication Date: 1999-08-31

    Application No.: US786914

    Filing Date: 1997-01-22

    Abstract: The method and apparatus of the present invention solve the problem of scheduling the transmission of cells in packet switched networks having network connections requiring a minimum bandwidth at connection establishment. The method and the apparatus further support any mixed traffic flow, including connections requiring a minimum bandwidth, a fixed reserved bandwidth or no bandwidth at connection establishment. Scheduling is controlled by a dual scheduling mechanism having a first scheduler, triggered by absolute time, for scheduling the minimum service connections up to a rate corresponding to their reserved minimum bandwidth, a second scheduler, and a queue of minimum service connection identifiers for communication between the two scheduling schemes. With the dual scheduling mechanism of the present invention, the minimum bandwidth for connections reserving a minimum bandwidth at connection establishment is guaranteed at each point of the connection path and at any time, with the level of fairness of the scheduling of the remaining bandwidth depending on the quality of the second scheduler.

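    A toy model of the dual scheduling mechanism: the first scheduler, driven by absolute time (one call to tick() per scheduling interval here), grants each minimum-bandwidth connection its reserved share and hands its identifier to the second scheduler through a queue; the second scheduler then distributes whatever bandwidth remains. The tick granularity, the round-robin second scheduler and the data structures are assumptions of this sketch, not the patent's implementation:

        from collections import deque

        class DualScheduler:
            def __init__(self, link_cells_per_tick):
                self.link_rate = link_cells_per_tick
                self.min_rates = {}        # connection id -> reserved minimum (cells per tick)
                self.handoff = deque()     # queue of connection ids between the two schedulers
                self.best_effort = deque() # connections with no reserved bandwidth

            def add_min_connection(self, cid, cells_per_tick):
                self.min_rates[cid] = cells_per_tick

            def add_best_effort(self, cid):
                self.best_effort.append(cid)

            def tick(self):
                # First scheduler (absolute time): guarantee the reserved minimum.
                schedule, budget = [], self.link_rate
                for cid, rate in self.min_rates.items():
                    grant = min(rate, budget)
                    schedule += [cid] * grant
                    budget -= grant
                    self.handoff.append(cid)     # still eligible for leftover bandwidth
                # Second scheduler: share the remaining slots (plain round robin here).
                pool = deque(list(self.handoff) + list(self.best_effort))
                while budget > 0 and pool:
                    cid = pool.popleft()
                    schedule.append(cid)
                    pool.append(cid)
                    budget -= 1
                self.handoff.clear()
                return schedule

        s = DualScheduler(link_cells_per_tick=6)
        s.add_min_connection("vc1", 2)
        s.add_min_connection("vc2", 1)
        s.add_best_effort("vc3")
        print(s.tick())   # ['vc1', 'vc1', 'vc2', 'vc1', 'vc2', 'vc3']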

    Method and apparatus providing a multiport physical interface to high speed packet networks
    64.
    Granted Patent
    Method and apparatus providing a multiport physical interface to high speed packet networks (Expired)

    Publication No.: US5923664A

    Publication Date: 1999-07-13

    Application No.: US824941

    Filing Date: 1997-03-27

    CPC Class: H04Q11/0478 H04L2012/5615

    Abstract: The invention discloses a method and an apparatus for implementing the physical interface in a network element connected to a packet network such as an Asynchronous Transfer Mode (ATM) network. With the solution of the invention, the physical interface functions can be integrated on one chip for more than one network port. The physical interface is provided between port bit streams at media speed and the word data flow transferred onto/from a bus which is under the control of the network equipment. The solution of the invention includes grouping logic and storage elements into islands of more than one port. Furthermore, the logic and storage elements for statistical counting operations can be grouped for processing generalized to all ports. Finally, the solution of the present invention takes into account two characteristics of the physical interface: the different rates of the network link media speed and the bus access rate, and the technology of the high density static embedded RAMs used for hardware integration. The Flip/Flop pointer RAMs of the Flip/Flop data RAMs are duplicated and some interface RAMs are created to transfer control data between the islands and the generalized processing logic blocks.

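    A very loose software analogue of the grouping described above: per-port state lives in island objects, and a single generalized block drains small interface buffers to do the statistical counting for all ports. The class and attribute names are assumptions of this sketch; the patent itself concerns the on-chip hardware structure, not software:

        class PortIsland:
            """Logic and storage grouped for a small set of ports: handles the
            media-speed fast path and exposes counts through an interface buffer."""
            def __init__(self, ports):
                self.interface_ram = {p: 0 for p in ports}   # pending events per port

            def receive_cell(self, port):
                self.interface_ram[port] += 1                # local fast-path bookkeeping only

        class StatisticsBlock:
            """Processing generalized to all ports: periodically drains every
            island's interface buffer and maintains the global counters."""
            def __init__(self, islands):
                self.islands = islands
                self.counters = {}

            def poll(self):
                for island in self.islands:
                    for port, pending in island.interface_ram.items():
                        self.counters[port] = self.counters.get(port, 0) + pending
                        island.interface_ram[port] = 0

        islands = [PortIsland(["p0", "p1"]), PortIsland(["p2", "p3"])]
        stats = StatisticsBlock(islands)
        islands[0].receive_cell("p0"); islands[1].receive_cell("p3")
        stats.poll()
        print(stats.counters)   # {'p0': 1, 'p1': 0, 'p2': 0, 'p3': 1}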

    Split socket send queue apparatus and method with efficient queue flow control, retransmission and SACK support mechanisms

    Publication No.: US20060212563A1

    Publication Date: 2006-09-21

    Application No.: US11418606

    Filing Date: 2006-05-05

    IPC Class: G06F15/173

    Abstract: A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. As consumers initiate send operations, send work queue entries (SWQEs) are created by an Upper Layer Protocol (ULP) and written to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry to the SWQ and it subsequently reads this entry, which contains pointers to the data that is to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP subsequently processes the entry and removes it from the CQ, freeing up a space in both the SWQ and CQ. The number of entries available in the SWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite the CQ. The flow control between the ULP and the IPSOE is credit based. The passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
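
    The credit-based flow between the ULP and the IPSOE can be modeled in a few lines. In the sketch below (class and method names are assumptions, and the offload engine is merely simulated in-process) the ULP may only post a send while it holds a credit, and processing a CQE returns that credit:

        from collections import deque

        class SplitSocketSendQueue:
            def __init__(self, depth):
                self.swq = deque()       # send work queue shared with the offload engine
                self.cq = deque()        # completion queue written by the offload engine
                self.credits = depth     # CQ credits still held by the ULP

            # ULP side ----------------------------------------------------------
            def post_send(self, buffer_ptr):
                if self.credits == 0:
                    raise RuntimeError("no credits left: posting would overwrite a valid entry")
                self.credits -= 1
                self.swq.append(buffer_ptr)      # the SWQE points at the data to transmit

            def reap_completion(self):
                cqe = self.cq.popleft()          # process the CQE and remove it from the CQ
                self.credits += 1                # the CQ credit flows back to the ULP
                return cqe

            # Offload engine side (simulated) -------------------------------------
            def engine_transmit_one(self):
                swqe = self.swq.popleft()        # engine reads the SWQE and transmits the data
                self.cq.append(("sent", swqe))   # after the acknowledgments arrive it writes a CQE

        q = SplitSocketSendQueue(depth=2)
        q.post_send("buf0"); q.post_send("buf1")
        q.engine_transmit_one()
        print(q.reap_completion())               # ('sent', 'buf0') -> one credit returned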

    Receive queue device with efficient queue flow control, segment placement and virtualization mechanisms
    67.
    Patent Application
    Receive queue device with efficient queue flow control, segment placement and virtualization mechanisms (In Force)

    Publication No.: US20060259644A1

    Publication Date: 2006-11-16

    Application No.: US11487265

    Filing Date: 2006-07-14

    IPC Class: G06F15/16

    Abstract: A mechanism for offloading the management of receive queues in a split (e.g. split socket, split iSCSI, split DAFS) stack environment, including efficient queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates receive work queues and completion queues that are utilized by an Internet Protocol Suite Offload Engine (IPSOE) and the ULP to transfer information and carry out send operations. As consumers initiate receive operations, receive work queue entries (RWQEs) are created by the ULP and written to the receive work queue (RWQ). The IPSOE is notified of a new entry to the RWQ and it subsequently reads this entry, which contains pointers to the data that is to be received. After the data is received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP subsequently processes the entry and removes it from the CQ, freeing up a space in both the RWQ and CQ. The number of entries available in the RWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite the CQ.

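    The receive side mirrors the send side of the companion patent above; a minimal sketch (names assumed, engine simulated in-process) of the ULP posting buffers as RWQEs and the IPSOE placing received segments before completing into the CQ:

        from collections import deque

        class SplitSocketReceiveQueue:
            def __init__(self, depth):
                self.rwq = deque()       # receive work queue of posted buffers
                self.cq = deque()        # completion queue
                self.credits = depth     # credits guarding both queues

            def post_receive(self, buffer):
                # ULP: describe where incoming data may be placed.
                if self.credits == 0:
                    raise RuntimeError("no credits left: posting would overwrite a valid RWQE")
                self.credits -= 1
                self.rwq.append(buffer)

            def engine_place_segment(self, segment):
                # IPSOE: read the RWQE, place the bytes, then write a CQE.
                buffer = self.rwq.popleft()
                buffer.extend(segment)
                self.cq.append(("received", len(segment)))

            def reap_completion(self):
                # ULP: process the CQE, remove it from the CQ, return the credit.
                cqe = self.cq.popleft()
                self.credits += 1
                return cqe

        rq = SplitSocketReceiveQueue(depth=1)
        data = bytearray()
        rq.post_receive(data)
        rq.engine_place_segment(b"hello")
        print(rq.reap_completion(), bytes(data))   # ('received', 5) b'hello'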

    Methods and apparatus for improving security while transmitting a data packet
    68.
    Patent Application
    Methods and apparatus for improving security while transmitting a data packet (Expired)

    Publication No.: US20070223389A1

    Publication Date: 2007-09-27

    Application No.: US11388011

    Filing Date: 2006-03-23

    IPC Class: H04J1/16 H04L12/66

    Abstract: In a first aspect, a first method of transmitting a data packet is provided. The first method includes the steps of (1) for each connection from which a data packet may be transmitted, storing header data corresponding to the connection; (2) employing a user application to form the header and payload data of a packet, wherein the user application is associated with a connection from which the packet is to be transmitted; and (3) while transmitting the packet, comparing one or more portions of the packet header data with the header data corresponding to the connection with which the user application is associated. Numerous other aspects are provided.

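    A rough illustration of step (3): the header data stored per connection at setup is compared against the header the user application built before the packet leaves. The chosen fields, the dictionary keys and the function names are assumptions of this sketch:

        stored_headers = {}   # connection id -> header fields recorded at connection setup

        def register_connection(conn_id, src_ip, src_port, dst_ip, dst_port):
            stored_headers[conn_id] = (src_ip, src_port, dst_ip, dst_port)

        def transmit(conn_id, header, payload):
            # Compare selected header fields against those stored for the connection
            # the user application is associated with; refuse to send on a mismatch.
            expected = stored_headers[conn_id]
            actual = (header["src_ip"], header["src_port"], header["dst_ip"], header["dst_port"])
            if actual != expected:
                raise PermissionError("packet header does not match the registered connection")
            return len(payload)   # placeholder for handing the packet to the link layer

        register_connection(7, "10.0.0.1", 4000, "10.0.0.2", 80)
        transmit(7, {"src_ip": "10.0.0.1", "src_port": 4000,
                     "dst_ip": "10.0.0.2", "dst_port": 80}, b"GET /")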

    Method and device for controlling timers associated with multiple users in a data processing system

    Publication No.: US5491815A

    Publication Date: 1996-02-13

    Application No.: US120112

    Filing Date: 1993-09-10

    CPC Class: G06F9/4825 H04L29/06

    Abstract: A system for providing a plurality of timers to perform the timing of event occurrences wherein, for each event, there corresponds a timer control block which stores in its time-flag field (Tf) an indication of whether the timer control block is chained or unchained, running or stopped, in its time-out field (Tv) the expiration time interval, and in its time-stamp field (Ts) the current time as a reference at each interruption. The timer control blocks are chained by a one-way link according to their expiration times in such a way that each timer chain contains the timer control blocks whose events will occur at the same time. A cyclic table of index values classifies the timer chains according to their expiration times. When a START operation is requested for an event which has to occur at a time-out value, an index is computed from the Tv and the current time in order to insert its corresponding timer control block at the head of the timer chain pointed to by the index; the timer control block stores the state CHAINED-RUNNING in its time-flag and the current time in its time-stamp. If the timer control block is already chained, then the time-stamp is updated to the current time and the time-flag to RUNNING. Whenever a RESTART operation is requested for an event which has not occurred, the time-stamp of the corresponding timer control block is updated to the value of the current time. Whenever a STOP operation is requested before the event has occurred, the time-flag is updated to STOP. The time-stamps and the time-flags are updated according to the START, STOP and RESTART operations; the current index of the cyclic table is incremented at each timer tick to delete the timer control blocks of the chain whose events have occurred or whose time-out values have expired, and to insert new timer control blocks in the new timer chain for those tasks which have been interrupted and whose events have not occurred yet.
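
    The cyclic table of timer chains is, in effect, a timing wheel. The sketch below models the timer control block fields (Tf, Tv, Ts) and the START / RESTART / STOP operations together with the per-tick scan; the wheel size, the lazy unchaining of stopped timers and the assumption that time-out values are shorter than one wheel revolution are choices made for this illustration, not details taken from the patent:

        WHEEL_SIZE = 256                     # assumes every Tv < WHEEL_SIZE ticks
        STOPPED, RUNNING, CHAINED = 0x0, 0x1, 0x2

        class TimerControlBlock:
            def __init__(self, event, timeout_ticks):
                self.event = event
                self.Tv = timeout_ticks      # time-out field: expiration interval
                self.Ts = 0                  # time-stamp field: reference time
                self.Tf = STOPPED            # time-flag field: chained/running state

        class TimerWheel:
            def __init__(self):
                self.now = 0
                self.wheel = [[] for _ in range(WHEEL_SIZE)]   # cyclic table of timer chains

            def start(self, tcb):
                tcb.Ts = self.now
                if not tcb.Tf & CHAINED:                       # chain the block only once
                    index = (self.now + tcb.Tv) % WHEEL_SIZE
                    self.wheel[index].insert(0, tcb)           # insert at the head of the chain
                tcb.Tf = CHAINED | RUNNING

            def restart(self, tcb):
                tcb.Ts = self.now            # only the time-stamp moves; the chain is untouched

            def stop(self, tcb):
                tcb.Tf &= ~RUNNING           # marked stopped; unchained lazily at its slot

            def tick(self, on_expire):
                self.now += 1
                index = self.now % WHEEL_SIZE
                chain, self.wheel[index] = self.wheel[index], []
                rechain = []
                for tcb in chain:
                    if not tcb.Tf & RUNNING:
                        tcb.Tf = STOPPED                       # stopped before firing: drop it
                    elif self.now - tcb.Ts >= tcb.Tv:
                        on_expire(tcb.event)                   # time-out value has expired
                        tcb.Tf = STOPPED
                    else:
                        rechain.append(tcb)                    # restarted earlier: not yet due
                for tcb in rechain:
                    self.wheel[(tcb.Ts + tcb.Tv) % WHEEL_SIZE].insert(0, tcb)

        wheel = TimerWheel()
        t = TimerControlBlock("retransmit timer", timeout_ticks=3)
        wheel.start(t)
        wheel.tick(print); wheel.tick(print)
        wheel.restart(t)                     # event has not occurred yet: push the deadline out
        for _ in range(3):
            wheel.tick(print)                # "retransmit timer" prints 3 ticks after the restart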