Abstract:
A backlight module includes at least one light source, a lamp cover and a circuit board. The lamp cover has a containing portion and at least one projection. The containing portion is arranged for containing the light source. The projection is located at an outer side of the containing portion on which the circuit board is placed so that at least one fixing portion of the circuit board is in contact with the projection for constraining movements of the circuit board relative to the lamp cover.
Abstract:
A network processor includes first communication protocol ports that each support ‘M’ minimum size packet data path traffic on ‘N’ lanes at ‘S’ Gigabits per second (Gbps) and traffic with different communication protocol units on ‘n’ additional lanes at ‘s’ Gbps. The first communication protocol ports support access to an external coprocessor using parsing logic located in each of the first communication protocol ports. The parsing logic, during a parsing period, is configured to send a request to the external coprocessor at reception of a ‘M’ size packet and to receive a response from the external coprocessor. The parsing logic sends a request maximum ‘m’ size byte word to the external coprocessor on one of the additional lanes and receives a response maximum ‘m’ size byte word from the external coprocessor on the one of the additional lanes while complying with the equation N×S/M=
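For illustration only, here is a minimal Python sketch of the rate arithmetic behind the constraint: with 'M'-byte minimum size packets arriving on 'N' lanes at 'S' Gbps, the parsing logic must be able to send one request word of at most 'm' bytes per packet on the 'n' additional lanes at 's' Gbps. The right-hand side of the equation is truncated in the abstract above, so treating it as a comparison of packet rate against request-word rate is an assumption, as are all names and example figures below.

    # Hedged sketch: rate bookkeeping for per-packet coprocessor requests.
    # Assumption (the abstract's equation is truncated): one request word of at
    # most m bytes is sent per received minimum-size packet, so the sideband
    # word rate n*s/(8*m) should keep up with the packet rate N*S/(8*M).

    def min_size_packet_rate(N, S_gbps, M_bytes):
        """Packets per second arriving on N lanes of S Gbps with M-byte packets."""
        return N * S_gbps * 1e9 / (8 * M_bytes)

    def coprocessor_word_rate(n, s_gbps, m_bytes):
        """Maximum m-byte request words per second on n additional lanes at s Gbps."""
        return n * s_gbps * 1e9 / (8 * m_bytes)

    # Example with hypothetical figures: 4 lanes x 25 Gbps carrying 64-byte packets,
    # 1 additional lane x 25 Gbps carrying 16-byte request words.
    assert coprocessor_word_rate(1, 25, 16) >= min_size_packet_rate(4, 25, 64)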
Abstract:
A network fabric may divide a physical connection into a plurality of VLANs as defined by IEEE 802.1Q. Moreover, many network fabrics use Priority Flow Control to identify and segregate network traffic based on different traffic classes or priorities. Current routing protocols define only eight traffic classes. In contrast, a network fabric may contain thousands of unique VLANs. When network congestion occurs, network devices (e.g., switches, bridges, routers, servers, etc.) can negotiate to pause the network traffic associated with one of the different traffic classes. Pausing the data packets associated with a single traffic class may also stop the data packets associated with thousands of VLANs. The embodiments disclosed herein permit a network fabric to individually pause VLANs rather than entire traffic classes.
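As an illustration of the difference in granularity, the sketch below (Python, with hypothetical class and field names) contrasts PFC-style per-priority pause state, which covers at most eight traffic classes, with a per-VLAN pause set that can hold back a single VLAN while the rest of its traffic class keeps flowing. It is a sketch of the idea, not the disclosed switch logic.

    # Hedged sketch: pausing at VLAN granularity instead of traffic-class
    # granularity. PFC-style pause state covers only 8 priorities; a per-VLAN
    # pause set can target any of the up-to-4094 VLANs individually.

    class EgressPort:
        NUM_PRIORITIES = 8          # IEEE 802.1p / PFC traffic classes
        MAX_VLANS = 4094            # usable IEEE 802.1Q VLAN IDs

        def __init__(self):
            self.paused_priorities = set()   # coarse: pauses every VLAN in a class
            self.paused_vlans = set()        # fine: pauses a single VLAN

        def may_transmit(self, vlan_id, priority):
            """A frame is held if either its whole class or its VLAN is paused."""
            return (priority not in self.paused_priorities
                    and vlan_id not in self.paused_vlans)

    port = EgressPort()
    port.paused_vlans.add(100)               # congestion reported on VLAN 100 only
    assert port.may_transmit(vlan_id=200, priority=3)      # other VLANs keep flowing
    assert not port.may_transmit(vlan_id=100, priority=3)  # only VLAN 100 is paused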
Abstract:
A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates send work queue entries (SWQEs) for writing to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry to the SWQ and subsequently reads this entry, which contains pointers to the data that is to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). The flow control between the ULP and the IPSOE is credit based. The passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
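A minimal sketch of that credit scheme, assuming one CQ credit corresponds to one free completion-queue slot; the class and method names are hypothetical and stand in for the ULP side of the interface, not the patent's API.

    # Hedged sketch of the credit-based flow control described above: the ULP
    # may post a new SWQE only while it holds a CQ credit, and credits are
    # returned as it consumes CQEs.

    class UlpSendSide:
        def __init__(self, cq_depth):
            self.cq_credits = cq_depth      # one credit per free CQ slot

        def post_send(self, swq, swqe):
            if self.cq_credits == 0:
                raise RuntimeError("no CQ credit: would overrun the CQ/SWQ")
            self.cq_credits -= 1            # reserve the completion slot up front
            swq.append(swqe)                # IPSOE will read this entry and transmit

        def reap_completion(self, cq):
            cqe = cq.pop(0)                 # process the completion written by IPSOE
            self.cq_credits += 1            # returning the credit is the only
            return cqe                      # explicit flow-control signal needed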
Abstract:
A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. As consumers initiate send operations, send work queue entries (SWQEs) are created by an Upper Layer Protocol (ULP) and written to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry to the SWQ and subsequently reads this entry, which contains pointers to the data that is to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP processes the entry and removes it from the CQ, freeing up a space in both the SWQ and the CQ. The number of entries available in the SWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite the CQ. The flow control between the ULP and the IPSOE is credit based. The passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
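The occupancy bookkeeping can be pictured as two fixed-size rings, one for the SWQ and one for the CQ, where the producer on each ring refuses to write when no free entry remains. The sketch below makes that concrete; the ring layout, depths and names are assumptions for illustration, not the IPSOE's actual structures.

    # Hedged sketch of the occupancy checks described above, modelling the SWQ
    # and CQ as fixed-size rings. The ULP checks SWQ space before posting and
    # the IPSOE checks CQ space before completing.

    class Ring:
        def __init__(self, depth):
            self.depth, self.head, self.tail = depth, 0, 0
            self.slots = [None] * depth

        def free_entries(self):
            return self.depth - (self.head - self.tail)

        def push(self, entry):              # producer side (ULP for SWQ, IPSOE for CQ)
            if self.free_entries() == 0:
                raise RuntimeError("would overwrite a valid entry")
            self.slots[self.head % self.depth] = entry
            self.head += 1

        def pop(self):                      # consumer side (IPSOE for SWQ, ULP for CQ)
            entry = self.slots[self.tail % self.depth]
            self.tail += 1
            return entry

    swq, cq = Ring(depth=256), Ring(depth=256)
    swq.push({"pointers": ["<data descriptor>"]})   # ULP posts a SWQE
    work = swq.pop()                                # IPSOE reads it and transmits
    cq.push({"status": "ok"})                       # IPSOE writes the CQE
    cq.pop()                                        # ULP processes and frees the slot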
Abstract:
Method and apparatus for providing a checksum in a network transmission. In one aspect of the invention, a checksum for a packet to be transmitted on a network is determined by retrieving packet information from a storage device, the packet information to be included in the packet to be transmitted. A blind checksum value is determined based on the retrieved packet information, and the blind checksum value is adjusted to a protocol checksum based on descriptor information describing the structure of the packet. The protocol checksum is inserted in the packet before the packet is transmitted.
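A minimal sketch of the adjustment step, assuming the familiar ones' complement Internet checksum and a descriptor that supplies the offset and length of the bytes the protocol checksum covers; pseudo-header handling and unaligned coverage boundaries are left out, and the function names are hypothetical.

    # Hedged sketch of adjusting a "blind" checksum to a protocol checksum.
    # The blind sum covers every byte pulled from storage; the descriptor
    # (hypothetical offset/length below) says which bytes the protocol checksum
    # actually covers, so the out-of-coverage contribution is subtracted in
    # ones' complement arithmetic before the final inversion.

    def ones_complement_sum(data):
        if len(data) % 2:
            data += b"\x00"                       # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return total

    def protocol_checksum(packet, coverage_offset, coverage_length):
        # Sketch only: assumes 16-bit aligned coverage boundaries, no pseudo-header.
        blind = ones_complement_sum(packet)       # computed while the data streams past
        excluded = (ones_complement_sum(packet[:coverage_offset])
                    + ones_complement_sum(packet[coverage_offset + coverage_length:]))
        adjusted = (blind - excluded) % 0xFFFF    # subtract in ones' complement
        return (~adjusted) & 0xFFFF               # value inserted into the packet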
Abstract:
Assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device, is provided. In a given processing period, sinks that are available to receive work are identified, and sources qualified to send work to the available sinks are determined, taking into account any assignment constraints. A single source is selected from the overlap of the qualified sources and the sources having work available. This selection may be made using a hierarchical source scheduler that processes subsets of supported sources simultaneously in parallel. A sink to which work from the selected source may be assigned is then selected from the available sinks qualified to receive work from the selected source.
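One scheduling period can be sketched as a set intersection followed by two picks, as below; the dictionaries, the constraint representation and the deterministic first-element picks are illustrative stand-ins for the hierarchical, parallel scheduler the abstract describes.

    # Hedged sketch of one assignment period: find available sinks, find sources
    # whose constraints permit at least one of them, intersect with sources that
    # have work, then pick a source and one of its permitted available sinks.

    def assign_one(sources, sinks, allowed):
        """sources: dict name -> queue of work; sinks: dict name -> bool (available);
        allowed: dict source -> set of sinks the assignment constraints permit."""
        available_sinks = {k for k, free in sinks.items() if free}
        qualified = {s for s in sources if allowed.get(s, set()) & available_sinks}
        has_work = {s for s in sources if sources[s]}
        candidates = sorted(qualified & has_work)      # overlap of the two sets
        if not candidates:
            return None
        source = candidates[0]                         # stand-in for the parallel pick
        sink = sorted(allowed[source] & available_sinks)[0]
        sinks[sink] = False                            # sink is now busy with this work
        return source, sources[source].pop(0), sink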
Abstract:
An assignment constraint matrix method and apparatus used in assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. The assignment constraint matrix is implemented as a plurality of qualifier matrixes adapted to operate simultaneously in parallel. Each of the plurality of qualifier matrixes is adapted to determine which sources in a subset of supported sources are qualified to provide work to a set of sinks based on assignment constraints. The determination of qualified sources may be based on sink availability information that may be provided for a set of sinks on a single chip or distributed across multiple chips.
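A single qualifier matrix over one subset of sources can be sketched as a row of sink bitmasks ANDed against a sink-availability vector, as below; the bit layout and names are assumptions, and the hardware's parallelism is only approximated by evaluating each subset independently.

    # Hedged sketch of a qualifier matrix over one subset of sources, using
    # per-source bitmasks of permitted sinks. ANDing each row with the sink
    # availability vector mirrors the parallel qualification described above.

    def qualify_subset(constraint_rows, sink_available_mask):
        """constraint_rows[i]: bitmask of sinks source i may feed (bit k = sink k).
        Returns a bitmask of sources in this subset with at least one usable sink."""
        qualified = 0
        for i, row in enumerate(constraint_rows):
            if row & sink_available_mask:          # any permitted sink currently free?
                qualified |= 1 << i
        return qualified

    # Example: 4 sources x 8 sinks in this subset; sinks 1 and 6 are free.
    rows = [0b0000_0011, 0b1100_0000, 0b0001_0000, 0b0100_0010]
    print(bin(qualify_subset(rows, 0b0100_0010)))   # sources 0, 1 and 3 qualify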
Abstract:
A computer-implemented method and system for malware prevention in a peer-to-peer (P2P) environment are disclosed. Specifically, one implementation sets forth a method that includes the operations of obtaining meta information of a piece of data prior to initiating downloading of the data, sending the meta information to a server, and initiating downloading of the data only after having received confirmation from the server that the meta information is not associated with any known malware.
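The client-side ordering can be sketched as follows; the server URL, the JSON fields and the helper names are hypothetical, and only the sequence matters: obtain the meta information, consult the server, and start the download only on a clean verdict.

    # Hedged sketch of the client-side check described above. The endpoint,
    # fields and names are illustrative assumptions, not the disclosed protocol.

    import json
    import urllib.request

    def is_known_malware(meta_info, check_url="https://reputation.example/check"):
        body = json.dumps({"info_hash": meta_info["info_hash"],
                           "name": meta_info["name"],
                           "length": meta_info["length"]}).encode()
        req = urllib.request.Request(check_url, data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("malware", True)   # fail closed on odd replies

    def download_if_clean(meta_info, start_download):
        if is_known_malware(meta_info):
            raise RuntimeError("meta information matches known malware; not downloading")
        return start_download(meta_info)                  # only now begin the P2P transfer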
Abstract:
Method and apparatus for implementing use of a network connection table. In one aspect, searching for network connections includes receiving a packet, and zeroing particular fields of connection information from the packet if a new connection is to be established. The connection information is converted to an address for a location in a direct table using a table access process. The direct table stores patterns and reference information for new and existing connections. The connection information is compared with at least one pattern stored in the direct table at the address to find reference information for the received packet.
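A sketch of that lookup path is shown below; which fields are zeroed for a new connection, the hash used to derive the direct-table address and the table size are all assumptions made for illustration.

    # Hedged sketch of the direct-table lookup described above: zero selected
    # fields for a new connection, hash the connection information to a table
    # address, then compare against the patterns stored at that address.

    import hashlib

    TABLE_SIZE = 1 << 16
    direct_table = [[] for _ in range(TABLE_SIZE)]   # each slot: list of (pattern, reference)

    def table_address(conn, new_connection=False):
        src_ip, src_port, dst_ip, dst_port = conn
        if new_connection:
            src_ip, src_port = "0.0.0.0", 0          # assumed fields to zero for new connections
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        return (int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % TABLE_SIZE,
                (src_ip, src_port, dst_ip, dst_port))

    def lookup(conn, new_connection=False):
        addr, pattern = table_address(conn, new_connection)
        for stored_pattern, reference in direct_table[addr]:
            if stored_pattern == pattern:            # compare against stored patterns
                return reference                     # reference information for this packet
        return None

    # A listener entry installed with zeroed client fields matches any new connection:
    addr, pattern = table_address(("0.0.0.0", 0, "10.0.0.5", 80))
    direct_table[addr].append((pattern, "listen-socket-80"))
    assert lookup(("192.0.2.7", 50123, "10.0.0.5", 80), new_connection=True) == "listen-socket-80"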