Abstract:
A display module and a manufacturing method thereof are provided. The display module comprises a casing, an optical element and a display panel. The casing has an upper portion. The optical element is disposed in the casing. The display panel comprises a color filter substrate and a thin film transistor substrate. The color filter substrate is located within the casing and faces toward it. The thin film transistor substrate is connected to the color filter substrate and to the upper portion of the casing.
Abstract:
A mechanism is provided for merging, in a network processor, results from a parser with results from an external coprocessor that provides processing support requested by the parser. The mechanism enqueues in a result queue both parser results that need to be merged with a coprocessor result and parser results that do not. An additional queue holds the addresses in the result queue where the incomplete parser results are stored. The result from the coprocessor is received in a simple response register. The result queue management logic reads the coprocessor result from the response register and merges it with the corresponding incomplete parser result, read from the result queue at the address dequeued from the additional queue.
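A minimal Python sketch of this merge flow follows; the class and attribute names (ResultMerger, pending_addrs, response_register) are illustrative, not taken from the abstract:

```python
from collections import deque

class ResultMerger:
    def __init__(self):
        self.result_queue = {}        # address -> parser result (complete or partial)
        self.pending_addrs = deque()  # addresses of results awaiting a coprocessor merge
        self.response_register = None # single-entry register for the coprocessor result
        self.next_addr = 0

    def enqueue_parser_result(self, result, needs_merge):
        # Both complete and incomplete parser results go into the result queue.
        addr = self.next_addr
        self.next_addr += 1
        self.result_queue[addr] = {"data": result, "complete": not needs_merge}
        if needs_merge:
            # Remember where the incomplete result lives so the coprocessor
            # response can be merged into it later.
            self.pending_addrs.append(addr)
        return addr

    def receive_coprocessor_result(self, coproc_result):
        # The coprocessor writes into a simple response register...
        self.response_register = coproc_result
        # ...and the queue-management logic merges it into the oldest
        # incomplete parser result, found via the address queue.
        addr = self.pending_addrs.popleft()
        entry = self.result_queue[addr]
        entry["data"] = {**entry["data"], **self.response_register}
        entry["complete"] = True
        self.response_register = None
```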
Abstract:
A host Ethernet adapter (HEA) and a method of managing network communications are provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQEs) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus; selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode; running a device driver on the multi-core processor system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core; selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads.
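The two servicing modes could be modeled as below; this is a software sketch of behavior the HEA implements in hardware, and all identifiers are hypothetical:

```python
from itertools import cycle

class QueuePair:
    def __init__(self, name):
        self.name = name
        self.packets = []

def service_endpoint_mode(qp, core):
    # Endpoint mode: the device driver dispatches every packet of the
    # queue pair to a single processor core.
    for pkt in qp.packets:
        core.append((qp.name, pkt))

def service_network_node_mode(qp, threads):
    # Network node mode: the adapter's completion queue scheduler spreads
    # completion events across multiple processor threads (round-robin here,
    # as an assumed dispatch policy).
    rr = cycle(range(len(threads)))
    for pkt in qp.packets:
        threads[next(rr)].append((qp.name, pkt))
```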
Abstract:
A network processor dataflow chip and a method for flexible dataflow are provided. The dataflow chip comprises a plurality of on-chip data transmission and scheduling circuit structures, which are selected responsive to indicators. Data transmission circuit structures may comprise selectable frame processing and data transmission functions. Selectable frame processing may comprise cut-and-paste, full-dispatch, and store-and-dispatch frame processing. Scheduling functions include full internal scheduling, calendar scheduling in communication with an external scheduler, and external calendar scheduling. In another aspect of the present invention, data transmission functions may comprise low-latency and normal-latency external processor interfaces for selectively providing privileged access to dataflow chip resources.
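One way to picture the three selectable frame-processing functions is a dispatcher like the following sketch; the mode names come from the abstract, while the 64-byte header split and the return shape are assumptions:

```python
def process_frame(frame: bytes, mode: str):
    if mode == "cut_and_paste":
        # Store the frame body on-chip and dispatch only the header,
        # which is "pasted" back after processing (split size assumed).
        header, body = frame[:64], frame[64:]
        return {"dispatch": header, "stored": body}
    elif mode == "full_dispatch":
        # Dispatch the entire frame to the processor.
        return {"dispatch": frame, "stored": b""}
    elif mode == "store_and_dispatch":
        # Store the full frame, then dispatch it for processing.
        return {"dispatch": frame, "stored": frame}
    raise ValueError(f"unknown mode: {mode}")
```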
Abstract:
A method and apparatus are provided for assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. In a given scheduling period, a source is selected in a manner that maintains fairness in the selection process. A corresponding sink is then selected for that source based on processing efficiency. If, due to assignment constraints, no sink is available for the selected source, the selected source is retained for selection in the next scheduling period, to maintain fairness. In this case, to preserve efficiency, the most efficient currently available sink is identified and a source able to provide work to that sink is selected instead.
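A minimal sketch of one scheduling period, assuming the caller presents sources in fair (e.g. round-robin) order and a numeric efficiency score ranks the sinks; all names are illustrative:

```python
def schedule_period(sources, sinks, allowed, retained=None):
    """One scheduling period.

    sources:  source ids with work, in fair (e.g. round-robin) order
    sinks:    list of (sink_id, efficiency) for currently available sinks
    allowed:  dict source_id -> set of sink_ids permitted by constraints
    retained: source held over from the previous period, if any
    Returns (assignment or None, source to retain or None).
    """
    if not sinks or not sources:
        return None, retained
    # Fairness: a retained source is selected before any new source.
    src = retained if retained is not None else sources[0]
    candidates = [(eff, s) for s, eff in sinks if s in allowed.get(src, set())]
    if candidates:
        _, sink = max(candidates)   # efficiency: best sink for this source
        return (src, sink), None
    # No sink fits the fairly chosen source: retain it for the next period,
    # and instead serve the most efficient available sink with a source that fits.
    best_sink = max(sinks, key=lambda t: t[1])[0]
    for s in sources:
        if best_sink in allowed.get(s, set()):
            return (s, best_sink), src
    return None, src
```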
Abstract:
A method and apparatus are provided for assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. In a given scheduling period, sinks that are available to receive work are identified, and sources qualified to send work to those sinks are determined, taking into account any assignment constraints. A single source is then selected from the overlap of the qualified sources and the sources having work available. This selection may be made using a hierarchical source scheduler that processes subsets of the supported sources in parallel. Finally, a sink to which work from the selected source may be assigned is selected from the available sinks qualified to receive work from that source.
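A sketch of that selection step, assuming the hierarchy is a two-level round-robin over fixed-size subsets; the subset size and the state handling are illustrative:

```python
def select_source(has_work, qualified, rr_pointer, subset_size=4):
    """Pick one source from the overlap of sources that have work and
    sources qualified for the available sinks.

    has_work, qualified: sets of source ids
    rr_pointer: round-robin state carried over from the previous period
    Returns (selected source or None, updated rr_pointer).
    """
    eligible = sorted(has_work & qualified)
    if not eligible:
        return None, rr_pointer
    # Level 1: partition the eligible sources into subsets that a hardware
    # scheduler could evaluate in parallel.
    groups = [eligible[i:i + subset_size]
              for i in range(0, len(eligible), subset_size)]
    # Level 2: round-robin across the subsets to keep the selection fair.
    group = groups[rr_pointer % len(groups)]
    return group[0], rr_pointer + 1
```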
Abstract:
Mechanisms for processing communications between data processing devices are provided. The illustrative embodiments provide a set of techniques for sustaining media speed by distributing transmit-side and receive-side processing over multiple processing cores. These techniques also enable multi-threaded network interface controller (NIC) hardware that efficiently hides the latency of the direct memory access (DMA) operations associated with data packet transfers over an input/output (I/O) bus. Multiple processing cores may operate concurrently, each using its own instance of a communication protocol stack and device driver to process data packets for transmission, while separate hardware send queue managers in the network adapter process these packets. Multiple hardware receive packet processors in the network adapter, together with a flow classification engine, route received data packets to the appropriate receive queues and processing cores.
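The receive-side routing can be sketched as a hash-based flow classifier; the packet field names and the use of a hash are assumptions, chosen so that all packets of one connection reach the same queue and core:

```python
import zlib
from collections import defaultdict

NUM_QUEUES = 4  # assume one receive queue (and stack instance) per core

def classify(packet):
    # Hash the flow tuple so every packet of a connection lands on the same
    # queue, preserving per-flow order while spreading flows across cores.
    key = f"{packet['src']}:{packet['sport']}-{packet['dst']}:{packet['dport']}"
    return zlib.crc32(key.encode()) % NUM_QUEUES

def route(packets):
    queues = defaultdict(list)
    for p in packets:
        queues[classify(p)].append(p)
    return queues
```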
Abstract:
A communication network used to link information handling systems together utilizes a switching network to transmit data among senders and receivers. Each individual packet of data is described and controlled by a frame control block (FCB). The bandwidth associated with storing and distributing data is optimized by chaining the data packets in different types of queues, or by operating without chaining outside a queue. When a frame is in an output queue, the third word of the FCB contains an RFCBA for egress of the frame to a line port, and an MCID for ingress from an output queue to a switch port. The RFCBA and the MCID have multicast capabilities. The format does not require a third word when a frame is in an input queue.
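FCB chaining can be pictured as a linked list of control blocks with head and tail pointers per queue; beyond the RFCBA/MCID roles stated above, everything in this sketch is an assumption:

```python
class FCB:
    def __init__(self, frame_data):
        self.frame_data = frame_data
        self.next = None   # chain pointer used while the FCB is in a queue
        self.rfcba = None  # set for egress to a line port (output queue only)
        self.mcid = None   # set for ingress toward a switch port

class FrameQueue:
    def __init__(self):
        self.head = self.tail = None

    def enqueue(self, fcb):
        if self.tail is None:
            self.head = self.tail = fcb
        else:
            self.tail.next = fcb  # chain onto the previous tail
            self.tail = fcb

    def dequeue(self):
        fcb = self.head
        if fcb is not None:
            self.head = fcb.next
            if self.head is None:
                self.tail = None
            fcb.next = None
        return fcb
```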
Abstract:
A system and method in accordance with the present invention allow an adapter to be utilized in a server environment that can accommodate both a 10G and a 1G data source using the same pins. This is accomplished through the use of a high-speed serializer/deserializer (SerDes) that can accommodate both data sources. The high-speed SerDes allows a relatively low reference clock speed on the NIC to provide the proper clocking for either data source, and allows different modes to be set to accommodate them. Finally, the system allows the adapter to use the same pins for multiple data sources.
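As a rough illustration of the two clocking modes, assuming XAUI-style 10G (four lanes at 3.125 Gbps) and SGMII-style 1G (one lane at 1.25 Gbps), neither of which the abstract names, a single low-speed reference clock can be multiplied differently per mode:

```python
REF_CLK_MHZ = 156.25  # assumed shared low-speed reference clock

# PLL multipliers chosen so the same reference clock yields each line rate.
MODES = {
    "10G": {"lanes": 4, "line_rate_gbps": 3.125, "pll_multiplier": 20},
    "1G":  {"lanes": 1, "line_rate_gbps": 1.25,  "pll_multiplier": 8},
}

def configure(mode):
    cfg = MODES[mode]
    # Sanity check: reference clock times multiplier equals the line rate.
    line_rate_gbps = REF_CLK_MHZ * cfg["pll_multiplier"] / 1000.0
    assert abs(line_rate_gbps - cfg["line_rate_gbps"]) < 1e-9
    return cfg
```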
Abstract:
Access arbiters are used to prioritize read and write access requests to individual memory banks in DRAM memory devices, particularly fast-cycle DRAMs. This optimizes the memory bandwidth available for read and write operations by avoiding consecutive accesses to the same memory bank and by minimizing dead cycles. The arbiter first divides DRAM accesses into write accesses and read accesses. The access requests are then divided into accesses per memory bank, with a threshold limit imposed on the number of accesses to each bank. Received write packets are rotated among the banks based on the write queue status, and the status of the write queue for each memory bank may also be used for system flow control. The arbiter typically also includes the ability to determine access windows based on the status of the command queues and to perform arbitration on each access window.
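A simplified software model of one access window, applied separately to the read set and the write set after the initial read/write split; the bank count, threshold, and deferral policy are illustrative:

```python
from collections import deque

NUM_BANKS = 8
BANK_THRESHOLD = 2  # assumed per-bank cap within one access window

def arbitrate_window(requests):
    """requests: list of (op, bank) with op in {'read', 'write'}.
    Returns (schedule for this window, requests deferred to the next one)."""
    per_bank = {b: 0 for b in range(NUM_BANKS)}
    pending = deque(requests)
    schedule, deferred = [], deque()
    last_bank = None
    while pending:
        op, bank = pending.popleft()
        # Defer a request that would hit the same bank twice in a row
        # (a dead cycle) or exceed its per-window threshold.
        if bank == last_bank or per_bank[bank] >= BANK_THRESHOLD:
            deferred.append((op, bank))
            continue
        schedule.append((op, bank))
        per_bank[bank] += 1
        last_bank = bank
    return schedule, list(deferred)
```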