Abstract:
A Central Processing Unit (CPU) hotpatch circuit compares the run-time instruction stream against an internal cache. The internal cache stores embedded memory addresses with associated control flags, executable instruction codes, and tag information. If a comparison against the current program counter succeeds, execution is altered as required by the control flags. If no match is found, the instruction fetched at the program counter executes normally.
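The comparison step can be pictured as a small associative lookup keyed on the program counter. The following C sketch is illustrative only; the entry layout, flag names, and cache size are assumptions introduced here, not details from the abstract.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hotpatch cache entry: address, control flags,
 * replacement instruction, and tag. Names are illustrative. */
typedef struct {
    uint32_t addr;      /* embedded memory address to match       */
    uint32_t flags;     /* control flags (e.g., SUBSTITUTE, SKIP) */
    uint32_t patched;   /* executable instruction code            */
    uint32_t tag;       /* tag information (e.g., valid bit)      */
} hotpatch_entry_t;

#define HP_VALID       0x1u
#define HP_SUBSTITUTE  0x2u   /* execute the cached instruction   */
#define HP_SKIP        0x4u   /* skip the original instruction    */

#define HP_ENTRIES 8
static hotpatch_entry_t hp_cache[HP_ENTRIES];

/* Compare the current program counter against the cache; return the
 * instruction to execute. Falls through to the fetched instruction
 * when no entry matches. */
uint32_t hotpatch_lookup(uint32_t pc, uint32_t fetched, bool *skip)
{
    *skip = false;
    for (int i = 0; i < HP_ENTRIES; i++) {
        const hotpatch_entry_t *e = &hp_cache[i];
        if ((e->tag & HP_VALID) && e->addr == pc) {
            if (e->flags & HP_SKIP)
                *skip = true;
            if (e->flags & HP_SUBSTITUTE)
                return e->patched;   /* alter execution per flags */
        }
    }
    return fetched;   /* no match: execute the original instruction */
}
```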
Abstract:
A method and system for routing network-based data arranged in frames is disclosed. A host processor analyzes transferred bursts of data and initiates an address lookup algorithm for dispatching the frame to a desired destination. A shared system memory, positioned between the host processor and a network device, e.g., an HDLC controller, working in conjunction with the host processor, receives data, including any preselected address fields. The network device includes a plurality of ports, each including a FIFO receive memory for receiving at least a first portion of a frame. The first portion of the frame includes data having the preselected address fields. A direct memory access (DMA) unit transfers a burst of data from the FIFO receive memory to the shared system memory. A communications processor selects the amount of data to be transferred from the FIFO receive memory based on the preselected address fields to be analyzed by the host processor.
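The burst-transfer path can be sketched in C as below. This is a minimal illustration, assuming a per-port byte FIFO and a fixed preselected address-field length; the constants, register-free data structures, and function names are hypothetical.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative constants and types; the controller's actual layout
 * and field widths are not specified in the abstract. */
#define NUM_PORTS      4
#define FIFO_DEPTH     256
#define ADDR_FIELD_LEN 6      /* assumed preselected address-field size */

typedef struct {
    uint8_t  fifo[FIFO_DEPTH];  /* per-port FIFO receive memory    */
    uint32_t fill;              /* bytes currently in the FIFO     */
} port_t;

static port_t  ports[NUM_PORTS];
static uint8_t shared_mem[4096];   /* memory shared with the host  */

/* Communications processor picks how much to transfer so the host
 * sees at least the preselected address fields of the frame head. */
static uint32_t select_burst_len(const port_t *p)
{
    uint32_t want = ADDR_FIELD_LEN;          /* minimum useful burst */
    return (p->fill < want) ? p->fill : want;
}

/* DMA stand-in: move the first portion of the frame from the FIFO
 * into shared system memory for the host's lookup algorithm. */
void dma_burst_to_shared(int port, uint32_t dst_off)
{
    port_t  *p   = &ports[port];
    uint32_t len = select_burst_len(p);

    memcpy(&shared_mem[dst_off], p->fifo, len);
    /* The host processor now runs its address lookup algorithm on
     * shared_mem[dst_off .. dst_off + len) to dispatch the frame. */
}
```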
Abstract:
A method, apparatus, and network device for controlling the flow of network data arranged in frames and minimizing congestion is disclosed. A status error indicator is generated within a receive FIFO memory, indicating a frame overflow within that memory. In response to the status error indicator, an early congestion interrupt is generated to a host processor, signaling that a frame overflow has occurred within the receive FIFO memory. The incoming frame is discarded, and servicing of received frames is enhanced either by increasing the number of words in the direct memory access (DMA) unit burst size or by modifying the time-slice of other active processes.
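A minimal sketch of this recovery path follows, assuming a flag-based status indicator and two tunable parameters; the variable names, thresholds, and doubling policy are illustrative assumptions, not taken from the disclosure.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative state; names are assumptions, not the patent's API. */
static volatile bool fifo_overflow;     /* status error indicator    */
static uint32_t dma_burst_words = 4;    /* current DMA burst size    */
static uint32_t other_timeslice = 10;   /* time-slice of other tasks */

#define MAX_BURST_WORDS 32u
#define MIN_TIMESLICE    2u

static void discard_incoming_frame(void)
{
    /* Stub: release the overflowed frame's FIFO space. */
}

/* Early congestion interrupt handler: runs when the receive FIFO
 * reports a frame overflow. Discards the frame, then enhances the
 * servicing of received frames by one of the two remedies the
 * abstract names. */
void early_congestion_isr(void)
{
    if (!fifo_overflow)
        return;

    discard_incoming_frame();

    if (dma_burst_words < MAX_BURST_WORDS)
        dma_burst_words *= 2;    /* larger bursts drain the FIFO faster */
    else if (other_timeslice > MIN_TIMESLICE)
        other_timeslice--;       /* give the receive path more CPU time */

    fifo_overflow = false;       /* acknowledge the status indicator    */
}
```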
Abstract:
A system and method for reducing transfer latencies in fencepost buffering provides a cache between a host and a network controller having shared memory. The cache is divided into a dual cache having a top cache and a bottom cache. First and second descriptor address locations are fetched from shared memory; the two are distinguished in that the first descriptor address location is the location of an active descriptor and the second is the location of a reserve/lookahead descriptor. The active descriptor is copied to the top cache, and a command is issued to the DMA for transfer of the active descriptor. The second descriptor address location is then copied into the first descriptor address location, and the next descriptor address location is fetched from external memory and placed in the second descriptor address location.
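One fencepost step can be pictured as a promote-and-prefetch loop: service the active descriptor while the reserve address already names the next one. The descriptor layout and helper functions below are assumptions introduced for illustration, not the disclosed implementation.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative descriptor and cache layout; the abstract gives no
 * concrete types, so these names are assumptions. */
typedef struct {
    uint32_t buf_addr;   /* data buffer address        */
    uint32_t status;     /* ownership / status bits    */
    uint32_t next;       /* address of next descriptor */
} descriptor_t;

typedef struct {
    descriptor_t top;     /* holds the active descriptor    */
    descriptor_t bottom;  /* staging for look-ahead fetches */
} dual_cache_t;

static dual_cache_t cache;
static uint32_t active_addr;   /* first location: active descriptor  */
static uint32_t reserve_addr;  /* second location: reserve/lookahead */

static descriptor_t shared_mem_read(uint32_t addr)
{
    descriptor_t d;
    memset(&d, 0, sizeof d);   /* stub for a shared-memory read */
    (void)addr;
    return d;
}

static void dma_start(const descriptor_t *d)
{
    (void)d;                   /* stub: command the DMA transfer */
}

/* One fencepost step: the shared-memory fetch of the next descriptor
 * overlaps the DMA transfer of the active one, hiding the latency. */
void service_next_descriptor(void)
{
    cache.top = shared_mem_read(active_addr); /* copy active to top cache */
    dma_start(&cache.top);                    /* issue the DMA command    */

    active_addr  = reserve_addr;              /* promote the look-ahead   */
    cache.bottom = shared_mem_read(active_addr);
    reserve_addr = cache.bottom.next;         /* fetch next address       */
}
```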