Abstract:
In a slotted ring network, a node may transmit a non-renewable slot reservation with any unreserved slot. The reservation restricts other nodes from transmitting a new packet in the slot. When the slot returns around the ring to the reserving node, the slot will be available. Preferably, reservation is made responsive to a starvation condition in the reserving node, which may be detected in any of various ways. In an optional enhancement, a reservation identifies the reserving node, and another node on the ring is free to transmit a new packet in the reserved slot if the new packet will reach its destination at or before the reserving node, and thus will not interfere with the reservation.
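The reservation mechanism can be pictured roughly as in the minimal sketch below. The SlotHeader fields, the fixed starvation threshold, and the per-slot handling in onSlot are illustrative assumptions, not the patented encoding.

```cpp
#include <cstdint>

struct SlotHeader {
    bool    occupied   = false;  // slot currently carries a packet
    bool    reserved   = false;  // some node has claimed the next free pass
    uint8_t reserverId = 0;      // ring position of the reserving node
};

struct Node {
    uint8_t  id = 0;
    unsigned waiting = 0;                     // passes spent unable to transmit
    static constexpr unsigned kStarved = 8;   // assumed starvation threshold

    // Called once for each slot as it passes this node; returns true if a
    // packet was placed in the slot.
    bool onSlot(SlotHeader& slot, bool hasPacket) {
        if (slot.reserved && slot.reserverId == id) {
            // Our reservation has travelled the full ring: the slot is ours,
            // and the reservation is consumed (non-renewable).
            slot.reserved = false;
            if (hasPacket) { slot.occupied = true; waiting = 0; return true; }
            return false;
        }
        if (!hasPacket) return false;
        if (!slot.occupied && !slot.reserved) {
            slot.occupied = true;             // ordinary transmission in a free slot
            waiting = 0;
            return true;
        }
        // Slot unusable for us: count toward starvation, and once starved,
        // stamp a reservation on the first unreserved slot that passes.
        if (++waiting >= kStarved && !slot.reserved) {
            slot.reserved   = true;
            slot.reserverId = id;
        }
        return false;
    }
};
```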
Abstract:
A design structure is provided for a slotted ring network, in which a node may transmit a non-renewable slot reservation with any unreserved slot. The reservation restricts other nodes from transmitting a new packet in the slot. When the slot returns around the ring to the reserving node, the slot will be available. Preferably, reservation is made responsive to a starvation condition in the reserving node, which may be detected in any of various ways. In an optional enhancement, a reservation identifies the reserving node, and another node on the ring is free to transmit a new packet in the reserved slot if the new packet will reach its destination at or before the reserving node, and thus will not interfere with the reservation.
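The optional enhancement reduces to a ring-distance comparison. The sketch below assumes numbered ring positions and hypothetical ringDistance and mayUseReservedSlot helpers; it only illustrates the "destination at or before the reserving node" test.

```cpp
constexpr unsigned kRingSize = 16;   // assumed number of nodes on the ring

// Hops travelling downstream around the ring from 'from' to 'to'.
constexpr unsigned ringDistance(unsigned from, unsigned to) {
    return (to + kRingSize - from) % kRingSize;
}

// May 'sender' place a new packet for 'dest' into an empty slot reserved by
// 'reserver'? Allowed only if the packet is removed at or before the reserving
// node, so the slot is free again when the reservation comes due.
constexpr bool mayUseReservedSlot(unsigned sender, unsigned dest, unsigned reserver) {
    return ringDistance(sender, dest) <= ringDistance(sender, reserver);
}

static_assert(mayUseReservedSlot(2, 5, 5));    // destination is the reserver: allowed
static_assert(!mayUseReservedSlot(2, 9, 5));   // packet would still occupy the slot
```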
Abstract:
A selective cache includes a set configured to receive data evicted from a number of primary sets of a primary cache. The selective cache also includes a counter associated with the set. The counter is configured to indicate a frequency of access to data within the set. A decision whether to replace data in the set with data from one of the primary sets is based on a value of the counter.
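As a rough illustration of the counter-gated replacement decision, the sketch below assumes a hypothetical SelectiveSet with an accessCounter, a kHotThreshold cut-off, and periodic decay; the thresholds and layout are not specified by the abstract.

```cpp
#include <array>
#include <cstdint>

// One set of the selective cache with its associated access counter.
struct SelectiveSet {
    std::array<uint64_t, 4> tags{};    // lines currently held for the primary sets
    unsigned accessCounter = 0;        // how often this set's data has been hit

    static constexpr unsigned kHotThreshold = 4;   // assumed cut-off

    void recordHit() { ++accessCounter; }          // bump on every access to the set
    void decay()     { accessCounter >>= 1; }      // assumed periodic aging

    // Called when a line evicted from one of the primary sets arrives: a set
    // whose data is accessed frequently keeps it; a cold set accepts the victim.
    bool acceptEviction(uint64_t victimTag, unsigned way) {
        if (accessCounter >= kHotThreshold)
            return false;
        tags[way % tags.size()] = victimTag;
        return true;
    }
};
```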
Abstract:
A pattern matching accelerator (PMA) assists software threads in finding the presence and location of strings in an input data stream that match a given pattern. The patterns to be searched are defined by the user as a set of regular expressions and are grouped into pattern context sets. The sets of regular expressions defining the pattern context sets are compiled into a data structure of rules used by the PMA hardware. The rules are compiled before search run time and stored in main memory, in rule cache memory within the PMA, or a combination thereof. For each input character, the PMA executes the search and returns the search results.
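A software model of the per-character search loop might look like the sketch below. The flat DFA transition table standing in for the compiled rules structure, and the PatternContext and scan names, are assumptions; the PMA's real rule format is not described here.

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct MatchResult { size_t endOffset; int patternId; };

// A compiled pattern context: a flat transition table built from the regular
// expressions before run time, standing in for the PMA's rules structure.
struct PatternContext {
    std::vector<int32_t> rules;          // rules[state * 256 + byte] -> next state
    std::vector<int>     acceptPattern;  // pattern id per accepting state, -1 otherwise
};

// Consume the input one character at a time and report every match position,
// mirroring the "for each input character, execute the search" behaviour.
std::vector<MatchResult> scan(const PatternContext& ctx, const std::string& input) {
    std::vector<MatchResult> results;
    int32_t state = 0;
    for (size_t i = 0; i < input.size(); ++i) {
        state = ctx.rules[state * 256 + static_cast<uint8_t>(input[i])];
        if (ctx.acceptPattern[state] >= 0)
            results.push_back({i, ctx.acceptPattern[state]});
    }
    return results;
}
```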
Abstract:
Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
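One plausible shape of the predict-then-verify flow is sketched below, assuming just two page sizes, a small predictor table indexed by effective-address bits, and a hypothetical tlbLookup probe; none of these details come from the abstract.

```cpp
#include <cstdint>
#include <optional>

enum class PageSize { Small4K, Large64K };   // assumed pair of supported sizes

// Tiny predictor indexed by effective-address bits (an assumption; other
// embodiments might index by the memory access instruction's address instead).
struct PageSizePredictor {
    PageSize table[64] = {};
    PageSize predict(uint64_t ea) const        { return table[(ea >> 16) & 63]; }
    void     train(uint64_t ea, PageSize size) { table[(ea >> 16) & 63] = size; }
};

// Stand-in for a TLB probe under an assumed page size; returns the translated
// address on a hit for that size, or nothing on a miss.
std::optional<uint64_t> tlbLookup(uint64_t ea, PageSize size) {
    (void)ea; (void)size;
    return std::nullopt;                       // stub: always miss in this sketch
}

std::optional<uint64_t> translate(PageSizePredictor& pred, uint64_t ea) {
    PageSize guess = pred.predict(ea);         // cheap guess, kept off the critical path
    if (auto pa = tlbLookup(ea, guess))
        return pa;                             // prediction was correct
    PageSize other = (guess == PageSize::Small4K) ? PageSize::Large64K
                                                  : PageSize::Small4K;
    if (auto pa = tlbLookup(ea, other)) {      // retry under the other size
        pred.train(ea, other);                 // correct the predictor for next time
        return pa;
    }
    return std::nullopt;                       // genuine miss: walk the page table
}
```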
Abstract:
A memory management mechanism requires data structures to be explicitly deallocated in the programming code, but deallocation does not immediately make the memory available for reuse. Before a deallocated memory region can be reused, memory is scanned for pointers to the deallocated region, and any such pointer is set to null. The deallocated memory is then available for reuse. Preferably, deallocated memory regions are accumulated, and an asynchronous memory cleaning process periodically scans memory to nullify the pointers. To prevent previously scanned memory from becoming contaminated with a dangling pointer before the scan finishes, any write to a pointer is checked to verify that the target address has not been deallocated.
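A simplified, single-threaded model of the scheme could look like the following sketch; the Heap structure, the explicit pointer-slot list, and the writePointer barrier are illustrative assumptions rather than the described implementation.

```cpp
#include <unordered_set>
#include <vector>

// Deallocation only queues a region; the cleaning pass nulls dangling pointers
// before the region becomes reusable.
struct Heap {
    std::unordered_set<void*> pendingFree;    // deallocated, not yet reusable
    std::vector<void**>       pointerSlots;   // every pointer location we track

    void deallocate(void* p) { pendingFree.insert(p); }   // explicit, deferred

    // Write barrier: refuse to store a pointer to an already-deallocated region,
    // so memory the cleaner has scanned cannot regain a dangling pointer.
    void writePointer(void** slot, void* value) {
        *slot = pendingFree.count(value) ? nullptr : value;
    }

    // Asynchronous cleaning pass: null every tracked pointer into a pending
    // region, then hand the regions back for reuse.
    std::vector<void*> clean() {
        for (void** slot : pointerSlots)
            if (*slot && pendingFree.count(*slot))
                *slot = nullptr;
        std::vector<void*> reusable(pendingFree.begin(), pendingFree.end());
        pendingFree.clear();
        return reusable;
    }
};
```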