Abstract:
A system and method for controlling a shared memory bus in a computer of a multi-processor system prevents collisions on the shared bus and ensures that the bus is kept full at system start-up. Steady-state operation is maintained without a queuing mechanism in the system's memory controller, even though the memory modules of the shared memory have different read access times. The system and method are implemented in a system that includes a central unit and multiple unidirectional buses disposed between a shared memory and a plurality of processors, with the central unit controlling access to, and use of, the shared buses.
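A minimal sketch of the collision-avoidance idea follows, under assumed parameters: the central unit tracks when the shared return bus next falls idle and delays each read launch just enough that replies from modules with different access latencies never overlap. The latencies, burst length, and cycle accounting are illustrative, not the patent's values.

```c
#include <stdio.h>

#define NUM_MODULES  4
#define BURST_CYCLES 2            /* cycles a reply occupies the return bus */

/* Per-module read access times differ, as the abstract requires. */
static const unsigned read_latency[NUM_MODULES] = { 6, 6, 9, 12 };

static unsigned long bus_free_at; /* first cycle the return bus is idle */

/* Launch a read to 'module' no earlier than 'now', delaying it just enough
 * that its reply lands in a free slot on the shared return bus. */
static unsigned long issue_read(unsigned module, unsigned long now)
{
    unsigned long arrival = now + read_latency[module];
    if (arrival < bus_free_at)    /* reply would collide: delay the launch */
        now += bus_free_at - arrival;
    bus_free_at = now + read_latency[module] + BURST_CYCLES;
    return now;
}

int main(void)
{
    unsigned long t = 0;
    for (unsigned m = 0; m < NUM_MODULES; m++) {
        unsigned long launch = issue_read(m, t);
        printf("module %u: launch @%lu, data on bus @%lu..%lu\n",
               m, launch, launch + read_latency[m],
               launch + read_latency[m] + BURST_CYCLES - 1);
        t = launch + 1;           /* one command per cycle on the command bus */
    }
    return 0;
}
```

Because launch times are derived from the return-bus bookkeeping rather than from per-request queues, no queuing structure is needed in the controller, which matches the steady-state property the abstract claims.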
Abstract:
A high-speed bus system for use in a shared memory system allows high-speed transmission of commands and data between a number of processors and the memory array of a multi-processor, shared memory system. The high-speed bus system includes a central unit and a series of unidirectional buses connecting the plurality of processors to the shared memory. The central unit includes arbitration logic and a series of multiplexers, scheduling logic that works with the arbitration logic and multiplexers to determine which CPUs are granted access to the shared buses, and port logic for combining the CPU transmissions and determining whether such transmissions are valid.
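The sketch below illustrates the three roles the abstract names, arbitration, multiplexer selection, and port-level validity checking, with assumed encodings: the round-robin policy and the even-parity validity test (a GCC/Clang builtin) are stand-ins, not the patent's actual logic.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CPUS 8

/* Round-robin arbitration: grant the first requester after the last winner. */
static int arbitrate(uint8_t request_mask, int last_grant)
{
    for (int i = 1; i <= NUM_CPUS; i++) {
        int cpu = (last_grant + i) % NUM_CPUS;
        if (request_mask & (1u << cpu))
            return cpu;
    }
    return -1;                    /* no CPU is requesting the bus */
}

/* Multiplexer: route the granted CPU's lane onto the shared bus. */
static uint64_t mux_select(const uint64_t lanes[NUM_CPUS], int grant)
{
    return lanes[grant];
}

/* Port logic: accept a transmission only if it passes a validity check;
 * even parity is purely a placeholder for the real check. */
static bool port_valid(uint64_t word)
{
    return __builtin_parityll(word) == 0;
}

int main(void)
{
    uint64_t lanes[NUM_CPUS] = { 0 };
    lanes[3] = 0x00F0;            /* CPU 3 has a word queued (even parity) */
    int grant = arbitrate(1u << 3, /*last_grant=*/0);
    if (grant >= 0 && port_valid(mux_select(lanes, grant)))
        printf("CPU %d granted; transmission forwarded\n", grant);
    return 0;
}
```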
Abstract:
An apparatus comprises a central logic unit and a plurality of ports, each port adapted to couple to a device. At least one port connects to a device by way of first and second unidirectional, point-to-point communication links. The first unidirectional, point-to-point communication link transfers data from the device to the central logic unit, and the second unidirectional, point-to-point communication link transfers data from the central logic unit to the device.
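A minimal data-structure sketch of this topology, with illustrative types and names: each port owns two unidirectional, point-to-point links, one carrying data from the attached device into the central logic unit and one carrying data back out.

```c
#include <stdint.h>

typedef struct {
    uint64_t data;
    int      valid;               /* link carries a word this cycle */
} link_t;

typedef struct {
    link_t rx;                    /* device -> central logic unit */
    link_t tx;                    /* central logic unit -> device */
    int    device_attached;
} port_t;

typedef struct {
    port_t ports[16];             /* a plurality of ports; 16 is arbitrary */
} central_logic_unit_t;
```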
Abstract:
A computer system is adapted to transfer write data from a central processing unit to one of a plurality of memory modules in a memory array by transferring a block of write data to a memory control logic device. The memory control logic device transfers the block of data in a plurality of data bursts separated by a preselected number of bus cycles. During those intervening bus cycles, the memory control logic device sends pending read commands to an available memory module, overlapping read and write operations on the memory bus and thereby lowering memory read latency.
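The following sketch shows the interleaving pattern under assumed parameters: a write block is driven as several bursts with a fixed gap between them, and each gap cycle is used to launch a pending read to an idle module. The burst count, gap width, and queue bookkeeping are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define WRITE_BURSTS 4            /* bursts per write block */
#define GAP_CYCLES   2            /* preselected idle cycles between bursts */
#define NUM_MODULES  4
#define READQ_DEPTH  16

static bool module_busy[NUM_MODULES];
static int  pending_reads[READQ_DEPTH];   /* module ids of queued reads */
static int  read_head, read_tail;

/* During a gap cycle, try to start one pending read on an idle module. */
static void issue_read_if_possible(void)
{
    if (read_head == read_tail)
        return;                   /* no reads pending */
    int m = pending_reads[read_head];
    if (!module_busy[m]) {
        module_busy[m] = true;    /* read and write now overlap on the bus */
        read_head = (read_head + 1) % READQ_DEPTH;
        printf("read launched to module %d during write gap\n", m);
    }
}

static void write_block(void)
{
    for (int burst = 0; burst < WRITE_BURSTS; burst++) {
        printf("write burst %d on memory bus\n", burst);
        for (int gap = 0; gap < GAP_CYCLES; gap++)
            issue_read_if_possible();
    }
}

int main(void)
{
    pending_reads[read_tail] = 2; read_tail = (read_tail + 1) % READQ_DEPTH;
    pending_reads[read_tail] = 0; read_tail = (read_tail + 1) % READQ_DEPTH;
    write_block();
    return 0;
}
```

The latency benefit comes entirely from the gap cycles: reads that would otherwise wait behind the whole write block begin during the write, so their data returns sooner.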
Abstract:
The bandwidths of a first bus and a second bus, unequal due to differences in protocol overheads and cycle times between the buses, are equalized without sacrificing any bandwidth on the lower-bandwidth bus and without introducing any buffering in a control logic device. The control logic device equalizes the bandwidths by instructing a device coupled to the second bus to insert a partial dead bus cycle in a read transmission, thereby dynamically adjusting read timing on the second bus when the second bus is heavily loaded.
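A sketch of the rate-matching arithmetic, with assumed figures: when the faster second bus is heavily loaded, each read beat is stretched by a fraction of a cycle so its effective rate matches the slower first bus, with no buffering. The cycle times and load threshold below are illustrative, not from the patent.

```c
#include <stdio.h>

#define BUS1_CYCLE_NS  15.0       /* slower, lower-bandwidth bus */
#define BUS2_CYCLE_NS  10.0       /* faster bus feeding it */
#define HEAVY_LOAD_PCT 75         /* load at which stretching kicks in */

/* Fraction of a second-bus cycle to leave dead per read beat so that one
 * beat on bus 2 takes exactly as long as one beat on bus 1. */
static double partial_dead_cycle(int bus2_load_pct)
{
    if (bus2_load_pct < HEAVY_LOAD_PCT)
        return 0.0;               /* lightly loaded: run at full rate */
    return (BUS1_CYCLE_NS - BUS2_CYCLE_NS) / BUS2_CYCLE_NS;  /* 0.5 here */
}

int main(void)
{
    printf("dead fraction at 80%% load: %.2f cycles\n", partial_dead_cycle(80));
    printf("dead fraction at 40%% load: %.2f cycles\n", partial_dead_cycle(40));
    return 0;
}
```

With these numbers, half a dead cycle per beat makes each 10 ns beat occupy 15 ns, matching the first bus exactly while the lightly loaded case still runs at full speed.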
Abstract:
Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
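The sketch below follows the HFI behavior the abstract outlines: sample per-resource quality-of-service levels, detect a throttling condition against a threshold, broadcast a throttling message to peer nodes, and apply a throttling action when such a message arrives. The message layout, thresholds, and the fabric_broadcast/rate_limit primitives are all assumed for illustration.

```c
#include <stdint.h>

enum resource { RES_CPU, RES_MEMORY, RES_LINK, RES_COUNT };

struct throttle_msg {
    uint32_t node_id;
    uint8_t  resource;            /* which resource is saturated */
    uint8_t  severity;            /* how hard peers should back off */
};

static unsigned qos_level[RES_COUNT];           /* 0..100, set by monitors */
static unsigned qos_limit[RES_COUNT] = { 90, 85, 80 };

void fabric_broadcast(const struct throttle_msg *m);  /* assumed primitive */
void rate_limit(enum resource r, uint8_t severity);   /* assumed primitive */

/* Called periodically by the HFI's monitoring loop on node 'self'. */
void check_throttle(uint32_t self)
{
    for (int r = 0; r < RES_COUNT; r++) {
        if (qos_level[r] > qos_limit[r]) {
            struct throttle_msg m = {
                .node_id  = self,
                .resource = (uint8_t)r,
                .severity = (uint8_t)(qos_level[r] - qos_limit[r]),
            };
            fabric_broadcast(&m); /* tell peer nodes we are saturating */
        }
    }
}

/* Called when a throttling message arrives from another node. */
void on_throttle_msg(const struct throttle_msg *m)
{
    rate_limit((enum resource)m->resource, m->severity);
}
```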
Abstract:
An apparatus, method, machine-readable medium, and system are disclosed. In one embodiment, the apparatus is a micro-page table engine that includes logic capable of receiving a memory page request for a page in the global memory address space. The apparatus also includes a translation lookaside buffer (TLB) capable of storing one or more memory page address translations, and a page miss handler capable of performing a micro physical address lookup in a page miss handler tag table when the TLB does not hold the address translation for the page referenced by the request. The apparatus further includes memory management logic for managing the page miss handler tag table entries. The micro-page table engine allows the TLB to act as the agent that determines whether data in a two-level memory hierarchy resides in a hot region or a cold region of memory. When data is in the cold region, the micro-page table engine fetches it into hot memory, and a hot memory block is then pushed out to the cold memory area.
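A rough sketch of that lookup path, with all structures simplified: the TLB is probed first; on a miss the page miss handler searches a tag table; if the page resolves to the cold tier the engine migrates it into hot memory, pushing one hot block out to cold. The direct-mapped organization, sizes, and the migrate_cold_to_hot helper are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64
#define TAG_ENTRIES 1024

struct tlb_entry { uint64_t vpage; uint64_t mpa; bool valid; };
struct tag_entry { uint64_t vpage; uint64_t mpa; bool in_hot; bool valid; };

static struct tlb_entry tlb[TLB_ENTRIES];
static struct tag_entry tag_table[TAG_ENTRIES];

/* Assumed helper: copies the block into hot memory, evicts one hot block
 * to cold, and returns the new hot-memory micro physical address. */
uint64_t migrate_cold_to_hot(uint64_t cold_mpa);

uint64_t translate(uint64_t vpage)
{
    struct tlb_entry *t = &tlb[vpage % TLB_ENTRIES];
    if (t->valid && t->vpage == vpage)
        return t->mpa;                   /* TLB hit */

    /* Page miss handler: micro physical address lookup in the tag table. */
    struct tag_entry *e = &tag_table[vpage % TAG_ENTRIES];
    if (!e->valid || e->vpage != vpage)
        return (uint64_t)-1;             /* page not resident at all */

    if (!e->in_hot) {                    /* cold region: pull the data in */
        e->mpa = migrate_cold_to_hot(e->mpa);
        e->in_hot = true;
    }
    *t = (struct tlb_entry){ .vpage = vpage, .mpa = e->mpa, .valid = true };
    return t->mpa;
}
```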
Abstract:
A cache coherency scheme in a multiprocessor computer system allows data to be shared between caches at a fast rate. A new cache coherency state is introduced that allows a processor pair to share data more effectively and eliminate bus transfers, thereby improving system throughput. When the first processor of the pair issues a read or read-for-ownership request, the second processor returns the portion of a preselected data block that it holds. Ownership of that portion is shared by the processor pair, and both processors set an indicator to denote that the preselected data block is an incomplete data block.
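A sketch of that pairwise transfer with an invented state encoding: on a partner's request the responder supplies only the chunks of the block it holds, both caches mark the block incomplete, and ownership of the portion becomes shared. The state names, chunk mask, and line layout are illustrative, not the patent's.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_BYTES 64

enum coh_state { INVALID, SHARED, EXCLUSIVE, MODIFIED, SHARED_INCOMPLETE };

struct cache_line {
    uint8_t        data[BLOCK_BYTES];
    uint32_t       valid_mask;    /* which 8-byte chunks are present */
    enum coh_state state;
};

/* Partner-to-partner transfer: the responder hands over the chunks it has
 * instead of forcing a full-block fetch over the system bus. */
void respond_to_partner(struct cache_line *owner,
                        struct cache_line *requester)
{
    memcpy(requester->data, owner->data, BLOCK_BYTES);
    requester->valid_mask = owner->valid_mask;    /* only a portion */

    /* Both caches note the block is incomplete and share ownership. */
    owner->state     = SHARED_INCOMPLETE;
    requester->state = SHARED_INCOMPLETE;
}
```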