Abstract:
Disclosed are systems and methods providing a unified system fabric in a computer system. The systems and methods of embodiments include a first interface disposed between a first component of the computer system and a second component of the computer system, the first interface implementing an interface protocol, and a second interface disposed between the first component of the computer system and a third component of the computer system, the second interface implementing the interface protocol, wherein the first interface and the second interface comprise separate signal paths at the first component.
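As an illustrative sketch only (the C structure and all names below are assumptions, not drawn from the patent text), the arrangement can be modeled as one component exposing two independent links that speak the same interface protocol over separate signal paths:

    #include <stdint.h>
    #include <stdio.h>

    /* One point-to-point link of the shared interface protocol.
     * Separate instances model separate signal paths.            */
    typedef struct {
        const char *peer;      /* component on the far end of the link      */
        uint32_t    protocol;  /* protocol ID; the same for both links      */
    } link_t;

    /* The first component holds two distinct links: one to the second
     * component and one to the third component.                        */
    typedef struct {
        link_t to_second;
        link_t to_third;
    } first_component_t;

    int main(void) {
        first_component_t c = {
            .to_second = { .peer = "second_component", .protocol = 1 },
            .to_third  = { .peer = "third_component",  .protocol = 1 },
        };
        /* Both links implement the same protocol, each over its own path. */
        printf("link A -> %s, link B -> %s\n", c.to_second.peer, c.to_third.peer);
        return 0;
    }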
Abstract:
An embodiment of a multiprocessor computer system comprises main memory, a remote processor capable of accessing the main memory, a remote cache device operative to store accesses by said remote processor to said main memory, and a filter tag cache device associated with the main memory. The filter cache device is operative to store information relating to remote ownership of data in the main memory including ownership by the remote processor. The filter cache device is operative to selectively invalidate filter tag cache entries when space is required in the filter tag cache device for new cache entries. The remote cache device is responsive to events indicating that a cache entry has low value to the remote processor to send a hint to the filter tag cache device. The filter tag cache device is responsive to a hint in selecting a filter tag cache entry to invalidate.
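A minimal sketch of the hint mechanism, assuming a simple array-backed filter tag cache; the data layout and function names are invented for illustration and are not taken from the patent:

    #include <stdbool.h>
    #include <stdint.h>

    #define FILTER_ENTRIES 8

    /* One filter tag cache entry: which remote processor owns which line. */
    typedef struct {
        bool     valid;
        bool     hinted;     /* remote cache signalled this line has low value */
        uint64_t line_addr;
        int      owner_id;
    } filter_entry_t;

    static filter_entry_t filter[FILTER_ENTRIES];

    /* The remote cache sends a hint when it decides an entry is of low value. */
    void filter_receive_hint(uint64_t line_addr) {
        for (int i = 0; i < FILTER_ENTRIES; i++)
            if (filter[i].valid && filter[i].line_addr == line_addr)
                filter[i].hinted = true;
    }

    /* When space is needed for a new entry, prefer invalidating a hinted one. */
    int filter_pick_victim(void) {
        for (int i = 0; i < FILTER_ENTRIES; i++)
            if (!filter[i].valid)
                return i;                 /* free slot: nothing to invalidate  */
        for (int i = 0; i < FILTER_ENTRIES; i++)
            if (filter[i].hinted)
                return i;                 /* hinted entry: cheap to drop       */
        return 0;                         /* fallback: arbitrary victim        */
    }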
Abstract:
A computer system, related components such as a processor agent, and related method are disclosed. In at least one embodiment, the computer system includes a first core, at least one memory device including a first memory segment, and a first memory controller coupled to the first memory segment. Further, the computer system includes a fabric and at least one processor agent coupled at least indirectly to the first core and the first memory segment, and also coupled to the fabric. A first memory request of the first core in relation to a first memory location within the first memory segment proceeds to the first memory controller by way of the at least one processor agent and the fabric.
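The request path can be sketched as a chain of hand-offs; the stage names and data layout below are assumptions made for illustration, not terms defined by the patent:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical memory request issued by a core. */
    typedef struct { uint64_t addr; int core_id; } mem_request_t;

    static void memory_controller_handle(mem_request_t r) {
        printf("controller: access 0x%llx for core %d\n",
               (unsigned long long)r.addr, r.core_id);
    }

    static void fabric_route(mem_request_t r) {
        /* The fabric delivers the request to the controller of the segment. */
        memory_controller_handle(r);
    }

    static void processor_agent_forward(mem_request_t r) {
        /* The processor agent accepts the core's request and puts it on the fabric. */
        fabric_route(r);
    }

    int main(void) {
        mem_request_t req = { .addr = 0x1000, .core_id = 0 };
        processor_agent_forward(req);   /* core -> agent -> fabric -> controller */
        return 0;
    }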
Abstract:
Methods and apparatus in a partitionable computing system. A processor communicates with a packet former. The packet former can be configured to construct a data packet that can include security status information related to a partition or processor.
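One way to picture the packet former is as a routine that wraps payload data with partition and processor security fields; the layout and field names below are illustrative assumptions only:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical packet layout: payload plus security status fields
     * identifying the originating partition and processor.             */
    typedef struct {
        uint16_t partition_id;
        uint16_t processor_id;
        uint8_t  security_flags;   /* e.g. bit 0 = partition trusted */
        uint8_t  payload[32];
    } fabric_packet_t;

    /* Packet former: constructs a data packet carrying the security status. */
    fabric_packet_t form_packet(uint16_t partition, uint16_t cpu,
                                uint8_t flags, const void *data, size_t len) {
        fabric_packet_t p = { .partition_id   = partition,
                              .processor_id   = cpu,
                              .security_flags = flags };
        if (len > sizeof p.payload)
            len = sizeof p.payload;
        memcpy(p.payload, data, len);
        return p;
    }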
Abstract:
A computer apparatus and related method to access storage are provided. In one aspect, a controller maps an address range of a data block of storage into an accessible memory address range of at least one of a plurality of processors. In a further aspect, the controller ensures that copies of the data block cached in a plurality of memories by a plurality of processors are consistent.
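A sketch of the mapping idea, under the assumption of a single contiguous mapping per block; the record layout and function name are invented for illustration:

    #include <stdint.h>

    /* Hypothetical mapping record: a block of storage exposed to the
     * processors as a range of accessible memory addresses.           */
    typedef struct {
        uint64_t storage_block;   /* block address on the storage side */
        uint64_t mem_base;        /* base of the mapped memory range   */
        uint64_t length;          /* size of the mapped range in bytes */
    } block_mapping_t;

    /* Translate a processor memory address back to an offset within the
     * mapped storage block; returns 0 on success, -1 if out of range.
     * A real controller would also invalidate or update copies of the
     * block cached by other processors so they remain consistent.      */
    int map_to_storage(const block_mapping_t *m, uint64_t mem_addr,
                       uint64_t *storage_off) {
        if (mem_addr < m->mem_base || mem_addr >= m->mem_base + m->length)
            return -1;
        *storage_off = mem_addr - m->mem_base;
        return 0;
    }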
Abstract:
A system and method are disclosed for achieving cache coherency in a multiprocessor computer system having a plurality of sockets with processing devices and memory controllers and a plurality of memory blocks. In at least some embodiments, the system includes a plurality of node controllers capable of being respectively coupled to the respective sockets of the multiprocessor computer, a plurality of caching devices respectively coupled to the respective node controllers, and a fabric coupling the respective node controllers, by which cache line request signals can be communicated between the respective node controllers. Cache coherency is achieved notwithstanding the cache line request signals communicated between the respective node controllers due at least in part to communications between the node controllers and the respective caching devices to which the node controllers are coupled. In at least some embodiments, the caching devices track remote cache line ownership for processor and/or input/output hub caches.
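A minimal sketch of the ownership tracking performed by a caching device, assuming a flat table and invented names; the real structure is not specified in the abstract:

    #include <stdbool.h>
    #include <stdint.h>

    #define TRACKED_LINES 16

    /* Hypothetical per-node-controller record of remotely owned cache lines. */
    typedef struct {
        bool     valid;
        uint64_t line_addr;
        int      remote_socket;   /* socket (or I/O hub) holding the line */
    } ownership_entry_t;

    typedef struct {
        ownership_entry_t entries[TRACKED_LINES];
    } caching_device_t;

    /* On a cache line request arriving over the fabric, the node controller
     * asks its caching device whether the line is owned remotely; if so, the
     * request is forwarded to that owner before data is returned.            */
    int lookup_remote_owner(const caching_device_t *cd, uint64_t line_addr) {
        for (int i = 0; i < TRACKED_LINES; i++)
            if (cd->entries[i].valid && cd->entries[i].line_addr == line_addr)
                return cd->entries[i].remote_socket;
        return -1;   /* not remotely owned: the local copy is current */
    }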
Abstract:
A partitionable computer system and method of operating the same is disclosed. The partitionable computer system has a state machine, a processor, and a device controller. The state machine can be configured to monitor the status of a partition of the partitionable computer system. The information provided by the state machine can be used to provide security within the partitionable computing system.
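As a sketch only, the partition-monitoring state machine might look like the following; the specific states and events are assumptions, since the abstract does not name any:

    /* Hypothetical partition states and events. */
    typedef enum {
        PARTITION_DOWN,
        PARTITION_BOOTING,
        PARTITION_RUNNING,
        PARTITION_FAULTED
    } partition_state_t;

    typedef enum { EV_POWER_ON, EV_BOOT_DONE, EV_ERROR, EV_SHUTDOWN } partition_event_t;

    /* The device controller can consult the current state before allowing an
     * operation, for example rejecting accesses from a partition that is down. */
    partition_state_t partition_step(partition_state_t s, partition_event_t e) {
        switch (e) {
        case EV_POWER_ON:  return (s == PARTITION_DOWN)    ? PARTITION_BOOTING : s;
        case EV_BOOT_DONE: return (s == PARTITION_BOOTING) ? PARTITION_RUNNING : s;
        case EV_ERROR:     return PARTITION_FAULTED;
        case EV_SHUTDOWN:  return PARTITION_DOWN;
        }
        return s;
    }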
Abstract:
Distributing communications between paths, comprises providing a plurality of destinations, providing a plurality of communications paths such that each of the plurality of destinations can be accessed over each of the plurality of communications paths, defining destination addresses interleaved over the plurality of destinations, sending communications from a source to a plurality of the interleaved addresses, and selecting different ones of the plurality of paths for successive communications that are sent to addresses on different destinations, wherein the path for a communication is selected using at least a part of the address of the communication.
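A compact sketch of address interleaving and address-based path selection, assuming a 64-byte interleave granule and fixed destination and path counts purely for illustration:

    #include <stdint.h>

    #define NUM_DESTINATIONS 4
    #define NUM_PATHS        2

    /* Destination addresses are interleaved over the destinations; a
     * 64-byte granule is assumed here.                                 */
    int destination_for(uint64_t addr) {
        return (int)((addr >> 6) % NUM_DESTINATIONS);
    }

    /* The path is selected from a field of the address, so successive
     * communications sent to addresses on different destinations tend
     * to take different paths, spreading the load across the paths.    */
    int path_for(uint64_t addr) {
        return (int)((addr >> 6) % NUM_PATHS);
    }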
Abstract:
A method of shielding a memory device (110) from high write rates comprises receiving an instruction to write data at a memory controller (105), the memory controller (105) comprising a cache (120) having a number of cache lines defining stored data; with the memory controller (105), updating a cache line in response to a write hit in the cache (120); and with the memory controller (105), executing the instruction to write data in response to a cache miss to a cache line within the cache (120), in which the memory controller (105) prioritizes writing to the cache (120) over writing to the memory device (110).
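A minimal sketch of such a write-shielding cache, assuming a direct-mapped cache and a write-back-on-evict policy; the names and the hypothetical memory_device_write call are invented for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define CACHE_LINES 4

    typedef struct { bool valid; uint64_t addr; uint64_t data; } cache_line_t;
    static cache_line_t cache[CACHE_LINES];

    /* Illustrative write path: writes are absorbed in the controller's cache,
     * so the shielded memory device only sees a write when a line is evicted. */
    void controller_write(uint64_t addr, uint64_t data) {
        int slot = (int)(addr % CACHE_LINES);
        if (cache[slot].valid && cache[slot].addr == addr) {
            cache[slot].data = data;      /* write hit: update the cache line */
            return;
        }
        if (cache[slot].valid) {
            /* Eviction: only here would the memory device absorb a write,
             * e.g. a hypothetical memory_device_write(cache[slot].addr, ...). */
        }
        cache[slot] = (cache_line_t){ .valid = true, .addr = addr, .data = data };
    }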