Abstract:
A system, memory hub device, method and design structure for providing an enhanced cascade interconnected memory system are provided. The system includes a memory controller, a memory channel, a memory hub device coupled to the memory channel to communicate with the memory controller via one of a direct connection and a cascade interconnection through another memory hub device, and multiple memory devices in communication with the memory controller via one or more cascade interconnected memory hub devices. The memory channel includes unidirectional downstream link segments coupled to the memory controller and operable for transferring configurable data frames. The memory channel further includes unidirectional upstream link segments coupled to the memory controller and operable for transferring data frames.
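A minimal sketch of what a configurable downstream data frame might look like; the field names, widths, and payload limit below are assumptions for illustration and are not taken from the abstract, which only states that the downstream link segments carry configurable data frames.

```c
/* Illustrative only: frame fields and sizes are assumed, not from the abstract. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_PAYLOAD_BYTES 32

struct downstream_frame {
    uint8_t hub_id;                      /* which cascaded hub should claim the frame */
    uint8_t command;                     /* e.g. read / write / maintenance */
    uint8_t payload_len;                 /* configurable: 0..MAX_PAYLOAD_BYTES */
    uint8_t payload[MAX_PAYLOAD_BYTES];  /* write data or command operands */
};

/* Pack one downstream transfer; returns 0 on success. */
static int pack_frame(struct downstream_frame *f, uint8_t hub, uint8_t cmd,
                      const uint8_t *data, uint8_t len)
{
    if (len > MAX_PAYLOAD_BYTES)
        return -1;
    f->hub_id = hub;
    f->command = cmd;
    f->payload_len = len;
    memcpy(f->payload, data, len);
    return 0;
}

int main(void)
{
    struct downstream_frame f;
    uint8_t wdata[8] = {1, 2, 3, 4, 5, 6, 7, 8};

    if (pack_frame(&f, 2, 0x01, wdata, sizeof wdata) == 0)
        printf("frame for hub %u, %u payload bytes\n",
               (unsigned)f.hub_id, (unsigned)f.payload_len);
    return 0;
}
```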
Abstract:
Systems and methods for providing remote pre-fetch buffers. The systems include a computer memory system with a memory controller, one or more memory busses connected to the memory controller, and at least one memory subsystem in communication with the memory controller via the memory busses. The memory controller generates, receives and responds to memory access requests including unsolicited data transfers. The memory subsystem includes one or more memory devices and logic to initiate an unsolicited data transfer to the memory controller based on analysis performed at the memory subsystem of prior memory access requests received by the memory subsystem.
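A hedged sketch of the idea at the memory subsystem: the abstract does not say what the "analysis of prior memory access requests" is, so this example assumes a simple stride detector and models the unsolicited upstream transfer as a callback.

```c
/* Sketch only: stride detection is one assumed form of the subsystem's analysis. */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 64

static void push_unsolicited(uint64_t addr)   /* stand-in for the upstream transfer */
{
    printf("unsolicited push of line at 0x%llx\n", (unsigned long long)addr);
}

struct prefetch_state {
    uint64_t last_addr;
    int64_t  last_stride;
    int      confidence;
};

/* Called by the memory subsystem for every read request it services. */
static void observe_read(struct prefetch_state *s, uint64_t addr)
{
    int64_t stride = (int64_t)addr - (int64_t)s->last_addr;

    if (stride != 0 && stride == s->last_stride)
        s->confidence++;
    else
        s->confidence = 0;

    s->last_stride = stride;
    s->last_addr = addr;

    /* After two matching strides, speculatively push the next line upstream. */
    if (s->confidence >= 2)
        push_unsolicited(addr + (uint64_t)stride);
}

int main(void)
{
    struct prefetch_state s = {0};

    for (uint64_t a = 0x1000; a < 0x1000 + 6 * LINE_BYTES; a += LINE_BYTES)
        observe_read(&s, a);
    return 0;
}
```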
Abstract:
Memory systems are disclosed that include a memory controller, an outbound link connected to the memory controller, and at least two memory buffer devices in a first memory layer. The outbound link typically includes a number of conductive pathways that conduct memory signals from the memory controller to the memory buffer devices in the first memory layer. Each memory buffer device in the first memory layer typically is connected to the outbound link to receive memory signals from the memory controller.
Abstract:
A method and system for providing quality of service guarantees for simultaneous multithreaded processors are disclosed. The hardware and the operating system communicate with one another, providing information relating to thread attributes for threads executing on processing elements. The operating system controls scheduling of the threads based at least partly on the information communicated, and provides quality of service guarantees.
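The abstract does not specify a scheduling policy, so the sketch below assumes one simple form of guarantee: each thread is promised a minimum share of cycles, the hardware reports consumed shares, and the OS dispatches the thread whose guarantee is furthest from being met. All names and numbers are illustrative.

```c
/* Hedged sketch of one assumed QoS policy; not the patented scheduler. */
#include <stdio.h>

struct thread_attr {
    const char *name;
    double guaranteed_share;   /* fraction of cycles promised to the thread */
    double consumed_share;     /* fraction observed so far (from hardware counters) */
};

/* Pick the thread whose guarantee is furthest from being met. */
static int pick_next(const struct thread_attr *t, int n)
{
    int best = 0;
    double best_deficit = t[0].guaranteed_share - t[0].consumed_share;
    for (int i = 1; i < n; i++) {
        double deficit = t[i].guaranteed_share - t[i].consumed_share;
        if (deficit > best_deficit) {
            best_deficit = deficit;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    struct thread_attr threads[] = {
        {"interactive", 0.40, 0.25},
        {"batch",       0.30, 0.35},
        {"background",  0.10, 0.05},
    };
    printf("dispatch %s next\n", threads[pick_next(threads, 3)].name);
    return 0;
}
```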
Abstract:
A memory system having indeterminate read data latency includes a memory controller and one or more hub devices. The memory controller is configured for receiving data transfers via an upstream channel and for determining whether all or a subset of the data transfers include a data frame by detecting a frame start indicator. The data frame includes an identification tag that is utilized by the memory controller to associate the data frame with a corresponding read instruction issued by the memory controller. The one or more hub devices are in communication with the memory controller in a cascade interconnect manner via the upstream channel and a downstream channel. Each hub device is configured for receiving the data transfers via the upstream channel or the downstream channel and for determining whether all or a subset of the data transfers include a data frame by detecting the frame start indicator.
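An illustrative controller-side sketch of the tag mechanism: because read latency is indeterminate, data frames can return in any order, and the identification tag lets the controller match each frame back to the read that requested it. The frame layout, tag width, and table size below are assumptions.

```c
/* Sketch only: tag table and frame handling are assumed, not from the abstract. */
#include <stdint.h>
#include <stdio.h>

#define MAX_TAGS 16

struct pending_read {
    int      valid;
    uint64_t addr;      /* address the read instruction targeted */
};

static struct pending_read pending[MAX_TAGS];

/* Issue path: remember the address under a free tag and return the tag. */
static int issue_read(uint64_t addr)
{
    for (int tag = 0; tag < MAX_TAGS; tag++) {
        if (!pending[tag].valid) {
            pending[tag].valid = 1;
            pending[tag].addr = addr;
            return tag;
        }
    }
    return -1;  /* no tag free: controller must stall the read */
}

/* Return path: a frame start indicator was detected; match the frame's tag. */
static void receive_frame(int tag, const uint8_t *data, int len)
{
    if (tag < 0 || tag >= MAX_TAGS || !pending[tag].valid) {
        printf("unexpected tag %d\n", tag);
        return;
    }
    printf("read of 0x%llx completed with %d bytes (tag %d)\n",
           (unsigned long long)pending[tag].addr, len, tag);
    pending[tag].valid = 0;
    (void)data;
}

int main(void)
{
    uint8_t payload[8] = {0};
    int t0 = issue_read(0x1000);
    int t1 = issue_read(0x2000);

    /* Frames may come back in any order; tags keep the association. */
    receive_frame(t1, payload, sizeof payload);
    receive_frame(t0, payload, sizeof payload);
    return 0;
}
```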
Abstract:
A computing system and methods for memory management are presented. A memory or an I/O controller receives a write request where the data to be written is associated with an address. Hint information may be associated with the address and may relate to memory characteristics such as historical, O/S direction, data priority, job priority, job importance, job category, memory type, I/O sender ID, latency, power, write cost, or read cost components. The memory controller may interrogate the hint information to determine where (e.g., in what memory type or class) to store the associated data. Data is therefore stored efficiently within the system. The hint information may also be used to track post-write information and may be interrogated to determine if a data migration should occur and to which new memory type or class the data should be moved.
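A hedged sketch of hint interrogation at write time: the hint fields and memory classes below are invented for illustration, since the abstract only says the controller consults hint information tied to the write address when choosing where to place the data.

```c
/* Sketch only: hint fields and placement rules are assumed for illustration. */
#include <stdio.h>

enum mem_class { MEM_FAST_DRAM, MEM_SLOW_DRAM, MEM_PERSISTENT };

struct write_hint {
    int data_priority;      /* higher means more latency sensitive */
    int write_cost_ok;      /* nonzero if the medium's write cost is acceptable */
    int wants_persistence;  /* nonzero if the data should survive power loss */
};

/* Interrogate the hint and pick a memory class for the write. */
static enum mem_class place(const struct write_hint *h)
{
    if (h->wants_persistence && h->write_cost_ok)
        return MEM_PERSISTENT;
    if (h->data_priority > 5)
        return MEM_FAST_DRAM;
    return MEM_SLOW_DRAM;
}

int main(void)
{
    struct write_hint h = { .data_priority = 8, .write_cost_ok = 0,
                            .wants_persistence = 0 };
    printf("selected memory class %d\n", (int)place(&h));
    return 0;
}
```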
Abstract:
A method and system for thread scheduling for optimal heat dissipation are provided. Temperature sensors measure temperature throughout various parts of a processor chip. The temperatures detected are reported to an operating system or the like for scheduling threads. In one aspect, the observed temperature values are recorded on registers. An operating system or the like reads the registers and schedules threads based on the temperature values.
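A minimal sketch under an assumed policy: the operating system reads per-region temperature registers (modeled as a plain array) and dispatches the next ready thread to the coolest processing element to spread heat. The register layout and the "coolest core" policy are illustrative assumptions.

```c
/* Sketch only: register layout and dispatch policy are assumed. */
#include <stdio.h>

#define NUM_CORES 4

/* Stand-in for memory-mapped temperature registers, in degrees C. */
static int temp_reg[NUM_CORES] = {71, 64, 80, 58};

/* Return the index of the coolest processing element. */
static int coolest_core(void)
{
    int best = 0;
    for (int i = 1; i < NUM_CORES; i++)
        if (temp_reg[i] < temp_reg[best])
            best = i;
    return best;
}

int main(void)
{
    int core = coolest_core();
    printf("dispatch next thread on core %d (%d C)\n", core, temp_reg[core]);
    return 0;
}
```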
Abstract:
A method and apparatus for determining correct timing for reception, in a host in a memory system, of a normal toggle transmitted by an addressed memory chip on a bidirectional data strobe. An offset in the data strobe is established either by commanding the addressed memory chip, in a training period, to drive the data strobe to a known state except during transmission of a normal toggle, or by providing a voltage offset between a true and a complement phase of the data strobe, or by providing a circuit bias in a differential receiver on the host that receives the data strobe. A series of read commands is transmitted by the host to the addressed memory chip, which responds by transmitting the normal toggle. The timing of reception of the normal toggle, as received by the host chip, is adjusted until the normal toggle is correctly received.
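A hedged sketch of the training loop only: the delay range, the pass/fail check, and the choice of the center of the passing window are assumptions, and the hardware hooks (set_rx_delay, read_with_toggle_check) are stubs, not a real API.

```c
/* Sketch only: hardware hooks are stubs; the window-centering policy is assumed. */
#include <stdio.h>

#define DELAY_STEPS 32

static void set_rx_delay(int step) { (void)step; }

/* Issue a read and report whether the normal toggle was received correctly.
 * A fixed window stands in here for the real strobe-sampling hardware. */
static int read_with_toggle_check(int step)
{
    return step >= 10 && step <= 22;
}

int main(void)
{
    int first_pass = -1, last_pass = -1;

    for (int step = 0; step < DELAY_STEPS; step++) {
        set_rx_delay(step);
        if (read_with_toggle_check(step)) {
            if (first_pass < 0)
                first_pass = step;
            last_pass = step;
        }
    }

    if (first_pass < 0) {
        printf("no delay setting received the toggle correctly\n");
        return 1;
    }
    printf("passing window %d..%d, selecting %d\n",
           first_pass, last_pass, (first_pass + last_pass) / 2);
    return 0;
}
```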
Abstract:
A method and system for prefetching in a computer system are provided. The method in one aspect includes using a prefetch engine to perform prefetch instructions and to translate unmapped data. Misses to address translations during the prefetch are handled and resolved. The method also includes storing the resolved translations in a respective cache translation table. A system for prefetching in one aspect includes a prefetch engine operable to receive instructions to prefetch data from the main memory. The prefetch engine is also operable to search the cache address translation for the prefetch data and to perform address mapping translation if the prefetch data is unmapped. The prefetch engine is further operable to prefetch the data and store the address mapping in one or more cache memories if the data is unmapped.
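An illustrative sketch of the flow only: the page-table walk is reduced to an array lookup and the cache translation table to a tiny direct-mapped TLB. It shows the sequence the abstract describes (resolve a missed translation, store it, then prefetch) rather than any real implementation.

```c
/* Sketch only: page table, TLB, and prefetch are simplified stand-ins. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define TLB_SIZE   8
#define NUM_PAGES  16

static uint64_t page_table[NUM_PAGES];         /* virtual page -> physical page */

struct tlb_entry { int valid; uint64_t vpn, ppn; };
static struct tlb_entry tlb[TLB_SIZE];

/* Resolve a translation, filling the cached entry on a miss. */
static uint64_t translate(uint64_t vaddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    struct tlb_entry *e = &tlb[vpn % TLB_SIZE];

    if (!e->valid || e->vpn != vpn) {          /* translation miss */
        e->valid = 1;
        e->vpn = vpn;
        e->ppn = page_table[vpn % NUM_PAGES];  /* resolve and store the mapping */
    }
    return (e->ppn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
}

static void prefetch(uint64_t vaddr)
{
    printf("prefetch vaddr 0x%llx -> paddr 0x%llx\n",
           (unsigned long long)vaddr, (unsigned long long)translate(vaddr));
}

int main(void)
{
    for (uint64_t p = 0; p < NUM_PAGES; p++)
        page_table[p] = 0x100 + p;             /* arbitrary mapping */

    prefetch(0x3010);                          /* miss, resolved and stored */
    prefetch(0x3020);                          /* hit on the stored translation */
    return 0;
}
```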
Abstract:
A computing system and method employing a processor device for generating real addresses associated with memory locations of a real memory system for reading and writing of data thereto, the system comprising: a plurality of memory blocks in the real memory system for storing data; a physical memory storage for storing the pages of data comprising one or more real memory blocks, each real memory block partitioned into one or more sectors, each comprising contiguous bytes of physical memory; a translation table structure in the physical memory storage having entries for associating a real address with sectors of the physical memory, each translation table entry including one or more pointers for pointing to a corresponding sector in its associated real memory block, the table accessed for storing data in one or more allocated sectors for memory read and write operations initiated by the processor; and a control device for directly manipulating entries in the translation table structure for performing page operations without actually accessing physical memory data contents. In this system, the actual data of the pages involved in the operation are never accessed by the processor and are therefore never required in the memory cache hierarchy, thus eliminating the cache damage normally associated with these block operations. Further, the manipulation of the translation table involves reading and writing only a few bytes to perform the operation, as opposed to reading and writing the hundreds or thousands of bytes in the pages being manipulated.
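A hedged sketch of the central idea: a page operation (here, a move) is performed by rewriting the sector pointers in two translation-table entries instead of copying the page contents, so no data bytes pass through the processor or its caches. The entry layout and sector count are invented for clarity.

```c
/* Sketch only: translation-table entry layout and sector count are assumed. */
#include <stdio.h>

#define SECTORS_PER_PAGE 4

struct tt_entry {
    int sector[SECTORS_PER_PAGE];   /* indexes of physical sectors, -1 if none */
};

/* Move page src to page dst by transferring pointers; no page data is touched. */
static void page_move(struct tt_entry *table, int dst, int src)
{
    for (int i = 0; i < SECTORS_PER_PAGE; i++) {
        table[dst].sector[i] = table[src].sector[i];
        table[src].sector[i] = -1;              /* source page now unallocated */
    }
}

int main(void)
{
    struct tt_entry table[2] = {
        { { 10, 11, 12, 13 } },    /* page 0 owns four physical sectors */
        { { -1, -1, -1, -1 } },    /* page 1 is empty */
    };

    page_move(table, 1, 0);
    printf("page 1 first sector is now %d\n", table[1].sector[0]);
    return 0;
}
```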