Abstract:
A method and apparatus for collaborative memory accesses are described. A system includes a memory controller that receives a command from a host. The command is associated with at least one of a plurality of data elements. The memory controller causes execution of data casting operations that adjust a bit size of the plurality of data elements to generate cast data elements. The system includes an interface for communicating data between the host and a memory.
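
The abstract does not specify the type of cast; as a minimal illustrative sketch, assuming a narrowing cast from 32-bit to 16-bit integer elements with saturation, the casting operation applied to a plurality of data elements might look like the following C code.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical narrowing cast: adjust a 32-bit element to a 16-bit
 * representation, saturating values that do not fit. The actual cast
 * type and rounding behavior are not specified by the abstract. */
static int16_t cast_element(int32_t x)
{
    if (x > INT16_MAX) return INT16_MAX;
    if (x < INT16_MIN) return INT16_MIN;
    return (int16_t)x;
}

/* Apply the casting operation to a plurality of data elements. */
static void cast_elements(const int32_t *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = cast_element(in[i]);
}

int main(void)
{
    int32_t src[4] = { 7, 70000, -70000, -3 };
    int16_t dst[4];

    cast_elements(src, dst, 4);
    for (int i = 0; i < 4; i++)
        printf("%d -> %d\n", src[i], dst[i]);
    return 0;
}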
Abstract:
Systems and techniques for selectively transferring one or more portions of a cache block in response to a request are described. Computing system components are informed of instances where data transfer operations involve moving less than an entirety of the data included in a cache block. In one example, executable code for a computational task includes hints that identify when memory requests involve accessing and transmitting less than an entirety of a cache block and cause system components to communicate a subset of the cache block during a memory access. In another example, a data differentiator unit is implemented to analyze a cache block and return a portion of the cache block that is selected based on one or more criteria specified for a computational task. The described techniques thus overcome drawbacks of conventional systems that transmit an entire cache block when only a portion is needed.
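
The hint encoding and the differentiator's selection criteria are not specified; a minimal C sketch, assuming a 64-byte cache block divided into 8-byte sectors and a hypothetical request descriptor carrying a sector mask, could look like this.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_BLOCK_BYTES 64u
#define SECTOR_BYTES      8u   /* assume 8 sectors of 8 bytes each */

/* Hypothetical memory request carrying a hint: which sectors of the
 * cache block the requester actually needs. */
struct mem_request {
    uint64_t block_addr;   /* cache-block-aligned address */
    uint8_t  sector_mask;  /* bit i set => transfer sector i */
};

/* Transfer only the hinted sectors instead of the full block; returns
 * the number of bytes actually moved. */
static size_t transfer_partial(const struct mem_request *req,
                               const uint8_t *block, uint8_t *dst)
{
    size_t moved = 0;
    for (unsigned s = 0; s < CACHE_BLOCK_BYTES / SECTOR_BYTES; s++) {
        if (req->sector_mask & (1u << s)) {
            memcpy(dst + s * SECTOR_BYTES,
                   block + s * SECTOR_BYTES, SECTOR_BYTES);
            moved += SECTOR_BYTES;
        }
    }
    return moved;
}

int main(void)
{
    uint8_t block[CACHE_BLOCK_BYTES], dst[CACHE_BLOCK_BYTES] = {0};
    for (unsigned i = 0; i < CACHE_BLOCK_BYTES; i++)
        block[i] = (uint8_t)i;

    /* Hint: only the first and last sectors are needed. */
    struct mem_request req = { .block_addr = 0x1000, .sector_mask = 0x81 };
    printf("moved %zu of %u bytes\n",
           transfer_partial(&req, block, dst), CACHE_BLOCK_BYTES);
    return 0;
}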
Abstract:
A method includes, for each key of a plurality of keys, identifying from a set of buckets a first bucket for the key based on a first hash function, and identifying from the set of buckets a second bucket for the key based on a second hash function. An entry for the key is stored in a bucket selected from one of the first bucket and the second bucket. The entry is inserted in a sequence of entries in a memory block. A position of the entry in the sequence of entries corresponds to the selected bucket. For each bucket in the set of buckets, an indication of a number of entries in the bucket is recorded.
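
A minimal C sketch of this flow, assuming hypothetical hash functions and a fewer-entries tie-breaking rule (the abstract does not state the selection criterion), showing how entries are laid out in one memory block so that each entry's position corresponds to its selected bucket, and how the per-bucket counts are recorded.

#include <stdint.h>
#include <stdio.h>

#define NUM_BUCKETS 8u

/* Hypothetical hash functions; the abstract does not specify them. */
static unsigned h1(uint32_t key) { return (key * 2654435761u) % NUM_BUCKETS; }
static unsigned h2(uint32_t key) { return (key ^ (key >> 7)) % NUM_BUCKETS; }

int main(void)
{
    uint32_t keys[] = { 3, 17, 29, 42, 56, 71, 88, 90, 101, 123 };
    size_t n = sizeof(keys) / sizeof(keys[0]);

    unsigned chosen[sizeof(keys) / sizeof(keys[0])];
    unsigned count[NUM_BUCKETS] = {0};   /* entries per bucket */

    /* Pass 1: for each key, pick one of its two candidate buckets
     * (here the less-occupied one, an assumed rule) and record counts. */
    for (size_t i = 0; i < n; i++) {
        unsigned b1 = h1(keys[i]), b2 = h2(keys[i]);
        chosen[i] = (count[b1] <= count[b2]) ? b1 : b2;
        count[chosen[i]]++;
    }

    /* Pass 2: lay entries out in one memory block so that an entry's
     * position corresponds to its selected bucket (bucket-major order). */
    unsigned start[NUM_BUCKETS], next[NUM_BUCKETS], offset = 0;
    for (unsigned b = 0; b < NUM_BUCKETS; b++) {
        start[b] = next[b] = offset;
        offset += count[b];
    }
    uint32_t block[sizeof(keys) / sizeof(keys[0])];
    for (size_t i = 0; i < n; i++)
        block[next[chosen[i]]++] = keys[i];

    for (unsigned b = 0; b < NUM_BUCKETS; b++)
        printf("bucket %u: %u entries starting at position %u\n",
               b, count[b], start[b]);
    return 0;
}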
Abstract:
Systems, apparatuses, and methods for determining preferred memory page management policies by software are disclosed. Software executing on one or more processing units generates a memory request. Software determines the preferred page management policy for the memory request based at least in part on the data access size and data access pattern of the memory request. Software conveys an indication of a preferred page management policy to a memory controller. Then, the memory controller accesses memory for the memory request using the preferred page management policy specified by software.
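
A minimal C sketch of how software might derive and convey such a hint, assuming the common open-page and close-page DRAM policies and illustrative thresholds; the abstract does not specify the policy set, the encoding, or the decision rule.

#include <stddef.h>
#include <stdio.h>

/* Common DRAM row-buffer policies; assumed here as the policy set. */
enum page_policy { OPEN_PAGE, CLOSE_PAGE };

struct mem_request {
    size_t access_size;      /* bytes touched by the request */
    int    sequential;       /* nonzero if the access pattern is streaming */
    enum page_policy hint;   /* preferred policy conveyed to the controller */
};

/* Software-side heuristic: large or sequential accesses tend to hit the
 * same DRAM row again, so prefer keeping the page open; small random
 * accesses prefer closing the page to avoid a later precharge penalty.
 * The threshold is an illustrative assumption. */
static enum page_policy choose_policy(size_t access_size, int sequential)
{
    return (sequential || access_size >= 256) ? OPEN_PAGE : CLOSE_PAGE;
}

int main(void)
{
    struct mem_request req = { .access_size = 64, .sequential = 0 };
    req.hint = choose_policy(req.access_size, req.sequential);
    printf("preferred policy: %s\n",
           req.hint == OPEN_PAGE ? "open-page" : "close-page");
    return 0;
}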
Abstract:
A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode.
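
A minimal C sketch of the mode distinction, with an entirely hypothetical toy translation step, since the abstract does not describe how the processing device manages virtual memory in each mode.

#include <stdint.h>
#include <stdio.h>

/* Operating modes named in the abstract. */
enum op_mode { MODE_SLAVE, MODE_HOST };

/* Hypothetical model of the in-package processing device on the logic die. */
struct logic_die {
    enum op_mode mode;
    uint64_t page_table_base;   /* used only when managing virtual memory */
};

/* Assumed behavior: in the host operating mode the logic die performs its
 * own virtual-to-physical translation; in the slave operating mode it
 * treats addresses as already translated by the external host. */
static uint64_t translate(const struct logic_die *d, uint64_t addr)
{
    if (d->mode == MODE_HOST)
        return d->page_table_base + (addr & 0xFFFu);  /* toy translation */
    return addr;                                      /* host did the work */
}

int main(void)
{
    struct logic_die d = { .mode = MODE_HOST, .page_table_base = 0x100000 };
    printf("host mode:  0x%llx\n", (unsigned long long)translate(&d, 0x2345));
    d.mode = MODE_SLAVE;
    printf("slave mode: 0x%llx\n", (unsigned long long)translate(&d, 0x2345));
    return 0;
}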
Abstract:
The described embodiments include an interposer with signal routes located therein. The interposer includes a set of sites arranged in a pattern, each site including a set of connection points. Each connection point in each site is coupled to a corresponding one of the signal routes. Integrated circuit chiplets may be mounted on the sites, and signal connectors for mounted integrated circuit chiplets may be coupled to some or all of the connection points for the corresponding sites, thereby coupling the chiplets to corresponding signal routes. The chiplets may then send and receive signals via the connection points and signal routes. In some embodiments, the set of connection points in each of the sites is the same, i.e., has a same physical layout. In other embodiments, the set of connection points for each site is arranged in one of two or more physical layouts.
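
A minimal C sketch modeling sites, connection points, and their signal routes; the counts, the route assignment, and the mounting interface are assumptions, not taken from the abstract.

#include <stdio.h>

#define NUM_SITES          4
#define POINTS_PER_SITE    8   /* assumed uniform layout across sites */

/* Each connection point in each site is wired to one signal route. */
struct site {
    int route_of_point[POINTS_PER_SITE];
    int chiplet_id;             /* -1 if no chiplet is mounted */
};

/* Mount a chiplet on a site; its connectors then reach the signal routes
 * of that site's connection points. */
static void mount_chiplet(struct site *s, int chiplet_id)
{
    s->chiplet_id = chiplet_id;
}

int main(void)
{
    struct site interposer[NUM_SITES];

    /* Assign routes: here simply a distinct route per (site, point). */
    for (int s = 0; s < NUM_SITES; s++) {
        interposer[s].chiplet_id = -1;
        for (int p = 0; p < POINTS_PER_SITE; p++)
            interposer[s].route_of_point[p] = s * POINTS_PER_SITE + p;
    }

    mount_chiplet(&interposer[2], 7);
    printf("chiplet %d, site 2, point 3 -> route %d\n",
           interposer[2].chiplet_id, interposer[2].route_of_point[3]);
    return 0;
}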
Abstract:
Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry is placed on the logic layer to perform various computation operations. This functionality is desirable where performing the operations locally, near the memory devices, would allow increased performance and/or power efficiency by avoiding transmission of data across the interface to the host processor.
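
To make the data-movement argument concrete, a minimal C sketch contrasting a host-side reduction (every element crosses the memory interface) with a hypothetical in-stack reduction on the logic layer (only the result crosses); the operation and interface are assumptions, not taken from the abstract.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define N 1024u

/* Host-side approach: every element is transferred across the memory
 * interface to the host processor, which performs the sum. */
static uint64_t host_sum(const uint32_t *mem, size_t n, size_t *bytes_moved)
{
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += mem[i];
    *bytes_moved = n * sizeof(uint32_t);
    return s;
}

/* Near-memory approach (hypothetical): the logic layer in the stack
 * performs the same sum locally and returns only the 8-byte result. */
static uint64_t logic_layer_sum(const uint32_t *mem, size_t n, size_t *bytes_moved)
{
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++)   /* modeled as running next to the DRAM dies */
        s += mem[i];
    *bytes_moved = sizeof(uint64_t); /* only the result crosses the interface */
    return s;
}

int main(void)
{
    static uint32_t mem[N];
    for (uint32_t i = 0; i < N; i++)
        mem[i] = i;

    size_t host_bytes, pim_bytes;
    uint64_t a = host_sum(mem, N, &host_bytes);
    uint64_t b = logic_layer_sum(mem, N, &pim_bytes);
    printf("sum %llu == %llu; host moves %zu bytes, logic layer moves %zu\n",
           (unsigned long long)a, (unsigned long long)b, host_bytes, pim_bytes);
    return 0;
}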
Abstract:
Methods and mechanisms for managing data in a hash table are disclosed. A computing system includes a hash table configured to store data and hash management logic. In response to receiving a request to insert a key-value pair into the hash table, the hash management logic is configured to generate a first hash value by applying a first hash function to the key of the key-value pair and identify a first bucket within the hash table that corresponds to the first hash value. If the first bucket has a slot available, the hash management logic stores the key-value pair in the slot. If the first bucket does not have a slot available, the hash management logic selects a first slot of the first bucket for conversion to a remap entry, stores the key-value pair in a second bucket, and stores information associating the key-value pair with the second bucket in the remap entry.
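
A minimal C sketch of this insertion path, with assumed table geometry, hash functions, and remap-entry encoding, and an assumed policy of relocating the displaced pair to the second bucket; none of these details are given in the abstract.

#include <stdint.h>
#include <stdio.h>

#define NUM_BUCKETS    8
#define SLOTS_PER_BKT  4

enum slot_kind { EMPTY, KV, REMAP };

struct slot {
    enum slot_kind kind;
    uint32_t key, value;     /* valid when kind == KV */
    unsigned remap_bucket;   /* valid when kind == REMAP */
};

static struct slot table[NUM_BUCKETS][SLOTS_PER_BKT];

static unsigned hash1(uint32_t key) { return (key * 2654435761u) % NUM_BUCKETS; }
static unsigned hash2(uint32_t key) { return (key ^ (key >> 5)) % NUM_BUCKETS; }

/* Try to place a key-value pair in a free slot of bucket b. */
static int put_in_bucket(unsigned b, uint32_t key, uint32_t value)
{
    for (int s = 0; s < SLOTS_PER_BKT; s++) {
        if (table[b][s].kind == EMPTY) {
            table[b][s] = (struct slot){ .kind = KV, .key = key, .value = value };
            return 1;
        }
    }
    return 0;
}

static int free_slots(unsigned b)
{
    int n = 0;
    for (int s = 0; s < SLOTS_PER_BKT; s++)
        if (table[b][s].kind == EMPTY)
            n++;
    return n;
}

static int insert(uint32_t key, uint32_t value)
{
    unsigned b1 = hash1(key);
    if (put_in_bucket(b1, key, value))
        return 1;                     /* first bucket had a free slot */

    /* First bucket is full: convert its last slot to a remap entry naming
     * a second bucket, then store both the displaced pair and the new pair
     * there (relocating the displaced pair is an assumed policy). */
    unsigned b2 = hash2(key);
    if (free_slots(b2) < 2)
        return 0;                     /* sketch gives up instead of chaining */

    struct slot victim = table[b1][SLOTS_PER_BKT - 1];
    table[b1][SLOTS_PER_BKT - 1] =
        (struct slot){ .kind = REMAP, .remap_bucket = b2 };
    put_in_bucket(b2, victim.key, victim.value);
    put_in_bucket(b2, key, value);
    return 1;
}

int main(void)
{
    for (uint32_t k = 1; k <= 40; k++)
        if (!insert(k, k * 10))
            printf("could not insert key %u\n", k);
    return 0;
}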
Abstract:
The described embodiments include a computing device with multiple processors for processing interrupts. In the described embodiments, each of the multiple processors is classified as one or more processor types based on factors such as the features and functionality of the processor, the operating environment of the processor, the characteristics of some or all of the available interrupts, etc. During operation, an interrupt controller in the computing device receives an indication of an interrupt. The interrupt controller then determines a processor type for processing the interrupt. Next, the interrupt controller causes the interrupt to be processed by one of the plurality of processors that is of the determined processor type.
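
A minimal C sketch of the routing decision, assuming two processor types and a hypothetical classification of interrupts; the abstract lists the relevant factors but not a concrete policy.

#include <stdio.h>

#define NUM_CPUS 4

/* Assumed processor types; a processor may be classified as more than
 * one type, so a bitmask is used. */
enum cpu_type { TYPE_LOW_POWER = 1 << 0, TYPE_HIGH_PERF = 1 << 1 };

struct cpu { int id; unsigned types; };

/* Hypothetical classification of the interrupt itself. */
enum irq_class { IRQ_BACKGROUND, IRQ_LATENCY_CRITICAL };

/* Interrupt controller: determine which processor type should handle the
 * interrupt, then pick a processor of that type. */
static int route_irq(enum irq_class c, const struct cpu *cpus, int n)
{
    unsigned wanted = (c == IRQ_LATENCY_CRITICAL) ? TYPE_HIGH_PERF
                                                  : TYPE_LOW_POWER;
    for (int i = 0; i < n; i++)
        if (cpus[i].types & wanted)
            return cpus[i].id;
    return -1;  /* no processor of the determined type */
}

int main(void)
{
    struct cpu cpus[NUM_CPUS] = {
        { 0, TYPE_LOW_POWER },
        { 1, TYPE_LOW_POWER },
        { 2, TYPE_HIGH_PERF },
        { 3, TYPE_HIGH_PERF | TYPE_LOW_POWER },
    };
    printf("background irq -> cpu %d\n",
           route_irq(IRQ_BACKGROUND, cpus, NUM_CPUS));
    printf("latency-critical irq -> cpu %d\n",
           route_irq(IRQ_LATENCY_CRITICAL, cpus, NUM_CPUS));
    return 0;
}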
Abstract:
An integrated circuit device includes a memory controller coupleable to a memory. The memory controller schedules memory accesses to regions of the memory based on memory timing parameters specific to the regions. A method includes receiving a memory access request at a memory device. The method further includes accessing, from a timing data store of the memory device, data representing a memory timing parameter specific to a region of the memory cell circuitry targeted by the memory access request. The method also includes scheduling, at the memory controller, the memory access request based on the data.
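
A minimal C sketch of a per-region timing data store and the resulting scheduling decision; the region granularity and the choice of row-cycle time as the timing parameter are assumptions, not taken from the abstract.

#include <stdint.h>
#include <stdio.h>

#define NUM_REGIONS     4
#define REGION_BYTES    (256u * 1024u * 1024u)   /* assumed region size */

/* Timing data store: one entry per memory region. Only a single
 * parameter (row-cycle time in controller clock cycles) is modeled. */
struct region_timing { unsigned trc_cycles; };

static const struct region_timing timing_store[NUM_REGIONS] = {
    { 39 }, { 42 }, { 45 }, { 39 }
};

/* Map a request address to its region and look up that region's timing. */
static unsigned region_of(uint64_t addr) { return (addr / REGION_BYTES) % NUM_REGIONS; }

/* Schedule: the earliest cycle the access may issue is the last activate
 * to the same region plus that region's specific row-cycle time. */
static uint64_t schedule(uint64_t addr, uint64_t last_activate_cycle)
{
    const struct region_timing *t = &timing_store[region_of(addr)];
    return last_activate_cycle + t->trc_cycles;
}

int main(void)
{
    uint64_t addr = 0x1A000000;   /* falls in region 1 in this model */
    printf("request to region %u may issue at cycle %llu\n",
           region_of(addr), (unsigned long long)schedule(addr, 1000));
    return 0;
}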