Abstract:
A memory read operation is directed at a group of semiconductor devices from which a first semiconductor device has been removed. An error in data for the memory read operation is detected based on error-correction coding (ECC). The error is caused at least in part by the first semiconductor device having been removed. ECC is used to determine corrected data for the memory read operation.
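A minimal C sketch of the idea, assuming a simple XOR (RAID-style) code in which one parity device stores the XOR of all data devices, so the contents of a single removed device can be reconstructed from the survivors; real chipkill-class ECC uses stronger symbol-based codes, and all names here are illustrative:

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_DEVICES 8  /* data devices, plus one parity device */

    /* Reconstruct the byte held by the removed device from the bytes of
     * the surviving data devices and the parity device. */
    uint8_t reconstruct_removed(const uint8_t survivors[NUM_DEVICES - 1],
                                uint8_t parity)
    {
        uint8_t missing = parity;
        for (size_t i = 0; i < NUM_DEVICES - 1; i++)
            missing ^= survivors[i];  /* XOR cancels the present devices */
        return missing;               /* what the removed device held */
    }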
Abstract:
In one form, a data processor comprises a memory accessing agent and a memory controller. The memory accessing agent selectively initiates read accesses from and write accesses to a memory. The memory controller is coupled to the memory accessing agent and is adapted to be coupled to the memory and to access the memory using a start-gap wear-leveling algorithm. The memory controller is adapted to maintain a metadata log in a region of the memory, to store in the metadata log a start address and a gap address used in the start-gap wear-leveling algorithm, and, upon initialization, to access the metadata log to retrieve an initial start address and an initial gap address for use in the start-gap wear-leveling algorithm. In another form, a memory module may comprise a memory buffer including such a memory controller.
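A minimal C sketch of start-gap address translation (after Qureshi et al.), assuming N logical lines stored in N+1 physical lines with one empty gap line; the start and gap values would be the ones persisted in the metadata log. Identifiers are illustrative, not taken from the patent:

    #include <stdint.h>

    #define N_LINES 256u  /* logical lines; physical lines number N_LINES+1 */

    struct start_gap {
        uint32_t start;  /* rotation offset, 0..N_LINES-1 */
        uint32_t gap;    /* physical index of the empty line, 0..N_LINES */
    };

    /* Translate a logical line address to its current physical line. */
    uint32_t map_address(const struct start_gap *sg, uint32_t logical)
    {
        uint32_t physical = (logical + sg->start) % N_LINES;
        if (physical >= sg->gap)  /* skip over the empty gap line */
            physical += 1;
        return physical;
    }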
Abstract:
A memory cell is read by measuring a parameter associated with the memory cell with a first resolution to determine a value stored in the memory cell. The parameter is also measured with a second resolution that is finer than the first resolution. The memory cell is reprogrammed to mitigate an offset between the parameter as measured with the second resolution and the parameter as measured with the first resolution.
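A minimal C sketch, assuming the measured parameter is a cell's threshold voltage: a coarse read quantizes it to a stored level, a fine read reveals how far the cell has drifted from the level's nominal target, and the cell is reprogrammed when that offset grows too large. All thresholds and helper names are hypothetical:

    #include <math.h>

    #define LEVEL_STEP 1.0   /* volts between adjacent coarse levels */
    #define MAX_OFFSET 0.2   /* drift tolerated before reprogramming */

    /* Coarse read: quantize the threshold voltage to the nearest level. */
    double read_coarse(double vth) { return round(vth / LEVEL_STEP); }

    /* Fine read: compare against the level's nominal target and decide
     * whether the cell should be reprogrammed. */
    int needs_reprogram(double vth_fine)
    {
        double level  = read_coarse(vth_fine);          /* stored value */
        double offset = vth_fine - level * LEVEL_STEP;  /* drift from target */
        return fabs(offset) > MAX_OFFSET;
    }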
Abstract:
A memory device receives a plurality of read commands and/or write commands in parallel. The memory device includes I/O logic to receive the commands in parallel, to transmit data corresponding to respective read commands on respective portions of a data bus, and to receive data corresponding to respective write commands on respective portions of the data bus.
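A minimal C sketch, assuming a data bus divided into four portions so that up to four commands can move data concurrently; the structures and names are illustrative only:

    #include <stdint.h>
    #include <stddef.h>

    #define BUS_PORTIONS 4

    enum cmd_kind { CMD_READ, CMD_WRITE };

    struct command {
        enum cmd_kind kind;
        uint32_t      addr;
    };

    /* Each command in a parallel batch is served on its own bus portion:
     * reads drive data onto that portion, writes latch data from it. */
    void serve_batch(const struct command cmds[BUS_PORTIONS],
                     uint16_t bus[BUS_PORTIONS], uint16_t memory[])
    {
        for (size_t p = 0; p < BUS_PORTIONS; p++) {
            if (cmds[p].kind == CMD_READ)
                bus[p] = memory[cmds[p].addr];  /* drive read data out */
            else
                memory[cmds[p].addr] = bus[p];  /* latch write data in */
        }
    }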
Abstract:
Exemplary embodiments provide wear spreading among die regions (i.e., one or more circuits) in an integrated circuit, or among dies, by using operating condition data in addition to or instead of environmental data such as temperature data from each of a plurality of die regions. Control logic produces a cumulative amount of time each of the plurality of die regions has spent at an operating condition, based on operating condition data derived from at least one of the following operating characteristics: a frequency of operation, an operating voltage, an activity level, a timing margin, and a number of detected faults of the plurality of die regions. The method and apparatus spread wear among the plurality of die regions of the same type by controlling task execution among the die regions using the resulting die wear-out data.
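A minimal C sketch, assuming wear is tracked as the cumulative time each die region spends in a high-stress operating condition and tasks are steered to the least-worn region; fields and names are illustrative, not from the patent:

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_REGIONS 4

    struct region {
        uint64_t stress_ticks;  /* cumulative time at the stressful condition */
    };

    /* Called periodically: accumulate wear for currently stressed regions. */
    void accumulate(struct region r[NUM_REGIONS],
                    const int stressed[NUM_REGIONS], uint64_t dt)
    {
        for (size_t i = 0; i < NUM_REGIONS; i++)
            if (stressed[i])
                r[i].stress_ticks += dt;
    }

    /* Pick the region with the least accumulated wear for the next task. */
    size_t pick_region(const struct region r[NUM_REGIONS])
    {
        size_t best = 0;
        for (size_t i = 1; i < NUM_REGIONS; i++)
            if (r[i].stress_ticks < r[best].stress_ticks)
                best = i;
        return best;
    }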
Abstract:
A method includes, for each data value in a set of one or more data values, determining a boundary between a high order portion of the data value and a low order portion of the data value, storing the low order portion at a first memory location utilizing a low data fidelity storage scheme, and storing the high order portion at a second memory location utilizing a high data fidelity storage scheme for recording data at a higher data fidelity than the low data fidelity storage scheme.
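A minimal C sketch, assuming a 32-bit value split at a chosen bit boundary: the high-order bits would go to high-fidelity (e.g., ECC-protected) storage and the low-order bits to denser, less reliable storage, where bit errors perturb the value only slightly. Names are illustrative:

    #include <stdint.h>

    struct split_value {
        uint32_t high;  /* stored with the high-fidelity scheme */
        uint32_t low;   /* stored with the low-fidelity scheme  */
    };

    /* Split a value at the given bit boundary (0 < boundary < 32). */
    struct split_value split(uint32_t value, unsigned boundary)
    {
        struct split_value s;
        s.low  = value & ((1u << boundary) - 1u);  /* bits below the boundary */
        s.high = value >> boundary;                /* bits at/above it */
        return s;
    }

    /* Recombine the two portions into the original value. */
    uint32_t rejoin(struct split_value s, unsigned boundary)
    {
        return (s.high << boundary) | s.low;
    }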
Abstract:
A neural network runs a known input data set using an error-free power setting and an error-prone power setting. The differences between the outputs of the neural network under the two power settings determine a high-level error rate associated with the output of the neural network at the error-prone power setting. If the high-level error rate is excessive, the error-prone power setting is adjusted to reduce errors by changing the voltage and/or clock frequency utilized by the neural network system. If the high-level error rate is within bounds, the error-prone power setting can remain, allowing the neural network to operate with an acceptable error tolerance and improved efficiency. The error tolerance can be specified by the neural network application.
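A minimal C sketch, assuming the two runs produce per-input top-class predictions and the high-level error rate is the fraction of inputs whose top class differs between the error-free and error-prone settings; the voltage-adjustment step is a hypothetical control loop:

    #include <stddef.h>

    /* Fraction of inputs where the error-prone run disagrees with the
     * error-free reference run. */
    double high_level_error_rate(const int top_free[], const int top_prone[],
                                 size_t n)
    {
        size_t mismatches = 0;
        for (size_t i = 0; i < n; i++)
            if (top_free[i] != top_prone[i])
                mismatches++;
        return (double)mismatches / (double)n;
    }

    /* Raise the error-prone supply voltage until the error rate meets the
     * application-specified tolerance (illustrative 10 mV step). */
    double adjust_voltage(double v, double err_rate, double tolerance)
    {
        return (err_rate > tolerance) ? v + 0.01 : v;
    }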
Abstract:
A controller assigns variable-length addresses to addressable elements that are connected to a network. The variable-length addresses are determined based on probabilities that packets are addressed to the corresponding addressable elements. The controller transmits, to the addressable elements via the network, a routing table indicating the variable-length addresses assigned to the addressable elements. Routers or addressable elements receive the routing table and route one or more packets over the network to an addressable element using the variable-length addresses included in a header of the one or more packets.
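A minimal C sketch of the length choice, assuming addresses are sized Shannon-style: an element receiving a fraction p of the traffic gets roughly ceil(-log2(p)) address bits, so frequently addressed elements carry shorter addresses. A real assignment would use a prefix-free (e.g., Huffman) code; this only illustrates how probability drives address length:

    #include <math.h>
    #include <stdio.h>

    /* Number of address bits for an element addressed with probability p. */
    unsigned address_bits(double p)
    {
        return (unsigned)ceil(-log2(p));
    }

    int main(void)
    {
        const double prob[] = { 0.50, 0.25, 0.15, 0.10 };  /* per element */
        for (size_t i = 0; i < 4; i++)
            printf("element %zu: %u-bit address\n", i, address_bits(prob[i]));
        return 0;
    }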
Abstract:
A memory fence or other similar operation is executed with reduced latency. An early fence operation is executed and acts as a hint to the processor executing the thread that executes the fence. This hint causes the processor to begin performing sub-operations for the fence earlier than if no such hint were executed. Examples of such sub-operations include operations that make data written by writes prior to the fence operation available to other threads. A resolving fence, which occurs after the early fence, performs the remaining sub-operations for the fence. By triggering some or all of the sub-operations for a memory fence that will occur in the future, the early fence operation reduces the latency associated with that memory fence operation.
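A minimal C sketch of the early/resolving split using C11 atomics; early_fence() stands in for the hint the abstract describes (no such hint exists in standard C), while the resolving fence is an ordinary release fence that completes the ordering:

    #include <stdatomic.h>

    extern int shared_data;
    extern atomic_int ready;

    static inline void early_fence(void)
    {
        /* Hypothetical hint: start flushing prior writes to other threads
         * now, so the later resolving fence has less work left to do. */
    }

    void producer(void)
    {
        shared_data = 42;                           /* write before the fence */
        early_fence();                              /* begin sub-operations */
        /* ... unrelated work overlaps with the flush ... */
        atomic_thread_fence(memory_order_release);  /* resolving fence */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
    }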
Abstract:
Systems, apparatuses, and methods for sharing a field-programmable gate array (FPGA) compute engine are disclosed. A system includes one or more processors and one or more FPGAs. The system receives a request, generated by a first user process, to allocate a portion of the processing resources on a first FPGA. The system maps the portion of the processing resources of the first FPGA into an address space of the first user process. The system prevents other user processes from accessing that portion of the processing resources of the first FPGA. Later, the system detects a release of the portion of the processing resources on the first FPGA by the first user process. Then, the system receives, from a second user process, a second request to allocate the first FPGA. In response to the second request, the system maps the first FPGA into an address space of the second user process.
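A minimal C sketch, assuming the allocated FPGA region is exposed as a character device that a user process maps into its address space, with exclusivity enforced by a kernel driver; the device path and region size are hypothetical:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define FPGA_DEV    "/dev/fpga0"  /* hypothetical device node */
    #define REGION_SIZE (1 << 20)     /* 1 MiB of mapped FPGA resources */

    /* Map the FPGA's allocated region into this process's address space. */
    void *map_fpga_region(void)
    {
        int fd = open(FPGA_DEV, O_RDWR);
        if (fd < 0)
            return NULL;
        void *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        close(fd);  /* mapping persists after the fd is closed */
        return (region == MAP_FAILED) ? NULL : region;
    }

    /* Release the region so another process can allocate it. */
    void release_fpga_region(void *region)
    {
        munmap(region, REGION_SIZE);
    }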