Abstract:
Embodiments include a method and system of dynamically allocatable memory error mitigation. In one embodiment, a system applies an error mitigation mechanism to one of multiple groups of memory units, wherein the one group is in active use during an error test of a second group of memory units. The system deactivates and tests the second group of memory units for errors. In response to detecting an error in a memory unit of the second group, the system applies, to the memory unit of the second group having the error, the error mitigation mechanism for active use. The system then activates the second group of memory units with the error mitigation mechanism applied to the memory unit of the second group having the error.
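To make the rotating test-and-mitigate flow concrete, the following is a minimal Python sketch of the idea: one group stays in active use while another is deactivated, tested unit by unit, mitigated where an error is found, and then reactivated. The MemoryUnit and MemoryGroup classes, the test_unit() check, and the mitigated flag are hypothetical illustrations rather than the patented implementation.

```python
# Minimal sketch of the test-and-mitigate flow; names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class MemoryUnit:
    unit_id: int
    has_error: bool = False      # set by the (simulated) error test
    mitigated: bool = False      # error mitigation mechanism applied


@dataclass
class MemoryGroup:
    group_id: int
    units: list = field(default_factory=list)
    active: bool = True


def test_unit(unit: MemoryUnit) -> bool:
    """Placeholder error test; a real system would run a pattern test."""
    return unit.has_error


def test_and_mitigate(active_group: MemoryGroup, test_group: MemoryGroup) -> None:
    """Keep active_group in service while test_group is taken offline,
    tested, mitigated where errors are found, and brought back into service."""
    assert active_group.active, "the other group must stay in active use"

    test_group.active = False            # deactivate the group under test
    for unit in test_group.units:
        if test_unit(unit):              # error detected in this unit
            unit.mitigated = True        # apply mitigation for active use
    test_group.active = True             # reactivate with mitigation applied


if __name__ == "__main__":
    g1 = MemoryGroup(1, [MemoryUnit(i) for i in range(4)])
    g2 = MemoryGroup(2, [MemoryUnit(i, has_error=(i == 2)) for i in range(4)])
    test_and_mitigate(active_group=g1, test_group=g2)
    print([(u.unit_id, u.mitigated) for u in g2.units])
```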
Abstract:
A cache memory system uses multi-bit Error Correcting Code (ECC) with a low storage and complexity overhead. In an embodiment, error correction logic may include a first error correction logic to determine a number of errors in data that is stored in a cache line of a cache memory, and a second error correction logic to receive the data from the first error correction logic if the number of errors is determined to be greater than one and to perform error correction responsive to receipt of the data. The cache memory system can be operated at very low idle power, without dramatically increasing transition latency to and from an idle power state due to loss of state. Other embodiments are described and claimed.
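As a rough illustration of the two-stage correction path, the Python sketch below counts the bit errors in a cache line with a fast first stage and hands the data to a stronger second stage only when more than one error is found. The helper names (count_errors, correct_single_bit, correct_multi_bit) are hypothetical placeholders for the ECC logic, not the actual circuitry.

```python
# Minimal sketch of the two-stage error-correction flow; helpers are stubs.

def count_errors(data: bytes, check_bits: bytes) -> int:
    """First-stage logic: estimate the number of bit errors from the syndrome.
    Stubbed out here; a real design would decode the ECC syndrome."""
    return 0  # assume clean data in this sketch


def correct_single_bit(data: bytes, check_bits: bytes) -> bytes:
    """Fast in-line correction for a single-bit error."""
    return data


def correct_multi_bit(data: bytes, check_bits: bytes) -> bytes:
    """Second-stage logic: stronger (and slower) multi-bit correction."""
    return data


def read_cache_line(data: bytes, check_bits: bytes) -> bytes:
    n = count_errors(data, check_bits)
    if n == 0:
        return data
    if n == 1:
        return correct_single_bit(data, check_bits)
    # More than one error: hand the data to the second error-correction logic.
    return correct_multi_bit(data, check_bits)
```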
Abstract:
A cache memory system is provided that uses multi-bit Error Correcting Code (ECC) with a low storage and complexity overhead. The cache memory system can be operated at very low idle power, without dramatically increasing transition latency to and from an idle power state due to loss of state.
Abstract:
A processor may comprise a cache, which may be divided into a first section and a second section while the processor operates in a low-power mode. A cache line of the first section may be fragmented into segments. A first encoder may generate first data bits and check bits while encoding a first portion of a data stream, and a second encoder may, separately, generate second data bits and check bits while encoding a second portion of the data stream. The first data bits may be stored in a first segment of the first section, and the check bits in a first portion of the second section that is associated with the first segment. A first decoder may correct errors in multiple bit positions within the first data bits using the check bits stored in the first portion of the second section, and a second decoder may, separately, decode the second data bits using the second check bits.
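A minimal Python sketch of this split layout is given below, under the assumption that the stream is encoded portion by portion, with data bits placed in segments of the first section and the matching check bits in the associated slots of the second section. The encode() and decode() helpers (a toy checksum) stand in for the real multi-bit ECC encoders and decoders.

```python
# Minimal sketch of the split-cache layout; encode/decode are toy stand-ins.
from typing import List, Tuple


def encode(portion: bytes) -> Tuple[bytes, bytes]:
    """Hypothetical ECC encoder: returns (data_bits, check_bits)."""
    check = bytes([sum(portion) % 256])   # toy checksum, not real ECC
    return portion, check


def decode(data_bits: bytes, check_bits: bytes) -> bytes:
    """Hypothetical ECC decoder: would correct multi-bit errors here."""
    return data_bits


def store_stream(stream: bytes, portion_size: int = 8):
    first_section: List[bytes] = []    # segments of the fragmented cache line
    second_section: List[bytes] = []   # check bits, one slot per segment
    for i in range(0, len(stream), portion_size):
        data_bits, check_bits = encode(stream[i:i + portion_size])
        first_section.append(data_bits)     # segment of the first section
        second_section.append(check_bits)   # associated slot of the second section
    return first_section, second_section


def load_stream(first_section, second_section) -> bytes:
    # Each segment is decoded separately with its own check bits.
    return b"".join(decode(d, c) for d, c in zip(first_section, second_section))
```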
Abstract:
A massaging device is provided. The massaging device includes a shell having a first end and a second end, the first end being a massaging head, and a size and a shape of the shell being configured to massage a body part of a user. The massaging device also includes an opening disposed close to the second end. The massaging device further includes a housing to house a vibrating motor, the housing configured to be inserted into the shell through the opening. The massaging device also includes a rolling member disposed at the opening, and the rolling member is configured to control the vibration of the vibrating motor when operated by a user.
Abstract:
An apparatus for filtering species in a fluid includes a body having a first side and a second side, a first set of nano-fingers positioned on the body near the first side, a second set of nano-fingers positioned on the body closer to the second side than the first set of nano-fingers, wherein the nano-fingers in the second set are arranged on the body more densely than the nano-fingers in the first set, and a cover positioned over the first set of nano-fingers and the second set of nano-fingers to form a channel with the body within which the first and second sets of nano-fingers are positioned.
Abstract:
The present invention provides regulatory polynucleotide molecules isolated from plant proline rich protein genes and linked to a viral enhancer molecule. The invention further discloses compositions, polynucleotide constructs, transformed host cells, transgenic plants and seeds containing the regulatory polynucleotide sequences, and methods for preparing and using the same.
Abstract:
A method, system and apparatus are provided for performing peer-to-peer (P2P) data sharing operations between user equipment (UE) devices in a wireless-enabled communications environment. A first client node comprises content data and operates in a server peer mode to provide content data. A second client node submits a request to a P2P application server (P2P AS) for the content data. In response, the P2P AS provides the address of the first client node to the second client node. The second client node then uses the provided address to submit a request to the first client node to provide the content data. The first client node accepts the request and then provides the content data to the second client node.
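The Python sketch below illustrates this signalling flow at a high level: the first client node registers its content with the P2P application server, the second client node looks up the first node's address, and then requests the content directly from that peer. The class and method names (P2PApplicationServer, ClientNode, register, lookup, fetch) are illustrative assumptions, not the actual protocol messages.

```python
# Minimal sketch of the P2P lookup-then-fetch flow; names are hypothetical.

class P2PApplicationServer:
    def __init__(self):
        self._directory = {}               # content_id -> address of server peer

    def register(self, content_id: str, address: str) -> None:
        self._directory[content_id] = address

    def lookup(self, content_id: str) -> str:
        return self._directory[content_id]


class ClientNode:
    def __init__(self, address: str):
        self.address = address
        self._content = {}                 # content_id -> data
        self.server_peer_mode = False

    def serve(self, content_id: str, data: bytes, p2p_as: P2PApplicationServer):
        """Operate in server peer mode and advertise the content to the P2P AS."""
        self._content[content_id] = data
        self.server_peer_mode = True
        p2p_as.register(content_id, self.address)

    def handle_request(self, content_id: str) -> bytes:
        """Accept a peer's request and provide the content data."""
        return self._content[content_id]

    def fetch(self, content_id: str, p2p_as: P2PApplicationServer,
              peers: dict) -> bytes:
        """Ask the P2P AS for the address, then request the content from that peer."""
        address = p2p_as.lookup(content_id)
        return peers[address].handle_request(content_id)


if __name__ == "__main__":
    p2p_as = P2PApplicationServer()
    node_a = ClientNode("10.0.0.1")        # first client node (server peer)
    node_b = ClientNode("10.0.0.2")        # second client node (requester)
    node_a.serve("video42", b"...content...", p2p_as)
    peers = {node_a.address: node_a}
    print(node_b.fetch("video42", p2p_as, peers))
```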