Abstract:
A system, apparatus, and method are disclosed for predicting accesses to memory. In one embodiment, an exemplary apparatus comprises a processor configured to execute program instructions and process program data, a memory including the program instructions and the program data, and a memory processor. The memory processor can include a speculator configured to receive an address containing the program instructions or the program data. Such a speculator can comprise a sequential predictor for generating a configurable number of sequential addresses. The speculator can also include a nonsequential predictor configured to associate a subset of addresses to the address and to predict a group of addresses based on at least one address of the subset, wherein at least one address of the subset is unpatternable to the address.
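To make the division of labor concrete, the following minimal C++ sketch models a sequential predictor that emits a configurable number of next-line addresses and a nonsequential predictor that associates otherwise unpatternable target addresses with a trigger address. The class names, the 64-byte line size, and the lookup tables are illustrative assumptions for exposition, not the disclosed design.

// Illustrative sketch only: names and sizes below are assumptions, not the patented design.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

constexpr uint64_t kLineSize = 64;  // assumed cache-line granularity

// Generates a configurable number of sequential line addresses following a trigger address.
struct SequentialPredictor {
    unsigned batch;  // configurable number of sequential predictions
    std::vector<uint64_t> predict(uint64_t addr) const {
        std::vector<uint64_t> out;
        for (unsigned i = 1; i <= batch; ++i)
            out.push_back(addr + i * kLineSize);
        return out;
    }
};

// Associates a subset of (possibly unpatternable) addresses with a trigger address
// and predicts that subset when the trigger address recurs.
struct NonsequentialPredictor {
    std::unordered_map<uint64_t, std::vector<uint64_t>> assoc;
    void observe(uint64_t trigger, uint64_t next) { assoc[trigger].push_back(next); }
    std::vector<uint64_t> predict(uint64_t addr) const {
        auto it = assoc.find(addr);
        return it == assoc.end() ? std::vector<uint64_t>{} : it->second;
    }
};

int main() {
    SequentialPredictor seq{2};
    NonsequentialPredictor nonseq;
    nonseq.observe(0x1000, 0x8F40);  // a target with no arithmetic pattern relative to 0x1000
    for (uint64_t a : seq.predict(0x1000)) std::cout << std::hex << a << '\n';
    for (uint64_t a : nonseq.predict(0x1000)) std::cout << std::hex << a << '\n';
}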
Abstract:
A system, apparatus, and method are disclosed for storing predictions as well as examining and using one or more caches for anticipating accesses to a memory. In one embodiment, an exemplary apparatus is a prefetcher for managing predictive accesses to a memory. The prefetcher can include a speculator to generate a range of predictions, as well as multiple caches. For example, the prefetcher can include a first cache and a second cache to store predictions. An entry of the first cache is addressable by a first representation of an address from the range of predictions, whereas an entry of the second cache is addressable by a second representation of the address. The first and the second representations are compared in parallel against the stored predictions of either the first cache or the second cache, or both.
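The parallel lookup can be pictured with the following C++ sketch, in which the first cache is keyed by the full address and the second by a compact second representation (here, simply the address truncated to its line index). Both representations and the container choices are assumptions made for illustration only.

// Illustrative sketch: the two "representations" and container choices are assumptions.
#include <cstdint>
#include <iostream>
#include <unordered_set>

struct PredictionCaches {
    std::unordered_set<uint64_t> first;   // entries addressable by the full address
    std::unordered_set<uint32_t> second;  // entries addressable by a compact second representation

    static uint32_t secondRepr(uint64_t addr) { return static_cast<uint32_t>(addr >> 6); }

    void store(uint64_t addr) {
        first.insert(addr);
        second.insert(secondRepr(addr));
    }

    // Hardware would probe both structures in the same cycle; here the two
    // lookups simply happen back to back and the results are combined.
    bool probe(uint64_t addr) const {
        bool hitFirst  = first.count(addr) != 0;
        bool hitSecond = second.count(secondRepr(addr)) != 0;
        return hitFirst || hitSecond;
    }
};

int main() {
    PredictionCaches pc;
    pc.store(0x2040);
    std::cout << pc.probe(0x2040) << ' ' << pc.probe(0x9000) << '\n';  // prints: 1 0
}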
Abstract:
A system, apparatus, and method are disclosed for storing and prioritizing predictions to anticipate nonsequential accesses to a memory. In one embodiment, an exemplary apparatus is configured as a prefetcher for predicting accesses to a memory. The prefetcher includes a prediction generator configured to generate a prediction that is unpatternable to an address. The prefetcher can also include a target cache coupled to the prediction generator to maintain the prediction in a manner that determines a priority for the prediction. In another embodiment, the prefetcher can also include a priority adjuster. The priority adjuster sets a priority for a prediction relative to other predictions. In some cases, the placement of the prediction is indicative of its priority relative to the priorities of the other predictions. In yet another embodiment, the prediction generator uses the priority to determine that the prediction is to be generated before other predictions.
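One way to picture a target cache whose placement encodes priority is the sketch below, where an entry's position in a list stands in for its priority and a promote() routine plays the role of the priority adjuster. The reordering policy shown is an assumption chosen for clarity, not the claimed mechanism.

// Illustrative sketch: list position stands in for "placement indicative of priority".
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <list>

struct TargetCache {
    std::list<uint64_t> entries;  // front = highest priority

    void insert(uint64_t prediction) { entries.push_back(prediction); }  // starts at lowest priority

    // Priority adjuster: raise one prediction's priority relative to the others.
    void promote(uint64_t prediction) {
        auto it = std::find(entries.begin(), entries.end(), prediction);
        if (it != entries.end()) entries.splice(entries.begin(), entries, it);
    }

    // The prediction generator issues higher-priority predictions first.
    std::list<uint64_t> issueOrder() const { return entries; }
};

int main() {
    TargetCache tc;
    tc.insert(0xA000);
    tc.insert(0xB000);
    tc.promote(0xB000);  // 0xB000 is now generated before 0xA000
    for (uint64_t p : tc.issueOrder()) std::cout << std::hex << p << '\n';
}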
Abstract:
A system, apparatus, and method are disclosed for managing predictive accesses to memory. In one embodiment, an exemplary apparatus is configured as a prediction inventory that stores predictions in a number of queues. Each queue is configured to maintain predictions until a subset of the predictions is either issued to access a memory or filtered out as redundant. In another embodiment, an exemplary prefetcher predicts accesses to a memory. The prefetcher comprises a speculator for generating a number of predictions and a prediction inventory, which includes queues each configured to maintain a group of items. The group of items typically includes a triggering address that corresponds to the group. Each item of the group is of one type of prediction. Also, the prefetcher includes an inventory filter configured to compare the number of predictions against one of the queues having either the same or a different prediction type as the number of predictions.
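The queue-per-type inventory and its redundancy filter can be sketched as follows; the PredictionType values, the per-queue layout, and the equality-based filter are assumptions made for exposition rather than the disclosed structure.

// Illustrative sketch: type names, queue layout, and the redundancy test are assumptions.
#include <algorithm>
#include <cstdint>
#include <deque>
#include <iostream>
#include <map>

enum class PredictionType { Sequential, Nonsequential };

struct Queue {
    uint64_t trigger = 0;        // triggering address corresponding to this group
    std::deque<uint64_t> items;  // predictions of a single type
};

struct PredictionInventory {
    std::map<PredictionType, Queue> queues;

    // Inventory filter: drop a new prediction if an equal one is already queued.
    bool enqueue(PredictionType type, uint64_t trigger, uint64_t prediction) {
        Queue& q = queues[type];
        q.trigger = trigger;
        if (std::find(q.items.begin(), q.items.end(), prediction) != q.items.end())
            return false;        // redundant, filtered out
        q.items.push_back(prediction);
        return true;             // held until issued to access memory
    }
};

int main() {
    PredictionInventory inv;
    std::cout << inv.enqueue(PredictionType::Sequential, 0x1000, 0x1040) << '\n';  // 1
    std::cout << inv.enqueue(PredictionType::Sequential, 0x1000, 0x1040) << '\n';  // 0, filtered
}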
Abstract:
One embodiment of the invention sets forth a technique for compressing and storing display data, and optionally compressing and storing cursor data, in a memory that is local to a graphics processing unit to reduce the power consumed by a mobile computing device when refreshing the screen. Compressing the display data, and optionally the cursor data, also reduces cost by allowing the local memory to be smaller than would be necessary if the display data were stored locally in uncompressed form. Thus, the invention may improve mobile computing device battery life while keeping additional costs low.
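As a rough illustration of why compression shrinks the required local memory, the sketch below run-length encodes one scanline of display data before it would be written to GPU-local memory. The abstract does not specify a compression scheme, so run-length encoding and the scanline format are stand-in assumptions.

// Illustrative sketch: RLE stands in for whatever compression the display hardware uses.
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// Compress one scanline of 32-bit pixels into (count, pixel) runs before it is
// stored in GPU-local memory, so the local buffer can be smaller than a full frame.
std::vector<std::pair<uint32_t, uint32_t>> compressScanline(const std::vector<uint32_t>& px) {
    std::vector<std::pair<uint32_t, uint32_t>> runs;
    for (uint32_t p : px) {
        if (!runs.empty() && runs.back().second == p) ++runs.back().first;
        else runs.push_back({1, p});
    }
    return runs;
}

int main() {
    std::vector<uint32_t> scanline(1920, 0xFF000000);  // a solid row compresses well
    auto runs = compressScanline(scanline);
    std::cout << "uncompressed bytes: " << scanline.size() * 4
              << ", compressed bytes: " << runs.size() * 8 << '\n';
}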
Abstract:
Display data and video data are stored within a graphics processing unit (GPU) to reduce the power consumed by the computing device during video playback. Storing display data and video data within the GPU reduces power consumption because bus transaction activity is reduced and the need to read data from a larger, common main memory is avoided.