Abstract:
An exemplary method for intelligent compression defines a threshold value for a temperature reading generated by a temperature sensor. Data blocks received into a compression module are compressed according to either a first mode or a second mode, selected by comparing the current temperature reading to the defined threshold value. The first compression mode may be associated with a lossless compression algorithm while the second compression mode is associated with a lossy compression algorithm. Alternatively, both compression modes may be associated with a lossless compression algorithm; in that case, the first compression mode produces the received data blocks at a default high quality setting while the second compression mode produces them at a reduced quality setting.
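The mode-selection logic might look like the following Python sketch. The threshold value, the zlib compression levels, and the use of a reduced-effort lossless level for the second mode are illustrative assumptions, not the disclosed implementation.

    import zlib

    TEMP_THRESHOLD_C = 85.0  # assumed threshold for the temperature reading

    def compress_block(block: bytes, temperature_c: float) -> bytes:
        """Select a compression mode by comparing the current temperature
        reading to the defined threshold value."""
        if temperature_c < TEMP_THRESHOLD_C:
            # First mode: lossless compression at the default high-quality setting.
            return zlib.compress(block, level=9)
        # Second mode: still lossless here, but at a reduced-effort setting;
        # a lossy codec could be substituted for this branch instead.
        return zlib.compress(block, level=1)

    compressed = compress_block(b"example data block" * 64, temperature_c=91.5)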
Abstract:
Systems and methods for pre-fetching address translations in a memory management unit (MMU) are disclosed. The MMU detects a triggering condition related to one or more translation caches associated with the MMU, the triggering condition being associated with a trigger address. The MMU then generates a sequence descriptor describing a sequence of address translations to pre-fetch into the translation caches, the sequence comprising a plurality of address translations corresponding to a plurality of address ranges adjacent to the address range containing the trigger address. Finally, the MMU issues an address translation request to the translation caches for each of the plurality of address translations, and the translation caches pre-fetch at least one of those address translations when it is not already present.
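A rough Python model of this flow is sketched below. The 4 KB translation granule, the dict-backed translation table and cache, and the descriptor covering only the ranges following the trigger address (preceding ranges could equally be included) are all assumptions for illustration.

    PAGE_SIZE = 4096  # assumed translation granule

    class TranslationCache:
        def __init__(self, table):
            self.table = table   # backing translation table (address -> translation)
            self.entries = {}    # cached translations

        def request(self, base):
            # Pre-fetch the translation only if it is not already present.
            if base not in self.entries:
                self.entries[base] = self.table[base]

    def on_trigger(cache, trigger_addr, count=4):
        """On a triggering condition, build a sequence descriptor covering
        address ranges adjacent to the range containing trigger_addr,
        then issue one translation request per range."""
        base = trigger_addr & ~(PAGE_SIZE - 1)
        descriptor = [base + i * PAGE_SIZE for i in range(1, count + 1)]
        for addr in descriptor:
            cache.request(addr)

    table = {a: a | 0x8000_0000 for a in range(0, 16 * PAGE_SIZE, PAGE_SIZE)}
    cache = TranslationCache(table)
    on_trigger(cache, trigger_addr=0x2345)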
Abstract:
An exemplary method for intelligent compression defines a threshold value for a key performance indicator. Based on the key performance indicator value, data blocks generated by a producer component may be scaled down before being compressed by a lossless compression module, reducing power and/or bandwidth consumption. The compressed data blocks are then stored in a memory component along with metadata that signals the scaling factor applied prior to compression. Consumer components later retrieving the compressed data blocks from the memory component may decompress the data blocks and, if required, upscale them based on the scaling factor signaled by the metadata.
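A minimal producer/consumer sketch in Python follows. The KPI threshold, the 2:1 byte-stride downscale, zlib as the lossless module, and the nearest-neighbour upscale are stand-in assumptions; a real system would operate on image or buffer dimensions.

    import zlib

    def produce(block: bytes, kpi: float, kpi_threshold: float = 0.8):
        """Producer side: scale down when the KPI crosses the threshold,
        compress losslessly, and record the scaling factor as metadata."""
        factor = 2 if kpi > kpi_threshold else 1   # assumed 2:1 downscale
        scaled = block[::factor]                   # stand-in for real downscaling
        return zlib.compress(scaled), {"scale_factor": factor}

    def consume(payload: bytes, metadata: dict) -> bytes:
        """Consumer side: decompress, then upscale if the metadata says
        the block was scaled down before compression."""
        data = zlib.decompress(payload)
        factor = metadata["scale_factor"]
        if factor > 1:
            # Naive nearest-neighbour upscale; a real consumer would interpolate.
            data = bytes(b for b in data for _ in range(factor))
        return data

    payload, meta = produce(bytes(range(256)) * 4, kpi=0.9)
    restored = consume(payload, meta)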
Abstract:
Systems and methods are provided for managing performance of a computing device having dissimilar memory types. An exemplary embodiment comprises a method for interleaving dissimilar memory devices. The method involves determining an interleave bandwidth ratio comprising a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio. Memory address requests are distributed from one or more processing units to the dissimilar memory devices according to the interleave bandwidth ratio.
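The ratio arithmetic can be illustrated with a short Python sketch. The two device bandwidths, the 0.1 GB/s resolution, and block-granular routing are assumed values for the example.

    from functools import reduce
    from math import gcd

    def build_interleave_pattern(bandwidths):
        """Reduce device bandwidths (GB/s) to the smallest integer
        interleave bandwidth ratio, e.g. [12.8, 6.4] -> [0, 0, 1]."""
        units = [round(b * 10) for b in bandwidths]   # 0.1 GB/s resolution
        g = reduce(gcd, units)
        pattern = []
        for device, u in enumerate(units):
            pattern.extend([device] * (u // g))
        return pattern

    def route(block_index, pattern):
        """Distribute memory address requests across the dissimilar
        devices in proportion to the interleave bandwidth ratio."""
        return pattern[block_index % len(pattern)]

    pattern = build_interleave_pattern([12.8, 6.4])   # assumed device bandwidths
    for block in range(6):
        print(block, "->", route(block, pattern))     # 0, 0, 1, 0, 0, 1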
Abstract:
Certain aspects of the present disclosure provide techniques for concurrently performing inferences using a machine learning model and optimizing parameters used in executing the machine learning model. An example method generally includes receiving a request to perform inferences on a data set using the machine learning model, together with performance metric targets for those inferences. At least a first inference is performed on the data set using the machine learning model so as to meet a latency specified for generating the first inference from receipt of the request. While the first inference is being performed, operational parameters whose inference performance approaches the performance metric targets are identified based on the machine learning model and operational properties of the computing device performing the inferences. The identified operational parameters are then applied to subsequent inferences using the machine learning model.
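One way to picture serving the first inference immediately while a background search tunes operational parameters is the Python sketch below. The model stub, the batch-size parameter grid, the latency target, and the threading arrangement are all assumptions chosen for brevity.

    import threading, time

    def infer(x, params):
        time.sleep(params["batch"] * 0.001)   # stand-in for running the model
        return x * 2

    def measure_latency_ms(params):
        start = time.perf_counter()
        infer(0, params)
        return (time.perf_counter() - start) * 1000

    def tune(candidates, target_ms, chosen):
        """Background search: benchmark candidate operational parameters
        and keep whichever comes closest to the latency target."""
        best = min(candidates, key=lambda p: abs(measure_latency_ms(p) - target_ms))
        chosen.update(best)

    chosen = {"batch": 1}                     # default operational parameters
    worker = threading.Thread(
        target=tune, args=([{"batch": b} for b in (1, 2, 4, 8)], 4.0, chosen))
    worker.start()
    first = infer(1.0, chosen)                # first inference, served at once
    worker.join()
    later = infer(1.0, chosen)                # subsequent inferences use tuned params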
Abstract:
Certain aspects of the present disclosure provide techniques for adaptively executing machine learning models on a computing device. An example method generally includes receiving weight information for a machine learning model to be executed on a computing device. The received weight information is reduced into quantized weight information having a reduced bit size relative to the received weight information. First inferences are performed using the machine learning model and the received weight information, and second inferences are performed using the machine learning model and the quantized weight information. Results of the first and second inferences are compared; upon determining that the results of the second inferences are within a threshold performance level of the results of the first inferences, one or more subsequent inferences are performed using the machine learning model and the quantized weight information.
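A compact NumPy sketch of this compare-then-switch flow follows. The linear symmetric 8-bit quantization, the matrix-multiply stand-in for the model, and the error threshold are illustrative assumptions.

    import numpy as np

    def quantize(w, bits=8):
        """Linear symmetric quantization to a reduced bit size."""
        scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
        return np.round(w / scale).astype(np.int8), scale

    def infer(x, w):
        return x @ w                                  # stand-in for the full model

    rng = np.random.default_rng(0)
    w = rng.normal(size=(16, 4)).astype(np.float32)   # received weight information
    qw, scale = quantize(w)                           # quantized weight information

    x = rng.normal(size=(8, 16)).astype(np.float32)
    first = infer(x, w)                               # first inferences
    second = infer(x, qw.astype(np.float32) * scale)  # second inferences

    THRESHOLD = 0.1                                   # assumed performance threshold
    use_quantized = bool(np.abs(first - second).max() < THRESHOLD)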
Abstract:
An exemplary method for intelligent compression uses a foveated-compression approach. First, the location of a fixation point within an image frame is determined. Next, the image frame is partitioned into two or more sectors, one of which is designated as a fixation sector while the remaining sectors are designated as foveation sectors. A sector may be defined by one or more tiles within the image frame. The fixation sector includes the particular tile that contains the fixation point and is compressed according to a lossless compression algorithm. The foveation sectors are compressed according to lossy compression algorithms, and as a foveation sector's angular distance from the fixation sector increases, its compression factor may be increased.
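A per-tile quality plan can be sketched in a few lines of Python. The tile grid, the use of Euclidean tile distance as a proxy for angular distance, and the distance-to-quality mapping are assumptions for illustration.

    import math

    def sector_quality(tile, fixation_tile, max_dist):
        """Lossless for the fixation sector; increasingly aggressive lossy
        compression as a foveation sector's distance from fixation grows."""
        d = math.dist(tile, fixation_tile)   # proxy for angular distance
        if d == 0:
            return "lossless"
        # Map distance to a lossy compression factor (higher = more loss).
        factor = min(10, 1 + int(9 * d / max_dist))
        return f"lossy(q={factor})"

    COLS, ROWS = 8, 6                        # assumed tile grid for the frame
    fixation = (3, 2)                        # tile containing the fixation point
    max_dist = math.dist((0, 0), (COLS - 1, ROWS - 1))
    plan = {(c, r): sector_quality((c, r), fixation, max_dist)
            for c in range(COLS) for r in range(ROWS)}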
Abstract:
Systems, methods, and computer programs are disclosed for controlling memory frequency. One method comprises a first memory client generating a compressed data buffer and compression statistics related to the compressed data buffer. The compressed data buffer and the compression statistics are stored in a memory device. Based on the stored compression statistics, a frequency or voltage setting of the memory device is adjusted to enable a second memory client to read the compressed data buffer.
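The adjustment step might reduce to a table lookup like the Python sketch below. The frequency/bandwidth operating points, the statistics fields, and the base bandwidth figure are assumed values, not disclosed ones.

    # Assumed operating points: (bandwidth capacity in GB/s, frequency in MHz).
    FREQ_TABLE = [(2.0, 400), (4.0, 800), (8.0, 1600)]

    def pick_frequency(stats, base_bandwidth_gbps):
        """Scale the bandwidth the reading client would otherwise need by
        the stored compression ratio, then pick the lowest sufficient
        frequency setting."""
        ratio = stats["compressed_bytes"] / stats["uncompressed_bytes"]
        needed = base_bandwidth_gbps * ratio
        for capacity, freq_mhz in FREQ_TABLE:
            if needed <= capacity:
                return freq_mhz
        return FREQ_TABLE[-1][1]

    # The first client stores the buffer plus its compression statistics;
    # frequency is then adjusted before the second client reads the buffer.
    stats = {"uncompressed_bytes": 8 * 2**20, "compressed_bytes": 3 * 2**20}
    frequency = pick_frequency(stats, base_bandwidth_gbps=6.0)   # -> 800 MHz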
Abstract:
Methods and systems for pre-fetching address translations in a memory management unit (MMU) of a device are disclosed. In an embodiment, the MMU receives a pre-fetch command from an upstream component of the device, the pre-fetch command including an address of an instruction, pre-fetches a translation of the instruction from a translation table in a memory of the device, and stores the translation of the instruction in a translation cache associated with the MMU.
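A toy Python model of handling such an explicit pre-fetch command is shown below. The dict-backed translation table, the 4 KB page granule, and the command layout are assumptions chosen to keep the sketch self-contained.

    class MMU:
        def __init__(self, translation_table):
            self.table = translation_table   # translation table in device memory
            self.cache = {}                  # translation cache (e.g. a TLB)

        def handle_prefetch(self, command):
            """Walk the translation table for the instruction address carried
            by the upstream pre-fetch command and store the resulting
            translation in the translation cache."""
            addr = command["address"]
            page = addr & ~0xFFF             # assumed 4 KB page granule
            if page not in self.cache:
                self.cache[page] = self.table[page]

    mmu = MMU({0x1000: 0x8000_1000})
    mmu.handle_prefetch({"address": 0x1ABC})   # command from upstream component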