Abstract:
Aspects disclosed relate to a priority-based access of compressed memory lines in a processor-based system. In an aspect, a memory access device in the processor-based system receives a read access request for memory. If the read access request is higher priority, the memory access device uses the logical memory address of the read access request as the physical memory address to access the compressed memory line. However, if the read access request is lower priority, the memory access device translates the logical memory address of the read access request into one or more physical memory addresses in memory space left by the compression of higher priority lines. In this manner, the efficiency of higher priority compressed memory accesses is improved by removing a level of indirection otherwise required to find and access compressed memory lines.
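The two access paths described above can be sketched as follows. This is an illustrative model only, not the claimed circuit: the class and field names are assumptions, and a plain dictionary stands in for the translation metadata of lower-priority lines.

```python
# Hypothetical sketch of priority-based compressed-line lookup.
class CompressedMemoryAccess:
    def __init__(self):
        # Maps logical addresses of lower-priority lines to physical
        # addresses in the space freed by compressing higher-priority lines.
        self.low_priority_map = {}

    def physical_address(self, logical_addr, high_priority):
        if high_priority:
            # Higher-priority request: the logical address is used directly
            # as the physical address, so no indirection is needed.
            return logical_addr
        # Lower-priority request: translate through metadata into the
        # memory space left by compression of higher-priority lines.
        return self.low_priority_map[logical_addr]
```

The point of the sketch is the asymmetry: only the lower-priority path pays the cost of the metadata lookup.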
Abstract:
Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of the corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on the virtual address modulo 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region.
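The region selection and address abbreviation described above can be modeled as below. This is a minimal sketch under assumed names and an assumed 32-bit address width; the N omitted upper bits are taken to be implied by the region index.

```python
# Illustrative model of 2**N-region selection and abbreviated addressing.

def select_region(virtual_address, n):
    # Region (and free-list set) chosen by virtual address modulo 2**N.
    return virtual_address % (2 ** n)

def abbreviate(full_physical_address, n, address_bits=32):
    # Drop the N upper bits; only the abbreviated form is stored as metadata.
    mask = (1 << (address_bits - n)) - 1
    return full_physical_address & mask

def expand(abbreviated_address, region, n, address_bits=32):
    # Reconstruct the full physical address: region index supplies the
    # N upper bits that the metadata omitted.
    return (region << (address_bits - n)) | abbreviated_address
```

Storing only the abbreviated form is what shrinks the metadata: each entry saves N bits because the region lookup restores them for free.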
Abstract:
Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
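The threshold-driven maintenance described above can be sketched as follows. All names, the buffer size, and the use of Python lists for the cached buffers and the system-memory free list are assumptions for illustration; the key idea is that whole buffers move at once, amortizing bandwidth.

```python
# Hedged sketch of free-memory-list cache maintenance with low/high thresholds.
BUFFER_SIZE = 4

class FreeMemoryListCache:
    def __init__(self, low, high, system_memory_free_list):
        self.low, self.high = low, high
        self.buffers = []                  # each buffer holds cached pointers
        self.backing = system_memory_free_list

    def count(self):
        return sum(len(b) for b in self.buffers)

    def maintain(self):
        if self.count() < self.low and len(self.backing) >= BUFFER_SIZE:
            # Below the low threshold: refill one empty buffer from
            # system memory in a single bulk transfer.
            self.buffers.append([self.backing.pop() for _ in range(BUFFER_SIZE)])
        elif self.count() > self.high:
            # Above the high threshold: empty one full buffer back to
            # system memory, again as a single bulk transfer.
            full = next(b for b in self.buffers if len(b) == BUFFER_SIZE)
            self.buffers.remove(full)
            self.backing.extend(full)
```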
Abstract:
Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.
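The eviction write path described above can be sketched as below. The function and variable names are assumptions, and dictionaries stand in for the metadata store and compressed memory; the point is the fallback branch that avoids buffering the evicted data while a metadata read completes.

```python
# Sketch of a stall-free eviction write-back in a compression memory system.

def write_evicted_line(virtual_addr, data, metadata, free_list, memory):
    phys = metadata.get(virtual_addr)   # metadata read may come back empty
    if phys is None:
        # Metadata not available: store at a new, available physical
        # address immediately instead of buffering the evicted data.
        phys = free_list.pop()
        metadata[virtual_addr] = phys   # record the new mapping
    memory[phys] = data                 # write proceeds without stalling
    return phys
```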
Abstract:
Application of a ZUC cryptographic function in wireless communication includes receiving a data stream at the wireless communication apparatus and applying the ZUC cryptographic function to the data stream. The ZUC cryptographic function includes generating at least one multi-byte pseudo-random number that provides an index to one of a plurality of substitution boxes. Each of the substitution boxes is further based on one or more normative substitution boxes. The ZUC cryptographic function further includes retrieving a value from each of the substitution boxes using each byte of the multi-byte pseudo-random number, assembling the retrieved values into at least one substituted value, and generating at least one key value based on the substituted value, wherein the key value is used in applying the ZUC cryptographic function to the data stream. The method also includes processing the data stream after application of the ZUC cryptographic function.
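The byte-wise substitution step can be illustrated as follows. This is a toy sketch: the identity and complement tables below merely stand in for substitution boxes derived from ZUC's normative S-boxes, and the 32-bit word width is an assumption.

```python
# Illustrative byte-wise S-box substitution on a multi-byte word.

def substitute_word(word, sboxes):
    """Index one substitution box with each byte of a 32-bit word,
    then reassemble the retrieved bytes into the substituted word."""
    out = 0
    for i, sbox in enumerate(sboxes):          # i = 0 is the most significant byte
        byte = (word >> (24 - 8 * i)) & 0xFF   # extract one index byte
        out = (out << 8) | sbox[byte]          # retrieve and reassemble
    return out
```

Replacing four separate 8-bit lookups with table-driven substitution like this is the kind of step the abstract's "retrieving a value from each of the substitution boxes" refers to.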
Abstract:
Enhanced cryptographic techniques are provided which facilitate higher data rates in a wireless communication system. In one aspect, improvements to the ZUC algorithm are disclosed which can reduce the number of logical operations involved in key stream generation, reduce the computational burden on a mobile device implementing ZUC, and extend battery life. The disclosed techniques include, for instance, receiving, at a wireless communication apparatus, a data stream having data packets for ciphering or deciphering. The wireless apparatus can generate a cipher key for the cryptographic function, determine a starting address of a first data packet in the data stream, and shift the cipher key to align with the starting address of the first data packet. Once aligned, the processing apparatus applies the cryptographic function to a first block of the first data packet using the shifted cipher key and manages a remaining portion of the cipher key to handle arbitrarily aligned data across multiple packets.
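The key-shifting step can be sketched as below. This is a simplified byte-level model under assumed names: the keystream is a byte string, ciphering is a plain XOR, and the leftover key bytes are carried forward for the next arbitrarily aligned packet.

```python
# Hedged sketch of aligning a cipher keystream with a packet's start offset.

def cipher_aligned(packet, keystream, start_offset):
    """XOR packet bytes against the keystream shifted to start_offset."""
    shifted = keystream[start_offset:]                 # align key to packet start
    ciphered = bytes(p ^ k for p, k in zip(packet, shifted))
    remaining = shifted[len(packet):]                  # carry to the next packet
    return ciphered, remaining
```

Because XOR is its own inverse, applying the same shifted keystream a second time deciphers the block, which the test below exercises.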
Abstract:
Aspects of the disclosure generally relate to methods and apparatus for wireless communication. In an aspect, a method for dynamically processing data on interleaved multithreaded (MT) systems is provided. The method generally includes monitoring loading on one or more active processor threads, determining whether to remove a task or create an additional task based on the monitored loading of the one or more active processor threads and a number of tasks running on one or more of the one or more active processor threads, and if a determination is made to remove a task or create an additional task, distributing the resulting tasks among one or more available processor threads.
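The monitoring-and-distribution loop described above can be sketched as follows. The threshold values, function names, and round-robin distribution policy are all illustrative assumptions, not the claimed method.

```python
# Sketch of load-based task scaling and distribution across threads.

def rebalance(loads, num_tasks, low=0.3, high=0.8):
    """Adjust the task count based on monitored per-thread loading."""
    avg = sum(loads) / len(loads)
    if avg > high:
        num_tasks += 1                 # create an additional task
    elif avg < low and num_tasks > 1:
        num_tasks -= 1                 # remove a task
    return num_tasks

def distribute(num_tasks, threads):
    """Round-robin the resulting tasks among available processor threads."""
    return [threads[i % len(threads)] for i in range(num_tasks)]
```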