PROCESSING UNIT INCLUDING A DYNAMICALLY ALLOCATABLE VECTOR REGISTER FILE FOR NON-VECTOR INSTRUCTION PROCESSING

    Publication Number: US20250103335A1

    Publication Date: 2025-03-27

    Application Number: US18475320

    Filing Date: 2023-09-27

    Abstract: A processing unit including a dynamically allocatable vector register file for non-vector instruction processing is disclosed. The processing unit includes an integer execution circuit and an integer register file for processing integer instructions, as well as a vector execution circuit and a vector register file for processing vector instructions. Both register files are sized at design time, yet a processing unit may be called upon to execute workloads that vary between integer and vector operations. Rather than statically dedicating the entire vector register file to vector registers, the processor is configured to dynamically allocate one or more portions of the vector register file for use in executing integer instructions.
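The allocation scheme described above can be sketched in Python. This is a minimal illustrative model, not the patent's implementation; the class, its method names, and the register counts are all assumptions chosen for the example:

```python
# Hypothetical sketch: a register file whose vector slots can be dynamically
# lent to integer execution, instead of sitting idle during integer-heavy work.
class RegisterFile:
    def __init__(self, num_int_regs, num_vec_regs):
        self.int_regs = [0] * num_int_regs   # fixed integer registers (design-time size)
        self.vec_regs = [0] * num_vec_regs   # vector register storage (design-time size)
        self.lent_to_int = set()             # vector slots currently reallocated

    def allocate_int_from_vec(self, count):
        """Lend `count` free vector slots for use as extra integer registers."""
        free = [i for i in range(len(self.vec_regs)) if i not in self.lent_to_int]
        if len(free) < count:
            raise RuntimeError("not enough free vector registers")
        granted = free[:count]
        self.lent_to_int.update(granted)
        return granted                       # indices now usable by integer code

    def release(self, indices):
        """Return lent slots to the vector register pool."""
        self.lent_to_int.difference_update(indices)

rf = RegisterFile(num_int_regs=32, num_vec_regs=64)
extra = rf.allocate_int_from_vec(4)   # integer-heavy phase borrows 4 slots
rf.release(extra)                     # vector-heavy phase reclaims them
```

The key point the sketch captures is that the borrowed capacity is tracked per slot, so the vector file can reclaim it as the workload mix shifts.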

    OPTIMIZING CACHE ENERGY CONSUMPTION IN PROCESSOR-BASED DEVICES

    Publication Number: US20240403216A1

    Publication Date: 2024-12-05

    Application Number: US18325419

    Filing Date: 2023-05-30

    Abstract: Optimizing cache energy consumption in processor-based devices is disclosed herein. In some aspects, a processor-based device comprises a way lookup table (WLUT) circuit that is configured to receive an effective address (EA) for a memory access request. The WLUT circuit determines that a tag portion of the EA corresponds to a tag of a WLUT entry among a plurality of WLUT entries. In response, the WLUT circuit transmits a predicted way indicator of the WLUT entry to a cache controller. The cache controller accesses, in a set among a plurality of sets of a cache memory device corresponding to a set portion of the EA, only a predicted tag way among a plurality of tag ways of the cache memory device indicated by the predicted way indicator and only a predicted data way among a plurality of data ways of the cache memory device indicated by the predicted way indicator.
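The energy saving comes from probing a single predicted way rather than all ways of a set. The following is a simplified sketch of that idea; the address split, table training policy, and all names are illustrative assumptions, not details from the patent:

```python
# Illustrative model of a way lookup table (WLUT): on a tag match in the
# table, only the predicted way of the set is probed instead of all ways.
WAYS = 4

def split_ea(ea, set_bits=6, offset_bits=6):
    """Split an effective address into (tag, set index)."""
    set_index = (ea >> offset_bits) & ((1 << set_bits) - 1)
    tag = ea >> (offset_bits + set_bits)
    return tag, set_index

wlut = {}  # tag -> predicted way

def access(ea, cache):
    """Return (hit way or None, number of ways probed)."""
    tag, idx = split_ea(ea)
    if tag in wlut:                       # WLUT hit: probe one way only
        way = wlut[tag]
        if cache[idx][way] == tag:
            return way, 1
    for way in range(WAYS):               # WLUT miss/mispredict: probe all ways
        if cache[idx][way] == tag:
            wlut[tag] = way               # train the table for next time
            return way, WAYS
    return None, WAYS

# cache: list of sets, each a list of WAYS tag slots (None = empty)
cache = [[None] * WAYS for _ in range(64)]
tag, idx = split_ea(0x12345)
cache[idx][0] = tag                       # line resident in way 0
way, probed = access(0x12345, cache)      # first access trains the WLUT
way, probed = access(0x12345, cache)      # repeat access probes a single way
```

In hardware the savings apply to both the tag array and the data array, since only the predicted tag way and predicted data way are enabled.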

    STATIC RANDOM ACCESS MEMORY (SRAM) FAULT CORRECTION

    Publication Number: US20250087295A1

    Publication Date: 2025-03-13

    Application Number: US18466110

    Filing Date: 2023-09-13

    Abstract: This disclosure provides systems, methods, and devices for memory systems that support SRAM fault correction. In a first aspect, a method includes receiving, by a memory controller coupled to a memory module through a first channel and configured to store data in and access data stored in the memory module through the first channel from a host device, data to be stored in a memory of the memory module, determining, by the memory controller, a row in the memory at which the data will be stored, determining, by the memory controller based on the row, an address associated with the row, wherein the address indicates one bit location in the row at which data will not be stored, and storing, by the memory controller, the data at the row in accordance with the address, wherein the data is not stored at the one bit location.
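The core of the method is laying a data word out across a row while leaving one known-faulty bit location unused. A minimal Python sketch of that bit-skipping layout follows; the function names, the 8-bit word, and the spare-bit assumption are illustrative, not from the patent:

```python
# Sketch: write a data word into a row while skipping one faulty bit
# position (the "address associated with the row" in the abstract).
# Assumes the row is one bit wider than the data so the skip fits.
def store_skipping(data, faulty_bit, width=8):
    """Spread `width` data bits across a row, leaving `faulty_bit` unused."""
    row = 0
    pos = 0
    for i in range(width):
        if pos == faulty_bit:
            pos += 1                      # do not store at the faulty cell
        row |= ((data >> i) & 1) << pos
        pos += 1
    return row

def load_skipping(row, faulty_bit, width=8):
    """Reassemble the data word, ignoring the faulty bit position."""
    data = 0
    pos = 0
    for i in range(width):
        if pos == faulty_bit:
            pos += 1
        data |= ((row >> pos) & 1) << i
        pos += 1
    return data

row = store_skipping(0b10110101, faulty_bit=3)
```

Because no data bit lands on the faulty cell, whatever value that cell returns on a read cannot corrupt the recovered word.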

    System and method for performing energy-efficient processing through reduced data movement

    Publication Number: US12153924B2

    Publication Date: 2024-11-26

    Application Number: US18171012

    Filing Date: 2023-02-17

    Abstract: A system for performing energy-efficient computing reduces the amount of data that is transferred between a processor and an external memory device. The processor and the external memory device are equipped with first and second near data processing control units (NCUs), respectively, that coordinate offloading of preselected subprocesses from the processor to a first processing circuit disposed on or near the external memory device. When the processor is performing one of these preselected processes, the first NCU transmits commands and memory addresses to the second NCU. The processing circuit on or near the memory device performs the subprocess or subprocesses and the result is forwarded by the second NCU to the first NCU, which forwards it to the processor to complete the process.
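The coordination between the two NCUs can be modeled conceptually as follows. This is a high-level sketch under stated assumptions: the class names, the "sum" command, and the address map are invented for illustration and do not come from the patent:

```python
# Conceptual model of near-data processing: the processor-side NCU ships only
# a command and memory addresses; the memory-side NCU runs the subprocess
# next to the data and returns only the result, avoiding bulk data movement.
class MemorySideNCU:
    def __init__(self, memory):
        self.memory = memory              # operand data stays in external memory

    def execute(self, command, addresses):
        values = [self.memory[a] for a in addresses]
        if command == "sum":              # example of a preselected subprocess
            return sum(values)
        raise ValueError(f"unsupported command: {command}")

class ProcessorSideNCU:
    def __init__(self, remote):
        self.remote = remote              # link to the memory-side NCU

    def offload(self, command, addresses):
        # Only the command, addresses, and final result cross the interface,
        # not the operands themselves.
        return self.remote.execute(command, addresses)

memory = {0x10: 3, 0x14: 5, 0x18: 7}
ncu = ProcessorSideNCU(MemorySideNCU(memory))
result = ncu.offload("sum", [0x10, 0x14, 0x18])
```

The energy saving in the sketch is visible in what crosses the processor-memory interface: three addresses and one result, instead of three operand transfers plus the computation on the processor.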
