DYNAMIC ADJUSTMENT OF MEMORY OPERATING FREQUENCY TO AVOID RF INTERFERENCE WITH WIFI

    Publication Number: US20240334340A1

    Publication Date: 2024-10-03

    Application Number: US18128805

    Application Date: 2023-03-30

    CPC classification number: H04W52/029 H04W52/0274

    Abstract: An apparatus and method for efficiently performing power management to increase reliable wireless signal transfer by mobile computing devices. In various implementations, a computing system includes a network interface and multiple components for processing tasks. The network interface sends, to at least a given component of the multiple components, an indication specifying the operating frequency ranges used by one or more radio modules for wireless communication with an access point. The given component determines whether its operating clock frequency overlaps any of the received operating frequency ranges or their associated harmonic frequencies. If so, the given component changes the operating clock frequency to a frequency that does not overlap any of the received operating frequency ranges or their associated harmonic frequencies.
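
    A minimal sketch of the check described in this abstract, assuming simple MHz values, a fixed harmonic count, and a hypothetical candidate-clock list (none of these names or values come from the patent):

```python
# Sketch: does a memory clock, or any of its harmonics, fall inside a radio's
# reported operating frequency range? If so, pick an alternative clock.

def overlaps(freq_mhz, radio_ranges_mhz, num_harmonics=5):
    """Return True if freq_mhz or any harmonic up to num_harmonics lands in a range."""
    for n in range(1, num_harmonics + 1):
        harmonic = freq_mhz * n
        if any(lo <= harmonic <= hi for lo, hi in radio_ranges_mhz):
            return True
    return False

def select_clock(current_mhz, candidates_mhz, radio_ranges_mhz):
    """Keep the current clock if it is clean; otherwise pick the first clean candidate."""
    if not overlaps(current_mhz, radio_ranges_mhz):
        return current_mhz
    for candidate in candidates_mhz:
        if not overlaps(candidate, radio_ranges_mhz):
            return candidate
    return current_mhz  # no clean candidate found: leave the clock unchanged

# Example: the network interface reports a 2400-2500 MHz Wi-Fi band.
print(select_clock(1200, [1100, 1333, 1600], [(2400, 2500)]))  # 2nd harmonic of 1200 overlaps -> 1100
```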

    WAVE LEVEL MATRIX MULTIPLY INSTRUCTIONS

    Publication Number: US20240329998A1

    Publication Date: 2024-10-03

    Application Number: US18619392

    Application Date: 2024-03-28

    CPC classification number: G06F9/3802 G06F9/3001 G06F9/30098 G06F9/3867

    Abstract: An apparatus and method for efficiently processing multiply and accumulate operations for matrices in applications. In various implementations, a computing system includes a parallel data processing circuit and a memory. The memory stores the instructions (or translated commands) of a parallel data application. The circuitry of the parallel data processing circuit performs a matrix multiplication operation using source operands accessed only once from a vector register file and multiple instantiations of a vector processing circuit capable of performing multiple matrix multiplication operations corresponding to multiple different types of instructions. The multiplier circuit and the adder circuit of the vector processing circuit perform both the fused multiply-add (FMA) operation and the dot product (inner product) operation without independent, dedicated execution pipelines, that is, without one execution pipeline for the FMA operation and a separate execution pipeline for the dot product operation.
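
    A minimal sketch of the shared-path idea, assuming a plain software model in which one multiply/add routine services both the FMA and the dot product (an illustrative assumption, not the claimed circuit):

```python
# Sketch: one multiplier/adder pair serves both FMA and dot product,
# rather than two dedicated execution pipelines.

def fma(a, b, c):
    """Fused multiply-add: a * b + c through the shared multiply/add path."""
    return a * b + c

def dot_product(xs, ys, acc=0):
    """Dot product expressed as repeated passes through the same FMA path."""
    for x, y in zip(xs, ys):
        acc = fma(x, y, acc)
    return acc

def matmul(A, B):
    """Matrix multiply built on the shared path."""
    inner, cols = len(B), len(B[0])
    return [[dot_product(row, [B[k][j] for k in range(inner)]) for j in range(cols)]
            for row in A]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```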

    Global addressing for switch fabric

    Publication Number: US12105952B2

    Publication Date: 2024-10-01

    Application Number: US17957469

    Application Date: 2022-09-30

    CPC classification number: G06F3/0607 G06F3/0629 G06F3/067

    Abstract: Systems, methods, and techniques are provided for a fabric addressable memory. A memory access request is received from a host computing device attached via one edge port of one or more interconnect switches, the memory access request directed to a destination segment of a physical fabric memory block that is allocated in local physical memory of the host computing device. The edge port accesses a stored mapping between segments of the physical fabric memory block and one or more destination port identifiers that are each associated with a respective edge port of the fabric addressable memory. The memory access request is routed by that edge port to a destination edge port based on the stored mapping.
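
    A minimal sketch of the segment-to-port routing, assuming a fixed segment size and hypothetical port identifiers (the names and sizes are illustrative, not taken from the patent):

```python
# Sketch: an edge port maps segments of a physical fabric memory block to
# destination edge-port identifiers and routes incoming requests accordingly.

SEGMENT_SIZE = 1 << 20  # assumed 1 MiB segments, for illustration only

# segment index -> destination edge-port identifier (hypothetical values)
segment_to_port = {0: "port-A", 1: "port-B", 2: "port-A", 3: "port-C"}

def route_request(fabric_address, mapping=segment_to_port):
    """Return the destination edge port that owns the addressed segment."""
    segment = fabric_address // SEGMENT_SIZE
    if segment not in mapping:
        raise ValueError(f"address {fabric_address:#x} maps to an unallocated segment")
    return mapping[segment]

print(route_request(0x180000))  # address falls in segment 1 -> 'port-B'
```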

    LARGE NUMBER INTEGER ADDITION USING VECTOR ACCUMULATION

    Publication Number: US20240319964A1

    Publication Date: 2024-09-26

    Application Number: US18126107

    Application Date: 2023-03-24

    CPC classification number: G06F7/503

    Abstract: A processor includes one or more processor cores configured to perform accumulate top (ACCT) and accumulate bottom (ACCB) instructions. To perform such instructions, at least one processor core of the processor includes an ACCT data path that adds a first portion of a block of data to a first lane of a set of lanes of a top accumulator and adds a carry-out bit to a second lane of the set of lanes of the top accumulator. Further, the at least one processor core includes an ACCB data path that adds a second portion of the block of data to a first lane of a set of lanes of a bottom accumulator and adds a carry-out bit to a second lane of the set of lanes of the bottom accumulator.
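
    A minimal sketch of the lane-accumulation idea in software, assuming 64-bit lanes and a two-lane accumulator per half of the input block (the lane width and layout are assumptions, not taken from the patent):

```python
# Sketch: add a portion of the input block into one accumulator lane and add
# the carry-out bit into the adjacent lane, so carries are deferred instead of
# rippling through a full-width adder.

LANE_BITS = 64
LANE_MASK = (1 << LANE_BITS) - 1

def accumulate(lanes, lane_index, value):
    """Add value into lanes[lane_index]; add the carry-out into the next lane."""
    total = lanes[lane_index] + value
    lanes[lane_index] = total & LANE_MASK
    lanes[lane_index + 1] += total >> LANE_BITS  # carry-out bit(s)

# Accumulate two 128-bit blocks: bottom halves into one accumulator (ACCB-style),
# top halves into another (ACCT-style).
bottom_acc, top_acc = [0, 0], [0, 0]
for bottom, top in [(2**64 - 1, 5), (3, 7)]:
    accumulate(bottom_acc, 0, bottom)
    accumulate(top_acc, 0, top)

print(bottom_acc, top_acc)  # [2, 1] [12, 0]: the bottom addition carried into lane 1
```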

    LATENCY REDUCTION FOR TRANSITIONS BETWEEN ACTIVE STATE AND SLEEP STATE OF AN INTEGRATED CIRCUIT

    Publication Number: US20240319781A1

    Publication Date: 2024-09-26

    Application Number: US18189993

    Application Date: 2023-03-24

    CPC classification number: G06F1/3275 G06F1/3228 G06F1/3287

    Abstract: An apparatus and method for efficient power management of multiple integrated circuits. In various implementations, a computing system includes an integrated circuit with a security processor. The security processor determines that the integrated circuit is transitioning to an active state from a sleep state that does not maintain the configuration information needed to return to the active state without restarting an operating system. In the sleep state, multiple components of the integrated circuit have a power supply reference level turned off, which provides low power consumption for the integrated circuit. The security processor performs the bootup operation using information stored in persistent on-chip memory. By not using information stored in off-chip memory, the security processor reduces the latency of the transition. The persistent on-chip memory utilizes synchronous random-access memory that receives a standby power supply reference level, which is not turned off and therefore continually supplies a voltage.
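
    A minimal sketch of the wake-up decision, assuming hypothetical latency figures and a simple flag for whether the on-chip copy of the configuration is valid (both are illustrative assumptions, not taken from the patent):

```python
# Sketch: on wake, restore boot configuration from persistent on-chip SRAM
# (kept on a standby supply) instead of fetching it from off-chip memory.

ON_CHIP_RESTORE_US = 50      # assumed latency, for illustration only
OFF_CHIP_RESTORE_US = 5000   # assumed latency, for illustration only

def wake_from_sleep(on_chip_config_valid):
    """Return the configuration source and an assumed restore latency (microseconds)."""
    if on_chip_config_valid:
        # Persistent SRAM on the standby supply still holds the boot configuration.
        return "persistent on-chip SRAM", ON_CHIP_RESTORE_US
    # Fall back to reloading the configuration from off-chip memory.
    return "off-chip memory", OFF_CHIP_RESTORE_US

print(wake_from_sleep(True))   # ('persistent on-chip SRAM', 50)
print(wake_from_sleep(False))  # ('off-chip memory', 5000)
```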
