Frequency scaling for per-core accelerator assignments

    Publication No.: US12248783B2

    Publication Date: 2025-03-11

    Application No.: US18369082

    Filing Date: 2023-09-15

    Abstract: Methods for frequency scaling for per-core accelerator assignments and associated apparatus. A processor includes a CPU (central processing unit) having multiple cores that can be selectively configured to support frequency scaling and instruction extensions. Under this approach, some cores can be configured to support a selective set of AVX instructions (such as AVX3/5G-ISA instructions) and/or AMX instructions, while other cores are configured to not support these AVX/AMX instructions. In one aspect, the selective AVX/AMX instructions are implemented in one or more ISA extension units that are separate from the main processor core (or that otherwise comprise a separate block of circuitry in a processor core) and that can be selectively enabled or disabled. This enables cores having the separate unit(s) disabled to consume less power and/or operate at higher frequencies, while supporting the selective AVX/AMX instructions using other cores. These capabilities enhance performance and provide flexibility to handle a variety of applications requiring use of advanced AVX/AMX instructions to support accelerated workloads.
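The per-core trade-off described above can be sketched at a software level as a simple frequency-assignment policy. This is a minimal illustration, not the patented mechanism: the `Core` fields, the MHz values, and the policy itself are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    avx_amx_enabled: bool  # whether the separate ISA extension unit is on

def assign_frequencies(cores, base_mhz=2400, turbo_mhz=3200):
    # Cores with the AVX/AMX extension unit disabled consume less power,
    # so this toy policy grants them the higher operating frequency.
    return {c.core_id: (base_mhz if c.avx_amx_enabled else turbo_mhz)
            for c in cores}
```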

    Side-channel exploit detection
    Invention Grant

    Publication No.: US12248570B2

    Publication Date: 2025-03-11

    Application No.: US17739930

    Filing Date: 2022-05-09

    Abstract: The present disclosure is directed to systems and methods for detecting side-channel exploit attacks such as Spectre and Meltdown. Performance monitoring circuitry includes first counter circuitry to monitor CPU cache misses and second counter circuitry to monitor DTLB load misses. Upon detecting an excessive number of cache misses and/or load misses, the performance monitoring circuitry transfers the first and second counter circuitry data to control circuitry. The control circuitry determines a CPU cache miss to DTLB load miss ratio for each of a plurality of temporal intervals. The control circuitry then identifies, determines, and/or detects a pattern or trend in the CPU cache miss to DTLB load miss ratio. Upon detecting a deviation from the identified CPU cache miss to DTLB load miss ratio pattern or trend indicative of a potential side-channel exploit attack, the control circuitry generates an output to alert a system user or system administrator.
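The interval-ratio tracking can be illustrated with a small software model. The window size, the mean-relative threshold, and the alerting rule below are assumptions chosen for illustration, not the patented detection logic, which tracks a pattern or trend in the ratio.

```python
from collections import deque

def make_detector(window=8, threshold=2.0):
    # Track the CPU-cache-miss / DTLB-load-miss ratio per temporal interval
    # and flag intervals that deviate sharply from the recent pattern.
    history = deque(maxlen=window)

    def observe(cache_misses, dtlb_load_misses):
        ratio = cache_misses / max(dtlb_load_misses, 1)
        alert = False
        if len(history) == window:
            mean = sum(history) / window
            # A mean-relative deviation test stands in for a trend model.
            alert = abs(ratio - mean) > threshold * mean
        history.append(ratio)
        return alert

    return observe
```

A burst of cache misses without matching DTLB misses (typical of speculative cache probing) shifts the ratio and trips the alert.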

    Authenticator-integrated generative adversarial network (GAN) for secure deepfake generation

    Publication No.: US12248556B2

    Publication Date: 2025-03-11

    Application No.: US17356116

    Filing Date: 2021-06-23

    Abstract: An apparatus to facilitate an authenticator-integrated generative adversarial network (GAN) for secure deepfake generation is disclosed. The apparatus includes one or more processors to: generate, by a generative neural network, samples based on feedback received from a discriminator neural network and from an authenticator neural network, the generative neural network aiming to trick the discriminator neural network to identify the generated samples as real content samples; digest, by the authenticator neural network, the real content samples, the generated samples from the generative neural network, and an authentication code; embed, by the authenticator neural network, the authentication code into the generated samples from the generative neural network by contributing to a generator loss provided to the generative neural network; generate, by the generative neural network, content comprising the embedded authentication code; and verify, by the authenticator neural network, the content based on the embedded authentication code.
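One way to read the "contributing to a generator loss" step is as an extra term in the generator objective. This minimal sketch assumes scalar realness/authentication scores in (0, 1] and an arbitrary weight `lam`; both the functional form and the weight are illustrative assumptions.

```python
import math

def generator_loss(d_score_fake, auth_score_fake, lam=0.5):
    # Standard non-saturating GAN generator term: fool the discriminator
    # into scoring generated samples as real.
    adv = -math.log(max(d_score_fake, 1e-9))
    # Authenticator feedback steers generated samples toward carrying a
    # verifiable embedded authentication code (weight lam is assumed).
    auth = -math.log(max(auth_score_fake, 1e-9))
    return adv + lam * auth
```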

    Debug trace microsectors
    Invention Grant

    Publication No.: US12248021B2

    Publication Date: 2025-03-11

    Application No.: US17132683

    Filing Date: 2020-12-23

    Abstract: Systems and methods described herein may relate to data transactions involving a microsector architecture. Control circuitry may organize transactions to and from the microsector architecture to, for example, enable direct addressing transactions as well as batch transactions across multiple microsectors. A data path disposed between programmable logic circuitry of a column of microsectors and a column of row controllers may form a micro-network-on-chip used by a network-on-chip to interface with the programmable logic circuitry.

    MAGNETIC INDUCTORS FOR SEMICONDUCTOR PACKAGING

    Publication No.: US20250079300A1

    Publication Date: 2025-03-06

    Application No.: US18240318

    Filing Date: 2023-08-30

    Abstract: Magnetic inductors for microelectronics packages are provided. Magnetic inductive structures include a magnetic region, a magnetic region base region, and a conductive region that forms a channel within the magnetic region. The magnetic region has a different chemical composition than the base region. Additional structures are provided in which the magnetic region is recessed into a package substrate core. Further inductor structures are provided in which the conductive region includes through-core vias and the conductive region at least partially encircles a portion of a package substrate core. Additionally, methods of manufacture are provided for semiconductor packages that include magnetic inductors.

    VARIABLE PRECISION IN VECTORIZATION

    Publication No.: US20250077527A1

    Publication Date: 2025-03-06

    Application No.: US18930440

    Filing Date: 2024-10-29

    Inventor: Robert Vaughn

    Abstract: Systems, apparatuses and methods may provide for technology that identifies a first keyword and a second keyword in a plurality of keywords, determines that a first relevance associated with the first keyword is greater than a second relevance associated with the second keyword, vectorizes the first keyword to a first level of precision, vectorizes the second keyword to a second level of precision, wherein the first level of precision is greater than the second level of precision, and stores the vectorized first keyword and the vectorized second keyword to a retrieval-augmented generation (RAG) vector database.
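The relevance-dependent precision can be sketched as uniform quantization of a keyword embedding at two bit widths. The `embed` callable, the bit widths, and the median-relevance cutoff are illustrative assumptions; the abstract specifies only that the more relevant keyword is vectorized at the greater precision.

```python
import numpy as np

def vectorize_keywords(keywords, relevance, embed, high_bits=16, low_bits=8):
    # More relevant keywords are stored at higher precision; less relevant
    # ones are quantized more aggressively to save space in the RAG store.
    cutoff = np.median([relevance[k] for k in keywords])
    db = {}
    for kw in keywords:
        vec = embed(kw)  # float embedding of the keyword (assumed given)
        bits = high_bits if relevance[kw] >= cutoff else low_bits
        lo, hi = float(vec.min()), float(vec.max())
        scale = max(hi - lo, 1e-9)
        codes = np.round((vec - lo) / scale * (2 ** bits - 1))
        db[kw] = {"bits": bits, "codes": codes, "range": (lo, hi)}
    return db
```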

    INSTRUCTION PREFETCH BASED ON THREAD DISPATCH COMMANDS

    Publication No.: US20250077232A1

    Publication Date: 2025-03-06

    Application No.: US18882364

    Filing Date: 2024-09-11

    Abstract: A graphics processing device is provided that includes a set of compute units to execute a workload, a cache coupled with the set of compute units, and circuitry coupled with the cache and the set of compute units. The circuitry is configured to, in response to a cache miss for a read from a first cache, broadcast an event within the graphics processing device to identify data associated with the cache miss, receive the event at a second compute unit in the set of compute units, and prefetch the data identified by the event into a second cache that is local to the second compute unit before an attempt to read the instruction or data by a second thread.
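The broadcast-driven prefetch flow can be modeled in a few lines of software. This is a toy illustration only: real hardware operates on cache lines and on-die events, and the dict-based "caches" and `backing` store here are stand-ins.

```python
class ComputeUnit:
    def __init__(self, backing, peers=()):
        self.backing = backing        # stand-in for the memory hierarchy
        self.peers = list(peers)      # other compute units in the set
        self.local_cache = {}

    def read(self, addr):
        if addr not in self.local_cache:      # local cache miss
            self.local_cache[addr] = self.backing[addr]
            for peer in self.peers:           # broadcast the miss event
                peer.on_miss_event(addr)
        return self.local_cache[addr]

    def on_miss_event(self, addr):
        # Prefetch the identified data into this unit's local cache
        # before its own thread attempts the read.
        self.local_cache.setdefault(addr, self.backing[addr])
```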

    Reinforcement learning (RL) and graph neural network (GNN)-based resource management for wireless access networks

    Publication No.: US12245052B2

    Publication Date: 2025-03-04

    Application No.: US17483208

    Filing Date: 2021-09-23

    Abstract: A computing node to implement an RL management entity in an NG wireless network includes a NIC and processing circuitry coupled to the NIC. The processing circuitry is configured to generate a plurality of network measurements for a corresponding plurality of network functions. The functions are configured as a plurality of ML models forming a multi-level hierarchy. Control signaling from an ML model of the plurality is decoded, the ML model being at a predetermined level (e.g., a lowest level) in the hierarchy. The control signaling is responsive to a corresponding network measurement and at least second control signaling from a second ML model at a level that is higher than the predetermined level. A plurality of reward functions is generated for training the ML models, based on the control signaling from the ML model at the predetermined level in the multi-level hierarchy.
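The top-down signaling between levels can be sketched as a single pass through the hierarchy, highest level first. The model interface assumed here, a callable taking (measurement, upper-level signal) and returning a control signal, is an illustrative simplification of the abstract's description.

```python
def run_hierarchy(models, measurements):
    # models/measurements are ordered from the highest level down; each
    # level's control signal combines its own network measurement with
    # the control signaling received from the level above it.
    upper_signal = None
    controls = []
    for model, measurement in zip(models, measurements):
        upper_signal = model(measurement, upper_signal)
        controls.append(upper_signal)
    return controls
```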
