ARTIFICIAL INTELLIGENCE VIA HARDWARE-ASSISTED TOURNAMENT

    Publication No.: US20220198261A1

    Publication Date: 2022-06-23

    Application No.: US17131546

    Filing Date: 2020-12-22

    Abstract: A system and method for selecting solvers to solve at least one task are disclosed. The system and method include a controller, solvers capable of solving the at least one task, and at least one memory. The controller admits a subset of the solvers into a competition for solving the at least one task; provides, via the at least one memory, an input of the task to the admitted solvers; provides, via the at least one memory, intermediate results of execution by the admitted solvers that received the input; receives from the admitted solvers a prediction of the next intermediate result, made from at least one of the provided input and the received intermediate results; and ranks the admitted solvers for solving the task based on at least one of the predicted next intermediate results, the provided input, and the received intermediate results.
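
    The abstract describes a prediction-scored tournament. The following is a minimal Python sketch of that loop, assuming a numeric task, a round-robin choice of which solver produces each intermediate result, and absolute prediction error as the scoring rule; these specifics are illustrative assumptions, not details from the patent.

        class Solver:
            """A competitor that can both produce and predict intermediate results."""
            def __init__(self, name, step_fn, predict_fn):
                self.name = name
                self.step_fn = step_fn        # produces the next intermediate result
                self.predict_fn = predict_fn  # predicts the next intermediate result

        def run_tournament(task_input, solvers, rounds=3):
            shared_memory = [task_input]      # task input plus intermediate results
            scores = {s.name: 0.0 for s in solvers}
            for r in range(rounds):
                # Each admitted solver predicts the next intermediate result from
                # what is currently visible in the shared memory.
                predictions = {s.name: s.predict_fn(shared_memory) for s in solvers}
                # One solver (round-robin here) actually produces that result.
                actual = solvers[r % len(solvers)].step_fn(shared_memory)
                shared_memory.append(actual)
                # Penalize each solver by its prediction error.
                for name, pred in predictions.items():
                    scores[name] -= abs(pred - actual)
            # Rank: smallest accumulated error first.
            return sorted(scores, key=scores.get, reverse=True)

        # Example: three solvers that step x -> x + 1, with differing prediction bias.
        make = lambda bias: Solver(f"bias{bias}",
                                   step_fn=lambda mem: mem[-1] + 1,
                                   predict_fn=lambda mem: mem[-1] + 1 + bias)
        print(run_tournament(0, [make(0), make(1), make(3)]))  # ['bias0', 'bias1', 'bias3']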

    LOOK-AHEAD TELEPORTATION FOR RELIABLE COMPUTATION IN MULTI-SIMD QUANTUM PROCESSOR

    Publication No.: US20210255871A1

    Publication Date: 2021-08-19

    Application No.: US16794124

    Filing Date: 2020-02-18

    Abstract: A technique for processing qubits in a quantum computing device is provided. The technique includes determining that, in a first cycle, a first quantum processing region is to perform a first quantum operation that does not use a qubit that is stored in the first quantum processing region, identifying a second quantum processing region that is to perform a second quantum operation at a second cycle that is later than the first cycle, wherein the second quantum operation uses the qubit, determining that between the first cycle and the second cycle, no quantum operations are performed in the second quantum processing region, and moving the qubit from the first quantum processing region to the second quantum processing region.
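
    The following Python sketch models that look-ahead move. The schedule encoding (one tuple per scheduled operation) and the region model are assumptions for illustration, and the physical qubit teleportation is reduced to a dictionary update.

        def plan_early_moves(schedule, qubit_location):
            """schedule: list of (cycle, region, qubits_used), sorted by cycle.
            qubit_location: dict qubit -> region currently storing it.
            Returns a list of (cycle, qubit, src_region, dst_region) moves."""
            moves = []
            for i, (cycle, region, used) in enumerate(schedule):
                # Qubits parked in this region but unused by this cycle's operation.
                parked = [q for q, r in qubit_location.items()
                          if r == region and q not in used]
                for q in parked:
                    # Look ahead for the next operation elsewhere that needs q.
                    for later_cycle, dest, later_used in schedule[i + 1:]:
                        if q in later_used and dest != region:
                            # Move early only if the destination region performs
                            # no operation between the two cycles.
                            dest_busy = any(r2 == dest and cycle < c2 < later_cycle
                                            for c2, r2, _ in schedule)
                            if not dest_busy:
                                moves.append((cycle, q, region, dest))
                                qubit_location[q] = dest
                            break
            return moves

        # q1 sits idle in region A at cycle 0 while A operates on q0; region B
        # needs q1 at cycle 3 and is idle until then, so q1 can move at cycle 0.
        sched = [(0, "A", {"q0"}), (3, "B", {"q1"})]
        print(plan_early_moves(sched, {"q0": "A", "q1": "A"}))  # [(0, 'q1', 'A', 'B')]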

    METHOD AND SYSTEM FOR OPPORTUNISTIC LOAD BALANCING IN NEURAL NETWORKS USING METADATA

    Publication No.: US20190391850A1

    Publication Date: 2019-12-26

    Application No.: US16019374

    Filing Date: 2018-06-26

    Abstract: Methods and systems for opportunistic load balancing in deep neural networks (DNNs) using metadata. Representative computational costs are captured, obtained, or determined for a given architectural, functional, or computational aspect of a DNN system and are stored as metadata for that aspect. In one implementation, the computed computational cost itself serves as the metadata. A scheduler detects whether neurons in subsequent layers are ready to execute, and uses the metadata together with neuron availability to schedule and load-balance work across the available compute resources.
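
    A greedy longest-cost-first pass illustrates how per-layer cost metadata and neuron readiness could drive such a scheduler; the cost numbers and data structures below are assumptions for illustration, not the patent's representation.

        import heapq

        def assign_ready_neurons(ready_neurons, cost_metadata, num_units):
            """ready_neurons: list of (layer, neuron_id) whose inputs are ready.
            cost_metadata: dict layer -> representative computational cost.
            Returns dict unit_id -> list of neurons assigned to that unit."""
            # Min-heap of (accumulated cost, unit) always yields the least-loaded unit.
            units = [(0.0, u) for u in range(num_units)]
            heapq.heapify(units)
            assignment = {u: [] for u in range(num_units)}
            # Most expensive ready work first: longest-processing-time-first keeps
            # the per-unit loads more even under a greedy policy.
            for layer, neuron in sorted(ready_neurons,
                                        key=lambda rn: cost_metadata[rn[0]],
                                        reverse=True):
                load, unit = heapq.heappop(units)
                assignment[unit].append((layer, neuron))
                heapq.heappush(units, (load + cost_metadata[layer], unit))
            return assignment

        costs = {"conv1": 8.0, "fc1": 2.0}   # illustrative per-layer cost metadata
        ready = [("conv1", 0), ("conv1", 1), ("fc1", 0), ("fc1", 1)]
        print(assign_ready_neurons(ready, costs, num_units=2))
        # {0: [('conv1', 0), ('fc1', 0)], 1: [('conv1', 1), ('fc1', 1)]}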

    Setting operating points for circuits in an integrated circuit chip

    Publication No.: US10097091B1

    Publication Date: 2018-10-09

    Application No.: US15793951

    Filing Date: 2017-10-25

    Abstract: The described embodiments include an apparatus that controls voltages for an integrated circuit chip having a set of circuits. The apparatus includes a switching voltage regulator separate from the integrated circuit chip and two or more low dropout (LDO) regulators fabricated on the integrated circuit chip. The switching voltage regulator provides an output voltage that is received as an input voltage by each of the two or more LDO regulators, and each of the two or more LDO regulators provides a local output voltage, each local output voltage received as a local input voltage by a different subset of the circuits in the set of circuits. During operation, a controller sets an operating point for each of the subsets of circuits based on a combined power efficiency for the subsets of the circuits and the LDO regulators, each operating point including a corresponding frequency and voltage.
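
    The trade-off the controller evaluates can be sketched with a textbook model: dynamic circuit power scales roughly with V²f, and an LDO's efficiency is approximately Vout/Vin because its input and output currents are equal. The candidate voltage/frequency pairs and the power model below are assumptions, not the patent's method.

        def pick_operating_point(candidates, v_in, f_required):
            """candidates: (voltage, frequency) pairs supported by a circuit subset.
            v_in: output of the off-chip switching regulator feeding the LDO.
            Returns the pair meeting f_required with the least energy per cycle."""
            best, best_energy = None, float("inf")
            for v, f in candidates:
                if f < f_required or v > v_in:
                    continue                         # too slow, or LDO cannot regulate
                circuit_power = v * v * f            # dynamic power ~ C*V^2*f (C folded in)
                input_power = circuit_power * (v_in / v)  # LDO passes current, drops voltage
                energy_per_cycle = input_power / f   # reduces to v * v_in in this model
                if energy_per_cycle < best_energy:
                    best, best_energy = (v, f), energy_per_cycle
            return best

        points = [(0.7, 500e6), (0.8, 700e6), (0.9, 900e6)]
        print(pick_operating_point(points, v_in=1.0, f_required=500e6))  # (0.7, 500000000.0)

    In this simplified model the energy per cycle reduces to v * v_in, so the lowest feasible voltage always wins; a fuller treatment of the abstract's "combined power efficiency" would also account for the switching regulator's own efficiency curve.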

    MECHANISMS TO IMPROVE DATA LOCALITY FOR DISTRIBUTED GPUS

    Publication No.: US20180115496A1

    Publication Date: 2018-04-26

    Application No.: US15331002

    Filing Date: 2016-10-21

    Abstract: Systems, apparatuses, and methods for implementing mechanisms to improve data locality for distributed processing units are disclosed. A system includes a plurality of distributed processing units (e.g., GPUs) and memory devices. Each processing unit is coupled to one or more local memory devices. The system determines how to partition a workload into a plurality of workgroups so as to maximize data locality and data sharing. The system determines which subset of the plurality of workgroups to dispatch to each processing unit based on maximizing local memory accesses and minimizing remote memory accesses. The system also determines how to partition data buffer(s) based on the data sharing patterns of the workgroups, and maps a separate portion of the data buffer(s) to each processing unit so as to maximize local memory accesses and minimize remote memory accesses.
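
    A greedy placement pass illustrates the dispatch idea: each workgroup lands on the processing unit that already holds the most of the buffer pages it touches, and that unit's local memory takes ownership of those pages. The page-set access model below is an assumption for illustration.

        def dispatch(workgroups, num_gpus):
            """workgroups: dict wg_id -> set of buffer pages it accesses.
            Returns (placement, per-GPU local page sets)."""
            gpu_pages = [set() for _ in range(num_gpus)]  # pages local to each GPU
            gpu_load = [0] * num_gpus                     # workgroups per GPU
            placement = {}
            for wg, pages in workgroups.items():
                # Prefer the GPU that already holds the most of this workgroup's
                # pages (maximizing local accesses); break ties toward the
                # least-loaded GPU to keep dispatch balanced.
                best = max(range(num_gpus),
                           key=lambda g: (len(pages & gpu_pages[g]), -gpu_load[g]))
                placement[wg] = best
                gpu_pages[best] |= pages   # first placement pins the pages locally
                gpu_load[best] += 1
            return placement, gpu_pages

        # Workgroups sharing the same pages end up on the same GPU.
        wgs = {"wg0": {0, 1}, "wg1": {2, 3}, "wg2": {0, 1}, "wg3": {2, 3}}
        placement, local_pages = dispatch(wgs, num_gpus=2)
        print(placement)   # {'wg0': 0, 'wg1': 1, 'wg2': 0, 'wg3': 1}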

    TEMPERATURE-AWARE TASK SCHEDULING AND PROACTIVE POWER MANAGEMENT

    Publication No.: US20170371719A1

    Publication Date: 2017-12-28

    Application No.: US15192784

    Filing Date: 2016-06-24

    CPC classification number: G06F9/4893 G06F1/206 G06F1/329 G06F9/5094 Y02D10/24

    Abstract: Systems, apparatuses, and methods for performing temperature-aware task scheduling and proactive power management. A SoC includes a plurality of processing units and a task queue storing pending tasks. The SoC calculates a thermal metric for each pending task to predict the amount of heat the task will generate. The SoC also determines a thermal gradient for each processing unit to predict the rate at which that unit's temperature will change when executing a task. The SoC also monitors the thermal margin of each processing unit, i.e., how far it is from reaching its thermal limit. The SoC minimizes non-uniform heat generation by scheduling pending tasks from the task queue to the processing units based on the thermal metrics of the pending tasks, the thermal gradient of each processing unit, and the thermal margin available on each processing unit.
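
    The policy can be sketched as a greedy placement that trades the three signals off. The unit model ("gradient", "margin"), the hottest-task-first ordering, and the numeric values below are illustrative assumptions, not the patent's algorithm.

        def schedule_tasks(task_queue, units):
            """task_queue: list of (task_id, thermal_metric), where the metric
            predicts generated heat. units: dict unit_id -> {"gradient": degC per
            unit of heat, "margin": degC of headroom below the thermal limit}.
            Returns a list of (task_id, unit_id) placements."""
            placements = []
            # Hottest tasks first, so they claim the units with the most headroom.
            for task, heat in sorted(task_queue, key=lambda t: t[1], reverse=True):
                # Predicted temperature rise on each unit, via its thermal gradient.
                rise = {u: units[u]["gradient"] * heat for u in units}
                # Keep only units the task would not push past their thermal limit.
                feasible = [u for u in units if rise[u] < units[u]["margin"]]
                if not feasible:
                    continue               # task stays queued for a later pass
                # Choose the unit left with the most margin after the task: this
                # spreads heat instead of concentrating it on one part of the SoC.
                best = max(feasible, key=lambda u: units[u]["margin"] - rise[u])
                units[best]["margin"] -= rise[best]
                placements.append((task, best))
            return placements

        units = {"cpu0": {"gradient": 0.5, "margin": 10.0},
                 "gpu0": {"gradient": 0.2, "margin": 6.0}}
        print(schedule_tasks([("t0", 12.0), ("t1", 8.0)], units))
        # [('t0', 'cpu0'), ('t1', 'gpu0')] -- the second task avoids the now-warm cpu0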
