CONTENT FEEDBACK BASED ON REGION OF VIEW

    Publication Number: US20230014520A1

    Publication Date: 2023-01-19

    Application Number: US17379362

    Filing Date: 2021-07-19

    Inventor: ROTO LE

    Abstract: Content feedback based on region of view, including: determining, for a user of a recipient device receiving content from a presenting device, a region of view of the content associated with the user; generating, based on the region of view, a visual overlay; and displaying, by the presenting device, the visual overlay applied to the content.
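
    The feedback loop above can be sketched as follows. This is a minimal illustration, assuming the recipient reports its region of view as a normalized rectangle and the presenter maps it to pixel coordinates for the overlay; the names `Region` and `make_overlay` are hypothetical, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Region:
    # Normalized coordinates (0.0-1.0) of the recipient's region of view.
    x: float
    y: float
    width: float
    height: float

def make_overlay(region: Region, content_w: int, content_h: int) -> dict:
    """Map the recipient's normalized region onto the presenter's content
    resolution, yielding the pixel rectangle to highlight in the overlay."""
    return {
        "left": round(region.x * content_w),
        "top": round(region.y * content_h),
        "right": round((region.x + region.width) * content_w),
        "bottom": round((region.y + region.height) * content_h),
    }

# The recipient is viewing the lower-right quadrant of 1920x1080 content.
overlay = make_overlay(Region(0.5, 0.5, 0.5, 0.5), 1920, 1080)
print(overlay)  # {'left': 960, 'top': 540, 'right': 1920, 'bottom': 1080}
```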

    Detection of false color in an image

    Publication Number: US11558592B1

    Publication Date: 2023-01-17

    Application Number: US17558459

    Filing Date: 2021-12-21

    Abstract: Devices, methods, and systems for detecting false color in an image. An edge preserving filter is applied to an image sensor output to generate a first demosaiced image. A low pass filter is applied to the image sensor output to generate a second demosaiced image. A hue difference between the first demosaiced image and the second demosaiced image is calculated. A false color region is detected responsive to the hue difference exceeding a threshold hue difference.
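
    The hue-difference test can be sketched as below, assuming the edge-preserving and low-pass demosaic results are already available as small nested-list RGB images; the threshold value and helper names are illustrative, not taken from the patent.

```python
import colorsys

def hue(rgb):
    """Hue in [0, 1) for an 8-bit RGB triple."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]

def hue_diff(h1, h2):
    """Circular hue distance, accounting for wraparound at 1.0."""
    d = abs(h1 - h2)
    return min(d, 1.0 - d)

def false_color_mask(img_edge, img_lowpass, threshold=0.1):
    """Flag pixels whose hue differs too much between the edge-preserving
    and low-pass demosaic results - the false-color detection criterion."""
    return [
        [hue_diff(hue(p1), hue(p2)) > threshold
         for p1, p2 in zip(row1, row2)]
        for row1, row2 in zip(img_edge, img_lowpass)
    ]

# A 1x2 example: the first pixel agrees in hue; the second differs strongly
# (red in one demosaic result, green in the other), so it is flagged.
edge = [[(255, 0, 0), (255, 0, 0)]]
low  = [[(250, 5, 5), (0, 255, 0)]]
print(false_color_mask(edge, low))  # [[False, True]]
```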

    Per-instruction energy debugging using instruction sampling hardware

    Publication Number: US11556162B2

    Publication Date: 2023-01-17

    Application Number: US15923153

    Filing Date: 2018-03-16

    Abstract: A processor utilizes instruction based sampling to generate sampling data sampled on a per instruction basis during execution of an instruction. The sampling data indicates what processor hardware was used due to the execution of the instruction. Software receives the sampling data and generates an estimate of energy used by the instruction based on the sampling data. The sampling data may include microarchitectural events and the energy estimate utilizes a base energy amount corresponding to the instruction executed along with energy amounts corresponding to the microarchitectural events in the sampling data. The sampling data may include switching events associated with hardware blocks that switched due to execution of the instruction and the energy estimate for the instruction is based on the switching events and capacitance estimates associated with the hardware blocks.
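
    The estimation step described above can be sketched as a base-plus-events sum. All energy values (in picojoules) and table names here are made-up illustrative constants, not figures from the patent.

```python
# Hypothetical base energy per instruction class, in pJ.
BASE_ENERGY_PJ = {"add": 1.0, "mul": 3.0, "load": 8.0}

# Hypothetical incremental energy per sampled microarchitectural event, in pJ.
EVENT_ENERGY_PJ = {"l1_miss": 20.0, "l2_miss": 80.0, "branch_mispredict": 15.0}

def estimate_energy(opcode: str, events: dict) -> float:
    """Estimate energy for one sampled instruction: the base cost for its
    opcode plus the cost of each microarchitectural event it triggered."""
    total = BASE_ENERGY_PJ[opcode]
    for event, count in events.items():
        total += EVENT_ENERGY_PJ[event] * count
    return total

# A sampled load that missed once in the L1 cache.
print(estimate_energy("load", {"l1_miss": 1}))  # 28.0
```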

    MULTI-DIE STACKED POWER DELIVERY

    Publication Number: US20230009881A1

    Publication Date: 2023-01-12

    Application Number: US17371459

    Filing Date: 2021-07-09

    Abstract: A multi-die processor semiconductor package includes a first base integrated circuit (IC) die configured to provide, based at least in part on an indication of a configuration of a first plurality of compute dies 3D stacked on top of the first base IC die, a unique power domain to each of the first plurality of compute dies. In some embodiments, the semiconductor package also includes a second base IC die including a second plurality of compute dies 3D stacked on top of the second base IC die and an interconnect communicably coupling the first base IC die to the second base IC die.
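
    The per-die domain assignment can be sketched in a few lines, assuming the base die reads a configuration listing the stacked compute dies and hands each one its own power domain; the die names and domain labels are hypothetical.

```python
def assign_power_domains(compute_dies):
    """Give each compute die stacked on the base die a unique power domain,
    so each can be powered and scaled independently of its neighbors."""
    return {die: f"VDD_{i}" for i, die in enumerate(compute_dies)}

# A base die configured with three stacked compute dies.
domains = assign_power_domains(["die0", "die1", "die2"])
print(domains)  # {'die0': 'VDD_0', 'die1': 'VDD_1', 'die2': 'VDD_2'}
```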

    SYSTEM AND METHOD FOR PROVIDING SYSTEM LEVEL SLEEP STATE POWER SAVINGS

    Publication Number: US20230004400A1

    Publication Date: 2023-01-05

    Application Number: US17943265

    Filing Date: 2022-09-13

    Abstract: A system for providing system level sleep state power savings includes a plurality of memory channels and a corresponding plurality of memories coupled to respective memory channels. The system includes one or more processors operative to receive information indicating that a system level sleep state is to be entered and, in response to the system level sleep indication, move data stored in at least a first of the plurality of memories to at least a second of the plurality of memories. In some implementations, in response to moving the data to the second memory, the processor causes power management logic to shut off power to at least the first memory, to a corresponding first physical layer device operatively coupled to the first memory, and to a first memory controller operatively coupled to the first memory, and to place the second memory in a self-refresh mode of operation.
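
    The migrate-then-power-down sequence can be sketched with a toy model of a memory channel; the class and function names here are illustrative stand-ins, not the patent's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryChannel:
    name: str
    data: list = field(default_factory=list)
    powered: bool = True
    self_refresh: bool = False

def enter_sleep(src: MemoryChannel, dst: MemoryChannel) -> None:
    """On a system-level sleep indication: consolidate data out of src,
    shut off src's power (in the real design, the memory, its PHY, and its
    memory controller), and put dst into low-power self-refresh."""
    dst.data.extend(src.data)   # move data to the second memory
    src.data.clear()
    src.powered = False         # power off first memory + PHY + controller
    dst.self_refresh = True     # retain dst contents at low power

a = MemoryChannel("ch0", data=["page0", "page1"])
b = MemoryChannel("ch1", data=["page2"])
enter_sleep(a, b)
print(b.data, a.powered, b.self_refresh)
# ['page2', 'page0', 'page1'] False True
```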

    NEURAL NETWORK POWER MANAGEMENT IN A MULTI-GPU SYSTEM

    Publication Number: US20230004204A1

    Publication Date: 2023-01-05

    Application Number: US17899523

    Filing Date: 2022-08-30

    Inventor: Greg Sadowski

    Abstract: Systems, apparatuses, and methods for managing power consumption for a neural network implemented on multiple graphics processing units (GPUs) are disclosed. A computing system includes a plurality of GPUs implementing a neural network. In one implementation, the plurality of GPUs draw power from a common power supply. To prevent the power consumption of the system from exceeding a power limit for long durations, the GPUs coordinate the scheduling of tasks of the neural network. One or more first GPUs schedule their computation tasks so as not to overlap with the computation tasks of one or more second GPUs. In this way, the system spends less time consuming power in excess of the power limit, allowing the neural network to be implemented in a more power-efficient manner.
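
    The staggered scheduling idea can be sketched as follows, assuming a simplified timing model of unit-length compute slots; the offset parameter and function name are assumptions for illustration only.

```python
def schedule(num_gpus: int, tasks_per_gpu: int, stagger: int = 1):
    """Return {gpu: [start slots]} where each successive GPU's compute tasks
    are offset by `stagger` slots, so no two GPUs compute (and draw peak
    power) in the same slot."""
    return {
        gpu: [gpu * stagger + t * num_gpus for t in range(tasks_per_gpu)]
        for gpu in range(num_gpus)
    }

plan = schedule(num_gpus=2, tasks_per_gpu=3)
print(plan)  # {0: [0, 2, 4], 1: [1, 3, 5]}
# The two GPUs share no slot, so their power peaks never coincide.
```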

    Resource-aware compression

    Publication Number: US11544196B2

    Publication Date: 2023-01-03

    Application Number: US16725971

    Filing Date: 2019-12-23

    Abstract: Systems, apparatuses, and methods for implementing a multi-tiered approach to cache compression are disclosed. A cache includes a cache controller, a light compressor, and a heavy compressor. The decision on which compressor to use for compressing cache lines is made based on the availability of resources such as cache capacity or memory bandwidth. This allows the cache to opportunistically use complex compression algorithms while limiting the adverse effects of high decompression latency on system performance. The design leverages the heavy compressors to reduce memory bandwidth on high bandwidth memory (HBM) interfaces as long as doing so does not sacrifice system performance. Accordingly, the cache combines light and heavy compressors with a decision-making unit to reduce off-chip memory traffic without sacrificing system performance.
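
    The selection policy can be sketched in software, assuming zlib at two compression levels stands in for the light and heavy hardware compressors and that bandwidth utilization is reported as a fraction; the threshold and function name are illustrative assumptions.

```python
import zlib

LIGHT_LEVEL = 1   # fast, modest ratio: stand-in for the light compressor
HEAVY_LEVEL = 9   # slow, better ratio: stand-in for the heavy compressor

def compress_line(data: bytes, bandwidth_util: float,
                  heavy_threshold: float = 0.8) -> bytes:
    """Pick the heavy compressor only when memory bandwidth is scarce, so
    the extra (de)compression latency is paid only when it saves traffic."""
    level = HEAVY_LEVEL if bandwidth_util >= heavy_threshold else LIGHT_LEVEL
    return zlib.compress(data, level)

line = b"abc" * 64  # a compressible 192-byte "cache line"
light = compress_line(line, bandwidth_util=0.3)  # bandwidth plentiful
heavy = compress_line(line, bandwidth_util=0.9)  # bandwidth scarce
assert zlib.decompress(light) == zlib.decompress(heavy) == line
print(len(light), len(heavy))
```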

    EFFICIENT RANK SWITCHING IN MULTI-RANK MEMORY CONTROLLER

    Publication Number: US20220413759A1

    Publication Date: 2022-12-29

    Application Number: US17357007

    Filing Date: 2021-06-24

    Abstract: A data processor includes a staging buffer, a command queue, a picker, and an arbiter. The staging buffer receives and stores first memory access requests. The command queue stores second memory access requests, each indicating one of a plurality of ranks of a memory system. The picker picks among the first memory access requests in the staging buffer and provides selected ones of the first memory access requests to the command queue. The arbiter selects among the second memory access requests from the command queue based on at least a preference for accesses to a current rank of the memory system. The picker picks accesses to the current rank among the first memory access requests of the staging buffer and provides the selected ones of the first memory access requests to the command queue.
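
    The picker's rank-aware selection can be sketched as below, assuming requests are tagged with a rank id and the staging buffer is a simple FIFO; the names are hypothetical, and the real design also arbitrates among requests inside the command queue.

```python
from collections import deque

def pick(staging: deque, current_rank: int):
    """Prefer the oldest staged request targeting the current rank, to
    avoid rank-switch overhead; otherwise fall back to the oldest request."""
    for req in staging:
        if req["rank"] == current_rank:
            staging.remove(req)
            return req
    return staging.popleft() if staging else None

staging = deque([{"addr": 0x10, "rank": 1},
                 {"addr": 0x20, "rank": 0},
                 {"addr": 0x30, "rank": 0}])
# Rank 0 is current, so the rank-0 request is picked ahead of the older
# rank-1 request at the head of the buffer.
req = pick(staging, current_rank=0)
print(req)  # {'addr': 32, 'rank': 0}
```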
