Adaptive world switching
    452.
    Invention Grant

    Publication Number: US11243799B2

    Publication Date: 2022-02-08

    Application Number: US16556521

    Filing Date: 2019-08-30

    Abstract: An apparatus includes a plurality of virtual machines, a hypervisor coupled to the plurality of virtual machines, and a graphical processing unit (GPU) coupled to the hypervisor. The plurality of virtual machines are allocated a plurality of time slices. The hypervisor initiates a world switch to a first virtual machine of the plurality of virtual machines. The GPU makes a determination as to whether to adjust the time slice associated with the first virtual machine based on an assessment of time slice adjustment parameters related to an execution time of at least one of the plurality of virtual machines.
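
    As a rough illustration of the time-slice adjustment described above, the sketch below models a scheduler that world-switches between VM contexts and grows or shrinks each slice based on its measured execution time. The class names, thresholds, and doubling/halving heuristic are assumptions for illustration, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class VmContext:
    vm_id: int
    time_slice_us: int          # time slice currently allocated to this VM
    last_exec_time_us: int = 0  # execution time observed in the previous slice

def adjust_time_slice(vm: VmContext, min_us: int = 500, max_us: int = 10_000) -> None:
    """Grow the slice when the VM uses all of it; shrink it when most goes unused."""
    if vm.last_exec_time_us >= vm.time_slice_us:
        vm.time_slice_us = min(max_us, vm.time_slice_us * 2)
    elif vm.last_exec_time_us < vm.time_slice_us // 2:
        vm.time_slice_us = max(min_us, vm.time_slice_us // 2)

def scheduling_round(vms: list[VmContext], run_world) -> None:
    """World-switch to each VM in turn, then reassess its slice from the
    execution time reported for that world (run_world is a stand-in callback)."""
    for vm in vms:
        vm.last_exec_time_us = run_world(vm.vm_id, vm.time_slice_us)
        adjust_time_slice(vm)
```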

    Multi-version shaders
    453.
    Invention Grant

    Publication Number: US11243752B2

    Publication Date: 2022-02-08

    Application Number: US16509165

    Filing Date: 2019-07-11

    Abstract: Described herein are techniques for generating a stitched shader program. The techniques include identifying a set of shader programs to include in the stitched shader program, wherein the set includes at least one multiversion shader program that includes a first version of instructions and a second version of instructions, wherein the first version of instructions uses a first number of resources that is different than a second number of resources used by the second version of instructions. The techniques also include combining the set of shader programs to form the stitched shader program. The techniques further include determining a number of resources for the stitched shader program. The techniques also include based on the determined number of resources, modifying the instructions corresponding to the multiversion shader program to, when executed, execute either the first version of instructions, or the second version of instructions.
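
    The version-selection step can be pictured with the toy stitcher below. The data model, the register-count budget, and the point at which the choice is made are assumptions for illustration, not the claimed technique.

```python
from dataclasses import dataclass

@dataclass
class ShaderVersion:
    instructions: list[str]
    num_registers: int           # resources this version of the instructions needs

@dataclass
class MultiVersionShader:
    first_version: ShaderVersion
    second_version: ShaderVersion

def stitch(shader_programs: list, register_budget: int) -> list[str]:
    """Combine the set of shader programs, keeping whichever version of each
    multi-version shader fits the resource count determined for the stitched program."""
    stitched: list[str] = []
    for program in shader_programs:
        if isinstance(program, MultiVersionShader):
            fits = program.first_version.num_registers <= register_budget
            chosen = program.first_version if fits else program.second_version
            stitched.extend(chosen.instructions)
        else:
            stitched.extend(program.instructions)  # single-version shader program
    return stitched
```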

    TERMINATION CALIBRATION SCHEME USING A CURRENT MIRROR

    Publication Number: US20220038102A1

    Publication Date: 2022-02-03

    Application Number: US17502741

    Filing Date: 2021-10-15

    Abstract: Systems, apparatuses, and methods for conveying and receiving information as electrical signals in a computing system are disclosed. A computing system includes multiple transmitters sending single-ended data signals to multiple receivers. A termination voltage is generated and sent to the multiple receivers. The termination voltage is coupled to each of signal termination circuitry and signal sampling circuitry within each of the multiple receivers. Any change in the termination voltage therefore affects both the termination circuitry and the comparisons performed by the sampling circuitry. The transmitted data is reconstructed at the receivers using the received signals, the signal termination circuitry, and the signal sampling circuitry.
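
    A purely behavioral model of the shared bias (not the analog circuit): the same termination voltage sets both the level the pad is terminated to and the threshold the sampler compares against, so a calibration step that moves it shifts both together. The resistor divider and threshold rule below are assumptions.

```python
def receiver_sample(v_driven: float, v_term: float,
                    r_term: float = 50.0, r_drive: float = 50.0) -> int:
    """Resolve one single-ended bit. The pad voltage is set by the divider
    between the driver and the termination tied to v_term, and the sampler
    compares that pad voltage against the same v_term."""
    v_pad = v_term + (v_driven - v_term) * r_term / (r_term + r_drive)  # termination effect
    return 1 if v_pad > v_term else 0                                   # sampling comparison
```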

    DATA COMMUNICATIONS WITH ENHANCED SPEED MODE

    Publication Number: US20220035765A1

    Publication Date: 2022-02-03

    Application Number: US17503959

    Filing Date: 2021-10-18

    Abstract: An interconnect controller for a data processing platform includes a data link layer controller for selectively receiving data packets from and sending data packets to a higher protocol layer, and a physical layer controller coupled to the data link layer controller and adapted to be coupled to a communication link. The physical layer controller operates according to a predetermined protocol selectively at one of a plurality of enhanced speeds that are not specified by any published standard and are separated from each other by the same predetermined amount. In response to performing a link initialization, the interconnect controller performs at least one setup operation to select a speed, and subsequently operates the communication link using a selected speed.
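
    The speed selection performed after link initialization might be sketched as below. The base rate, step size, and number of enhanced speeds are made-up values; the abstract only states that the speeds are evenly separated and not specified by any published standard.

```python
BASE_ENHANCED_GTPS = 36.0    # hypothetical first enhanced speed, in GT/s
STEP_GTPS = 2.0              # enhanced speeds separated by the same amount
NUM_ENHANCED_SPEEDS = 4

ENHANCED_SPEEDS = [BASE_ENHANCED_GTPS + i * STEP_GTPS for i in range(NUM_ENHANCED_SPEEDS)]

def select_link_speed(local_max_gtps: float, partner_max_gtps: float) -> float:
    """Setup operation performed after link initialization: pick the fastest
    enhanced speed supported by both ends of the communication link."""
    ceiling = min(local_max_gtps, partner_max_gtps)
    supported = [speed for speed in ENHANCED_SPEEDS if speed <= ceiling]
    return max(supported) if supported else ENHANCED_SPEEDS[0]

# Example: a 40 GT/s-capable controller paired with a 39 GT/s partner runs at 38 GT/s.
assert select_link_speed(40.0, 39.0) == 38.0
```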

    Arithmetic logic unit register sequencing

    Publication Number: US11237827B2

    Publication Date: 2022-02-01

    Application Number: US16696108

    Filing Date: 2019-11-26

    Abstract: A graphics processing unit (GPU) sequences provision of operands to a set of operand registers, thereby allowing the GPU to share at least one of the operand registers between processing. The GPU includes a plurality of arithmetic logic units (ALUs) with at least one of the ALUs configured to perform double precision operations. The GPU further includes a set of operand registers configured to store single precision operands. For a plurality of executing threads that request double precision operations, the GPU stores the corresponding operands at the operand registers. Over a plurality of execution cycles, the GPU sequences transfer of operands from the set of operand registers to a designated double precision operand register. During each execution cycle, the double-precision ALU executes a double precision operation using the operand stored at the double precision operand register.
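
    One way to picture the sequencing is the sketch below, where each cycle one thread's pair of 32-bit values is moved from the shared single-precision operand registers into the designated double-precision operand register before that thread's operation issues. The register model and bit packing are assumptions for illustration.

```python
def run_double_precision_ops(pending_ops, operand_regs, dp_alu):
    """pending_ops: (thread_id, lo_reg_index, hi_reg_index) tuples, one per thread
    requesting a double-precision operation. Each execution cycle, one operand pair
    is transferred into the designated double-precision operand register and executed."""
    results = {}
    for cycle, (thread_id, lo_reg, hi_reg) in enumerate(pending_ops):
        # Cycle `cycle`: pack two 32-bit single-precision register values into
        # the shared double-precision operand register for this cycle.
        dp_operand_register = (operand_regs[hi_reg] << 32) | operand_regs[lo_reg]
        results[thread_id] = dp_alu(dp_operand_register)
    return results
```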

    ASSIGNING VARIABLE LENGTH ADDRESS IDENTIFIERS TO PACKETS IN A PROCESSING SYSTEM

    Publication Number: US20220029954A1

    Publication Date: 2022-01-27

    Application Number: US17496256

    Filing Date: 2021-10-07

    Inventor: David A. Roberts

    Abstract: A controller assigns variable length addresses to addressable elements that are connected to a network. The variable length addresses are determined based on probabilities that packets are addressed to the corresponding addressable element. The controller transmits, to the addressable elements via the network, a routing table indicating the variable length addresses assigned to the addressable elements. Routers or addressable elements receive the routing table and route one or more packets over the network to an addressable element using variable length addresses included in a header of the one or more packets.
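
    A Huffman-style prefix code is one natural way to realize "shorter addresses for more frequently addressed elements"; the sketch below is that generic construction, not necessarily the assignment the controller in this application uses.

```python
import heapq

def assign_variable_length_addresses(packet_probability: dict[str, float]) -> dict[str, str]:
    """Give each addressable element a prefix-free bit-string address, with the
    most frequently addressed elements receiving the shortest addresses."""
    heap = [(prob, i, {name: ""}) for i, (name, prob) in enumerate(packet_probability.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p0, _, codes0 = heapq.heappop(heap)
        p1, _, codes1 = heapq.heappop(heap)
        merged = {k: "0" + v for k, v in codes0.items()} | {k: "1" + v for k, v in codes1.items()}
        heapq.heappush(heap, (p0 + p1, next_id, merged))
        next_id += 1
    return heap[0][2]

# Example routing table: the element seeing 60% of the packets gets the 1-bit address.
routing_table = assign_variable_length_addresses({"gpu": 0.25, "nic": 0.60, "ssd": 0.15})
```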

    ALLREDUCE ENHANCED DIRECT MEMORY ACCESS FUNCTIONALITY

    Publication Number: US20210406209A1

    Publication Date: 2021-12-30

    Application Number: US17032195

    Filing Date: 2020-09-25

    Abstract: Systems, apparatuses, and methods for performing an allreduce operation on an enhanced direct memory access (DMA) engine are disclosed. A system implements a machine learning application which includes a first kernel and a second kernel. The first kernel corresponds to a first portion of a machine learning model while the second kernel corresponds to a second portion of the machine learning model. The first kernel is invoked on a plurality of compute units and the second kernel is converted into commands executable by an enhanced DMA engine to perform a collective communication operation. The first kernel is executed on the plurality of compute units in parallel with the enhanced DMA engine executing the commands for performing the collective communication operation. As a result, the allreduce operation may be executed on the enhanced DMA engine in parallel with the compute units.
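
    The compute/communication overlap the abstract describes can be pictured with the sketch below, where Python threads stand in for the hardware queues; `first_kernel` and `enhanced_dma_allreduce` are hypothetical callables, not an existing API.

```python
from concurrent.futures import ThreadPoolExecutor

def train_step(first_kernel, enhanced_dma_allreduce, gradients):
    """Launch the compute-unit kernel and the DMA-engine collective at the same
    time so the allreduce overlaps with computation instead of serializing after it."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        compute = pool.submit(first_kernel)                          # first kernel on the compute units
        collective = pool.submit(enhanced_dma_allreduce, gradients)  # commands on the enhanced DMA engine
        return compute.result(), collective.result()
```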

    High dynamic range for head-mounted display device

    Publication Number: US11209899B2

    Publication Date: 2021-12-28

    Application Number: US15807483

    Filing Date: 2017-11-08

    Abstract: A technique for adjusting the brightness values of images to be displayed on a stereoscopic head mounted display is provided herein. This technique improves the perceived dynamic range of the head mounted display by dynamically adjusting the pixel intensities (also known generally as “exposure”) of the images presented on the head mounted display based on a detected gaze direction. The head mounted display includes an eye tracker that is able to sense the gaze directions of the eyes. The eye tracker, head mounted display, or a processor of a computer system receives this information, determines an intersection point of the eye gaze and a screen within the head mounted display and identifies a gaze area around this intersection point. Using this gaze area, the processing system adjusts the pixel intensities of an image displayed on the screen based on the intensities of the pixels within the gaze area.
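
    A minimal sketch of the gaze-driven exposure adjustment, assuming a normalized grayscale image and a square gaze area; the radius and target-mean rule are illustrative, not the patented behavior.

```python
import numpy as np

def adjust_exposure_for_gaze(image: np.ndarray, gaze_xy: tuple[int, int],
                             radius: int = 64, target_mean: float = 0.5) -> np.ndarray:
    """Scale pixel intensities so the area around the gaze/screen intersection
    point lands near a target mean luminance, mimicking eye adaptation."""
    height, width = image.shape[:2]
    x, y = gaze_xy
    y0, y1 = max(0, y - radius), min(height, y + radius)
    x0, x1 = max(0, x - radius), min(width, x + radius)
    gaze_mean = float(image[y0:y1, x0:x1].mean()) + 1e-6  # avoid divide-by-zero
    return np.clip(image * (target_mean / gaze_mean), 0.0, 1.0)
```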
