LEVERAGING LOW POWER STATES FOR FAULT TESTING OF PROCESSING CORES AT RUNTIME

    Publication Number: US20230123956A1

    Publication Date: 2023-04-20

    Application Number: US18068666

    Filing Date: 2022-12-20

    Abstract: In various examples, one or more components or regions of a processing unit—such as a processing core, and/or component thereof—may be tested for faults during deployment in the field. To perform testing while in deployment, the state of a component subject to test may be retrieved and/or stored during the test to maintain state integrity, the component may be clamped to communicatively isolate the component from other components of the processing unit, a test vector may be applied to the component, and the output of the component may be compared against an expected output to determine if any faults are present. The state of the component may be restored after testing, and the clamp removed, thereby returning the component to its operating state without a perceivable detriment to operation of the processing unit in deployment.
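    The test flow in this abstract amounts to a save/isolate/test/restore sequence. Below is a minimal Python sketch of that sequence; the component object and its helpers (save_state, clamp, apply_test_vector, restore_state) are hypothetical placeholders, since the actual mechanism is hardware-specific and not spelled out in the abstract.

        # All component methods below are hypothetical stand-ins for
        # device-specific hardware mechanisms.
        def test_component_in_field(component, test_vector, expected_output):
            """Run one in-field fault test without disturbing normal operation."""
            saved_state = component.save_state()      # preserve architectural state
            component.clamp()                         # communicatively isolate the component
            try:
                observed = component.apply_test_vector(test_vector)
                fault_detected = (observed != expected_output)
            finally:
                component.unclamp()                   # reconnect to the rest of the unit
                component.restore_state(saved_state)  # return to the pre-test operating state
            return fault_detected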

    Leveraging low power states for fault testing of processing cores at runtime

    Publication Number: US11573872B2

    Publication Date: 2023-02-07

    Application Number: US17556473

    Filing Date: 2021-12-20

    Abstract: In various examples, one or more components or regions of a processing unit—such as a processing core, and/or component thereof—may be tested for faults during deployment in the field. To perform testing while in deployment, the state of a component subject to test may be retrieved and/or stored during the test to maintain state integrity, the component may be clamped to communicatively isolate the component from other components of the processing unit, a test vector may be applied to the component, and the output of the component may be compared against an expected output to determine if any faults are present. The state of the component may be restored after testing, and the clamp removed, thereby returning the component to its operating state without a perceivable detriment to operation of the processing unit in deployment.

    Multi-pass rendering in a screen space pipeline

    Publication Number: US10147222B2

    Publication Date: 2018-12-04

    Application Number: US14952390

    Filing Date: 2015-11-25

    Abstract: A multi-pass unit interoperates with a device driver to configure a screen space pipeline to perform multiple processing passes with buffered graphics primitives. The multi-pass unit receives primitive data and state bundles from the device driver. The primitive data includes a graphics primitive and a primitive mask. The primitive mask indicates the specific passes when the graphics primitive should be processed. The state bundles include one or more state settings and a state mask. The state mask indicates the specific passes where the state settings should be applied. The primitives and state settings are interleaved. For a given pass, the multi-pass unit extracts the interleaved state settings for that pass and configures the screen space pipeline according to those state settings. The multi-pass unit also extracts the interleaved graphics primitives to be processed in that pass. Then, the multi-pass unit causes the screen space pipeline to process those graphics primitives.
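    As a rough illustration of the interleaving described above, the sketch below walks a buffered stream once per pass and applies only the state bundles and primitives whose masks include that pass. The class names and mask encoding are assumptions made for clarity, not the patented format.

        from dataclasses import dataclass

        # Illustrative stream items; the real encoding is hardware-specific.
        @dataclass
        class StateBundle:
            settings: dict   # e.g. {"depth_test": "less_equal"}
            pass_mask: int   # bit i set => apply these settings in pass i

        @dataclass
        class Primitive:
            data: object     # vertex/triangle payload
            pass_mask: int   # bit i set => process this primitive in pass i

        def replay_pass(stream, pass_index):
            """Extract and apply only the items tagged for the given pass."""
            bit = 1 << pass_index
            pipeline_state = {}
            processed = []
            for item in stream:
                if not (item.pass_mask & bit):
                    continue
                if isinstance(item, StateBundle):
                    pipeline_state.update(item.settings)   # reconfigure the screen space pipeline
                else:
                    processed.append((item.data, dict(pipeline_state)))  # process primitive under current state
            return processed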

    LOSS-SCALING FOR DEEP NEURAL NETWORK TRAINING WITH REDUCED PRECISION

    Publication Number: US20240078433A1

    Publication Date: 2024-03-07

    Application Number: US18385871

    Filing Date: 2023-10-31

    CPC classification number: G06N3/084 G06N3/04 G06N3/063

    Abstract: In training a deep neural network using reduced precision, gradient computation operates on larger values without affecting the rest of the training procedure. One technique trains the deep neural network to develop loss, scales the loss, computes gradients at a reduced precision, and reduces the magnitude of the computed gradients to compensate for scaling of the loss. In one example non-limiting arrangement, the training forward pass scales a loss value by some factor S and the weight update reduces the weight gradient contribution by 1/S. Several techniques can be used for selecting scaling factor S and adjusting the weight update.
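    The scaling step can be illustrated with a short training-loop sketch. The example below uses PyTorch purely for concreteness (the technique itself is framework-agnostic) and assumes the model and inputs are already in a reduced-precision format such as FP16; S = 1024 is an arbitrary example scaling factor.

        import torch

        # Illustrative only: static loss scaling with an assumed factor S.
        S = 1024.0  # loss-scaling factor, typically chosen as a power of two

        def train_step(model, criterion, optimizer, inputs, targets):
            optimizer.zero_grad()
            outputs = model(inputs)                # forward pass in reduced precision
            loss = criterion(outputs.float(), targets)
            (loss * S).backward()                  # backprop a scaled loss so small gradients stay representable
            for p in model.parameters():           # compensate: reduce each weight gradient by 1/S
                if p.grad is not None:
                    p.grad.div_(S)
            optimizer.step()                       # weight update uses the unscaled gradients
            return loss.item()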

    Loss-scaling for deep neural network training with reduced precision

    Publication Number: US11842280B2

    Publication Date: 2023-12-12

    Application Number: US15971884

    Filing Date: 2018-05-04

    CPC classification number: G06N3/084 G06N3/04 G06N3/063

    Abstract: In training a deep neural network using reduced precision, gradient computation operates on larger values without affecting the rest of the training procedure. One technique trains the deep neural network to develop loss, scales the loss, computes gradients at a reduced precision, and reduces the magnitude of the computed gradients to compensate for scaling of the loss. In one example non-limiting arrangement, the training forward pass scales a loss value by some factor S and the weight update reduces the weight gradient contribution by 1/S. Several techniques can be used for selecting scaling factor S and adjusting the weight update.

    Leveraging low power states for fault testing of processing cores at runtime

    Publication Number: US11204849B2

    Publication Date: 2021-12-21

    Application Number: US16818327

    Filing Date: 2020-03-13

    Abstract: In various examples, one or more components or regions of a processing unit—such as a processing core, and/or component thereof—may be tested for faults during deployment in the field. To perform testing while in deployment, the state of a component subject to test may be retrieved and/or stored during the test to maintain state integrity, the component may be clamped to communicatively isolate the component from other components of the processing unit, a test vector may be applied to the component, and the output of the component may be compared against an expected output to determine if any faults are present. The state of the component may be restored after testing, and the clamp removed, thereby returning the component to its operating state without a perceivable detriment to operation of the processing unit in deployment.
