APPARATUS AND METHOD FOR SCALABLE ERROR DETECTION AND REPORTING

    Publication Number: US20220398147A1

    Publication Date: 2022-12-15

    Application Number: US17849356

    Application Date: 2022-06-24

    Abstract: Apparatus and method for scalable error reporting. For example, one embodiment of an apparatus comprises error detection circuitry to detect an error in a component of a first tile within a tile-based hierarchy of a processing device; error classification circuitry to classify the error and record first error data based on the classification; a first tile interface to combine the first error data with second error data received from one or more other components associated with the first tile to generate first accumulated error data; and a master tile interface to combine the first accumulated error data with second accumulated error data received from at least one other tile interface to generate third accumulated error data and to provide the third accumulated error data to a host executing an application to process the third accumulated error data.
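
    A short software model may help make the two-level accumulation described in this abstract concrete. The sketch below is illustrative only and is not taken from the patent: the class names (ErrorRecord, TileInterface, MasterTileInterface) and the list-based combining of error records are assumptions standing in for the claimed circuitry.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ErrorRecord:
        component: str   # component within a tile where the error was detected
        severity: str    # classification assigned when the error was recorded

    @dataclass
    class TileInterface:
        """Per-tile interface: gathers classified error data from the tile's components."""
        tile_id: int
        records: List[ErrorRecord] = field(default_factory=list)

        def report(self, record: ErrorRecord) -> None:
            # A component's classified error data is recorded at this tile.
            self.records.append(record)

        def accumulated(self) -> List[ErrorRecord]:
            # First-level accumulation for this tile.
            return list(self.records)

    @dataclass
    class MasterTileInterface:
        """Master interface: merges accumulated error data from all tile interfaces."""
        tiles: List[TileInterface]

        def provide_to_host(self) -> List[ErrorRecord]:
            # Second-level accumulation, handed to the host application.
            combined: List[ErrorRecord] = []
            for tile in self.tiles:
                combined.extend(tile.accumulated())
            return combined

    # Example: two tiles, each reporting one classified error.
    t0, t1 = TileInterface(0), TileInterface(1)
    t0.report(ErrorRecord("execution unit", "correctable"))
    t1.report(ErrorRecord("cache bank", "uncorrectable"))
    print(len(MasterTileInterface([t0, t1]).provide_to_host()))  # -> 2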

    HIGH SPEED MEMORY SYSTEM INTEGRATION

    Publication Number: US20210391301A1

    Publication Date: 2021-12-16

    Application Number: US16898198

    Application Date: 2020-06-10

    Abstract: Embodiments disclosed herein include multi-die electronic packages. In an embodiment, an electronic package comprises a package substrate and a first die electrically coupled to the package substrate. In an embodiment, an array of die stacks is electrically coupled to the first die. In an embodiment, the array of die stacks is between the first die and the package substrate. In an embodiment, individual ones of the die stacks comprise a plurality of second dies arranged in a vertical stack.
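
    A minimal data-structure sketch of the package topology described above, assuming illustrative names (Die, DieStack, ElectronicPackage) and arbitrarily chosen counts; none of these details come from the patent.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Die:
        name: str

    @dataclass
    class DieStack:
        """Vertical stack of second dies, placed between the first die and the substrate."""
        dies: List[Die] = field(default_factory=list)

    @dataclass
    class ElectronicPackage:
        substrate: str
        first_die: Die                                             # coupled to the package substrate
        die_stacks: List[DieStack] = field(default_factory=list)   # array of stacks coupled to the first die

    # Example: four stacks, each four dies tall (counts chosen arbitrarily).
    stacks = [DieStack([Die(f"mem_{s}_{lvl}") for lvl in range(4)]) for s in range(4)]
    package = ElectronicPackage("package substrate", Die("logic die"), stacks)
    print(len(package.die_stacks), len(package.die_stacks[0].dies))  # -> 4 4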

    APPARATUS AND METHOD FOR DYNAMIC PROVISIONING, QUALITY OF SERVICE, AND SCHEDULING IN A GRAPHICS PROCESSOR

    Publication Number: US20200278938A1

    Publication Date: 2020-09-03

    Application Number: US16700853

    Application Date: 2019-12-02

    Abstract: An apparatus and method for dynamic provisioning and traffic control on a memory fabric. For example, one embodiment of an apparatus comprises: a graphics processing unit (GPU) comprising a plurality of graphics processing resources; slice configuration hardware logic to logically subdivide the graphics processing resources into a plurality of slices; slice allocation hardware logic to allocate a designated set of slices to each virtual machine (VM) of a plurality of VMs running in a virtualized execution environment; a plurality of queues associated with each VM at different levels of a memory interconnection fabric, the queues for a first VM to store memory traffic for that VM at the different levels of the memory interconnection fabric; and arbitration hardware logic coupled to the plurality of queues and distributed across the different levels of the memory interconnection fabric, the arbitration hardware logic to cause memory traffic to be blocked from one or more upstream queues of the first VM upon detecting that a downstream queue associated with the first VM is full or at a specified threshold.
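
    The backpressure behavior in the final clause, blocking upstream traffic for a VM when one of its downstream queues is full or at a threshold, can be sketched as follows. This is a minimal illustrative model rather than the claimed hardware: the names (VMQueue, Arbiter), the ordered list of fabric levels, and the threshold handling are assumptions.

    from collections import deque
    from typing import Deque, List

    class VMQueue:
        """Memory-traffic queue for one VM at one level of the memory fabric."""
        def __init__(self, capacity: int, threshold: int):
            self.capacity = capacity
            self.threshold = threshold       # blocking kicks in at this fill level
            self.entries: Deque[str] = deque()

        def at_threshold(self) -> bool:
            return len(self.entries) >= self.threshold

        def push(self, request: str) -> bool:
            if len(self.entries) >= self.capacity:
                return False
            self.entries.append(request)
            return True

    class Arbiter:
        """Blocks a VM's upstream queue when any downstream queue hits its threshold."""
        def __init__(self, levels: List[VMQueue]):
            # levels[0] is the most upstream queue, levels[-1] the most downstream.
            self.levels = levels

        def submit(self, request: str) -> bool:
            # If a downstream queue for this VM is full or at its threshold,
            # hold the request back at the upstream level (backpressure).
            if any(q.at_threshold() for q in self.levels[1:]):
                return False
            return self.levels[0].push(request)

    # Example: downstream queue at its threshold -> upstream submission is blocked.
    upstream = VMQueue(capacity=8, threshold=6)
    downstream = VMQueue(capacity=4, threshold=3)
    arbiter = Arbiter([upstream, downstream])
    for i in range(3):
        downstream.push(f"req{i}")            # fill downstream to its threshold
    print(arbiter.submit("new_request"))      # -> False (blocked upstream)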
