Power grid architecture and optimization with EUV lithography

    Publication No.: US11347925B2

    Publication Date: 2022-05-31

    Application No.: US15636278

    Application Date: 2017-06-28

    Abstract: A system and method for laying out power grid connections for standard cells are described. In various embodiments, a standard cell uses unidirectional tracks for each of the multiple power vertical metal 3 layer tracks and power horizontal metal 2 tracks. One or more of the multiple vertical metal 3 layer posts are routed with a minimum length based on a pitch of power horizontal metal 2 layer straps. One or more vertical metal 1 posts used for a power connection or a ground connection are routed from a top to a bottom of an active region permitting multiple locations to be used for connections to one of the multiple power horizontal metal 2 layer straps. Two or more power horizontal metal 2 layer straps are placed within a power metal 2 layer track without being connected to one another.

    REDUCING BURN-IN FOR MONTE-CARLO SIMULATIONS VIA MACHINE LEARNING

    Publication No.: US20220147668A1

    Publication Date: 2022-05-12

    Application No.: US17094690

    Application Date: 2020-11-10

    Abstract: Techniques are disclosed for compressing data. The techniques include identifying, in data to be compressed, a first set of values, wherein the first set of values include a first number of two or more consecutive identical non-zero values; including, in compressed data, a first control value indicating the first number of non-zero values and a first data item corresponding to the consecutive identical non-zero values; identifying, in the data to be compressed, a second value having an exponent value included in a defined set of exponent values; including, in the compressed data, a second control value indicating the exponent value and a second data item corresponding to a portion of the second value other than the exponent value; and including, in the compressed data, a third control value indicating a third set of one or more consecutive zero values in the data to be compressed.
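
    To make the control-value/data-item scheme in this abstract concrete, the C++ sketch below walks one buffer and emits a token stream: a run token for two or more identical non-zero values, an exponent token (sign-plus-mantissa payload) when a value's exponent falls in a predefined set, and a payload-free token for zero runs. The token encoding, the known-exponent table, and all names are illustrative assumptions, not the patented format.

```cpp
// Hypothetical sketch of the control-value / data-item compression described
// in the abstract. The token encoding, the known-exponent table, and all
// names are illustrative assumptions.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

struct Token {
    uint8_t  control;  // what kind of payload (if any) follows
    uint32_t payload;  // repeated value, sign+mantissa bits, or unused
};

enum : uint8_t {
    CTRL_ZERO_RUN   = 0x00,  // low 6 bits: length of a run of zeros (no payload)
    CTRL_REPEAT_RUN = 0x40,  // low 6 bits: count of identical non-zero values
    CTRL_KNOWN_EXP  = 0x80,  // low 6 bits: index into the known-exponent table
    CTRL_LITERAL    = 0xC0   // payload holds the raw value (fallback)
};

// Exponents assumed to be common in the data (values near 1.0f).
static const uint8_t kKnownExponents[] = { 126, 127, 128, 129 };

static int knownExponentIndex(uint8_t exp) {
    for (int i = 0; i < 4; ++i)
        if (kKnownExponents[i] == exp) return i;
    return -1;
}

std::vector<Token> compress(const std::vector<float>& in) {
    std::vector<Token> out;
    size_t i = 0;
    while (i < in.size()) {
        uint32_t bits;
        std::memcpy(&bits, &in[i], sizeof bits);

        // Length of the run of values identical to in[i] (capped to 6 bits).
        size_t run = 1;
        while (i + run < in.size() && in[i + run] == in[i] && run < 63) ++run;

        if (in[i] == 0.0f) {
            out.push_back({ (uint8_t)(CTRL_ZERO_RUN | run), 0 });       // zeros: control only
        } else if (run >= 2) {
            out.push_back({ (uint8_t)(CTRL_REPEAT_RUN | run), bits });  // one value, repeated
        } else {
            uint8_t exp = (bits >> 23) & 0xFF;
            int idx = knownExponentIndex(exp);
            if (idx >= 0)  // exponent implied by the control value; keep sign+mantissa
                out.push_back({ (uint8_t)(CTRL_KNOWN_EXP | idx), bits & ~(0xFFu << 23) });
            else
                out.push_back({ (uint8_t)CTRL_LITERAL, bits });
        }
        i += run;
    }
    return out;
}

int main() {
    std::vector<float> data = { 0, 0, 0, 2.5f, 2.5f, 2.5f, 1.25f, -3.0e-9f };
    std::vector<Token> tokens = compress(data);
    std::printf("%zu values -> %zu tokens\n", data.size(), tokens.size());
}
```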

    Techniques to improve translation lookaside buffer reach by leveraging idle resources

    Publication No.: US11321241B2

    Publication Date: 2022-05-03

    Application No.: US17008435

    Application Date: 2020-08-31

    Abstract: Techniques are disclosed for processing address translations. The techniques include detecting a first miss for a first address translation request for a first address translation in a first translation lookaside buffer, in response to the first miss, fetching the first address translation into the first translation lookaside buffer and evicting a second address translation from the translation lookaside buffer into an instruction cache or local data share memory, detecting a second miss for a second address translation request referencing the second address translation, in the first translation lookaside buffer, and in response to the second miss, fetching the second address translation from the instruction cache or the local data share memory.
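
    The sketch below is a hedged software model of the idea, not the hardware design: a tiny primary TLB spills its victims into a map standing in for the otherwise-idle instruction cache or local data share, so a later miss on the spilled translation is served without a page walk.

```cpp
// Software model (not the hardware design) of spilling TLB victims into
// otherwise-idle storage so a later miss can be served without a page walk.
// The std::unordered_map standing in for the instruction cache / local data
// share and the direct-mapped 4-entry TLB are illustrative assumptions.
#include <array>
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct Translation { uint64_t vpn; uint64_t pfn; bool valid; };

class SpillingTlb {
    static constexpr size_t kEntries = 4;           // deliberately tiny primary TLB
    std::array<Translation, kEntries> tlb_{};       // value-initialized: all invalid
    std::unordered_map<uint64_t, uint64_t> spill_;  // stand-in for I-cache / LDS backing

    uint64_t pageWalk(uint64_t vpn) { ++walks_; return vpn + 0x1000; }  // fake walk

public:
    int walks_ = 0;

    uint64_t translate(uint64_t vpn) {
        Translation& slot = tlb_[vpn % kEntries];
        if (slot.valid && slot.vpn == vpn) return slot.pfn;  // primary TLB hit

        // Miss: evict the victim into the spill store instead of dropping it.
        if (slot.valid) spill_[slot.vpn] = slot.pfn;

        // Serve the miss from the spill store if possible, else walk the tables.
        uint64_t pfn;
        auto it = spill_.find(vpn);
        if (it != spill_.end()) { pfn = it->second; spill_.erase(it); }
        else                    { pfn = pageWalk(vpn); }

        slot = { vpn, pfn, true };
        return pfn;
    }
};

int main() {
    SpillingTlb tlb;
    tlb.translate(1); tlb.translate(5);  // vpn 1 and 5 conflict; 1 is spilled
    tlb.translate(1);                    // refetched from the spill store, no walk
    std::printf("page walks: %d\n", tlb.walks_);  // prints 2, not 3
}
```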

    MEMORY BANDWIDTH REDUCTION TECHNIQUES FOR LOW POWER CONVOLUTIONAL NEURAL NETWORK INFERENCE APPLICATIONS

    Publication No.: US20220129752A1

    Publication Date: 2022-04-28

    Application No.: US17571045

    Application Date: 2022-01-07

    Abstract: Systems, apparatuses, and methods for implementing memory bandwidth reduction techniques for low power convolutional neural network inference applications are disclosed. A system includes at least a processing unit and an external memory coupled to the processing unit. The system detects a request to perform a convolution operation on input data from a plurality of channels. Responsive to detecting the request, the system partitions the input data from the plurality of channels into 3D blocks so as to minimize the external memory bandwidth utilization for the convolution operation being performed. Next, the system loads a selected 3D block from external memory into internal memory and then generates convolution output data for the selected 3D block for one or more features. Then, for each feature, the system adds convolution output data together across channels prior to writing the convolution output data to the external memory.
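
    A loop-nest sketch of the blocking strategy, assuming a 3x3 convolution and illustrative tile sizes: each 3D block of the input is brought on chip, its per-channel partial sums are accumulated in an on-chip buffer for every output feature, and the tile is written to external memory only once, after all channel blocks have been added.

```cpp
// Sketch of channel-blocked convolution with on-chip accumulation, so that
// output data is written to external memory once per element rather than once
// per channel block. All sizes and the 3x3 kernel are illustrative assumptions.
#include <cstdio>
#include <vector>

constexpr int H = 16, W = 16, C = 8;               // input height, width, channels
constexpr int F = 4;                               // output features
constexpr int TILE_H = 8, TILE_W = 8, TILE_C = 4;  // 3D block sized for on-chip memory

// ext_in[c][y][x], weights[f][c][ky][kx] (3x3 kernel), ext_out[f][y][x]
using Volume = std::vector<std::vector<std::vector<float>>>;

void convolveBlocked(const Volume& ext_in, const std::vector<Volume>& weights,
                     Volume& ext_out) {
    for (int y0 = 0; y0 < H; y0 += TILE_H)
    for (int x0 = 0; x0 < W; x0 += TILE_W) {
        // On-chip accumulator for this spatial tile: one TILE_H x TILE_W plane
        // per output feature.
        Volume acc(F, std::vector<std::vector<float>>(TILE_H,
                       std::vector<float>(TILE_W, 0.f)));

        for (int c0 = 0; c0 < C; c0 += TILE_C) {
            // "Load" one 3D block (TILE_C x TILE_H x TILE_W) from external
            // memory and add its contribution for every feature.
            for (int f = 0; f < F; ++f)
            for (int c = c0; c < c0 + TILE_C; ++c)
            for (int y = 0; y < TILE_H; ++y)
            for (int x = 0; x < TILE_W; ++x)
            for (int ky = -1; ky <= 1; ++ky)
            for (int kx = -1; kx <= 1; ++kx) {
                int iy = y0 + y + ky, ix = x0 + x + kx;
                if (iy < 0 || iy >= H || ix < 0 || ix >= W) continue;
                acc[f][y][x] += ext_in[c][iy][ix] * weights[f][c][ky + 1][kx + 1];
            }
        }

        // Only the fully accumulated tile is written to external memory.
        for (int f = 0; f < F; ++f)
        for (int y = 0; y < TILE_H; ++y)
        for (int x = 0; x < TILE_W; ++x)
            ext_out[f][y0 + y][x0 + x] = acc[f][y][x];
    }
}

int main() {
    Volume in(C, std::vector<std::vector<float>>(H, std::vector<float>(W, 1.f)));
    std::vector<Volume> w(F, Volume(C, std::vector<std::vector<float>>(
                                           3, std::vector<float>(3, 0.1f))));
    Volume out(F, std::vector<std::vector<float>>(H, std::vector<float>(W, 0.f)));
    convolveBlocked(in, w, out);
    std::printf("out[0][8][8] = %.2f\n", out[0][8][8]);
}
```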

    Integrated circuit product customizations for identification code visibility

    Publication No.: US11315883B2

    Publication Date: 2022-04-26

    Application No.: US16680978

    Application Date: 2019-11-12

    Abstract: An apparatus includes a substrate including an identification code on a first side of the substrate and near a perimeter of the substrate. The apparatus includes a stiffener structure attached to the first side of the substrate. The stiffener structure has a cutout in an outer perimeter of the stiffener structure. The stiffener structure is oriented with respect to the substrate to cause the cutout to expose the identification code. The cutout may have a first dimension and a second dimension orthogonal to the first dimension. The first dimension may exceed a corresponding first dimension of the identification code and the second dimension may exceed a corresponding second dimension of the identification code, thereby forming a void region between the identification code and edges of the stiffener structure.

    REFRESH MANAGEMENT FOR MEMORY

    Publication No.: US20220122652A1

    Publication Date: 2022-04-21

    Application No.: US17564575

    Application Date: 2021-12-29

    Abstract: A memory controller interfaces with a random access memory over a memory channel. A refresh control circuit monitors an activate counter which counts a rolling number of activate commands sent over the memory channel to a memory region of the memory. In response to the activate counter being above an intermediate management threshold value, the refresh control circuit only issues a refresh management (RFM) command if there is no REF command currently held at the refresh control circuit for the memory region.
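
    A simplified C++ model of that decision rule follows; the threshold value, the RFM relief amount, and the per-region bookkeeping are illustrative assumptions.

```cpp
// Decision-logic sketch of the refresh management rule in the abstract: track
// a rolling activate count per memory region and, above an intermediate
// threshold, send an RFM command only when no regular REF is already pending
// for that region. Thresholds and relief amounts are illustrative assumptions.
#include <cstdio>

struct RegionRefreshState {
    int activates = 0;        // rolling count of ACT commands to this region
    bool refPending = false;  // a REF for this region is already queued
};

constexpr int kIntermediateThreshold = 16;
constexpr int kRfmRelief = 8;  // how much activate credit one RFM restores

enum class Cmd { None, Rfm };

Cmd onActivate(RegionRefreshState& r) {
    r.activates++;
    if (r.activates > kIntermediateThreshold && !r.refPending) {
        r.activates -= kRfmRelief;  // RFM gives the DRAM time to manage its rows
        return Cmd::Rfm;
    }
    return Cmd::None;               // below threshold, or a pending REF will handle it
}

void onRefIssued(RegionRefreshState& r) {
    r.refPending = false;
    r.activates = 0;                // REF resets the rolling count in this model
}

int main() {
    RegionRefreshState region;
    region.refPending = true;       // a REF is already queued: no RFM is sent
    for (int i = 0; i < 20; ++i)
        if (onActivate(region) == Cmd::Rfm) std::printf("RFM at ACT %d\n", i);
    onRefIssued(region);            // REF drains; the counter resets
    for (int i = 0; i < 20; ++i)
        if (onActivate(region) == Cmd::Rfm) std::printf("RFM at ACT %d\n", i);
}
```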

    Tags for request packets on a network communication link

    Publication No.: US11301410B1

    Publication Date: 2022-04-12

    Application No.: US17120208

    Application Date: 2020-12-13

    Inventor: Gordon Caruk

    Abstract: An electronic device includes a requester and a link interface coupled between the requester and a link. The requester is configured to send a request packet to a completer on the link via the link interface. When sending the request packet to the completer, the requester sends, to the completer via the link interface, the request packet with a tag that is not unique with respect to tags in other request packets from the requester that will be in the internal elements of the completer before the request packet is in the internal elements of the completer, but that is unique with respect to tags in other request packets from the requester that will be in the internal elements of the completer while the request packet is in the internal elements of the completer.
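
    One way to realize the weaker uniqueness property the abstract describes is sketched below: if the completer's internal elements can hold at most N request packets at a time, handing out tags round-robin from a pool of N guarantees that two packets carrying the same tag never occupy the completer together, even though the tag is not unique across all outstanding requests. The depth value and the packet layout are assumptions for illustration, not the patented mechanism.

```cpp
// Hedged sketch: round-robin tag reuse bounded by the completer's internal
// capacity. kCompleterDepth and RequestPacket are illustrative assumptions.
#include <cstdint>
#include <cstdio>

constexpr unsigned kCompleterDepth = 8;  // max packets inside the completer at once

struct RequestPacket {
    uint8_t  tag;      // reused modulo kCompleterDepth
    uint64_t address;
};

class Requester {
    unsigned next_ = 0;
public:
    RequestPacket makeRequest(uint64_t address) {
        // Two requests receive the same tag only if they are kCompleterDepth
        // requests apart, so the earlier one must have left the completer's
        // internal elements before the later one can enter them.
        RequestPacket pkt{ static_cast<uint8_t>(next_ % kCompleterDepth), address };
        next_++;
        return pkt;
    }
};

int main() {
    Requester req;
    for (uint64_t a = 0; a < 10; ++a) {
        RequestPacket p = req.makeRequest(0x1000 + a * 64);
        std::printf("addr %#llx -> tag %u\n",
                    static_cast<unsigned long long>(p.address), p.tag);
    }
}
```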

    Spatial partitioning in a multi-tenancy graphics processing unit

    Publication No.: US11295507B2

    Publication Date: 2022-04-05

    Application No.: US17091957

    Application Date: 2020-11-06

    Abstract: A graphics processing unit (GPU) or other apparatus includes a plurality of shader engines. The apparatus also includes a first front end (FE) circuit and one or more second FE circuits. The first FE circuit is configured to schedule geometry workloads for the plurality of shader engines in a first mode. The first FE circuit is configured to schedule geometry workloads for a first subset of the plurality of shader engines and the one or more second FE circuits are configured to schedule geometry workloads for a second subset of the plurality of shader engines in a second mode. In some cases, a partition switch is configured to selectively connect the first FE circuit or the one or more second FE circuits to the second subset of the plurality of shader engines depending on whether the apparatus is in the first mode or the second mode.
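
    The following sketch models only the routing decision: in the single mode one front end schedules geometry work for every shader engine, while in the partitioned mode a switch splits the engines between two front ends. The engine count and the split point are illustrative assumptions.

```cpp
// Minimal sketch of the two scheduling modes in the abstract: one front-end
// (FE) circuit feeding all shader engines, or two FE circuits each feeding a
// subset, selected by a partition switch. Engine count and the split point
// are illustrative assumptions.
#include <cstdio>

constexpr int kShaderEngines = 4;

enum class Mode { Single, Partitioned };

struct FrontEnd {
    int id;
    void schedule(int engine, int workload) const {
        std::printf("FE%d -> SE%d: geometry workload %d\n", id, engine, workload);
    }
};

// The "partition switch": selects which front end owns a shader engine in the
// current mode.
const FrontEnd& route(Mode mode, int engine,
                      const FrontEnd& fe0, const FrontEnd& fe1) {
    if (mode == Mode::Single) return fe0;               // FE0 owns every engine
    return (engine < kShaderEngines / 2) ? fe0 : fe1;   // engines split across FE0/FE1
}

int main() {
    FrontEnd fe0{0}, fe1{1};
    const Mode modes[] = { Mode::Single, Mode::Partitioned };
    for (Mode mode : modes) {
        std::printf("-- %s mode --\n", mode == Mode::Single ? "single" : "partitioned");
        for (int se = 0; se < kShaderEngines; ++se)
            route(mode, se, fe0, fe1).schedule(se, /*workload=*/100 + se);
    }
}
```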

    MASKED FAULT DETECTION FOR RELIABLE LOW VOLTAGE CACHE OPERATION

    Publication No.: US20220103191A1

    Publication Date: 2022-03-31

    Application No.: US17125145

    Application Date: 2020-12-17

    Abstract: Systems, apparatuses, and methods for implementing masked fault detection for reliable low voltage cache operation are disclosed. A processor includes a cache that can operate at a relatively low voltage level to conserve power. However, at low voltage levels, the cache is more likely to suffer from bit errors. To mitigate the bit errors occurring in cache lines at low voltage levels, the cache employs a strategy to uncover masked faults during runtime accesses to data by actual software applications. For example, on the first read of a given cache line, the data of the given cache line is inverted and written back to the same data array entry. Also, the error correction bits are regenerated for the inverted data. On a second read of the given cache line, if the fault population of the given cache line changes, then the given cache line's error protection level is updated.
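
    The toy model below illustrates why inverting the line exposes masked faults: a bit stuck at the same value as the stored data is invisible to the check bits until the inverted data is written back, at which point the observed fault population grows and the line's protection level can be raised. The stuck-at fault model and the popcount stand-in for ECC are assumptions, not the actual SECDED logic.

```cpp
// Toy model of the masked-fault probe: write the line, read it, invert it and
// write it back with regenerated check bits, then compare the fault population
// on a second read. The stuck-at model and popcount-based "ECC" are
// illustrative assumptions.
#include <bit>
#include <cstdint>
#include <cstdio>

struct CacheLineCell {
    uint64_t stuckMask  = 0;  // which bit positions are stuck
    uint64_t stuckValue = 0;  // the value those positions are stuck at
    uint64_t stored     = 0;  // what the cell actually holds (post-fault)
    uint64_t written    = 0;  // what the check bits were generated from

    void write(uint64_t v) {
        stored  = (v & ~stuckMask) | (stuckValue & stuckMask);
        written = v;
    }
    uint64_t read() const { return stored; }
    // Stand-in for ECC detection: bits that differ from the protected value.
    int detectedFaults() const { return std::popcount(read() ^ written); }
};

int main() {
    CacheLineCell cell;
    cell.stuckMask  = 1ull << 5;  // bit 5 is stuck...
    cell.stuckValue = 1ull << 5;  // ...at 1

    cell.write(0xFFFFull);        // data matches the stuck value: fault is masked
    int firstRead = cell.detectedFaults();

    // First-read probe: invert the line and write it back to the same entry,
    // regenerating the check bits for the inverted data.
    cell.write(~cell.read());
    int secondRead = cell.detectedFaults();

    std::printf("faults seen: first read %d, second read %d\n", firstRead, secondRead);
    if (secondRead != firstRead)
        std::printf("fault population changed -> raise this line's protection level\n");
}
```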
