Abstract:
The disclosure relates to a data communication network connecting a plurality of computation clusters. The data communication network is arranged for receiving via N data input ports, N>1, input signals from first clusters of the plurality and for outputting output signals to second clusters of the plurality via M data output ports, M>1. The communication network includes a segmented bus network for interconnecting clusters of the plurality and a controller arranged for concurrently activating up to P parallel data busses of the segmented bus network, thereby forming bidirectional parallel interconnections between P of the N inputs and P of the M outputs.
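As a rough behavioral illustration only, the following Python sketch models a controller that concurrently activates up to P interconnections between input ports and output ports; the class name, method names and bookkeeping are assumptions made for illustration, not the disclosed design.

```python
# Hypothetical behavioral sketch of a controller that activates up to P
# concurrent input-to-output interconnections over a segmented bus network.
# Names and structure are illustrative assumptions, not the patented design.

class SegmentedBusController:
    def __init__(self, num_inputs: int, num_outputs: int, max_parallel: int):
        assert num_inputs > 1 and num_outputs > 1            # N > 1, M > 1
        self.n = num_inputs                                   # N data input ports
        self.m = num_outputs                                  # M data output ports
        self.p = max_parallel                                 # up to P parallel busses
        self.active = {}                                      # input port -> output port

    def activate(self, in_port: int, out_port: int) -> bool:
        """Try to form an interconnection between one input and one output."""
        if not (0 <= in_port < self.n and 0 <= out_port < self.m):
            raise ValueError("port index out of range")
        if len(self.active) >= self.p or in_port in self.active:
            return False                                      # no free parallel bus
        self.active[in_port] = out_port
        return True

    def release(self, in_port: int) -> None:
        """Tear down the interconnection driven from the given input port."""
        self.active.pop(in_port, None)


# Example: a 4-input, 4-output network allowing P = 2 concurrent interconnections.
ctrl = SegmentedBusController(num_inputs=4, num_outputs=4, max_parallel=2)
print(ctrl.activate(0, 3), ctrl.activate(1, 2), ctrl.activate(2, 1))  # True True False
```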
Abstract:
A method of forming conductive paths and vias is disclosed. In one aspect, patterns of a hard mask layer are transferred into a dielectric layer by etching to form trenches. The trenches define locations for conductive paths of an upper metallization level. At least one trench is interrupted in a longitudinal direction by a block portion of the hard mask layer, the block portion defining the tip-to-tip location of a pair of the conductive paths to be formed. The trenches extend partially through the dielectric layer in regions exposed by the hard mask layer, thereby deepening first and second holes, previously formed partway through the dielectric layer, so that they extend completely through the dielectric layer. After removing the hard mask layer, the deepened first and second holes and the trenches are filled with a conductive material to form the conductive paths in the trenches and the vias in the deepened first and second holes.
Abstract:
The disclosed technology generally relates to magnetoresistive devices, and more particularly to a magnetic tunnel junction (MTJ) device formed in an interconnection structure, and to a method of integrating the MTJ device in the interconnection structure. According to an aspect, a device includes a first interconnection level including a first dielectric layer and a first set of conductive paths arranged in the first dielectric layer, a second interconnection level arranged on the first interconnection level and including a second dielectric layer and a second set of conductive paths arranged in the second dielectric layer, and a third interconnection level arranged on the second interconnection level and including a third dielectric layer and a third set of conductive paths arranged in the third dielectric layer. The device additionally includes an MTJ device including a bottom layer, a top layer and an MTJ structure arranged between the bottom layer and the top layer, wherein the bottom layer is connected to a bottom layer contact portion of the first set of conductive paths and the top layer is connected to a top layer contact portion of the second or third set of conductive paths. The device further includes a multi-level via extending through the second dielectric layer and the third dielectric layer, between a first via contact portion of the first set of conductive paths and a second via contact portion of the third set of conductive paths, wherein a height of the MTJ device corresponds to, or is less than, a height of the multi-level via.
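As an illustrative reading of the stated height relation only, the sketch below checks that an MTJ stack (bottom layer, MTJ structure, top layer) fits within a multi-level via that spans the second and third dielectric layers; every thickness value is a hypothetical placeholder, not a value from the disclosure.

```python
# Hypothetical thicknesses (nm) used only to illustrate the height relation
# described above; none of these values come from the disclosure.
second_dielectric_nm = 60.0
third_dielectric_nm = 60.0
multi_level_via_height_nm = second_dielectric_nm + third_dielectric_nm  # via spans both layers

mtj_bottom_layer_nm = 15.0
mtj_structure_nm = 25.0
mtj_top_layer_nm = 20.0
mtj_device_height_nm = mtj_bottom_layer_nm + mtj_structure_nm + mtj_top_layer_nm

# The MTJ device height corresponds to, or is less than, the multi-level via height.
assert mtj_device_height_nm <= multi_level_via_height_nm
print(f"MTJ height {mtj_device_height_nm} nm fits within via height {multi_level_via_height_nm} nm")
```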
Abstract:
A semiconductor circuit comprises a Front End of Line (FEOL) comprising a plurality of transistors, each having a source region, a drain region, and a gate region arranged between the source region and the drain region and comprising a gate electrode. The semiconductor circuit also comprises a buried interconnect that is arranged in the FEOL and electrically connected to the gate region from below through a bottom contact portion of the gate electrode. By using a buried interconnect, the routing of the circuit may be facilitated.
Abstract:
The disclosed technology generally relates to semiconductor memory devices, and more particularly to a static random access memory (SRAM) device. One aspect of the disclosed technology is a bit cell for an SRAM comprising a first and a second vertical stack of transistors arranged on a substrate. Each stack includes a pull-up transistor, a pull-down transistor and a pass transistor, each transistor including a horizontally extending channel. In each stack, the pull-up transistor and the pull-down transistor have a common gate electrode extending vertically between them, and the pass transistor has a gate electrode separate from the common gate electrode. A source/drain of the pull-up transistor and of the pull-down transistor of the first stack, a source/drain of the pass transistor of the first stack, and the common gate electrode of the pull-up and pull-down transistors of the second stack are electrically interconnected. A source/drain of the pull-up transistor and of the pull-down transistor of the second stack, a source/drain of the pass transistor of the second stack, and the common gate electrode of the pull-up and pull-down transistors of the first stack are electrically interconnected.
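Purely as a behavioral illustration of the cross-coupling described above (the storage node of each stack driving the common gate of the other stack), the following sketch models the two stacks as cross-coupled inverters holding one bit; the logic-level model and all names are assumptions, not part of the disclosure.

```python
# Purely behavioral sketch: the storage node of each stack (shared source/drain
# of its pull-up/pull-down pair) drives the common gate of the opposite stack,
# so the two stacks act as cross-coupled inverters holding one bit.

def settle(node_q: int, node_qb: int) -> tuple[int, int]:
    """Let the cross-coupled pair reinforce complementary storage-node values."""
    # Each common gate is driven by the other stack's storage node,
    # and each pull-up/pull-down pair inverts its gate input.
    for _ in range(4):                      # a few iterations are enough to settle
        node_q, node_qb = 1 - node_qb, 1 - node_q
    return node_q, node_qb

def write(value: int) -> tuple[int, int]:
    """Pass transistors force complementary values onto the storage nodes."""
    return settle(value, 1 - value)

q, qb = write(1)
print(q, qb)        # 1 0 -- the cell holds the written bit
q, qb = settle(q, qb)
print(q, qb)        # 1 0 -- state is stable without the pass transistors asserted
```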
Abstract:
Three transistor two junction magnetoresistive random-access memory (MRAM) bit cells are disclosed. An example MRAM bit cell includes a first magnetic tunnel junction, MTJ, connected to a first bit line. The MRAM bit cell also includes a second MTJ connected to a second bit line. In addition, the MRAM bit cell includes a first transistor connected to the first MTJ and to a ground conductor. The MRAM bit cell further includes a second transistor connected to the second MTJ and to the ground conductor. Additionally, the MRAM bit cell includes a third transistor connected to the first transistor and to the second transistor.
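As a minimal illustration, the sketch below records only the connectivity stated in the abstract as a list of terminal pairs; it is not an electrical model, and the node names are illustrative assumptions.

```python
# Minimal connectivity sketch of the three-transistor, two-junction bit cell
# described above. It only records which elements the abstract says are
# connected; element and node names are illustrative assumptions.

bit_cell_connections = [
    ("MTJ1", "bit_line_1"),        # first MTJ connected to a first bit line
    ("MTJ2", "bit_line_2"),        # second MTJ connected to a second bit line
    ("T1", "MTJ1"),                # first transistor connected to the first MTJ
    ("T1", "ground"),              # ... and to a ground conductor
    ("T2", "MTJ2"),                # second transistor connected to the second MTJ
    ("T2", "ground"),              # ... and to the ground conductor
    ("T3", "T1"),                  # third transistor connected to the first transistor
    ("T3", "T2"),                  # ... and to the second transistor
]

# Simple query helper: list everything a given element connects to.
def neighbors(element: str) -> set[str]:
    return {b for a, b in bit_cell_connections if a == element} | \
           {a for a, b in bit_cell_connections if b == element}

print(sorted(neighbors("T3")))     # ['T1', 'T2']
```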
Abstract:
The disclosed technology generally relates to integrated circuit (IC) devices, and more particularly to IC devices having one or more power gating switches and methods of fabricating the same. In one aspect, an IC device comprises a front end of line (FEOL) portion and a back end of line (BEOL) portion electrically connected to the FEOL portion. The BEOL portion comprises a plurality of metallization levels, wherein each metallization level comprises a plurality of metal lines extending in a lateral direction and a plurality of conductive vertical via structures. The IC device further comprises a power gating transistor formed in the BEOL portion and in direct electrical contact with at least one of the via structures or one of the metal lines.
Abstract:
A method and hardware system for mapping an input map of a convolutional neural network layer to an output map are disclosed. An array of processing elements is interconnected to support unidirectional dataflows through the array along at least three different spatial directions. Each processing element is adapted to combine values of dataflows along different spatial directions into a new value for at least one of the supported dataflows. For each data entry in the output map, a plurality of products of pairs of weights of a selected convolution kernel and selected data entries in the input map is provided and arranged into a plurality of associated partial sums. Products associated with a same partial sum are accumulated on the array into at least one data entry in the output map.
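As a minimal software illustration of the decomposition described above, the following sketch computes each output entry from weight-times-input products grouped into partial sums that are then accumulated; the processing element array, its three dataflow directions and its scheduling are not modeled, and the row-wise grouping of partial sums is an assumption made for illustration.

```python
# Minimal sketch of the decomposition described above: each output entry is
# built from weight * input products grouped into partial sums (here, one
# partial sum per kernel row) that are then accumulated. The PE array and its
# dataflows are not modeled; all names and the grouping are assumptions.

def conv2d_output_entry(input_map, kernel, out_r, out_c):
    """Compute one data entry of the output map via partial sums."""
    partial_sums = []
    for kr, kernel_row in enumerate(kernel):           # one partial sum per kernel row
        psum = 0
        for kc, weight in enumerate(kernel_row):
            psum += weight * input_map[out_r + kr][out_c + kc]   # product of a (weight, input) pair
        partial_sums.append(psum)
    return sum(partial_sums)                           # accumulate the partial sums

# Tiny example: 4x4 input map, 3x3 kernel, 2x2 output map (stride 1, no padding).
input_map = [[1, 2, 3, 4],
             [5, 6, 7, 8],
             [9, 10, 11, 12],
             [13, 14, 15, 16]]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]

output_map = [[conv2d_output_entry(input_map, kernel, r, c) for c in range(2)]
              for r in range(2)]
print(output_map)   # [[-6, -6], [-6, -6]]
```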