-
Publication No.: US10261923B2
Publication Date: 2019-04-16
Application No.: US15660819
Filing Date: 2017-07-26
Applicant: Intel Corporation
Inventor: Kaushik Vaidyanathan , Daniel H. Morris , Uygar E. Avci , Ian A. Young , Tanay Karnik , Huichu Liu
Abstract: Described is an apparatus which comprises: a first electrical path comprising at least one driver and receiver; and a second electrical path comprising at least one driver and receiver, wherein the first and second electrical paths are to receive the same input signal, wherein the first electrical path and the second electrical path are parallel to one another and have substantially the same propagation delays, and wherein the second electrical path is enabled during a first operation mode and disabled during a second operation mode.
-
Publication No.: US12288153B2
Publication Date: 2025-04-29
Application No.: US18408716
Filing Date: 2024-01-10
Applicant: Intel Corporation
Inventor: Gautham Chinya , Huichu Liu , Arnab Raha , Debabrata Mohapatra , Cormac Brick , Lance Hacking
Abstract: Methods and systems include a neural network system that includes a neural network accelerator. The neural network accelerator includes multiple processing engines coupled together to perform arithmetic operations in support of an inference performed using the neural network system. The neural network accelerator also includes schedule-aware tensor data distribution circuitry or software that is configured to load tensor data into the multiple processing engines in a load phase, extract output data from the multiple processing engines in an extraction phase, reorganize the extracted output data, and store the reorganized extracted output data to memory.
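For illustration only, a minimal Python sketch of the load / extract / reorganize / store flow described in the abstract; the processing-engine model, tile split, and function names are assumptions and not the patented circuitry.

```python
import numpy as np

def distribute_and_collect(tensor, num_pes=4):
    """Illustrative flow: load tiles into processing engines (PEs),
    extract their outputs, reorganize them, and store the result."""
    # Load phase: split the tensor into one tile per processing engine.
    tiles = np.array_split(tensor, num_pes, axis=0)

    # Each PE performs some arithmetic on its tile (stand-in for MAC work).
    pe_outputs = [tile * 2 for tile in tiles]

    # Extraction phase: pull the partial outputs back from the PEs.
    extracted = list(pe_outputs)

    # Reorganize the extracted data into the layout memory expects.
    reorganized = np.concatenate(extracted, axis=0)

    # Store phase: write the reorganized output back to memory.
    memory = reorganized.copy()
    return memory

result = distribute_and_collect(np.arange(16).reshape(8, 2))
print(result.shape)  # (8, 2)
```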
-
Publication No.: US11411172B2
Publication Date: 2022-08-09
Application No.: US16130912
Filing Date: 2018-09-13
Applicant: Intel Corporation
Inventor: Huichu Liu , Sasikanth Manipatruni , Daniel Morris , Kaushik Vaidyanathan , Tanay Karnik , Ian Young
Abstract: An apparatus is provided which comprises a full adder including magnetoelectric material and spin orbit material. In some embodiments, the adder includes: a 3-bit carry generation structure and a multi-bit sum generation structure coupled to the 3-bit carry generation structure. In some embodiments, the 3-bit carry generation structure includes at least three cells comprising magnetoelectric material and spin orbit material, wherein the 3-bit carry generation structure is to perform a minority logic operation on first, second, and third inputs to generate a carry output. In some embodiments, the multi-bit sum generation structure includes at least four cells comprising magnetoelectric material and spin orbit material, wherein the multi-bit sum generation structure is to perform a minority logic operation on the first, second, and third inputs and the carry output to generate a sum output.
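As a sanity check of the logic relationships the abstract relies on (not the magnetoelectric cell design itself), a short Python sketch: the carry of a full adder is the 3-input majority, so a minority gate yields its complement, and the sum follows from the well-known 5-input majority identity using the inverted carry. The construction below is a standard majority/minority-gate illustration, not text from the patent.

```python
from itertools import product

def majority(*bits):
    """1 if more than half of the bits are 1."""
    return int(sum(bits) > len(bits) / 2)

def minority(*bits):
    """Complement of the majority function."""
    return 1 - majority(*bits)

def full_adder(a, b, cin):
    # Carry-out is the 3-input majority; a minority gate produces its inverse.
    cout_bar = minority(a, b, cin)
    cout = 1 - cout_bar
    # Sum via the 5-input majority identity with the inverted carry repeated.
    s = majority(a, b, cin, cout_bar, cout_bar)
    return s, cout

# Exhaustive check against ordinary binary addition.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
print("minority/majority full adder matches a + b + cin for all inputs")
```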
-
Publication No.: US11043256B2
Publication Date: 2021-06-22
Application No.: US16458022
Filing Date: 2019-06-29
Applicant: Intel Corporation
Inventor: Kaushik Vaidyanathan , Huichu Liu , Tanay Karnik , Sreenivas Subramoney , Jayesh Gaur , Sudhanshu Shukla
IPC: G11C11/4096 , G11C7/10 , G11C11/4091 , G11C11/4097 , G11C11/408 , G11C11/4094 , G11C7/22
Abstract: Described are mechanisms and methods for amortizing the cost of address decode, row decode, and wordline firing across multiple read accesses (instead of just one read access). Some or all memory locations that share a wordline (WL) may be read by walking through column multiplexor addresses (instead of just reading out one column multiplexor address per WL fire or memory access). The mechanisms and methods disclosed herein may advantageously enable N distinct memory words to be read out if the array uses an N-to-1 column multiplexor. Since memories such as embedded DRAMs (eDRAMs) may undergo a destructive read, for a given WL fire, a design may be disposed to sense N distinct memory words and restore them in order.
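An illustrative Python model of the amortized-read idea: one wordline fire, a walk through all N column-multiplexer addresses, then a restore because the read is destructive. The array layout and function names are assumptions, not the claimed circuit.

```python
# Toy model of an eDRAM subarray with an N-to-1 column multiplexer.
# One wordline (WL) fire exposes N distinct words; walking the column-mux
# addresses reads all of them, then the row is restored (destructive read).

N = 4  # N-to-1 column multiplexer -> N words share one wordline

# Each row holds N words; reading a row destroys its contents in a real eDRAM.
array = [[(row * N + col) & 0xFF for col in range(N)] for row in range(8)]

def read_row_amortized(row_addr):
    # Address decode + row decode + WL fire happen once per row ...
    sensed = array[row_addr][:]      # sense amplifiers capture all N words
    array[row_addr] = [None] * N     # model the destructive read
    # ... then the column-mux address is walked to stream out every word.
    words = [sensed[col] for col in range(N)]
    array[row_addr] = sensed         # restore phase writes the data back
    return words

print(read_row_amortized(2))  # -> [8, 9, 10, 11]
```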
-
Publication No.: US11037614B2
Publication Date: 2021-06-15
Application No.: US16615780
Filing Date: 2018-07-23
Applicant: Intel Corporation
Inventor: Huichu Liu , Sasikanth Manipatruni , Ian A. Young , Tanay Karnik , Daniel H. Morris , Kaushik Vaidyanathan
IPC: G11C11/22 , H01L27/11507 , H01L49/02
Abstract: Described is an apparatus to reduce or eliminate imprint charge, the apparatus comprising: a source line; a bit-line; a memory bit-cell coupled to the source line and the bit-line; a first multiplexer coupled to the bit-line; a second multiplexer coupled to the source line; a first driver coupled to the first multiplexer; a second driver coupled to the second multiplexer; and a current source coupled to the first and second drivers.
-
Publication No.: US10901486B2
Publication Date: 2021-01-26
Application No.: US16384715
Filing Date: 2019-04-15
Applicant: Intel Corporation
Inventor: Kaushik Vaidyanathan , Daniel H. Morris , Uygar E. Avci , Ian A. Young , Tanay Karnik , Huichu Liu
IPC: G06F1/3234 , G06F13/40 , G06F1/3296 , G06F1/324 , H03K19/0185
Abstract: Described is an apparatus which comprises: a first electrical path comprising at least one driver and receiver; and a second electrical path comprising at least one driver and receiver, wherein the first and second electrical paths are to receive the same input signal, wherein the first electrical path and the second electrical path are parallel to one another and have substantially the same propagation delays, and wherein the second electrical path is enabled during a first operation mode and disabled during a second operation mode.
-
Publication No.: US10331582B2
Publication Date: 2019-06-25
Application No.: US15430765
Filing Date: 2017-02-13
Applicant: Intel Corporation
Inventor: Ishwar S. Bhati , Huichu Liu , Jayesh Gaur , Kunal Korgaonkar , Sasikanth Manipatruni , Sreenivas Subramoney , Tanay Karnik , Hong Wang , Ian A. Young
IPC: G06F13/16 , G06F12/0811
Abstract: A processor includes a processing core and a cache controller including a read queue and a separate write queue. The read queue is to buffer read requests from the processing core to a non-volatile memory last-level cache (NVM-LLC), and the write queue is to buffer write requests to the NVM-LLC. The cache controller is to detect whether the write queue is full. The cache controller further prioritizes a first order of sending requests to the NVM-LLC when the write queue contains an empty slot, the first order specifying a first pattern of sending the read requests before the write requests, and prioritizes a second order of sending requests to the NVM-LLC in response to a determination that the write queue is full, the second order specifying a second pattern of alternating between sending a write request from the write queue and a read request from the read queue.
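A schematic Python sketch of the two request orders described in the abstract: reads ahead of writes while the write queue has room, and write/read alternation once the write queue is full. The queue capacity and names are illustrative assumptions.

```python
from collections import deque

WRITE_QUEUE_CAPACITY = 4  # illustrative size, not from the patent

def next_request(read_q: deque, write_q: deque, last_was_write: bool):
    """Pick the next request to send to the NVM-LLC."""
    write_queue_full = len(write_q) >= WRITE_QUEUE_CAPACITY

    if not write_queue_full:
        # First order: drain reads ahead of writes while writes still fit.
        if read_q:
            return read_q.popleft()
        if write_q:
            return write_q.popleft()
    else:
        # Second order: alternate write/read so the write queue drains
        # without starving reads.
        if not last_was_write and write_q:
            return write_q.popleft()
        if read_q:
            return read_q.popleft()
        if write_q:
            return write_q.popleft()
    return None

reads = deque(f"R{i}" for i in range(3))
writes = deque(f"W{i}" for i in range(4))  # write queue starts full
order, last_was_write = [], True
while reads or writes:
    req = next_request(reads, writes, last_was_write)
    last_was_write = req.startswith("W")
    order.append(req)
print(order)  # ['R0', 'W0', 'R1', 'R2', 'W1', 'W2', 'W3']
```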
-
Publication No.: US20190043549A1
Publication Date: 2019-02-07
Application No.: US16144896
Filing Date: 2018-09-27
Applicant: Intel Corporation
Inventor: Kaushik Vaidyanathan , Daniel H. Morris , Huichu Liu , Dileep J. Kurian , Uygar E. Avci , Tanay Karnik , Ian A. Young
IPC: G11C11/22 , G11C11/413 , G06F1/32
Abstract: Embodiments include apparatuses, methods, and systems associated with save-restore circuitry including metal-ferroelectric-metal (MFM) devices. The save-restore circuitry may be coupled to a bit node and/or bit bar node of a pair of cross-coupled inverters to save the state of the bit node and/or bit bar node when an associated circuit block transitions to a sleep state, and restore the state of the bit node and/or bit bar node when the associated circuit block transitions from the sleep state to an active state. The save-restore circuitry may be used in a flip-flop circuit, a register file circuit, and/or another suitable type of circuit. The save-restore circuitry may include a transmission gate coupled between the bit node (or bit bar node) and an internal node, and an MFM device coupled between the internal node and a plate line. Other embodiments may be described and claimed.
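A behavioral Python sketch of the save/restore sequence around a sleep transition; the MFM device is modeled as a single retained value, and the class and method names are illustrative assumptions rather than the patented circuit.

```python
class SaveRestoreCell:
    """Behavioral model: save the bit node before sleep, restore on wake."""

    def __init__(self, bit: int):
        self.bit = bit            # volatile bit node of the cross-coupled pair
        self._mfm_state = None    # non-volatile polarization of the MFM device

    def enter_sleep(self):
        self._mfm_state = self.bit   # save phase via the transmission gate
        self.bit = None              # volatile state is lost during sleep

    def exit_sleep(self):
        self.bit = self._mfm_state   # restore phase drives the bit node back

cell = SaveRestoreCell(bit=1)
cell.enter_sleep()
cell.exit_sleep()
print(cell.bit)  # -> 1
```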
-
Publication No.: US20240022259A1
Publication Date: 2024-01-18
Application No.: US18465495
Filing Date: 2023-09-12
Applicant: Intel Corporation
Inventor: Gautham Chinya , Debabrata Mohapatra , Arnab Raha , Huichu Liu , Cormac Brick
CPC classification number: H03M7/3082 , G06F16/2237 , G06N3/063 , G06N3/08
Abstract: Methods, systems, articles of manufacture, and apparatus are disclosed to decode zero-value-compression data vectors. An example apparatus includes: a buffer monitor to monitor a buffer for a header including a value indicative of compressed data; a data controller to, when the buffer includes compressed data, determine a first value of a sparse select signal based on (1) a select signal and (2) a first position in a sparsity bitmap, the first value of the sparse select signal corresponding to a processing element that is to process a portion of the compressed data; and a write controller to, when the buffer includes compressed data, determine a second value of a write enable signal based on (1) the select signal and (2) a second position in the sparsity bitmap, the second value of the write enable signal corresponding to the processing element that is to process the portion of the compressed data.
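A simplified Python sketch of how a sparsity bitmap recovers the positions of zero-value-compressed data before it is routed to processing elements; the bitmap format and helper names are assumptions for illustration, not the claimed decoder.

```python
def decode_zvc(compressed_values, sparsity_bitmap):
    """Expand zero-value-compressed data: the bitmap has one bit per output
    position, a 1 meaning the next stored value goes there, a 0 meaning the
    position held a zero that was dropped during compression."""
    values = iter(compressed_values)
    return [next(values) if bit else 0 for bit in sparsity_bitmap]

# Example: only non-zero values were stored; the bitmap recovers positions.
bitmap = [1, 0, 0, 1, 1, 0, 1, 0]
packed = [7, 3, 5, 2]
print(decode_zvc(packed, bitmap))  # -> [7, 0, 0, 3, 5, 0, 2, 0]
```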
-
Publication No.: US11734174B2
Publication Date: 2023-08-22
Application No.: US16576687
Filing Date: 2019-09-19
Applicant: Intel Corporation
Inventor: Huichu Liu , Tanay Karnik , Tejpal Singh , Yen-Cheng Liu , Lavanya Subramanian , Mahesh Kumashikar , Sri Harsha Choday , Sreenivas Subramoney , Kaushik Vaidyanathan , Daniel H. Morris , Uygar E. Avci , Ian A. Young
IPC: G06F12/08 , G06F12/0804 , G06F12/0866 , G06F12/0806 , G06F11/20
CPC classification number: G06F12/0804 , G06F11/2089 , G06F12/0806 , G06F12/0866
Abstract: Described is a low-overhead method and apparatus to reconfigure a pair of buffered interconnect links to operate in one of three modes: a first mode (e.g., bandwidth mode), a second mode (e.g., latency mode), or a third mode (e.g., energy mode). In bandwidth mode, each link in the pair of buffered interconnect links carries a unique signal from source to destination. In latency mode, both links in the pair carry the same signal from source to destination, where one link in the pair is the "primary" and the other is the "assist". Temporal alignment of transitions in this pair of buffered interconnects reduces the effective capacitance of the primary, thereby reducing delay or latency. In energy mode, one link in the pair, the primary, alone carries a signal, while the other link in the pair is idle. An idle neighbor on one side reduces energy consumption of the primary.
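A small behavioral Python sketch of the three link-pair modes; the mode names follow the abstract, while the data model and function name are illustrative assumptions.

```python
def drive_link_pair(mode, signal_a, signal_b=None):
    """Return what each link of the buffered pair carries in the given mode."""
    if mode == "bandwidth":
        # Each link carries its own signal: twice the throughput.
        return {"primary": signal_a, "assist": signal_b}
    if mode == "latency":
        # Both links carry the same signal; aligned transitions on the
        # assist reduce the primary's effective capacitance, hence delay.
        return {"primary": signal_a, "assist": signal_a}
    if mode == "energy":
        # Only the primary toggles; an idle neighbor lowers its energy.
        return {"primary": signal_a, "assist": None}
    raise ValueError(f"unknown mode: {mode}")

print(drive_link_pair("latency", 1))  # -> {'primary': 1, 'assist': 1}
```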