-
Publication No.: US11783160B2
Publication Date: 2023-10-10
Application No.: US15884001
Filing Date: 2018-01-30
Applicant: Intel Corporation
Inventor: Phil Knag, Gregory Kengho Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Ram Kumar Krishnamurthy
Abstract: Various systems, devices, and methods for operating on a data sequence. A system includes a set of circuits that form an input layer to receive a data sequence; first hardware computing units to transform the data sequence, the first hardware computing units connected using a set of randomly selected weights, a first hardware computing unit to: receive an input from a second hardware computing unit, determine a weight of a connection between the first and second hardware computing units using an identifier of the second hardware computing unit and a fixed random weight generator, and operate on the input using the weight to determine a state of the first hardware computing unit; and second hardware computing units to operate on states of the first computing units to generate an output based on the data sequence.
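The key idea in this abstract is that connection weights are not stored but re-derived on demand from the source unit's identifier and a fixed random weight generator. The sketch below illustrates that idea in software only; the class and function names (FixedRandomWeightGenerator, ComputingUnit, weight_for) are hypothetical and the hardware units of the patent are not modeled.

```python
# Illustrative sketch only: re-derive a connection weight from a unit
# identifier and a fixed seed instead of storing the random weight matrix.
import hashlib
import math


class FixedRandomWeightGenerator:
    """Derives a deterministic pseudo-random weight in [-1, 1) from a seed and unit ids."""

    def __init__(self, seed: int):
        self.seed = seed

    def weight_for(self, src_id: int, dst_id: int) -> float:
        digest = hashlib.sha256(f"{self.seed}:{src_id}:{dst_id}".encode()).digest()
        raw = int.from_bytes(digest[:8], "big")
        return raw / 2**63 - 1.0  # map a 64-bit value onto [-1, 1)


class ComputingUnit:
    """One 'first hardware computing unit': keeps a state, stores no weights."""

    def __init__(self, unit_id: int, gen: FixedRandomWeightGenerator):
        self.unit_id = unit_id
        self.gen = gen
        self.state = 0.0

    def receive(self, src_id: int, value: float) -> None:
        w = self.gen.weight_for(src_id, self.unit_id)   # weight looked up by identifier
        self.state = math.tanh(self.state + w * value)  # simple nonlinear state update


if __name__ == "__main__":
    gen = FixedRandomWeightGenerator(seed=42)
    unit = ComputingUnit(unit_id=7, gen=gen)
    unit.receive(src_id=3, value=0.5)
    print(unit.state)
```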
-
Publication No.: US11770258B2
Publication Date: 2023-09-26
Application No.: US17562461
Filing Date: 2021-12-27
Applicant: Intel Corporation
Inventor: Vikram Suresh, Sanu Mathew, Manoj Sastry, Santosh Ghosh, Raghavan Kumar, Rafael Misoczki
CPC classification number: H04L9/3239, H04L9/0869, H04L9/3247, H04L9/50
Abstract: In one example an apparatus comprises a computer readable memory, hash logic to generate a message hash value based on an input message, signature logic to generate a signature to be transmitted in association with the message, the signature logic to apply a hash-based signature scheme to a private key to generate the signature comprising a public key, and accelerator logic to pre-compute at least one set of inputs to the signature logic. Other examples may be described.
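To make the hash-logic/signature-logic split concrete, here is a minimal Lamport one-time signature sketch, assuming SHA-256 as the hash and a pre-computed key pair standing in for the accelerator's pre-computed inputs. It is an illustration of a generic hash-based signature flow, not the scheme or hardware claimed in the patent.

```python
# Minimal Lamport one-time signature sketch (illustrative only).
import hashlib
import os

H = lambda b: hashlib.sha256(b).digest()


def precompute_keypair():
    """Pre-compute the one-time private key and the matching public key."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk


def sign(message: bytes, sk):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]


def verify(message: bytes, sig, pk) -> bool:
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))


if __name__ == "__main__":
    sk, pk = precompute_keypair()     # accelerator-style pre-computation step
    sig = sign(b"hello", sk)
    print(verify(b"hello", sig, pk))  # True
```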
-
Publication No.: US11726950B2
Publication Date: 2023-08-15
Application No.: US16586975
Filing Date: 2019-09-28
Applicant: Intel Corporation
Inventor: Huseyin Ekin Sumbul, Gregory K. Chen, Phil Knag, Raghavan Kumar, Ram Krishnamurthy
CPC classification number: G06F15/8046, G06F17/153, G06N3/063
Abstract: A compute near memory (CNM) convolution accelerator enables a convolutional neural network (CNN) to use dedicated acceleration to achieve efficient in-place convolution operations with less impact on memory and energy consumption. A 2D convolution operation is reformulated as 1D row-wise convolution. The 1D row-wise convolution enables the CNM convolution accelerator to process input activations row-by-row, while using the weights one-by-one. Lightweight access circuits provide the ability to stream both weights and input rows as vectors to MAC units, which in turn enables modules of the CNM convolution accelerator to implement convolution for both [1×1] and chosen [n×n] sized filters.
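The row-wise reformulation described above can be captured compactly in software: a valid 2D convolution is computed as a sum of 1D row convolutions, consuming input activations row by row and filter weights one row at a time. The NumPy sketch below shows only the arithmetic reformulation, assuming a single-channel input and filter; the CNM access circuits and MAC streaming are not modeled.

```python
# Row-wise 2D convolution sketch (illustrative, single channel, "valid" padding).
import numpy as np


def conv2d_rowwise(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Valid 2D cross-correlation of x (H x W) with filter w (n x n), row by row."""
    H, W = x.shape
    n, _ = w.shape
    out = np.zeros((H - n + 1, W - n + 1))
    for r in range(H - n + 1):          # output row
        for k in range(n):              # stream one input row / one weight row at a time
            row, wrow = x[r + k], w[k]
            # 1D valid correlation of this input row with this weight row
            out[r] += np.array([row[c:c + n] @ wrow for c in range(W - n + 1)])
    return out


if __name__ == "__main__":
    x = np.arange(25, dtype=float).reshape(5, 5)
    w = np.ones((3, 3))
    ref = np.array([[x[i:i + 3, j:j + 3].sum() for j in range(3)] for i in range(3)])
    assert np.allclose(conv2d_rowwise(x, w), ref)
    print(conv2d_rowwise(x, w))
```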
-
Publication No.: US11455431B2
Publication Date: 2022-09-27
Application No.: US17132387
Filing Date: 2020-12-23
Applicant: Intel Corporation
Inventor: Vikram Suresh, Raghavan Kumar, Sanu Mathew
Abstract: A method comprises generating, during an enrollment process conducted in a controlled environment, a dark bit mask comprising a plurality of state information values derived from a plurality of entropy sources at a plurality of operating conditions for an electronic device, and using at least a portion of the plurality of state information values to generate a set of challenge-response pairs for use in an authentication process for the electronic device.
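A hedged software analogy of the dark-bit-mask enrollment: sample each entropy source at several operating conditions, mark any bit that is unstable across conditions as "dark", and build challenge-response pairs only from the stable bits. The random sampling and all function names below are illustrative; the patent concerns enrollment of a physical device, not this simulation.

```python
# Dark-bit-mask enrollment sketch (simulation only).
import random


def enroll(num_bits: int, num_conditions: int, flip_prob: float = 0.1):
    """Return (reference_bits, dark_mask); dark_mask[i] is True if bit i is unreliable."""
    nominal = [random.getrandbits(1) for _ in range(num_bits)]
    dark = [False] * num_bits
    for _ in range(num_conditions):     # e.g. voltage/temperature corners
        for i, bit in enumerate(nominal):
            observed = bit ^ (random.random() < flip_prob)  # simulated noisy read-out
            if observed != bit:
                dark[i] = True
    return nominal, dark


def challenge_response(nominal, dark, challenge_indices):
    """Build a response from stable (non-dark) bits selected by the challenge."""
    return [nominal[i] for i in challenge_indices if not dark[i]]


if __name__ == "__main__":
    bits, mask = enroll(num_bits=64, num_conditions=4)
    resp = challenge_response(bits, mask, challenge_indices=range(0, 64, 8))
    print(sum(mask), "dark bits; response:", resp)
```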
-
Publication No.: US11347994B2
Publication Date: 2022-05-31
Application No.: US16160466
Filing Date: 2018-10-15
Applicant: Intel Corporation
Inventor: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
IPC: G06F3/06, G06N3/04, G06F12/0875, G06N3/063
Abstract: The present disclosure is directed to systems and methods of bit-serial, in-memory, execution of at least an nth layer of a multi-layer neural network in a first on-chip processor memory circuitry portion contemporaneous with prefetching and storing layer weights associated with the (n+1)st layer of the multi-layer neural network in a second on-chip processor memory circuitry portion. The storage of layer weights in on-chip processor memory circuitry beneficially decreases the time required to transfer the layer weights upon execution of the (n+1)st layer of the multi-layer neural network by the first on-chip processor memory circuitry portion. In addition, the on-chip processor memory circuitry may include a third on-chip processor memory circuitry portion used to store intermediate and/or final input/output values associated with one or more layers included in the multi-layer neural network.
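The overlap of layer-n execution with layer-(n+1) weight prefetch is essentially double buffering. The sketch below shows that scheduling pattern with placeholder fetch and execute functions; the bit-serial in-memory MAC hardware is not modeled, and all names are illustrative.

```python
# Double-buffered layer-weight prefetch sketch (scheduling pattern only).
from concurrent.futures import ThreadPoolExecutor
import time


def fetch_weights(layer: int):
    time.sleep(0.05)                      # stand-in for an off-chip weight transfer
    return f"weights[{layer}]"


def execute_layer(layer: int, weights, activations):
    time.sleep(0.05)                      # stand-in for in-memory compute
    return f"act[{layer}]({weights})"


def run_network(num_layers: int, activations="input"):
    with ThreadPoolExecutor(max_workers=1) as prefetcher:
        current = fetch_weights(0)                     # buffer A: layer 0 weights
        for n in range(num_layers):
            nxt = (prefetcher.submit(fetch_weights, n + 1)
                   if n + 1 < num_layers else None)    # buffer B: prefetch layer n+1
            activations = execute_layer(n, current, activations)
            current = nxt.result() if nxt else None    # swap buffers for the next layer
    return activations


if __name__ == "__main__":
    print(run_network(3))
```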
-
Publication No.: US11303429B2
Publication Date: 2022-04-12
Application No.: US16455950
Filing Date: 2019-06-28
Applicant: Intel Corporation
Inventor: Santosh Ghosh, Vikram Suresh, Sanu Mathew, Manoj Sastry, Andrew H. Reinders, Raghavan Kumar, Rafael Misoczki
Abstract: In one example an apparatus comprises a computer readable memory, an XMSS operations logic to manage XMSS functions, a chain function controller to manage chain function algorithms, a secure hash algorithm-2 (SHA2) accelerator, a secure hash algorithm-3 (SHA3) accelerator, and a register bank shared between the SHA2 accelerator and the SHA3 accelerator. Other examples may be described.
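An illustrative chain function that can be driven by either a SHA-2 or a SHA-3 primitive, loosely mirroring the idea of one chain-function controller feeding two hash accelerators. The construction is deliberately simplified (plain iterated hashing, no bitmasks or keyed PRFs) and is not the XMSS hardware of the patent.

```python
# Simplified WOTS-style chain function parameterized by hash family.
import hashlib


def chain(value: bytes, steps: int, hash_name: str = "sha256") -> bytes:
    """Apply the selected hash function `steps` times to `value`."""
    h = getattr(hashlib, hash_name)       # "sha256" (SHA2) or "sha3_256" (SHA3)
    for _ in range(steps):
        value = h(value).digest()
    return value


if __name__ == "__main__":
    seed = b"\x00" * 32
    print(chain(seed, 4, "sha256").hex())
    print(chain(seed, 4, "sha3_256").hex())
```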
-
Publication No.: US20220108039A1
Publication Date: 2022-04-07
Application No.: US17551961
Filing Date: 2021-12-15
Applicant: Intel Corporation
Inventor: Vikram Suresh, Sanu Mathew, Rafael Misoczki, Santosh Ghosh, Raghavan Kumar, Manoj Sastry, Andrew H. Reinders
Abstract: Embodiments are directed to post quantum public key signature operation for reconfigurable circuit devices. An embodiment of an apparatus includes one or more processors; and a reconfigurable circuit device, the reconfigurable circuit device including a dedicated cryptographic hash hardware engine, and a reconfigurable fabric including logic elements (LEs), wherein the one or more processors are to configure the reconfigurable circuit device for public key signature operation, including mapping a state machine for public key generation and verification to the reconfigurable fabric, including mapping one or more cryptographic hash engines to the reconfigurable fabric, and combining the dedicated cryptographic hash hardware engine with the one or more mapped cryptographic hash engines for cryptographic signature generation and verification.
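A high-level scheduling analogy of combining a dedicated hash engine with fabric-mapped engines: independent hash operations from a hash-based signature computation are distributed across one "dedicated" worker and several "mapped" workers so they run concurrently. The engine names and thread-pool model are purely illustrative; LE-mapped engines on a reconfigurable fabric are not simulated here.

```python
# Dispatching independent hash jobs across a pool of "engines" (illustrative).
from concurrent.futures import ThreadPoolExecutor
import hashlib

ENGINES = ["dedicated-sha", "mapped-0", "mapped-1", "mapped-2"]


def hash_job(engine: str, data: bytes) -> bytes:
    # All engines compute the same function; `engine` only labels which one ran it.
    return hashlib.sha256(data).digest()


def parallel_hashes(blocks):
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = [pool.submit(hash_job, ENGINES[i % len(ENGINES)], b)
                   for i, b in enumerate(blocks)]
        return [f.result() for f in futures]


if __name__ == "__main__":
    digests = parallel_hashes([bytes([i]) * 32 for i in range(8)])
    print(len(digests), digests[0].hex()[:16])
```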
-
Publication No.: US11218320B2
Publication Date: 2022-01-04
Application No.: US16455908
Filing Date: 2019-06-28
Applicant: Intel Corporation
Inventor: Vikram Suresh, Sanu Mathew, Manoj Sastry, Santosh Ghosh, Raghavan Kumar, Rafael Misoczki
Abstract: In one example an apparatus comprises a computer readable memory, hash logic to generate a message hash value based on an input message, signature logic to generate a signature to be transmitted in association with the message, the signature logic to apply a hash-based signature scheme to a private key to generate the signature comprising a public key, and accelerator logic to pre-compute at least one set of inputs to the signature logic. Other examples may be described.
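Focusing on the pre-computation aspect of this abstract, here is a sketch of a Winternitz-style flow in which the per-chain secret starting values (inputs to the signature logic) are derived ahead of time from a seed, so that at signing time only the message-dependent hash steps remain. The parameters (w = 16, 8 chains, no checksum) are arbitrary and the construction is simplified relative to real WOTS+; it is not the patented design.

```python
# Pre-computing signature-logic inputs for a simplified Winternitz-style scheme.
import hashlib
import os

CHAINS = 8       # number of 4-bit message chunks covered (illustrative)


def precompute_chain_starts(seed: bytes):
    """Accelerator-style step: derive all chain start values before a message exists."""
    return [hashlib.sha256(seed + bytes([i])).digest() for i in range(CHAINS)]


def chain(value: bytes, steps: int) -> bytes:
    for _ in range(steps):
        value = hashlib.sha256(value).digest()
    return value


def sign(message: bytes, starts):
    digest = hashlib.sha256(message).digest()
    chunks = []
    for i in range(CHAINS):
        byte = digest[i // 2]
        chunks.append((byte >> 4) & 0xF if i % 2 == 0 else byte & 0xF)
    return [chain(starts[i], c) for i, c in enumerate(chunks)]


if __name__ == "__main__":
    starts = precompute_chain_starts(os.urandom(32))   # done before the message arrives
    sig = sign(b"message", starts)
    print(len(sig), sig[0].hex()[:16])
```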
-
Publication No.: US11016701B2
Publication Date: 2021-05-25
Application No.: US16146878
Filing Date: 2018-09-28
Applicant: Intel Corporation
Inventor: Ian Young, Ram Krishnamurthy, Sasikanth Manipatruni, Amrita Mathuriya, Abhishek Sharma, Raghavan Kumar, Phil Knag, Huseyin Sumbul, Gregory Chen
Abstract: Techniques and mechanisms for a memory device to perform in-memory computing based on a logic state which is detected with a voltage-controlled oscillator (VCO). In an embodiment, a VCO circuit of the memory device receives from a memory array a first signal indicating a logic state that is based on one or more currently stored data bits. The VCO provides a conversion from the logic state being indicated by a voltage characteristic of the first signal to the logic state being indicated by a corresponding frequency characteristic of a cyclical signal. Based on the frequency characteristic, the logic state is identified and communicated for use in an in-memory computation at the memory device. In another embodiment, a result of the in-memory computation is written back to the memory array.
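A behavioral sketch of the voltage-to-frequency sensing chain: the stored logic state sets a bitline voltage, a linear VCO model turns that voltage into a frequency, and counting cycles over a fixed window recovers the state. All voltages, gains, window lengths, and thresholds below are made-up illustrative numbers, not values from the patent.

```python
# Behavioral model of VCO-based logic-state read-out (illustrative numbers).
def bitline_voltage(logic_state: int) -> float:
    return 0.8 if logic_state else 0.3          # volts, set by the stored bit(s)


def vco_frequency(voltage: float, f0: float = 1e8, gain: float = 5e8) -> float:
    return f0 + gain * voltage                  # linear VCO model, Hz per volt


def count_cycles(freq_hz: float, window_s: float = 1e-6) -> int:
    return int(freq_hz * window_s)              # digital counter over a fixed window


def read_state(logic_state: int, threshold: int = 375) -> int:
    count = count_cycles(vco_frequency(bitline_voltage(logic_state)))
    return 1 if count > threshold else 0        # frequency (count) identifies the state


if __name__ == "__main__":
    for s in (0, 1):
        print(s, "->", read_state(s))
```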
-
Publication No.: US20210119766A1
Publication Date: 2021-04-22
Application No.: US17133711
Filing Date: 2020-12-24
Applicant: Intel Corporation
Inventor: Vikram B. Suresh, Rosario Cammarota, Sanu K. Mathew, Zeshan A. Chishti, Raghavan Kumar, Rafael Misoczki
IPC: H04L9/00, G06F12/0802, G06N20/00
Abstract: Technologies for memory and I/O efficient operations on homomorphically encrypted data are disclosed. In the illustrative embodiment, a cloud compute device is to perform operations on homomorphically encrypted data. In order to reduce memory storage space and network and I/O bandwidth, ciphertext blocks can be manipulated as data structures, allowing operands for operations on a compute engine to be created on the fly as the compute engine is performing other operations, using orders of magnitude less storage space and bandwidth.
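A schematic sketch of the "operands on the fly" idea: instead of storing a fully expanded sequence of ciphertext blocks, a compact descriptor plus a generator yields each block only when the compute engine is ready for it, saving storage and I/O bandwidth. Real homomorphic ciphertexts and their arithmetic are not modeled; the blocks and the XOR "operation" below are placeholders.

```python
# Lazy, descriptor-driven operand expansion (placeholder blocks, no real HE math).
from typing import Iterator
import hashlib


def expand_operand(compact_descriptor: bytes, num_blocks: int) -> Iterator[bytes]:
    """Lazily derive ciphertext-block placeholders from a small descriptor."""
    for i in range(num_blocks):
        yield hashlib.sha256(compact_descriptor + i.to_bytes(4, "big")).digest()


def compute_engine(blocks_a, blocks_b):
    """Consume two operand streams block by block (placeholder 'operation')."""
    return [bytes(x ^ y for x, y in zip(a, b)) for a, b in zip(blocks_a, blocks_b)]


if __name__ == "__main__":
    # Only two 32-byte descriptors are stored/transferred, not 1000 blocks each.
    out = compute_engine(expand_operand(b"descriptor-A", 1000),
                         expand_operand(b"descriptor-B", 1000))
    print(len(out), out[0].hex()[:16])
```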