SECURE KEY STORAGE USING PHYSICALLY UNCLONABLE FUNCTIONS

    Publication (Announcement) No.: US20170288869A1

    Publication (Announcement) Date: 2017-10-05

    Application No.: US15628386

    Application Date: 2017-06-20

    Abstract: Some implementations disclosed herein provide techniques and arrangements for provisioning keys to integrated circuits/processors/apparatus. In one embodiment, the apparatus includes a physically unclonable function (PUF) circuit to generate a hardware key based on at least one manufacturing variation of the apparatus, and a nonvolatile memory coupled to the PUF circuit, the nonvolatile memory to store an encrypted key, the encrypted key comprising a first key encrypted using the hardware key. The apparatus further includes a hardware cipher component coupled to the nonvolatile memory and the PUF circuit, the hardware cipher component to decrypt the encrypted key stored in the nonvolatile memory with at least the hardware key to generate a decrypted copy of the first key, and fixed logic circuitry coupled to the PUF circuit and the hardware cipher component, the fixed logic circuitry to verify that the decrypted copy of the first key is valid.
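
    As a rough illustration of the key flow described in this abstract, the sketch below models a PUF-derived hardware key wrapping a first key at provisioning time and unwrapping it at runtime, with a validity check on the decrypted copy. The PUF stand-in, the encrypt-then-MAC scheme, and all names are illustrative assumptions, not the patented circuitry.

        # Behavioral sketch (not the patented hardware): a PUF-derived key wraps a
        # first key; "fixed logic" later verifies the unwrapped copy via a MAC tag.
        import hashlib
        import hmac
        import os

        def puf_response(challenge: bytes, device_variation: bytes) -> bytes:
            """Stand-in for the PUF circuit: maps manufacturing variation to a hardware key."""
            return hashlib.sha256(device_variation + challenge).digest()

        def wrap_key(hardware_key: bytes, first_key: bytes) -> bytes:
            """Provisioning step: encrypt the first key and append a validity tag."""
            stream = hashlib.sha256(hardware_key + b"stream").digest()[: len(first_key)]
            ciphertext = bytes(a ^ b for a, b in zip(first_key, stream))
            tag = hmac.new(hardware_key, ciphertext, hashlib.sha256).digest()
            return ciphertext + tag  # blob stored in nonvolatile memory

        def unwrap_key(hardware_key: bytes, blob: bytes) -> bytes:
            """Runtime step: decrypt with the PUF-derived key, then verify validity."""
            ciphertext, tag = blob[:-32], blob[-32:]
            expected = hmac.new(hardware_key, ciphertext, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                raise ValueError("decrypted key failed validity check")
            stream = hashlib.sha256(hardware_key + b"stream").digest()[: len(ciphertext)]
            return bytes(a ^ b for a, b in zip(ciphertext, stream))

        device_variation = os.urandom(32)        # stands in for manufacturing variation
        hw_key = puf_response(b"challenge-0", device_variation)
        first_key = os.urandom(32)
        nvm_blob = wrap_key(hw_key, first_key)   # provisioned once into nonvolatile memory
        assert unwrap_key(hw_key, nvm_blob) == first_key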

    LOW CONTENTION CURRENT CIRCUITS
    Invention Publication

    Publication (Announcement) No.: US20240356552A1

    Publication (Announcement) Date: 2024-10-24

    Application No.: US18305147

    Application Date: 2023-04-21

    Abstract: A disclosed example includes a read local bitline; and a plurality of pulldown transistor circuits coupled to the read local bitline, a first one of the pulldown transistor circuits including: a first low threshold voltage transistor, the first low threshold voltage transistor including a first drain terminal coupled to the read local bitline; and a second low threshold voltage transistor, the second low threshold voltage transistor including a second drain terminal coupled to a first source terminal of the first low threshold voltage transistor, the second low threshold voltage transistor to persist a voltage level detectable at a gate terminal of the second low threshold voltage transistor, the voltage level representative of a bit of information.
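
    A behavioral reading of the stacked pulldown path above (an interpretation for illustration, not a circuit-accurate model): the lower transistor's gate node persists the stored bit, the upper transistor connects to the precharged read local bitline, and a read discharges the bitline only when both devices conduct. The signal names below are hypothetical.

        # Behavioral sketch of pulldown transistor circuits on a read local bitline.
        # Signal names (read_wl, storage_node) are assumptions for illustration only.
        from dataclasses import dataclass

        @dataclass
        class PulldownCircuit:
            storage_node: int = 0  # voltage persisted at the gate of the lower transistor

            def conducts(self, read_wl: int) -> bool:
                """Both stacked low-Vt transistors must be on to pull the bitline down."""
                return bool(read_wl) and bool(self.storage_node)

        def read_cell(cells: list, selected: int) -> int:
            """Precharge the read local bitline high; a conducting stack discharges it,
            so a discharged bitline indicates a stored '1' in the selected cell."""
            bitline = 1  # precharged
            for i, cell in enumerate(cells):
                if cell.conducts(read_wl=int(i == selected)):
                    bitline = 0
            return 1 - bitline

        cells = [PulldownCircuit(storage_node=b) for b in (1, 0, 1, 1)]
        assert [read_cell(cells, i) for i in range(4)] == [1, 0, 1, 1]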

    OPTICAL NEURAL NETWORK ACCELERATOR
    Invention Publication

    Publication (Announcement) No.: US20240242067A1

    Publication (Announcement) Date: 2024-07-18

    Application No.: US18621802

    Application Date: 2024-03-29

    CPC classification number: G06N3/067

    Abstract: Systems, apparatuses and methods include technology that executes, with a first plurality of panels, a first matrix-matrix multiplication operation of a first layer of an optical neural network (ONN) to generate output optical signals based on input optical signals that pass through an optical path of the ONN, and weights of the first layer of the ONN. The first plurality of panels includes an input panel, a weight panel and a photodetector panel. The executing includes generating, with the input panel, the input optical signals, where the input optical signals represent an input to the first matrix-matrix multiplication operation of the first layer of the ONN, representing, with the weight panel, the weights of the first layer of the ONN, and generating, with the photodetector panel, output photodetector signals based on the output optical signals that are generated based on the input optical signals and the weights.
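
    As a numerical sketch of the panel-based multiply described in this abstract (an idealized model, not the optical hardware): the input panel emits intensities encoding one matrix, the weight panel modulates them by the first layer's weights along the optical path, and the photodetector panel accumulates the products per output element.

        # Idealized model of one ONN layer's matrix-matrix multiply; assumes lossless
        # optics and perfectly linear photodetection, which real hardware only approximates.
        import numpy as np

        rng = np.random.default_rng(0)
        inputs = rng.random((4, 8))    # input panel: intensities encoding a 4x8 input matrix
        weights = rng.random((8, 3))   # weight panel: transmission values for the layer weights

        def onn_layer(inputs: np.ndarray, weights: np.ndarray) -> np.ndarray:
            """Each output element sums input light modulated by the corresponding weights,
            as a photodetector integrating the arriving optical power would."""
            out = np.zeros((inputs.shape[0], weights.shape[1]))
            for i in range(inputs.shape[0]):
                for j in range(weights.shape[1]):
                    out[i, j] = np.sum(inputs[i, :] * weights[:, j])  # photodetector accumulation
            return out

        assert np.allclose(onn_layer(inputs, weights), inputs @ weights)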

    HIGH PERFORMANCE FAST MUX-D SCAN FLIP-FLOP

    Publication (Announcement) No.: US20220224316A1

    Publication (Announcement) Date: 2022-07-14

    Application No.: US17711638

    Application Date: 2022-04-01

    Abstract: A fast Mux-D scan flip-flop is provided, which bypasses the scan multiplexer to a master keeper side path, removing the delay overhead of a traditional Mux-D scan topology. The design is compatible with the simple scan methodology of Mux-D scan, while preserving a smaller area and a small number of inputs/outputs. Since the scan mux is not in the forward critical path, the circuit topology achieves high performance similar to a level-sensitive scan design (LSSD) flip-flop and can easily be converted into a bare pass-gate version. The new fast Mux-D scan flip-flop combines the advantages of the conventional LSSD and Mux-D scan flip-flops without the disadvantages of either.
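
    For reference, a functional model of mux-D scan behavior is sketched below: on each clock edge the flop captures scan-in when scan-enable is asserted, otherwise the data input. This only models what any mux-D scan flop does logically; the patented speedup comes from where the mux sits at the transistor level (off the forward critical path), which a behavioral model cannot show.

        # Behavioral (functional-only) model of a mux-D scan flip-flop; the master-keeper-side
        # mux placement that gives the claimed speed is not represented at this level.
        from dataclasses import dataclass

        @dataclass
        class MuxDScanFlop:
            q: int = 0

            def clock_edge(self, d: int, scan_in: int, scan_en: int) -> None:
                """On the rising clock edge, capture scan_in in scan mode, otherwise d."""
                self.q = scan_in if scan_en else d

        def scan_shift(chain: list, serial_in: int) -> None:
            """One scan-shift edge: every flop samples its neighbor's pre-edge output."""
            sampled = [serial_in] + [f.q for f in chain[:-1]]
            for flop, s in zip(chain, sampled):
                flop.clock_edge(d=0, scan_in=s, scan_en=1)

        chain = [MuxDScanFlop() for _ in range(3)]
        for bit in (1, 0, 1):                           # shift a 3-bit pattern into the chain
            scan_shift(chain, bit)
        assert [f.q for f in chain] == [1, 0, 1]
        chain[0].clock_edge(d=1, scan_in=0, scan_en=0)  # normal functional capture of d
        assert chain[0].q == 1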

    INSTRUCTION SET FOR HYBRID CPU AND ANALOG IN-MEMORY ARTIFICIAL INTELLIGENCE PROCESSOR

    Publication (Announcement) No.: US20200242459A1

    Publication (Announcement) Date: 2020-07-30

    Application No.: US16262583

    Application Date: 2019-01-30

    Abstract: Techniques are provided for implementing a hybrid processing architecture comprising a general-purpose processor (CPU) and a neural processing unit (NPU), coupled to an analog in-memory artificial intelligence (AI) processor. According to an embodiment, the hybrid processor implements an AI instruction set including instructions to perform analog in-memory computations. The AI processor comprises one or more neural network (NN) layers, the NN layers including memory circuitry and analog processing circuitry. The memory circuitry is configured to store the weighting factors and the input data. The analog processing circuitry is configured to perform analog calculations on the stored weighting factors and the stored input data in accordance with the execution, by the NPU, of instructions from the AI instruction set. The AI instruction set includes instructions to perform dot products, multiplication, differencing, normalization, pooling, thresholding, transposition, and backpropagation training. The NN layers are configured as convolutional NN layers and/or fully connected NN layers.
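
    To make the division of labor concrete, the sketch below models software dispatch of such instructions: dot-product and thresholding operations run "in place" on a layer that holds its weights and inputs, standing in for the analog in-memory computation issued by the NPU. Opcode names and the Python interface are assumptions for illustration; the patent describes hardware, not this model.

        # Software model (illustrative only) of AI instructions executed against an
        # analog in-memory layer; NumPy arithmetic stands in for the analog circuitry.
        import numpy as np

        class AnalogInMemoryLayer:
            """An NN layer whose memory circuitry stores weights and inputs and whose
            analog circuitry computes on them in place (modeled here with NumPy)."""

            def __init__(self, weights: np.ndarray):
                self.weights = weights    # stored weighting factors
                self.inputs = None        # stored input data

            def load_inputs(self, x: np.ndarray) -> None:
                self.inputs = x

            def execute(self, opcode: str) -> np.ndarray:
                if opcode == "DOT":       # dot product of stored inputs and weights
                    return self.inputs @ self.weights
                if opcode == "RELU":      # thresholding on the stored inputs
                    return np.maximum(self.inputs, 0.0)
                raise ValueError(f"opcode {opcode} falls back to the CPU, not this layer")

        layer = AnalogInMemoryLayer(weights=np.eye(4) * 2.0)  # trivial 4x4 fully connected layer
        layer.load_inputs(np.array([1.0, -2.0, 3.0, 0.5]))
        print(layer.execute("DOT"))     # -> [ 2. -4.  6.  1.]
        print(layer.execute("RELU"))    # -> [1.  0.  3.  0.5]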
