Throughput increase for compute engine

    Publication No.: US12260214B1

    Publication Date: 2025-03-25

    Application No.: US17937332

    Filing Date: 2022-09-30

    Abstract: A compute channel can have multiple computational circuit blocks coupled in series to form a pipeline. The compute channel can perform a computation on an input tensor to generate an output tensor based on an instruction. When the computation does not require all of the computational circuit blocks, the throughput of the compute channel can be increased by splitting the data elements of the input tensor into multiple input data streams. The multiple input data streams are provided to respective subsets of one or more computational circuit blocks in the pipeline using bypass circuitry of the computational circuit blocks, and the computation can be performed on the multiple input data streams in the respective subsets of computational circuit blocks to generate multiple output data streams corresponding to the output tensor.
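
    The splitting described above can be modeled in software. The following Python sketch is only an illustration of the idea, not the hardware design: the function names, the per-element computation, and the interleaved split are assumptions made for the example.

        # Hypothetical software model of the idea: when a computation needs only a
        # subset of pipeline stages, the input tensor's elements are split into
        # independent streams processed in parallel by separate stage subsets.

        def compute(x):
            # Placeholder for the per-element computation performed by one
            # subset of computational circuit blocks (e.g. a scaled offset).
            return 2 * x + 1

        def run_split(input_tensor, num_streams=2):
            # Split the flattened input into interleaved streams, process each
            # stream independently, then re-interleave into the output tensor.
            streams = [input_tensor[i::num_streams] for i in range(num_streams)]
            processed = [[compute(x) for x in s] for s in streams]
            output = [0] * len(input_tensor)
            for i, s in enumerate(processed):
                output[i::num_streams] = s
            return output

        if __name__ == "__main__":
            print(run_split([1, 2, 3, 4, 5, 6]))  # [3, 5, 7, 9, 11, 13]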

    Assisted composition of quantum algorithms

    Publication No.: US12260191B2

    Publication Date: 2025-03-25

    Application No.: US18471692

    Filing Date: 2023-09-21

    Abstract: A quantum programming environment may include an assisted composition system to assist the composition of quantum objects. The assisted composition system may receive a partial portion of a quantum object that is being composed but not yet fully composed by a user. The assisted composition system may determine a first abstract representation of the partial portion of the quantum object being composed. The assisted composition system may determine that the first abstract representation resembles at least a first portion of a second abstract representation of a stored quantum object stored in a library for the quantum programming environment. The assisted composition system may obtain a second portion of the stored quantum object from the library and provide it to the user as a next portion to the partial portion of the quantum object being composed.
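
    As a rough illustration of the matching step, the Python sketch below treats a quantum object's abstract representation as a plain sequence of gate names and suggests the remaining portion of the first library entry whose representation begins with the partial sequence. The library contents and matching rule are invented for the example and are not taken from the patent.

        # Illustrative sketch (not the patented system): represent a quantum object
        # as a sequence of gate names, match a partial sequence against a library,
        # and suggest the remainder of the best-matching stored object.

        LIBRARY = {
            "bell_pair": ["H", "CNOT"],
            "ghz_state": ["H", "CNOT", "CNOT"],
            "teleport":  ["H", "CNOT", "CNOT", "H", "MEASURE", "MEASURE"],
        }

        def suggest_next(partial):
            # Find a stored object whose abstract representation starts with the
            # partial portion, and return the unfinished tail as the suggestion.
            for name, gates in LIBRARY.items():
                if gates[:len(partial)] == partial and len(gates) > len(partial):
                    return name, gates[len(partial):]
            return None, []

        if __name__ == "__main__":
            print(suggest_next(["H", "CNOT"]))  # ('ghz_state', ['CNOT'])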

    Enhanced search autocompletion framework

    Publication No.: US12259916B1

    Publication Date: 2025-03-25

    Application No.: US18331468

    Filing Date: 2023-06-08

    Abstract: Systems and techniques are disclosed for determining relevant search query autocompletions for presentation to a user who has entered a prefix into a search application interface. A target search space represented as a knowledge graph may be searched to generate a subgraph of nodes representing autocompletion candidates that correspond to the prefix. Further, historical and/or interaction data associated with the user and/or other users of the search application may be used to determine additional autocompletion candidates. A machine-learned model may be trained to score and rank the candidates based on embeddings extracted by the model for the autocompletion candidates. A listing of the autocompletion candidates, ordered based on the scores, may be presented as autocompletion suggestions to the user.
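
    The sketch below illustrates the candidate-generation and ranking flow in simplified form. A prefix match over a flat list stands in for the knowledge-graph search, and a hand-written score stands in for the machine-learned model; all names and data are assumptions made for the example.

        # Rough sketch under simplifying assumptions: knowledge-graph nodes and
        # historical queries supply autocompletion candidates for a prefix, and a
        # placeholder score stands in for the machine-learned ranking model.

        KNOWLEDGE_GRAPH_NODES = ["wireless mouse", "wireless keyboard", "wired headset"]
        USER_HISTORY = ["wireless earbuds", "wireless charger"]

        def candidates_for(prefix):
            pool = KNOWLEDGE_GRAPH_NODES + USER_HISTORY
            return [c for c in pool if c.startswith(prefix)]

        def score(candidate, prefix):
            # Stand-in for embedding-based scoring: prefer shorter completions
            # and candidates drawn from the user's own history.
            bonus = 1.0 if candidate in USER_HISTORY else 0.0
            return bonus - 0.01 * (len(candidate) - len(prefix))

        def autocomplete(prefix, k=3):
            ranked = sorted(candidates_for(prefix), key=lambda c: score(c, prefix), reverse=True)
            return ranked[:k]

        if __name__ == "__main__":
            print(autocomplete("wireless"))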

    Techniques for conserving power on a device

    Publication No.: US12257496B1

    Publication Date: 2025-03-25

    Application No.: US17665911

    Filing Date: 2022-02-07

    Abstract: This disclosure describes, in part, techniques for conserving power on an electronic device. For instance, at given time intervals, the electronic device may be sending input data to a network device and receiving audio data from the network device. The electronic device may then use one or more techniques to determine when to switch from operating in a first mode, where the electronic device sends and/or receives the data, to operating in a second mode, where the electronic device ceases sending and/or receiving the data. For example, the electronic device may make the determination based on an amount of data stored in a buffer, whether the electronic device receives data, using data received from the network device, and/or the like. Based on the determination, the electronic device may switch to the second mode in order to conserve power.
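
    One of the triggers mentioned, switching modes based on the amount of data stored in a buffer, can be sketched as a small state machine. The threshold, mode names, and buffer behavior below are illustrative assumptions, not details taken from the patent.

        # Simplified sketch of one described trigger: switch from an active mode to
        # a power-saving mode once the audio buffer holds enough data, and switch
        # back as the buffer drains. Threshold and mode names are illustrative.

        BUFFER_THRESHOLD = 8  # illustrative number of buffered audio frames

        class Device:
            def __init__(self):
                self.mode = "active"   # sending input data / receiving audio data
                self.buffer = []

            def receive_audio_frame(self, frame):
                self.buffer.append(frame)
                if self.mode == "active" and len(self.buffer) >= BUFFER_THRESHOLD:
                    # Enough data buffered: cease sending/receiving to save power.
                    self.mode = "power_save"

            def play_frame(self):
                if self.buffer:
                    frame = self.buffer.pop(0)
                    if self.mode == "power_save" and len(self.buffer) < BUFFER_THRESHOLD // 2:
                        # Buffer draining: resume sending/receiving.
                        self.mode = "active"
                    return frame

        if __name__ == "__main__":
            d = Device()
            for i in range(BUFFER_THRESHOLD):
                d.receive_audio_frame(i)
            print(d.mode)  # power_save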

    Dynamic response signing capability in a distributed system

    Publication No.: US12256018B1

    Publication Date: 2025-03-18

    Application No.: US18376756

    Filing Date: 2023-10-04

    Abstract: A system that provides responses to requests obtains a key that is used to digitally sign the response. The key is derived from information that is shared with a requestor to which the response is sent. Using the shared information, the requestor derives a key usable to verify the digital signature of the response, thereby enabling the requestor to operate in accordance with whether the digital signature matches the response.
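
    Assuming an HMAC-style construction purely for illustration, the sketch below shows how both sides can derive the same key from shared information, so the requestor can verify the signature on the response it receives. The derivation function and key usage are assumptions for the example, not the patented scheme.

        # Minimal illustration of the flow, assuming an HMAC-style scheme: both
        # sides derive the same signing key from shared information, so the
        # requestor can verify the signature on the response it receives.

        import hashlib
        import hmac

        def derive_key(shared_info: bytes) -> bytes:
            # Illustrative derivation; the actual scheme is not specified here.
            return hashlib.sha256(b"response-signing:" + shared_info).digest()

        def sign_response(shared_info: bytes, response: bytes) -> bytes:
            return hmac.new(derive_key(shared_info), response, hashlib.sha256).digest()

        def verify_response(shared_info: bytes, response: bytes, signature: bytes) -> bool:
            expected = hmac.new(derive_key(shared_info), response, hashlib.sha256).digest()
            return hmac.compare_digest(expected, signature)

        if __name__ == "__main__":
            shared = b"per-request shared information"
            body = b'{"status": "ok"}'
            sig = sign_response(shared, body)          # done by the responding system
            print(verify_response(shared, body, sig))  # done by the requestor -> True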

    Sparse machine learning acceleration

    Publication No.: US12254398B2

    Publication Date: 2025-03-18

    Application No.: US17301271

    Filing Date: 2021-03-30

    Abstract: To reduce the storage size of weight tensors and speed up loading of weight tensors from system memory, a compression technique can be employed to remove zero values from a weight tensor before storing the weight tensor in system memory. A sparsity threshold can be enforced to achieve a compression ratio target by forcing small weight values to zero during training. When the weight tensor is loaded from system memory, a direct memory access (DMA) engine with an in-line decompression unit can decompress the weight tensor on-the-fly. By performing the decompression in the DMA engine, expansion of the weight values back to the original weight tensor size can be carried out in parallel while other neural network computations are being performed by the processing unit.
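
    The compression idea can be sketched in software as pruning small weights to zero, storing only the nonzero values with a presence mask, and expanding back to the dense tensor on load. In the patent the expansion is performed in-line by the DMA engine; the Python below is only a behavioral illustration with an assumed threshold and encoding.

        # Software sketch of the storage idea (the real decompression happens in a
        # DMA engine): force small weights to zero, store only the nonzero values
        # plus a bitmask, and expand back to the dense tensor when loading.

        SPARSITY_THRESHOLD = 0.05  # illustrative; chosen to hit a compression target

        def compress(weights):
            pruned = [w if abs(w) >= SPARSITY_THRESHOLD else 0.0 for w in weights]
            mask = [1 if w != 0.0 else 0 for w in pruned]
            values = [w for w in pruned if w != 0.0]
            return mask, values

        def decompress(mask, values):
            out, it = [], iter(values)
            for bit in mask:
                out.append(next(it) if bit else 0.0)
            return out

        if __name__ == "__main__":
            dense = [0.40, 0.01, -0.30, 0.02, 0.00, 0.75]
            mask, values = compress(dense)
            print(mask, values)           # [1, 0, 1, 0, 0, 1] [0.4, -0.3, 0.75]
            print(decompress(mask, values))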
