SYSTEMS FOR ESTIMATING BIT ERROR RATE (BER) OF ENCODED DATA USING NEURAL NETWORKS

    Publication No.: WO2023059517A1

    Publication Date: 2023-04-13

    Application No.: PCT/US2022/045432

    Filing Date: 2022-09-30

    Abstract: Examples described herein utilize multi-layer neural networks, such as multi-layer recurrent neural networks, to estimate a bit error rate (BER) of encoded data based on a version of the encoded data (e.g., data encoded using one or more encoding techniques) retrieved from a memory. The neural networks may have nonlinear mapping and distributed processing capabilities which may be advantageous for estimating a BER of encoded data, e.g., to facilitate decoding of the encoded data. In this manner, neural networks described herein may be used to improve or facilitate aspects of decoding at ECC decoders, e.g., by comparing an estimated BER to a threshold (e.g., a threshold BER level) prior to decoding of the encoded data. For example, an additional NN activation indication may be provided, e.g., to indicate that the encoded data may be decoded or to indicate that the error present in the encoded data is to be reduced.
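
    A minimal sketch of the threshold-gating idea described above, assuming a pre-trained recurrent estimator is available as plain NumPy weight matrices; the weights, layer sizes, and function names below are illustrative placeholders, not the publication's architecture.

        import numpy as np

        def estimate_ber(soft_bits, W_in, W_rec, w_out):
            # Run a simple recurrent cell over per-bit soft read values and squash the
            # final hidden state to a BER estimate in [0, 1]. Placeholder weights only.
            h = np.zeros(W_rec.shape[0])
            for x in soft_bits:
                h = np.tanh(W_in @ np.array([x]) + W_rec @ h)
            return 1.0 / (1.0 + np.exp(-(w_out @ h)))

        def handle_codeword(soft_bits, threshold, W_in, W_rec, w_out):
            # Compare the estimated BER to a threshold before ECC decoding: decode
            # directly if below it, otherwise indicate that error should be reduced first.
            ber = estimate_ber(soft_bits, W_in, W_rec, w_out)
            return "decode" if ber < threshold else "reduce_error_first"

        rng = np.random.default_rng(0)
        hidden = 8
        W_in = rng.normal(size=(hidden, 1))
        W_rec = 0.1 * rng.normal(size=(hidden, hidden))
        w_out = rng.normal(size=hidden)
        soft_bits = rng.normal(loc=3.0, size=128)   # soft values retrieved from memory
        print(handle_codeword(soft_bits, threshold=0.01, W_in=W_in, W_rec=W_rec, w_out=w_out))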

    DEEP LEARNING ACCELERATOR AND RANDOM ACCESS MEMORY WITH SEPARATE MEMORY ACCESS CONNECTIONS

    Publication No.: WO2021206974A1

    Publication Date: 2021-10-14

    Application No.: PCT/US2021/025018

    Filing Date: 2021-03-30

    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. An integrated circuit may be configured to execute instructions with matrix operands and configured with: random access memory configured to store instructions executable by the Deep Learning Accelerator and to store matrices of an Artificial Neural Network; a connection between the random access memory and the Deep Learning Accelerator; a first interface to a memory controller of a Central Processing Unit; and a second interface to a direct memory access controller. While the Deep Learning Accelerator is using the random access memory to process the current input to the Artificial Neural Network and generate the current output, the direct memory access controller may concurrently load the next input into the random access memory, and the Central Processing Unit may concurrently retrieve the prior output from the random access memory.
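
    A minimal sketch of the overlap the abstract describes, assuming hypothetical process, load, and retrieve callables standing in for the Deep Learning Accelerator, the DMA controller, and the CPU's memory interface; the slot-rotation scheme is illustrative only.

        from concurrent.futures import ThreadPoolExecutor

        def pipeline(inputs, process, load, retrieve, slots=3):
            # Rotate buffer slots in the shared random access memory so that, in each
            # step, the accelerator processes the current input while the DMA controller
            # loads the next input and the CPU retrieves the prior output.
            ram = [None] * slots
            outputs = []
            with ThreadPoolExecutor(max_workers=3) as pool:
                ram[0] = load(inputs[0])
                for i in range(len(inputs)):
                    cur, nxt, prv = i % slots, (i + 1) % slots, (i - 1) % slots
                    work = pool.submit(process, ram[cur])            # DLA: current input
                    prefetch = (pool.submit(load, inputs[i + 1])     # DMA: next input
                                if i + 1 < len(inputs) else None)
                    if i > 0:                                        # CPU: prior output
                        outputs.append(pool.submit(retrieve, ram[prv]).result())
                    ram[cur] = work.result()
                    if prefetch is not None:
                        ram[nxt] = prefetch.result()
                outputs.append(retrieve(ram[(len(inputs) - 1) % slots]))
            return outputs

        # Toy usage: "processing" doubles each value.
        print(pipeline([1, 2, 3, 4], process=lambda x: x * 2,
                       load=lambda x: x, retrieve=lambda x: x))      # [2, 4, 6, 8]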

    METHODS AND APPARATUS FOR PERSISTENT BIOMETRIC PROFILING

    Publication No.: WO2021055796A1

    Publication Date: 2021-03-25

    Application No.: PCT/US2020/051562

    Filing Date: 2020-09-18

    Abstract: Methods and apparatus for biometric data maintenance, access and distribution across two or more experiential and/or network domains. In one embodiment, a 5G NR-based network architecture is provided which allows ultra-low latency and effectively user-imperceptible biometric data use for, e.g., authentication and maintenance of user state across multiple domains via multiple constituent user devices (e.g., UEs). The network architecture includes both (i) a distributed biometric database (BDB) model wherein relevant biometric data for individuals/UEs is intelligently cached in various portions of the distributed database, and (ii) centralized and local BAEs (biometric analytics entities) which manage the aforementioned intelligent caching, as well as network configuration using one or both of 5G NR network "slicing" and CU/DU split options, to optimize end-user biometric-related applications such as those providing identification/authentication, AR functions, VR functions, or others.
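
    A minimal sketch of the intelligent-caching decision, assuming a hypothetical placement rule; the node names, the 50% locality cutoff, and the function name are illustrative and are not taken from the publication.

        from collections import Counter

        def place_biometric_profile(access_log, edge_nodes, central_node="core-bdb"):
            # Cache a user's biometric profile at the edge node serving most of that
            # user's recent accesses; otherwise keep it in the centralized database.
            hits = Counter(node for node in access_log if node in edge_nodes)
            if not hits:
                return central_node
            node, count = hits.most_common(1)[0]
            # Push to the edge only when locality is strong enough to pay off.
            return node if count / len(access_log) >= 0.5 else central_node

        log = ["edge-ny", "edge-ny", "edge-sf", "edge-ny"]   # recent UE auth events
        print(place_biometric_profile(log, edge_nodes={"edge-ny", "edge-sf"}))  # edge-ny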

    WIRELESS DEVICES AND SYSTEMS INCLUDING EXAMPLES OF COMPENSATING POWER AMPLIFIER NOISE

    Publication No.: WO2019226736A1

    Publication Date: 2019-11-28

    Application No.: PCT/US2019/033459

    Filing Date: 2019-05-22

    Abstract: Examples described herein include methods, devices, and systems which may compensate input data for non-linear power amplifier noise to generate compensated input data. To compensate for the noise, a switch path is activated during an uplink transmission time interval (TTI) to provide amplified input data to a receiver stage that includes a coefficient calculator. The coefficient calculator may calculate an error representative of the noise, based partly on the input signal to be transmitted and a feedback signal, to generate coefficient data associated with the power amplifier noise; the feedback signal is provided to the coefficient calculator after processing through the receiver. During an uplink TTI, the amplified input data may also be transmitted as an RF wireless transmission via an RF antenna. During a downlink TTI, the switch path may be deactivated and the receiver stage may receive an additional RF wireless transmission to be processed.
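
    A minimal sketch of the coefficient calculator, assuming a normalized least-mean-squares update over the basis {x, x|x|^2}; the publication does not specify the adaptation rule or basis, so both are illustrative, as is the toy amplifier model.

        import numpy as np

        def update_coefficients(coeffs, x, feedback, mu=0.5):
            # One NLMS step: model the amplifier output with the basis {x, x|x|^2} and
            # nudge the coefficients toward the feedback captured on the switch path.
            basis = np.array([x, x * np.abs(x) ** 2])
            err = feedback - coeffs @ basis            # error representative of PA noise
            coeffs = coeffs + mu * err * np.conj(basis) / (np.vdot(basis, basis).real + 1e-12)
            return coeffs, err

        def compensate(x, coeffs):
            # Crude pre-compensation: remove the estimated nonlinear term from the input.
            return x - coeffs[1] * x * np.abs(x) ** 2

        rng = np.random.default_rng(1)
        coeffs = np.zeros(2, dtype=complex)
        pa = lambda s: s + 0.05 * s * np.abs(s) ** 2   # toy power-amplifier model
        for _ in range(2000):
            x = rng.normal() + 1j * rng.normal()
            coeffs, _ = update_coefficients(coeffs, x, pa(x))
        print(np.round(coeffs, 3))                     # approaches [1+0j, 0.05+0j]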

    DISCOVERY OF HARDWARE CHARACTERISTICS OF DEEP LEARNING ACCELERATORS FOR OPTIMIZATION VIA COMPILER

    Publication No.: WO2022098505A1

    Publication Date: 2022-05-12

    Application No.: PCT/US2021/055611

    Filing Date: 2021-10-19

    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A computing device running a compiler can interact with and/or probe an integrated circuit device to identify hardware characteristics of the integrated circuit device in performing matrix computations. The compiler can generate and optimize a result of compilation from a description of an artificial neural network based at least in part on the hardware characteristics of the integrated circuit device. The result of compilation can include first data representative of parameters of the artificial neural network and second data representative of instructions executable by the integrated circuit device to generate an output of the artificial neural network based on the first data and an input to the artificial neural network.
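
    A minimal sketch of a probe-then-compile flow, assuming a hypothetical run_matmul(device, m, n, k) callable that executes a test matrix multiplication on the accelerator; the tile-size search and the toy "instruction" tuples are illustrative, not the publication's procedure.

        import time

        def discover_best_tile(device, run_matmul, candidate_tiles=(16, 32, 64, 128)):
            # Probe the device with a square test matmul per candidate tile size and
            # keep the size with the highest measured throughput.
            best_tile, best_rate = None, 0.0
            for t in candidate_tiles:
                start = time.perf_counter()
                run_matmul(device, t, t, t)
                elapsed = time.perf_counter() - start
                rate = (2 * t ** 3) / elapsed          # multiply-accumulates per second
                if rate > best_rate:
                    best_tile, best_rate = t, rate
            return best_tile

        def compile_network(layer_shapes, tile):
            # Toy "result of compilation": block each layer's matmul by the discovered tile.
            return [("MATMUL_TILED", m, n, tile) for (m, n) in layer_shapes]

        # Simulated device whose sweet spot is a 64-wide tile.
        fake_runtime = {16: 1e-3, 32: 2e-3, 64: 3e-3, 128: 4e-2}
        sim = lambda dev, m, n, k: time.sleep(fake_runtime[m])
        tile = discover_best_tile("dla0", sim)
        print(tile, compile_network([(256, 256), (256, 10)], tile))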

    COMPUTATIONAL STORAGE AND NETWORKED BASED SYSTEM

    Publication No.: WO2020205598A1

    Publication Date: 2020-10-08

    Application No.: PCT/US2020/025405

    Filing Date: 2020-03-27

    Abstract: Methods, systems, and apparatuses related to computational storage are described. For example, storage may be shared between, and accessible to either of, a host and an accelerator. A computational storage system may include storage providing a portion of a shared file system accessible by a host and by accelerator logic of the computational storage system. Host interface logic may be configured to receive a storage command from the host to store data on the storage at the time the data is created. The host interface logic may be further configured to receive a storage command from the host for the accelerator logic to perform a computational task using the stored data on the storage. The accelerator logic can then perform the computational task using the stored data on the storage.
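
    A minimal sketch of the command flow, assuming hypothetical class and method names; the shared file system is modeled as an in-memory dict so the host and accelerator roles stay visible.

        class ComputationalStorage:
            # Storage visible to both the host and the accelerator logic. The host issues
            # one command to store data as it is created and a second command asking the
            # accelerator to run a computation over the already-stored data.

            def __init__(self):
                self._shared_fs = {}                         # stand-in for the shared file system

            def host_store(self, path, data):                # host-issued storage command
                self._shared_fs[path] = data

            def host_offload(self, path, task):              # host-issued compute command
                return self._accelerator_run(task, self._shared_fs[path])

            def _accelerator_run(self, task, data):          # accelerator logic, near the data
                if task == "sum":
                    return sum(data)
                if task == "count":
                    return len(data)
                raise ValueError(f"unknown task: {task}")

        cs = ComputationalStorage()
        cs.host_store("/sensor/run42.bin", [3, 1, 4, 1, 5])   # data stored at creation time
        print(cs.host_offload("/sensor/run42.bin", "sum"))    # computed without moving data to the host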

    NEURON CALCULATOR FOR ARTIFICIAL NEURAL NETWORKS

    Publication No.: WO2020047254A1

    Publication Date: 2020-03-05

    Application No.: PCT/US2019/048810

    Filing Date: 2019-08-29

    Abstract: Examples described herein include systems and methods, including wireless devices and systems with neuron calculators that may perform one or more functionalities of a wireless transceiver. The neuron calculator calculates output signals and may be implemented, for example, using accumulation units that sum the multiplicative processing results of ordered sets from ordered neurons, weighted by the connection weight for each connection between an ordered neuron and an output of the neuron calculator. Each ordered set may be a combination of some of the input signals, with the number of signals determined by the order of the neuron. Accordingly, a kth-order neuron may include an ordered set comprising product values of k input signals, where the input signals are selected from a set of k-combinations with repetition. As an example, in a wireless transceiver the neuron calculator may perform channel estimation as a channel estimation processing component of the receiver portion.
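
    A minimal sketch of the kth-order term expansion and the accumulation step, assuming placeholder connection weights; the function names are illustrative (Python 3.8+ for math.prod).

        from itertools import combinations_with_replacement
        from math import prod

        def neuron_terms(inputs, k):
            # Product values of k input signals drawn as k-combinations with repetition,
            # i.e. the ordered set a kth-order neuron operates on.
            return [prod(c) for c in combinations_with_replacement(inputs, k)]

        def neuron_calculator_output(inputs, weights_per_order):
            # Accumulate the weighted terms of each ordered neuron (order 1, 2, ..., K)
            # into a single output, mirroring the accumulation units described above.
            output = 0.0
            for k, weights in enumerate(weights_per_order, start=1):
                terms = neuron_terms(inputs, k)
                assert len(weights) == len(terms), "one connection weight per term"
                output += sum(w * t for w, t in zip(weights, terms))
            return output

        x = [0.5, -1.0, 2.0]                      # example input signals
        w1 = [0.1, 0.2, 0.3]                      # 3 first-order terms
        w2 = [0.01] * 6                           # C(3+2-1, 2) = 6 second-order terms
        print(neuron_calculator_output(x, [w1, w2]))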

    FULL DUPLEX DEVICE-TO-DEVICE COOPERATIVE COMMUNICATION

    Publication No.: WO2019050980A1

    Publication Date: 2019-03-14

    Application No.: PCT/US2018/049598

    Filing Date: 2018-09-05

    Abstract: Examples described herein include apparatuses and methods for full duplex device-to-device cooperative communication. Example systems described herein may include self-interference noise calculators. The output of a self-interference noise calculator may be used to compensate for the interference experienced due to signals transmitted by another antenna of the same wireless device or system. In implementing such a self-interference noise calculator, a selected wireless relaying device or wireless destination device may operate in a full-duplex mode, such that relayed messages, as well as information from other sources or destinations, may be transmitted during a common time period (e.g., symbol, slot, subframe, etc.).
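
    A minimal sketch of self-interference cancellation, assuming the self-interference path can be modeled as a short FIR filter and estimated by least squares; the publication's calculator may use a different structure, and the signal parameters below are illustrative.

        import numpy as np

        def build_regressor(tx, taps):
            # Stack delayed copies of the known transmit signal so regressor @ h models
            # a length-`taps` FIR self-interference path.
            n = len(tx)
            cols = [np.concatenate([np.zeros(d), tx[: n - d]]) for d in range(taps)]
            return np.stack(cols, axis=1)

        def cancel_self_interference(rx, tx, taps=4):
            # Estimate the contribution of this device's own transmission in the received
            # signal and subtract it, leaving the signal from the remote device.
            A = build_regressor(tx, taps)
            h, *_ = np.linalg.lstsq(A, rx, rcond=None)
            return rx - A @ h

        rng = np.random.default_rng(2)
        tx = rng.normal(size=256)                             # this device's own transmission
        self_interf = 0.8 * tx + 0.3 * np.concatenate([[0.0], tx[:-1]])
        remote = 0.1 * rng.normal(size=256)                   # the signal we actually want
        rx = self_interf + remote
        cleaned = cancel_self_interference(rx, tx)
        print(round(float(np.std(rx)), 3), round(float(np.std(cleaned)), 3))   # residual shrinks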

    MEMORY DEVICES AND METHODS WHICH MAY FACILITATE TENSOR MEMORY ACCESS

    Publication No.: WO2018194824A1

    Publication Date: 2018-10-25

    Application No.: PCT/US2018/025660

    Filing Date: 2018-04-02

    Abstract: Examples described herein include systems and methods involving an apparatus comprising a memory array including a plurality of memory cells and a memory controller coupled to the memory array. The memory controller comprises a memory mapper configured to set up a memory map based on a memory command associated with a memory access operation. The memory map comprises a specific sequence of memory access instructions to access at least one memory cell of the memory array. For example, the specific sequence of memory access instructions for a diagonal memory command comprises a sequence of memory access instructions that each access a memory cell along a diagonal of the memory array.
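
    A minimal sketch of expanding a diagonal memory command into a per-cell access sequence, assuming row-major addressing and column wrap-around; both choices are illustrative, as the publication does not fix them.

        def diagonal_access_sequence(rows, cols, start_col=0, row_stride=None):
            # One address per row, stepping one column per row: the specific sequence a
            # memory mapper could issue for a "diagonal" memory command.
            row_stride = cols if row_stride is None else row_stride
            return [r * row_stride + (start_col + r) % cols for r in range(rows)]

        def read_diagonal(memory, rows, cols, start_col=0):
            # Apply the sequence to a flat, row-major memory image.
            return [memory[a] for a in diagonal_access_sequence(rows, cols, start_col)]

        mem = list(range(16))                            # toy 4x4 array stored row-major
        print(diagonal_access_sequence(4, 4))            # [0, 5, 10, 15]
        print(read_diagonal(mem, 4, 4, start_col=1))     # [1, 6, 11, 12] (wraps at the edge)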
