PRIVACY MEASUREMENT AND QUANTIFICATION
    Invention Application (In Force)

    Publication No.: US20150261959A1

    Publication Date: 2015-09-17

    Application No.: US14627185

    Application Date: 2015-02-20

    Abstract: System(s) and method(s) to provide privacy measurement and privacy quantification of sensor data are disclosed. The sensor data is received from a sensor. The private content associated with the sensor data is used to calculate a privacy measuring factor using an entropy-based information-theoretic model. A compensation value with respect to distribution dissimilarity is determined; this value compensates for a statistical deviation in the privacy measuring factor. The compensation value and the privacy measuring factor are used to determine a privacy quantification factor, which is scaled with respect to a predefined finite scale to obtain at least one scaled privacy quantification factor, thereby quantifying the privacy of the sensor data.
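    The abstract does not give the concrete formulas, so the following is only a minimal sketch of the described pipeline. It assumes the privacy measuring factor is the Shannon entropy of the observed private-content distribution, the compensation value is a KL-divergence-style correction against a reference distribution, and the finite scale is 0-10; all three choices are illustrative assumptions, not the patent's method.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def privacy_quantification(sensor_values, reference_probs, scale_max=10.0):
    """Sketch of the abstract's pipeline: an entropy-based privacy measuring
    factor, a compensation value for distribution dissimilarity, and scaling
    onto a finite scale. Concrete formulas are assumptions."""
    # Empirical distribution of the private content observed in the sensor data.
    values, counts = np.unique(sensor_values, return_counts=True)
    observed = counts / counts.sum()

    # Privacy measuring factor: entropy of the observed private content (assumption).
    measuring_factor = shannon_entropy(observed)

    # Compensation value: penalise dissimilarity from a reference distribution,
    # here via KL divergence over the shared support (assumption).
    ref = np.array([reference_probs.get(v, 1e-12) for v in values])
    ref = ref / ref.sum()
    compensation = float(np.sum(observed * np.log2(observed / ref)))

    # Privacy quantification factor, scaled to [0, scale_max].
    quant = max(measuring_factor - compensation, 0.0)
    max_entropy = np.log2(len(values)) if len(values) > 1 else 1.0
    return scale_max * min(quant / max_entropy, 1.0)

# Example: categorical location readings from a sensor.
readings = np.array(["home", "home", "office", "gym", "home", "office"])
reference = {"home": 0.5, "office": 0.3, "gym": 0.2}
print(round(privacy_quantification(readings, reference), 2))
```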


    METHOD AND SYSTEM FOR A LOW-POWER LOSSLESS IMAGE COMPRESSION USING A SPIKING NEURAL NETWORK

    Publication No.: US20240422334A1

    Publication Date: 2024-12-19

    Application No.: US18740775

    Application Date: 2024-06-12

    Abstract: This disclosure relates generally to reducing earth-bound image volume with an efficient lossless compression technique. The embodiments provide a method and system for reducing earth-bound image volume based on a Spiking Neural Network (SNN) model, together with a complete lossless compression framework comprising an SNN-based Density Estimator (DE) followed by a classical Arithmetic Encoder (AE). The SNN model is used to obtain residual errors, which are compressed by the AE and then transmitted to the receiving station. While reducing power consumption during transmission by a similar percentage, the system also saves in-situ computation power by using an SNN-based DE rather than its Deep Neural Network (DNN) counterpart. The SNN model has a lower memory footprint and lower latency than a corresponding Artificial Neural Network (ANN) model, which exactly fits the requirements for on-board computation in small satellites.
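    To make the DE-plus-AE framework concrete, the sketch below stands in for the SNN-based Density Estimator with a trivial Laplace-smoothed histogram predictor and represents the Arithmetic Encoder by its ideal code length (an arithmetic coder approaches -log2 p(symbol) bits per symbol). The predictor and the example signal are assumptions; only the pipeline shape (predict a distribution, code each pixel against it) follows the abstract.

```python
import numpy as np

def density_estimator(prev_pixels, levels=256):
    """Stand-in for the SNN-based Density Estimator (DE): predicts a
    probability distribution over the next pixel value from its causal
    context. A Laplace-smoothed histogram is used purely for illustration;
    the patent's estimator is an SNN."""
    hist = np.bincount(prev_pixels, minlength=levels).astype(float) + 1.0
    return hist / hist.sum()

def ideal_code_length_bits(image_row):
    """Ideal Arithmetic Encoder (AE) output size for one row of pixels:
    a real arithmetic coder approaches -log2 p(symbol) bits per symbol,
    so the total is the sum of those terms."""
    total_bits = 0.0
    for i in range(1, len(image_row)):
        probs = density_estimator(image_row[:i])
        total_bits += -np.log2(probs[image_row[i]])
    return total_bits

# Example: a smooth 1-D "scanline" codes below the raw 8 bits/pixel.
row = np.clip(np.round(128 + 5 * np.sin(np.linspace(0, 3, 200))), 0, 255).astype(int)
bits = ideal_code_length_bits(row)
print(f"{bits / len(row):.2f} bits/pixel (raw: 8.00)")
```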

    METHOD AND SYSTEM OF SPIKING NEURAL NETWORK-BASED ECG CLASSIFIER FOR WEARABLE EDGE DEVICES

    Publication No.: US20240176987A1

    Publication Date: 2024-05-30

    Application No.: US18368859

    Application Date: 2023-09-15

    CPC classification number: G06N3/045 G06N3/08 G16H40/67 G16H50/20

    Abstract: This disclosure relates generally to a method and system for a spiking neural network-based ECG classifier for wearable edge devices. Employing deep neural networks to extract features from ECG signals incurs high computational intensity and large power consumption. The spiking neural network of the present disclosure obtains a training dataset comprising a plurality of ECG time-series data, and it comprises a reservoir-based spiking neural network and a feed-forward spiking neural network. Each spiking neural network, having a logistic regression-based ECG classifier, is trained to classify one or more class labels. The peak-based spike encoder of each spiking neural network obtains a plurality of encoded spike trains from the plurality of ECG time series and provides high performance for classifying one or more labels. The efficacy of the peak-based spike encoder for classification is experimentally evaluated on different datasets.
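    The abstract names a peak-based spike encoder feeding a logistic regression-based classifier but does not give the encoding rule. The sketch below assumes a simple rule (spike at thresholded local maxima) and a spike-count-per-bin readout; the synthetic "ECG" signals, the feature choice, and the thresholds are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def peak_based_spike_encoder(ecg, threshold=0.5):
    """Hypothetical peak-based spike encoder: emit a spike (1) at samples
    that are local maxima above a threshold, 0 elsewhere."""
    spikes = np.zeros_like(ecg, dtype=int)
    for t in range(1, len(ecg) - 1):
        if ecg[t] > threshold and ecg[t] >= ecg[t - 1] and ecg[t] > ecg[t + 1]:
            spikes[t] = 1
    return spikes

def spike_features(spike_train, n_bins=8):
    """Simple readout: spike counts per time bin (assumption)."""
    return np.array([b.sum() for b in np.array_split(spike_train, n_bins)])

# Toy dataset: class 0 = slow rhythm, class 1 = fast rhythm (synthetic peaky signals).
rng = np.random.default_rng(0)
def synth_ecg(rate, n=400):
    t = np.arange(n)
    return np.sin(2 * np.pi * rate * t / n) ** 31 + 0.05 * rng.standard_normal(n)

X = np.array([spike_features(peak_based_spike_encoder(synth_ecg(r)))
              for r in [2, 3, 2, 3, 12, 13, 12, 13]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Logistic-regression-based classifier trained on the encoded spike trains.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(spike_features(peak_based_spike_encoder(synth_ecg(12)))[None, :]))
```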

    FIELD PROGRAMMABLE GATE ARRAY (FPGA) BASED NEUROMORPHIC COMPUTING ARCHITECTURE

    Publication No.: US20230122192A1

    Publication Date: 2023-04-20

    Application No.: US17684937

    Application Date: 2022-03-02

    Abstract: This disclosure relates generally to a method and a system for computing using a field programmable gate array (FPGA) based neuromorphic architecture. Implementing energy-efficient Artificial Intelligence (AI) applications on power-constrained environments/devices is challenging due to the large energy consumption during both training and inferencing. The disclosure is an FPGA-based neuromorphic computing platform whose basic components include a plurality of neurons and memory. The FPGA neuromorphic architecture is parameterized, parallel, and modular, thus enabling improved energy per inference and latency-throughput. Based on the values of the plurality of features of the data set, the FPGA neuromorphic architecture is generated in a modular and parallel fashion. The output of the disclosed FPGA neuromorphic architecture is the plurality of output spikes from the neurons, which become the basis of inference for computing.
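    The abstract describes parameterized, parallel neuron blocks whose output spikes drive inference, but does not name the neuron model. The following is a Python behavioural sketch, assuming leaky integrate-and-fire (LIF) neurons and a per-neuron hardware block; the parameters (leak, threshold, reset) are the kind a parameterized FPGA block would expose, not values taken from the patent.

```python
import numpy as np

class LIFNeuron:
    """Behavioural model of one neuron block (LIF assumed)."""
    def __init__(self, leak=0.9, threshold=1.0, v_reset=0.0):
        self.leak, self.threshold, self.v_reset = leak, threshold, v_reset
        self.v = v_reset  # membrane potential register

    def step(self, weighted_input):
        """One clock tick: leak, integrate, compare against threshold."""
        self.v = self.leak * self.v + weighted_input
        if self.v >= self.threshold:
            self.v = self.v_reset
            return 1  # output spike
        return 0

def run_layer(weights, input_spikes):
    """A modular, parallel layer: each neuron integrates its weighted
    input spikes independently, mirroring per-neuron hardware blocks."""
    neurons = [LIFNeuron() for _ in range(weights.shape[0])]
    out = np.zeros((input_spikes.shape[0], weights.shape[0]), dtype=int)
    for t, spikes in enumerate(input_spikes):
        currents = weights @ spikes            # synaptic accumulation
        out[t] = [n.step(c) for n, c in zip(neurons, currents)]
    return out  # output spike trains: the basis of inference

# Example: 3 output neurons, 4 input lines, 20 time steps of random input spikes.
rng = np.random.default_rng(1)
w = rng.uniform(0.0, 0.6, size=(3, 4))
x = (rng.random((20, 4)) < 0.3).astype(int)
print(run_layer(w, x).sum(axis=0))  # spike counts per output neuron
```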
