ACTIVATION FUNCTION FOR HOMOMORPHICALLY-ENCRYPTED NEURAL NETWORKS

    Publication No.: US20240303471A1

    Publication Date: 2024-09-12

    Application No.: US18178684

    Filing Date: 2023-03-06

    CPC classification number: G06N3/047

    Abstract: Implementations herein disclose an activation function for homomorphically-encrypted neural networks. A data-agnostic activation technique is provided that collects information about the distribution of the most-dominant activated locations in the feature maps of the trained model and maintains a map of those locations. This map, along with a defined percentage of random locations, determines which neurons in the model are activated by the activation function. Advantages of implementations herein include efficient activation function computation in encrypted neural network inference, with no data-dependent computation performed at inference time (i.e., the technique is data-agnostic). Implementations incur negligible overhead in model storage, preserve the same accuracy as general activation functions, and run orders of magnitude faster than approximation-based activation functions. Furthermore, implementations herein can be applied post hoc to already-trained models and, as such, do not require fine-tuning.
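    The masking scheme the abstract describes can be sketched on plaintext arrays. This is an illustrative reconstruction, not the patented implementation: the function names, `keep_fraction`, and `random_fraction` parameters are assumptions, and the plaintext `np.where` stands in for what would be a ciphertext-by-plaintext-mask multiplication in the encrypted setting.

    ```python
    import numpy as np

    def build_activation_map(feature_maps, keep_fraction=0.1):
        """Offline step: collect the most-dominant activated locations across
        a batch of feature maps from the trained model and return a fixed
        binary mask.  `keep_fraction` (illustrative) is the share of
        locations to mark as dominant."""
        # Average activation magnitude per location over the batch.
        mean_activation = np.mean(np.abs(feature_maps), axis=0)
        k = max(1, int(keep_fraction * mean_activation.size))
        threshold = np.partition(mean_activation.ravel(), -k)[-k]
        return mean_activation >= threshold

    def data_agnostic_activation(x, activation_map, random_fraction=0.05, rng=None):
        """Inference step: pass values through at the pre-selected dominant
        locations plus a defined percentage of random locations, and zero
        everywhere else.  No comparison against the (encrypted) input is
        ever made, so the operation is data-agnostic."""
        rng = np.random.default_rng(rng)
        random_mask = rng.random(x.shape) < random_fraction
        mask = activation_map | random_mask
        # Under HE this would be a ciphertext multiplied by a plaintext
        # 0/1 mask; here it is modeled on plaintext arrays.
        return np.where(mask, x, 0.0)
    ```

    The key property is that `mask` depends only on the stored map and a random draw, never on `x`, which is what removes the expensive encrypted comparison.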

    TECHNOLOGIES FOR COLLECTIVE AUTHORIZATION WITH HIERARCHICAL GROUP KEYS

    Publication No.: US20230075259A1

    Publication Date: 2023-03-09

    Application No.: US18051825

    Filing Date: 2022-11-01

    Abstract: Technologies for secure collective authorization include multiple computing devices in communication over a network. A computing device may perform a join protocol with a group leader to receive a group private key that is associated with an interface implemented by the computing device. The interface may be an instance of an object model implemented by the computing device or membership of the computing device in a subsystem. The computing device receives a request for attestation to the interface, selects the group private key for the interface, and sends an attestation in response to the request. Another computing device may receive the attestation and verify the attestation with a group public key corresponding to the group private key. The group private key may be an enhanced privacy identifier (EPID) private key, and the group public key may be an EPID public key. Other embodiments are described and claimed.
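    The select-key-per-interface flow in the abstract can be sketched as follows. This is a toy model only: a shared symmetric HMAC key stands in for an EPID group private key, whereas real EPID uses asymmetric group signatures verified with a distinct group public key, and the join protocol with the group leader is not modeled.

    ```python
    import hashlib
    import hmac
    import secrets

    class GroupMember:
        """Toy sketch of a device holding one group key per interface."""
        def __init__(self):
            # interface name -> group key received via the join protocol
            # (the join protocol itself is not modeled here).
            self.group_keys = {}

        def join(self, interface, group_key):
            self.group_keys[interface] = group_key

        def attest(self, interface, nonce):
            # Select the group key associated with the requested interface
            # and answer the verifier's nonce with an attestation.
            key = self.group_keys[interface]
            return hmac.new(key, nonce, hashlib.sha256).digest()

    def verify(group_key, nonce, attestation):
        # Verifier side: check the attestation against the group key.
        expected = hmac.new(group_key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, attestation)
    ```

    The point of the sketch is the key-selection step: the attestation proves membership in the group tied to an interface, not the identity of the individual device.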

    Defending neural networks by randomizing model weights

    Publication No.: US11568211B2

    Publication Date: 2023-01-31

    Application No.: US16233700

    Filing Date: 2018-12-27

    Abstract: The present disclosure is directed to systems and methods for the selective introduction of low-level pseudo-random noise into at least a portion of the weights used in a neural network model to increase the robustness of the neural network and provide a stochastic transformation defense against perturbation type attacks. Random number generation circuitry provides a plurality of pseudo-random values. Combiner circuitry combines the pseudo-random values with a defined number of least significant bits/digits in at least some of the weights used to provide a neural network model implemented by neural network circuitry. In some instances, selection circuitry selects pseudo-random values for combination with the network weights based on a defined pseudo-random value probability distribution.
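    A software sketch of the RNG-plus-combiner idea: combine pseudo-random values with only the low mantissa bits of float32 weights. The function name and the `n_bits`/`seed` parameters are illustrative, not from the patent, and XOR is one possible combiner.

    ```python
    import numpy as np

    def randomize_weight_lsbs(weights, n_bits=4, seed=None):
        """Combine pseudo-random values with the `n_bits` least-significant
        mantissa bits of float32 weights (a sketch of the described RNG
        and combiner circuitry)."""
        w = np.asarray(weights, dtype=np.float32).copy()
        bits = w.view(np.uint32)             # reinterpret, sharing w's buffer
        rng = np.random.default_rng(seed)
        noise = rng.integers(0, 1 << n_bits, size=bits.shape, dtype=np.uint32)
        mask = np.uint32((1 << n_bits) - 1)
        # XOR pseudo-random values into the low mantissa bits only; sign,
        # exponent, and high mantissa bits are untouched, so the perturbation
        # stays tiny relative to each weight's magnitude.
        bits ^= noise & mask
        return bits.view(np.float32)
    ```

    Flipping only the bottom 4 of 23 mantissa bits bounds the relative change per weight at about 15 × 2⁻²³ ≈ 1.8 × 10⁻⁶, which is the sense in which the noise is "low-level".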

    SECURITY OPTIMIZING COMPUTE DISTRIBUTION IN A HYBRID DEEP LEARNING ENVIRONMENT

    Publication No.: US20210406652A1

    Publication Date: 2021-12-30

    Application No.: US16912152

    Filing Date: 2020-06-25

    Abstract: Embodiments are directed to security optimizing compute distribution in a hybrid deep learning environment. An embodiment of an apparatus includes one or more processors to determine security capabilities and compute capabilities of a client machine requesting to use a machine learning (ML) model hosted by the apparatus; determine, based on the security capabilities and based on exposure criteria of the ML model, that one or more layers of the ML model can be offloaded to the client machine for processing; define, based on the compute capabilities of the client machine, a split level of the one or more layers of the ML model for partition of the ML model, the partition comprising offload layers of the one or more layers of the ML model to be processed at the client machine; and cause the offload layers of the ML model to be downloaded to the client machine.
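    The partition decision in the abstract can be sketched as a simple greedy split: offload the earliest layers to the client, bounded by both the model's exposure criteria and the client's compute budget. All field names (`flops`, `budget`, `attested`) and the greedy strategy are illustrative assumptions, not taken from the claims.

    ```python
    def choose_partition(model_layers, client, exposure_limit):
        """Return (offload_layers, server_layers) for a hypothetical client
        descriptor.  `exposure_limit` caps how many layers the model owner
        is willing to expose; `client["budget"]` caps client-side compute."""
        if not client["attested"]:         # security-capabilities check
            return [], list(model_layers)  # keep the whole model server-side
        split = 0
        used = 0.0
        for layer in model_layers:
            if split >= exposure_limit:    # exposure criteria of the ML model
                break
            if used + layer["flops"] > client["budget"]:
                break                      # compute capabilities of the client
            used += layer["flops"]
            split += 1
        return list(model_layers[:split]), list(model_layers[split:])
    ```

    The split level is thus the minimum imposed by the two constraints: what the model owner will expose and what the client can compute.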

    Technologies for anonymous context attestation and threat analytics

    Publication No.: US10440046B2

    Publication Date: 2019-10-08

    Application No.: US14866628

    Filing Date: 2015-09-25

    Abstract: Technologies for anonymous context attestation and threat analytics include a computing device to receive sensor data generated by one or more sensors of the computing device and generate an attestation quote based on the sensor data. The attestation quote includes obfuscated attributes of the computing device based on the sensor data. The computing device transmits zero knowledge commitment of the attestation quote to a server and receives a challenge from the server in response to transmitting the zero knowledge commitment. The challenge requests an indication regarding whether the obfuscated attributes of the computing device have commonality with attributes identified in a challenge profile received with the challenge. The computing device generates a zero knowledge proof that the obfuscated attributes of the computing device have commonality with the attributes identified in the challenge profile.
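    The commit step of the protocol can be illustrated with a salted-hash commitment. This models only the hiding/binding commitment sent to the server; the zero-knowledge proof of attribute commonality requires an actual ZKP construction (e.g., a sigma protocol), which is not reproduced here.

    ```python
    import hashlib
    import secrets

    def commit(attribute_bytes):
        """Client side: commit to an obfuscated attribute without
        revealing it.  Returns (commitment, opening nonce)."""
        nonce = secrets.token_bytes(32)
        digest = hashlib.sha256(nonce + attribute_bytes).hexdigest()
        return digest, nonce

    def open_commitment(digest, nonce, attribute_bytes):
        """Verifier side: check an opened commitment.  (A real zero-knowledge
        proof would avoid ever revealing `attribute_bytes` like this.)"""
        return hashlib.sha256(nonce + attribute_bytes).hexdigest() == digest
    ```

    The commitment is binding (the client cannot later claim a different attribute) and hiding (the server learns nothing until the challenge phase), which is the property the threat-analytics exchange relies on.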

    CONTROLLED INTRODUCTION OF UNCERTAINTY IN SYSTEM OPERATING PARAMETERS

    Publication No.: US20190042747A1

    Publication Date: 2019-02-07

    Application No.: US16023160

    Filing Date: 2018-06-29

    Abstract: The present disclosure is directed to systems and methods for mitigating or eliminating the effectiveness of a side channel attack, such as a Meltdown or Spectre type attack by selectively introducing a variable, but controlled, quantity of uncertainty into the externally accessible system parameters visible and useful to the attacker. The systems and methods described herein provide perturbation circuitry that includes perturbation selector circuitry and perturbation block circuitry. The perturbation selector circuitry detects a potential attack by monitoring the performance/timing data generated by the processor. Upon detecting an attack, the perturbation selector circuitry determines a variable quantity of uncertainty to introduce to the externally accessible system data. The perturbation block circuitry adds the determined uncertainty into the externally accessible system data. The added uncertainty may be based on the frequency or interval of the event occurrences indicative of an attack.
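    The selector/block split can be sketched in software: a selector watches event rates for attack-like patterns and sets a noise level, and a block fuzzes externally visible timer reads by that amount. The threshold, the noise-scaling rule, and all names here are illustrative assumptions; the patent describes hardware circuitry.

    ```python
    import random

    class TimerPerturbation:
        """Sketch of perturbation selector + perturbation block."""
        def __init__(self, suspicious_rate=1000.0, seed=None):
            self.suspicious_rate = suspicious_rate  # events/sec deemed attack-like
            self.rng = random.Random(seed)
            self.noise_ns = 0

        def observe(self, events_per_second):
            # Perturbation selector: scale the injected uncertainty with
            # the frequency of event occurrences indicative of an attack.
            if events_per_second > self.suspicious_rate:
                self.noise_ns = int(events_per_second / self.suspicious_rate) * 10
            else:
                self.noise_ns = 0

        def read_timer(self, true_ns):
            # Perturbation block: fuzz the externally accessible timer value.
            if self.noise_ns == 0:
                return true_ns
            return true_ns + self.rng.randint(-self.noise_ns, self.noise_ns)
    ```

    Because Meltdown/Spectre-style attacks infer secrets from fine-grained timing differences, even a small, attack-rate-proportional jitter on the visible timer degrades the side channel while leaving benign measurements approximately correct.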
