SYSTEMS AND METHODS FOR DECENTRALIZED ATTRIBUTION OF GENERATIVE MODELS

    Publication No.: US20220198332A1

    Publication Date: 2022-06-23

    Application No.: US17544201

    Filing Date: 2021-12-07

    IPC Classification: G06N20/00 G06F21/64

    Abstract: A system and associated methods for decentralized attribution of GAN models are disclosed. Given a group of models derived from the same dataset and published by different users, attributability is achieved when the public verification service associated with each model (a linear classifier) returns a positive result only for outputs of that model. Each model is parameterized by keys distributed by a registry. The keys are computed from first-order sufficient conditions for decentralized attribution. The keys are orthogonal or opposite to each other and belong to a subspace that depends on the data distribution and the architecture of the generative model.
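
    A minimal sketch of the verification scheme described in the abstract, assuming orthogonal keys drawn from a QR decomposition and a zero-bias linear verifier (the function names and key construction are illustrative assumptions, not the claimed method):

        import numpy as np

        def generate_orthogonal_keys(num_models, dim, seed=0):
            """Draw one orthonormal key per registered model (requires dim >= num_models)."""
            rng = np.random.default_rng(seed)
            q, _ = np.linalg.qr(rng.standard_normal((dim, num_models)))
            return q.T  # rows are mutually orthogonal unit-norm keys

        def verify(output, key, bias=0.0):
            """Linear-classifier check: positive only for outputs of the keyed model."""
            return float(np.dot(output.ravel(), key) + bias) > 0.0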

    TARGETED ATTACKS ON DEEP REINFORCEMENT LEARNING-BASED AUTONOMOUS DRIVING WITH LEARNED VISUAL PATTERNS

    Publication No.: US20240303349A1

    Publication Date: 2024-09-12

    Application No.: US18599821

    Filing Date: 2024-03-08

    IPC Classification: G06F21/57 G06N20/00

    Abstract: A system may be configured for implementing targeted attacks on deep reinforcement learning-based autonomous driving with learned visual patterns. In some examples, processing circuitry receives first input specifying an initial state for a driving environment and user-configurable input specifying a target state. Processing circuitry may generate a representative dataset of the driving environment by performing multiple rollouts of the vehicle through the driving environment, performing, for each rollout, an action from the initial state with variable-strength noise added to determine the next state resulting from that action. Processing circuitry may train an artificial intelligence model to output a next predicted state based on the representative dataset as training input. Processing circuitry then outputs, from the artificial intelligence model, an attack plan against the autonomous driving agent to achieve the target state from the initial state.
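
    The rollout-collection step can be sketched as follows, assuming placeholder env and agent interfaces (reset/step/act) and Gaussian action noise; this is illustrative, not the claimed processing circuitry:

        import numpy as np

        # Perturb the agent's action with variable-strength noise and record the
        # resulting transitions as training data for a learned dynamics model.
        def collect_rollouts(env, agent, initial_state, num_rollouts=100,
                             horizon=50, noise_scales=(0.0, 0.1, 0.5)):
            transitions = []
            for _ in range(num_rollouts):
                state = env.reset(initial_state)
                scale = np.random.choice(noise_scales)  # variable-strength noise
                for _ in range(horizon):
                    action = agent.act(state) + scale * np.random.randn(env.action_dim)
                    next_state = env.step(action)
                    transitions.append((state, action, next_state))
                    state = next_state
            return transitions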

    TEMPORAL KNOWLEDGE DISTILLATION FOR ACTIVE PERCEPTION

    Publication No.: US20220121855A1

    Publication Date: 2022-04-21

    Application No.: US17504257

    Filing Date: 2021-10-18

    Abstract: Temporal knowledge distillation for active perception is provided. Despite significant performance improvements in object detection and classification, deep structures still require prohibitive runtime to process images while maintaining the highest possible performance for real-time applications. Observing that the human visual system (HVS) relies heavily on temporal dependencies among frames of visual input to conduct recognition efficiently, embodiments described herein propose a novel framework dubbed temporal knowledge distillation (TKD). The TKD framework distills temporal knowledge gained from a heavy neural network-based model over selected video frames (e.g., the perception of the moments) into a light-weight model. To enable the distillation, two novel procedures are described: 1) a long short-term memory (LSTM)-based key frame selection method; and 2) a novel teacher-bounded loss design.
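
    One way to read the teacher-bounded loss is that the light-weight student is penalized only on frames where it does worse than the heavy teacher. The element-wise squared-error formulation below is an assumption for illustration, not the patent's exact loss:

        import numpy as np

        def teacher_bounded_loss(student_pred, teacher_pred, target):
            # Penalize the student only where its error exceeds the teacher's.
            student_err = (student_pred - target) ** 2
            teacher_err = (teacher_pred - target) ** 2
            return np.where(student_err > teacher_err, student_err, 0.0).mean()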

    ADAPTIVE AND HIERARCHICAL CONVOLUTIONAL NEURAL NETWORKS USING PARTIAL RECONFIGURATION ON FPGA

    Publication No.: US20220067453A1

    Publication Date: 2022-03-03

    Application No.: US17464069

    Filing Date: 2021-09-01

    IPC Classification: G06K9/62 G06N3/04

    Abstract: Adaptive and hierarchical convolutional neural networks (AH-CNNs) using partial reconfiguration on a field-programmable gate array (FPGA) are provided. An AH-CNN is implemented to adaptively switch between shallow and deep networks to reach a higher throughput on resource-constrained devices, such as a multiprocessor system on a chip (MPSoC) with a central processing unit (CPU) and FPGA. To this end, the AH-CNN includes a novel CNN architecture having three parts: 1) a shallow part, which is a light-weight CNN model; 2) a decision layer, which evaluates the shallow part's performance and decides whether deeper processing would be beneficial; and 3) one or more deep parts, which are deep CNNs with high inference accuracy.
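
    The shallow-to-deep hand-off can be sketched as below, with the decision layer approximated by a softmax-confidence threshold; the callables shallow_net and deep_nets are placeholders, not the claimed FPGA implementation:

        import numpy as np

        def softmax(logits):
            e = np.exp(logits - logits.max())
            return e / e.sum()

        def adaptive_inference(x, shallow_net, deep_nets, threshold=0.9):
            # Run the light-weight shallow part first.
            probs = softmax(shallow_net(x))
            # Decision layer (approximated here by a confidence check):
            # escalate to deeper CNN stages only when confidence is low.
            for deep_net in deep_nets:
                if probs.max() >= threshold:
                    break
                probs = softmax(deep_net(x))
            return int(probs.argmax())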

    Computer-vision-based clinical assessment of upper extremity function

    Publication No.: US10849532B1

    Publication Date: 2020-12-01

    Application No.: US16214847

    Filing Date: 2018-12-10

    IPC Classification: A61B5/11 A61B5/107

    Abstract: Methods and systems are presented for kinematic tracking and assessment of upper extremity function of a patient. A sequence of 2D images of a patient performing upper extremity function assessment tasks is captured by one or more cameras. The captured images are processed to separately track body movements in 3D space, hand movements, and object movements. The hand movements are tracked by adjusting the position, orientation, and finger positions of a three-dimensional virtual model of a hand to match the hand in each 2D image. Based on the tracked movement data, the system is able to identify specific aspects of upper extremity function that exhibit impairment instead of providing only a generalized indication of upper extremity impairment.
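
    The hand-tracking step amounts to adjusting the pose and finger parameters of a 3D hand model so that its projection matches the 2D detections. The sketch below assumes a placeholder model_keypoints_3d function and a pinhole camera; it is illustrative, not the patented pipeline:

        import numpy as np
        from scipy.optimize import least_squares

        def fit_hand_pose(detected_2d, model_keypoints_3d, camera_matrix, init_params):
            # Minimize reprojection error between the projected 3D hand model
            # and the detected 2D hand keypoints in a single image.
            def residuals(params):
                points_3d = model_keypoints_3d(params)   # params -> (N, 3) joint positions
                proj = points_3d @ camera_matrix.T       # pinhole projection, (N, 3)
                proj = proj[:, :2] / proj[:, 2:3]        # perspective divide -> (N, 2)
                return (proj - detected_2d).ravel()
            return least_squares(residuals, init_params).x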