Behavior-based host modeling
    12.
    Invention Grant

    Publication Number: US10476753B2

    Publication Date: 2019-11-12

    Application Number: US15902369

    Filing Date: 2018-02-22

    Abstract: Methods and systems for modeling host behavior in a network include determining a first probability function for observing each of a set of process-level events at a first host, based on embedding vectors for each event and the first host. A second probability function is determined for the first host issuing each of a set of network-level events connecting to a second host, based on embedding vectors for the first host and the second host. The first and second probability functions are maximized to determine a set of likely process-level and network-level events for the first host. A security action is performed based on the modeled host behavior.
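
    The probability functions described in the abstract can be realized in several ways; the minimal sketch below assumes a softmax over dot products of host and event embedding vectors. The embedding dimension, vocabulary sizes, and the top-k helper are illustrative assumptions, not details taken from the patent.

        # Minimal sketch (not the patented implementation): probability of a host
        # observing a process-level event as a softmax over embedding dot products.
        import numpy as np

        rng = np.random.default_rng(0)
        EMBED_DIM = 16          # assumed embedding size
        N_HOSTS, N_EVENTS = 50, 200

        # Hypothetical embedding tables for hosts and process-level events.
        host_emb = rng.normal(size=(N_HOSTS, EMBED_DIM))
        event_emb = rng.normal(size=(N_EVENTS, EMBED_DIM))

        def event_probability(host_id: int) -> np.ndarray:
            """P(event | host): softmax over dot products of the host embedding
            with every event embedding (one way to realize a probability function
            'based on embedding vectors')."""
            scores = event_emb @ host_emb[host_id]
            scores -= scores.max()              # numerical stability
            exp = np.exp(scores)
            return exp / exp.sum()

        def likely_events(host_id: int, top_k: int = 5) -> np.ndarray:
            """Events with the highest modeled probability for this host;
            observed events far outside this set could trigger a security action."""
            return np.argsort(event_probability(host_id))[::-1][:top_k]

        print(likely_events(host_id=3))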

    KNOWLEDGE TRANSFER SYSTEM FOR ACCELERATING INVARIANT NETWORK LEARNING

    Publication Number: US20180351971A1

    Publication Date: 2018-12-06

    Application Number: US16055675

    Filing Date: 2018-08-06

    Abstract: A computer-implemented method for implementing a knowledge-transfer-based model for accelerating invariant network learning is presented. The computer-implemented method includes generating an invariant network from data streams, the invariant network representing an enterprise information network including a plurality of nodes representing entities; employing a multi-relational entity estimation model to transfer the entities from a source domain graph to a target domain graph by filtering irrelevant entities from the source domain graph; employing a reference construction model to determine differences between the source and target domain graphs and to construct unbiased dependencies between the entities to generate a target invariant network; and outputting the generated target invariant network on a user interface of a computing device.
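
    As a rough illustration of filtering irrelevant source entities and rebuilding dependencies restricted to the target domain, the sketch below represents the invariant network as a simple adjacency map and uses relational overlap with the target entities as the relevance test. The entity names and the overlap heuristic are hypothetical and are not the claimed entity estimation or reference construction models.

        # Minimal sketch, not the claimed system: transfer entities from a source
        # invariant network to a target domain by dropping entities with no
        # relational overlap, then keep only dependencies among the survivors.
        from collections import defaultdict

        # Source invariant network: entity -> entities it has stable dependencies with.
        source_invariants = {
            "db_server":  {"web_app", "cache"},
            "web_app":    {"db_server", "auth_svc"},
            "cache":      {"db_server"},
            "print_svc":  {"legacy_box"},       # unrelated to the target domain
            "legacy_box": {"print_svc"},
            "auth_svc":   {"web_app"},
        }

        # Entities observed so far in the (smaller) target domain.
        target_entities = {"db_server", "web_app", "auth_svc"}

        def transfer_invariants(source, target_nodes):
            """Keep source entities relevant to the target domain and rebuild
            dependencies restricted to that entity set."""
            relevant = {e for e in source
                        if e in target_nodes or source[e] & target_nodes}
            target_invariants = defaultdict(set)
            for entity in relevant:
                for dep in source[entity]:
                    if dep in relevant:
                        target_invariants[entity].add(dep)
            return dict(target_invariants)

        print(transfer_invariants(source_invariants, target_entities))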

    TRAINING A TIME-SERIES-LANGUAGE MODEL ADAPTED FOR DOMAIN-SPECIFIC TASKS

    Publication Number: US20250124279A1

    Publication Date: 2025-04-17

    Application Number: US18889610

    Filing Date: 2024-09-19

    Abstract: Systems and methods for training a time-series-language (TSLa) model adapted for domain-specific tasks. An encoder-decoder neural network can be trained to tokenize time-series data to obtain a discrete-to-language embedding space. The TSLa model can learn a linear mapping function by concatenating token embeddings from the discrete-to-language embedding space with positional encoding to obtain mixed-modality token sequences. Token augmentation can transform the tokens from the mixed-modality token sequences to obtain augmented tokens. The augmented tokens can train the TSLa model, using a computed token likelihood to predict next tokens for the mixed-modality token sequences, to obtain a trained TSLa model. A domain-specific dataset can fine-tune the trained TSLa model to adapt it to perform a domain-specific task.
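
    A minimal sketch of the pipeline described above, under stated assumptions: quantile binning stands in for the learned encoder-decoder tokenizer, sinusoidal positional encodings are concatenated with token embeddings, and a smoothed bigram model stands in for the next-token likelihood. None of these choices are taken from the application itself.

        # Minimal sketch, not the TSLa architecture: turn a time series into tokens,
        # build mixed embedding/position inputs, and score next-token likelihood.
        import numpy as np

        rng = np.random.default_rng(1)
        series = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.normal(size=200)

        # 1) Tokenize: discretize values into a small vocabulary (a stand-in for the
        #    learned discrete-to-language embedding space).
        N_BINS = 32
        bins = np.quantile(series, np.linspace(0, 1, N_BINS + 1)[1:-1])
        tokens = np.digitize(series, bins)          # token ids in [0, N_BINS)

        # 2) Concatenate token embeddings with a sinusoidal positional encoding to
        #    form a mixed-modality token sequence stand-in.
        EMBED_DIM = 16
        token_emb = rng.normal(size=(N_BINS, EMBED_DIM))
        positions = np.arange(len(tokens))[:, None]
        dims = np.arange(EMBED_DIM)[None, :]
        pos_enc = np.where(dims % 2 == 0,
                           np.sin(positions / 10000 ** (dims / EMBED_DIM)),
                           np.cos(positions / 10000 ** (dims / EMBED_DIM)))
        inputs = np.concatenate([token_emb[tokens], pos_enc], axis=1)

        # 3) Toy next-token likelihood from smoothed bigram counts (a real TSLa model
        #    would train a language-model backbone on this objective).
        counts = np.ones((N_BINS, N_BINS))          # Laplace smoothing
        for prev, nxt in zip(tokens[:-1], tokens[1:]):
            counts[prev, nxt] += 1
        next_token_prob = counts / counts.sum(axis=1, keepdims=True)

        log_likelihood = np.sum(np.log(next_token_prob[tokens[:-1], tokens[1:]]))
        print(f"tokens={tokens.shape}, inputs={inputs.shape}, LL={log_likelihood:.1f}")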
