GENERATIVE ADVERSARIAL NETWORKS IN PREDICTING SEQUENTIAL DATA

    Publication No.: US20190258984A1

    Publication Date: 2019-08-22

    Application No.: US15898847

    Filing Date: 2018-02-19

    Abstract: Techniques for predicting sequential data using generative adversarial networks are disclosed herein. In some embodiments, a method comprises: receiving a request associated with a user of an online service; in response to the receiving of the request, retrieving a first plurality of sequential data points of the user from a profile of the user stored on a database of the online service, the first plurality of sequential data points comprising at least one attribute for each one of a plurality of sequential career points of the user; generating at least one predicted data point for the user based on the first plurality of sequential data points using a generative model, the generated at least one predicted data point comprising at least one attribute for a predicted career point for the user; and performing a function of the online service using the generated at least one predicted data point.
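    The abstract above describes a generative step (predicting the attributes of a next career point from a sequence of prior career points) but does not disclose a model architecture. The Python sketch below is a minimal, hypothetical illustration of that step, assuming a recurrent GAN generator; the class name CareerGenerator, the attribute and noise dimensions, and the encoding of career points as fixed-size attribute vectors are assumptions for illustration, not the patent's implementation.

        # Hypothetical sketch of the generative step described in the abstract.
        # Architecture, names, and feature encoding are assumptions, not the
        # patent's disclosed implementation.
        import torch
        import torch.nn as nn


        class CareerGenerator(nn.Module):
            """GAN generator mapping a sequence of encoded career points
            (plus a noise vector) to the attributes of a predicted next point."""

            def __init__(self, attr_dim: int, noise_dim: int = 16, hidden_dim: int = 64):
                super().__init__()
                self.encoder = nn.GRU(attr_dim, hidden_dim, batch_first=True)
                self.head = nn.Sequential(
                    nn.Linear(hidden_dim + noise_dim, hidden_dim),
                    nn.ReLU(),
                    nn.Linear(hidden_dim, attr_dim),  # attributes of the predicted career point
                )
                self.noise_dim = noise_dim

            def forward(self, history: torch.Tensor) -> torch.Tensor:
                # history: (batch, num_career_points, attr_dim)
                _, h_n = self.encoder(history)  # summarize the sequential career points
                z = torch.randn(history.size(0), self.noise_dim, device=history.device)
                return self.head(torch.cat([h_n[-1], z], dim=-1))


        # Usage: predict one career point from a member's sequential profile data.
        generator = CareerGenerator(attr_dim=8)
        profile_history = torch.randn(1, 5, 8)   # 5 prior career points, 8 attributes each
        predicted_point = generator(profile_history)  # shape (1, 8)

    In an adversarial training setup, a separate discriminator would score whether a (history, next point) pair is real or generated; only the inference-time generative step from the abstract is sketched here.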

    CONNECTING MACHINE LEARNING METHODS THROUGH TRAINABLE TENSOR TRANSFORMERS

    Publication No.: US20200311613A1

    Publication Date: 2020-10-01

    Application No.: US16370156

    Filing Date: 2019-03-29

    IPC Classification: G06N20/20 G06N5/04

    Abstract: Herein are techniques for configuring, integrating, and operating trainable tensor transformers that each encapsulate an ensemble of trainable machine learning (ML) models. In an embodiment, a computer-implemented trainable tensor transformer uses underlying ML models and additional mechanisms to assemble and convert data tensors as needed to generate output records based on input records and inferencing. The transformer processes each input record as follows. Input tensors of the input record are converted into converted tensors. Each converted tensor represents a respective feature of many features that are capable of being processed by the underlying trainable models. The trainable models are applied to respective subsets of converted tensors to generate an inference for the input record. The inference is converted into a prediction tensor. The prediction tensor and input tensors are stored as output tensors of a respective output record for the input record.
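    The abstract above walks through a per-record data flow: convert input tensors into per-feature converted tensors, apply each underlying model to its subset of converted tensors, convert the resulting inference into a prediction tensor, and emit an output record holding the input tensors plus the prediction tensor. The Python sketch below illustrates that flow under assumptions; the class TensorTransformer, its constructor arguments, and the toy converters and models are invented for illustration and are not the patent's disclosed mechanism.

        # Hypothetical sketch of the per-record data flow described in the abstract.
        # Class/method names and the converter/model wiring are assumptions.
        import numpy as np
        from typing import Callable, Dict, List, Sequence, Tuple


        class TensorTransformer:
            """Encapsulates an ensemble of trainable models plus the tensor
            conversions needed to turn input records into output records."""

            def __init__(
                self,
                converters: Dict[str, Callable[[np.ndarray], np.ndarray]],
                models: Sequence[Tuple[Callable[..., np.ndarray], List[str]]],
                to_prediction_tensor: Callable[[List[np.ndarray]], np.ndarray],
            ):
                self.converters = converters            # one converter per feature
                self.models = models                    # each model sees a subset of features
                self.to_prediction_tensor = to_prediction_tensor

            def transform(self, input_record: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
                # 1. Convert each input tensor into a converted tensor per feature.
                converted = {name: conv(input_record[name])
                             for name, conv in self.converters.items()}
                # 2. Apply each underlying model to its subset of converted tensors.
                inferences = [model(*(converted[f] for f in features))
                              for model, features in self.models]
                # 3. Convert the ensemble's inference into a prediction tensor.
                prediction = self.to_prediction_tensor(inferences)
                # 4. Output record keeps the input tensors and adds the prediction tensor.
                return {**input_record, "prediction": prediction}


        # Usage with toy converters and models standing in for trained components.
        transformer = TensorTransformer(
            converters={"age": lambda t: t / 100.0, "score": lambda t: np.log1p(t)},
            models=[(lambda a: a * 2.0, ["age"]), (lambda s: s + 1.0, ["score"])],
            to_prediction_tensor=lambda parts: np.concatenate([p.ravel() for p in parts]),
        )
        output_record = transformer.transform(
            {"age": np.array([30.0]), "score": np.array([4.5])}
        )

    In this sketch the converters and models are plain callables; in the patent's framing they would be trainable components encapsulated by the transformer.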