-
Publication No.: US20240249192A1
Publication Date: 2024-07-25
Application No.: US18417556
Application Date: 2024-01-19
Applicant: Google LLC
Inventor: Sercan Omer Arik , Si-An Chen , Nathanael Christian Yoder , Chun-Liang Li
IPC: G06N20/00
CPC classification number: G06N20/00
Abstract: The present disclosure provides an architecture for time series forecasting. The architecture is based on multi-layer perceptrons (MLPs), which stack linear models with non-linearities between them. In this architecture, time-domain MLPs and feature-domain MLPs are applied sequentially, alternating between the two domains. In some examples, auxiliary data is used as input in addition to historical data. The auxiliary data can include known future data points, as well as static information that does not vary with time. Alternating time-domain and feature-domain operations built from linear models allows the architecture to learn temporal patterns while leveraging cross-variate information, yielding more accurate time series forecasts.
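The alternation of time-domain and feature-domain MLPs described in the abstract can be sketched as follows. This is an illustrative toy sketch only: the layer sizes, the ReLU nonlinearity, and all function names (`time_mix`, `feature_mix`, `mixer_forecast`) are assumptions, not the patented implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def time_mix(x, w, b):
    # Linear layer along the time axis: mixes information across time
    # steps independently for each feature. x: (time, features).
    return relu(w @ x + b)           # w: (time, time), b: (time, 1)

def feature_mix(x, w, b):
    # Linear layer along the feature axis: mixes cross-variate information
    # at each time step. x: (time, features).
    return relu(x @ w + b)           # w: (features, features), b: (features,)

def mixer_forecast(history, layers, head):
    """Alternate time-domain and feature-domain MLPs, then project
    the lookback window onto the forecast horizon."""
    x = history                      # (time, features)
    for (wt, bt), (wf, bf) in layers:
        x = time_mix(x, wt, bt)      # learn temporal patterns
        x = feature_mix(x, wf, bf)   # leverage cross-variate information
    return head @ x                  # (horizon, time) @ (time, features)

rng = np.random.default_rng(0)
T, F, H = 8, 3, 2                    # lookback length, variates, horizon
layers = [((rng.normal(size=(T, T)) * 0.1, np.zeros((T, 1))),
           (rng.normal(size=(F, F)) * 0.1, np.zeros(F)))
          for _ in range(2)]
head = rng.normal(size=(H, T)) * 0.1
forecast = mixer_forecast(rng.normal(size=(T, F)), layers, head)
print(forecast.shape)                # (2, 3): horizon x variates
```

Auxiliary inputs (known future or static features, as mentioned in the abstract) would enter as additional feature columns before the feature-mixing steps; that extension is omitted here for brevity.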
-
Publication No.: US20230110117A1
Publication Date: 2023-04-13
Application No.: US17954978
Application Date: 2022-09-28
Applicant: Google LLC
Inventor: Sercan Omer Arik , Nathanael Christian Yoder , Tomas Pfister
Abstract: Aspects of the disclosure provide for self-adapting forecasting (SAF) during the training and execution of machine learning models trained for multi-horizon forecasting on time-series data. The distribution of time-series data can shift over different periods of time. Deep neural networks and other machine learning models are typically trained under the assumption that the training data are independent and identically distributed (i.i.d.). With a computer system configured to execute SAF, the system can, at inference time, update a trained encoder to produce an encoded representation capturing features of the current distribution of the input time-series data. The updated encoded representation can be fed into a decoder trained to generate a multi-horizon forecast from it. At each instance of inference, the base weights of a trained model can be reused and updated to generate an updated encoded representation for that instance.
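The inference-time adaptation loop described in the abstract can be sketched roughly as below. Everything here is a hypothetical illustration: the linear encoder/decoder, the reconstruction ("backcast") objective used as the self-supervised adaptation signal, the learning rate, and the names `saf_predict`, `enc_w`, `dec_w` are assumptions, not the disclosed method.

```python
import numpy as np

def saf_predict(x_hist, enc_w, dec_w, adapt_lr=1e-3, adapt_steps=3):
    """SAF-style sketch: adapt a copy of the trained encoder weights on a
    self-supervised reconstruction loss computed from the current input
    window, then decode a multi-horizon forecast from the updated encoding.
    The base weights in enc_w are left untouched, so they can be reused
    at the next inference instance."""
    w = enc_w.copy()                          # per-instance copy of base weights
    for _ in range(adapt_steps):
        # Reconstruction error of the input window; no labels required,
        # so this signal is available at inference time.
        err = x_hist @ w @ w.T - x_hist
        # Gradient of 0.5 * ||x w w^T - x||_F^2 with respect to w.
        grad = x_hist.T @ err @ w + err.T @ x_hist @ w
        w -= adapt_lr * grad                  # inference-time encoder update
    z = x_hist @ w                            # updated encoded representation
    return dec_w @ z                          # multi-horizon forecast: (H, F)

rng = np.random.default_rng(1)
T, F, H = 8, 3, 4                             # lookback, variates, horizon
enc_w = rng.normal(size=(F, F)) * 0.3
dec_w = rng.normal(size=(H, T)) * 0.3
base = enc_w.copy()
forecast = saf_predict(rng.normal(size=(T, F)), enc_w, dec_w)
print(forecast.shape)                         # (4, 3)
assert np.allclose(enc_w, base)               # base weights reused, not mutated
```

The key property the sketch illustrates is that adaptation happens on a copy: each inference instance starts from the same trained base weights and applies its own self-supervised update, so distribution shift between instances does not accumulate into the stored model.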
-