-
Publication No.: US10560057B1
Publication Date: 2020-02-11
Application No.: US16157042
Filing Date: 2018-10-10
Applicant: Google LLC
Inventor: Alexander Fabrikant, Andrew Stephen Tomkins, James Alexander Cook, Atish Das Sarma
Abstract: A method is disclosed for estimating a duration that an instance of subject matter-related content will remain relevant. An archive of data sources spanning a period of time is analyzed to identify past instances in time in which a subject matter was newsworthy, and the duration of each instance. On receiving an indication that a user is interested in a current instance of the subject matter, an estimated period of time that the current instance will be of interest to the user may be determined based on the durations of the past instances. In some aspects, the archive may be a corpus of social media data compiled from a social network, with each data source including or representative of an interaction between users in the social network, and the current instance of the subject matter may include content provided to the user through a social stream.
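The core estimation step lends itself to a short illustration. The sketch below is a minimal Python example, not the patented implementation: the function name estimate_relevance_duration and the choice of the median as the aggregate over past durations are assumptions, since the abstract states only that the estimate is based on the durations of the past instances.

```python
from statistics import median

def estimate_relevance_duration(past_durations_days):
    """Estimate how long a current instance of a subject matter will
    remain relevant, given the durations (in days) of past instances
    in which that subject matter was newsworthy.

    Using the median is an illustrative assumption; the abstract only
    requires the estimate to be based on the past durations.
    """
    if not past_durations_days:
        raise ValueError("no past instances found for this subject matter")
    return median(past_durations_days)

# Example: past instances of a topic stayed newsworthy for 2, 5, and
# 3 days, so the current instance is estimated to stay relevant ~3 days.
print(estimate_relevance_duration([2, 5, 3]))  # -> 3
```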
-
Publication No.: US20240370717A1
Publication Date: 2024-11-07
Application No.: US18313189
Filing Date: 2023-05-05
Applicant: Google LLC
Inventor: Qifei Wang, Yicheng Fan, Wei Xu, Jiayu Ye, Lu Wang, Chuo-Ling Chang, Dana Alon, Erik Nathan Vee, Hongkun Yu, Matthias Grundmann, Shanmugasundaram Ravikumar, Andrew Stephen Tomkins
IPC: G06N3/08
Abstract: A method for a cross-platform distillation framework includes obtaining a plurality of training samples. The method includes generating, using a student neural network model executing on a first processing unit, a first output based on a first training sample. The method also includes generating, using a teacher neural network model executing on a second processing unit, a second output based on the first training sample. The method includes determining, based on the first output and the second output, a first loss. The method further includes adjusting, based on the first loss, one or more parameters of the student neural network model. The method includes repeating the above steps for each training sample of the plurality of training samples.
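The claimed loop maps naturally onto a standard knowledge-distillation setup. The sketch below is a minimal PyTorch rendering under stated assumptions: the two-layer model architectures, the cpu/cuda device pair standing in for the two processing units, and the MSE distillation loss are all illustrative choices that the abstract does not fix.

```python
import torch
import torch.nn as nn

# Illustrative processing units; the abstract only requires that the
# student and teacher execute on distinct units.
student_device = torch.device("cpu")
teacher_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical small student and larger teacher models.
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).to(student_device)
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4)).to(teacher_device)
teacher.eval()  # the teacher is fixed; only the student is adjusted

optimizer = torch.optim.SGD(student.parameters(), lr=0.01)
distill_loss = nn.MSELoss()  # assumed loss; the abstract says only "a first loss"

training_samples = [torch.randn(16) for _ in range(8)]  # stand-in data

for sample in training_samples:
    # First output: student model executing on the first processing unit.
    first_output = student(sample.to(student_device))
    # Second output: teacher model executing on the second processing unit.
    with torch.no_grad():
        second_output = teacher(sample.to(teacher_device))
    # Determine the first loss from the two outputs (the teacher output
    # is moved to the student's device for comparison).
    loss = distill_loss(first_output, second_output.to(student_device))
    # Adjust the student model's parameters based on the loss.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```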
-