-
Publication number: US10574779B2
Publication date: 2020-02-25
Application number: US15258610
Application date: 2016-09-07
Applicant: Amazon Technologies, Inc.
Inventor: James Marvin Freeman, II , Aaron M. Bromberg , Bryant F. Herron-Patmon , Nush Karmacharya , Joshua B. Barnard , Peter Wei-Chih Chen , Stephen A. Slotnick , Abhishek Dubey , Andrew J. Watts , Richard J. Winograd
IPC: H04L29/08 , H04L29/06 , G06F15/167 , G06F16/9535 , G06F16/957 , H04L12/861
Abstract: Disclosed are various embodiments for predictive caching of content to facilitate instantaneous use of the content. If a user is likely to commence use of a content item through a client, and if the client has available resources to facilitate instantaneous use, the client is configured to predictively cache the content item before the user commences use. In doing so, the client may obtain metadata for the content item from a server. The client may then initialize various resources to facilitate instantaneous use of the content item by the client based at least in part on the metadata.
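The mechanism described in this abstract can be pictured with a short, purely hypothetical Python sketch: a likelihood check, a resource check, then fetching metadata and initializing resources ahead of use. The names (ClientCache, likelihood_of_use, fetch_metadata, init_resources) and the 0.5 threshold are assumptions made for illustration, not the patented implementation.

```python
# Hypothetical sketch of likelihood-driven predictive caching as described
# in the abstract above; all names and thresholds are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ClientCache:
    free_bytes: int
    entries: dict = field(default_factory=dict)

    def has_room_for(self, size: int) -> bool:
        return self.free_bytes >= size


def maybe_precache(item_id: str, cache: ClientCache,
                   likelihood_of_use, fetch_metadata, init_resources,
                   threshold: float = 0.5) -> bool:
    """Predictively cache an item before the user requests it.

    likelihood_of_use(item_id) -> float in [0, 1]
    fetch_metadata(item_id)    -> dict with at least a 'size' key
    init_resources(metadata)   -> opaque handle (decoder, buffers, keys, ...)
    """
    if likelihood_of_use(item_id) < threshold:
        return False                      # not likely enough to be used soon
    metadata = fetch_metadata(item_id)    # obtained from the server
    if not cache.has_room_for(metadata["size"]):
        return False                      # client lacks resources right now
    # Initialize resources up front so use of the item can start instantly later.
    cache.entries[item_id] = init_resources(metadata)
    cache.free_bytes -= metadata["size"]
    return True
```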
-
Publication number: US10389838B2
Publication date: 2019-08-20
Application number: US15399184
Application date: 2017-01-05
Applicant: Amazon Technologies, Inc.
Inventor: Lei Li , Andrew Jason Ma , Gurpreet Singh Ahluwalia , Abhishek Dubey , Sachin Shah , Vijay Sen , Gregory Scott Benjamin , Prateek Rameshchandra Shah , Cody Wayne Maxwell Powell , Meltem Celikel , Darryl Hudgin , James Marvin Freeman, II , Aaron M. Bromberg , Bryant F. Herron-Patmon , Nush Karmacharya , Joshua B. Barnard , Peter Wei-Chih Chen , Stephen A. Slotnick , Andrew J. Watts , Richard J. Winograd
IPC: H04L29/08 , H04L29/06 , G06F21/10 , G06F12/02 , G06F12/0813 , G06F12/0862 , G06F12/121
Abstract: Disclosed are various embodiments for client-side predictive caching of content to facilitate use of the content. If an account is likely to commence use of a content item through a client, the client is configured to predictively cache the content item before the use is commenced. In doing so, the client may obtain an initial portion of the content item from another computing device. The client may then initialize various resources to facilitate use of the content item by the client. The client-side cache may be divided into multiple segments with different content selection criteria.
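The segmented-cache idea (multiple segments with different content selection criteria) could look roughly like the Python sketch below; the segment policies, capacities, and LRU-style eviction are illustrative assumptions rather than anything specified in the patent.

```python
# Hypothetical sketch of a client-side cache split into segments that apply
# different content selection criteria; names and policies are illustrative.
from collections import OrderedDict


class CacheSegment:
    """One slice of the cache with its own selection rule and capacity."""

    def __init__(self, capacity: int, selector):
        self.capacity = capacity
        self.selector = selector          # selector(item) -> bool
        self.items = OrderedDict()        # item_id -> item, oldest first

    def admit(self, item_id, item) -> bool:
        if not self.selector(item):
            return False
        self.items[item_id] = item
        self.items.move_to_end(item_id)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the oldest admitted item
        return True


class SegmentedCache:
    def __init__(self, segments):
        self.segments = segments

    def admit(self, item_id, item) -> bool:
        # The first segment whose selection criteria match takes the item.
        return any(seg.admit(item_id, item) for seg in self.segments)


# Example: one segment for items predicted from viewing behavior, another for
# newly released titles; the criteria here are purely illustrative.
cache = SegmentedCache([
    CacheSegment(capacity=20, selector=lambda it: it.get("predicted", False)),
    CacheSegment(capacity=5, selector=lambda it: it.get("new_release", False)),
])
cache.admit("item-001", {"predicted": True})
```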
-
Publication number: US09544388B1
Publication date: 2017-01-10
Application number: US14274121
Application date: 2014-05-09
Applicant: Amazon Technologies, Inc.
Inventor: Lei Li , Andrew Jason Ma , Gurpreet Singh Ahluwalia , Abhishek Dubey , Sachin Shah , Vijay Sen , Gregory Scott Benjamin , Prateek RameshChandra Shah , Cody Wayne Maxwell Powell , Meltem Celikel , Darryl Hudgin , James Marvin Freeman , Aaron M. Bromberg , Bryant F. Herron-Patmon , Nush Karmacharya , Joshua B. Barnard , Peter Wei-Chih Chen , Stephen A. Slotnick , Andrew J. Watts , Richard J. Winograd
CPC classification number: H04L67/2842 , G06F12/023 , G06F12/0813 , G06F12/0862 , G06F12/121 , G06F21/10 , G06F2212/1024 , G06F2212/154 , G06F2212/455 , G06F2212/507 , G06F2212/6024 , G06F2221/0753 , H04L65/60 , H04L67/125 , H04L67/22 , H04L67/2857 , H04L67/42
Abstract: Disclosed are various embodiments for client-side predictive caching of content to facilitate instantaneous use of the content. If a user is likely to commence use of a content item through a client, the client is configured to predictively cache the content item before the user commences use. In doing so, the client may obtain metadata for the content item and an initial portion of the content item from another computing device. The client may then initialize various resources to facilitate instantaneous use of the content item by the client based at least in part on the metadata and the initial portion. The client-side cache may be divided into multiple segments with different content selection criteria.
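One plausible way to obtain "metadata for the content item and an initial portion of the content item from another computing device" is sketched in Python below; the endpoint URLs and the use of an HTTP Range header are assumptions for illustration, not the claimed protocol.

```python
# Illustrative only: fetching an item's metadata and its first bytes so a
# client could be primed before the user commences use. The URLs are
# placeholders, not a real service endpoint.
import json
import urllib.request


def prefetch_initial_portion(base_url: str, item_id: str,
                             initial_bytes: int = 1_000_000):
    # Metadata (format, manifest, rights hints, ...) comes from one endpoint.
    with urllib.request.urlopen(f"{base_url}/items/{item_id}/metadata") as r:
        metadata = json.load(r)

    # The initial portion is requested with a standard HTTP Range header.
    req = urllib.request.Request(
        f"{base_url}/items/{item_id}/content",
        headers={"Range": f"bytes=0-{initial_bytes - 1}"},
    )
    with urllib.request.urlopen(req) as r:
        head = r.read()

    return metadata, head
```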
-
Publication number: US09444861B2
Publication date: 2016-09-13
Application number: US14953891
Application date: 2015-11-30
Applicant: Amazon Technologies, Inc.
Inventor: James Marvin Freeman, II , Aaron M. Bromberg , Bryant F. Herron-Patmon , Nush Karmacharya , Joshua B. Barnard , Peter Wei-Chih Chen , Stephen A. Slotnick , Abhishek Dubey , Andrew J. Watts , Richard J. Winograd
IPC: G06F15/16 , H04N5/445 , H04L29/06 , H04L29/08 , G06F15/167 , H04L12/861 , G06F17/30
CPC classification number: H04L67/2847 , G06F15/167 , G06F17/30867 , G06F17/30902 , H04L49/90 , H04L63/061 , H04L63/068 , H04L65/4069 , H04L65/4084 , H04L67/10 , H04L67/22 , H04L67/28 , H04L67/2804 , H04L67/306 , H04L67/42 , H04L2463/101
Abstract: Disclosed are various embodiments for predictive caching of content to facilitate instantaneous use of the content. If a user is likely to commence use of a content item through a client, and if the client has available resources to facilitate instantaneous use, the client is configured to predictively cache the content item before the user commences use. In doing so, the client may obtain metadata for the content item and an initial portion of the content item from a server. The client may then initialize various resources to facilitate instantaneous use of the content item by the client based at least in part on the metadata and the initial portion.
-
Publication number: US10516753B2
Publication date: 2019-12-24
Application number: US16278433
Application date: 2019-02-18
Applicant: Amazon Technologies, Inc.
Inventor: Lei Li , Andrew Jason Ma , Gurpreet Singh Ahluwalia , Abhishek Dubey , Sachin Shah , Vijay Sen , Gregory Scott Benjamin , Prateek Rameshchandra Shah , Cody Wayne Maxwell Powell , Meltem Celikel , Darryl Hudgin , James Marvin Freeman, II , Aaron M. Bromberg , Bryant F. Herron-Patmon , Nush Karmacharya , Joshua B. Barnard , Peter Wei-Chih Chen , Stephen A. Slotnick , Andrew J. Watts , Richard J. Winograd
IPC: H04L29/08 , H04L29/06 , G06F21/10 , G06F12/02 , G06F12/0813 , G06F12/0862 , G06F12/121
Abstract: Disclosed are various embodiments for predictive caching of content to facilitate use of the content. If an account is likely to commence use of a content item, the content item is cached before the use is commenced. The cache may be divided into multiple segments with different content selection criteria.
-
Publication number: US09344371B1
Publication date: 2016-05-17
Application number: US14513090
Application date: 2014-10-13
Applicant: Amazon Technologies, Inc.
Inventor: Soumya Sanyal , Ernest S. Powers, III , Mack Zhou , Matthew T. Tavis , Stephen A. Slotnick , John Wai Yam Hui , Charles Porter Schermerhorn
IPC: H04L12/803 , H04L29/08 , H04L29/06
CPC classification number: H04L63/108 , G06F17/2705 , G06F21/105 , G06F2221/0773 , H04L47/125 , H04L47/741 , H04L47/745 , H04L47/823 , H04L47/826 , H04L63/10 , H04L63/1441 , H04L63/20 , H04L67/02 , H04L67/22 , H04L67/32 , H04L67/42
Abstract: A lightweight throttling mechanism allows for dynamic control of access to resources in a distributed environment. Each request received by a server of a server group is parsed to determine tokens in the request, which are compared with designated rules to determine whether to process or reject the request based on usage data associated with an aspect of the request, the token values, and the rule(s) specified for the request. The receiving of each request can be broadcast to throttling components for each server such that the global state of the system is known to each server. The system then can monitor usage and dynamically throttle requests based on real time data in a distributed environment.
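A rough Python sketch of the token-and-rule check described in this abstract follows; the rule schema, windows, limits, and in-memory usage store are all illustrative assumptions, not the patented mechanism.

```python
# Hypothetical sketch of parsing tokens from a request and comparing them
# against designated rules to decide whether to process or reject it.
import time
from collections import defaultdict

# usage[(token_value, rule_name)] -> timestamps of recent requests
usage = defaultdict(list)

RULES = [
    # Each rule names the token it keys on, a window in seconds, and a limit.
    {"name": "per_customer", "token": "customer_id", "window": 60, "limit": 100},
    {"name": "per_api",      "token": "operation",   "window": 1,  "limit": 50},
]


def parse_tokens(request: dict) -> dict:
    """Pull the throttling-relevant tokens out of a request."""
    return {"customer_id": request.get("customer_id"),
            "operation": request.get("operation")}


def allow(request: dict) -> bool:
    """Return True to process the request, False to reject (throttle) it."""
    now = time.time()
    tokens = parse_tokens(request)
    keys = []
    for rule in RULES:
        key = (tokens.get(rule["token"]), rule["name"])
        window_start = now - rule["window"]
        usage[key] = [t for t in usage[key] if t >= window_start]
        if len(usage[key]) >= rule["limit"]:
            return False          # reject without recording usage
        keys.append(key)
    for key in keys:              # record only if every rule allowed it
        usage[key].append(now)
    return True
```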
-
Publication number: US20160080444A1
Publication date: 2016-03-17
Application number: US14953891
Application date: 2015-11-30
Applicant: Amazon Technologies, Inc.
Inventor: James Marvin Freeman, II , Aaron M. Bromberg , Bryant F. Herron-Patmon , Nush Karmacharya , Joshua B. Barnard , Peter Wei-Chih Chen , Stephen A. Slotnick , Abhishek Dubey , Andrew J. Watts , Richard J. Winograd
IPC: H04L29/06 , H04L12/861
CPC classification number: H04L67/2847 , G06F15/167 , G06F17/30867 , G06F17/30902 , H04L49/90 , H04L63/061 , H04L63/068 , H04L65/4069 , H04L65/4084 , H04L67/10 , H04L67/22 , H04L67/28 , H04L67/2804 , H04L67/306 , H04L67/42 , H04L2463/101
Abstract: Disclosed are various embodiments for predictive caching of content to facilitate instantaneous use of the content. If a user is likely to commence use of a content item through a client, and if the client has available resources to facilitate instantaneous use, the client is configured to predictively cache the content item before the user commences use. In doing so, the client may obtain metadata for the content item and an initial portion of the content item from a server. The client may then initialize various resources to facilitate instantaneous use of the content item by the client based at least in part on the metadata and the initial portion.
-
Publication number: US09729557B1
Publication date: 2017-08-08
Application number: US15155890
Application date: 2016-05-16
Applicant: Amazon Technologies, Inc.
Inventor: Soumya Sanyal , Ernest S. Powers, III , Mack Zhou , Matthew T. Tavis , Stephen A. Slotnick , John Wai Yam Hui , Charles Porter Schermerhorn
CPC classification number: H04L63/108 , G06F17/2705 , G06F21/105 , G06F2221/0773 , H04L47/125 , H04L47/741 , H04L47/745 , H04L47/823 , H04L47/826 , H04L63/10 , H04L63/1441 , H04L63/20 , H04L67/02 , H04L67/22 , H04L67/32 , H04L67/42
Abstract: A lightweight throttling mechanism allows for dynamic control of access to resources in a distributed environment. Each request received by a server of a server group is parsed to determine tokens in the request, which are compared with designated rules to determine whether to process or reject the request based on usage data associated with an aspect of the request, the token values, and the rule(s) specified for the request. The receiving of each request can be broadcast to throttling components for each server such that the global state of the system is known to each server. The system then can monitor usage and dynamically throttle requests based on real time data in a distributed environment.
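The "broadcast to throttling components for each server" step might be approximated as in the sketch below; the UDP transport, peer list, and message format are assumptions made only to keep the example concrete, and are not taken from the patent.

```python
# Illustrative sketch of sharing per-request usage with every server's
# throttling component so each node keeps a consistent global view.
import json
import socket

PEERS = [("10.0.0.11", 9099), ("10.0.0.12", 9099)]  # placeholder addresses


def broadcast_request_receipt(tokens: dict) -> None:
    """Tell the throttling component on every peer that a request arrived."""
    message = json.dumps({"event": "request_received", "tokens": tokens}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for host, port in PEERS:
            sock.sendto(message, (host, port))
    finally:
        sock.close()


def on_peer_message(raw: bytes, usage_counts: dict) -> None:
    """Fold a peer's receipt into this server's view of global usage."""
    event = json.loads(raw)
    if event.get("event") == "request_received":
        for token, value in event["tokens"].items():
            usage_counts[(token, value)] = usage_counts.get((token, value), 0) + 1
```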
-