-
Publication Number: US09237387B2
Publication Date: 2016-01-12
Application Number: US12611133
Filing Date: 2009-11-03
Applicants: John A. Bocharov, Krishna Prakash Duggaraju, Lin Liu, Jack E. Freelander, Ning Lin, Anirban Roy
Inventors: John A. Bocharov, Krishna Prakash Duggaraju, Lin Liu, Jack E. Freelander, Ning Lin, Anirban Roy
IPC Classification: H04N21/472, H04N21/845, H04N21/433, H04N21/434, H04L29/06, H04L29/08
CPC Classification: H04N21/8456, H04L65/605, H04L67/2804, H04N21/4331, H04N21/4348, H04N21/47202
Abstract: A low latency streaming system provides a stateless protocol between a client and server with reduced latency. The server embeds incremental information in media fragments, which eliminates the need for a typical control channel. In addition, the server provides uniform media fragment responses to media fragment requests, thereby allowing existing Internet cache infrastructure to cache streaming media data. Each fragment has a distinguished Uniform Resource Locator (URL) that allows the fragment to be identified and cached by both Internet cache servers and the client's browser cache. The system reduces latency using various techniques, such as sending fragments that contain less than a full group of pictures (GOP), encoding media without dependencies on subsequent frames, and allowing clients to request subsequent frames with only information about previous frames.
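The fragment-addressing scheme is the core of the cacheable, stateless design described above. The sketch below models it in Python: each fragment start time maps to its own URL (so ordinary HTTP caches and the browser cache can store the response), and the fragment itself carries the start time of the next fragment, so the client needs no separate control channel. The URL template, field names, and 2-second fragment duration are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical cache-friendly URL template: each fragment start time yields a
# distinct URL, so intermediate caches can store each response independently.
FRAGMENT_URL = "http://media.example.com/live/stream.isml/Fragments(video={start})"


def fragment_url(start_time_ms: int) -> str:
    """Build the distinct, cacheable URL for the fragment starting at start_time_ms."""
    return FRAGMENT_URL.format(start=start_time_ms)


def fetch_fragment(start_time_ms: int) -> dict:
    """Stand-in for an HTTP GET of fragment_url(start_time_ms).

    The simulated response carries the incremental information the server
    embeds in each fragment: the start time of the next fragment, which lets
    the client keep requesting without a separate control channel.
    """
    fragment_duration_ms = 2000  # assumed 2-second fragments
    return {
        "url": fragment_url(start_time_ms),
        "media_data": b"...",  # encoded samples would go here
        "next_fragment_start": start_time_ms + fragment_duration_ms,
    }


def play_live(first_start_ms: int, fragment_count: int) -> None:
    """Request a run of fragments, each request driven only by the previous response."""
    start = first_start_ms
    for _ in range(fragment_count):
        response = fetch_fragment(start)
        print("fetched", response["url"])
        start = response["next_fragment_start"]


if __name__ == "__main__":
    play_live(first_start_ms=0, fragment_count=3)
```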
-
Publication Number: US20110080940A1
Publication Date: 2011-04-07
Application Number: US12611133
Filing Date: 2009-11-03
Applicants: John A. Bocharov, Krishna Prakash Duggaraju, Lin Liu, Jack E. Freelander, Ning Lin, Anirban Roy
Inventors: John A. Bocharov, Krishna Prakash Duggaraju, Lin Liu, Jack E. Freelander, Ning Lin, Anirban Roy
CPC Classification: H04N21/8456, H04L65/605, H04L67/2804, H04N21/4331, H04N21/4348, H04N21/47202
Abstract: A low latency streaming system provides a stateless protocol between a client and server with reduced latency. The server embeds incremental information in media fragments, which eliminates the need for a typical control channel. In addition, the server provides uniform media fragment responses to media fragment requests, thereby allowing existing Internet cache infrastructure to cache streaming media data. Each fragment has a distinguished Uniform Resource Locator (URL) that allows the fragment to be identified and cached by both Internet cache servers and the client's browser cache. The system reduces latency using various techniques, such as sending fragments that contain less than a full group of pictures (GOP), encoding media without dependencies on subsequent frames, and allowing clients to request subsequent frames with only information about previous frames.
-
Publication Number: US20100268789A1
Publication Date: 2010-10-21
Application Number: US12425395
Filing Date: 2009-04-17
Applicants: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
IPC Classification: G06F15/167
CPC Classification: H04L67/2842, H04L67/2833, H04L67/2885
Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system recognizes that a similar request is outstanding and holds the second request by not forwarding it to the origin server. After the response to the first request arrives from the next tier, the system shares the response with the other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.
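The hold-and-share behavior in this abstract is a request-coalescing pattern. The following sketch is a minimal, thread-based illustration of that idea rather than the patent's implementation: the first cache miss for a key triggers a single fetch to the next tier, concurrent requests for the same key are held on an event, and every held request is answered from the one upstream response. All names (CoalescingCache, fetch_from_next_tier) are hypothetical.

```python
import threading


class CoalescingCache:
    """Hold requests for a key while one upstream fetch for that key is in flight."""

    def __init__(self, fetch_from_next_tier):
        self._fetch = fetch_from_next_tier      # callable(key) -> response payload
        self._cache = {}                        # key -> cached response
        self._pending = {}                      # key -> Event signalled when the fetch lands
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            if key in self._cache:              # cache hit: answer immediately
                return self._cache[key]
            event = self._pending.get(key)
            if event is None:                   # first miss: this request owns the fetch
                event = threading.Event()
                self._pending[key] = event
                owner = True
            else:                               # a fetch is already in flight: hold
                owner = False
        if owner:
            value = self._fetch(key)            # single request to the next tier / origin
            with self._lock:
                self._cache[key] = value
                del self._pending[key]
            event.set()                         # release every held request
            return value
        event.wait()                            # held until the owner's response arrives
        with self._lock:
            return self._cache[key]


if __name__ == "__main__":
    origin_calls = []

    def fetch(key):
        origin_calls.append(key)                # count trips to the origin server
        return f"payload for {key}"

    cache = CoalescingCache(fetch)
    threads = [threading.Thread(target=cache.get, args=("fragment-42",)) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("origin fetches:", len(origin_calls))  # 1, despite 5 concurrent requests
```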
-
Publication Number: US08046432B2
Publication Date: 2011-10-25
Application Number: US12425395
Filing Date: 2009-04-17
Applicants: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
IPC Classification: G06F15/16, G06F15/167
CPC Classification: H04L67/2842, H04L67/2833, H04L67/2885
Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system recognizes that a similar request is outstanding and holds the second request by not forwarding it to the origin server. After the response to the first request arrives from the next tier, the system shares the response with the other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.
-
Publication Number: US20050250440A1
Publication Date: 2005-11-10
Application Number: US10959421
Filing Date: 2004-10-06
Applicants: Peter Zhou, Dexing Pang, Yiu-Cho Tong, Ning Lin, David Addington, Rowena Albanna, Amro Albanna, Keith Bolton
Inventors: Peter Zhou, Dexing Pang, Yiu-Cho Tong, Ning Lin, David Addington, Rowena Albanna, Amro Albanna, Keith Bolton
IPC Classification: G01S19/34, G01S1/00, G01S5/00, G01S5/14, G01S19/09, G01S19/35, G08C17/02, H01Q1/22, H04W64/00, H04Q7/20
CPC Classification: G08C17/02, G01S5/0027, G01S5/0036, G01S19/17, H04W64/00
Abstract: The present invention generally relates to systems, methods and applications utilizing the convergence of any combination of the following three technologies: wireless positioning or localization technology, wireless communications technology, and sensor technology. In particular, certain embodiments of the present invention relate to a remote device that includes a sensor for determining or measuring a desired parameter, a receiver for receiving position data from the Global Positioning System (GPS) satellite system, a processor for determining whether or not alert conditions are present, and a wireless transceiver for transmitting the measured parameter data and the position data to a central station, such as an application service provider (ASP). The ASP, in turn, may communicate the measured data, position data and notification of any alerts to an end user via an alert device. The present invention also relates to various applications and systems utilizing the capabilities of such a device.
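As a rough illustration of the data path the abstract describes (sensor reading plus GPS fix, an on-device alert check, and transmission to the ASP), here is a small Python sketch. The field names, the threshold-style alert condition, and the simulated sensor/GPS callables are assumptions for illustration, not details from the patent.

```python
from dataclasses import dataclass


@dataclass
class Report:
    value: float        # measured parameter (e.g. a temperature reading)
    latitude: float
    longitude: float
    alert: bool         # set when the processor detects an alert condition


def build_report(read_sensor, read_gps_fix, alert_threshold: float) -> Report:
    """Combine the sensor reading and GPS fix, flagging an alert if the
    (assumed) threshold condition is met."""
    value = read_sensor()
    lat, lon = read_gps_fix()
    return Report(value=value, latitude=lat, longitude=lon,
                  alert=value > alert_threshold)


def transmit_to_asp(report: Report) -> None:
    """Stand-in for the wireless transceiver; a real device would send this
    payload to the application service provider (ASP), which notifies the
    end user's alert device when report.alert is True."""
    print("sending to ASP:", report)


if __name__ == "__main__":
    transmit_to_asp(build_report(
        read_sensor=lambda: 41.7,               # simulated measurement
        read_gps_fix=lambda: (47.61, -122.33),  # simulated GPS position
        alert_threshold=40.0,
    ))
```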
-
Publication Number: US09514243B2
Publication Date: 2016-12-06
Application Number: US12629904
Filing Date: 2009-12-03
Applicants: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
Inventors: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
IPC Classification: G06F17/30
CPC Classification: G06F17/30902
Abstract: An intelligent caching system is described herein that intelligently consolidates the name-value pairs in content requests containing query strings so that only substantially non-redundant responses are cached, thereby saving cache proxy resources. The intelligent caching system determines which name-value pairs in the query string can affect the redundancy of the content response and which name-value pairs can be ignored. The intelligent caching system organically builds the list of relevant name-value pairs by relying on a custom response header or other indication from the content server. Thus, the intelligent caching system results in fewer requests to the content server as well as fewer objects in the cache.
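One way to picture the consolidation of name-value pairs is as cache-key normalization: the proxy learns from the content server which query parameters actually vary the response and ignores the rest when forming the cache key. The sketch below assumes a hypothetical X-Vary-Query response header as the "custom response header" mentioned in the abstract; the real header name and learning mechanism are not given here.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Learned per URL path from the content server's (hypothetical) response header,
# e.g. "X-Vary-Query: productId, lang" -> {"productId", "lang"}.
relevant_params: dict[str, set[str]] = {}


def learn_relevant_params(path: str, response_headers: dict) -> None:
    """Record which query parameters the server says matter for this path."""
    header = response_headers.get("X-Vary-Query", "")
    relevant_params[path] = {name.strip() for name in header.split(",") if name.strip()}


def cache_key(url: str) -> str:
    """Drop query parameters that cannot change the response (session ids,
    tracking tokens, ...) so equivalent requests share one cached object."""
    parts = urlsplit(url)
    keep = relevant_params.get(parts.path)
    if keep is None:
        # Nothing learned for this path yet: fall back to the full URL as the key.
        return parts.path + ("?" + parts.query if parts.query else "")
    pairs = sorted((k, v) for k, v in parse_qsl(parts.query) if k in keep)
    return parts.path + ("?" + urlencode(pairs) if pairs else "")


if __name__ == "__main__":
    learn_relevant_params("/catalog", {"X-Vary-Query": "productId, lang"})
    a = cache_key("http://example.com/catalog?productId=7&lang=en&sessionId=abc")
    b = cache_key("http://example.com/catalog?lang=en&sessionId=xyz&productId=7")
    print(a == b)   # True: both requests map to the same cached response
```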
-
Publication Number: US20050014058A1
Publication Date: 2005-01-20
Application Number: US10619406
Filing Date: 2003-07-15
Applicants: Nileshkumar Dave, Ning Lin
Inventors: Nileshkumar Dave, Ning Lin
CPC Classification: H01M8/0271, H01M8/2485, Y10T29/4911
Abstract: A multi-layer seal system for a manifold (10) of a proton exchange membrane fuel cell includes a silicone rubber filler layer (22) between endplates (9) to compensate for the uneven edges of cell elements, an elastomer gasket (15) disposed within a groove (24) in the contact surfaces of a manifold (10), and a rigid dielectric strip (40) coplanar with the contact surfaces (17) of the endplates (9) interposed between the silicone rubber filler layer (22) and the gasket (15). The rigid dielectric strip (40) may be either angled (40a) for a corner seal, or flat (40b).
-
Publication Number: US20110249679A1
Publication Date: 2011-10-13
Application Number: US13140054
Filing Date: 2009-08-31
Applicants: Ning Lin, Xiaohong Qian
Inventors: Ning Lin, Xiaohong Qian
IPC Classification: H04L12/56
Abstract: A method for implementing FRR comprising: starting up upper-layer protocol software to manage and configure an FRR route; the upper-layer protocol software sending down the active next hop of the FRR; a driver writing the IP address of the FRR into an ECMP table and creating a software table to record the correspondence between the FRR group and the ECMP group; informing the driver of the prefix address of a subnet route and the index of the FRR group, the driver finding the index of the ECMP group and writing the subnet route information and the index of the ECMP group into hardware; the upper-layer protocol software informing the driver of the index of the FRR and the IP address of a new standby next hop; and the driver looking up the index of the ECMP group and updating the next-hop address of the ECMP group.
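To make the table bookkeeping in this method easier to follow, the sketch below models it in plain Python: a stand-in for the hardware ECMP table, the software table mapping FRR group indexes to ECMP group indexes, and a switch-over step that rewrites only the ECMP group's next hop so every dependent route fails over at once. All class and method names are illustrative; the actual method programs real forwarding hardware through a driver.

```python
class EcmpTable:
    """Stand-in for the hardware ECMP table: ecmp_index -> active next-hop IP."""

    def __init__(self):
        self.next_hop = {}

    def write(self, ecmp_index: int, next_hop_ip: str) -> None:
        self.next_hop[ecmp_index] = next_hop_ip


class FrrDriver:
    def __init__(self):
        self.hw_ecmp = EcmpTable()
        self.frr_to_ecmp = {}   # software table: FRR group index -> ECMP group index
        self.routes = {}        # subnet prefix -> ECMP group index (as written to hardware)
        self._next_ecmp = 0

    def add_frr_group(self, frr_index: int, active_next_hop: str) -> None:
        """Upper-layer protocol software sends down the active next hop of the FRR."""
        ecmp_index = self._next_ecmp
        self._next_ecmp += 1
        self.hw_ecmp.write(ecmp_index, active_next_hop)
        self.frr_to_ecmp[frr_index] = ecmp_index

    def add_route(self, prefix: str, frr_index: int) -> None:
        """Given a subnet prefix and FRR group index, resolve the ECMP group index
        and write the route into 'hardware'."""
        self.routes[prefix] = self.frr_to_ecmp[frr_index]

    def switch_to_standby(self, frr_index: int, standby_next_hop: str) -> None:
        """On failure, update only the ECMP group's next hop; every route that
        points at the group follows automatically (the fast reroute)."""
        self.hw_ecmp.write(self.frr_to_ecmp[frr_index], standby_next_hop)


if __name__ == "__main__":
    d = FrrDriver()
    d.add_frr_group(frr_index=1, active_next_hop="10.0.0.1")
    d.add_route("192.168.1.0/24", frr_index=1)
    d.switch_to_standby(frr_index=1, standby_next_hop="10.0.0.2")
    print(d.hw_ecmp.next_hop[d.routes["192.168.1.0/24"]])   # 10.0.0.2
```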
-
Publication Number: US20100274885A1
Publication Date: 2010-10-28
Application Number: US12427774
Filing Date: 2009-04-22
Applicants: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin
IPC Classification: G06F15/16, G06F15/173
CPC Classification: H04L67/1008, H04L67/1002
Abstract: A load balancing system is described herein that proactively balances client requests among multiple destination servers, using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect the performance and/or capacity for handling requests of a destination server. Upon detecting the event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.
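The drain/restore cycle described above can be pictured with a toy round-robin balancer: a server scheduled for a disruptive event is taken out of rotation before the event and put back once the event completes. This is only an illustration of the routing-side effect; how the system actually detects upcoming events and signals the load balancer is not shown, and all names are assumptions.

```python
class LoadBalancer:
    """Round-robin over destination servers, skipping any that are draining."""

    def __init__(self, servers):
        self._servers = list(servers)
        self._draining = set()
        self._next = 0

    def drain(self, server: str) -> None:
        """Stop sending new requests to `server` ahead of its scheduled event."""
        self._draining.add(server)

    def restore(self, server: str) -> None:
        """Put `server` back into rotation once the event has completed."""
        self._draining.discard(server)

    def route(self) -> str:
        """Pick the next available destination server for an incoming request."""
        available = [s for s in self._servers if s not in self._draining]
        if not available:
            raise RuntimeError("no destination servers available")
        server = available[self._next % len(available)]
        self._next += 1
        return server


if __name__ == "__main__":
    lb = LoadBalancer(["app1", "app2", "app3"])
    lb.drain("app2")                       # e.g. a disruptive task is due on app2
    print([lb.route() for _ in range(4)])  # only app1/app3 receive requests
    lb.restore("app2")                     # event finished; app2 rejoins rotation
    print([lb.route() for _ in range(3)])
```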
-
Publication Number: US20110131341A1
Publication Date: 2011-06-02
Application Number: US12626957
Filing Date: 2009-11-30
Applicants: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
Inventors: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
IPC Classification: G06F15/16
CPC Classification: G06F16/9574
Abstract: A selective pre-caching system reduces the amount of content cached at cache proxies by limiting the cached content to the content that a particular cache proxy is responsible for caching. This can substantially reduce the content stored on each cache proxy and reduces the amount of resources consumed for pre-caching in preparation for a particular event. The cache proxy receives a list of content items and an indication of the topology of the cache network. The cache proxy uses the received topology to determine which content items in the received list it is responsible for caching. The cache proxy then retrieves the determined content items so that they are available in the cache before client requests are received.
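A simple way to see how a proxy can decide which items it is responsible for, using only the published content list and the cache-network topology, is a deterministic hash partition: every proxy computes the same assignment independently and pre-fetches only its share. The modulo-SHA-256 scheme and all names below are illustrative assumptions; the abstract does not specify the actual assignment rule.

```python
import hashlib


def responsible_items(content_urls, topology, my_name):
    """Return the subset of content_urls this proxy should pre-fetch, given the
    topology (the full list of proxy names) and this proxy's own name."""
    proxies = sorted(topology)
    my_slot = proxies.index(my_name)
    mine = []
    for url in content_urls:
        # Stable hash so every proxy computes the same assignment independently.
        digest = int(hashlib.sha256(url.encode()).hexdigest(), 16)
        if digest % len(proxies) == my_slot:
            mine.append(url)
    return mine


def pre_cache(content_urls, topology, my_name, fetch):
    """Warm the local cache before client requests arrive for an event."""
    return {url: fetch(url) for url in responsible_items(content_urls, topology, my_name)}


if __name__ == "__main__":
    items = [f"http://origin.example.com/video/seg{i}.mp4" for i in range(10)]
    topology = ["proxy-a", "proxy-b", "proxy-c"]
    warmed = pre_cache(items, topology, "proxy-b", fetch=lambda url: b"...")
    print(f"proxy-b pre-cached {len(warmed)} of {len(items)} items")
```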