    22. LOW LATENCY CACHEABLE MEDIA STREAMING
    Invention Application - Granted

    Publication No.: US20110080940A1

    Publication Date: 2011-04-07

    Application No.: US12611133

    Filing Date: 2009-11-03

    Abstract: A low latency streaming system provides a stateless protocol between a client and a server with reduced latency. The server embeds incremental information in media fragments, eliminating the need for a typical control channel. In addition, the server provides uniform media fragment responses to media fragment requests, thereby allowing existing Internet cache infrastructure to cache streaming media data. Each fragment has a distinguished Uniform Resource Locator (URL) that allows the fragment to be identified and cached by both Internet cache servers and the client's browser cache. The system reduces latency using various techniques, such as sending fragments that contain less than a full group of pictures (GOP), encoding media without dependencies on subsequent frames, and allowing clients to request subsequent frames using only information about previous frames.
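
    A minimal client-side sketch of the fragment-request pattern the abstract describes, in Python: each fragment is fetched from its own distinguished URL (so ordinary HTTP caches can serve it), and metadata returned with the fragment tells the client which fragment to ask for next. The URL template, header name, and helper names are illustrative assumptions, not the patent's actual wire format.

```python
import urllib.request

BASE = "http://media.example.com/live/channel1"  # hypothetical stream base URL

def fragment_url(bitrate, start_time):
    # Each fragment gets a distinguished URL, so ordinary HTTP caches
    # (CDN nodes, the browser cache) can store and serve it like a static object.
    return f"{BASE}/QualityLevels({bitrate})/Fragments(video={start_time})"

def fetch_fragment(bitrate, start_time):
    with urllib.request.urlopen(fragment_url(bitrate, start_time)) as resp:
        data = resp.read()
        # Incremental information embedded with the fragment response (modeled
        # here as a header) gives the timestamp of the next fragment, so the
        # client never needs a separate control channel.
        nxt = resp.headers.get("X-Next-Fragment-Time")  # assumed header name
        return data, int(nxt) if nxt else None

def play(bitrate=1_000_000, start_time=0):
    t = start_time
    while t is not None:
        data, t = fetch_fragment(bitrate, t)  # next request derived only from
        handle_fragment(data)                 # the previous fragment's metadata

def handle_fragment(data):
    print(f"received fragment of {len(data)} bytes")  # stand-in for a decoder
```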

    23. Fuel cell manifold seal with rigid inner layer
    Invention Application - Granted

    Publication No.: US20050014058A1

    Publication Date: 2005-01-20

    Application No.: US10619406

    Filing Date: 2003-07-15

    CPC classification number: H01M8/0271 H01M8/2485 Y10T29/4911

    Abstract: A multi-layer seal system for a manifold (10) of a proton exchange membrane fuel cell includes a silicone rubber filler layer (22) between endplates (9) to compensate for the uneven edges of cell elements, an elastomer gasket (15) disposed within a groove (24) in the contact surfaces of a manifold (10), and a rigid dielectric strip (40) coplanar with the contact surfaces (17) of the endplates (9) interposed between the silicone rubber filler layer (22) and the gasket (15). The rigid dielectric strip (40) may be either angled (40a) for a corner seal, or flat (40b).

    25. METHOD FOR IMPLEMENTING FAST REROUTE
    Invention Application - Pending (published)

    Publication No.: US20110249679A1

    Publication Date: 2011-10-13

    Application No.: US13140054

    Filing Date: 2009-08-31

    CPC classification number: H04L45/00 H04L45/22 H04L45/24 H04L45/28 H04L45/50

    Abstract: A method for implementing FRR comprising: starting up an upper layer protocol software to manage and configure an FRR route; the upper layer protocol software sending down an active next hop of the FRR; a driver writing an IP address of the FRR into an ECMP table and creating a software table to record the correspondence between an FRR group and an ECMP group; informing the driver of a prefix address of a subnet route and the index of the FRR group, the driver finding the index of the ECMP group and writing information of the subnet route and the index of the ECMP group into hardware; the upper layer protocol software informing the driver of the index of the FRR and an IP address of a new standby next hop; and the driver looking up the index of the ECMP group and updating the next hop address of the ECMP group.
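
    The steps above amount to driver-side bookkeeping across three tables: a software table mapping FRR group to ECMP group, the ECMP next-hop table, and a hardware route table keyed by subnet prefix. A minimal Python sketch of that bookkeeping follows; the table layouts and function names are assumptions made for illustration, and a real driver would program these entries into forwarding hardware.

```python
# Driver-side bookkeeping for the FRR/ECMP flow described above. The three
# dictionaries stand in for the software table, the ECMP table, and the
# hardware route table; all names here are assumptions for illustration.

ecmp_table = {}    # ECMP group index -> list of next-hop IP addresses
frr_to_ecmp = {}   # FRR group index  -> ECMP group index (the "software table")
route_table = {}   # subnet prefix    -> ECMP group index (simulated hardware)
_next_ecmp_index = 0

def create_frr_group(frr_index, active_next_hop):
    # The upper layer protocol software sends down the active next hop; the
    # driver writes it into an ECMP entry and records the FRR->ECMP mapping.
    global _next_ecmp_index
    ecmp_index = _next_ecmp_index
    _next_ecmp_index += 1
    ecmp_table[ecmp_index] = [active_next_hop]
    frr_to_ecmp[frr_index] = ecmp_index

def add_subnet_route(prefix, frr_index):
    # Given a subnet prefix and the FRR group index, the driver finds the ECMP
    # group index and writes the route plus that index into (simulated) hardware.
    route_table[prefix] = frr_to_ecmp[frr_index]

def switch_to_standby(frr_index, standby_next_hop):
    # On failure, the upper layer hands the driver the FRR index and the new
    # standby next hop; updating the ECMP group in place reroutes every subnet
    # pointing at that group without per-route reprogramming.
    ecmp_index = frr_to_ecmp[frr_index]
    ecmp_table[ecmp_index] = [standby_next_hop]

# Example: one FRR group protecting a subnet, then a fast reroute to standby.
create_frr_group(frr_index=1, active_next_hop="10.0.0.1")
add_subnet_route("192.168.1.0/24", frr_index=1)
switch_to_standby(frr_index=1, standby_next_hop="10.0.0.2")
```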

    26. Wireless mouse with electronic switch circuit
    Invention Application - Granted

    Publication No.: US20110102322A1

    Publication Date: 2011-05-05

    Application No.: US12588862

    Filing Date: 2009-10-30

    CPC classification number: G06F3/03543 G06F3/0383 G06F2203/0384

    Abstract: A wireless mouse for inputting commands to a host computer includes a casing, a control circuit, a wireless receiver, an electronic switch circuit and a resilient member. The control circuit includes a wireless module to transmit a wireless signal to the wireless receiver. The wireless receiver can be either received in a port of the casing or attached to a connector of the host computer. The electronic switch circuit includes a control terminal, a power input terminal connected to an external power source, and a power output terminal connected to the control circuit. The power input and output terminals are electrically connected or disconnected according to the electrical potential at the control terminal. The resilient member is disposed on an inner side of the port. The resilient member is electrically connected to the control terminal by insertion of the wireless receiver, thereby disconnecting the power input and output terminals.

    27. PROACTIVE LOAD BALANCING
    Invention Application - Granted

    Publication No.: US20100274885A1

    Publication Date: 2010-10-28

    Application No.: US12427774

    Filing Date: 2009-04-22

    CPC classification number: H04L67/1008 H04L67/1002

    Abstract: A load balancing system is described herein that proactively balances client requests among multiple destination servers, using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect the performance and/or capacity of a destination server for handling requests. Upon detecting such an event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before these events occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.
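
    A minimal sketch of the drain/restore cycle described above, in Python. The load balancer management calls (drain/restore), the event object, and the completion check are assumptions made for illustration rather than the patent's actual interfaces.

```python
import time
from dataclasses import dataclass

@dataclass
class UpcomingEvent:
    server: str        # destination server the event will affect
    start_ts: float    # anticipated start time (epoch seconds)
    lead_time: float   # how early to begin draining connections

class LoadBalancerClient:
    # Stand-in for the balancer's management interface (names are assumed).
    def drain(self, server):
        print(f"draining new connections away from {server}")

    def restore(self, server):
        print(f"restoring {server} to the rotation")

def supervise(event, balancer, event_finished):
    # 1) Ahead of the anticipated event, proactively drain the server so new
    #    client requests are routed to the other destination servers.
    time.sleep(max(0.0, event.start_ts - event.lead_time - time.time()))
    balancer.drain(event.server)

    # 2) Wait for the event (e.g. patching, backup, cache flush) to complete.
    while not event_finished(event.server):
        time.sleep(5)

    # 3) Restore connections to the destination server once it is ready again.
    balancer.restore(event.server)
```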

    28. NETWORK CACHING FOR MULTIPLE CONTEMPORANEOUS REQUESTS
    Invention Application - Granted

    Publication No.: US20100268789A1

    Publication Date: 2010-10-21

    Application No.: US12425395

    Filing Date: 2009-04-17

    CPC classification number: H04L67/2842 H04L67/2833 H04L67/2885

    Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.
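
    The "holding" behavior described above is essentially request coalescing at the cache node: the first miss goes to the next tier, and contemporaneous requests for the same content wait for that single response. A minimal threaded Python sketch follows; the class and callable names are assumptions, since the patent does not specify the cache server's internals at this level.

```python
import threading

class CoalescingCache:
    def __init__(self, fetch_from_next_tier):
        self._fetch = fetch_from_next_tier  # assumed upstream-fetch callable
        self._lock = threading.Lock()
        self._cache = {}       # key -> cached response
        self._inflight = {}    # key -> Event signalled when the fetch completes

    def get(self, key):
        with self._lock:
            if key in self._cache:                 # cache hit: serve locally
                return self._cache[key]
            pending = self._inflight.get(key)
            if pending is None:                    # first miss: this request
                pending = threading.Event()        # goes to the next tier
                self._inflight[key] = pending
                owner = True
            else:                                  # later miss: "hold" it and
                owner = False                      # wait for the shared response

        if owner:
            response = self._fetch(key)            # single upstream request
            with self._lock:
                self._cache[key] = response
                self._inflight.pop(key).set()      # release all held requests
            return response

        pending.wait()                             # held until the owner finishes
        with self._lock:
            return self._cache[key]

# Example: many contemporaneous requests for the same live fragment result in a
# single upstream fetch; later callers are served from the shared response.
cache = CoalescingCache(fetch_from_next_tier=lambda key: f"payload for {key}")
print(cache.get("/live/fragment/42"))
```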
