Abstract:
A low latency streaming system provides a stateless protocol between a client and server with reduced latency. The server embeds incremental information in media fragments, eliminating the need for a typical control channel. In addition, the server provides uniform media fragment responses to media fragment requests, thereby allowing existing Internet cache infrastructure to cache streaming media data. Each fragment has a distinguished Uniform Resource Locator (URL) that allows the fragment to be identified and cached by both Internet cache servers and the client's browser cache. The system reduces latency using various techniques, such as sending fragments that contain less than a full group of pictures (GOP), encoding media without dependencies on subsequent frames, and allowing clients to request subsequent frames with only information about previous frames.
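The fragment-addressing scheme described above can be sketched in a few lines: each fragment gets its own distinct URL, so any standard HTTP cache can store and serve it independently. The URL pattern and function names below are illustrative assumptions, not the patented format.

```python
# Hypothetical sketch: one distinct URL per media fragment so that
# Internet cache servers and the browser cache can each store it.
# The /QualityLevels(...)/Fragments(...) naming scheme is assumed.

def fragment_url(base, stream, bitrate, start_time):
    """Build a distinct, cacheable URL for a single media fragment."""
    return f"{base}/{stream}/QualityLevels({bitrate})/Fragments(video={start_time})"

cache = {}  # stands in for any intermediate HTTP cache

def get_fragment(url, fetch_from_origin):
    # Because responses are uniform per URL, a repeat request can be
    # satisfied from cache without reaching the origin server.
    if url not in cache:
        cache[url] = fetch_from_origin(url)
    return cache[url]
```

A client requesting the same fragment twice would hit the origin only once; subsequent requests are served from whichever cache holds the URL.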
Abstract:
A multi-layer seal system for a manifold (10) of a proton exchange membrane fuel cell includes a silicone rubber filler layer (22) between endplates (9) to compensate for the uneven edges of cell elements, an elastomer gasket (15) disposed within a groove (24) in the contact surfaces of a manifold (10), and a rigid dielectric strip (40) coplanar with the contact surfaces (17) of the endplates (9) interposed between the silicone rubber filler layer (22) and the gasket (15). The rigid dielectric strip (40) may be either angled (40a) for a corner seal, or flat (40b).
Abstract:
A method for implementing FRR comprising: starting up an upper layer protocol software to manage and configure a FRR route; the upper layer protocol software sending down an active next hop of the FRR; a driver writing an IP address of the FRR into an ECMP table and creating a software table to record correspondence between a FRR group and an ECMP group; informing the driver of a prefix address of a subnet route and the index of the FRR group, the driver finding the index of the ECMP group and writing information of the subnet route and the index of the ECMP group into hardware; the upper layer protocol software informing the driver of the index of the FRR group and an IP address of a new standby next hop; and the driver looking up the index of the ECMP group and updating the next hop address of the ECMP group.
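The bookkeeping steps above can be sketched as a small driver model: a mapping from FRR group index to ECMP group index, routes that point at ECMP groups, and a standby-next-hop update that touches only the ECMP entry. Class and field names are assumptions for illustration, not the patented implementation.

```python
# Minimal sketch of the FRR/ECMP correspondence described in the method.
# All table layouts and names are illustrative assumptions.

class Driver:
    def __init__(self):
        self.ecmp_table = {}    # ecmp_index -> list of next-hop IPs
        self.frr_to_ecmp = {}   # software table: frr_index -> ecmp_index
        self.routes = {}        # subnet prefix -> ecmp_index ("hardware")
        self._next_ecmp = 0

    def create_frr(self, frr_index, active_next_hop):
        # Write the FRR's next-hop IP into the ECMP table and record
        # the FRR-group-to-ECMP-group correspondence.
        ecmp_index = self._next_ecmp
        self._next_ecmp += 1
        self.ecmp_table[ecmp_index] = [active_next_hop]
        self.frr_to_ecmp[frr_index] = ecmp_index

    def add_route(self, prefix, frr_index):
        # Find the ECMP group for this FRR group and bind the subnet
        # route to it.
        self.routes[prefix] = self.frr_to_ecmp[frr_index]

    def update_standby(self, frr_index, standby_next_hop):
        # Switch the ECMP group to the new standby next hop; routes
        # referencing the ECMP index need no change.
        ecmp_index = self.frr_to_ecmp[frr_index]
        self.ecmp_table[ecmp_index] = [standby_next_hop]
```

Because routes reference the ECMP group by index, updating the standby next hop changes one ECMP entry rather than rewriting every affected route.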
Abstract:
A wireless mouse for inputting commands to a host computer includes a casing, a control circuit, a wireless receiver, an electronic switch circuit and a resilient member. The control circuit includes a wireless module to transmit a wireless signal to the wireless receiver. The wireless receiver can be either received in a port of the casing or attached to a connector of the host computer. The electronic switch circuit includes a control terminal, a power input terminal connected to an external power source, and a power output terminal connected to the control circuit. The power input and power output terminals are electrically connected or disconnected according to the electrical potential at the control terminal. The resilient member is disposed on an inner side of the port. Insertion of the wireless receiver into the port electrically connects the resilient member with the control terminal, thereby disconnecting the power input and output terminals.
Abstract:
A load balancing system is described herein that proactively balances client requests among multiple destination servers using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect a destination server's performance and/or capacity for handling requests. Upon detecting the event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.
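The drain-and-restore cycle above can be sketched as a load balancer that removes a server from its active pool before a scheduled event and adds it back afterward. The class and hook names are assumptions for illustration; real systems would drain gracefully over a window rather than instantly.

```python
# Illustrative sketch of proactive draining around a known event.
# Names (LoadBalancer, on_event_*) are assumptions, not the patented API.

class LoadBalancer:
    def __init__(self, servers):
        self.active = set(servers)

    def drain(self, server):
        # Stop sending new requests to a server before its event.
        self.active.discard(server)

    def restore(self, server):
        # Resume routing once the event is complete.
        self.active.add(server)

    def route(self, request):
        # Simple hash-based selection over currently active servers.
        pool = sorted(self.active)
        return pool[hash(request) % len(pool)]

def on_event_scheduled(lb, server):
    lb.drain(server)

def on_event_complete(lb, server):
    lb.restore(server)
```

While the event runs, every request lands on a remaining active server; after `on_event_complete`, the server rejoins the rotation.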
Abstract:
A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.
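The hold-and-share behavior above is a form of request coalescing: the first cache miss goes to the next tier, duplicate requests wait, and all waiters receive the one response. The sketch below illustrates that pattern under assumed names; it is not the patented implementation.

```python
# Illustrative request-coalescing sketch for the "hold" behavior:
# only the first miss for a key reaches the origin; concurrent
# duplicates wait and share the response. Names are assumptions.
import threading

class CoalescingCache:
    def __init__(self, fetch):
        self.fetch = fetch      # forwards a miss to the next tier/origin
        self.cache = {}
        self.pending = {}       # key -> Event for in-flight requests
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key in self.cache:
                return self.cache[key]
            if key in self.pending:
                # A similar request is outstanding: hold this one.
                event, wait = self.pending[key], True
            else:
                event = threading.Event()
                self.pending[key] = event
                wait = False
        if wait:
            event.wait()
            return self.cache[key]
        value = self.fetch(key)  # only the first request hits the origin
        with self.lock:
            self.cache[key] = value
            del self.pending[key]
        event.set()              # release all held requests
        return value
```

This keeps origin load proportional to the number of distinct fragments rather than the number of concurrent viewers, which is what lets capacity scale by adding cache tiers.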