Abstract:
In an embodiment, an application server receives, from a given UE, data configured to visually represent, at a first level of precision, physical user input detected at the given UE. The application server determines data presentation capabilities of a target UE and/or a performance level associated with a connection between the application server and the target UE. Based on that determination, the application server selectively transitions the received data from the first level of precision to a second level of precision and transmits the selectively transitioned data to the target UE for presentation. In another embodiment, the application server receives, from the given UE, a request to adjust display settings of the target UE responsive to detected physical user input, and selectively adjusts the target UE's display settings based on the received request.
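A minimal Python sketch of the selective precision transition described above, assuming hypothetical capability and link-performance inputs; the field name, bitrate threshold, and downsampling step are illustrative and not taken from the embodiment:

```python
# Illustrative sketch only: the capability field, threshold, and downsampling
# strategy are assumptions, not part of the described embodiment.

def transition_precision(input_points, target_capabilities, link_kbps,
                         min_kbps=500, max_points=256):
    """Selectively reduce the precision of data representing physical user input
    (e.g., stylus or touch coordinates) before forwarding it to the target UE."""
    high_precision_ok = target_capabilities.get("high_precision_display", False)
    link_ok = link_kbps >= min_kbps

    if high_precision_ok and link_ok:
        return input_points  # keep the first (higher) level of precision

    # Transition to the second (lower) level of precision: keep every k-th
    # sample and round coordinates to whole pixels.
    k = max(1, len(input_points) // max_points)
    return [(round(x), round(y)) for x, y in input_points[::k]]
```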
Abstract:
Systems, methods, and devices process received media content to generate personalized media presentations on an endpoint device. Received media content may be buffered in a moving-window buffer and processed into tokens by parsing each next content element and, for each content element, identifying a speaker or actor, creating a text representation, and measuring perceptual properties such as pitch, timbre, volume, timing, and frame rate. The endpoint device may compare a segment of tokens within the buffered media content to a list of replacement subject matter within a user profile to determine whether the segment matches any of the replacement subject matter, and may identify substitute subject matter for the matched replacement subject matter. The endpoint device may then create a replacement sequence by modifying the substitute subject matter using the perceptual properties of the tokens in the segment, and render a personalized media presentation that includes the replacement sequence.
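A minimal Python sketch of the token matching and replacement flow, assuming an illustrative Token structure and user-profile layout that are not specified by the embodiment:

```python
# Illustrative sketch only: the Token fields, profile structure, and matching
# rule are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Token:
    speaker: str        # identified speaker or actor
    text: str           # text representation of the content element
    pitch: float        # measured perceptual properties
    volume: float
    duration_s: float

def personalize(segment, user_profile):
    """Compare a segment of tokens against the profile's replacement list and
    build a replacement sequence that reuses the segment's perceptual properties."""
    phrase = " ".join(t.text for t in segment).lower()
    for rule in user_profile["replacements"]:  # e.g. {"match": ..., "substitute": ...}
        if rule["match"].lower() in phrase:
            # Modify the substitute subject matter using the matched tokens'
            # pitch, volume, and timing so it blends into the presentation.
            return [Token(t.speaker, rule["substitute"], t.pitch, t.volume, t.duration_s)
                    for t in segment]
    return segment  # no rule matched; render the original tokens
```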
Abstract:
In an embodiment, a given user equipment (UE) in a local communication session (e.g., a P2P or ad-hoc session) between multiple UEs is designated to record session data. The given UE records the session data exchanged between the multiple UEs during the local communication session and uploads the recorded session data to a server after the local communication session has terminated. In another embodiment, a session controller (e.g., a remote server or a P2P node) receives multiple media feeds from multiple transmitting UEs and selectively interlaces subsets of the multiple media feeds into interlaced output feed(s) that are transmitted to target UE(s). The target UE(s) provide feedback that permits the session controller to determine a lowest relevant configuration (LRC) for the target UE(s), which is used to regulate the interlaced output feed(s) transmitted thereto.
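A minimal Python sketch of how a session controller might derive the lowest relevant configuration from target-UE feedback; the configuration ladder and feedback fields are assumptions for illustration:

```python
# Illustrative sketch only: the configuration ladder and feedback report fields
# are assumptions, not defined by the embodiment.

CONFIG_LADDER = [  # ordered from lowest to highest quality
    {"resolution": (640, 360),   "fps": 15, "kbps": 400},
    {"resolution": (1280, 720),  "fps": 30, "kbps": 1500},
    {"resolution": (1920, 1080), "fps": 30, "kbps": 4000},
]

def lowest_relevant_configuration(feedback_reports):
    """Pick the highest ladder entry that every reporting target UE can sustain,
    then use it to regulate the interlaced output feed."""
    if not feedback_reports:
        return CONFIG_LADDER[0]
    best_per_ue = []
    for report in feedback_reports:  # e.g. {"max_kbps": ..., "max_fps": ...}
        ok = [i for i, cfg in enumerate(CONFIG_LADDER)
              if cfg["kbps"] <= report["max_kbps"] and cfg["fps"] <= report["max_fps"]]
        best_per_ue.append(max(ok) if ok else 0)
    return CONFIG_LADDER[min(best_per_ue)]  # bounded by the weakest target UE
```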
Abstract:
In an embodiment, a UE participates in a communication session in which a video stream is shared with target UE(s). The UE receives user input that identifies high-priority portion(s) of the video stream. The UE generates a first video feed based on the high-priority portion(s) and a second video feed based at least on other portion(s) of the video stream. The first and second video feeds are exchanged with the target UE(s) on first and second links, respectively. In an example, the first link, which carries the first video feed, can be allocated QoS. The target UE(s) combine the first and second video feeds to reconstruct a version of the video stream and then present the reconstructed version.
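A minimal Python sketch of splitting a frame into the user-identified high-priority region and the remainder, then recombining the two feeds at the target UE; the rectangular-region format and NumPy frame representation are assumptions:

```python
# Illustrative sketch only: the region format (x, y, w, h) and the use of NumPy
# arrays for frames are assumptions made for this example.
import numpy as np

def split_frame(frame, roi):
    """roi = (x, y, w, h) identified by user input as the high-priority portion."""
    x, y, w, h = roi
    high_priority = frame[y:y + h, x:x + w].copy()  # first feed (QoS link)
    remainder = frame.copy()
    remainder[y:y + h, x:x + w] = 0                 # second feed (best-effort link)
    return high_priority, remainder

def reconstruct(high_priority, remainder, roi):
    """Target-UE side: overlay the high-priority region back onto the remainder."""
    x, y, w, h = roi
    frame = remainder.copy()
    frame[y:y + h, x:x + w] = high_priority
    return frame
```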