Abstract:
Embodiments herein provide mechanisms for viewport dependent adaptive streaming of point cloud content. For example, a user equipment (UE) may receive a media presentation description (MPD) for point cloud content in a dynamic adaptive streaming over hypertext transfer protocol (DASH) format. The MPD may include viewport information for a plurality of recommended viewports and indicate individual adaptation sets of the point cloud content that are associated with the respective recommended viewports. The UE may select a first viewport from the plurality of recommended viewports (e.g., based on viewport data that indicates a current viewport of the user and/or a user-selected viewport). The UE may request one or more representations of a first adaptation set, of the adaptation sets, that corresponds to the first viewport. Other embodiments may be described and claimed.
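The selection flow described above (receive MPD, match the user's viewport against the recommended viewports, request the corresponding adaptation set) can be sketched as follows. This is an illustrative assumption about structure only: the `mpd` dictionary, its field names, and the azimuth-distance matching rule are hypothetical, not the actual DASH MPD schema.

```python
# Hypothetical sketch of viewport-dependent adaptation-set selection.
# The MPD structure and field names below are illustrative assumptions,
# not the actual DASH MPD schema.

def select_adaptation_set(mpd, current_viewport):
    """Pick the adaptation set whose recommended viewport is closest
    to the user's current viewport (here: by center-azimuth distance)."""
    best = None
    best_dist = float("inf")
    for aset in mpd["adaptation_sets"]:
        vp = aset["recommended_viewport"]
        dist = abs(vp["azimuth"] - current_viewport["azimuth"])
        if dist < best_dist:
            best_dist = dist
            best = aset
    return best

# Toy MPD with two recommended viewports; the UE's current viewport
# (azimuth 75°) is closest to the second one.
mpd = {
    "adaptation_sets": [
        {"id": 1, "recommended_viewport": {"azimuth": 0.0}},
        {"id": 2, "recommended_viewport": {"azimuth": 90.0}},
    ]
}
chosen = select_adaptation_set(mpd, {"azimuth": 75.0})
```

In a real client, the chosen adaptation set's representations would then be requested segment by segment over HTTP.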
Abstract:
An apparatus and system to provide QoE metrics reporting mechanisms for RTP-based 360-degree video delivery in live immersive streaming and real-time immersive conversational service applications are described, for both in-camera and network-based stitching. Initial and desired viewport parameters for a teleconference are exchanged, and the teleconference is established using 360° media. RTP FoV reports sent during the teleconference each contain viewport orientation information, as well as information for the QoE metrics.
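A minimal sketch of what such a FoV report payload might carry is shown below. The byte layout (centi-degree yaw/pitch/roll as signed 16-bit fields plus one QoE metric byte) is a hypothetical illustration, not the actual RTP extension format defined by the embodiments.

```python
# Hypothetical encoding of an RTP FoV report carrying viewport orientation
# (yaw/pitch/roll) plus one QoE metric; the field layout is an illustrative
# assumption, not the actual RTP header-extension format.
import struct

def pack_fov_report(yaw, pitch, roll, quality_metric):
    # Angles in centi-degrees as signed 16-bit ints (network byte order),
    # followed by the metric as an unsigned byte.
    return struct.pack("!hhhB",
                       int(yaw * 100), int(pitch * 100), int(roll * 100),
                       quality_metric)

def unpack_fov_report(payload):
    yaw, pitch, roll, q = struct.unpack("!hhhB", payload)
    return yaw / 100, pitch / 100, roll / 100, q

report = pack_fov_report(12.5, -3.0, 0.0, 87)
fields = unpack_fov_report(report)
```

Centi-degree fixed-point keeps each angle in two bytes while covering the full ±180°/±90° orientation range.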
Abstract:
Various embodiments herein provide techniques for Session Description Protocol (SDP)-based signaling of camera calibration parameters for multiple video streams. In embodiments, a device may receive an SDP attribute to indicate that a bitstream included in a real-time transport protocol (RTP)-based media stream includes camera calibration parameters. The device may obtain the camera calibration parameters based on the SDP attribute, and process the RTP-based media stream based on the camera calibration parameters. In embodiments, the camera calibration parameters may be used to stitch together (e.g., align and/or synchronize) the multiple video streams. In embodiments, the stitched video streams may form an immersive video content (e.g., 360-degree video content). Other embodiments may be described and claimed.
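As a rough illustration of SDP-based signaling of calibration parameters, the sketch below parses a hypothetical `a=camcalib` attribute from an SDP body. The attribute name and its key=value syntax are assumptions for illustration; the embodiments do not necessarily define this exact attribute.

```python
# Illustrative parser for a hypothetical SDP attribute ("a=camcalib")
# carrying camera calibration parameters. The attribute name and its
# key=value;key=value syntax are assumptions, not a defined SDP attribute.

def parse_camcalib(sdp_text):
    params = {}
    for line in sdp_text.splitlines():
        if line.startswith("a=camcalib:"):
            # Everything after the first ':' is a ';'-separated key=value list.
            for kv in line.split(":", 1)[1].split(";"):
                key, _, value = kv.strip().partition("=")
                params[key] = float(value)
    return params

sdp = """v=0
m=video 49170 RTP/AVP 96
a=camcalib:focal=1.25;cx=640;cy=360
"""
calib = parse_camcalib(sdp)
```

A receiver would feed parameters like these into its stitching pipeline to align the multiple incoming video streams.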
Abstract:
Various embodiments herein provide adaptive streaming mechanisms for distribution of point cloud content. The point cloud content may include immersive media content in a dynamic adaptive streaming over hypertext transfer protocol (DASH) format. Various embodiments provide DASH-based mechanisms to support viewport indication during streaming of volumetric point cloud content. Other embodiments may be described and claimed.
Abstract:
Systems, apparatuses, methods, and computer-readable media are provided for negotiating Radio Access Network (RAN)-level capabilities toward improving end-to-end quality of Internet Protocol Multimedia Subsystem (IMS) communication sessions, such as Voice over Long-Term Evolution (VoLTE) calls. Disclosed embodiments include Session Description Protocol-based mechanisms to signal the RAN-level capabilities. The RAN-level capabilities may include, for example, delay budget information signaling, Transmission Time Interval bundling, RAN frame aggregation, RAN-assisted codec adaptation or access network bitrate recommendation, and/or other like capabilities. Other embodiments may be described and/or claimed.
Abstract:
Devices and methods for video decoding with application layer forward error correction in a wireless device are generally described herein. In some methods, a partial source symbol block is received that includes at least one encoded source symbol representing an original video frame. In such methods, the at least one encoded source symbol is systematic; it is decoded to recover a video frame, and the recovered video frame is provided to a video decoder that generates a portion of the original video signal.
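The idea of recovering a source symbol from a partial block can be illustrated with the simplest possible repair code: a single XOR parity symbol. This is a deliberately minimal sketch; real application-layer FEC systems use systematic fountain codes such as Raptor/RaptorQ, not plain XOR parity.

```python
# Minimal sketch of application-layer FEC recovery from a partial source
# symbol block, using one XOR repair symbol. Real systems use systematic
# Raptor/RaptorQ codes; this only illustrates recovering one lost symbol.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_repair(symbols):
    """Repair symbol = XOR of all equal-length source symbols."""
    repair = bytes(len(symbols[0]))
    for s in symbols:
        repair = xor_bytes(repair, s)
    return repair

def recover(partial, repair):
    """partial: source symbol list with exactly one None (the lost symbol).
    XOR-ing the repair symbol with all received symbols yields the loss."""
    missing = partial.index(None)
    acc = repair
    for i, s in enumerate(partial):
        if i != missing:
            acc = xor_bytes(acc, s)
    out = list(partial)
    out[missing] = acc
    return out

# Three toy "frames"; frameB is lost in transit and then recovered.
frames = [b"frameA", b"frameB", b"frameC"]
repair = make_repair(frames)
recovered = recover([frames[0], None, frames[2]], repair)
```

Because the code is systematic, received source symbols are usable directly; the repair path runs only for the missing one.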
Abstract:
Embodiments provide methods, systems, and apparatuses for multicast broadcast multimedia service (MBMS)-assisted content distribution in a wireless communication network. A proxy terminal may include an MBMS access client configured to receive and cache an MBMS transmission including media data and metadata. The proxy terminal may further include a hypertext transfer protocol (HTTP) server module configured to transmit at least a portion of the media data to a user equipment (UE) of the wireless communication network via an HTTP transmission. The media data and metadata may be in a dynamic adaptive streaming over HTTP (DASH) format. The proxy terminal may be included in an evolved Node B (eNB), the UE, or another UE of the wireless communication network.
Abstract:
Capability exchange signaling techniques provide orientation sensor information from UEs to network servers. The orientation sensor information describes a device's support for orientation sensor capabilities, or a current orientation state of the device. Based on such information, a multimedia server provides different encoded versions of multimedia content for different device orientation modes supported by the device. The server may also adapt, dynamically and in real time, media capture or transcode parameters to create content tailored (i.e., optimized) for the device's current orientation mode, or for its various intermediate orientation states and spatial positions.
Abstract:
Precoding parameters used for precoding of a source are selected to minimize distortion that would otherwise be induced in the source during encoding and transmission of the source over a multiple input multiple output (MIMO) channel.