Abstract:
An information processing apparatus is provided for a system that generates a virtual viewpoint image based on image data obtained by imaging from a plurality of directions using a plurality of cameras. The information processing apparatus includes a determination unit configured to determine, from among a plurality of generation methods and depending on a situation of the imaging performed using the plurality of cameras, a generation method for generating a virtual viewpoint image using the image data, and an output unit configured to output data in accordance with a result of the determination performed by the determination unit.
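The determination unit described above can be sketched as a small selection function. This is an illustrative assumption, not the patented implementation: the method names, the use of camera coverage as the "situation of imaging", and the thresholds are all made up for this example.

```python
# Hypothetical sketch of a determination unit that picks a virtual-viewpoint
# generation method from the imaging situation. Method names and the coverage
# thresholds are illustrative assumptions, not taken from the source.

def choose_generation_method(num_active_cameras: int, total_cameras: int) -> str:
    """Pick a generation method based on how many cameras are usable."""
    coverage = num_active_cameras / total_cameras
    if coverage >= 0.9:
        # Dense coverage: full 3D model reconstruction is feasible.
        return "model-based"
    if coverage >= 0.5:
        # Partial coverage: blend images from nearby cameras instead.
        return "image-based"
    # Too few cameras: fall back to the closest physical camera view.
    return "nearest-camera"
```

The point of the sketch is only that the choice of method is a function of the imaging situation, with the rendered output then produced by whichever method was selected.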
Abstract:
Auxiliary or enhanced features associated with broadcast television programs are activated using carrier-based active text enhancement (CATE) signals embedded within timed text (TT) associated with the broadcast program. The active text enhancements can be interpreted by the viewer's set top box (STB) or other receiver to activate software applications, video clips, imagery, uniform resource locators (URLs), interactive interface features, or the like on either or both of primary and secondary displays. Timed text enhancements can flexibly reference different types of content to provide richer and more powerful experiences for the viewer.
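A receiver acting on such signals would scan each timed-text cue for embedded enhancement markers. The marker syntax below (`[CATE:<action>:<payload>]`) is purely an assumption invented for this sketch; the actual signal format is not specified in the abstract.

```python
# Illustrative sketch, not the patented signal format: scan a timed-text cue
# for embedded active-text-enhancement markers of an assumed shape
# "[CATE:<action>:<payload>]" and return the triggers found.
import re

CATE_RE = re.compile(r"\[CATE:(?P<action>\w+):(?P<payload>[^\]]+)\]")

def extract_enhancements(cue_text: str) -> list:
    """Return (action, payload) pairs found in one timed-text cue."""
    return [(m.group("action"), m.group("payload"))
            for m in CATE_RE.finditer(cue_text)]
```

An STB could then dispatch each `(action, payload)` pair, e.g. launching an application or loading a URL on a secondary display.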
Abstract:
Provided is an information processing apparatus that includes circuitry configured to receive a multiplexed image signal and sound signal from another information processing apparatus using a Moving Picture Experts Group (MPEG) 2 transport stream (TS). After requesting the other information processing apparatus to reduce the sound data amount, the circuitry performs control to cause reduction of the data contained in the packetized elementary stream (PES) payload of a PES packet packed in a transport stream (TS) packet, the TS packet being specified by a packet identifier (PID) specifying sound data transmission described in a program map table (PMT), and to extract a presentation time stamp (PTS) contained in the PES header portion of the PES packet.
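The PTS extraction step follows the standard MPEG-2 systems layout (ISO/IEC 13818-1): a 33-bit timestamp split across 5 bytes of the optional PES header, interleaved with marker bits. A minimal sketch, with error handling reduced to the essentials:

```python
# Hedged sketch of PTS extraction from a PES packet header, following the
# MPEG-2 systems bit layout (ISO/IEC 13818-1). Real demultiplexers would also
# validate markers, header length, and the DTS case.

def parse_pts(pes: bytes):
    """Return the 33-bit PTS from a PES packet, or None if no PTS is present."""
    if pes[0:3] != b"\x00\x00\x01":
        raise ValueError("not a PES packet (missing start code prefix)")
    pts_dts_flags = pes[7] >> 6          # 2-bit PTS_DTS_flags field
    if pts_dts_flags not in (0b10, 0b11):
        return None                      # this PES packet carries no PTS
    p = pes[9:14]                        # 5 bytes holding PTS[32..0] + markers
    return (((p[0] >> 1) & 0x07) << 30) | (p[1] << 22) | \
           ((p[2] >> 1) << 15) | (p[3] << 7) | (p[4] >> 1)
```

For example, a PES header encoding PTS = 90000 (one second at the 90 kHz system clock) decodes back to 90000.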
Abstract:
A system synchronizes one or more media streams, such as video streams or audio streams, by embedding frame identifiers in each compressed media stream and then using synchronizing signals to render each frame simultaneously by referencing the embedded frame identifier. Because the frame identifier is embedded between encoded frames of the compressed media stream without altering any of the compressed, encoded data, the frame identifier information can be embedded rapidly, without the lag associated with manipulating existing data. This technique can be used, for example, to synchronize a single HD video on a plurality of display devices (e.g., a football game on a wall of video monitors), or to synchronize a plurality of HD video streams on a single display device (e.g., a plurality of live video feeds on a single computer monitor).
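The key claim is that identifiers sit between frames, leaving the compressed frame bytes untouched. A minimal sketch of that interleaving, assuming a made-up 4-byte little-endian tag (the real identifier format is not specified in the abstract):

```python
# Illustrative sketch of interleaving frame identifiers *between* compressed
# frames rather than writing into them, so the encoded bytes are never
# modified. The 4-byte little-endian tag format is an assumption.
import struct

def embed_ids(frames: list) -> bytes:
    """Prefix each compressed frame with its frame identifier."""
    out = bytearray()
    for frame_id, frame in enumerate(frames):
        out += struct.pack("<I", frame_id)  # identifier sits between frames
        out += frame                        # compressed data left untouched
    return bytes(out)
```

Because each frame's bytes pass through unchanged, the operation is a cheap concatenation, which is what makes the embedding fast enough to avoid rendering lag.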
Abstract:
The present disclosure provides a data transmission method of a system in an IP-based broadcast network, the method comprising: generating MPEG media transport protocol (MMTP) packets using media processing units (MPUs) for a service; generating IP packets using the MMTP packets; generating layer-2 (L2) packets using the IP packets, and generating a layer-1 (L1) packet stream using the L2 packets; and transmitting the L1 packet stream, wherein absolute time information of the system is included in one of a transmission frame of the L1 packet stream and the L2 packets.
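The encapsulation chain (MPU → MMTP → IP → L2 → L1) can be sketched as successive wrapping, with the system's absolute time carried in the L1 transmission frame. All header layouts below are simplified placeholders, not the broadcast specification's formats.

```python
# Hedged sketch of the MPU -> MMTP -> IP -> L2 -> L1 encapsulation chain.
# Every header layout here is a simplified stand-in, not a spec format.
import struct
import time

def build_l1_frame(mpu_payload: bytes, packet_id: int) -> bytes:
    mmtp = struct.pack(">H", packet_id) + mpu_payload  # MMTP packet (toy header)
    ip = b"IPHDR" + mmtp                               # IP packet (toy header)
    l2 = struct.pack(">H", len(ip)) + ip               # L2 packet (length-prefixed)
    # Absolute time information of the system rides in the L1 transmission frame.
    abs_time = int(time.time())
    return struct.pack(">I", abs_time) + l2            # L1 transmission frame
```

The sketch shows only the layering order and where the absolute time is placed; a real transmitter would segment, schedule, and apply the physical-layer framing defined by the broadcast standard.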
Abstract:
A method of signaling individual layers in a transport stream is provided that includes: determining a plurality of layers in a transport stream, wherein each layer includes a respective transport stream parameter setting; determining an additional layer for the plurality of layers in the transport stream, wherein the additional layer enhances one or more of the plurality of layers, including a base layer, and the respective transport stream parameter settings for the plurality of layers do not take into account the additional layer; and determining an additional transport stream parameter setting for the additional layer, the additional transport stream parameter setting specifying a relationship between the additional layer and at least a portion of the plurality of layers, wherein the additional transport stream parameter setting is used to decode the additional layer and the at least a portion of the plurality of layers.
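The essential point is that the additional layer gets its own parameter setting recording its relationship to the existing layers, while the existing layers' settings are left untouched. A toy model of that bookkeeping (the dictionary structure and field names are assumptions for illustration):

```python
# Minimal sketch (structure and names are assumptions) of registering an
# additional layer whose own parameter setting records which existing layers
# it enhances, without modifying the existing layers' settings.

def add_layer(settings: dict, new_layer: str, enhances: list) -> dict:
    """Register an additional layer; existing settings stay untouched."""
    unknown = [d for d in enhances if d not in settings]
    if unknown:
        raise ValueError(f"additional layer references unknown layers: {unknown}")
    settings[new_layer] = {"enhances": list(enhances)}
    return settings

# Usage: a base layer and one enhancement layer already signaled; an
# additional layer "enh2" is added that enhances both.
layers = {"base": {"enhances": []}, "enh1": {"enhances": ["base"]}}
add_layer(layers, "enh2", ["base", "enh1"])
```

A decoder handling `enh2` would consult its setting to learn it must also decode `base` and `enh1`, mirroring the abstract's decoding relationship.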
Abstract:
When layered coded content is transmitted over a fixed-capacity network link, bitrate peaks may occur at similar time instances in the base layer and the enhancement layer. To use the bandwidth more efficiently, the present principles propose different methods, such as adding a delay to a base layer bit stream or an enhancement layer bit stream, and shifting an "over-the-limit" portion of bits by a time window. At the receiver side, the present principles provide different channel change mechanisms that allow a user to change channels quickly even given the delay added to the bit streams. In particular, a decoder can start rendering the base layer content without having to wait for the enhancement layer to be available. In one embodiment, the decoding of the base layer content is slowed down in order to align in time with the enhancement layer content.
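The "over-the-limit" shifting idea can be sketched as carrying excess bits forward into later time windows so that no window exceeds the link capacity. The per-window accounting below is an illustrative simplification, not the patented scheduler:

```python
# Illustrative sketch of shifting the "over-the-limit" portion of bits to
# later time windows. Per-window bit counts and the capacity value are
# assumptions; a real system would bound the added delay.

def smooth_bitrate(bits_per_window: list, capacity: int) -> list:
    """Carry excess bits forward so no window exceeds the link capacity."""
    out, carry = [], 0
    for bits in bits_per_window:
        total = bits + carry
        sent = min(total, capacity)
        carry = total - sent  # over-the-limit portion shifted to next window
        out.append(sent)
    return out + ([carry] if carry else [])
```

The carried-forward bits are exactly the delay the receiver-side channel-change mechanisms must tolerate, which is why the decoder is allowed to start on the base layer alone.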
Abstract:
Systems and methods are described for simplifying the sub-bitstream extraction and the rewriting process. In an exemplary method, a video is encoded as a multi-layer scalable bitstream including at least a base layer and a first non-base layer. The bitstream is subject to the constraint that the image slice segments in the first non-base layer each refer to a picture parameter set in the base layer. Additional constraints and extra high level syntax elements are also described. Embodiments are directed to (i) constraints on the output layer set for sub-bitstream extraction process; (ii) VPS generation for the sub-bitstream extraction process; and (iii) SPS/PPS generation for the sub-bitstream extraction process.
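The bitstream constraint in this abstract is checkable in isolation: every slice segment of the first non-base layer must reference a picture parameter set (PPS) carried in the base layer. A toy validity check, with data structures simplified to ID sets (an assumption for illustration):

```python
# Hedged sketch of the constraint described above: all slice segments in the
# first non-base layer must refer to a PPS signaled in the base layer.
# Representing layers as collections of PPS IDs is a simplification.

def satisfies_constraint(base_layer_pps_ids: set,
                         nonbase_slice_pps_refs: list) -> bool:
    """True iff every non-base-layer slice refers to a base-layer PPS."""
    return all(ref in base_layer_pps_ids for ref in nonbase_slice_pps_refs)
```

Under this constraint, sub-bitstream extraction can drop the non-base layer without rewriting parameter sets, since nothing the remaining layers need lives outside the base layer.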