Abstract:
Techniques are described for synchronous capture and playback of 4D content. A 4D synchronous capture system retrieves sensor readings that describe environmental parameters at the point in time at which the sensor readings were captured. Based on the sensor readings, the system generates 4D content data, which digitally represents the environmental parameters captured by the sensor readings. The system synchronizes the 4D content data with concurrently captured audio-visual content data and causes the concurrently captured 4D content data and the audio-visual content data to be associated with the same timing information. Based on this time synchronization, the 4D content data reproduces the environmental parameters in sync with the playback of the audio-visual content, in an embodiment.
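A minimal sketch of the timing idea described above, not the patent's implementation: sensor readings are restamped onto the same timeline as the audio-visual content, and playback selects the 4D samples matching the currently presented AV timestamp. All names (SensorReading, FourDSample, the 40 ms tolerance) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorReading:
    captured_at: float      # capture time in seconds on a shared clock
    parameter: str          # e.g. "temperature", "wind", "vibration"
    value: float

@dataclass
class FourDSample:
    pts: float              # presentation timestamp shared with the AV stream
    parameter: str
    value: float

def generate_4d_content(readings: List[SensorReading],
                        av_epoch: float) -> List[FourDSample]:
    """Convert raw sensor readings into 4D content data stamped on the
    same timeline as the concurrently captured audio-visual content."""
    return [FourDSample(pts=r.captured_at - av_epoch,
                        parameter=r.parameter,
                        value=r.value)
            for r in readings]

def samples_for_frame(samples: List[FourDSample], current_av_pts: float,
                      tolerance: float = 0.040) -> List[FourDSample]:
    """Return the 4D samples to reproduce (e.g. by fans, heaters, motion
    seats) for the AV frame currently being presented."""
    return [s for s in samples if abs(s.pts - current_av_pts) <= tolerance]
```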
Abstract:
A system and method for effectuating channel changes in a multicast adaptive bitrate (MABR) streaming network, using a dedicated bandwidth pipe to download a requested channel's data as a recovery segment by issuing an HTTP request. A video management agent is configured to stitch the recovery segment's data with the regular channel stream during the channel change to generate a hybrid stream, which is multicast streamed toward the requesting device. Once the data from the regular channel stream is properly joined, recovery segment downloading ceases and the bandwidth consumed for sending recovery data in the dedicated bandwidth pipe is released.
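A rough sketch of the hybrid-stream idea, under assumed names and interfaces: unicast recovery segments fetched over HTTP bridge the gap until the regular multicast channel stream is joined, after which recovery downloading stops and its bandwidth is no longer consumed.

```python
import urllib.request
from typing import Iterator, Optional

def fetch_recovery_segment(url: str) -> bytes:
    """Download one recovery segment over the dedicated bandwidth pipe
    (plain HTTP GET here; a real agent would request a specific bitrate)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def hybrid_stream(recovery_urls: Iterator[str],
                  multicast_segments: Iterator[Optional[bytes]]) -> Iterator[bytes]:
    """Yield recovery data until the regular multicast stream yields usable
    segments (None models 'not yet joined'), then switch over and stop
    consuming recovery bandwidth."""
    for url, mc in zip(recovery_urls, multicast_segments):
        if mc is not None:          # multicast join completed; stream is usable
            yield mc
            break
        yield fetch_recovery_segment(url)
    # after the switch-over, only the regular multicast stream is forwarded
    for mc in multicast_segments:
        if mc is not None:
            yield mc
```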
Abstract:
A packet-based video network comprises a plurality of packetized video data nodes acting as packetized video data sources and/or packetized video data destinations; a packet switch configured to provide at least two selectable video packet routes amongst the plurality of nodes and to switch from one of the video packet routes to another of the video packet routes at a switching operation; and a video synchroniser configured to synchronise the video frame periods of at least those nodes acting as packetized video data sources; in which: each node acting as a packetized video data source is configured to launch onto the network packetized video data such that, for at least those video frame periods adjacent to a switching operation: the node launches onto the network packetized video data required for decoding that frame during a predetermined active video data portion of the video frame period, and the node does not launch onto the network packetized video data required for decoding that frame during a predetermined remaining portion of the video frame period; and the network is configured so that a switching operation from one of the video packet routes to another of the video packet routes is implemented during a time period corresponding to the predetermined remaining portion.
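A toy model, with assumed timing parameters, of the constraint described above: sources launch the packets needed to decode a frame only during the active portion of each synchronised frame period, leaving a quiet remaining portion in which the switch may change routes without losing decodable data.

```python
FRAME_PERIOD = 1.0 / 50          # e.g. a 20 ms frame period (50 fps), assumed
ACTIVE_FRACTION = 0.9            # packets launched only in the first 90 % of the period

def may_launch(now: float) -> bool:
    """Source-side check: is 'now' within the predetermined active portion?"""
    phase = now % FRAME_PERIOD
    return phase < ACTIVE_FRACTION * FRAME_PERIOD

def may_switch(now: float) -> bool:
    """Switch-side check: route changes occur only in the remaining portion."""
    return not may_launch(now)
```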
Abstract:
A video stream provider for providing an output video stream is presented. The video stream provider comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the video stream provider to: receive a first video stream comprising a plurality of video frames, the first video stream being a main video stream; receive a second video stream comprising a plurality of video frames, wherein the video frames of the second video stream correspond to the video frames of the first video stream, the second video stream being a complementary video stream; determine a corrupted video frame of the main video stream; replace the corrupted video frame with a corresponding video frame from the complementary video stream to generate an output video stream; and output the output video stream.
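A minimal sketch of the replacement step, assuming the two streams are already decoded and frame-aligned by index; `is_corrupted` stands in for whatever corruption detection (decoder error flags, checksums) the provider uses.

```python
from typing import Callable, List, TypeVar

Frame = TypeVar("Frame")

def build_output_stream(main: List[Frame],
                        complementary: List[Frame],
                        is_corrupted: Callable[[Frame], bool]) -> List[Frame]:
    """For every corrupted frame of the main stream, take the corresponding
    frame from the complementary stream; otherwise keep the main frame."""
    assert len(main) == len(complementary), "streams must be frame-aligned"
    return [comp if is_corrupted(m) else m
            for m, comp in zip(main, complementary)]
```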
Abstract:
A broadcast reception device includes: a broadcast interface that receives a service including a program and signaling information, the signaling information including media time information of the program being played back; a companion screen interface that discovers a companion screen device; and a controller that operates the broadcast interface and the companion screen interface, wherein the controller includes a time synchronization service processor that generates, based on the signaling information, service time information providing data related to time synchronization between the program and a program displayed on the companion screen device, and the companion screen interface delivers the service time information to the companion screen device.
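An illustrative sketch (all names and the JSON shape are assumptions) of such a time-synchronization service processor: it derives service time information from the media time carried in the signaling information and hands it to the companion screen interface for delivery.

```python
import json
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class SignalingInfo:
    program_id: str
    media_time: float        # current media time of the program, in seconds

def make_service_time_info(sig: SignalingInfo) -> str:
    """Build the service time information used to synchronize the program on
    the receiver with the program displayed on the companion screen device."""
    return json.dumps({
        "programId": sig.program_id,
        "mediaTime": sig.media_time,
        "wallClock": time.time(),   # lets the companion compensate for delivery delay
    })

def deliver_to_companion(send: Callable[[str], None], sig: SignalingInfo) -> None:
    """'send' abstracts the companion screen interface (e.g. a WebSocket send)."""
    send(make_service_time_info(sig))
```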
Abstract:
The present invention provides a method for providing a broadcast service, the method including: when the broadcast service is provided through at least two networks, obtaining the maximum value among the fixed end-to-end delays of the respective networks; and controlling, based on the maximum value, the output time of a receiver that has received packets of the broadcast service.
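A small sketch of the delay-equalization idea, with assumed names: the receiver holds each packet until the common output time, defined by the maximum fixed end-to-end delay, has elapsed.

```python
from typing import Dict

def max_fixed_delay(delays_by_network: Dict[str, float]) -> float:
    """Return the maximum fixed end-to-end delay (seconds) among the networks."""
    return max(delays_by_network.values())

def output_time(send_time: float, delays_by_network: Dict[str, float]) -> float:
    """A packet sent at 'send_time' is released by the receiver at
    send_time + max delay, so packets of the same broadcast service arriving
    over different networks are output at the same instant."""
    return send_time + max_fixed_delay(delays_by_network)
```

For example, with assumed fixed delays of 0.25 s over a broadcast network and 0.60 s over a broadband network, a packet sent at t = 10.0 s would be output at t = 10.60 s regardless of which network delivered it.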
Abstract:
A video synchronous playback system includes: a mobile terminal, a personal computer (PC), an encoding server, a streaming server, and a playback device, where the mobile terminal is configured to capture a currently displayed frame of a played video to obtain a first image, perform bitmap scaling processing on the first image to obtain a second image, perform image compression processing on the second image to obtain a third image, and send the third image to the encoding server by using the PC; and the encoding server is configured to restore the third image into a bitmap image to obtain a fourth image, perform bitmap scaling processing on the fourth image to obtain a fifth image, perform format conversion, encoding processing, and encapsulation on the fifth image to obtain a video stream, and send the video stream to a target playback device by using the streaming server.
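A simplified sketch of the image path described above, using Pillow as a stand-in for the terminal-side and server-side image processing; the capture, the PC relay, and the final video encoding/encapsulation steps are only indicated by comments, and the chosen resolutions and JPEG quality are assumptions.

```python
import io
from PIL import Image

def terminal_side(first_image: Image.Image) -> bytes:
    """Mobile terminal: bitmap-scale the captured frame, then compress it."""
    second_image = first_image.resize((640, 360))          # bitmap scaling
    buf = io.BytesIO()
    second_image.save(buf, format="JPEG", quality=70)      # image compression
    return buf.getvalue()                                  # third image, sent via the PC

def encoding_server_side(third_image: bytes) -> Image.Image:
    """Encoding server: restore the bitmap and scale it to the encoder's size.
    Format conversion, encoding into a video stream, and encapsulation would
    follow (e.g. feeding fifth_image to an H.264 encoder), omitted here."""
    fourth_image = Image.open(io.BytesIO(third_image)).convert("RGB")
    fifth_image = fourth_image.resize((1280, 720))         # bitmap scaling
    return fifth_image
```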
Abstract:
A method (400) for synchronizing playback of a program including a video and associated first audio at a first electronic device with playback of a second audio associated with the program at a second electronic device that also receives the first audio, the method comprising: decoding (405), by a first audio decoder in the second electronic device, the first audio, and outputting the decoded first audio; decoding (410), by a second audio decoder in the second electronic device, the second audio and outputting the decoded second audio for playing back by the second electronic device; receiving (415) a user command to synchronize the playback of the video at the first electronic device and playback of the second audio at the second electronic device; responsive to the user command, the method further comprising capturing (505), by a capturing device in the second electronic device, the playback of the first audio at the first electronic device; determining (510), by the second electronic device, an offset between the outputted decoded first audio and the captured first audio; and adjusting (520) outputting of the decoded second audio according to the offset, so that the playback of the first audio at the first electronic device is synchronized with the playback of the second audio at the second electronic device.
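A schematic example of the offset determination and adjustment steps, assuming both audio signals are mono float arrays at the same sample rate; cross-correlation is one plausible way to compare the outputted decoded first audio with the captured first audio, not the method mandated above.

```python
import numpy as np

def estimate_offset(decoded_first: np.ndarray,
                    captured_first: np.ndarray,
                    sample_rate: int) -> float:
    """Return the lag (seconds) by which the captured first audio trails the
    locally decoded first audio."""
    corr = np.correlate(captured_first, decoded_first, mode="full")
    lag_samples = int(corr.argmax()) - (len(decoded_first) - 1)
    return lag_samples / sample_rate

def adjust_second_audio(second_audio: np.ndarray,
                        offset_seconds: float,
                        sample_rate: int) -> np.ndarray:
    """Delay (or advance) the second audio so its playback lines up with the
    first audio heard from the first electronic device."""
    shift = int(round(offset_seconds * sample_rate))
    if shift > 0:
        return np.concatenate([np.zeros(shift), second_audio])
    return second_audio[-shift:] if shift < 0 else second_audio
```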