Abstract:
Example video data processing methods and apparatus are disclosed. One example method includes obtaining, by a server, viewport information. The server obtains spatial object information based on the viewport information, where the spatial object information is used to describe a specified spatial object in panoramic space. The server obtains a first bitstream, where the first bitstream is obtained by encoding image data in the specified spatial object. The server obtains a second bitstream, where the second bitstream is obtained by encoding image data in the panoramic space. The server transmits the first bitstream and the second bitstream to a client.
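The server-side flow this abstract describes can be sketched in Python as follows. This is a minimal illustration under assumed data shapes: the `SpatialObject` fields, the fixed angular margin, and the encoder callbacks are all hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SpatialObject:
    """Hypothetical description of a region of panoramic space:
    viewing-direction center (yaw, pitch) and angular extent, in degrees."""
    yaw: float
    pitch: float
    width: float
    height: float

def spatial_object_from_viewport(viewport, margin=10.0):
    """Derive the specified spatial object from viewport information.

    The region is made larger than the viewport itself (here by a fixed
    angular margin on each side) so that small head movements stay
    inside the high-quality area."""
    yaw, pitch, width, height = viewport
    return SpatialObject(yaw, pitch, width + 2 * margin, height + 2 * margin)

def serve(viewport, encode_region, encode_panorama):
    """Produce the two bitstreams the server transmits to the client."""
    obj = spatial_object_from_viewport(viewport)
    first_bitstream = encode_region(obj)    # image data in the spatial object
    second_bitstream = encode_panorama()    # image data of the panoramic space
    return first_bitstream, second_bitstream
```

A real server would run the two encodes continuously and re-derive the spatial object whenever updated viewport information arrives from the client.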
Abstract:
A method and an apparatus for processing video data. The method includes: parsing a media presentation description to obtain flag information, where the flag information is used to identify a first representation of a video, and a playing duration of a segment in the first representation is shorter than a playing duration of a segment in a second representation of the video; obtaining switching instruction information, where the switching instruction information is used to instruct to switch from a current spatial object to a target spatial object; determining a target representation from the first representation of the video based on the flag information and the switching instruction information, where the target representation corresponds to the target spatial object; and obtaining a current playing moment of the video, and obtaining a target representation segment based on the current playing moment and the target representation.
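One hypothetical reading of the selection step is sketched below in Python: pick the flagged (short-segment) representation that covers the target spatial object, then map the current playing moment to a segment number. The dictionary keys and the uniform segment-duration assumption are mine, not the patent's.

```python
def select_target_segment(representations, flagged_ids, target_object, playing_moment):
    """Determine the target representation and the segment to fetch.

    Assumes each representation is a dict carrying an id, the spatial
    object it covers, and a uniform segment playing duration in seconds.
    `flagged_ids` is the set of representation ids identified by the
    flag information parsed from the media presentation description.
    """
    candidates = [r for r in representations
                  if r["id"] in flagged_ids and r["object"] == target_object]
    if not candidates:
        raise LookupError("no flagged representation covers the target object")
    target = candidates[0]
    # Map the current playing moment onto a segment index.
    segment_number = int(playing_moment // target["segment_duration"])
    return target["id"], segment_number
```

Short segments in the flagged representations matter here because they let the client switch to the target spatial object at a nearby segment boundary instead of waiting out a long segment.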
Abstract:
Embodiments of the present invention provide a method for synchronous playback by multiple smart devices, and an apparatus. A first device acquires frame synchronization information at intervals of a preset time, and sends the frame synchronization information to one or more second devices, where the frame synchronization information is frame information of a frame to be played by the first device or frame information of a frame that the first device currently starts to play; and after receiving the frame synchronization information sent by the first device, the second device adjusts the frames that it plays. Because the multiple smart devices that perform synchronous playback are generally in one local area network, a transmission delay of the frame information from the first device to the second device can be ignored, thereby improving the synchronization effect among the multiple smart devices.
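The second device's adjustment step could be as simple as the following Python sketch. The tolerance value and the jump-to-master policy are illustrative assumptions, not details from the abstract.

```python
def adjust_playback_frame(local_frame, master_frame, tolerance=2):
    """Second-device adjustment on receiving frame synchronization info.

    If the local playback position is within `tolerance` frames of the
    first device's reported frame, leave it alone; otherwise jump to the
    reported frame. Over a local area network the report's transmission
    delay is assumed negligible, as the abstract notes.
    """
    if abs(master_frame - local_frame) <= tolerance:
        return local_frame
    return master_frame
```

A small tolerance avoids visible stutter from constant micro-corrections while still bounding the drift between devices.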
Abstract:
Disclosed are an encoding/decoding method, apparatus, and system. In an implementation, the encoding method includes: encoding video information, where the video information includes M frames, the M frames include a first frame, a second frame, and a third frame, the second frame refers to the first frame, and the third frame refers to the second frame or the first frame; storing the first frame, the second frame, and the third frame in a buffer to obtain candidates of a long-term reference frame; and selecting a subset from the candidates as a long-term reference frame based on a feedback signal.
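The buffering and feedback-driven selection might look like the following Python sketch. Treating the feedback signal as a set of acknowledged frame ids, and the buffer size, are assumptions made for illustration.

```python
def update_long_term_refs(buffer, frame_id, feedback_acked, max_candidates=3):
    """Maintain long-term reference candidates and select among them.

    `buffer` holds the ids of recently encoded frames (the candidates);
    `feedback_acked` is the set of frame ids the far end has confirmed
    decoding, e.g. over a feedback channel. The selected long-term
    references are the candidates known to be decodable at the far end.
    """
    buffer.append(frame_id)
    del buffer[:-max_candidates]    # keep only the newest candidates
    return [f for f in buffer if f in feedback_acked]
```

Selecting only acknowledged frames as long-term references is one way such a scheme can keep prediction chains recoverable after packet loss.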
Abstract:
Embodiments of this application provide an encoding method, a decoding method, and an electronic device. The method includes: obtaining a current frame; obtaining a reconstructed picture corresponding to a reference frame of the current frame from an external reference list of an encoder, where the reference frame of the current frame is a frame encoded by the encoder, and the external reference list is independent of the encoder; performing, by the encoder, intra coding on the reconstructed picture; and performing, by the encoder, inter coding on the current frame based on a result of the intra coding, to obtain a bitstream corresponding to the current frame. In this way, the encoder may flexibly select a reference frame from the external reference list for encoding, thereby implementing cross-frame reference or cross-resolution reference, and improving flexibility of reference frame management of the encoder.
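The external-reference-list idea can be sketched as below. The class shape is hypothetical, and the intra/inter steps are trivial stand-ins for a real codec; the point is only that the reference list lives outside the encoder, so any stored reconstruction can be re-injected as a reference.

```python
class ExternalRefEncoder:
    """Sketch of an encoder driven from an external reference list.

    Because the list is independent of the encoder, a reconstruction of
    any previously encoded frame can be chosen as the reference,
    enabling cross-frame or cross-resolution reference.
    """

    def __init__(self):
        self.external_refs = {}   # frame_id -> reconstructed picture

    def store_reconstruction(self, frame_id, reconstruction):
        self.external_refs[frame_id] = reconstruction

    def encode(self, current, ref_id):
        reference = self.external_refs[ref_id]
        # "Intra code" the stored reconstruction to rebuild encoder
        # state from it (stand-in: just copy the samples)...
        intra_ref = list(reference)
        # ...then "inter code" the current frame against that result
        # (stand-in: emit per-sample residuals as the bitstream).
        return [c - r for c, r in zip(current, intra_ref)]
```

A usage example: store the reconstruction of an earlier frame, then encode a later frame against it even though the encoder's own internal reference buffers no longer hold that frame.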
Abstract:
Example video data processing methods and apparatus are disclosed. One example method includes receiving, by a client, a first bitstream, where the first bitstream is obtained by encoding image data in a specified spatial object. The specified spatial object is part of panoramic space, and a size of the specified spatial object is larger than a size of a spatial object of the panoramic space corresponding to viewport information. The spatial object corresponding to the viewport information is located in the specified spatial object. The client receives a second bitstream, where the second bitstream is obtained by encoding image data of a panoramic image of the panoramic space at a lower resolution than a resolution of the image data in the specified spatial object. The client plays the first bitstream and the second bitstream.
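On the client side, playing both bitstreams typically means compositing the decoded high-resolution region over the decoded low-resolution panorama. A minimal Python sketch, assuming both decoded pictures are 2-D lists already in the same display coordinate space:

```python
def compose_frame(panorama, region, origin):
    """Overlay the decoded high-resolution region onto the decoded
    low-resolution panorama (assumed already upscaled to display size).

    Both pictures are 2-D lists of pixels; `origin` is the (row, col)
    of the region's top-left corner inside the panorama.
    """
    frame = [row[:] for row in panorama]    # copy; keep the backdrop intact
    r0, c0 = origin
    for i, row in enumerate(region):
        for j, pixel in enumerate(row):
            frame[r0 + i][c0 + j] = pixel
    return frame
```

Because the specified spatial object is larger than the viewport, the viewer sees high-resolution content even during small head movements, and the low-resolution panorama fills in only when the viewport leaves that region entirely.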
Abstract:
This application provides a media data transmission method and an apparatus. The method includes: A client sends first information to a server, where the first information is used to indicate spatial information of a region in which a first target video picture is located, and the first target video picture includes a video picture within a current viewport. The client receives video data packets that correspond to a second target video picture and that are sent by the server, where the second target video picture includes the first target video picture, at least one of the video data packets carries at least one piece of second information, and the second information is used to indicate spatial information of a region in which the second target video picture is located.
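A check the client might perform with the two pieces of spatial information is region containment: the second target video picture's region must contain the first's. A Python sketch, where the (x, y, width, height) tuple layout is an assumption for illustration:

```python
def region_contains(outer, inner):
    """True if the region described by `outer` fully contains `inner`.

    Regions are (x, y, width, height) tuples in picture coordinates,
    mirroring the spatial information carried by the second and first
    information respectively.
    """
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return (ox <= ix and oy <= iy
            and ix + iw <= ox + ow and iy + ih <= oy + oh)
```

Carrying the second information inside the video data packets lets the client place the received picture correctly without a separate metadata round trip.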
Abstract:
This application provides a video processing method and apparatus. The method includes: adding, by a server, perception attribute information of an object and spatial location information of the object to a video bitstream or a video file, and encapsulating the video bitstream or the video file, where the perception attribute information is used to indicate a property presented when the object is perceived by a user; and obtaining, by a terminal device, the video bitstream or the video file that carries the perception attribute information of the object and the spatial location information of the object, and performing perception rendering on a perception attribute of the object based on behavior of the user, the perception attribute information of the object, and the spatial location information of the object.
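One plausible form of perception rendering is attenuating the object's perception attribute with the user's distance from the object's spatial location. The linear falloff and the function shape below are illustrative assumptions, not the patent's method:

```python
import math

def perceived_intensity(user_pos, obj_pos, base_intensity, max_range):
    """Attenuate a perception attribute (e.g. a scent or touch strength
    carried in the perception attribute information) linearly with the
    user's distance from the object, reaching zero at `max_range`."""
    distance = math.dist(user_pos, obj_pos)
    if distance >= max_range:
        return 0.0
    return base_intensity * (1.0 - distance / max_range)
```

The terminal device would evaluate this per frame as the user's behavior (position, orientation) changes, using the spatial location information decoded from the bitstream or file.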
Abstract:
Embodiments of the present invention provide a streaming-technology based video data processing method and apparatus. The method includes: obtaining a media presentation description, where the media presentation description includes index information of video data; obtaining the video data based on the index information of the video data; obtaining tilt information of the video data; and processing the video data based on the tilt information of the video data. According to the video data processing method and apparatus in the embodiments of the present invention, information received by a client includes tilt information of video data, and the client may adjust a presentation manner for the video data based on the tilt information.
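One way the client could adjust presentation based on tilt information is to rotate display coordinates by the negative of the capture tilt angle so the picture appears level. A Python sketch of that compensation step (the single-angle model is an assumption for illustration):

```python
import math

def compensate_tilt(point, tilt_degrees):
    """Rotate a 2-D display coordinate by the negative of the capture
    tilt angle, so that the presented picture appears level."""
    t = math.radians(-tilt_degrees)
    x, y = point
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))
```

Because the tilt information travels alongside the index information in the media presentation description pipeline, the client can apply this adjustment before the first frame is presented.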