VIDEO ENHANCEMENT METHOD AND APPARATUS

    Publication No.: US20250039476A1

    Publication Date: 2025-01-30

    Application No.: US18916139

    Application Date: 2024-10-15

    Abstract: The present disclosure discloses a video enhancement method and apparatus. The method includes: segmenting a target video into a plurality of groups of images, the images in the same group belonging to the same scene; determining, for each group of images, a matched video enhancement algorithm using a pre-trained quality assessment model, and performing video enhancement processing on each group of images using the matched video enhancement algorithm; and sequentially splicing the video enhancement processing results of all groups of images to obtain video enhancement data of the target video. With the present disclosure, both the video enhancement effect and the video viewing experience can be improved.
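    A minimal Python sketch of the per-scene flow described in the abstract, not the patented implementation: scene_split, assess_quality, and the entries in ENHANCERS are hypothetical stand-ins for the scene segmentation step, the pre-trained quality assessment model, and the matched enhancement algorithms.

```python
import numpy as np

def scene_split(frames, threshold=30.0):
    """Group consecutive frames into scenes with a simple mean-absolute-difference cut."""
    groups, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if np.mean(np.abs(cur.astype(np.float32) - prev.astype(np.float32))) > threshold:
            groups.append(current)
            current = []
        current.append(cur)
    groups.append(current)
    return groups

def assess_quality(group):
    """Stand-in for the pre-trained quality assessment model: picks an algorithm label per group."""
    noise = float(np.mean([frame.std() for frame in group]))
    return "denoise" if noise > 60.0 else "sharpen"

ENHANCERS = {
    "denoise": lambda frame: frame,  # hypothetical denoising algorithm
    "sharpen": lambda frame: frame,  # hypothetical sharpening algorithm
}

def enhance_video(frames):
    """Enhance each scene with its matched algorithm and splice the results in order."""
    out = []
    for group in scene_split(frames):
        algorithm = ENHANCERS[assess_quality(group)]
        out.extend(algorithm(frame) for frame in group)
    return out

# Example with random 8-bit frames
frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
enhanced = enhance_video(frames)
```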

    METHOD AND APPARATUS FOR GENERATING VIDEO INTERMEDIATE FRAME

    Publication No.: US20240428371A1

    Publication Date: 2024-12-26

    Application No.: US18658694

    Application Date: 2024-05-08

    Abstract: A method for generating a video intermediate frame includes performing a warp operation on a plurality of target video frames, based on a bidirectional optical flow between the plurality of target video frames, to obtain a plurality of pictures; determining a similarity between sub-pictures corresponding to image sub-regions in the plurality of pictures; predicting, based on the similarity, a network depth of a frame synthesis network matched with a corresponding image sub-region, the network depth increasing as the similarity decreases; performing synthetic processing, using the frame synthesis network, on the corresponding sub-pictures of each image sub-region, based on the network depth matched with that image sub-region, to obtain a plurality of images; and splicing the plurality of images, based on the image sub-regions, to obtain intermediate frames of the plurality of target video frames.
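    The following Python sketch illustrates the general idea under stated assumptions; warp, synthesize, and the similarity-to-depth thresholds are hypothetical placeholders rather than the disclosed frame synthesis network.

```python
import numpy as np

def warp(frame, flow):
    """Hypothetical warp along the given optical flow; identity placeholder here."""
    return frame

def region_similarity(a, b):
    """Negative mean absolute difference as a crude similarity score (higher = more similar)."""
    return -float(np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32))))

def depth_for_similarity(sim, depths=(1, 2, 4)):
    """Lower similarity -> deeper synthesis network for that sub-region."""
    if sim > -5.0:
        return depths[0]
    if sim > -20.0:
        return depths[1]
    return depths[2]

def synthesize(sub_a, sub_b, depth):
    """Stand-in for the frame synthesis network; depth mimics extra refinement stages."""
    af, bf = sub_a.astype(np.float32), sub_b.astype(np.float32)
    out = 0.5 * (af + bf)
    for _ in range(depth - 1):
        out = 0.5 * (out + 0.5 * (af + bf))  # dummy refinement pass
    return out.astype(sub_a.dtype)

def interpolate(frame0, frame1, flow01, flow10, tile=64):
    """Warp both frames, pick a per-tile depth from similarity, synthesize, and splice the tiles."""
    warped0, warped1 = warp(frame0, flow01), warp(frame1, flow10)
    h, w = frame0.shape[:2]
    mid = np.zeros_like(frame0)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            a = warped0[y:y + tile, x:x + tile]
            b = warped1[y:y + tile, x:x + tile]
            depth = depth_for_similarity(region_similarity(a, b))
            mid[y:y + tile, x:x + tile] = synthesize(a, b, depth)
    return mid

frame0 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
frame1 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
middle = interpolate(frame0, frame1, flow01=None, flow10=None)
```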

    SIGNAL TRANSFORMING METHOD AND DEVICE

    Publication No.: US20180242021A1

    Publication Date: 2018-08-23

    Application No.: US15955411

    Application Date: 2018-04-17

    Abstract: Provided are a signal transforming method and a signal transforming device. For example, the signal transforming method includes: determining a minimum-value matrix and a maximum-value matrix with respect to the elements of a matrix used in frequency transformation, wherein the minimum-value matrix is composed of minimum-value elements and the maximum-value matrix is composed of maximum-value elements; determining a maximum threshold value for the result value of a function indicating at least one selected from transform distortion, normalization, and orthogonality of the matrix; determining a transform matrix whose elements are greater than the elements of the minimum-value matrix and less than the elements of the maximum-value matrix at the respective positions of the matrix, and for which the result value of the function is less than the maximum threshold value; and transforming an input signal by using the determined transform matrix.
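    A hedged sketch of the kind of bounded search the abstract describes, assuming a toy cost function that mixes deviation from a reference transform with loss of orthogonality; find_transform and its parameters are illustrative, not the claimed procedure.

```python
import numpy as np

def cost(candidate, reference):
    """Toy cost combining deviation from the reference transform and loss of orthogonality."""
    ortho_err = np.linalg.norm(candidate @ candidate.T - np.eye(candidate.shape[0]))
    return float(np.linalg.norm(candidate - reference) + ortho_err)

def find_transform(reference, delta=0.05, max_cost=0.5, trials=1000, seed=0):
    """Randomly sample candidates within elementwise [min, max] bounds around the
    reference and keep the best one whose cost stays below max_cost."""
    rng = np.random.default_rng(seed)
    low, high = reference - delta, reference + delta  # minimum-/maximum-value matrices
    best, best_cost = None, np.inf
    for _ in range(trials):
        cand = rng.uniform(low, high)
        c = cost(cand, reference)
        if c < best_cost:
            best, best_cost = cand, c
    if best_cost < max_cost:
        return best, best_cost
    return reference, cost(reference, reference)  # fall back to the reference transform

# Example: perturb a 4x4 orthonormal DCT-II basis, then transform a signal with it
n = 4
k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dct = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
dct[0] /= np.sqrt(2.0)
matrix, matrix_cost = find_transform(dct)
transformed = matrix @ np.array([1.0, 2.0, 3.0, 4.0])
```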

    METHOD FOR OBTAINING MOTION INFORMATION

    Publication No.: US20160134886A1

    Publication Date: 2016-05-12

    Application No.: US14898291

    Application Date: 2014-06-13

    Inventors: Jie CHEN; Il-koo KIM

    CPC classification number: H04N19/52 H04N19/136 H04N19/176 H04N19/44 H04N19/503

    Abstract: Provided is a method for obtaining motion information in video encoding/decoding, which includes calculating a first predictor of a motion vector of a current block/sub-block according to a motion vector of each reference block in a first reference block set of the current block/sub-block; determining a first motion vector difference between a motion vector of each reference block in a second reference block set of the current block/sub-block and a first predictor of the motion vector of the reference block in the second reference block set of the current block/sub-block; predicting a first motion vector difference between the motion vector of the current block/sub-block and the first predictor of the motion vector of the current block/sub-block according to the first motion vector difference of each reference block in the second reference block set to obtain a predictor of the first motion vector difference of the current block/sub-block; and determining a second predictor of the motion vector of the current block/sub-block according to the predictor of the first motion vector difference of the current block/sub-block and the first predictor of the motion vector of the current block/sub-block. The method makes it possible to improve encoding/decoding performance.

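    A minimal numeric sketch of the predictor chain described in the abstract, using hypothetical helper names (first_predictor, predict_mvd, second_predictor) and a component-wise median/mean in place of whatever derivation the claims actually specify.

```python
import numpy as np

def first_predictor(first_set_mvs):
    """Component-wise median of the motion vectors in the first reference block set."""
    return np.median(np.asarray(first_set_mvs, dtype=np.float32), axis=0)

def predict_mvd(second_set):
    """second_set: list of (mv, mv_first_predictor) pairs for the second reference block set.
    The per-block first motion vector differences are averaged to predict the current block's
    first motion vector difference."""
    diffs = [np.asarray(mv, np.float32) - np.asarray(pred, np.float32) for mv, pred in second_set]
    return np.mean(diffs, axis=0)

def second_predictor(first_set_mvs, second_set):
    pred1 = first_predictor(first_set_mvs)  # first predictor of the current block's MV
    mvd_pred = predict_mvd(second_set)      # predictor of the first MV difference
    return pred1 + mvd_pred                 # refined (second) predictor of the current block's MV

# Hypothetical neighbouring motion vectors (x, y) in quarter-pel units
first_set = [(4, 0), (6, -2), (5, 1)]
second_set = [((8, 2), (6, 1)), ((3, -1), (2, 0))]
print(second_predictor(first_set, second_set))  # -> [6.5 0.0]
```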

    OBSTACLE AVOIDANCE PLAYING METHOD AND APPARATUS

    Publication No.: US20250005885A1

    Publication Date: 2025-01-02

    Application No.: US18665133

    Application Date: 2024-05-15

    Abstract: An obstacle avoidance playing method includes acquiring human eye position information of a viewer in a playing scene and three-dimensional data of an object in a respective viewing space region; determining a visible region of a display screen based on the human eye position information, the three-dimensional data of the object, and size and position information of the display screen, the visible region corresponding to a portion of the display screen that is unobstructed to the viewer; and displaying image content using (i) a matched obstacle avoidance mode determined based on the visible region and (ii) a preset obstacle avoidance strategy such that the image content is displayed in the visible region.
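    A simplified 2-D geometric sketch of how a visible screen interval could be derived from an eye position and obstacle corner points; the ray projection and interval clipping here are illustrative assumptions, not the disclosed obstacle avoidance strategy.

```python
def occluded_interval(eye, obstacle_corners, screen_z=0.0):
    """Project the obstacle's extent onto the screen plane (z = screen_z) as seen from
    the eye position; returns the shadowed [x_min, x_max] interval on the screen."""
    ex, ez = eye
    xs = []
    for ox, oz in obstacle_corners:        # obstacle corner points (x, z)
        t = (screen_z - ez) / (oz - ez)    # ray from eye through the corner to the screen
        xs.append(ex + t * (ox - ex))
    return min(xs), max(xs)

def visible_region(eye, obstacle_corners, screen_extent):
    """Clip the occluded interval out of the screen extent and keep the larger remainder."""
    sx0, sx1 = screen_extent
    ox0, ox1 = occluded_interval(eye, obstacle_corners)
    left = (sx0, min(max(ox0, sx0), sx1))
    right = (min(max(ox1, sx0), sx1), sx1)
    return max((left, right), key=lambda seg: seg[1] - seg[0])

# Viewer at x=0, 2 m in front of the screen; obstacle corners half way between the two
eye = (0.0, 2.0)
obstacle = [(0.3, 1.0), (0.5, 1.0)]
print(visible_region(eye, obstacle, screen_extent=(-1.0, 1.0)))  # -> (-1.0, 0.6)
```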

    META-SEARCHING METHOD AND APPARATUS

    Publication No.: US20240119096A1

    Publication Date: 2024-04-11

    Application No.: US18217831

    Application Date: 2023-07-03

    CPC classification number: G06F16/953 G06F16/951 G06F21/31

    Abstract: A meta-searching method includes determining a target metaverse for a current search through intent classification based on an inquiry text included in a user's searching request; extracting a searching clue and a clue type; selecting a current search engine for the current search using the target metaverse and the clue type, prioritizing a search engine associated with the target metaverse; when it is determined that user account information is required to search for the searching clue with the current search engine, performing identity authentication with the user's account information in the target metaverse before the search, and then performing the search with the current search engine based on the searching clue; and providing the search results to the user.
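    A hypothetical, simplified Python sketch of the pipeline described above; classify_intent, extract_clue, the Engine registry, and the authentication check are illustrative stand-ins, not the patented components.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class Engine:
    metaverse: str
    needs_account: bool
    search: Callable[[str], List[str]]

def classify_intent(query: str) -> str:
    """Toy intent classifier mapping the inquiry text to a target metaverse."""
    return "game-world" if "avatar" in query.lower() else "social-world"

def extract_clue(query: str) -> Tuple[str, str]:
    """Return (clue, clue_type); here the whole query is treated as a keyword clue."""
    return query.strip(), "keyword"

def select_engine(engines: List[Engine], metaverse: str, clue_type: str) -> Engine:
    """Prioritize an engine associated with the target metaverse."""
    matched = [e for e in engines if e.metaverse == metaverse]
    return (matched or engines)[0]

def meta_search(query: str, engines: List[Engine],
                account: Optional[Dict[str, str]] = None) -> List[str]:
    metaverse = classify_intent(query)
    clue, clue_type = extract_clue(query)
    engine = select_engine(engines, metaverse, clue_type)
    if engine.needs_account:  # authenticate before searching when the engine requires it
        assert account and account.get("metaverse") == metaverse, "identity authentication failed"
    return engine.search(clue)

engines = [Engine("game-world", True, lambda c: [f"game result for {c}"]),
           Engine("social-world", False, lambda c: [f"social result for {c}"])]
print(meta_search("find my friend's avatar", engines, account={"metaverse": "game-world"}))
```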

    BIDIRECTIONAL OPTICAL FLOW ESTIMATION METHOD AND APPARATUS

    Publication No.: US20230281829A1

    Publication Date: 2023-09-07

    Application No.: US18168209

    Application Date: 2023-02-13

    Abstract: A bidirectional optical flow estimation method and apparatus are provided. The method includes acquiring a target image pair whose optical flow is to be estimated, constructing an image pyramid for each target image in the pair, and performing bidirectional optical flow estimation using a pre-trained optical flow estimation model based on the image pyramids, to obtain the bidirectional optical flow between the target images. An optical flow estimation module in the optical flow estimation model is recursively called to perform bidirectional optical flow estimation sequentially on the images of the respective pyramid layers according to a preset order; before each call of the optical flow estimation module, forward warping towards middle processing is performed on the image of the corresponding pyramid layer, and the resulting image of an intermediate frame is input into the optical flow estimation module. With the disclosure, the efficiency and generalization of bidirectional optical flow estimation can be improved, and the overheads of model training and optical flow estimation can be reduced.
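    A coarse-to-fine Python sketch, under stated assumptions, of recursively reusing one flow module over an image pyramid; build_pyramid, warp_towards_middle, and flow_module are placeholders, not the disclosed optical flow estimation model.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Simple average-pooling pyramid for a grayscale image, coarsest level last."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        pyr.append(pyr[-1][: h // 2 * 2, : w // 2 * 2]
                   .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def warp_towards_middle(img, flow):
    """Placeholder for forward warping half way along the flow (identity here)."""
    return img

def flow_module(a, b, init_flow):
    """Stand-in for the learned optical flow module: returns a (H, W, 2) flow field."""
    return init_flow + 0.0 * a[..., None]  # no real refinement in this sketch

def bidirectional_flow(img0, img1, levels=3):
    pyr0, pyr1 = build_pyramid(img0, levels), build_pyramid(img1, levels)
    fwd = np.zeros(pyr0[-1].shape + (2,), np.float32)
    bwd = np.zeros_like(fwd)
    for a, b in zip(reversed(pyr0), reversed(pyr1)):   # coarse to fine
        if fwd.shape[:2] != a.shape:                   # upsample the previous estimate
            fwd = np.kron(fwd, np.ones((2, 2, 1))) * 2.0
            bwd = np.kron(bwd, np.ones((2, 2, 1))) * 2.0
            fwd, bwd = fwd[: a.shape[0], : a.shape[1]], bwd[: a.shape[0], : a.shape[1]]
        mid_a = warp_towards_middle(a, fwd)            # forward warp towards the middle
        mid_b = warp_towards_middle(b, bwd)
        fwd = flow_module(mid_a, mid_b, fwd)           # same module reused at every level
        bwd = flow_module(mid_b, mid_a, bwd)
    return fwd, bwd

img0 = np.random.rand(32, 32)
img1 = np.random.rand(32, 32)
fwd, bwd = bidirectional_flow(img0, img1)
```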
