Video processing method and apparatus, electronic device, and storage medium

    Publication No.: WO2023088104A1

    Publication Date: 2023-05-25

    Application No.: PCT/CN2022/129397

    Application Date: 2022-11-03

    Inventor: 陈誉中

    Abstract: Embodiments of the present disclosure provide a video processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring an original video, the original video being a single-view video; determining target depth information for each of a plurality of original video frames in the original video; and generating, from the target depth information and the pixel values of the original pixels in each original video frame, a three-dimensional viewpoint model corresponding to that frame, so that a client can generate a new-view video corresponding to the original video from the three-dimensional viewpoint model, wherein the absolute value of the difference between each of a plurality of viewpoints within the viewpoint range of the three-dimensional viewpoint model and the viewpoint of the corresponding original video frame is less than or equal to a preset angle threshold.
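    Read as an algorithm, the abstract amounts to per-frame depth estimation followed by back-projection of each frame into a renderable 3D representation, with new viewpoints constrained to a preset angular threshold around the original view. The Python sketch below illustrates those two steps under assumed pinhole intrinsics (fx, fy, cx, cy) and a 15-degree threshold; these names and values are illustrative and do not come from the publication.

```python
# Illustrative sketch (not the patented implementation): back-project one frame's
# pixels using a depth map and camera intrinsics, and check that a requested new
# viewpoint stays within a preset angular threshold of the original view.
import numpy as np

def backproject_frame(depth, rgb, fx, fy, cx, cy):
    """Turn a per-pixel depth map plus colors into a 3D point cloud (camera frame)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    return points, colors

def viewpoint_allowed(new_dir, original_dir, max_angle_deg=15.0):
    """Accept a new viewing direction only if it deviates from the original
    view by no more than the preset angle threshold."""
    cos_angle = np.dot(new_dir, original_dir) / (
        np.linalg.norm(new_dir) * np.linalg.norm(original_dir))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg

# Example usage with synthetic data
depth = np.full((480, 640), 2.0)                  # placeholder depth map
rgb = np.zeros((480, 640, 3), dtype=np.uint8)     # placeholder frame
pts, cols = backproject_frame(depth, rgb, fx=500, fy=500, cx=320, cy=240)
print(pts.shape)
print(viewpoint_allowed(np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))
```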

    COLOR AND INFRA-RED THREE-DIMENSIONAL RECONSTRUCTION USING IMPLICIT RADIANCE FUNCTION

    Publication No.: WO2022182421A1

    Publication Date: 2022-09-01

    Application No.: PCT/US2021/070188

    Application Date: 2021-02-24

    Applicant: GOOGLE LLC

    Abstract: An image is rendered based on a neural radiance field (NeRF) volumetric representation of a scene, where the NeRF representation is based on captured frames of video data, each frame including a color image, a widefield IR image, and a plurality of depth IR images of the scene. Each depth IR image is captured when the scene is illuminated by a different pattern of points of IR light, and the illumination by the patterns occurs at different times. The NeRF representation provides a mapping between positions and viewing directions to a color and optical density at each position in the scene, where the color and optical density at each position enables a viewing of the scene from a new perspective, and the NeRF representation provides a mapping between positions and viewing directions to IR values for each of the different patterns of points of IR light from the new perspective.
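    As a rough illustration of the mapping described (and not Google's actual model), the sketch below defines a tiny NeRF-style MLP that takes a 3D position and viewing direction and returns RGB color, optical density, and one IR value per illumination pattern. The layer sizes, activation choices, and number of IR patterns are placeholder assumptions.

```python
# Illustrative sketch (assumptions, not the disclosed implementation): a small MLP in
# the spirit of a NeRF field that maps a 3D position and viewing direction to color,
# optical density, and one IR value per illumination pattern.
import torch
import torch.nn as nn

class ColorIRField(nn.Module):
    def __init__(self, num_ir_patterns=4, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # position (xyz) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)            # optical density
        self.color_head = nn.Linear(hidden, 3)               # RGB color
        self.ir_head = nn.Linear(hidden, num_ir_patterns)    # one IR value per pattern

    def forward(self, position, direction):
        features = self.trunk(torch.cat([position, direction], dim=-1))
        density = torch.relu(self.density_head(features))
        color = torch.sigmoid(self.color_head(features))
        ir = torch.sigmoid(self.ir_head(features))
        return color, density, ir

# Query the field for a batch of sample points along camera rays
field = ColorIRField()
color, density, ir = field(torch.rand(1024, 3), torch.rand(1024, 3))
print(color.shape, density.shape, ir.shape)
```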

    METHOD AND APPARATUS FOR PROCESSING IMAGE CONTENT

    Publication No.: WO2021063919A1

    Publication Date: 2021-04-08

    Application No.: PCT/EP2020/077179

    Application Date: 2020-09-29

    Abstract: A method and system are provided for processing image content. The method comprises receiving information about a content image captured by at least one camera, the content being a multi-view representation of an image that includes both distorted and undistorted areas. Camera parameters and image parameters are then obtained and used to determine which areas of the image are undistorted and which are distorted. A depth map of the image is calculated using the determined undistorted and distorted areas, and a final stereoscopic image is rendered using the distorted and undistorted areas together with the calculated depth map.
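    The sketch below is an assumed, simplified reading of that pipeline rather than the patented method: a first-order radial-distortion model marks which pixels are treated as distorted, and a depth map is converted to per-pixel disparity for a basic stereoscopic rendering. The threshold, distortion coefficient, and baseline are placeholder values.

```python
# Illustrative sketch (assumed logic, not the patented method): use simple radial-
# distortion camera parameters to mark which pixels of an image are treated as
# distorted, then derive per-pixel disparity from a depth map so a stereoscopic
# renderer can handle the two regions differently.
import numpy as np

def distortion_mask(width, height, cx, cy, k1, max_distortion=0.05):
    """Mark pixels whose first-order radial distortion term exceeds a threshold."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    # normalized squared radius from the principal point
    r2 = ((u - cx) / width) ** 2 + ((v - cy) / height) ** 2
    distortion = np.abs(k1) * r2            # first-order radial distortion magnitude
    return distortion > max_distortion       # True where the image is considered distorted

def stereo_shift(depth, baseline=0.06, focal=500.0):
    """Per-pixel horizontal disparity for a simple stereoscopic rendering."""
    return (baseline * focal) / np.maximum(depth, 1e-3)

mask = distortion_mask(1920, 1080, cx=960, cy=540, k1=-0.3)
depth = np.full((1080, 1920), 3.0)           # placeholder depth map
disparity = stereo_shift(depth)
print(mask.mean(), disparity.max())
```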

    Wearable camera and neck guide

    Publication No.: WO2021006655A1

    Publication Date: 2021-01-14

    Application No.: PCT/KR2020/008998

    Application Date: 2020-07-09

    Abstract: The present invention provides a wearable camera that can be worn around the neck and capture full 360-degree omnidirectional video by mounting three camera modules at fixed angular intervals in a 'U'-shaped case. The optical axes of the three cameras lie in the same plane, and the angle between the optical axis of the rear camera and that of each front camera is larger than the angle between the optical axes of the front cameras, which improves accuracy during image stitching while minimizing non-informative imagery.
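    A worked check of the geometry described (with invented numbers, not figures from the publication): three coplanar optical axes cover a full 360 degrees as long as no angular gap between adjacent axes exceeds a camera's horizontal field of view, even when the rear-to-front axis angle is made larger than the front-to-front angle.

```python
# Illustrative sketch (assumed numbers, not from the patent): verify that three
# cameras whose optical axes lie in one plane cover a full 360 degrees, given each
# camera's horizontal field of view and the headings of their optical axes.
def covers_360(fov_deg, headings_deg):
    """Coverage holds if no angular gap between adjacent optical axes exceeds the FOV."""
    headings = sorted(h % 360 for h in headings_deg)
    gaps = [(headings[(i + 1) % len(headings)] - headings[i]) % 360
            for i in range(len(headings))]
    return all(gap <= fov_deg for gap in gaps)

# Front cameras at +/-50 degrees (100 degrees between them), rear camera at 180 degrees
# (130 degrees to each front axis, i.e. larger than the front-front angle).
print(covers_360(fov_deg=140, headings_deg=[-50, 50, 180]))  # True with 140-degree lenses
```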

    A METHOD AND CORRESPONDING SYSTEM FOR GENERATING VIDEO-BASED MODELS OF A TARGET SUCH AS A DYNAMIC EVENT

    Publication No.: WO2020040679A1

    Publication Date: 2020-02-27

    Application No.: PCT/SE2019/050707

    Application Date: 2019-07-22

    Abstract: There is disclosed a method and corresponding systems for generating one or more video-based models of a target. The method comprises providing (S1) video streams from at least two moving or movable vehicles equipped with cameras for simultaneously imaging the target from different viewpoints. Position synchronization of the moving or movable vehicles is provided to create a stable image base, which represents the distance between the moving or movable vehicles. Pointing synchronization of the cameras is provided to cover the same object(s) and/or dynamic event(s). Time synchronization of the video frames of the video streams is provided to obtain, for at least one point in time, a set of simultaneously registered video frames. The method further comprises generating (S2), for said at least one point in time, at least one three-dimensional, 3D, model of the target based on the corresponding set of simultaneously registered video frames.
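    One piece of the described pipeline that lends itself to a short sketch is the time synchronization step: pairing frames from the two vehicle-mounted streams by nearest timestamp so that each point in time has a set of simultaneously registered frames. The tolerance and data layout below are assumptions, not values from the publication.

```python
# Illustrative sketch (assumptions only): pair up frames from two vehicle-mounted
# cameras by nearest timestamp so that, for each point in time, a set of
# simultaneously registered frames is available for 3D model generation.
from bisect import bisect_left

def pair_frames(times_a, times_b, tolerance=0.02):
    """Return index pairs (i, j) where frame i of stream A and frame j of stream B
    were captured within `tolerance` seconds of each other."""
    pairs = []
    for i, t in enumerate(times_a):
        j = bisect_left(times_b, t)
        for cand in (j - 1, j):
            if 0 <= cand < len(times_b) and abs(times_b[cand] - t) <= tolerance:
                pairs.append((i, cand))
                break
    return pairs

# Two 30 fps streams with a small relative offset
times_a = [k / 30.0 for k in range(10)]
times_b = [k / 30.0 + 0.005 for k in range(10)]
print(pair_frames(times_a, times_b))
```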

    AUTOMATIC DYNAMIC DIAGNOSIS GUIDE WITH AUGMENTED REALITY

    Publication No.: WO2020006142A1

    Publication Date: 2020-01-02

    Application No.: PCT/US2019/039340

    Application Date: 2019-06-26

    Abstract: An augmented reality (AR) system for diagnosis, troubleshooting and repair of industrial robots. The disclosed diagnosis guide system communicates with a controller of an industrial robot and collects data from the robot controller, including a trouble code identifying a problem with the robot. The system then identifies an appropriate diagnosis decision tree based on the collected data, and provides an interactive step-by-step troubleshooting guide to a user on an AR-capable mobile device, including augmented reality for depicting actions to be taken during testing and component replacement. The system includes data collector, tree generator and guide generator modules, and builds the decision tree and the diagnosis guide using a stored library of diagnosis trees, decisions and diagnosis steps, along with the associated AR data.
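    A minimal sketch of the kind of data structure such a guide could be built on (hypothetical, not the disclosed implementation): a decision tree keyed by trouble code, walked one check at a time until a repair action is reached. The trouble code, questions, and actions below are invented placeholders.

```python
# Illustrative sketch (hypothetical structure, not the disclosed system): map a
# robot controller trouble code to a diagnosis decision tree and walk the user
# through it step by step.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    question: str = ""                 # diagnostic check shown to the technician
    yes: Optional["Node"] = None       # next node if the check passes
    no: Optional["Node"] = None        # next node if the check fails
    action: Optional[str] = None       # repair action at a leaf (pairs with an AR overlay)

# A tiny library of decision trees keyed by trouble code (invented code and steps).
TREES = {
    "CODE-042": Node(
        question="Is the encoder battery voltage above 2.9 V?",
        yes=Node(action="Re-master the axis and clear the alarm."),
        no=Node(action="Replace the battery, then re-master the axis."),
    ),
}

def run_guide(trouble_code, answers):
    """Walk the tree using the technician's yes/no answers until an action is found."""
    node = TREES[trouble_code]
    for passed in answers:
        node = node.yes if passed else node.no
        if node.action:
            return node.action
    return node.question

print(run_guide("CODE-042", [False]))  # -> "Replace the battery, then re-master the axis."
```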

    Method for processing overlay media in a 360 video system, and apparatus therefor

    Publication No.: WO2019235849A1

    Publication Date: 2019-12-12

    Application No.: PCT/KR2019/006808

    Application Date: 2019-06-05

    Abstract: A 360 image data processing method performed by a 360 video reception apparatus according to the present invention comprises the steps of: receiving 360 image data; obtaining, from the 360 image data, information about an encoded picture and metadata; decoding a picture on the basis of the information about the encoded picture; and rendering the decoded picture and an overlay on the basis of the metadata, wherein the metadata includes group information, the group information includes group type information indicating a group containing main media and an overlay that can be rendered together, the decoded picture includes the main media, and the group information includes information indicating whether a track belonging to the group contains the main media or the overlay media.
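    A hedged sketch of how the described group information could be represented and queried (a hypothetical layout, not the OMAF/ISOBMFF syntax from the publication): each track entry records its group, its grouping type, and whether it carries main media or overlay media, and overlays are selected for joint rendering only when they share a group with the main-media track.

```python
# Illustrative sketch (hypothetical data layout, not the signaled syntax from the
# patent): represent group metadata that ties main media and overlay tracks
# together, and pick out which overlay tracks may be rendered with a given picture.
from dataclasses import dataclass

@dataclass
class TrackGroupEntry:
    track_id: int
    group_id: int
    group_type: str        # grouping type, e.g. an overlay-and-background group
    is_main_media: bool    # True: main (background) media, False: overlay media

def renderable_overlays(entries, main_track_id):
    """Return overlay track IDs that share a group with the given main-media track."""
    main_groups = {e.group_id for e in entries
                   if e.track_id == main_track_id and e.is_main_media}
    return [e.track_id for e in entries
            if not e.is_main_media and e.group_id in main_groups]

entries = [
    TrackGroupEntry(track_id=1, group_id=10, group_type="ovbg", is_main_media=True),
    TrackGroupEntry(track_id=2, group_id=10, group_type="ovbg", is_main_media=False),
    TrackGroupEntry(track_id=3, group_id=20, group_type="ovbg", is_main_media=False),
]
print(renderable_overlays(entries, main_track_id=1))  # -> [2]
```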
