-
1.
Publication No.: US20190208177A1
Publication Date: 2019-07-04
Application No.: US16295582
Filing Date: 2019-03-07
Inventor: Tatsuya KOYAMA , Toshiyasu SUGIO , Toru MATSUNOBU , Satoshi YOSHIKAWA , Pongsak LASANG , Chi WANG
IPC: H04N13/139 , H04N13/282 , G06T7/80
CPC classification number: H04N13/139 , G06T7/55 , G06T7/70 , G06T7/85 , G06T2207/10021 , H04N13/282
Abstract: A three-dimensional model generating device includes: a converted image generating unit that, for each of input images included in one or more items of video data and having mutually different viewpoints, generates a converted image from the input image that includes fewer pixels than the input image; a camera parameter estimating unit that detects features in the converted images and estimates, for each of the input images, a camera parameter at a capture time of the input image, based on a pair of similar features between two of the converted images; and a three-dimensional model generating unit that generates a three-dimensional model using the input images and the camera parameters.
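The abstract above describes detecting features on reduced-size "converted images" and then estimating camera parameters that apply to the full-resolution inputs. A minimal sketch of that idea in NumPy (the stride-based downsampling, the function names, and the coordinate rescaling are illustrative assumptions, not the patented implementation; the actual camera-parameter solver is omitted):

```python
import numpy as np

def make_converted_image(img: np.ndarray, factor: int = 4) -> np.ndarray:
    # Generate a converted image with fewer pixels than the input,
    # so that feature detection and matching run faster.
    return img[::factor, ::factor]

def rescale_matches(kps_a: np.ndarray, kps_b: np.ndarray, factor: int):
    # Keypoints were found in the converted (low-resolution) images;
    # scale their pixel coordinates back to the input-image frame
    # before solving for camera parameters (solver not shown).
    return kps_a * factor, kps_b * factor

img = np.zeros((480, 640, 3), dtype=np.uint8)
small = make_converted_image(img, factor=4)
print(small.shape)  # (120, 160, 3)
```

Because only the (cheap-to-match) converted images drive camera-parameter estimation while the full-resolution inputs drive model generation, the pipeline trades a little pose accuracy for a large reduction in matching cost.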
-
2.
Publication No.: US20190051036A1
Publication Date: 2019-02-14
Application No.: US16163010
Filing Date: 2018-10-17
Inventor: Toru MATSUNOBU , Toshiyasu SUGIO , Satoshi YOSHIKAWA , Tatsuya KOYAMA , Pongsak LASANG , Jian GAO
Abstract: Provided is a three-dimensional reconstruction method of reconstructing a three-dimensional model from multi-view images. The method includes: selecting two frames from the multi-view images; calculating image information of each of the two frames; selecting a method of calculating corresponding keypoints in the two frames, according to the image information; and calculating the corresponding keypoints using the method of calculating corresponding keypoints selected in the selecting of the method of calculating corresponding keypoints.
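The abstract leaves "image information" and the candidate matching methods abstract. As an illustrative sketch only (the gradient-energy texture measure, the threshold, and the "dense"/"sparse" method names are assumptions, not taken from the patent), the selection step might look like:

```python
import numpy as np

def image_information(frame: np.ndarray) -> float:
    # Crude texture measure: mean gradient energy of the frame.
    gy, gx = np.gradient(frame.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def select_matching_method(info_a: float, info_b: float,
                           threshold: float = 10.0) -> str:
    # Low-texture frame pairs get a dense correspondence search;
    # well-textured pairs can rely on sparse descriptor matching.
    return "dense" if min(info_a, info_b) < threshold else "sparse"

flat = np.full((64, 64), 128, dtype=np.uint8)
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(select_matching_method(image_information(flat),
                             image_information(noisy)))  # dense
```

The point of the adaptive selection is that no single keypoint-matching method works well for both textured and textureless frame pairs.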
-
3.
Publication No.: US20200234453A1
Publication Date: 2020-07-23
Application No.: US16624230
Filing Date: 2018-05-16
Inventor: Takaaki IDERA , Takaaki MORIYAMA , Shohji OHTSUBO , Pongsak LASANG , Takrit TANASNITIKUL
Abstract: Provided is a projection instruction device that generates a projection image to be projected onto a parcel based on sensing information of the parcel. The device includes a processor and a memory; in cooperation with the memory, the processor weights a feature-amount value of a color image of the parcel included in the sensing information based on a distance image of the parcel included in the sensing information, and tracks the parcel based on the weighted feature-amount value of the color image.
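The abstract describes weighting color features by the distance (depth) image before tracking. One conceivable realization, sketched here with a Gaussian depth weight applied to a hue histogram (the weighting function, bin count, and sigma are assumptions, not the claimed method):

```python
import numpy as np

def weighted_hue_histogram(hue: np.ndarray, depth: np.ndarray,
                           target_depth: float,
                           sigma: float = 150.0, bins: int = 16) -> np.ndarray:
    # Pixels whose depth is far from the tracked parcel's expected
    # depth contribute less to the color feature (Gaussian weighting).
    w = np.exp(-((depth.astype(float) - target_depth) ** 2) / (2 * sigma ** 2))
    idx = (hue.astype(int) * bins) // 256          # map hue 0..255 to a bin
    return np.bincount(idx.ravel(), weights=w.ravel(), minlength=bins)

hue = np.zeros((4, 4), dtype=np.uint8)     # all pixels fall in hue bin 0
depth = np.full((4, 4), 1000.0)            # all pixels at 1000 mm
hist = weighted_hue_histogram(hue, depth, target_depth=1000.0)
```

Down-weighting pixels at the wrong depth suppresses background and neighboring parcels that happen to share the tracked parcel's color.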
-
4.
Publication No.: US20220277516A1
Publication Date: 2022-09-01
Application No.: US17746190
Filing Date: 2022-05-17
Inventor: Toru MATSUNOBU , Satoshi YOSHIKAWA , Masaki FUKUDA , Kensho TERANISHI , Pradit MITTRAPIYANURUK , Keng Liang LOI , Pongsak LASANG
IPC: G06T17/00 , G06T19/00 , G06T3/40 , G06T7/90 , G06T7/70 , G01S17/89 , G01B11/26 , G01S7/4865 , G01S17/86
Abstract: A three-dimensional model generation method executed by an information processing device includes: obtaining a first three-dimensional model from a measuring device that emits an electromagnetic wave and obtains a reflected wave which is the electromagnetic wave reflected by a measurement target to thereby generate a first three-dimensional model including first position information indicating first three-dimensional positions in the measurement target; obtaining a multi-viewpoint image generated by one or more cameras shooting the measurement target from different positions; and generating a second three-dimensional model by enhancing the definition of the first three-dimensional model using the multi-viewpoint image.
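Enhancing the definition of a measured (e.g., LiDAR-derived) model with multi-viewpoint images can involve projecting the model's 3D positions into each image. A minimal pinhole-projection sketch of that sub-step (the colorization use case, function names, and intrinsics are assumptions for illustration; the patent's actual refinement procedure is not reproduced here):

```python
import numpy as np

def project_points(points: np.ndarray, K: np.ndarray) -> np.ndarray:
    # Pinhole projection of Nx3 camera-frame points to pixel coordinates.
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def colorize_model(points: np.ndarray, K: np.ndarray,
                   image: np.ndarray) -> np.ndarray:
    # Attach a color from one viewpoint image to each 3D point,
    # ignoring points that project outside the frame.
    uv = np.round(project_points(points, K)).astype(int)
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    colors[ok] = image[uv[ok, 1], uv[ok, 0]]
    return colors

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0]])               # point on the optical axis
img = np.full((480, 640, 3), 99, dtype=np.uint8)
cols = colorize_model(pts, K, img)
```

A point on the optical axis projects to the principal point (320, 240) and picks up the image value there.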
-
5.
Publication No.: US20220277480A1
Publication Date: 2022-09-01
Application No.: US17748803
Filing Date: 2022-05-19
Inventor: Takafumi TOKUHIRO , Zheng WU , Pongsak LASANG
Abstract: A position estimation device for a moving body equipped with n cameras that image the surrounding scene includes: an estimation unit that, for each of the n cameras, calculates a candidate camera position in map space based on the image positions of scene feature points extracted from the camera image and the map-space positions of those feature points pre-stored in map data; and a verification unit that, using the candidate positions, projects the scene feature point cloud stored in the map data onto the image of each of the n cameras, and calculates the accuracy of the n candidate camera positions based on the degree of matching between the projected feature point cloud and the feature point cloud extracted from the camera images.