Abstract:
A method for measuring a hole in a workpiece is provided. The method comprises: obtaining a three-dimensional point cloud model of the workpiece and a two-dimensional image of the workpiece; defining a first contour in the three-dimensional point cloud model based on an intensity difference in the two-dimensional image; defining a second contour and a third contour based on the first contour; bounding a data point testing region between the second contour and the third contour; defining data point sampling regions along a plurality of cross-section directions of the data point testing region; sampling data points in the data point sampling regions to obtain a turning point set comprising turning points, wherein each of the turning points has the largest turning margin in its sampling region; and connecting the turning points of the turning point set, which are distributed along a ring direction, to obtain an edge of the hole.
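A minimal sketch of the turning-point step on a single cross-section profile. The abstract does not define "turning margin"; here it is approximated as the largest bend (second difference) along a sampled profile of (radius, height) pairs, which is an assumption, not the patent's definition.

```python
import numpy as np

def turning_point(profile):
    """Return the index of the sample with the largest 'turning margin',
    approximated here as the largest second difference (sharpest bend)
    along a cross-section profile of (radius, height) pairs."""
    p = np.asarray(profile, dtype=float)
    # Second differences measure how sharply the profile bends at each point.
    bend = np.abs(p[:-2] - 2 * p[1:-1] + p[2:]).sum(axis=1)
    return int(np.argmax(bend)) + 1  # +1: bend[i] corresponds to p[i + 1]

# A flat surface that drops off at the hole edge (radius, height):
profile = [(r, 0.0) for r in range(5)] + [(r, -(r - 4) * 2.0) for r in range(5, 9)]
edge_index = turning_point(profile)  # index where the surface starts to drop
```

Repeating this per cross-section direction and connecting the resulting points along the ring direction would yield the hole edge described in the abstract.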
Abstract:
A system and method for determining individualized depth information in an augmented reality scene are described. The method includes receiving a plurality of images of a physical area from a plurality of cameras, extracting a plurality of depth maps from the plurality of images, generating an integrated depth map from the plurality of depth maps, and determining individualized depth information corresponding to a point of view of a user based on the integrated depth map and a plurality of position parameters.
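A minimal sketch of the depth-map integration step. The abstract does not specify the fusion rule; a per-pixel median with zero treated as "no reading" is an illustrative assumption, and the viewpoint lookup stands in for the reprojection a real system would perform from the user's position parameters.

```python
import numpy as np

def integrate_depth_maps(depth_maps):
    """Fuse per-camera depth maps into one integrated map.
    Sketch only: per-pixel median, ignoring invalid (zero) readings."""
    stack = np.stack(depth_maps).astype(float)
    stack[stack == 0] = np.nan           # treat 0 as "no reading"
    fused = np.nanmedian(stack, axis=0)
    return np.nan_to_num(fused)          # pixels no camera saw become 0

def depth_at_viewpoint(fused, u, v):
    """Look up depth for a user's point of view at pixel (u, v).
    A real system would reproject using the user's position parameters."""
    return float(fused[v, u])

a = np.array([[1.0, 2.0], [0.0, 4.0]])   # camera 1: missing reading at (1, 0)
b = np.array([[1.2, 0.0], [3.0, 4.2]])   # camera 2: missing reading at (0, 1)
fused = integrate_depth_maps([a, b])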
Abstract:
An image obtaining method comprises: separately projecting, by a projecting device, an image acquisition light and a reference light onto a target object, wherein the light intensity of the image acquisition light is higher than that of the reference light; obtaining, by an image obtaining device, a first image and a second image, both comprising an image of the target object, wherein the target object in the first image is illuminated by the image acquisition light and the target object in the second image is illuminated by the reference light, and wherein the first image has a first area including a part of the target object and the second image has a second area including the same part of the target object; and performing, by a computing device, a difference evaluation procedure to obtain a required light intensity based on a required amount.
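A minimal sketch of one plausible difference evaluation between the two exposures. The abstract leaves the procedure unspecified; the subtraction below (which makes regions responding strongly to the acquisition light stand out while equally lit background cancels) is an illustrative assumption, not the patent's definition.

```python
import numpy as np

def difference_evaluation(first, second):
    """Subtract the reference-lit image from the acquisition-lit image.
    Pixels on the target object keep a large positive residual; ambient
    background, lit roughly equally in both shots, mostly cancels out."""
    return np.clip(first.astype(int) - second.astype(int), 0, 255).astype(np.uint8)

acquisition = np.array([[200, 50], [210, 55]], dtype=np.uint8)  # bright light
reference   = np.array([[ 80, 48], [ 85, 52]], dtype=np.uint8)  # dim light
diff = difference_evaluation(acquisition, reference)
```

Here the left column (large residuals) would be classified as belonging to the target object, and the residual magnitudes could feed the evaluation of the required light intensity.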
Abstract:
A calibration method for a robotic arm system is provided. The method includes: capturing, by a visual device, an image of a calibration object fixed to a front end of the robotic arm, wherein a pedestal of the robotic arm has a pedestal coordinate system, the front end of the robotic arm has a first relative relationship with the pedestal, and the front end of the robotic arm has a second relative relationship with the calibration object; receiving the image and obtaining, by a computing device, three-dimensional feature data of the calibration object according to the image; and computing a third relative relationship between the visual device and the pedestal according to the three-dimensional feature data, the first relative relationship, and the second relative relationship, so as to calibrate a position error between the physical location of the calibration object and the predicted location generated by the visual device.
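A minimal sketch of chaining the relative relationships as homogeneous transforms. The frame names and numeric poses below are illustrative assumptions; the composition (pedestal-to-front-end, front-end-to-object, then inverting the camera-to-object pose) is one standard way to obtain the camera pose in the pedestal frame.

```python
import numpy as np

def pose(R=np.eye(3), t=(0, 0, 0)):
    """Build a 4x4 homogeneous transform from a rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses (identity rotations for brevity):
T_base_flange = pose(t=(0.5, 0.0, 0.3))   # first relative relationship
T_flange_obj  = pose(t=(0.0, 0.0, 0.1))   # second relative relationship
T_cam_obj     = pose(t=(0.1, 0.2, 1.0))   # from the visual device's 3-D features

# Third relative relationship: the camera pose expressed in the pedestal frame.
T_base_cam = T_base_flange @ T_flange_obj @ np.linalg.inv(T_cam_obj)
```

Comparing where `T_base_cam` predicts the calibration object against its known physical location then yields the position error to be calibrated out.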
Abstract:
A multi-modal image alignment method includes obtaining first points corresponding to a center vertex of a calibration object and second point groups corresponding to side vertices of the calibration object from two-dimensional images, obtaining third points corresponding to the center vertex from three-dimensional images, performing a first optimizing computation using a first coordinate system associated with the two-dimensional images, the first points, and the third points to obtain a first transformation matrix, processing the three-dimensional images using the first transformation matrix to generate firstly-transformed images, performing a second optimizing computation using the firstly-transformed images, the first points, and the second point groups to obtain a second transformation matrix, and transforming an image to be processed from a second coordinate system associated with the three-dimensional images to the first coordinate system using the first transformation matrix and the second transformation matrix.
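A minimal sketch of the kind of optimizing computation that could produce the first transformation matrix. The abstract does not name a solver; a least-squares rigid fit (Kabsch) between corresponding center-vertex points is an illustrative assumption, and the point sets below are made up.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst,
    returned as a 4x4 homogeneous matrix."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

third_points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # 3-D coordinate system
first_points = [(1, 2, 3), (2, 2, 3), (1, 3, 3), (1, 2, 4)]   # first coordinate system
T1 = rigid_fit(third_points, first_points)                    # first transformation matrix
```

A second fit of the same kind, run on the firstly-transformed data against the side-vertex point groups, would play the role of the second optimizing computation, and composing the two matrices maps the image to be processed into the first coordinate system.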
Abstract:
A controlling system and a controlling method for virtual display are provided. The controlling system for virtual display includes a visual line tracking unit, a space forming unit, a hand information capturing unit, a transforming unit and a controlling unit. The visual line tracking unit is used for tracking a visual line of a user. The space forming unit is used for forming a virtual display space according to the visual line. The hand information capturing unit is used for obtaining a hand location of one hand of the user in a real operation space. The transforming unit is used for transforming the hand location into a cursor location in the virtual display space. The controlling unit is used for controlling the virtual display according to the cursor location.
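A minimal sketch of what the transforming unit might compute. The abstract does not define the mapping; per-axis linear interpolation between a real operation box and a virtual display box is an illustrative assumption, and all bounds below are made-up values.

```python
def hand_to_cursor(hand, real_box, virtual_box):
    """Map a hand location in the real operation space to a cursor location
    in the virtual display space by per-axis linear interpolation.
    Each box is a list of (min, max) bounds, one pair per axis."""
    cursor = []
    for h, (r0, r1), (v0, v1) in zip(hand, real_box, virtual_box):
        ratio = (h - r0) / (r1 - r0)          # 0..1 inside the real space
        cursor.append(v0 + ratio * (v1 - v0))
    return tuple(cursor)

real_box    = [(0.0, 0.6), (0.0, 0.4), (0.0, 0.5)]   # metres in front of the user
virtual_box = [(0, 1920), (0, 1080), (0, 100)]       # virtual display volume
cursor = hand_to_cursor((0.3, 0.1, 0.25), real_box, virtual_box)
```

The controlling unit would then drive the virtual display from the resulting cursor location.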