Abstract:
A data cleaning device executing a data cleaning method is provided. The data cleaning method includes executing an application and executing a prompt plugin management program. Executing the prompt plugin management program includes generating a prompt instruction from unformatted content via a first prompt template, transmitting the prompt instruction to a first device, and receiving formatted content from the first device. The formatted content is generated by the first device from the prompt instruction through a large language model.
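The template-fill and round-trip flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the template text, the function names, and the stand-in transport are all hypothetical.

```python
# Hypothetical first prompt template; the patent does not specify its wording.
PROMPT_TEMPLATE = (
    "Convert the following unformatted content into a JSON record "
    "with fields 'name' and 'value':\n{content}"
)

def build_prompt(unformatted_content: str) -> str:
    """Generate a prompt instruction from the first prompt template."""
    return PROMPT_TEMPLATE.format(content=unformatted_content)

def clean(unformatted_content: str, send_to_first_device) -> str:
    """Transmit the prompt instruction to the first device and return the
    formatted content its large language model produces."""
    prompt = build_prompt(unformatted_content)
    return send_to_first_device(prompt)

# Stand-in for the first device's LLM endpoint (assumption for illustration).
def fake_device(prompt: str) -> str:
    return '{"name": "temp", "value": "25"}'

formatted = clean("temp 25", fake_device)
```

In a real deployment the `send_to_first_device` callable would wrap whatever network transport connects the data cleaning device to the first device.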
Abstract:
A video generating device and method are provided. The device analyzes a plurality of real-time images corresponding to a plurality of users to segment a target image from each of the real-time images. The device generates a three-dimensional portrait model corresponding to each of the users based on the target image of each of the real-time images. The device determines a first three-dimensional scenario template from a plurality of three-dimensional scenario templates based on a user quantity of the users and a position quantity corresponding to each of the three-dimensional scenario templates. The device composites the three-dimensional portrait models to spatial label positions of the first three-dimensional scenario template to generate a video corresponding to the users.
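The template-selection step can be sketched as below. The selection rule (pick a template with enough positions for all users, preferring the smallest such template) is an assumption for illustration; the abstract only says the choice depends on the user quantity and each template's position quantity.

```python
def select_template(templates, user_quantity):
    """Pick a scenario template whose position quantity can hold all users.
    Preferring the smallest sufficient template is an assumed tie-break."""
    candidates = [t for t in templates if t["positions"] >= user_quantity]
    if not candidates:
        return None
    return min(candidates, key=lambda t: t["positions"])

# Hypothetical template list for illustration.
templates = [
    {"name": "duo", "positions": 2},
    {"name": "quad", "positions": 4},
    {"name": "hall", "positions": 8},
]
chosen = select_template(templates, 3)  # selects the "quad" template
```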
Abstract:
An augmented reality processing device is provided, comprising an image capturing circuit and a processor. The processor is connected to the image capturing circuit and executes the following operations: generating an original point cloud image according to a first environment image and a physical object in the first environment image; generating an expanded point cloud image corresponding to the physical object from a second environment image according to the first environment image and a physical object point cloud set, and generating a superimposed point cloud image according to the expanded point cloud image and the original point cloud image; and generating a transformation matrix according to the original point cloud image and the expanded point cloud image, and superimposing a virtual object onto the second environment image according to the superimposed point cloud image and the transformation matrix.
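One common way to derive a transformation matrix between two corresponding point clouds is a Kabsch-style rigid alignment; the sketch below assumes that approach and known point correspondences, neither of which is stated in the abstract.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate a 4x4 rigid transform mapping src points (Nx3) onto dst
    points (Nx3) via the Kabsch algorithm (an assumed, standard choice)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # repair a reflection if one appears
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = c_dst - R @ c_src
    return T
```

The resulting matrix can then place a virtual object expressed in the original point cloud's frame into the second environment image's frame.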
Abstract:
A computer device calculates an estimated depth for each of the non-feature points of a sparse point cloud map of an image according to the feature-point depths of the feature points of the sparse point cloud map and the pixel depths of the pixels of an image depth map of the image, and generates a synthesized depth map according to the feature-point depths and the estimated depths.
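A simple way to combine sparse feature-point depths with a dense image depth map is to rescale the dense map so it agrees with the sparse depths and fill the non-feature points from it. This estimator (median-ratio scaling) is an assumption for illustration, not the method claimed.

```python
import numpy as np

def synthesize_depth(sparse_depth, image_depth):
    """sparse_depth: feature-point depths, NaN at non-feature points.
    image_depth:  dense pixel depths from the image depth map.
    Returns a synthesized depth map keeping exact feature-point depths."""
    mask = ~np.isnan(sparse_depth)
    # Assumed estimator: scale the dense map by the median ratio of
    # feature-point depth to dense depth at the feature points.
    scale = np.median(sparse_depth[mask] / image_depth[mask])
    out = image_depth * scale        # estimated depths at non-feature points
    out[mask] = sparse_depth[mask]   # preserve the measured feature depths
    return out
```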
Abstract:
A visual positioning apparatus, method, and non-transitory computer readable storage medium thereof are provided. The visual positioning apparatus derives an image by sensing a visual code marker in a space and performs the following operations: (a) identifying an identified marker image included in the image, (b) searching out the corner positions of the identified marker image, (c) deciding a marker structure of the identified marker image according to the corner positions, wherein the marker structure includes vertices, (d) selecting a portion of the vertices as first feature points, (e) searching out a second feature point for each first feature point, (f) updating the vertices of the marker structure according to the second feature points, (g) selecting a portion of the updated vertices as third feature points, and (h) calculating the position of the visual positioning apparatus according to the third feature points.
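Steps (d) through (f) can be sketched as a local refinement: for each selected vertex (first feature point), search for a nearby detected corner (second feature point) and update the marker structure with it. The nearest-corner-within-a-radius rule is an assumption; the abstract does not specify how the second feature points are searched out.

```python
def refine_vertices(vertices, first_idx, detected_corners, radius=5.0):
    """For each vertex index in first_idx, replace the vertex with the
    nearest detected corner within `radius` pixels, if any is found."""
    updated = list(vertices)
    for i in first_idx:
        vx, vy = vertices[i]
        best, best_d = None, radius
        for cx, cy in detected_corners:
            d = ((vx - cx) ** 2 + (vy - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = (cx, cy), d
        if best is not None:
            updated[i] = best
    return updated
```

Step (h) would then typically solve for the apparatus pose from the refined 2D points and the marker's known geometry, e.g. with a PnP solver.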
Abstract:
A virtual and real image fusion method is disclosed. The method comprises the following operations: obtaining a picture of a three-dimensional space by a first camera, wherein the picture comprises a screen picture and a tag picture of an entity tag, and the screen picture is projected on the entity tag; obtaining corresponding point data of the entity tag on the screen picture according to the picture by a processor; obtaining a spatial correction parameter according to the corresponding point data by the processor; and displaying an image on the screen picture according to the spatial correction parameter by the processor.
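If the spatial correction parameter is modeled as a planar homography between the tag and the screen picture (an assumption; the abstract does not name the parameter's form), it can be estimated from four point correspondences with the standard direct linear transform:

```python
import numpy as np

def homography(src_pts, dst_pts):
    """Estimate the 3x3 homography mapping src points to dst points
    from >= 4 correspondences via the DLT (assumed correction model)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (last right-singular vector) holds the homography.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

The estimated matrix can then warp the displayed image so that it lands correctly on the screen picture.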
Abstract:
A camera system and an image-providing method are disclosed to overcome the problem that conventional cameras cannot decide by themselves whether and/or how to capture images. The disclosed camera system includes a camera for capturing images and a computer device for calculating an estimated camera location and pose for the camera. The camera system also includes a location adjusting device and a pose adjusting device to adjust the camera to the estimated camera location and pose.
Abstract:
A virtuality-reality overlapping method is provided. A point cloud map related to a real scene is constructed. Respective outline border vertices of a plurality of objects are located by using 3D object detection. Based on the outline border vertices of the objects, the point cloud coordinates of the final candidate outline border vertices are located according to a screening result of a plurality of projected key frames. Then, the point cloud map is projected onto the real scene to overlap a virtual content with the real scene.
Abstract:
A space coordinate converting server and a method thereof are provided. The space coordinate converting server receives a field video recorded with a 3D object from an image capturing device, and generates a point cloud model accordingly. The space coordinate converting server determines key frames of the field video, and maps the point cloud model to key images of the key frames based on rotation and translation information of the image capturing device to generate a characterized 3D coordinate set. The space coordinate converting server determines 2D coordinates of the 3D object in the key images, and selects 3D coordinates from the characterized 3D coordinate set according to the 2D coordinates. The space coordinate converting server determines a space coordinate converting relation according to marked points of the 3D object and the 3D coordinates.
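The step of selecting 3D coordinates from the characterized set according to 2D coordinates can be sketched as a nearest-projection lookup. Treating the characterized set as (2D projection, 3D point) pairs and choosing the nearest projection is an assumption for illustration.

```python
def select_3d(points2d, characterized):
    """characterized: list of ((u, v), (x, y, z)) pairs, i.e. a point cloud
    coordinate together with its 2D projection in a key image.
    For each query 2D coordinate, return the 3D coordinate whose
    projection is nearest (assumed selection rule)."""
    selected = []
    for qu, qv in points2d:
        best = min(
            characterized,
            key=lambda c: (c[0][0] - qu) ** 2 + (c[0][1] - qv) ** 2,
        )
        selected.append(best[1])
    return selected
```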
Abstract:
A display system, an image compensation method and a non-transitory computer readable storage medium thereof are provided. The display system includes a flexible panel, a prediction unit, a compensation unit, an image synthesis unit and a control unit. The prediction unit predicts a prediction angle of the flexible panel at a final time. The compensation unit generates a first compensation image according to an initial display angle of the flexible panel at an initial time, and generates a second compensation image according to the prediction angle. The image synthesis unit synthesizes a first display image according to the first compensation image and the second compensation image. The control unit selectively substitutes the first display image for an image displayed on the flexible panel at the final time.
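The control unit's selective substitution can be sketched with an assumed decision rule: substitute the synthesized display image only when the panel is predicted to bend noticeably between the initial and final times. Both the rule and the threshold are hypothetical; the abstract does not state the substitution criterion.

```python
def choose_display_image(original, synthesized, initial_angle, predicted_angle,
                         threshold_deg=1.0):
    """Return the synthesized first display image when the predicted
    change of panel angle exceeds a threshold (assumed criterion),
    otherwise keep the original image."""
    if abs(predicted_angle - initial_angle) >= threshold_deg:
        return synthesized
    return original
```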