Abstract:
A navigation apparatus and method are provided. The apparatus receives input data and at least one piece of positioning information. The apparatus performs a semantic analysis on the input data to generate a plurality of pieces of semantic information. The apparatus selects at least one piece of the semantic information as a filtering condition. The apparatus compares the filtering condition with a plurality of semantic tags in map data to determine whether the semantic tags include at least one first semantic tag that meets the filtering condition. When determining that the semantic tags include the at least one first semantic tag, the apparatus generates a comparison result, wherein the comparison result is related to an object corresponding to the at least one first semantic tag. The apparatus generates a navigation route according to the comparison result and the at least one piece of positioning information.
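A minimal Python sketch of the tag-filtering step described above; the tag structure, the substring-style matching, and all names are assumptions, since the abstract specifies no data format:

```python
from dataclasses import dataclass

# Hypothetical structure for semantic tags in map data (not from the abstract).
@dataclass
class SemanticTag:
    label: str       # e.g. "coffee shop"
    object_id: str   # identifier of the map object this tag annotates

def match_tags(tags, filtering_condition):
    """Return the tags (first semantic tags) that meet the filtering condition."""
    return [t for t in tags if filtering_condition in t.label]

tags = [SemanticTag("coffee shop", "obj-17"), SemanticTag("parking lot", "obj-3")]
matched = match_tags(tags, "coffee")
if matched:
    # The comparison result relates to the objects behind the matching tags;
    # a route planner would then build a route toward those objects.
    comparison_result = [t.object_id for t in matched]
```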
Abstract:
A display method for displaying an image on a transparent display component of a display device is provided. The display method includes receiving display content, determining a background resolution of the image, selecting one of a plurality of background images as a first background image based on the display content and the background resolution, performing image processing on the first background image to generate a second background image, adding the display content to the second background image to generate the image, and displaying the image on the transparent display component.
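A minimal Python sketch of this flow using Pillow; the size-based selection rule and the brightness adjustment are assumptions, since the abstract does not define the image processing:

```python
from PIL import Image, ImageEnhance

def compose_frame(display_content: Image.Image,
                  candidates: list[Image.Image],
                  resolution: tuple[int, int]) -> Image.Image:
    # Select a first background image: here, the first candidate whose size
    # matches the background resolution (an assumed selection rule).
    first_bg = next((im for im in candidates if im.size == resolution),
                    candidates[0]).resize(resolution)
    # Image processing step: dim the background so the added content stays
    # legible on a transparent display (an assumed processing choice).
    second_bg = ImageEnhance.Brightness(first_bg.convert("RGB")).enhance(0.6)
    # Add the display content to the second background image.
    frame = second_bg.convert("RGBA")
    frame.alpha_composite(display_content.convert("RGBA"), dest=(0, 0))
    return frame  # the image to show on the transparent display component
```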
Abstract:
A testing device and a testing method thereof are provided. The testing device is connected to a terminal device running a graphical user interface (GUI). The testing device runs a testing program to start a recording procedure that executes the following steps: detecting a plurality of actions generated in response to operations on the terminal device; detecting a foreground application of the GUI; reading a plurality of pieces of object information of the foreground application; and analyzing the actions to record an object property operation of the foreground application and a call command. The testing device further stops the recording procedure to generate and store a script file and a reference log file. The script file includes the object property operation and the call command.
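A minimal Python sketch of the recording procedure; the event source, the object-information hook, and the file layout are all hypothetical, as the abstract names no framework:

```python
import json

def record_session(events, read_object_info):
    """events: iterable of (action, target) pairs detected on the terminal device."""
    script, reference_log = [], []
    for action, target in events:
        # Read the object information of the foreground application.
        objects = read_object_info(target)
        reference_log.append({"action": action, "objects": objects})
        script.append({
            "object_property_operation": {"target": target, "action": action},
            "call_command": f"invoke {target}.{action}",  # illustrative only
        })
    return script, reference_log

def stop_recording(script, reference_log):
    # Stopping the recording procedure generates and stores both files.
    with open("script.json", "w") as f:
        json.dump(script, f, indent=2)
    with open("reference_log.json", "w") as f:
        json.dump(reference_log, f, indent=2)
```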
Abstract:
A camera system and an image-providing method are disclosed to overcome the problem that conventional cameras cannot decide by themselves whether and/or how to capture images. The disclosed camera system includes a camera for capturing images and a computer device that calculates an estimated camera location and pose for the camera. The camera system also includes a location adjusting device and a pose adjusting device to adjust the camera to the estimated camera location and pose.
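A minimal Python sketch of computing an estimated camera location and pose as a look-at problem; the standoff/bearing criterion is an assumption, since the abstract does not state how the computer device makes the estimate:

```python
import numpy as np

def estimate_pose(target: np.ndarray, standoff: float, bearing_deg: float):
    """Place the camera `standoff` metres from `target` at the given bearing,
    aimed back at the target; returns a location and a (yaw, pitch) pose."""
    theta = np.radians(bearing_deg)
    location = target + standoff * np.array([np.cos(theta), np.sin(theta), 0.0])
    forward = target - location
    forward /= np.linalg.norm(forward)  # unit viewing direction
    yaw = np.degrees(np.arctan2(forward[1], forward[0]))
    pitch = np.degrees(np.arcsin(forward[2]))
    return location, (yaw, pitch)

loc, pose = estimate_pose(np.array([2.0, 3.0, 1.5]), standoff=4.0, bearing_deg=90.0)
# The location adjusting device would move the camera to `loc`,
# and the pose adjusting device would rotate it to `pose`.
```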
Abstract:
A space coordinate converting server and method thereof are provided. The space coordinate converting server receives a field video recorded with a 3D object from an image capturing device, and generates a point cloud model accordingly. The space coordinate converting server determines key frames of the field video, and maps the point cloud model to key images of the key frames based on rotation and translation information of the image capturing device to generate a characterized 3D coordinate set. The space coordinate converting server determines 2D coordinates of the 3D object in the key images, and selects 3D coordinates from the characterized 3D coordinate set according to the 2D coordinates. The space coordinate converting server determines a space coordinate converting relation according to marked points of the 3D object and the 3D coordinates.
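A minimal Python sketch of the final step: one standard way (a Kabsch/SVD alignment, an assumption here) to derive a space coordinate converting relation from the marked points and the selected 3D coordinates:

```python
import numpy as np

def fit_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding points; returns R (3x3), t (3,)
    such that dst ≈ R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Here `src` would hold the selected 3D coordinates and `dst` the marked points of the 3D object, or vice versa, depending on the desired direction of conversion.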
Abstract:
An image object tracking method and apparatus are provided. The image object tracking method includes the steps of: determining a feature point of a target object in a first frame, determining a prediction point of the feature point in a second frame, calculating an estimated rotation angle of an image capturing device according to a distance between the coordinates of the prediction point and the coordinates of the feature point and a distance between the image capturing device and the target object, calculating a lens rotation angle by which the image capturing device rotated from the time point at which the first frame was captured to the time point at which the second frame was captured according to a piece of inertial measurement information provided by an inertial measurement unit, and determining whether the prediction point corresponds to the feature point by comparing the estimated rotation angle and the lens rotation angle.
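A minimal Python sketch of the comparison step; the pinhole-style angle formula and the pixel-to-metre conversion are assumptions, since the abstract does not give the exact geometry:

```python
import math

def prediction_matches(pred_xy, feat_xy, depth_m, lens_rotation_rad,
                       pixels_per_metre, tol_rad=0.01):
    # Displacement between the prediction point and the feature point,
    # converted from pixels to metres on the target's plane.
    displacement_px = math.hypot(pred_xy[0] - feat_xy[0], pred_xy[1] - feat_xy[1])
    displacement_m = displacement_px / pixels_per_metre
    # Estimated rotation angle of the image capturing device.
    estimated_rad = math.atan2(displacement_m, depth_m)
    # The prediction point corresponds to the feature point when the estimate
    # agrees with the lens rotation angle measured by the inertial unit.
    return abs(estimated_rad - lens_rotation_rad) < tol_rad
```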
Abstract:
A head mounted device suitable for guiding an exhibition is disclosed. The head mounted device includes an image capturing unit, a process module and an information interface. The process module includes a recognition unit, a computing unit and a control unit. The image capturing unit captures an input image in an invisible spectrum. The recognition unit recognizes an invisible code from the input image. The computing unit calculates a relative distance and a relative angle between the head mounted device and an exhibition object. By comparing the relative distance with a threshold distance, the control unit determines whether to trigger the information interface and present an exhibit-object introduction based on the relative distance and the relative angle.
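A minimal Python sketch of the triggering rule; the decoding of the invisible code into a distance and an angle is omitted, and all names are hypothetical:

```python
def maybe_trigger(relative_distance, relative_angle,
                  threshold_distance, present_introduction):
    # Trigger the information interface only when the head mounted device
    # is within the threshold distance of the exhibition object.
    if relative_distance <= threshold_distance:
        present_introduction(relative_distance, relative_angle)
        return True
    return False
```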
Abstract:
An object detection and tracking method and system are provided. The object detection and tracking method includes the following steps: (i) selecting one of a plurality of frames of a video as a current frame, (ii) searching in an object tracker searching area of the current frame to generate a current object tracker, (iii) searching in each auxiliary tracker searching area of the current frame to individually generate a current auxiliary tracker, (iv) when the current object tracker is located at a block different from the blocks where the previously generated object trackers are located, generating a new auxiliary tracker at the central position of the current frame, and (v) repeating the above steps until all the frames have been processed.
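A minimal Python sketch of the per-frame loop (steps i-v); the search functions and the block partition (e.g. a fixed grid) are assumptions:

```python
def track(frames, search_object, search_auxiliary, block_of):
    """frames: numpy arrays of shape (H, W, ...); search_* and block_of are
    hypothetical hooks for the searching areas and the block partition."""
    object_trackers, aux_trackers = [], []
    for frame in frames:                                    # (i) current frame
        current = search_object(frame)                      # (ii) object tracker
        aux_trackers = [search_auxiliary(frame, a)          # (iii) auxiliary
                        for a in aux_trackers]              #       trackers
        seen_blocks = {block_of(t) for t in object_trackers}
        if block_of(current) not in seen_blocks:            # (iv) new block seen
            h, w = frame.shape[:2]
            aux_trackers.append((w // 2, h // 2))           # central position
        object_trackers.append(current)                     # (v) repeat
    return object_trackers, aux_trackers
```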
Abstract:
A visual positioning apparatus, method, and non-transitory computer readable storage medium thereof are provided. The visual positioning apparatus derives an image by sensing a visual code marker in a space and performs the following operations: (a) identifying an identified marker image included in the image, (b) searching out the corner positions of the identified marker image, (c) deciding a marker structure of the identified marker image according to the corner positions, wherein the marker structure includes vertices, (d) selecting a portion of the vertices as first feature points, (e) searching out a second feature point for each first feature point, (f) updating the vertices of the marker structure according to the second feature points, (g) selecting a portion of the updated vertices as third feature points, and (h) calculating the position of the visual positioning apparatus according to the third feature points.
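A minimal Python sketch of step (h), assuming OpenCV's solvePnP and known marker-frame 3D positions for the third feature points; steps (b) through (g) are represented only by the refined 2D points passed in:

```python
import cv2
import numpy as np

def locate_apparatus(third_points_2d, marker_points_3d, camera_matrix, dist_coeffs):
    """Returns the visual positioning apparatus's position in marker coordinates,
    or None if the pose cannot be solved."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, dtype=np.float32),
        np.asarray(third_points_2d, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return (-R.T @ tvec).ravel()  # camera centre expressed in the marker frame
```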
Abstract:
An intelligent device is configured to provide relevant information related to an image. The intelligent device includes a sensor module, an activation module, an image capture module, an image recognition module, and an information providing module. The sensor module includes a magnetometer and a gyroscope. The magnetometer detects a direction of a gesture relative to the intelligent device. The gyroscope detects at least one angular velocity of the gesture. The activation module determines the gesture according to the direction and the at least one angular velocity, and generates an activation signal according to the determined gesture. The image capture module captures the image according to the activation signal. The image recognition module identifies the image to retrieve relevant information related to the image. The information providing module provides the relevant information.
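A minimal Python sketch of the activation decision; the swipe heuristic and thresholds are assumptions, since the abstract does not define the gesture model:

```python
def detect_gesture(direction_deg, angular_velocities, swipe_threshold=2.0):
    # Read a fast rotation about any axis as a swipe (assumed heuristic);
    # the magnetometer direction disambiguates left from right.
    if max(abs(w) for w in angular_velocities) > swipe_threshold:  # rad/s
        return "swipe_right" if 0.0 <= direction_deg < 180.0 else "swipe_left"
    return None

def activation_signal(direction_deg, angular_velocities):
    gesture = detect_gesture(direction_deg, angular_velocities)
    # Emit an activation signal only for a recognized gesture; the image
    # capture module then captures the image in response to the signal.
    return {"capture": True, "gesture": gesture} if gesture else None
```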