Abstract:
A method, a processing device, and a system for information display are proposed. The information display system is installed on a mobile vehicle and includes a light transmitting display, an image capturing device configured to capture a target image of a target object, a positioning device configured to generate position information of the mobile vehicle, and a processing device configured to perform the following operations. First reference position information of the target object is calculated according to the position information of the mobile vehicle. Object recognition processing is performed on the target image. Depending on the recognition result of the object recognition processing, a display position of virtual information of the target object is determined using either the recognition result or the first reference position information. The virtual information is displayed through the light transmitting display according to the display position of the virtual information.
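The abstract describes choosing between the recognition result and a positioning-based fallback, but leaves the selection rule at a high level. A minimal Python sketch of one plausible reading, where recognition is trusted when it yields a bounding box and the vehicle-derived reference position is used otherwise (the function name and the `(x, y, w, h)` box format are illustrative assumptions, not from the patent):

```python
def choose_display_position(recognition_result, fallback_position):
    """Pick the display anchor for the virtual information.

    If object recognition succeeded (a bounding box was found), anchor the
    virtual information at the center of the recognized box; otherwise fall
    back to the reference position derived from the vehicle's positioning data.
    """
    if recognition_result is not None:
        x, y, w, h = recognition_result  # hypothetical bounding-box format
        return (x + w / 2, y + h / 2)
    return fallback_position
```

For example, a box at (10, 20) with size 4 x 6 anchors the information at its center (12, 23), while a failed recognition falls through to the positioning-based estimate.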
Abstract:
A method, a processing device, and a system for information display are provided, and the system includes a light transmissive display. A first information extraction device extracts spatial position information of a user, and a second information extraction device extracts spatial position information of a target object. The processing device performs the following steps. Display position information of virtual information of the target object on the display is determined according to the spatial position information of the user and the spatial position information of the target object. The display position information includes a first display reference position corresponding to a previous time and a second display reference position corresponding to a current time. An actual display position of the virtual information on the display corresponding to the current time is determined according to a distance between the first display reference position and the second display reference position. The virtual information is displayed on the display according to the actual display position.
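The decision "according to a distance between the first display reference position and the second display reference position" suggests jitter suppression: small movements are ignored, large ones are followed. A minimal sketch under that assumption (the threshold value and function name are illustrative):

```python
import math

def stabilize_position(prev_ref, curr_ref, threshold=5.0):
    """Decide the actual display position for the current time.

    If the current reference position has moved less than `threshold` from
    the previous one, keep the previous position to suppress jitter;
    otherwise follow the new reference position.
    """
    if math.dist(prev_ref, curr_ref) < threshold:
        return prev_ref
    return curr_ref
```

A sub-threshold move keeps the virtual information steady on the display; a larger move snaps it to the new reference position.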
Abstract:
A device and a method for calculating a swinging direction of a human face in an obscured human face image are provided. The method includes the following. An obscured human face image including a human face is captured. Non-obscured face detection technology is used to obtain a feature anchor point to be replaced in the obscured human face image, obscured face detection technology is used to obtain a plurality of candidate feature anchor points in the obscured human face image, and the plurality of candidate feature anchor points are used to determine an updated feature anchor point corresponding to the feature anchor point to be replaced. An adjustment operation is performed on a three-dimensional model to obtain an adjusted three-dimensional model. The updated feature anchor point and the adjusted three-dimensional model are used to calculate a swinging direction of the human face.
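The abstract does not specify how the candidate feature anchor points determine the updated anchor; one simple reading is that the candidates detected by the obscured-face detector are aggregated into a single point. A sketch under that assumption (the centroid rule is an illustrative stand-in, not the patent's method):

```python
def update_anchor(candidates):
    """Replace an unreliable feature anchor point with the centroid of the
    candidate anchor points found by the obscured-face detector."""
    n = len(candidates)
    cx = sum(p[0] for p in candidates) / n
    cy = sum(p[1] for p in candidates) / n
    return (cx, cy)
```

The updated anchor, together with the adjusted three-dimensional model, would then feed the pose (swinging-direction) calculation.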
Abstract:
An active interactive navigation system includes a display device, a target object image capturing device, a user image capturing device, and a processing device. The target object image capturing device captures a dynamic object image. The user image capturing device obtains a user image. The processing device recognizes and selects a service user from the user image and captures a facial feature of the service user. If the facial feature matches facial feature points, the processing device detects a line of sight of the service user and accordingly recognizes a target object watched by the service user, generates face position three-dimensional coordinates corresponding to the service user, position three-dimensional coordinates corresponding to the target object, and depth and width information, accordingly calculates a cross-point position where the line of sight passes through the display device, and displays virtual information of the target object at the cross-point position of the display device.
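The cross-point calculation amounts to intersecting the line of sight with the display surface. A minimal sketch assuming the display lies in the plane z = 0 and the line of sight runs from the user's face coordinates to the target object's coordinates (the planar-display assumption and function name are illustrative):

```python
def cross_point(eye, target, display_z=0.0):
    """Intersect the line of sight (eye -> target) with the display plane.

    `eye` and `target` are 3-D points on opposite sides of the display,
    which is assumed to lie in the plane z = display_z.  Returns the (x, y)
    position on the display where the line of sight crosses it.
    """
    ex, ey, ez = eye
    tx, ty, tz = target
    t = (display_z - ez) / (tz - ez)  # parameter where the line meets the plane
    return (ex + t * (tx - ex), ey + t * (ty - ey))
```

For example, an eye at (0, 0, 1) looking at a target at (2, 0, -1) crosses the z = 0 display plane at (1, 0), which is where the virtual information would be drawn.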
Abstract:
A fingerprint recognition method is provided. The method includes obtaining a plurality of fingerprint images by sensing a finger of a user, respectively calculating geometric center points corresponding to the fingerprint images, and calculating positions and offsets of the fingerprint images according to the geometric center points. The method also includes filling signals in the fingerprint images into a part of pixels in a pixel array according to the positions and the offsets of the fingerprint images, and obtaining signals of other pixels in the pixel array by inputting the signals filled in the part of pixels in the pixel array into an artificial intelligence engine. The method further includes generating a candidate fingerprint image and recognizing the user based on the candidate fingerprint image.
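The first two steps, computing a geometric center per image and offsets between images, can be sketched in a few lines. This is a minimal illustration assuming an intensity-weighted center and offsets measured against the first image; the actual definitions in the patent may differ:

```python
def geometric_center(image):
    """Intensity-weighted geometric center of a 2-D fingerprint image,
    given as a list of rows of pixel values."""
    total = wx = wy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            wx += v * x
            wy += v * y
    return (wx / total, wy / total)

def offsets(centers):
    """Offset of each image's center relative to the first image's center."""
    x0, y0 = centers[0]
    return [(x - x0, y - y0) for x, y in centers]
```

The resulting positions and offsets would guide where each image's signals are filled into the pixel array before the AI engine infers the remaining pixels.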
Abstract:
A recognition system and an image augmentation and training method thereof are provided. The image augmentation and training method of a recognition system includes the following steps. A plurality of image frames are obtained, wherein each of the image frames includes an object pattern. A plurality of environmental patterns are obtained. The object pattern is separated from each of the image frames. A plurality of image parameters are set. The image frames, based on the object patterns and the environmental patterns, are augmented according to the image parameters to increase the number of image frames. A recognition model is trained using the recognition frames augmented in this way.
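One straightforward reading of the augmentation step is that every separated object pattern is recombined with every environmental pattern under every parameter setting, multiplying the training set. A minimal sketch of that combinatorial reading (the data representation is illustrative, not from the patent):

```python
from itertools import product

def augment(object_patterns, environments, params):
    """Pair every separated object pattern with every environmental pattern
    under every parameter setting; each combination yields one new frame."""
    return [
        {"object": obj, "background": env, "params": p}
        for obj, env, p in product(object_patterns, environments, params)
    ]
```

With 1 object pattern, 2 environments, and 2 parameter settings, this yields 4 augmented frames, which would then be used to train the recognition model.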
Abstract:
An eyeball locating method, an image processing device, and an image processing system are proposed. The method includes the following steps. A human facial image of a user is obtained, wherein the human facial image includes an unobscured human facial region and an obscured human facial region, and the unobscured human facial region includes an eye region. At least one unobscured human facial feature is detected from the unobscured human facial region, and at least one obscured human facial feature is estimated from the obscured human facial region. Next, an eyeball position is located according to the unobscured human facial feature and the obscured human facial feature.
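The abstract combines detected (unobscured) and estimated (obscured) facial features to locate the eyeball, without specifying the combination rule. A minimal sketch assuming a weighted average that trusts detected features more than estimated ones (the weighting scheme is an illustrative assumption):

```python
def locate_eyeball(unobscured_feats, obscured_feats, w_unobscured=0.8):
    """Estimate an eyeball position from two feature sets, weighting the
    detected (unobscured) features above the estimated (obscured) ones."""
    def mean(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))

    ux, uy = mean(unobscured_feats)
    ox, oy = mean(obscured_feats)
    w = w_unobscured
    return (w * ux + (1 - w) * ox, w * uy + (1 - w) * oy)
```

With one detected feature at (10, 10), one estimated feature at (20, 20), and a 0.8 weight, the located position is (12, 12), closer to the trusted detection.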
Abstract:
An optical film structure includes a first substrate having a first surface and a second surface, an optical component disposed on the first surface of the first substrate and including a micro-lens array, a planarization layer disposed on the optical component, and a light absorbing layer disposed on the planarization layer and including a plurality of light absorbing units each of which has a width W. The micro-lens array includes a plurality of micro-lens units, each of which has a round concentrated area with a projected radius R formed on the first surface. Light incident from the second surface of the first substrate and passing through the micro-lens array is focused on the light absorbing units. The micro-lens array and the planarization layer have a difference in refractive index greater than or equal to 0.2, and W is less than or equal to R/2.
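The abstract states two numeric design constraints: a refractive-index difference of at least 0.2 between the micro-lens array and the planarization layer, and W <= R/2. A trivial check of both constraints (the function name and example values are illustrative):

```python
def design_ok(n_lens, n_planarization, width_w, radius_r):
    """Check the two constraints from the abstract: refractive-index
    difference >= 0.2 and light-absorbing-unit width W <= R/2."""
    return abs(n_lens - n_planarization) >= 0.2 and width_w <= radius_r / 2
```

For instance, a lens index of 1.75 against a planarization index of 1.5 (difference 0.25) with W = 2.0 and R = 5.0 satisfies both constraints, while W = 3.0 would violate W <= R/2.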