Abstract:
An image monitoring apparatus including an image sensing module and a processor is provided. The image sensing module is configured to obtain an invisible light dynamic image of an objective scene. The invisible light dynamic image includes a plurality of frames. The processor is configured to perform operations according to at least one frame of the invisible light dynamic image to determine a status of at least one live body corresponding to the objective scene to be one of a plurality of status types and determine at least one status valid region of the invisible light dynamic image, and set scene information of each pixel of the at least one status valid region to be one of a plurality of scene types according to the status type of the at least one live body. An image monitoring method is also provided.
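The per-pixel labeling step described above can be sketched as follows. The abstract does not enumerate the status types, scene types, or the mapping between them, so the ones used here are purely hypothetical:

```python
import numpy as np

# Hypothetical status->scene mapping; the abstract names neither the
# status types nor the scene types.
SCENE_FOR_STATUS = {
    "moving": "foreground",
    "stationary": "foreground",
    "absent": "background",
}

def label_valid_region(frame_shape, region_mask, status):
    """Set the scene information of every pixel inside the status-valid
    region to the scene type implied by the live body's status; pixels
    outside the region are left unset."""
    labels = np.full(frame_shape, "unset", dtype=object)
    labels[region_mask] = SCENE_FOR_STATUS[status]
    return labels
```

In a real apparatus the region mask and status would come from analyzing frames of the invisible-light dynamic image; here they are supplied directly.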
Abstract:
An electronic device, an iris recognition method and a non-volatile computer-readable medium are provided. A processor in the electronic device obtains first and second iris images, and calculates a plurality of first and second feature mark boxes that are non-uniformly arranged according to the first and second iris images. The processor uses the first feature mark boxes to obtain first and second image features from the first and second iris images respectively, and compares the first and second image features to obtain a first recognition result. The processor uses the second feature mark boxes to obtain third and fourth image features from the second and first iris images respectively, and compares the third and fourth image features to obtain a second recognition result. The processor determines a similarity degree of the first and second iris images according to the first and second recognition results.
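The two-way comparison structure of this method can be sketched as below. The feature extraction and the match score are toy stand-ins (the abstract does not specify either); what the sketch preserves is the symmetry: boxes derived from each image are applied to both images, and the two recognition results are combined:

```python
import numpy as np

def extract(image, boxes):
    """Crop each feature mark box (row/column bounds) out of an iris image."""
    return [image[r0:r1, c0:c1] for (r0, r1, c0, c1) in boxes]

def match_score(feats_a, feats_b):
    """Toy similarity: mean absolute difference, mapped so 1.0 = identical."""
    diffs = [np.abs(a - b).mean() for a, b in zip(feats_a, feats_b)]
    return 1.0 - float(np.mean(diffs))

def similarity(img1, img2, boxes1, boxes2):
    # First recognition result: boxes from image 1 applied to both images.
    r1 = match_score(extract(img1, boxes1), extract(img2, boxes1))
    # Second recognition result: boxes from image 2 applied to both images.
    r2 = match_score(extract(img2, boxes2), extract(img1, boxes2))
    # Combine the two recognition results into one similarity degree
    # (averaging is an assumption; the abstract only says "according to").
    return (r1 + r2) / 2.0
```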
Abstract:
Provided are an image transform method and an image transform network. The method is for the image transform network including an image generator, a transform discriminator and a focus discriminator, and includes: generating a transformed image according to an un-transformed image and focus information by the image generator; computing a transform discrimination value according to the transformed image by the transform discriminator; computing a value of a first generator loss function according to the transform discrimination value and updating the image generator according to the value of the first generator loss function by the image generator; generating a focus discrimination value according to the un-transformed image, the transformed image, and the focus information by the focus discriminator; and computing a value of a second generator loss function according to the focus discrimination value and updating the image generator according to the value of the second generator loss function by the image generator.
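The two-phase generator update can be sketched numerically. Everything here is a minimal stand-in: a one-parameter "generator", sigmoid scores in place of the two discriminators, and finite-difference gradients instead of backpropagation; only the control flow (transform-discriminator loss first, focus-discriminator loss second, each followed by a generator update) mirrors the method:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator(x, focus, w):
    # Toy one-parameter generator: blends the un-transformed image with
    # its focus map (purely illustrative).
    return w * x + (1.0 - w) * focus

def train_step(x, focus, w, lr=0.1, eps=1e-4):
    # Phase 1: first generator loss from the transform discrimination value
    # (the generator wants the "realness" score pushed toward 1).
    def loss1(w_):
        score = sigmoid(generator(x, focus, w_).mean())  # stand-in discriminator
        return -np.log(score)
    w -= lr * (loss1(w + eps) - loss1(w - eps)) / (2 * eps)  # first update

    # Phase 2: second generator loss from the focus discrimination value,
    # evaluated only on the focused region.
    def loss2(w_):
        y = generator(x, focus, w_)
        score = sigmoid(y[focus > 0.5].mean())  # stand-in focus discriminator
        return -np.log(score)
    w -= lr * (loss2(w + eps) - loss2(w - eps)) / (2 * eps)  # second update
    return w
```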
Abstract:
According to an exemplary embodiment, a method for object positioning by using depth images is executed by a hardware processor as follows: converting depth information of each of a plurality of pixels in each of one or more depth images into a real world coordinate; based on the real world coordinate, computing a distance of each pixel to an edge in each of a plurality of directions; assigning a weight to the distance of each pixel to each edge; and based on the weight of the distance of each pixel to each edge and a weight limit, selecting one or more extremity positions of an object.
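The first step, converting per-pixel depth into real-world coordinates, is commonly done with a pinhole back-projection. The sketch below assumes that model and uses illustrative intrinsics (fx, fy and the principal point are not given in the abstract):

```python
import numpy as np

def depth_to_world(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    """Back-project a depth map (meters) to camera-frame XYZ points with a
    pinhole model. fx/fy/cx/cy are illustrative intrinsics; by default the
    principal point is the image center."""
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    # Result: one (X, Y, Z) triple per pixel.
    return np.stack([x, y, depth], axis=-1)
```

The later steps (per-direction edge distances, weighting, and extremity selection under a weight limit) would then operate on these 3-D points.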
Abstract:
A facial recognition system and a physiological information generative method are disclosed. The facial recognition system includes a visible light sensor, a thermal imaging sensor, and a processor. The generative method is executed by the processor, using a real-time object detection algorithm. The generative method includes: receiving one or more current visible images and current thermal images; identifying the nostril area in the current thermal images when a face region in the current visible images cannot be identified by the real-time object detection algorithm in the processor; determining in the processor that respiratory information is abnormal according to the cycles of exhalation and inhalation, as detected through brightness changes in the nostril area; and issuing a notification of the abnormal respiratory information.
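The breathing-cycle detection from nostril brightness can be sketched as below. The detector (rising zero crossings of the mean-removed brightness trace) and the normal-rate thresholds are illustrative assumptions; the abstract specifies neither:

```python
import numpy as np

def breathing_rate(brightness, fps):
    """Estimate breaths per minute from a nostril-brightness trace by
    counting rising zero crossings of the mean-removed signal (each
    exhalation/inhalation cycle produces one crossing). A toy detector."""
    sig = np.asarray(brightness, dtype=float)
    sig = sig - sig.mean()
    crossings = int(np.sum((sig[:-1] < 0) & (sig[1:] >= 0)))
    duration_min = len(sig) / fps / 60.0
    return crossings / duration_min

def is_abnormal(rate_bpm, low=10.0, high=25.0):
    # Illustrative normal adult range; not from the abstract.
    return not (low <= rate_bpm <= high)
```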
Abstract:
A thermal imaging apparatus for measuring a temperature of a target in a monitored area comprises a thermal imager, an optical image capturing device and a computing processing device. The thermal imager is configured to capture a thermal image of the monitored area. The optical image capturing device is configured to capture optical images of the monitored area. The computing processing device is configured to determine one of the optical images as a determined optical image synchronizing with the thermal image according to positions of blocks corresponding to the target in the thermal image and the optical images, perform calculation according to the thermal image and the determined optical image to obtain a measured distance between the target and the thermal imaging apparatus, and perform calibration according to the measured distance and the thermal image to obtain a calibrated temperature value of the target.
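The distance measurement and the distance-based temperature calibration can be sketched as follows. The triangulation treats the offset between the target's block positions in the thermal and optical images as a stereo disparity, and the linear compensation coefficient is a made-up placeholder; the abstract commits to neither formula:

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the target distance from the pixel offset (disparity)
    between the target's block in the thermal image and in the
    synchronized optical image. Standard stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def calibrate_temperature(raw_temp_c, distance_m, k=0.12):
    """Toy distance compensation: the apparent temperature drops with
    distance, so add back k degrees C per meter beyond a 1 m reference.
    The coefficient k and the linear form are illustrative assumptions."""
    return raw_temp_c + k * max(distance_m - 1.0, 0.0)
```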
Abstract:
A controlling system and a controlling method for virtual display are provided. The controlling system for virtual display includes a visual line tracking unit, a space forming unit, a hand information capturing unit, a transforming unit and a controlling unit. The visual line tracking unit is used for tracking a visual line of a user. The space forming unit is used for forming a virtual display space according to the visual line. The hand information capturing unit is used for obtaining a hand location of one hand of the user in a real operation space. The transforming unit is used for transforming the hand location to be a cursor location in the virtual display space. The controlling unit is used for controlling the virtual display according to the cursor location.
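The transforming unit's mapping from the real operation space to the virtual display space can be sketched as a per-axis linear interpolation between the two spaces' bounds. This is one common choice, not the transform the abstract fixes:

```python
def hand_to_cursor(hand, real_min, real_max, virt_min, virt_max):
    """Map a 3-D hand location in the real operation space to a cursor
    location in the virtual display space by per-axis linear
    interpolation between the spaces' bounding extents."""
    return tuple(
        v0 + (h - r0) / (r1 - r0) * (v1 - v0)
        for h, r0, r1, v0, v1 in zip(hand, real_min, real_max, virt_min, virt_max)
    )
```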