Abstract:
An example method includes capturing a target image of a print product printed by a printer. The method also includes aligning the target image with a reference image corresponding to the target image. The method further includes analyzing the reference image and the target image using a machine learning model. The method includes labeling each of a plurality of pixels as having a defect based on the analysis. The label is applied to each individual pixel of the plurality of pixels.
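As an illustration only (not the claimed method), the sketch below shows the align-then-label flow in Python: the captured target image is registered to its reference with ORB features and a homography, and a stand-in pixel_classifier function, here a simple per-pixel difference threshold, takes the place of the trained machine learning model that labels each individual pixel.

    # Illustrative sketch only: align a captured print image to its reference,
    # then label pixels as defective. "pixel_classifier" is a placeholder for
    # the trained machine learning model described in the abstract.
    import cv2
    import numpy as np

    def align_target_to_reference(reference, target):
        """Warp the target image onto the reference using ORB features + homography."""
        orb = cv2.ORB_create(2000)
        kp_r, des_r = orb.detectAndCompute(reference, None)
        kp_t, des_t = orb.detectAndCompute(target, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_r)
        src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return cv2.warpPerspective(target, H, (reference.shape[1], reference.shape[0]))

    def pixel_classifier(reference, aligned):
        """Placeholder for the ML model: here, a simple per-pixel difference threshold."""
        diff = cv2.absdiff(reference, aligned)
        return (diff > 40).astype(np.uint8)   # 1 = defect, 0 = no defect, per pixel

    reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    target = cv2.imread("scan_of_print.png", cv2.IMREAD_GRAYSCALE)
    aligned = align_target_to_reference(reference, target)
    defect_map = pixel_classifier(reference, aligned)   # one label per pixel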
Abstract:
In one example in accordance with the present disclosure, a system is described. The system includes a pose estimator to identify a plurality of anatomical points on a person depicted in an image. A frame former of the system generates a frame for the person by connecting a first set of the plurality of anatomical points to form a skeleton and forming at least a head region and a torso region of the person based on a second set of the plurality of anatomical points. A map generator of the system generates a spatial gradient map projecting outwards from the frame. In the system, the spatial gradient map is based on pixel distance from the frame, and an intensity along the gradient map indicates a pixel's likelihood of forming a part of the person.
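A minimal sketch of the frame and gradient-map idea, assuming the anatomical points have already been produced by some pose estimator; the keypoints, region shapes, and decay constant below are illustrative, not the disclosed implementation.

    # Illustrative sketch: draw a skeleton "frame" from assumed keypoints and
    # build a spatial gradient map whose intensity falls off with pixel
    # distance from that frame.
    import cv2
    import numpy as np

    H, W = 480, 640
    keypoints = {  # hypothetical detected anatomical points
        "head": (320, 80), "neck": (320, 140), "hip": (320, 300),
        "l_shoulder": (260, 150), "r_shoulder": (380, 150),
        "l_knee": (290, 390), "r_knee": (350, 390),
    }
    skeleton_edges = [("head", "neck"), ("neck", "hip"),
                      ("neck", "l_shoulder"), ("neck", "r_shoulder"),
                      ("hip", "l_knee"), ("hip", "r_knee")]

    frame = np.zeros((H, W), np.uint8)
    for a, b in skeleton_edges:                           # connect anatomical points
        cv2.line(frame, keypoints[a], keypoints[b], 255, thickness=3)
    cv2.circle(frame, keypoints["head"], 30, 255, -1)      # crude head region
    cv2.rectangle(frame, (260, 140), (380, 300), 255, -1)  # crude torso region

    # Distance of every pixel from the frame, turned into a gradient that
    # decays outwards: higher intensity ~ more likely the pixel is the person.
    dist = cv2.distanceTransform(cv2.bitwise_not(frame), cv2.DIST_L2, 5)
    gradient_map = np.exp(-dist / 25.0)                    # values in (0, 1]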
Abstract:
An example system includes a landmark engine to detect a facial landmark in an image of a face. The system includes a comparison engine to determine a difference between the facial landmark in the image and a facial landmark of a neutral face. The system also includes an action engine to determine whether a facial action unit occurred based on whether the difference satisfies a condition.
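A minimal sketch of the comparison and action logic, assuming facial landmarks have already been detected for both the current image and a neutral reference face; the landmark indices, displacement values, and threshold are hypothetical.

    # Illustrative sketch: compare detected landmarks against a neutral face
    # and decide whether a facial action unit occurred.
    import numpy as np

    def action_unit_occurred(landmarks, neutral_landmarks, indices, threshold):
        """Return True if the mean displacement of the selected landmarks,
        relative to the neutral face, satisfies the condition."""
        diff = landmarks[indices] - neutral_landmarks[indices]
        displacement = np.linalg.norm(diff, axis=1).mean()
        return displacement > threshold

    # Hypothetical 68-point landmark sets; in practice these come from a detector.
    rng = np.random.default_rng(0)
    neutral_lmk = rng.uniform(0, 200, size=(68, 2))
    current_lmk = neutral_lmk.copy()
    current_lmk[[21, 22]] += [0.0, -5.0]        # inner brows moved up 5 pixels

    BROW_IDX = [21, 22]                         # hypothetical inner-brow indices
    occurred = action_unit_occurred(current_lmk, neutral_lmk, BROW_IDX, threshold=3.5)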
Abstract:
One example of a video monitoring system includes a frame acquisition subsystem, a stage gate motion detection subsystem, a person detection subsystem, a face recognition subsystem, and an alert emission subsystem. The frame acquisition subsystem extracts frames from an input video. The stage gate motion detection subsystem separates background motion from foreground motion within frames. The person detection subsystem detects people, including faces and bodies, within the foreground motion. The face recognition subsystem matches detected faces to previously registered users. The alert emission subsystem provides alerts based on events detected by the stage gate motion detection subsystem, the person detection subsystem, and the face recognition subsystem.
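For illustration, the sketch below wires together common OpenCV stand-ins for each subsystem (background subtraction for motion, a HOG person detector, a Haar cascade for faces) with a hypothetical match_face function in place of recognition against registered users; it is not the disclosed stage gate design.

    import cv2

    def match_face(face_img):
        """Hypothetical face recognition against previously registered users."""
        return None   # e.g. return a user id, or None if unknown

    bg = cv2.createBackgroundSubtractorMOG2()
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture("input.mp4")              # frame acquisition
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        motion_mask = bg.apply(frame)                 # separate foreground motion
        if cv2.countNonZero(motion_mask) < 500:       # gate: skip frames without motion
            continue
        people, _ = hog.detectMultiScale(frame)       # person (body) detection
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray)   # face detection
        for (x, y, w, h) in faces:
            user = match_face(frame[y:y + h, x:x + w])
            if user is None and len(people) > 0:
                print("ALERT: unrecognized person detected")   # alert emission
    cap.release()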
Abstract:
In example implementations, a method is provided. The method may be executed by a processor. The method includes receiving an image. A person from a plurality of people within the image is identified. The person is segmented from the image. A background of the image is replaced with a clean background image to remove all of the plurality of people from the image except the identified person.
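A minimal sketch of the replacement step, assuming a binary mask of the identified person is already available from some segmentation model and that a clean background image of the same scene exists; the file names are placeholders.

    import cv2
    import numpy as np

    image = cv2.imread("crowded_scene.png")           # original image with many people
    clean_bg = cv2.imread("clean_background.png")     # same scene with no people
    person_mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)  # 0/255 mask

    mask3 = cv2.merge([person_mask] * 3) > 0
    # Keep the identified person's pixels and take everything else from the
    # clean background, which removes the other people from the image.
    result = np.where(mask3, image, clean_bg)
    cv2.imwrite("result.png", result)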
Abstract:
Examples disclosed herein relate to creating an image collage in a semantic theme based shape. For example, a processor may determine a semantic theme associated with an image collection, select a shape associated with the semantic theme, and create a collage of at least a subset of the image collection in the selected shape. The processor may output the created collage.
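A minimal sketch of the theme-to-shape-to-collage flow, assuming per-image tags are available as a stand-in for semantic analysis; the tag-to-shape table and the circle mask are illustrative placeholders.

    import cv2
    import numpy as np

    THEME_SHAPES = {"wedding": "heart", "beach": "sun", "birthday": "star"}

    def pick_theme(tags_per_image):
        """Most frequent tag across the collection stands in for the semantic theme."""
        flat = [t for tags in tags_per_image for t in tags]
        return max(set(flat), key=flat.count)

    def shape_mask(name, size=600):
        """Very rough shape mask; a circle stands in for any named shape here."""
        mask = np.zeros((size, size), np.uint8)
        cv2.circle(mask, (size // 2, size // 2), size // 2 - 10, 255, -1)
        return mask

    def make_collage(images, mask, tile=100):
        """Tile thumbnails of the image subset only inside the selected shape."""
        canvas = np.zeros((*mask.shape, 3), np.uint8)
        idx = 0
        for y in range(0, mask.shape[0], tile):
            for x in range(0, mask.shape[1], tile):
                if mask[y:y + tile, x:x + tile].mean() > 128:   # inside the shape
                    thumb = cv2.resize(images[idx % len(images)], (tile, tile))
                    canvas[y:y + tile, x:x + tile] = thumb
                    idx += 1
        canvas[mask == 0] = 0                                   # crop to the shape
        return canvas

    # usage (illustrative):
    # theme = pick_theme(tags); collage = make_collage(imgs, shape_mask(THEME_SHAPES.get(theme, "circle")))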
Abstract:
An example device is described for facilitating polygon localization. In various aspects, the device can comprise a processor. In various instances, the device can comprise a non-transitory machine-readable memory that can store machine-readable instructions. In various cases, the processor can execute the machine-readable instructions, which can cause the processor to localize a polygon depicted in an image, based on execution of a deep learning pipeline. In various aspects, the deep learning pipeline can comprise a circular-softmax block.
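The abstract does not define the circular-softmax block, so the sketch below is one possible reading offered as an assumption: a softmax over angular bins collapsed to an angle by a circular (sine/cosine-weighted) mean, so that bins near 0 and 2π are treated as neighbours.

    import numpy as np

    def circular_softmax(logits):
        """Softmax over K angular bins, then a circular expectation of the angle."""
        z = logits - logits.max()
        p = np.exp(z) / np.exp(z).sum()                 # standard softmax
        k = p.shape[0]
        angles = 2 * np.pi * np.arange(k) / k           # bin centres on the circle
        # Circular mean: average unit vectors, then recover the angle.
        angle = np.arctan2((p * np.sin(angles)).sum(), (p * np.cos(angles)).sum())
        return p, float(angle % (2 * np.pi))

    probs, corner_angle = circular_softmax(np.random.randn(36))   # 36 bins of 10 degrees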
Abstract:
An example system includes a first camera to capture a first image of a first facial body part and a second camera to capture a second image of a second facial body part. The example system further includes a transformation engine to transform respective scales of the first image and the second image to a scale of full facial images. The example system further includes a local location engine to identify first facial landmarks and second facial landmarks in respective transformed versions of the first image and the second image, the first facial landmarks and the second facial landmarks being used to determine that an action has occurred.
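A minimal sketch of the scale transformation step, assuming each camera's scale factor relative to full facial images is known (e.g. from calibration) and using a hypothetical detect_landmarks function in place of the landmark model; file names and factors are placeholders.

    import cv2
    import numpy as np

    def detect_landmarks(img):
        """Hypothetical landmark detector; returns (N, 2) landmark coordinates."""
        return np.zeros((0, 2), np.float32)

    def to_full_face_scale(img, scale_factor):
        """Resample a per-part image (e.g. an eye or mouth camera) to full-face scale."""
        return cv2.resize(img, None, fx=scale_factor, fy=scale_factor,
                          interpolation=cv2.INTER_LINEAR)

    eye_img = cv2.imread("eye_camera.png")       # first facial body part
    mouth_img = cv2.imread("mouth_camera.png")   # second facial body part

    first_landmarks = detect_landmarks(to_full_face_scale(eye_img, 0.25))
    second_landmarks = detect_landmarks(to_full_face_scale(mouth_img, 0.25))
    # The two landmark sets can then be compared against a neutral face to
    # decide whether a facial action has occurred.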