Abstract:
A liveness test method and apparatus are disclosed. A processor-implemented liveness test method includes extracting an interest region of an object from a portion of the object in an input image; performing a liveness test on the object using a neural network model-based liveness test model, the liveness test model receiving image information of the interest region as a first input and determining liveness based at least on texture information extracted from the interest region; and indicating a result of the liveness test.
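The abstract's pipeline, cropping an interest region and deciding liveness from its texture, can be sketched minimally in Python. This is not the patented neural-network model: `texture_variance` is a toy stand-in for learned texture features, and all names, the center/size crop parameters, and the grayscale list-of-rows image format are illustrative assumptions.

```python
def extract_interest_region(image, center, size):
    """Crop a square interest region from a 2D grayscale image given
    as a list of rows. Hypothetical helper: the abstract's model
    receives such a crop as its first input."""
    cy, cx = center
    half = size // 2
    rows = image[max(0, cy - half):cy + half]
    return [row[max(0, cx - half):cx + half] for row in rows]

def texture_variance(region):
    """Pixel-intensity variance: a toy stand-in for the texture
    information a neural network would extract from the region."""
    vals = [p for row in region for p in row]
    mean = sum(vals) / len(vals)
    return sum((p - mean) ** 2 for p in vals) / len(vals)

# A flat, printed-photo-like region has zero variance, while a
# textured, live-skin-like region does not.
flat = [[128] * 4 for _ in range(4)]
textured = [[(r * 4 + c) * 10 for c in range(4)] for r in range(4)]
roi = extract_interest_region(textured, (2, 2), 4)
```

A real implementation would pass the crop to a trained classifier rather than thresholding a hand-made statistic.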
Abstract:
At least one example embodiment discloses a facial recognition apparatus configured to obtain a two-dimensional (2D) input image including a face region of a user, detect a facial feature point from the 2D input image, adjust a pose of a stored three-dimensional (3D) facial model based on the detected facial feature point, generate a 2D projection image from the adjusted 3D facial model, perform facial recognition based on the face region in the 2D input image and a face region in the 2D projection image, and output a result of the facial recognition.
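The pose-adjustment and 2D-projection steps above can be illustrated with a deliberately simplified sketch: a rotation of the 3D model about the vertical axis followed by an orthographic projection. The single-axis pose, the orthographic camera, and all names are assumptions for illustration; the patent's model would use a full 3D pose and camera model.

```python
import math

def project_after_pose(points3d, yaw):
    """Rotate 3D facial-model points about the vertical (y) axis by
    `yaw` radians (a simplified pose adjustment), then project
    orthographically onto the image plane by dropping depth."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x + s * z, y) for x, y, z in points3d]

# A nose-tip point straight ahead of the camera (depth z = 1) projects
# to x = 0 at yaw 0; turning the model 90 degrees moves its depth
# into the x axis.
nose = [(0.0, 0.0, 1.0)]
```

The projected points would then be rendered into the 2D projection image that is compared, region by region, with the input face region.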
Abstract:
A medical image registration method includes: performing a first registration by registering first medical images, including a registration image and a display image, the display image being an image to be displayed; performing a second registration by registering a second medical image, having a different modality than a modality of the first medical images, with the registration image; and extracting, from the display image, a cross-section of one of the first medical images corresponding to a cross-section of the second medical image, according to the first registration and the second registration.
Abstract:
A method for registering medical images of different types, includes: receiving a selection of at least one point in a first medical image that is acquired in non-real time; extracting, from the first medical image, a first anatomic object which includes the selected point and a second anatomic object which is adjacent to the selected point; extracting, from a second medical image that is acquired in real time, a third anatomic object which corresponds to the first anatomic object and a fourth anatomic object which corresponds to the second anatomic object; and registering the first medical image and the second medical image based on a geometric relation between the first, second, third, and fourth anatomic objects.
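A very small sketch of the final registration step, reduced here to estimating a translation that aligns the anatomic objects extracted from the two images by matching centroids. This is a stand-in only: the abstract's geometric-relation-based registration would also recover rotation and scale, and the 2D point-list representation of anatomic objects is an assumption.

```python
def centroid(points):
    """Mean position of a list of 2D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def translation_registering(src_objects, dst_objects):
    """Estimate the translation mapping the extracted anatomic objects
    of one image (each a list of 2D points) onto the corresponding
    objects of the other, by matching the centroid of all points."""
    src = [p for obj in src_objects for p in obj]
    dst = [p for obj in dst_objects for p in obj]
    (sx, sy), (dx, dy) = centroid(src), centroid(dst)
    return (dx - sx, dy - sy)
```

In practice the non-real-time (e.g. CT/MR) and real-time (e.g. ultrasound) images would be resampled through the estimated transform so corresponding cross-sections can be displayed together.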
Abstract:
An electronic device is provided. The electronic device includes a memory storing artificial intelligence models and one or more programs including instructions, and one or more processors, wherein the instructions, when executed by the one or more processors, cause the electronic device to: load the artificial intelligence models stored in the memory and execute a runtime engine of a framework; identify whether an operation function is supported on a target processor; identify whether a first node for executing an inference on the artificial intelligence models operates without errors, based on the operation function being supported on the target processor; repeat the identification, adding one more node at a time, up to a last node while each node operates without errors; form a first group by creating a partition from the first node to an identified (N−1)th node based on identifying that an error occurred on an Nth node; and form a second group by creating a partition for the Nth node on which the error occurred.
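The node-by-node partition growth described above can be sketched as a greedy loop: keep extending the current group while nodes run without errors, and when a node fails, close the group at the previous node and give the failing node its own partition. `runs_ok` stands in for executing a node on the target processor and checking for errors; all names are hypothetical.

```python
def partition_nodes(nodes, runs_ok):
    """Greedily grow a partition of consecutive nodes that execute
    without errors. When node N fails, close the current group at
    node N-1 and place the failing node in its own group, then
    continue the same process for any remaining nodes."""
    groups, current = [], []
    for node in nodes:
        if runs_ok(node):
            current.append(node)        # keep growing the group
        else:
            if current:
                groups.append(current)  # first group: nodes up to N-1
                current = []
            groups.append([node])       # second group: the failing node N
    if current:
        groups.append(current)
    return groups

# Hypothetical example: nodes 0-2 run on the target processor,
# node 3 raises an error, node 4 runs again.
supported = {0: True, 1: True, 2: True, 3: False, 4: True}
partitions = partition_nodes([0, 1, 2, 3, 4], supported.__getitem__)
```

Runtimes that delegate model subgraphs to accelerators use this kind of partitioning so unsupported nodes can fall back to another processor.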
Abstract:
According to various embodiments, an electronic device may comprise a display module, a processor, and a memory operatively connected to the display module and the processor. The memory may store instructions that, when executed, cause the processor to: in response to reception of at least one piece of content, identify whether an application corresponding to the at least one piece of content is an information-sharing application; extract schedule-related information based on the received at least one piece of content in response to identifying that the application is the information-sharing application; generate at least one piece of recommendation information based on the extracted schedule-related information in response to execution of a schedule-related application; and display the generated at least one piece of recommendation information through the display module. Various other embodiments are possible.
Abstract:
An electronic device is provided. The electronic device includes a first display disposed on a first surface of the electronic device, a second display disposed on a second surface of the electronic device and having at least a portion thereof unviewable to a user according to a folding state of the electronic device, a memory configured to store instructions, and a processor electrically connected to the first display, the second display, and the memory. The processor is configured to execute the instructions to: detect a change in the folding state of the electronic device while displaying a first image on one of the first display or the second display; when the change in the folding state is detected, generate a second image to be displayed on the other of the first display or the second display; while generating the second image, store a snapshot image of the first image in the memory and display the snapshot image on the other of the first display or the second display; and when the second image is generated, display the second image on the other of the first display or the second display instead of the snapshot image.
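The display handover above follows a simple sequence that can be modeled as a generator: show a stored snapshot of the first image on the other display while the second image is being generated, then replace it. This is a toy timeline model; `render_second` and the tuple frame format are illustrative assumptions, not the device's actual rendering API.

```python
def handover_frames(first_image, render_second):
    """Yield what the 'other' display shows after a fold-state change:
    first a snapshot of the first image (stored in memory and shown
    while the second image is generated), then the second image."""
    snapshot = ("snapshot", first_image)          # stored in memory
    yield snapshot                                # shown during generation
    yield ("image", render_second(first_image))   # replaces the snapshot
```

Showing the cached snapshot hides the rendering latency, so the user never sees a blank display during the fold transition.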
Abstract:
A training method of training an illumination compensation model includes extracting, from a training image, an albedo image of a face area, a surface normal image of the face area, and an illumination feature, the extracting being based on an illumination compensation model; generating an illumination restoration image based on the albedo image, the surface normal image, and the illumination feature; and training the illumination compensation model based on the training image and the illumination restoration image.
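The training objective implied above, comparing the training image with the illumination-restoration image, can be sketched as a reconstruction loss. This is a Lambertian simplification under stated assumptions: `shading` stands in for the surface-normal image combined with the illumination feature, images are flat lists of pixel intensities, and mean squared error is an assumed choice of loss.

```python
def restoration_loss(training, albedo, shading):
    """Mean squared error between a training image and its
    illumination-restoration image, rebuilt here pixel-wise as
    albedo * shading. A perfect decomposition yields zero loss."""
    restored = [a * s for a, s in zip(albedo, shading)]
    return sum((t - r) ** 2 for t, r in zip(training, restored)) / len(training)
```

During training, this loss would be backpropagated through the illumination compensation model so its albedo, surface-normal, and illumination estimates explain the observed image.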
Abstract:
Face recognition of a face, to determine whether the face correlates with an enrolled face, may include generating a personalized three-dimensional (3D) face model based on a two-dimensional (2D) input image of the face, acquiring 3D shape information and a normalized 2D input image of the face based on the personalized 3D face model, generating feature information based on the 3D shape information and pixel color values of the normalized 2D input image, and comparing the feature information with feature information associated with the enrolled face. The feature information may include first and second feature information generated based on applying first and second deep neural network models to the pixel color values of the normalized 2D input image and the 3D shape information, respectively. The personalized 3D face model may be generated based on transforming a generic 3D face model based on landmarks detected in the 2D input image.
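The final comparison step, combining the two networks' outputs and matching against the enrolled face, can be sketched as follows. Fusion by concatenation, cosine similarity, and the 0.9 threshold are illustrative assumptions; the abstract specifies only that first and second feature information are generated and compared with the enrolled feature information.

```python
def cosine_similarity(a, b):
    """Cosine of the angle between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def matches_enrolled(color_feat, shape_feat, enrolled, threshold=0.9):
    """Concatenate the first-network (pixel color) and second-network
    (3D shape) feature vectors and compare against the enrolled
    feature by cosine similarity."""
    probe = color_feat + shape_feat
    return cosine_similarity(probe, enrolled) >= threshold
```

A higher threshold trades false accepts for false rejects; deployed systems tune it on a validation set rather than fixing it a priori.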