Abstract:
An automated classification apparatus includes a 3D (three-dimensional) Inception-ResNet block structure, a 3D global average pooling structure, and a fully connected layer. The 3D Inception-ResNet block structure includes a 3D Inception-ResNet structure configured to receive a 3D medical image of a patient's shoulder and extract features from the 3D medical image, and a 3D Inception-Downsampling structure configured to downsample information of a feature map including the features. The 3D global average pooling structure is configured to perform average pooling on an output of the 3D Inception-ResNet block structure. The fully connected layer is disposed after the 3D global average pooling structure. The automated classification apparatus is configured to automatically classify the 3D medical image into a plurality of categories.
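The tail of the pipeline above (3D global average pooling followed by a fully connected layer over a plurality of categories) can be sketched as follows. The channel count, volume size, and number of categories are illustrative assumptions, and the Inception-ResNet feature extractor itself is stubbed out by a random feature map:

```python
import numpy as np

def global_average_pool_3d(feature_map):
    """Average each channel of a (C, D, H, W) 3D feature map to a scalar."""
    return feature_map.mean(axis=(1, 2, 3))

def classify(feature_map, weights, bias):
    """Fully connected layer over pooled features; softmax over categories."""
    pooled = global_average_pool_3d(feature_map)   # shape (C,)
    logits = weights @ pooled + bias               # shape (num_categories,)
    exp = np.exp(logits - logits.max())            # numerically stable softmax
    return exp / exp.sum()                         # category probabilities

# Hypothetical sizes: 32 channels, a 4x4x4 pooled volume, 3 categories
rng = np.random.default_rng(0)
fmap = rng.standard_normal((32, 4, 4, 4))
W, b = rng.standard_normal((3, 32)), np.zeros(3)
probs = classify(fmap, W, b)
```

Because the pooling collapses each channel to one number, the fully connected layer's input size depends only on the channel count, not on the input volume's spatial dimensions.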
Abstract:
Disclosed herein is an image preprocessing and analysis apparatus using machine learning-based artificial intelligence. The image preprocessing apparatus includes a computing system, and the computing system includes: a processor; a communication interface configured to receive an input image; and an artificial neural network configured to generate first and second preprocessing conditions through inference on the input image. The processor includes a first preprocessing module configured to generate a first preprocessed image and a second preprocessing module configured to generate a second preprocessed image. The processor is configured to control the first preprocessing module, the second preprocessing module, the artificial neural network, and the communication interface so that the first preprocessed image and the second preprocessed image are transferred to an image analysis module configured to perform image analysis on the input image based on the first preprocessed image and the second preprocessed image.
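The two-branch preprocessing flow can be sketched as follows. The neural network is replaced by a stub that derives conditions from simple image statistics, and the two concrete operations (contrast normalization and gamma correction) are illustrative stand-ins, since the disclosure does not fix them:

```python
import numpy as np

def infer_conditions(image):
    """Stand-in for the artificial neural network: derive two preprocessing
    conditions from image statistics (the network architecture is unspecified)."""
    return {"gain": 1.0 / (image.std() + 1e-8)}, {"gamma": 0.5}

def preprocess_first(image, cond):
    """First preprocessing module: zero-mean contrast normalization."""
    return (image - image.mean()) * cond["gain"]

def preprocess_second(image, cond):
    """Second preprocessing module: gamma correction on a [0, 1]-scaled image."""
    scaled = (image - image.min()) / (image.max() - image.min() + 1e-8)
    return scaled ** cond["gamma"]

def run_pipeline(image):
    """Controller role: produce both preprocessed images for the analysis module."""
    cond1, cond2 = infer_conditions(image)
    return preprocess_first(image, cond1), preprocess_second(image, cond2)
```

The point of the design is that each module receives its condition from the inference step rather than from fixed constants, so the same modules adapt per input image.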
Abstract:
A biosignal-based avatar control system according to an embodiment of the present disclosure includes an avatar generating unit that generates a user's avatar in a virtual reality environment, a biosignal measuring unit that measures the user's biosignal using a sensor, a command determining unit that determines the user's command based on the measured biosignal, an avatar control unit that controls the avatar to perform the command, an output unit that outputs an image of the avatar in real time, and a protocol generating unit that generates a protocol providing predetermined tasks and determines whether the avatar has performed the predetermined tasks. According to an embodiment of the present disclosure, it is possible to provide feedback in real time by understanding the user's intention through analysis of biosignals and controlling the user's avatar in a virtual reality environment, thereby improving the user's brain function and motor function.
Abstract:
Disclosed is an apparatus for motor imagery training, which includes a measuring unit that measures a brain signal while a user performs motor imagery training, a preprocessing unit that preprocesses the brain signal, a feature extraction unit that selects a time period including information related to motor imagery from the preprocessed brain signal and calculates feature data corresponding to the brain signal of the selected time period, and a classification unit that classifies the brain signal into one of a plurality of classes based on the feature data. The motor imagery training is any one of a first training in which the user imagines moving a body part and a second training in which the user imagines feeling a somatosensory stimulus from a tangible object using the body part.
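The window-selection, feature-extraction, and classification stages can be sketched as follows. The log-variance feature and the linear two-class rule are common illustrative choices, not the specific feature data or classifier defined in the disclosure:

```python
import numpy as np

def select_window(signal, fs, start_s, end_s):
    """Keep only the time period carrying motor-imagery-related information.
    `signal` is (channels, samples) at sampling rate `fs` Hz."""
    return signal[:, int(start_s * fs):int(end_s * fs)]

def extract_features(window):
    """Log-variance per channel, a common stand-in for band-power features."""
    return np.log(window.var(axis=1) + 1e-12)

def classify(features, w, b):
    """Minimal linear two-class rule: class 1 if w.x + b > 0, else class 0."""
    return int(float(w @ features + b) > 0)

# Hypothetical 2-channel recording with an imagery-related power increase
# on channel 0 inside the 2-4 s window
fs = 100
rng = np.random.default_rng(1)
sig = rng.standard_normal((2, 5 * fs))
sig[0, 2 * fs:4 * fs] *= 5.0
feats = extract_features(select_window(sig, fs, 2.0, 4.0))
label = classify(feats, w=np.array([1.0, -1.0]), b=0.0)
```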
Abstract:
Disclosed are an apparatus for selectively collecting electroencephalogram (EEG) data through motion recognition, and a method using the apparatus. The apparatus includes a motion recognition unit that recognizes a motion of a user by analyzing an image taken through a camera, an EEG measurement unit installed on the user's head to measure the user's EEG, and a control unit that controls the EEG measurement unit to measure the user's EEG during the recognized motion and generates an EEG data set based on the measured EEG.
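The selective-collection logic reduces to gating the EEG stream by the recognizer's output, as in this minimal sketch (per-timestep lists are an assumption; the camera-based recognizer itself is outside the sketch):

```python
def collect_gated_eeg(eeg_samples, motion_flags):
    """Keep only the EEG samples recorded while the camera-based recognizer
    reports that the user is performing the target motion."""
    return [sample for sample, moving in zip(eeg_samples, motion_flags) if moving]

# Hypothetical stream: 6 timesteps, motion recognized in the middle two
samples = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
flags = [False, False, True, True, False, False]
dataset = collect_gated_eeg(samples, flags)
```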
Abstract:
A method for recognizing blink and eye movement based on electroencephalogram (EEG) and a system thereof are disclosed. The method includes measuring a subject's EEG-based electrooculogram (EOG) signal (hereinafter referred to as "EOG signal") using three electrodes connected to an EEG head cap. The method further includes performing potential blink detection and zero crossing detection using the EOG signal, and generating a plurality of parameters used for blink and eye movement classification using the EOG signal depending on the presence or absence of a potential blink and a zero crossing. In addition, the method further includes classifying the blink and up/down/left/right eye movements using the plurality of parameters.
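The detection primitives can be sketched as below. The thresholds, channel layout, and the toy decision rule are illustrative assumptions, not the parameters defined in the disclosure; zero crossing detection is shown as a separate primitive that helps separate eye movements (which swing through zero) from one-sided blink deflections:

```python
import numpy as np

def zero_crossings(signal):
    """Indices where the signal changes sign."""
    return np.where(np.diff(np.sign(signal)) != 0)[0]

def classify_event(vertical, horizontal, blink_threshold=100.0):
    """Toy classification rule over vertical/horizontal EOG traces (microvolts)."""
    if vertical.max() > blink_threshold:           # potential blink detected
        return "blink"
    if np.abs(horizontal).max() >= np.abs(vertical).max():
        peak = horizontal[np.argmax(np.abs(horizontal))]
        return "right" if peak > 0 else "left"
    peak = vertical[np.argmax(np.abs(vertical))]
    return "up" if peak > 0 else "down"

# Synthetic events
blink = np.array([0.0, 150.0, 200.0, 150.0, 0.0])
flat = np.zeros(5)
look_left = np.array([0.0, -40.0, -60.0, -40.0, 0.0])
```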
Abstract:
Provided are a method and apparatus for detecting an event-related potential (ERP), the method including: detecting an R-peak signal by detecting an electrocardiogram (ECG) signal of a subject via an ECG sensor; inducing an evoked potential in the subject by presenting an ERP stimulus to the subject at a certain period on the basis of the R-peak signal; and detecting an electroencephalogram (EEG) signal of the subject exposed to the ERP stimulus by using an EEG sensor, and extracting an ERP signal from the EEG signal. Intermixture of the subject's heartbeat-evoked potential (HEP) with the ERP signal is inhibited by withholding the ERP stimulus during the certain period that follows a latency of a certain time from the point in time at which the R-peak occurs.
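The stimulus-timing idea can be sketched as a scheduler that drops any stimulus falling inside the HEP-sensitive window after each R-peak. The period, latency, and window length below are illustrative values, not the "certain period" or "certain time" of the disclosure:

```python
def schedule_stimuli(r_peaks, period, latency, blanking, t_end):
    """Present a stimulus every `period` seconds, but withhold any stimulus
    that would fall inside [r + latency, r + latency + blanking) after an
    R-peak at time r (all times in seconds)."""
    stimuli = []
    t = 0.0
    while t < t_end:
        in_hep_window = any(r + latency <= t < r + latency + blanking
                            for r in r_peaks)
        if not in_hep_window:
            stimuli.append(round(t, 6))
        t += period
    return stimuli

# Two R-peaks one second apart; the 0.5 s stimulus lands in the first
# HEP window [0.3, 0.6) and is withheld
stimuli = schedule_stimuli(r_peaks=[0.0, 1.0], period=0.25,
                           latency=0.3, blanking=0.3, t_end=1.0)
```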
Abstract:
The present disclosure relates to technology that controls a robot based on a brain-computer interface. A robot control method acquires a first biosignal indicating an intention to start the operation of the robot from a user in order to operate the robot, provides the user with visual stimulation at differently set signal cycles corresponding to a plurality of objects on which the robot executes motions, acquires a second biosignal evoked by the visual stimulation from the user to identify an object selected by the user, and acquires a third biosignal corresponding to a motion for the identified object from the user to induce the robot to execute the corresponding motion.
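Identifying the selected object from the second biosignal can be sketched as matching the dominant frequency of the evoked response to each object's flicker cycle, in the style of steady-state visually evoked potentials. The object names, flicker frequencies, and spectral-peak rule are illustrative assumptions:

```python
import numpy as np

def identify_object(eeg, fs, object_freqs):
    """Pick the object whose assigned flicker frequency (Hz) is closest to
    the dominant frequency of the evoked signal."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    spectrum[0] = 0.0                              # ignore the DC component
    peak = freqs[np.argmax(spectrum)]
    return min(object_freqs, key=lambda obj: abs(object_freqs[obj] - peak))

# One second of a clean response to a hypothetical 10 Hz flicker
fs = 250
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10.0 * t)
object_freqs = {"cup": 8.0, "book": 10.0, "door": 12.0}
selected = identify_object(eeg, fs, object_freqs)
```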
Abstract:
A method for registering a tooth image with a tooth structure according to an embodiment includes a first registering step of registering a tooth image model, obtained from a medical image taken while an object bites a bite including a marker, with a bite scanning model obtained by scanning the bite; a second registering step of registering the bite scanning model with a tooth scanning model obtained by scanning a tooth shape of the object; and a third registering step of registering the tooth image model with the tooth scanning model based on the results of the first registering step and the second registering step. As a result, a model including an accurate shape of a tooth part that is difficult to obtain from a medical imaging apparatus can be easily obtained.
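The third registering step amounts to composing the first two registrations. A minimal sketch with 4x4 homogeneous matrices follows; pure translations stand in for the registration results (real registrations would also include rotation), and all values are illustrative:

```python
import numpy as np

def translation(dx, dy, dz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

def compose_registrations(t_image_to_bite, t_bite_to_tooth):
    """Third registering step: chain the first two registrations so the
    tooth image model maps directly onto the tooth scanning model."""
    return t_bite_to_tooth @ t_image_to_bite

t1 = translation(1.0, 0.0, 0.0)   # image model -> bite scan (assumed result)
t2 = translation(0.0, 2.0, 0.0)   # bite scan -> tooth scan (assumed result)
t3 = compose_registrations(t1, t2)
point = t3 @ np.array([0.0, 0.0, 0.0, 1.0])   # origin of the image model
```

Because the bite scanning model appears in both of the first two steps, it serves as the common intermediate frame that makes the direct image-to-tooth-scan mapping well defined.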
Abstract:
Exemplary embodiments relate to a system for customized addiction therapy based on a biosignal, and a method thereof. The system includes a sensor configured to measure a biosignal from a user, a state determining unit configured to determine a user state based on the measured biosignal, a cognitive behavioral therapy image determining unit configured to determine a cognitive behavioral therapy image based on the determined user state, and an image providing unit configured to provide the determined cognitive behavioral therapy image to the user.