Abstract:
Disclosed is an apparatus for measuring an electrocardiogram (ECG) using wireless communication, including a first measuring device and a second measuring device connected to each other using wireless communication, wherein the first measuring device includes a first electrode configured to measure a first signal generated by a heartbeat, and a slave signal generation unit configured to generate a slave signal based on the first signal and a wireless virtual ground signal received from the second measuring device, and the second measuring device includes a second electrode configured to measure a second signal generated by the heartbeat, a ground electrode configured to measure a ground signal, a wireless virtual ground unit configured to generate the wireless virtual ground signal based on the ground signal, and an ECG measuring unit configured to measure the ECG based on the slave signal, the second signal, and the wireless virtual ground signal.
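The signal flow described above can be sketched numerically. This is an illustrative stand-in, not the patented circuit: the function names and the synthetic signals are assumptions, and the point shown is only that referencing both electrodes to a shared wireless virtual ground lets common-mode interference cancel in the differential ECG lead.

```python
import numpy as np

def make_slave_signal(first_signal, virtual_ground):
    # Slave device: references its electrode signal to the wirelessly
    # received virtual ground instead of a local ground.
    return first_signal - virtual_ground

def measure_ecg(slave_signal, second_signal, virtual_ground):
    # Master device: references its own electrode the same way and takes
    # the difference of the two referenced signals as the ECG lead.
    return (second_signal - virtual_ground) - slave_signal

# Synthetic example: both electrodes see the same common-mode drift,
# which the shared virtual ground removes.
t = np.linspace(0.0, 1.0, 500)
drift = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # common-mode interference
heart = np.sin(2 * np.pi * 8.0 * t)         # stand-in for the cardiac signal
first = drift                               # first electrode: drift only
second = heart + drift                      # second electrode: signal + drift
ground = drift                              # ground electrode sees the drift

slave = make_slave_signal(first, ground)
ecg = measure_ecg(slave, second, ground)
print(np.allclose(ecg, heart))  # True: drift is cancelled
```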
Abstract:
A stimulation apparatus using low intensity focused ultrasound includes a low intensity ultrasound focusing array having a plurality of transducers for outputting low intensity ultrasound beams, and a fixing device to which the low intensity ultrasound focusing array is attached, the fixing device being configured to fix the low intensity ultrasound focusing array to an upper body of a user. The low intensity ultrasound beams outputted from the transducers are focused to at least one focus. The focus is positioned at the spinal cord of the user or at nerves around the spinal cord so that low intensity ultrasound stimulation is applied to the spinal cord or to nerve cells of the nerves around the spinal cord.
Abstract:
An ultrasonic diagnosis and therapy apparatus according to an embodiment may include an ultrasound output unit including a plurality of ultrasound output elements, a circuit board that can be attached and detached through a connecting board connected to the ultrasound output unit and that determines a function of the ultrasound output unit, and a control unit configured to control a setting value of each of the plurality of ultrasound output elements, wherein the therapy and diagnosis functions are selectively or simultaneously implemented by changing the circuit board. With the ultrasonic diagnosis and therapy apparatus, it is possible to implement the therapy and diagnosis functions selectively or simultaneously by selectively mounting different types of circuit boards that determine the type and function of the ultrasound outputted from the ultrasonic transducers.
Abstract:
A focused ultrasound stimulation apparatus according to the present disclosure includes a transducer which outputs low intensity or high intensity ultrasound, an acoustic lens which is placed in close contact with a user's skin and is customized to focus the ultrasound onto a target focal point, and a fixture for fixing the transducer and the acoustic lens to each other. The acoustic lens is customized using a three-dimensional (3D) printer based on the user's pre-captured cranial shape, to focus the ultrasound onto a desired target for each user, thereby improving accuracy compared to conventional ultrasound stimulation apparatuses.
Abstract:
A real-time acoustic simulation method based on artificial intelligence according to an embodiment of the present disclosure includes acquiring medical image data of a target area to be treated; determining ultrasound parameters related to the output of an ultrasonic transducer based on the medical image data; inputting the ultrasound parameters to a numerical model to generate a numerical model based acoustic simulation image for a specific position of the ultrasonic transducer; training an artificial intelligence model using the medical image data and the numerical model based acoustic simulation image; generating an artificial intelligence model based acoustic simulation image for an arbitrary position of the ultrasonic transducer using the trained artificial intelligence model; and outputting a real-time acoustic simulation image as the position of the ultrasonic transducer changes.
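The steps above can be sketched as a surrogate-model pipeline. Everything here is an assumed stand-in: the "numerical model" is a toy Gaussian pressure field rather than a real acoustic solver, and the "AI model" is a least-squares regressor from transducer position to the flattened field instead of a deep network; only the train-on-numerical-results, predict-at-arbitrary-positions structure mirrors the abstract.

```python
import numpy as np

GRID = 32  # side length of the simulated field, chosen arbitrarily

def numerical_model(position):
    # Toy numerical acoustic simulation: a Gaussian focal spot whose
    # center tracks the transducer position.
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    cx, cy = position
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 20.0)

# Generate training data at a few specific transducer positions.
train_positions = np.array([(8, 8), (8, 24), (24, 8), (24, 24), (16, 16)], float)
train_images = np.stack([numerical_model(p).ravel() for p in train_positions])

# "Train" the surrogate: least-squares fit from position features to
# image pixels (a real system would train a neural network here).
features = np.hstack([train_positions, np.ones((len(train_positions), 1))])
weights, *_ = np.linalg.lstsq(features, train_images, rcond=None)

def ai_model(position):
    # Cheap inference at an arbitrary, unseen position, enabling
    # real-time updates as the transducer moves.
    f = np.append(np.asarray(position, float), 1.0)
    return (f @ weights).reshape(GRID, GRID)

pred = ai_model((12, 20))
print(pred.shape)  # (32, 32)
```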
Abstract:
The present disclosure relates to a device capable of removing senescent cells by facilitating their phagocytosis through specific stimulation of the senescent cells using an ultrasound output unit, to a kit for removing senescent cells, and to a method for specifically removing senescent cells. By selectively and specifically stimulating senescent cells through irradiation of ultrasound under a specific condition, the secretion of various cytokines that recruit immune cells is promoted, so that only the senescent cells can be removed specifically and, furthermore, cell regeneration can be promoted.
Abstract:
A biosignal-based avatar control system according to an embodiment of the present disclosure includes an avatar generating unit that generates a user's avatar in a virtual reality environment, a biosignal measuring unit that measures the user's biosignal using a sensor, a command determining unit that determines the user's command based on the measured biosignal, an avatar control unit that controls the avatar to perform the command, an output unit that outputs an image of the avatar in real-time, and a protocol generating unit that generates a protocol providing predetermined tasks and determines whether the avatar has performed the predetermined tasks. According to an embodiment of the present disclosure, it is possible to provide real-time feedback by understanding the user's intention through analysis of biosignals and controlling the user's avatar in a virtual reality environment, thereby improving the user's brain function and motor function.
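The control loop described above can be sketched in a few lines. This is a hedged illustration, not the disclosed system: the threshold-based command classifier, the one-dimensional avatar state, and the task protocol are all invented stand-ins for the biosignal measuring, command determining, avatar control, and protocol generating units.

```python
def determine_command(biosignal_power):
    # A real system would decode EEG/EMG features; here a single
    # band-power value above a threshold counts as a "move" intention.
    return "move_forward" if biosignal_power > 0.5 else "rest"

class Avatar:
    # Minimal avatar state: a position along one axis.
    def __init__(self):
        self.position = 0

    def perform(self, command):
        if command == "move_forward":
            self.position += 1

def run_protocol(avatar, samples, target_position):
    # Protocol unit: present a task (reach target_position), run the
    # measure -> decode -> act loop over the measured samples, then
    # report whether the task was completed.
    for power in samples:
        avatar.perform(determine_command(power))
    return avatar.position >= target_position

avatar = Avatar()
completed = run_protocol(avatar, [0.2, 0.8, 0.9, 0.1, 0.7], target_position=3)
print(completed)  # True: three samples exceeded the threshold
```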
Abstract:
A brain-to-brain interface system has a brain activity detection device configured to detect activity state information of a brain, a brain stimulation device configured to stimulate an area of at least a part of the brain to activate or inactivate brain cells of the corresponding area, and a computer configured to control the brain activity detection device and the brain stimulation device, wherein brain activity state information of a subject's brain (“a target brain”) is obtained through the brain activity detection device, and an area of at least a part of the target brain is stimulated through the brain stimulation device based on the brain activity state information of the target brain to regulate a function of the target brain.
Abstract:
Disclosed in the present application is a method of automating a dental crown design based on artificial intelligence. The method may include: acquiring a three-dimensional intra-oral scanner image acquired from a patient and a three-dimensional dental crown mesh image designed by a dental technician in correspondence with the intra-oral scanner image; preprocessing the acquired three-dimensional intra-oral scanner image and the three-dimensional dental crown mesh image; converting an input mesh model and an output mesh model into an input voxel image and an output voxel image, respectively; and generating an AI output voxel image corresponding to the input voxel image, using the converted input voxel image and output voxel image as training data, and training an artificial neural network by comparing the generated AI output voxel image with the output voxel image included in the training data.
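The mesh-to-voxel preprocessing step can be sketched as follows. This is a simplified, assumed implementation: a mesh is reduced to its vertex cloud and rasterized into a binary occupancy grid, and the random "scan" and "crown" vertex sets are synthetic; the neural-network training stage is indicated only by a comment.

```python
import numpy as np

def mesh_to_voxels(vertices, grid=16, bounds=(0.0, 1.0)):
    # Rasterize a point cloud (mesh vertices) into a binary voxel grid.
    lo, hi = bounds
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    idx = np.clip(((vertices - lo) / (hi - lo) * grid).astype(int), 0, grid - 1)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vox

# Toy "intra-oral scan" and "technician-designed crown" vertex clouds.
scan_vertices = np.random.default_rng(0).uniform(0.0, 1.0, size=(200, 3))
crown_vertices = scan_vertices + 0.01   # crown surface offset from the scan

input_voxels = mesh_to_voxels(scan_vertices)    # network input
output_voxels = mesh_to_voxels(crown_vertices)  # training target
# A 3D network would then be trained to map input_voxels -> output_voxels,
# comparing its prediction with output_voxels via a voxel-wise loss.
print(input_voxels.shape)  # (16, 16, 16)
```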
Abstract:
The present disclosure relates to a method for converting a magnetic resonance imaging (MRI) image to a computed tomography (CT) image using an artificial intelligence machine learning model, for use in ultrasound treatment device applications. The method includes acquiring training data including an MRI image and a CT image for machine learning; training an artificial neural network model using the training data, wherein the artificial neural network model generates a CT image corresponding to the MRI image and compares the generated CT image with the original CT image included in the training data; receiving an input MRI image to be converted to a CT image; splitting the input MRI image into a plurality of patches; generating patches of a CT image corresponding to the patches of the input MRI image using the trained artificial neural network model; and merging the patches of the CT image to generate an output CT image.
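The split-convert-merge stage of this method can be sketched as below. The conversion function is an assumed placeholder (a fixed intensity remapping standing in for the trained network), and the patch size and image are illustrative; only the patch bookkeeping reflects the abstract's steps.

```python
import numpy as np

PATCH = 8  # illustrative patch side length

def split_into_patches(image, patch=PATCH):
    # Split a 2D image into non-overlapping patches, row-major order.
    h, w = image.shape
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h, patch)
            for j in range(0, w, patch)]

def fake_mri_to_ct(patch):
    # Stand-in for the trained artificial neural network: a fixed
    # remapping from MRI-like intensities to CT-like (HU-like) values.
    return patch * 1000.0 - 500.0

def merge_patches(patches, shape, patch=PATCH):
    # Reassemble converted patches in the same row-major order.
    h, w = shape
    out = np.empty(shape)
    it = iter(patches)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = next(it)
    return out

mri = np.random.default_rng(1).uniform(0.0, 1.0, size=(32, 32))
ct_patches = [fake_mri_to_ct(p) for p in split_into_patches(mri)]
ct = merge_patches(ct_patches, mri.shape)
print(np.allclose(ct, mri * 1000.0 - 500.0))  # True: merge inverts the split
```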