Abstract:
A method of modeling a radio wave environment, executed by a radio wave environment modeling apparatus using a group of intelligent robots, includes measuring an intensity of radio waves received from at least one follower robot; measuring a distance between the at least one follower robot and a leader robot that belongs to the group of intelligent robots; and estimating an environment parameter using a wave model. The method further includes classifying the environment parameter estimated from at least one lattice by comparing it with predetermined environment parameters, and inferring an intensity of radio waves of the at least one follower robot received from the at least one lattice.
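As an illustration of the parameter estimation and classification steps described above, the sketch below assumes the common log-distance path-loss model, `rssi(d) = rssi_d0 - 10 * n * log10(d / d0)`, and estimates the path-loss exponent `n` from follower-robot (distance, RSSI) samples by least squares; the reference values, class boundaries, and the choice of model are assumptions of this sketch, not the patent's specification.

```python
import math

def estimate_path_loss_exponent(samples, rssi_d0=-40.0, d0=1.0):
    """Least-squares estimate of the path-loss exponent n.

    samples: list of (distance_m, rssi_dbm) pairs measured from follower robots.
    rssi_d0: assumed RSSI at the reference distance d0 (metres).
    """
    num, den = 0.0, 0.0
    for d, rssi in samples:
        x = -10.0 * math.log10(d / d0)   # regressor for the exponent
        y = rssi - rssi_d0               # observed loss relative to d0
        num += x * y
        den += x * x
    return num / den

def classify_environment(n, classes=((2.0, "free space"),
                                     (3.0, "urban"),
                                     (4.5, "indoor obstructed"))):
    """Pick the predetermined environment whose exponent is closest to n."""
    return min(classes, key=lambda c: abs(c[0] - n))[1]
```

Once `n` is classified, the same model can be evaluated in the forward direction to predict the radio-wave intensity a follower robot would receive at any lattice cell.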
Abstract:
Provided are a system and method for active data collection mode control that reduce the crowd-sourced signal data collection required for fingerprint database (FPDB) maintenance. The system includes a mobile device configured to support a survey mode, a localization mode, and a crowd-sourcing mode, and a server configured to receive data from the mobile device, generate and update an FPDB, and control the data collection mode.
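A minimal sketch of how a server might select among the three modes named in the abstract, collecting crowd-sourced data only where the FPDB is stale; the per-region bookkeeping and staleness threshold are assumptions of this sketch.

```python
import time

SURVEY, LOCALIZATION, CROWDSOURCING = "survey", "localization", "crowd-sourcing"

class ModeController:
    """Server-side mode control: avoid collection where fingerprints are fresh."""

    def __init__(self, max_age_s=7 * 24 * 3600):
        self.max_age_s = max_age_s
        self.last_update = {}            # region -> last FPDB update time

    def record_update(self, region, t=None):
        self.last_update[region] = time.time() if t is None else t

    def select_mode(self, region, now=None):
        now = time.time() if now is None else now
        t = self.last_update.get(region)
        if t is None:
            return SURVEY                # no fingerprints yet: full survey
        if now - t > self.max_age_s:
            return CROWDSOURCING         # stale: opportunistic collection
        return LOCALIZATION              # fresh: no collection needed
```

The point of the gate is that devices in well-covered regions stay in pure localization mode, which is what reduces the collection burden.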
Abstract:
A situation recognition apparatus and method analyze an image to convert the position and motion change rate of objects in a space, together with the object-count change rate, into energy information, and then convert the energy information into entropy, following entropy theory as a measure of disorder within the space. The apparatus and method thereby recognize an abnormal situation in the space and issue a warning for the recognized abnormal situation, making it possible to prevent an incident or detect it at an early stage in real time.
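The entropy step can be sketched as follows, assuming Shannon entropy over a histogram of per-cell energy values and a fixed warning threshold; the abstract does not specify the exact energy conversion or entropy form, so both are assumptions here.

```python
import math

def entropy(energies):
    """Shannon entropy (bits) of an energy distribution over spatial cells."""
    total = sum(energies)
    if total == 0:
        return 0.0
    h = 0.0
    for e in energies:
        if e > 0:
            p = e / total
            h -= p * math.log2(p)
    return h

def is_abnormal(energies, threshold):
    """Issue a warning when spatial disorder exceeds the threshold."""
    return entropy(energies) > threshold
```

Energy spread evenly across cells (high disorder) yields maximal entropy, while energy concentrated in one cell yields zero, which is the intuition behind flagging sudden disorder as abnormal.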
Abstract:
Provided is a segmentation and tracking system based on self-learning using patterns in video. The present invention includes a pattern-based labeling processing unit configured to extract a pattern from a learning image and then perform labeling in each pattern unit to generate a self-learning label in the pattern unit; a self-learning-based segmentation/tracking network processing unit configured to receive two adjacent frames extracted from the learning image and estimate pattern classes in the two frames; a pattern class estimation unit configured to estimate a current labeling frame from a previous labeling frame extracted from the image labeled by the pattern-based labeling processing unit and a weighted sum of the estimated pattern classes of the previous frame of the learning image; and a loss calculation unit configured to calculate a loss by comparing the current labeling frame with the current labeling frame estimated by the pattern class estimation unit.
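A drastically simplified sketch of the weighted-sum estimate and loss described above, treating each frame as a flat list of per-pixel class scores; the weight `alpha` and the squared-error loss are assumptions of this sketch, not the patent's specified formulation.

```python
def estimate_current_labels(prev_labels, est_classes, alpha=0.5):
    """Weighted sum of the previous labeling frame and the network's estimated
    pattern classes, producing an estimated current labeling frame."""
    return [alpha * p + (1 - alpha) * e for p, e in zip(prev_labels, est_classes)]

def labeling_loss(target_labels, estimated_labels):
    """Mean squared error between the pattern-based labels and the estimate."""
    n = len(target_labels)
    return sum((t - e) ** 2 for t, e in zip(target_labels, estimated_labels)) / n
```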
Abstract:
Disclosed is a separable, modular snake robot for exploration that configures an ad-hoc mesh network by separating its body into multiple mobile relay modules according to the radio propagation situation, thereby seamlessly transmitting image information to a remote control center.
Abstract:
Provided are a method and a control apparatus for cooperative cleaning using multiple cleaning robots, including monitoring the overall cleaning condition of an extensive space and automatically assigning the multiple cleaning robots to spaces that require cleaning. In addition, when a cleaning area is fixed under the cooperative cleaning method, data on the amount of garbage generated in the cleaning area or on its cleaning condition may be accumulated to facilitate easier management of the cleaning.
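A toy sketch of the automatic assignment step, sending each available robot to the most urgent unassigned area; the "dirtiness" score and the greedy policy are assumptions for illustration, not the patent's method.

```python
def assign_robots(robot_ids, area_dirtiness):
    """Greedily assign each robot to the dirtiest unassigned area.

    robot_ids: list of robot identifiers, in dispatch-priority order.
    area_dirtiness: {area_name: dirtiness_score} from monitoring.
    Returns {robot_id: area_name}.
    """
    areas = sorted(area_dirtiness, key=area_dirtiness.get, reverse=True)
    return dict(zip(robot_ids, areas))
```

Accumulated per-area garbage statistics could feed back into the dirtiness scores, which is the kind of management the abstract alludes to.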
Abstract:
Provided is an atypical environment-based location recognition apparatus. The apparatus includes a sensing information acquisition unit configured to, from sensing data collected by sensor modules, detect object location information and semantic label information of a video image and detect an event in the video image; a walk navigation information provision unit configured to acquire user movement information; a metric map generation module configured to generate a video odometric map using the sensing data collected through the sensing information acquisition unit and reflect the semantic label information; and a topology map generation module configured to generate a topology node using the sensing data acquired through the sensing information acquisition unit and update the topology node using the collected user movement information.
Abstract:
Provided is a multi-agent based manned-unmanned collaboration system including: a plurality of autonomous driving robots configured to form a mesh network with neighboring autonomous driving robots, acquire visual information for situation recognition and spatial map generation, and acquire distance information from the neighboring autonomous driving robots to generate location information in real time; a collaborative agent configured to construct location positioning information of a collaboration object, target recognition information, and spatial map information from the visual information, the location information, and the distance information collected from the autonomous driving robots, and provide information for supporting battlefield situation recognition, threat determination, and command decision using the generated spatial map information and the generated location information of the autonomous driving robots; and a plurality of smart helmets configured to display the location positioning information of the collaboration object, the target recognition information, and the spatial map information constructed through the collaborative agent and present the pieces of information to wearers.
Abstract:
The present disclosure provides a method of determining precise positioning. A method of determining precise positioning according to an embodiment of the present disclosure includes: determining at least one piece of image positioning information of at least one image object detected from at least one image; determining at least one piece of wireless positioning information of at least one wireless object on the basis of signal strength of a wireless signal; performing mapping between the at least one piece of image positioning information and the at least one piece of wireless positioning information; and determining final positioning information on the basis of the at least one piece of image positioning information and the at least one piece of wireless positioning information for which the mapping is performed.
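One plausible reading of the mapping and fusion steps is sketched below: RSSI is inverted through a log-distance model, each image object is paired with the nearest wireless object inside a distance gate, and the two position estimates are blended. The model parameters, the gate, and the fixed blending weight are all assumptions of this sketch.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.0):
    """Log-distance model: invert RSSI to an approximate range in metres."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def map_and_fuse(image_positions, wireless_positions, gate_m=3.0, w_image=0.7):
    """Pair each image object with the nearest wireless object within the gate,
    then return a weighted average of the two (x, y) position estimates."""
    fused = []
    for ix, iy in image_positions:
        best, best_d = None, gate_m
        for wx, wy in wireless_positions:
            d = math.hypot(ix - wx, iy - wy)
            if d < best_d:
                best, best_d = (wx, wy), d
        if best is None:
            fused.append((ix, iy))       # unmatched: keep the image estimate
        else:
            fused.append((w_image * ix + (1 - w_image) * best[0],
                          w_image * iy + (1 - w_image) * best[1]))
    return fused
```

Weighting the image estimate more heavily reflects that camera-based positions are typically more precise than RSSI ranges, but wireless objects carry the identity that the mapping contributes.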
Abstract:
Disclosed are an apparatus for measuring the position of another apparatus and a method for the same. The apparatus may comprise at least one light emitting part transmitting a photo signal, at least one light receiving part receiving a photo signal transmitted from another apparatus, and a signal processing part controlling the at least one light emitting part to transmit a photo signal including the apparatus's own identification information, acquiring identification information of the other apparatus based on the photo signal received from the other apparatus, and acquiring positional information of the other apparatus based on the acquired identification information. Thus, an apparatus located in an arbitrary space may accurately acquire relative positional information of counterpart apparatuses.
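A hypothetical sketch of how such a receiver could turn detections into relative positions: if each light-receiving part faces a known bearing and the photo signal carries the sender's ID, a range estimate (here taken from received optical intensity under an inverse-square assumption, which the abstract does not specify) completes a polar fix per peer.

```python
import math

def relative_position(bearing_deg, intensity, intensity_at_1m=100.0):
    """Inverse-square range from intensity, then polar -> Cartesian (x, y)."""
    r = math.sqrt(intensity_at_1m / intensity)
    theta = math.radians(bearing_deg)
    return (r * math.cos(theta), r * math.sin(theta))

def locate_peers(detections):
    """detections: {peer_id: (bearing_deg, intensity)} -> {peer_id: (x, y)}."""
    return {pid: relative_position(b, i) for pid, (b, i) in detections.items()}
```

Because the ID rides on the photo signal itself, each (x, y) estimate is tied to a specific counterpart apparatus rather than to an anonymous light source.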