Abstract:
A multi-camera control apparatus and method to maintain a location and a size of an object in a continuous viewpoint switching service are provided. The multi-camera control method of controlling a main camera configured to capture a moving object of interest and controlling at least one sub-camera configured to capture the object of interest at a different viewpoint from that of the main camera, may include extracting the object of interest from a first image generated by the main camera, controlling a capturing scheme of the main camera based on a change in a location and a size of the extracted object of interest, projecting the object of interest onto a second image generated by the sub-camera, and controlling a capturing scheme of the sub-camera based on a change in a location and a size of the projected object of interest.
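The control loop described above can be sketched in a few lines, assuming a simple bounding-box representation (cx, cy, w, h) for the object of interest; the function names `adjust_capture` and `project_to_sub_view`, the tolerance parameters, and the homography-based projection are illustrative assumptions, not the claimed implementation.

```python
def adjust_capture(prev_box, curr_box, size_tol=0.1, pos_tol=20):
    """Return a (pan, tilt, zoom) adjustment that keeps the object's
    location and size stable between frames (hypothetical sketch)."""
    dcx = curr_box[0] - prev_box[0]          # horizontal drift of object centre
    dcy = curr_box[1] - prev_box[1]          # vertical drift of object centre
    scale = curr_box[2] / prev_box[2]        # relative size change
    pan = dcx if abs(dcx) > pos_tol else 0
    tilt = dcy if abs(dcy) > pos_tol else 0
    zoom = 1.0 / scale if abs(scale - 1.0) > size_tol else 1.0
    return pan, tilt, zoom

def project_to_sub_view(box, homography):
    """Project the main-camera box centre into a sub-camera image using a
    3x3 homography given as nested lists (a stand-in for calibration data)."""
    x, y = box[0], box[1]
    h = homography
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    px = (h[0][0] * x + h[0][1] * y + h[0][2]) / w
    py = (h[1][0] * x + h[1][1] * y + h[1][2]) / w
    return (px, py, box[2], box[3])
```

The same `adjust_capture` step would then be applied to the projected box to control the sub-camera's capturing scheme.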
Abstract:
The present disclosure discloses a method of determining precise positioning. A method of determining precise positioning according to an embodiment of the present disclosure includes: determining at least one piece of image positioning information of at least one image object detected from at least one image; determining at least one piece of wireless positioning information of at least one wireless object on the basis of signal strength of a wireless signal; performing mapping between the at least one piece of image positioning information and the at least one piece of wireless positioning information; and determining final positioning information on the basis of the at least one piece of image positioning information and the at least one piece of wireless positioning information for which mapping is performed.
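A minimal sketch of the fusion idea above, under stated assumptions: wireless distance is estimated from RSSI with a log-distance path-loss model, mapping is done by nearest-neighbour matching, and the final position is a weighted average. The function names, the `tx_power`/`n` parameters, and the 0.7 image weight are all hypothetical choices, not values from the disclosure.

```python
def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model: estimate distance (m) from signal
    strength; tx_power is the RSSI at 1 m, n the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def map_and_fuse(image_positions, wireless_positions, w_image=0.7):
    """Map each image-based position to its nearest wireless-based
    position, then fuse each mapped pair into a final position by
    weighted averaging."""
    fused = []
    for ix, iy in image_positions:
        wx, wy = min(wireless_positions,
                     key=lambda p: (p[0] - ix) ** 2 + (p[1] - iy) ** 2)
        fused.append((w_image * ix + (1 - w_image) * wx,
                      w_image * iy + (1 - w_image) * wy))
    return fused
```

In practice the weight would reflect the relative accuracy of the two sources rather than a fixed constant.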
Abstract:
Disclosed is a stereo matching method and apparatus based on stereo vision, the method including acquiring a left image and a right image, identifying image data by applying a window to each of the acquired left image and right image, storing the image data in a line buffer, extracting a disparity from the image data stored in the line buffer, and generating a depth map based on the extracted disparity.
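The window-based matching step can be illustrated on a single scanline (standing in for the line buffer), assuming a sum-of-absolute-differences cost; the function names, window size, and disparity range are illustrative assumptions, and the depth formula Z = f·B/d is the standard pinhole-stereo relation rather than anything specific to this disclosure.

```python
def sad_disparity(left_row, right_row, window=3, max_disp=4):
    """Per-pixel disparity for one scanline using sum-of-absolute-
    differences over a 1-D window; returns one disparity per valid pixel."""
    half = window // 2
    disps = []
    for x in range(half, len(left_row) - half):
        best_d, best_cost = 0, float('inf')
        # search disparities that keep the right-image window in bounds
        for d in range(min(max_disp, x - half) + 1):
            cost = sum(abs(left_row[x + k] - right_row[x - d + k])
                       for k in range(-half, half + 1))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disps.append(best_d)
    return disps

def depth_from_disparity(disp, focal=100.0, baseline=0.1):
    """Depth-map entry from disparity: Z = f * B / d."""
    return float('inf') if disp == 0 else focal * baseline / disp
```

A full implementation would slide a 2-D window over rows buffered in the line buffer, but the cost computation is the same.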
Abstract:
Disclosed are an object tracking method and an object tracking apparatus performing the object tracking method. The object tracking method may include extracting locations of objects in an object search area using a global camera, identifying an interest object selected by a user from the objects, and determining an error in tracking the identified interest object and correcting the determined error.
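A toy sketch of the error-detection-and-correction idea above, assuming detections from the global camera arrive as per-frame dictionaries of object id to position, and that a tracking error is flagged when the interest object's position jumps implausibly far between frames; the function name, the `max_jump` threshold, and the hold-last-position correction are hypothetical choices.

```python
def track_with_correction(detections, interest_id, max_jump=50.0):
    """Follow one interest object across frames; flag a tracking error
    when its position jumps farther than max_jump and correct by
    holding the last trusted location."""
    track, last = [], None
    for frame in detections:
        pos = frame.get(interest_id)
        if pos is None:
            track.append(last)                 # object lost: hold position
            continue
        if last is not None:
            jump = ((pos[0] - last[0]) ** 2 + (pos[1] - last[1]) ** 2) ** 0.5
            if jump > max_jump:                # likely ID switch: correct
                track.append(last)
                continue
        track.append(pos)
        last = pos
    return track
```

A real tracker would re-acquire the object rather than merely hold position, but the detect-then-correct structure is the same.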
Abstract:
The present invention relates to a system for tracking an object. The system includes an image capturing unit configured to capture a video of a predetermined observation area and output the captured video; and a multi-object tracker configured to output an object-tracking image by tracking multiple objects within an object image which is generated by extracting the objects from each of the image frames of the video output by the image capturing unit, wherein the multi-object tracker determines whether occlusion of the objects or hijacking occurs while performing multi-object tracking, and when it is determined that at least one of the occlusion and the hijacking occurs, the multi-object tracker outputs the object-tracking image corrected by removing the occurring occlusion or hijacking.
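The occlusion-detection step can be sketched with a standard intersection-over-union test between tracked bounding boxes; the function names and the 0.3 overlap threshold are illustrative assumptions, and hijacking (an ID switch between overlapping tracks) would typically be detected by an additional appearance check not shown here.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def find_occlusions(tracks, thresh=0.3):
    """Return pairs of track ids whose boxes overlap enough to count as
    occlusion; the tracker would then correct or re-assign those tracks."""
    ids = sorted(tracks)
    return [(i, j) for k, i in enumerate(ids) for j in ids[k + 1:]
            if iou(tracks[i], tracks[j]) > thresh]
```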
Abstract:
Provided is a method of separating a foreground and a background by extracting a depth image through a stereo camera, generating an occupancy grid map on the basis of the depth image, predicting a free space, and computing a membership value, the method including setting a threshold value of a foreground object existing in a free space boundary region of the predicted free space, determining whether the membership value reaches the threshold value of the foreground object while the membership value is computed, terminating the computing of the membership value when the membership value is determined to reach the threshold value of the foreground object, and separating the foreground and the background through the computed membership value.
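The early-termination idea above can be sketched as follows, assuming each grid cell accumulates occupancy evidence into a membership value in [0, 1]; the accumulation rule, the 0.8 threshold, and the function names are hypothetical stand-ins for the actual membership computation.

```python
def membership_with_early_stop(evidence, fg_threshold=0.8):
    """Accumulate per-cell occupancy evidence into a membership value and
    stop as soon as the foreground threshold is reached, returning the
    membership and the number of evidence terms actually used."""
    m, used = 0.0, 0
    for e in evidence:
        m = m + (1.0 - m) * e          # simple evidence accumulation
        used += 1
        if m >= fg_threshold:          # terminate: confidently foreground
            break
    return m, used

def separate(cells, fg_threshold=0.8):
    """Label each occupancy-grid cell as foreground or background by its
    (possibly early-terminated) membership value."""
    labels = {}
    for cell, evidence in cells.items():
        m, _ = membership_with_early_stop(evidence, fg_threshold)
        labels[cell] = 'foreground' if m >= fg_threshold else 'background'
    return labels
```

Stopping the accumulation early is what saves computation for cells that are clearly foreground, which is the point of setting the threshold in the free-space boundary region.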