Abstract:
The embodiments relate to a robot and to a server that communicates with the robot, the robot being driven by at least one of a driving wheel, a propeller, and a manipulator that moves at least one joint.
Abstract:
An action robot may include a main body, at least one joint, and at least one limb configured to be rotatably connected to the main body via the joint. The joint may be configured to provide elastic force in a direction in which the limb is unfolded or pulled away from the main body. A wire connected to the limb may pull the limb in a direction in which the limb is folded or pulled toward the main body. The wire may be connected to an elevation rod provided inside the main body. A drive assembly may be provided outside the main body and be configured to lift the elevation rod. A rod spring may be provided and configured to provide a downward elastic force to the elevation rod. A wire support provided within the main body may be configured to support the wire.
Abstract:
A charging robot includes: a station; a multi-joint manipulator including a plurality of first joints and a plurality of second joints which have rotation axes orthogonal to each other and are connected with each other alternately, the multi-joint manipulator being provided on the station; a charging connector provided on an end of the multi-joint manipulator; a manipulator moving mechanism configured to move the multi-joint manipulator to an outside of the station; a first motor configured to pivot the first joint by a predetermined angle when the first joint is positioned at a set point; a first actuator configured to move the first motor toward the first joint and to connect the first motor to the first joint when the first joint is positioned at the set point; a second motor configured to pivot the second joint by a predetermined angle when the second joint is positioned at the set point; and a second actuator configured to move the second motor toward the second joint and to connect the second motor to the second joint when the second joint is positioned at the set point.
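The per-joint engage, pivot, and release cycle described above can be sketched in code. This is an illustrative model under simplifying assumptions, not the patented implementation; the names `Joint`, `ActuatedMotor`, and `step` are hypothetical.

```python
# Hedged sketch of one engage-pivot-release cycle for a single joint.
# In the abstract, an actuator moves a motor onto a joint only when that
# joint is positioned at a set point; the motor then pivots the joint by
# a predetermined angle and retracts.

class Joint:
    def __init__(self):
        self.angle = 0.0           # current pivot angle, degrees
        self.at_set_point = False  # True when the joint reaches the set point

class ActuatedMotor:
    """Motor moved toward a joint by an actuator, connected, and then
    used to pivot the joint by a predetermined angle."""

    def __init__(self, pivot_angle):
        self.pivot_angle = pivot_angle
        self.connected = False

    def step(self, joint):
        """Run one engage-pivot-release cycle if the joint is at the set
        point. Returns True if the joint was pivoted."""
        if not joint.at_set_point:
            return False
        self.connected = True             # actuator moves motor onto joint
        joint.angle += self.pivot_angle   # motor pivots joint by the set angle
        self.connected = False            # actuator retracts motor
        return True
```

In the abstract, one motor/actuator pair serves the first joints and a second pair serves the second joints (with orthogonal rotation axes); the sketch shows only the cycle shared by both.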
Abstract:
Disclosed herein are a method for driving a robot based on an external image, and a robot and a server implementing the same. In the method, and in the robot and server implementing the same, driving of the robot is controlled further using external images acquired by camera modules installed outside the robot. To this end, a robot according to an embodiment of the present disclosure includes a communication unit configured to communicate with external camera modules acquiring external images including the robot that is being driven, a drive-information acquiring unit configured to acquire driving-related information at the time of driving the robot, a driving unit configured to drive the robot, and a control unit configured to control the driving unit using external information, including the external images received from the external camera modules, together with the driving-related information.
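One way such a control unit might combine internal driving-related information with estimates derived from external camera images is a weighted blend of the two position estimates. This is a minimal sketch under assumed conventions; the weighting scheme and the names `fuse_pose` and `ALPHA` are illustrative, not the disclosed method.

```python
# Hedged sketch: blend the (x, y) position from the robot's own
# drive information with the position observed by external cameras.

ALPHA = 0.7  # trust placed in the external-camera estimate (assumed)

def fuse_pose(internal_xy, external_xy, alpha=ALPHA):
    """Return a weighted blend of the internal and external position
    estimates; alpha=1.0 trusts only the external cameras."""
    ix, iy = internal_xy
    ex, ey = external_xy
    return (alpha * ex + (1 - alpha) * ix,
            alpha * ey + (1 - alpha) * iy)
```

A practical controller would replace this fixed weight with something sensor-aware (e.g. a Kalman filter), but the blend shows the core idea of driving on both information sources at once.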
Abstract:
Disclosed is an autonomous mobile robot including a main body, and a driving part positioned below the main body and configured to move the main body, wherein the driving part includes a rotation part that is rotatably provided and on which a sensor module including one or more sensors is disposed to face outward, a base positioned below the rotation part, and a driving wheel installed on the base, thereby realizing a low-cost, high-efficiency sensing system.
Abstract:
Provided is a robot system. The robot system includes a manipulator configured to perform a preset operation on a plurality of objects, a transparent cover configured to define a chamber in which the plurality of objects and the manipulator are accommodated, the transparent cover being provided with a touch panel, a camera installed to face an internal region of the chamber, a projector configured to emit light to one area within the chamber, and a controller configured to control the projector so that the projector emits the light to a target area corresponding to a touch point of the touch panel, recognize a target object disposed in the target area based on image information of the camera, and control the manipulator so that an operation is performed on the target object.
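The control flow above (touch point selects a target area, the projector highlights it, the camera recognizes the object there, the manipulator acts on it) can be sketched as a small pipeline. Every function and data structure below is a hypothetical stand-in for illustration, not the patent's API.

```python
# Illustrative pipeline: touch-panel coordinate -> target area ->
# recognized object -> manipulator command.

def touch_to_area(touch_xy, cell=100):
    """Map a touch-panel coordinate to a grid-cell target area
    (an assumed mapping; the disclosure does not specify one)."""
    x, y = touch_xy
    return (x // cell, y // cell)

def handle_touch(touch_xy, detections):
    """detections: mapping of grid cell -> recognized object label,
    standing in for the camera-based recognizer. Returns the command
    for the manipulator, or None if nothing is recognized there."""
    area = touch_to_area(touch_xy)   # target area from touch point
    # projector.highlight(area)  <- light would be emitted here
    target = detections.get(area)    # recognize object in the target area
    if target is None:
        return None
    return f"operate:{target}"       # command sent to the manipulator
```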
Abstract:
A hand of a robot includes: a hand main body including a palm; a plurality of fingers connected to the hand main body; a first spreader disposed inside the hand main body; a second spreader positioned between the palm and the first spreader and disposed in parallel to the first spreader; a plurality of stretching wires connected to any one of the first spreader and the second spreader, and configured to pull the fingers to cause the fingers to be stretched; a plurality of bending wires connected to the other one of the first spreader and the second spreader, and configured to pull the fingers to cause the fingers to be bent; a driving mechanism configured to rotate and/or shift the first spreader and the second spreader; and a rotation detection sensor provided on at least one of the first spreader or the second spreader.
Abstract:
A mobile input device and a command input method using the same are disclosed. The mobile input device, capable of moving using a driving motor, includes a command recognition unit configured to recognize at least one of a voice command and a gesture command, a command transmitting unit configured to transmit, to an external electronic device, a command signal corresponding to at least one of the voice command and the gesture command input to the command recognition unit, a moving unit including the driving motor, and a controller configured to control recognition of at least one of the voice command and the gesture command, transmission of the command signal to the external electronic device, and movement of the mobile input device.
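The recognize-then-transmit flow can be sketched with a lookup table mapping a recognized voice or gesture input to the command signal sent to the external device. The table contents and names below are illustrative assumptions, not the disclosed protocol.

```python
# Minimal sketch: recognized (kind, input) pairs -> command signal to
# transmit to an external electronic device.

COMMAND_TABLE = {
    ("voice", "turn on tv"): "TV_POWER_ON",
    ("gesture", "swipe_left"): "CHANNEL_DOWN",
}

def recognize_and_dispatch(kind, raw_input):
    """kind: 'voice' or 'gesture'. Returns the command signal that would
    be transmitted to the external device, or None if unrecognized."""
    signal = COMMAND_TABLE.get((kind, raw_input))
    # transmitter.send(signal) would occur here in a real device
    return signal
```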
Abstract:
Disclosed is a method of controlling a mobile robot, including: receiving, by the mobile robot, user input including a predetermined service request; receiving, by the mobile robot, an article to be served; searching for a user, analyzing a gesture of the user, and extracting a serving position, by the mobile robot; analyzing an image of the serving position and extracting a distance and a height of the serving position; moving the mobile robot to the serving position and lifting the article to be served to the height of the serving position; and putting down the article at the serving position by horizontally moving it to the serving position. Accordingly, the serving robot directly receives a serving article and provides it to a user at a position desired by the user, without requiring a user operation to receive the article. The serving robot identifies a user at the serving position, reads the user's gesture from an image, and determines the table on which the serving article is to be placed, so as to put the article down at an accurate position.
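The serving steps above can be sketched as an ordered pipeline. This is a hedged illustration under strong simplifying assumptions (perfect perception, no error handling); `serve`, `extract_serving_position`, and their inputs are hypothetical stand-ins, not the disclosed control software.

```python
# Illustrative sequence: request -> gesture analysis -> serving position
# -> height extraction -> lift and place.

def extract_serving_position(gesture):
    """Stand-in for gesture analysis: maps a pointing gesture to a
    table identifier (assumed representation)."""
    return gesture["pointed_table"]

def serve(request, article, gesture, table_heights):
    """Run the disclosed steps in order; returns the final placement."""
    assert request == "serve"                  # 1. receive service request
    table = extract_serving_position(gesture)  # 2-3. find user, read gesture
    height = table_heights[table]              # 4. image analysis -> height
    lift_height = height                       # 5. lift article to table height
    return {"article": article, "table": table, "height": lift_height}  # 6. place
```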
Abstract:
Disclosed is a mobile robot including a body forming an outer appearance, a driving part configured to move the body, an image acquisition unit configured to capture an image of a traveling area and to generate image information, a tray configured to support an article to be carried, and a controller configured to determine a size of the article to be carried and to form an inclined surface by controlling a slide module disposed above the tray so that at least one slide of the slide module protrudes depending on the size of the article. Accordingly, the porter robot loads an article into an accommodation space without requiring the user to lift it.
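The controller's decision (how many slides to protrude for a given article size so the inclined surface reaches the load) can be sketched as a simple threshold rule. The slide length, the linear rule, and the name `slides_to_protrude` are assumptions for illustration and are not taken from the disclosure.

```python
# Hedged sketch: choose how many slides of the slide module to protrude
# from a measured article height.

SLIDE_LENGTH = 0.2  # metres of ramp each slide contributes (assumed)

def slides_to_protrude(article_height, num_slides=3):
    """Protrude one slide per SLIDE_LENGTH of article height, capped at
    the number of slides available, so the inclined surface matches
    the load."""
    needed = int(article_height // SLIDE_LENGTH) + 1
    return min(needed, num_slides)
```

A real controller would derive the article size from the image information mentioned in the abstract; the sketch starts from the already-measured height.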