CARE ROBOT CONTROLLER
    Invention Application

    Publication No.: US20220111536A1

    Publication Date: 2022-04-14

    Application No.: US17280305

    Application Date: 2020-04-21

    Abstract: The present invention discloses a care robot controller, which includes: a controller body that includes slide rails, finger slot sliders and a joystick, wherein the finger slot sliders are movably arranged on the slide rails and configured to receive pressing, and the joystick is configured to control the care robot; a gesture parsing unit configured to parse three-dimensional gestures of the controller body, and control the care robot to perform corresponding actions when the three-dimensional gestures of the controller body are in line with preset gestures; and a tactile sensing unit configured to sense the pressing received by the finger slot sliders and initiate a user mode corresponding to the pressing information, so that the controller body provides corresponding vibration feedback. Thus, the user can operate the controller efficiently and conveniently, the control accuracy is improved, and effective man-machine interaction is realized.
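The gesture-matching step described above (comparing a parsed three-dimensional gesture against preset gestures) can be sketched as follows. This is a minimal illustration, not the patented implementation: the gesture names, the roll/pitch/yaw representation, and the tolerance threshold are all assumptions.

```python
import math

# Hypothetical preset gestures: name -> (roll, pitch, yaw) in degrees.
# Names and the tolerance are illustrative assumptions, not from the patent.
PRESET_GESTURES = {
    "move_forward": (0.0, 30.0, 0.0),
    "turn_left": (0.0, 0.0, -45.0),
}
TOLERANCE_DEG = 10.0

def match_gesture(roll, pitch, yaw):
    """Return the preset gesture closest to the parsed controller pose,
    or None when no preset gesture lies within the tolerance."""
    best, best_dist = None, float("inf")
    for name, (r, p, y) in PRESET_GESTURES.items():
        dist = math.sqrt((roll - r) ** 2 + (pitch - p) ** 2 + (yaw - y) ** 2)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist <= TOLERANCE_DEG else None
```

A matched gesture would then trigger the corresponding robot action, while an unmatched pose is ignored.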

    VIRTUAL REALITY-BASED CAREGIVING MACHINE CONTROL SYSTEM

    Publication No.: US20220281112A1

    Publication Date: 2022-09-08

    Application No.: US17637265

    Application Date: 2020-04-21

    Abstract: A virtual reality-based caregiving machine control system includes: a visual unit, configured to obtain environmental information around a caregiving machine and transmit the environmental information to a virtual scene generation unit and a calculation unit; the calculation unit, configured to receive control instructions for the caregiving machine and obtain, by calculation according to the environmental information, an action sequence for executing the control instructions by the caregiving machine; the virtual scene generation unit, configured to generate a virtual reality scene from the environmental information and display the virtual reality scene on a touch display screen in combination with the action sequence; and the touch display screen, configured to receive a touch-screen adjusting instruction for the action sequence, feed it back to the calculation unit for execution, and receive a confirmation instruction for the action sequence.
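The flow among the four units can be sketched as a simple pipeline. All function bodies below are illustrative placeholders (the environment dictionary, action names, and instruction are assumed for the example), showing only the data flow the abstract describes: sense, plan, preview/adjust, confirm.

```python
# Minimal control-flow sketch of the described unit pipeline;
# every value here is a placeholder, not the patented implementation.

def visual_unit():
    """Obtain environmental information around the caregiving machine."""
    return {"obstacles": [(1.0, 2.0)], "target": (3.0, 0.5)}

def calculation_unit(env, instruction):
    """Derive an action sequence that executes the instruction,
    according to the environmental information."""
    return ["rotate_to_target", "move_forward", instruction]

def touch_screen_review(actions, adjustment=None):
    """Let the operator adjust the previewed action sequence shown in
    the virtual scene, then confirm it for execution."""
    if adjustment is not None:
        actions = adjustment(actions)
    return {"confirmed": True, "actions": actions}

env = visual_unit()
plan = calculation_unit(env, "grasp_cup")
result = touch_screen_review(plan)
```

The key design point the abstract emphasizes is that the operator sees and adjusts the planned action sequence in the virtual scene before the machine executes it.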

    DIGITAL IMAGE CALCULATION METHOD AND SYSTEM FOR RGB-D CAMERA MULTI-VIEW MATCHING BASED ON VARIABLE TEMPLATE

    Publication No.: US20240428430A1

    Publication Date: 2024-12-26

    Application No.: US18648456

    Application Date: 2024-04-28

    Abstract: Disclosed is a digital image calculation method and system for RGB-D camera multi-view matching based on a variable template. The method includes six steps: acquiring data, preprocessing point cloud data, performing feature point matching, re-registering a variable template, calculating point cloud data transformation relationships among large-view images, and performing point cloud fusion. The size of the non-adjacent image matching template is adjusted based on registration results of adjacent angles of view, so that correct registration of feature points of images from non-adjacent angles of view is achieved. This improves matching accuracy, eliminates cumulative errors in image sets, and provides more accurate initial values for subsequent iterations of point cloud fusion, such that the number of iterations is reduced and three-dimensional reconstruction of images is implemented.
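The "variable template" idea, adjusting the matching window for non-adjacent views based on adjacent-view registration results, can be illustrated with a toy scaling rule. The base size, the growth factor, and the use of accumulated drift are assumptions for illustration only; the patent does not disclose this formula here.

```python
BASE_TEMPLATE = 21  # base matching template size in pixels (assumed)

def template_size(adjacent_drifts):
    """Grow the non-adjacent matching template in proportion to the
    accumulated registration drift (pixels) of the intermediate
    adjacent view pairs, keeping the window odd-sized and centered."""
    accumulated = sum(adjacent_drifts)
    size = BASE_TEMPLATE + 2 * round(accumulated)
    return size if size % 2 == 1 else size + 1
```

A larger accumulated drift between intermediate views thus yields a larger search template for the non-adjacent pair, which is how cumulative error across the image set is kept from breaking feature-point registration.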

    AUTONOMOUS MOBILE GRABBING METHOD FOR MECHANICAL ARM BASED ON VISUAL-HAPTIC FUSION UNDER COMPLEX ILLUMINATION CONDITION

    Publication No.: US20230042756A1

    Publication Date: 2023-02-09

    Application No.: US17784905

    Application Date: 2021-10-26

    Abstract: The present disclosure discloses an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under a complex illumination condition, which mainly includes approach control toward a target position and feedback control based on environment information. According to the method, under the complex illumination condition, weighted fusion is conducted on visible light and depth images of a preselected region, identification and positioning of a target object are completed based on a deep neural network, and a mobile mechanical arm is driven to continuously approach the target object. In addition, the pose of the mechanical arm is adjusted according to contact force information from a sensor module, the external environment and the target object; meanwhile, visual information and haptic information of the target object are fused, and the optimal grabbing pose and the appropriate grabbing force for the target object are selected. By adopting the method, the object positioning precision and the grabbing accuracy are improved, collision damage and instability of the mechanical arm are effectively prevented, and harmful deformation of the grabbed object is reduced.
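The weighted fusion of visible-light and depth images mentioned above can be sketched as a per-pixel blend. The fixed weight below is an illustrative assumption; in the described setting the weighting would plausibly adapt to illumination, which the patent does not detail here.

```python
ALPHA = 0.6  # weight of the visible-light channel (assumed, fixed)

def fuse(visible, depth, alpha=ALPHA):
    """Blend normalized visible-light and depth intensities pixel by
    pixel: fused = alpha * visible + (1 - alpha) * depth."""
    return [
        [alpha * v + (1 - alpha) * d for v, d in zip(vrow, drow)]
        for vrow, drow in zip(visible, depth)
    ]

# Tiny 1x2 example with normalized intensities in [0, 1].
fused = fuse([[0.5, 1.0]], [[0.0, 0.5]])
```

The fused image would then feed the deep neural network for identification and positioning of the target object.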
