ROBOT DEVICE, METHOD FOR THE COMPUTER-IMPLEMENTED TRAINING OF A ROBOT CONTROL MODEL, AND METHOD FOR CONTROLLING A ROBOT DEVICE

    Publication No.: US20230063799A1

    Publication Date: 2023-03-02

    Application No.: US17893596

    Filing Date: 2022-08-23

    Applicant: Robert Bosch GmbH

    IPC Classification: B25J9/16 G06T7/10

    Abstract: A robot device, a method for training a robot control model, and a method for controlling a robot device. The method for training includes: supplying an image showing one or more objects to a first and a second prediction model to produce a first and a second pickup prediction, each of which has, for each pixel of the image, a pickup robot configuration vector with an assigned success probability; supplying the first and second pickup predictions to a blending model of the robot control model to produce a third pickup prediction that has, for each pixel of the image, a third pickup robot configuration vector that is a weighted combination of the first and second pickup robot configuration vectors, and a third success probability that is a weighted combination of the first and second success probabilities; and training the robot control model by adapting the first and second weighting factors of these weighted combinations.
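
    The following is a minimal sketch of the blending step the abstract describes: two pixel-wise pickup predictions are combined into a third one by weighting their configuration vectors and success probabilities. It is written in Python/NumPy; the array shapes, the function name, and the scalar weighting factors w1 and w2 are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def blend_pickup_predictions(config1, prob1, config2, prob2, w1, w2):
    """Blend two pixel-wise pickup predictions (hypothetical shapes).

    config1, config2: (H, W, D) pickup robot configuration vectors per pixel
    prob1, prob2:     (H, W)    assigned success probability per pixel
    w1, w2:           scalar weighting factors, assumed to be learned during training
    """
    # Weighted combination of the per-pixel configuration vectors (third prediction)
    config3 = w1 * config1 + w2 * config2
    # Weighted combination of the per-pixel success probabilities
    prob3 = w1 * prob1 + w2 * prob2
    return config3, prob3

if __name__ == "__main__":
    H, W, D = 48, 64, 6        # hypothetical image size and configuration dimension
    rng = np.random.default_rng(0)
    c1, c2 = rng.normal(size=(H, W, D)), rng.normal(size=(H, W, D))
    p1, p2 = rng.random((H, W)), rng.random((H, W))
    c3, p3 = blend_pickup_predictions(c1, p1, c2, p2, w1=0.7, w2=0.3)
    # Pick the pixel with the highest blended success probability
    y, x = np.unravel_index(np.argmax(p3), p3.shape)
    print("best pixel:", (y, x), "configuration:", c3[y, x])
```

    In a training setup along the lines of the abstract, w1 and w2 would be the parameters that are adapted, for example by gradient descent on a loss computed from the third prediction.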

    METHOD FOR PICKING UP AN OBJECT BY MEANS OF A ROBOTIC DEVICE

    Publication No.: US20230114306A1

    Publication Date: 2023-04-13

    Application No.: US17934887

    Filing Date: 2022-09-23

    Applicant: Robert Bosch GmbH

    IPC Classification: G06T7/50 B25J9/16

    Abstract: A method for picking up an object by means of a robotic device. The method includes: obtaining at least one depth image of the object; determining, for each of a plurality of points of the object, the value of a measure of the scattering of the surface normal vectors in an area around the point; supplying the determined values to a neural network configured to output, in response to an input containing measured scattering values, an indication of object locations suitable for pick-up; determining a pick-up location of the object from the output which the neural network produces in response to the supplied values; and controlling the robotic device to pick up the object at the determined location.
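
    Below is a rough sketch of the geometric quantity the abstract relies on: a per-pixel measure of how strongly the surface normal vectors scatter in a window around each point of a depth image. It uses Python/NumPy; the normal estimation via depth gradients, the window size, and the specific measure (one minus the length of the mean unit normal) are illustrative assumptions, not the patent's definition.

```python
import numpy as np

def surface_normals(depth):
    """Estimate unit surface normals from a depth image via finite differences."""
    dz_dy, dz_dx = np.gradient(depth)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals  # (H, W, 3)

def normal_scattering(depth, window=5):
    """Scattering of surface normals in a square window around each pixel.

    Returns values in [0, 1]: 0 for a perfectly flat area (all normals equal),
    larger values where the normals point in many different directions.
    """
    normals = surface_normals(depth)
    h, w, _ = normals.shape
    r = window // 2
    scatter = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = normals[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            mean_normal = patch.reshape(-1, 3).mean(axis=0)
            # The mean of unit vectors has length 1 only if all normals agree
            scatter[y, x] = 1.0 - np.linalg.norm(mean_normal)
    return scatter

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    depth = 1.0 + 0.02 * np.sin(xx / 4.0)   # synthetic, gently curved surface
    print(normal_scattering(depth).max())
```

    A scattering map of this kind would be the sort of input supplied to the neural network mentioned in the abstract: flat, easily graspable regions yield low values, while edges and cluttered regions yield high values.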

    METHOD FOR GENERATING TRAINING DATA FOR SUPERVISED LEARNING FOR TRAINING A NEURAL NETWORK

    Publication No.: US20230098284A1

    Publication Date: 2023-03-30

    Application No.: US17935496

    Filing Date: 2022-09-26

    Applicant: Robert Bosch GmbH

    Abstract: A method for generating training data for supervised learning for training a neural network to identify, from digital images of objects, locations of the objects for interacting with the objects. The method includes: acquiring, for each training object, at least one digital reference image and a plurality of further images of the training object; for each training object, specifying at least one location of the training object, mapping the at least one reference image onto a descriptor image, identifying the descriptors of the specified location, mapping the further images of the training object onto further descriptor images, and determining locations in the further images by locating points in the further images whose descriptors in the further descriptor images correspond to the descriptors of the at least one specified location; and generating the training data for supervised learning by marking the determined locations in the further images of the training objects.
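
    Here is a simplified sketch of the correspondence step the abstract describes: the descriptor at a specified pixel of the reference descriptor image is looked up, and in each further descriptor image the pixel with the nearest descriptor is taken as the corresponding location, which can then be marked as a training label. Python/NumPy; the dense descriptor model itself is not shown, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def transfer_location(ref_descriptor_image, further_descriptor_images, ref_pixel):
    """Locate a specified reference pixel in further images via nearest-descriptor matching.

    ref_descriptor_image:      (H, W, D) descriptor image of the reference image
    further_descriptor_images: list of (H, W, D) descriptor images of the further images
    ref_pixel:                 (y, x) specified location in the reference image
    Returns one (y, x) location per further image.
    """
    target = ref_descriptor_image[ref_pixel]              # (D,) descriptor of the location
    locations = []
    for desc_img in further_descriptor_images:
        # Distance of every pixel's descriptor to the target descriptor
        dist = np.linalg.norm(desc_img - target, axis=2)  # (H, W)
        locations.append(np.unravel_index(np.argmin(dist), dist.shape))
    return locations

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.normal(size=(32, 32, 8))
    # A "further" image: same descriptors, spatially shifted down by 3 pixels
    further = np.roll(ref, shift=3, axis=0)
    print(transfer_location(ref, [further], ref_pixel=(10, 12)))  # -> [(13, 12)]
```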

    METHOD FOR CONTROLLING A ROBOTIC DEVICE

    Publication No.: US20220375210A1

    Publication Date: 2022-11-24

    Application No.: US17661041

    Filing Date: 2022-04-27

    Applicant: Robert Bosch GmbH

    Abstract: A method for controlling a robotic device. The method includes: obtaining an image; processing the image using a convolutional neural network, which generates an image in a feature space from the image; feeding the image in the feature space to a neural actor network, which generates an action parameter image; feeding the image in the feature space and the action parameter image to a neural critic network, which generates an assessment image that defines, for each pixel, an assessment of the action defined by the set of action parameter values for that pixel; selecting, from the multiple sets of action parameter values of the action parameter image, the set of action parameter values having the highest assessment; and controlling the robotic device to carry out an action according to the selected set of action parameter values.
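
    The following is a minimal sketch of the structure and selection step the abstract describes: a convolutional encoder maps the input image to an image in a feature space, an actor head produces a per-pixel action parameter image, a critic head assesses each pixel's action parameters, and the parameter set with the highest assessment is selected. It uses Python/PyTorch; the layer sizes, number of action parameters, and class name are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class PixelwiseActorCritic(nn.Module):
    """Convolutional encoder with per-pixel actor and critic heads (hypothetical sizes)."""

    def __init__(self, in_channels=3, feature_channels=32, n_action_params=4):
        super().__init__()
        # Convolutional encoder: image -> image in feature space
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feature_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feature_channels, feature_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Actor: feature-space image -> action parameter image (one parameter set per pixel)
        self.actor = nn.Conv2d(feature_channels, n_action_params, kernel_size=1)
        # Critic: feature-space image + action parameter image -> assessment image
        self.critic = nn.Conv2d(feature_channels + n_action_params, 1, kernel_size=1)

    def forward(self, image):
        features = self.encoder(image)                                    # (B, C, H, W)
        actions = self.actor(features)                                    # (B, A, H, W)
        assessment = self.critic(torch.cat([features, actions], dim=1))  # (B, 1, H, W)
        return actions, assessment

if __name__ == "__main__":
    model = PixelwiseActorCritic()
    image = torch.rand(1, 3, 48, 64)
    actions, assessment = model(image)
    # Select the action parameter set at the pixel with the highest assessment
    flat_idx = assessment[0, 0].argmax().item()
    y, x = divmod(flat_idx, assessment.shape[-1])
    print("best pixel:", (y, x), "action parameters:", actions[0, :, y, x])
```

    In a control loop along the lines of the abstract, the selected per-pixel action parameters would then be translated into a robot command for that image location.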