-
Publication No.: US11842744B2
Publication Date: 2023-12-12
Application No.: US17587568
Filing Date: 2022-01-28
Inventors: Katsushi Ikeuchi, Masaaki Fukumoto, Johnny H. Lee, Jordan Lee Kravitz, David William Baumert
IPC Classification: B25J11/00, G10L21/0208, B25J9/16, B25J19/02
CPC Classification: G10L21/0208, B25J9/1602, B25J11/003, B25J11/0005, B25J19/026
Abstract: Noise reduction in a robot system includes the use of a gesture library that pairs noise profiles with gestures that the robot can perform. A gesture to be performed by the robot is obtained, and the robot performs the gesture. The robot's performance of the gesture creates noise, so when a user speaks to the robot while the robot performs a gesture, the incoming audio includes both user audio and robot noise. A noise profile associated with the gesture is retrieved from the gesture library and applied to remove the robot noise from the incoming audio.
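Although the patent text contains no source code, the gesture library described in this abstract can be pictured as a lookup structure that pairs each gesture with a pre-recorded noise profile. The sketch below is an illustrative assumption only; names such as GestureEntry and noise_profile_for are invented for the example and do not come from the patent.

```python
# Minimal sketch of a gesture library pairing gestures with noise profiles.
# All identifiers are illustrative, not taken from the patent.
from dataclasses import dataclass, field
from typing import Dict
import numpy as np

@dataclass
class GestureEntry:
    joint_trajectory: np.ndarray   # motion commands that realize the gesture
    noise_profile: np.ndarray      # noise spectrum recorded while the gesture runs

@dataclass
class GestureLibrary:
    entries: Dict[str, GestureEntry] = field(default_factory=dict)

    def register(self, name: str, trajectory: np.ndarray, noise_profile: np.ndarray) -> None:
        self.entries[name] = GestureEntry(trajectory, noise_profile)

    def noise_profile_for(self, name: str) -> np.ndarray:
        return self.entries[name].noise_profile

# Usage: look up the profile paired with the gesture currently being performed.
library = GestureLibrary()
library.register("wave", np.zeros((50, 6)), np.full(257, 1e-3))
profile = library.noise_profile_for("wave")
```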
-
Publication No.: US11270717B2
Publication Date: 2022-03-08
Application No.: US16406788
Filing Date: 2019-05-08
Inventors: Katsushi Ikeuchi, Masaaki Fukumoto, Johnny H. Lee, Jordan Lee Kravitz, David William Baumert
IPC Classification: A01G23/089, G10L21/0208, B25J9/16, B25J11/00, B25J19/02
Abstract: Noise reduction in a robot system includes the use of a gesture library that pairs noise profiles with gestures performed by the robot. A gesture to be performed by the robot is obtained, and the robot performs the gesture. The robot's performance of the gesture creates noise, so when a user speaks to the robot while the robot performs a gesture, the incoming audio includes both user audio and robot noise. A noise profile associated with the gesture is retrieved from the gesture library and applied to remove the robot noise from the incoming audio.
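This abstract, from the same family as the entry above, leaves the noise-removal step unspecified. As one hedged illustration, the retrieved profile could be applied by magnitude spectral subtraction over short-time spectra; remove_robot_noise, its parameters, and the overlap-add scheme below are assumptions for the sketch, not the patented algorithm.

```python
# Hedged sketch: apply a retrieved noise profile to incoming audio by
# magnitude spectral subtraction over STFT frames. Shown only to make the
# data flow concrete; the patent does not mandate this particular method.
import numpy as np

def remove_robot_noise(incoming: np.ndarray, noise_profile: np.ndarray,
                       frame: int = 512, hop: int = 256) -> np.ndarray:
    """Subtract the gesture's noise spectrum from each frame of incoming audio."""
    window = np.hanning(frame)
    out = np.zeros(len(incoming), dtype=float)
    for start in range(0, len(incoming) - frame + 1, hop):
        chunk = incoming[start:start + frame] * window
        spectrum = np.fft.rfft(chunk)
        magnitude = np.maximum(np.abs(spectrum) - noise_profile, 0.0)  # floor at zero
        cleaned = magnitude * np.exp(1j * np.angle(spectrum))          # keep original phase
        out[start:start + frame] += np.fft.irfft(cleaned) * window     # overlap-add
    return out

# Usage: `noise_profile` here would be the profile retrieved from the gesture library
# (length frame // 2 + 1, e.g. 257 bins for a 512-sample frame).
audio = np.random.randn(16000)
cleaned = remove_robot_noise(audio, np.full(257, 1e-3))
```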
-
Publication No.: US11731271B2
Publication Date: 2023-08-22
Application No.: US16916343
Filing Date: 2020-06-30
Inventors: Naoki Wake, Kazuhiro Sasabuchi, Katsushi Ikeuchi
CPC Classification: B25J9/1661, B25J9/161, B25J13/003, G06T7/20, G06V20/00, G06V40/107, G06V40/28, G06T2207/10024, G06T2207/10028, G06T2207/30196
Abstract: Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input and parses it to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task, allowing the robot to focus on the target object and the movements associated with it within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize "where and when" to pay attention to the demonstration of the task, thereby enabling it to learn the task by observation in a real environment.
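The abstract describes the verbal-based FOA pipeline at a high level: parse the cue into a task and a target object, then filter the demonstration in space and time around that object. The sketch below is a minimal illustration of that idea; parse_instruction, focus_of_attention, and the distance threshold are invented for the example and are not taken from the patent.

```python
# Sketch of the "where and when" idea: parse a verbal cue into a task and
# target object, then keep only demonstration frames where the hand is near
# that object. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class Frame:
    hand_position: np.ndarray         # 3-D hand position in this frame
    objects: Dict[str, np.ndarray]    # object name -> 3-D position

def parse_instruction(utterance: str, known_tasks: List[str],
                      known_objects: List[str]) -> Tuple[str, str]:
    """Tiny stand-in for the verbal parser: keyword spotting."""
    words = utterance.lower().split()
    task = next((t for t in known_tasks if t in words), "unknown")
    target = next((o for o in known_objects if o in words), "unknown")
    return task, target

def focus_of_attention(frames: List[Frame], target: str, radius: float = 0.15) -> List[Frame]:
    """Spatio-temporal filter: keep frames where the hand is within `radius` of the target."""
    return [f for f in frames
            if target in f.objects
            and np.linalg.norm(f.hand_position - f.objects[target]) < radius]

# Usage: the filtered frames are what the robot would attend to when learning.
task, target = parse_instruction("please grasp the cup on the table",
                                 ["grasp", "place"], ["cup", "bottle"])
```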
-
Publication No.: US11443161B2
Publication Date: 2022-09-13
Application No.: US16469024
Filing Date: 2016-12-12
Abstract: A method and apparatus for robot gesture generation is described. Generally speaking, a concept corresponding to an utterance to be spoken by a robot is determined (204). After a concept is determined or selected, a symbolic representation of a gesture that corresponds to the determined concept is retrieved from a predetermined gesture library (206). Subsequently, the symbolic representation is provided to cause the robot to perform the gesture (208). In this way, more natural, comprehensive, and effective communication between humans and robots may be achieved.
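As a rough illustration of the concept-to-gesture flow marked (204), (206), and (208) in the abstract, the sketch below maps a determined concept to a symbolic gesture stored in a small library. The library contents, function names, and the toy concept classifier are hypothetical, not the patented method.

```python
# Hedged sketch of the flow in the abstract: determine a concept for the
# utterance, retrieve a symbolic gesture from a predefined library, and hand
# it to the robot controller. All identifiers are illustrative.
from typing import Dict, List

GESTURE_LIBRARY: Dict[str, List[str]] = {
    "greeting": ["raise_right_arm", "wave_hand", "lower_right_arm"],
    "agreement": ["nod_head"],
    "farewell": ["raise_right_arm", "wave_hand"],
}

def determine_concept(utterance: str) -> str:
    """Toy concept classifier standing in for step (204)."""
    text = utterance.lower()
    if "hello" in text or "hi" in text:
        return "greeting"
    if "bye" in text:
        return "farewell"
    return "agreement"

def gesture_for_utterance(utterance: str) -> List[str]:
    concept = determine_concept(utterance)               # step (204)
    symbolic_gesture = GESTURE_LIBRARY.get(concept, [])  # step (206)
    return symbolic_gesture                              # handed to the robot, step (208)

print(gesture_for_utterance("Hello there!"))
```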