Abstract:
A method and apparatus in an automated data storage library for identifying a location of a robot in the automated data storage library. Signals are transmitted from a set of transmitters in the automated data storage library, wherein a location of the set of transmitters is known. The signals transmitted from the set of transmitters are received at a receiver located on the robot to form a set of received signals. The location of the robot is determined using the set of received signals and the location of the set of transmitters.
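A minimal sketch of one way such a localization could be computed, assuming the received signals are first converted to range estimates (e.g., from time of flight or signal strength) and solved by least-squares trilateration; the ranging model, function names, and NumPy use are illustrative assumptions, not the patent's specified method:

    import numpy as np

    def trilaterate(anchors, distances):
        """Least-squares position estimate from known transmitter
        locations (anchors) and measured distances to each."""
        anchors = np.asarray(anchors, dtype=float)
        d = np.asarray(distances, dtype=float)
        # Linearize by subtracting the first range equation from the others.
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
             + d[0] ** 2 - d[1:] ** 2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Three transmitters at known positions in the library frame (meters);
    # the ranges are consistent with a robot at roughly (3, 4).
    print(trilaterate([(0, 0), (10, 0), (0, 8)], [5.0, 8.06, 5.0]))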
Abstract:
An image is displayed using anamorphic video. A first portion of the image is displayed on a display at a first scale. At least one second portion of the image, adjacent to the first portion, is displayed on the display at a second scale higher than the first scale.
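One way to read the two-scale display is as a piecewise mapping from display columns to image columns. The sketch below builds such a mapping; the names are illustrative, and "scale" is read here as source pixels per display pixel, so the higher second scale compresses the adjacent portions, which is an assumption rather than the patent's stated rendering rule:

    import numpy as np

    def anamorphic_map(display_w, center_frac=0.5, center_scale=1.0, edge_scale=3.0):
        """Source column for each display column: 'center_scale' source
        pixels per display pixel in the central band, 'edge_scale' in
        the adjacent bands."""
        half = display_w // 2
        cols = np.arange(display_w) - half        # display columns, centered
        c = int(center_frac * display_w / 2)      # half-width of the central band
        clipped = np.clip(cols, -c, c)
        src = clipped * center_scale + (cols - clipped) * edge_scale
        return src - src.min()                    # shift so source columns start at 0

    src_cols = anamorphic_map(1920, center_frac=0.4, edge_scale=3.0)
    # Rendering: out[:, x] = frame[:, int(src_cols[x])] for each display column x.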
Abstract:
An apparatus and method for determining a steering angle and velocity using, as input information, distances from obstacles in a plurality of directions or changes in those distances with time; obtaining a function output for each of the directions using the input information as a parameter, with a function regarding steering angle, a function regarding velocity, and a function regarding degree of danger; executing operations using each function output as a parameter to compute a steering angle for obstacle avoidance, and also computing, by a predetermined method, a steering angle for route tracing and a velocity; and synthesizing the obtained steering angles for route tracing and for obstacle avoidance with the derived velocity. The foregoing control method avoids obstacles, even in an unknown environment, and is comparatively simple because it does not require constructing a complicated rule base.
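A compact sketch of the blending idea under stated assumptions: per-direction danger as a clipped linear function of distance, simple steering and velocity functions, and a danger-weighted synthesis. The specific functions, gains, and names are illustrative, not the patent's predetermined method:

    import numpy as np

    def control(distances, bearings, goal_bearing,
                d_safe=2.0, k_avoid=0.6, k_track=0.8, v_max=1.0):
        """Blend a route-tracing steering angle with an obstacle-avoidance
        steering angle, and reduce velocity as the degree of danger rises."""
        danger = np.clip((d_safe - distances) / d_safe, 0.0, 1.0)  # per direction
        side = np.where(bearings >= 0.0, 1.0, -1.0)  # which side each obstacle is on
        steer_avoid = -k_avoid * np.sum(danger * side)  # steer away from danger
        steer_track = k_track * goal_bearing            # steer toward the route
        w = float(danger.max())                         # overall degree of danger
        steering = w * steer_avoid + (1.0 - w) * steer_track
        velocity = v_max * (1.0 - w)                    # slow down when danger is high
        return steering, velocity

    bearings = np.radians([-60.0, -30.0, 0.0, 30.0, 60.0])  # sensed directions
    distances = np.array([5.0, 3.0, 1.2, 4.0, 5.0])         # range per direction (m)
    print(control(distances, bearings, goal_bearing=np.radians(15.0)))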
Abstract:
Disclosed herein are systems and methods for autonomous vehicle operation, in which a processor is configured to receive sensor data collected by a first sensor of a first autonomous vehicle during navigation of the first autonomous vehicle through a particular location, prior to a control signal subsequently generated by a controller of the first autonomous vehicle, and to determine, based on the sensor data, an event that triggered the control signal. A communication device coupled to the processor is configured to transmit to a second autonomous vehicle an instruction, based on the determined event, to adjust sensor data collected by a second sensor of the second autonomous vehicle during navigation of the second autonomous vehicle in the particular location.
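A minimal sketch of how an event inferred from the first vehicle's pre-control-signal data could be turned into an adjustment instruction for a second vehicle; the classifier rule, event labels, sensor parameters, and transport are all hypothetical assumptions for illustration:

    from dataclasses import dataclass

    @dataclass
    class AdjustInstruction:
        location_id: str      # the particular location
        event: str            # event inferred from the first vehicle's data
        sensor_config: dict   # adjustment for the second vehicle's sensor

    def determine_event(sensor_frames, control_signal):
        """Toy classifier: infer what triggered the control signal from
        the frames captured just before it."""
        if control_signal == "hard_brake" and sensor_frames[-1].get("occluded"):
            return "late_detection_due_to_occlusion"
        return "unknown"

    def build_instruction(event, location_id):
        if event == "late_detection_due_to_occlusion":
            # Ask following vehicles to raise sensor attention at this spot.
            return AdjustInstruction(location_id, event,
                                     {"lidar_frame_rate_hz": 20, "camera_gain": 1.5})
        return None

    frames = [{"occluded": False}, {"occluded": True}]
    inst = build_instruction(determine_event(frames, "hard_brake"), "intersection_42")
    # communication_device.transmit(second_vehicle_id, inst)  # hypothetical transport
    print(inst)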
Abstract:
A method for interactions during encounters between a mobile robot and an actor, a mobile robot configured for execution of delivery tasks in an outdoor environment, and a use of the mobile robot. The method comprises the mobile robot traveling on a pedestrian pathway; detecting an actor by the mobile robot via a sensor system; identifying a situation associated with the detected actor; in response to the identified situation, determining an action for the mobile robot to execute; and executing the determined action by the mobile robot. The mobile robot comprises a navigation component configured for at least partially autonomous navigation in an outdoor environment; a sensor system configured for collecting sensor data during an encounter between the mobile robot and an actor; a processing component configured to process the sensor data and output actions for the mobile robot to perform; and an output component configured for executing actions determined by the processing component.
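The situation-to-action step could look like a simple classification followed by a lookup; the rules, situation labels, and action names below are illustrative assumptions, not the claimed decision logic:

    def identify_situation(detection):
        """Classify the encounter from processed sensor data (toy rules)."""
        if detection["kind"] == "pedestrian" and detection["distance_m"] < 1.5:
            return "close_pedestrian"
        if detection["kind"] == "pedestrian" and detection.get("approaching"):
            return "approaching_pedestrian"
        return "no_interaction"

    ACTIONS = {
        "close_pedestrian": "stop_and_yield",
        "approaching_pedestrian": "slow_and_keep_right",
        "no_interaction": "continue",
    }

    def decide_action(detection):
        return ACTIONS[identify_situation(detection)]

    print(decide_action({"kind": "pedestrian", "distance_m": 1.0}))  # stop_and_yield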
Abstract:
A scalable solution to robot behavioral navigation following natural language instructions is presented. An example of the solution includes: receiving, by a pre-trained sequential prediction model, a navigation graph of the task environment, instructions in natural language and an initial location of the robot in the navigation graph, wherein the navigation graph comprises nodes indicating locations in the task environment, coordinates of the nodes, and edges indicating connectivity between the locations; and predicting sequentially, by the pre-trained sequential prediction model, a sequence of single-step behaviors executable by the robot to navigate the robot from the initial location to a destination.
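A runnable sketch of the greedy decoding loop, with a stub standing in for the pre-trained sequential prediction model and a toy transition function; the scoring interface and the behavior vocabulary are assumptions for illustration:

    # Navigation graph: node locations, coordinates, and connectivity edges.
    graph = {
        "nodes": {"A": (0, 0), "B": (5, 0), "C": (5, 4)},
        "edges": {"A": ["B"], "B": ["A", "C"], "C": ["B"]},
    }

    class StubModel:
        """Stand-in for the pre-trained sequential prediction model; a real
        model would condition on a graph encoding, the instruction tokens,
        and the behaviors decoded so far."""
        def score(self, graph, instruction, location, history):
            return {"go_straight": 0.6 if not history else 0.1,
                    "turn_left": 0.0, "turn_right": 0.0,
                    "stop": 0.2 if not history else 0.9}

    def step(graph, location, behavior):
        # Toy transition: follow the first edge out of the current node.
        return graph["edges"][location][0]

    def decode(model, graph, instruction, start, max_steps=20):
        location, plan = start, []
        for _ in range(max_steps):
            scores = model.score(graph, instruction, location, plan)
            behavior = max(scores, key=scores.get)  # greedy single-step prediction
            plan.append(behavior)
            if behavior == "stop":
                break
            location = step(graph, location, behavior)
        return plan

    print(decode(StubModel(), graph, "go to the room at the end", "A"))
    # ['go_straight', 'stop']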
Abstract:
Remote presence systems and methods are presented. In one embodiment, a system may comprise a pilot workstation comprising a pilot computing station having a display, a microphone, a camera oriented to capture images of the pilot, a network connectivity subsystem, and a master input device such as a keyboard, mouse, or joystick. The pilot network connectivity subsystem may be operatively coupled to an electromechanically mobile workstation comprising a mobile base interconnected to a head component. The mobile workstation may comprise a display, a microphone, a camera oriented to capture images of nearby people and structures, and a workstation network connectivity subsystem that preferably is operatively coupled to the pilot network connectivity subsystem. By virtue of the system components, the pilot preferably is able to remotely project a virtual presence, in the form of images, sound, and motion of the mobile workstation, at the location of the mobile workstation.
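As a small illustration of the master-input side of such a coupling, the sketch below defines a hypothetical command message that the pilot's network connectivity subsystem could send to drive the mobile base; the message fields and JSON encoding are assumptions, not the disclosed protocol:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class PilotCommand:
        """Master-input event sent from the pilot workstation to the mobile base."""
        linear_mps: float     # forward speed for the mobile base
        angular_rps: float    # turn rate
        head_pan_deg: float   # aim the head-mounted camera/display

    def encode(cmd: PilotCommand) -> bytes:
        return (json.dumps(asdict(cmd)) + "\n").encode()

    # The pilot's network connectivity subsystem would send this over its
    # link to the workstation subsystem, while video and audio stream in
    # both directions.
    print(encode(PilotCommand(0.3, 0.0, 15.0)))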
Abstract:
A service providing system includes a request receiving robot and a service providing robot. The request receiving robot includes a floating unit configured to float in air, a recognition unit configured to recognize a service providing request by a user, and a transmitter configured to transmit the recognized service providing request. The service providing robot includes a receiver configured to receive the service providing request transmitted by the request receiving robot, a moving unit configured to move the service providing robot, according to the received service providing request, to the user who made the request as a destination, and a service providing unit configured to provide a service to the user.
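A toy end-to-end sketch of the two-robot handoff, with an in-process queue standing in for the transmitter/receiver pair and stubbed helpers for locating the user, moving, and providing the service; all names are illustrative:

    import queue

    channel = queue.Queue()  # stands in for the wireless transmitter/receiver pair

    class RequestReceivingRobot:
        """Floating robot that recognizes a user's request and transmits it."""
        def recognize_and_transmit(self, user_id, utterance):
            request = {"user": user_id, "service": utterance,
                       "location": locate(user_id)}
            channel.put(request)  # transmitter

    class ServiceProvidingRobot:
        def serve_next(self):
            request = channel.get()                       # receiver
            move_to(request["location"])                  # moving unit
            provide(request["service"], request["user"])  # service providing unit

    def locate(user_id): return (3.0, 4.0)  # stubbed user position
    def move_to(pos): print(f"moving to {pos}")
    def provide(service, user): print(f"providing '{service}' to {user}")

    RequestReceivingRobot().recognize_and_transmit("user-1", "bring water")
    ServiceProvidingRobot().serve_next()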
Abstract:
Systems, apparatuses and methods may generate a map of a search environment based on a probability of a target human being present within the search environment, capture a red, green, blue, depth (RGBD) image of one or more potential target humans in the search environment based on the map, and cause a robot apparatus to obtain a frontal view position with respect to at least one of the one or more potential target humans based on the RGBD image.
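A small sketch of the search flow under stated assumptions: a grid of presence probabilities as the map, greedy selection of the most probable cell, and a frontal-view goal computed from a person's estimated position and facing direction (which a real system might derive from the RGBD image); the names and geometry conventions are illustrative:

    import numpy as np

    def build_search_map(grid_shape, priors):
        """Grid holding P(target human present) per cell."""
        prob = np.zeros(grid_shape)
        for (r, c), p in priors.items():
            prob[r, c] = p
        return prob

    def next_waypoint(prob_map):
        """Search the most probable cell first."""
        return np.unravel_index(np.argmax(prob_map), prob_map.shape)

    def frontal_view_goal(person_xy, facing_rad, standoff=1.2):
        """Pose in front of the person, facing them, at a standoff distance."""
        px, py = person_xy
        gx = px + standoff * np.cos(facing_rad)  # step along the facing direction
        gy = py + standoff * np.sin(facing_rad)
        return (gx, gy, facing_rad + np.pi)      # robot heading: back toward person

    prob = build_search_map((10, 10), {(2, 3): 0.7, (8, 8): 0.4})
    print(next_waypoint(prob))                   # (2, 3): search here first
    print(frontal_view_goal((1.0, 2.0), facing_rad=0.0))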
Abstract:
A communication controlling method includes: (A) receiving, from second communication devices, device identification information items for identifying the second communication devices, and situation information items for identifying situations around the second communication devices; (B) when at least one of the received situation information items includes an information item on a user, updating a neighborhood information database indicating which second communication device is around which user, based on the information item on the user and the corresponding device identification information item; (C) when receiving a request for connection to a target user from a first communication device, selecting, from among the second communication devices, a second communication device present around the target user with reference to the neighborhood information database; and (D) communicably connecting the selected second communication device and the first communication device.
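Steps (A) through (D) could be prototyped with a dictionary keyed by user; the sketch below omits real transport, authentication, and expiry of stale neighborhood entries, and the selection policy in step (C) is an arbitrary assumption:

    neighborhood_db = {}  # user_id -> set of second-device ids near that user

    def on_situation_report(device_id, situation):      # steps (A) and (B)
        user = situation.get("user")
        if user is not None:
            neighborhood_db.setdefault(user, set()).add(device_id)

    def on_connect_request(first_device, target_user):  # steps (C) and (D)
        candidates = neighborhood_db.get(target_user, set())
        if not candidates:
            return None
        selected = next(iter(candidates))  # selection policy unspecified here
        return connect(first_device, selected)

    def connect(a, b):
        print(f"bridging {a} <-> {b}")
        return (a, b)

    on_situation_report("robot-7", {"user": "alice"})
    on_connect_request("phone-1", "alice")  # bridges phone-1 and robot-7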