Abstract:
Environmental hazards and safe conditions can be indicated to an operator of a vehicle. An audio signal of varying intensity can be played to indicate the severity of a potential hazard. An audio signal can also be played at different locations in the vehicle cabin to indicate the location of a potential hazard. Surfaces, such as but not limited to windows, can be tinted to indicate a hazardous or safe condition. Tinting intensity can be commensurate with the severity of a potential hazard, and the location of tinting can indicate the location of the hazard.
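As a rough illustration of the kind of mapping this abstract describes, the sketch below converts a hazard's severity and bearing into an audio alert level, a cabin speaker zone, and a window tint level. The function name, the zone layout, and the severity scale are illustrative assumptions, not the claimed implementation.

# Minimal sketch (hypothetical names): map a hazard's severity in [0, 1] and
# bearing in degrees to cabin cues, per the abstract above.
def hazard_cues(severity: float, bearing_deg: float) -> dict:
    severity = max(0.0, min(1.0, severity))
    # Audio intensity scales with severity (0-100% of maximum alert volume).
    audio_level = round(100 * severity)
    # Choose the cabin speaker / window nearest the hazard's bearing.
    zones = {"front": 0, "right": 90, "rear": 180, "left": 270}
    def angular_gap(z):
        d = abs(bearing_deg % 360 - zones[z])
        return min(d, 360 - d)
    zone = min(zones, key=angular_gap)
    # Tint intensity is likewise commensurate with severity; zero severity
    # could instead drive a "safe condition" tint.
    tint_pct = round(100 * severity)
    return {"audio_level": audio_level, "zone": zone, "tint_pct": tint_pct}

print(hazard_cues(severity=0.8, bearing_deg=100))  # strong cue from the right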
Abstract:
In embodiments of collaborative camera viewpoint control for interactive telepresence, a system includes a vehicle that travels based on received travel instructions, and the vehicle includes a camera system of multiple cameras that each capture video of the environment in which the vehicle travels from a different viewpoint. Viewing devices receive the video of the environment from the different viewpoints, and the video from a selected one of the viewpoints is displayable to users of the viewing devices. Controller devices that are associated with the viewing devices can each receive a user input as a proposed travel instruction for the vehicle based on the selected viewpoint of the video displayed on the viewing devices. A trajectory planner receives the proposed travel instructions initiated via the controller devices and generates a consensus travel instruction for the vehicle based on the proposed travel instructions.
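The abstract does not specify how proposed travel instructions are combined, so the sketch below shows one plausible consensus rule: taking the per-axis median of the proposals so a single outlier controller has limited effect. The (steer, speed) command format and the median rule are assumptions for illustration only.

from statistics import median

# Sketch of fusing proposed travel instructions into one consensus instruction.
def consensus_instruction(proposals: list[dict]) -> dict:
    # Each proposal is {'steer_deg': ..., 'speed_mps': ...} from one controller device.
    if not proposals:
        raise ValueError("no proposed travel instructions received")
    return {
        "steer_deg": median(p["steer_deg"] for p in proposals),
        "speed_mps": median(p["speed_mps"] for p in proposals),
    }

proposals = [
    {"steer_deg": 5.0, "speed_mps": 2.0},
    {"steer_deg": 7.0, "speed_mps": 2.5},
    {"steer_deg": -30.0, "speed_mps": 2.2},  # outlier has little influence
]
print(consensus_instruction(proposals))  # {'steer_deg': 5.0, 'speed_mps': 2.2}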
Abstract:
A gated time of flight (GT-TOF) range camera transmits a plurality of light pulses to illuminate features in a scene, gates ON a photosensor in the camera for one multi-exposure gate having a plurality of exposure periods following each of the plurality of light pulses to register amounts of light reflected by the features from the light pulses, and uses the registered amounts of light to determine distances to the features.
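For context, the sketch below shows the textbook two-exposure pulsed time-of-flight estimate, in which the split of reflected light between consecutive exposure periods encodes round-trip delay. It is a generic illustration of turning registered light amounts into distance, not necessarily the camera's specific multi-exposure gating scheme.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(q1: float, q2: float, pulse_width_s: float) -> float:
    # q1, q2: light registered in two consecutive exposure periods after the pulse.
    # The later the reflection arrives, the more of it falls into the second period.
    if q1 + q2 <= 0:
        raise ValueError("no reflected light registered")
    return 0.5 * C * pulse_width_s * (q2 / (q1 + q2))

# A 30 ns pulse with the return split evenly across both periods -> ~2.25 m.
print(f"{tof_distance(1.0, 1.0, 30e-9):.2f} m")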
Abstract:
Real-time registration of a camera array in an image capture device may be implemented in the field by adjusting a selected subset of independent parameters in a mapping function, termed registration coefficients, which have been determined to have the largest contribution to registration errors, so that the array can be maintained in its initial factory-optimized calibrated state. Because only a relatively small subset of registration coefficients needs to be adjusted from within a larger set of coefficients (which are typically determined using a specialized calibration target in a factory setting), far fewer matching patterns need to be identified in the respective images captured by cameras in the array in order to correct for registration errors. Such simplified pattern matching may be performed using images that are captured during normal camera array usage, so registration may be performed in real time in the field without the need for specialized calibration targets.
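As a toy example of refining only a small coefficient subset in the field, the sketch below re-estimates just a 2-D translation between two cameras from a few matched patterns, leaving the rest of the factory-calibrated mapping untouched. Which coefficients dominate the registration error is an assumption here; the abstract leaves that selection to the calibration analysis.

import numpy as np

def refine_offset(pts_cam_a: np.ndarray, pts_cam_b: np.ndarray) -> np.ndarray:
    # pts_cam_a, pts_cam_b: (N, 2) matched pattern locations from two cameras,
    # already passed through the factory mapping. The least-squares solution for
    # a pure translation is simply the mean residual (dx, dy).
    return np.mean(pts_cam_b - pts_cam_a, axis=0)

a = np.array([[100.0, 50.0], [220.0, 80.0], [310.0, 200.0]])
b = a + np.array([1.5, -0.7])   # simulated drift since factory calibration
print(refine_offset(a, b))      # ~[ 1.5 -0.7]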
Abstract:
In embodiments of immersive interactive telepresence, a system includes a vehicle that captures an experience of an environment in which the vehicle travels, and the experience includes audio and video of the environment. User interactive devices receive the audio and the video of the environment, and each of the user interactive devices represents the experience for one or more users who are remote from the environment. A trajectory planner is implemented to route the vehicle based on obstacle avoidance and user travel intent as the vehicle travels in the environment. The trajectory planner can route the vehicle to achieve a location objective in the environment without explicit direction input from a vehicle operator or from the users of the user interactive devices.
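The abstract does not describe the planner's internals; the sketch below uses a simple potential-field blend of user travel intent (a desired heading) with repulsion from sensed obstacles, purely to make the idea of obstacle avoidance plus travel intent concrete. The weights and the potential-field approach itself are assumptions.

import math

def plan_heading(intent_deg: float,
                 obstacles: list[tuple[float, float]],
                 repulse_gain: float = 5.0) -> float:
    # obstacles: (bearing_deg, distance_m) pairs sensed from the vehicle.
    # Attractive unit vector toward the user's intended heading.
    vx = math.cos(math.radians(intent_deg))
    vy = math.sin(math.radians(intent_deg))
    # Repulsive components pointing away from nearby obstacles, ~ 1/d^2.
    for bearing, dist in obstacles:
        w = repulse_gain / max(dist, 0.1) ** 2
        vx -= w * math.cos(math.radians(bearing))
        vy -= w * math.sin(math.radians(bearing))
    return math.degrees(math.atan2(vy, vx))

# Intent straight ahead (0 deg); an obstacle 20 deg to the right at 3 m pushes
# the commanded heading to the left (negative degrees).
print(round(plan_heading(0.0, [(20.0, 3.0)]), 1))  # about -21.7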
Abstract:
Various technologies described herein pertain to using detected physical gestures to cause calls to transfer between client devices. A physical gesture between a first client device and a second client device can be detected (e.g., utilizing the first client device, the second client device, a disparate client-side device, a server, etc.). The first client device participates in a call, while the second client device is not participating in the call at a time of the detection of the physical gesture. Responsive to detection of the physical gesture, participation of the second client device in the call can be initiated. Participation of the second client device in the call can be initiated by causing the call to transfer from the first client device to the second client device or causing the second client device to join the call while the first client device continues to participate in the call.
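As one concrete possibility, the sketch below treats the physical gesture as a "bump" detected from near-simultaneous accelerometer spikes on the two devices, and then either transfers the call or adds the second device. The bump heuristic, the time window, and the participant-set model of a call are illustrative assumptions, not the described system's API.

def bump_detected(peak_time_a: float, peak_time_b: float,
                  max_skew_s: float = 0.25) -> bool:
    # True if both devices reported an acceleration spike nearly simultaneously.
    return abs(peak_time_a - peak_time_b) <= max_skew_s

def transfer_or_join(call_participants: set[str], device_a: str, device_b: str,
                     join: bool = False) -> set[str]:
    # device_a is on the call; device_b is not at gesture time. A transfer
    # swaps the devices; a join keeps both on the call.
    updated = set(call_participants) | {device_b}
    if not join:
        updated.discard(device_a)
    return updated

call = {"phone"}
if bump_detected(10.02, 10.10):
    print(transfer_or_join(call, "phone", "tablet"))             # {'tablet'}
    print(transfer_or_join(call, "phone", "tablet", join=True))  # {'phone', 'tablet'}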