Abstract:
The disclosed embodiments relate to methods and systems for avoiding a collision between an obstacle and a vehicle, such as an aircraft, on a ground surface. A processor receives a detection signal from one of a plurality of proximity sensors. The detection signal indicates that the obstacle has been detected. In response to receiving the detection signal, a video image signal is transmitted from the processor to a display in the cockpit of the aircraft. The video image signal corresponds to the particular video imager associated with the particular proximity sensor that detected the obstacle. A video image of a particular region around the aircraft, including the obstacle, is displayed. In response to receiving the detection signal, the processor can also transmit an alert signal and a brake activation signal to activate a braking system to prevent the aircraft from colliding with the obstacle.
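The detection-to-response flow above can be sketched as follows. This is a minimal illustration, not the patented implementation: the sensor IDs, the sensor-to-imager mapping, the signal names, and the braking threshold are all assumptions introduced for the example.

```python
# Hypothetical sensor-to-imager mapping; real systems would configure this
# per airframe. All names here are illustrative assumptions.
SENSOR_TO_IMAGER = {"nose": "cam_front", "left_wing": "cam_left", "right_wing": "cam_right"}

def handle_detection(sensor_id, distance_m, brake_threshold_m=5.0):
    """Return the signals a processor might emit after a proximity detection."""
    return {
        # route the imager associated with the detecting sensor to the display
        "video_source": SENSOR_TO_IMAGER[sensor_id],
        # always alert the crew when an obstacle is detected
        "alert": True,
        # activate the braking system only when the obstacle is close
        "brake": distance_m < brake_threshold_m,
    }
```

The key design point reflected here is the pairing of each proximity sensor with a dedicated video imager, so the cockpit display automatically shows the region where the obstacle was detected.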
Abstract:
The present invention describes a road terrain detection system that comprises a method for classifying selected locations in the environment of a vehicle based on sensory input signals such as pixel values of a camera image. The method comprises a high-level spatial feature generation for selected locations in the environment, called base points. The spatial feature generation of the base points is based on a value-continuous confidence representation that captures visual and physical properties of the environment, generated by so-called base classifiers operating on raw sensory data. Consequently, the road terrain detection incorporates both local properties of sensor data and their spatial relationship in a two-step feature extraction process.
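The second stage of the two-step process can be sketched as sampling the first-stage confidence map at fixed spatial offsets around each base point. This is a simplified illustration under assumed data structures (a 2-D list of confidences and offset tuples), not the patented feature design.

```python
def spatial_features(confidence_map, base_point, offsets):
    """Second-stage features for a base point: sample the first-stage
    (base-classifier) confidence map at fixed spatial offsets, so each
    feature vector encodes the spatial context around the base point."""
    r, c = base_point
    h, w = len(confidence_map), len(confidence_map[0])
    feats = []
    for dr, dc in offsets:
        rr = min(max(r + dr, 0), h - 1)  # clamp offsets to the map borders
        cc = min(max(c + dc, 0), w - 1)
        feats.append(confidence_map[rr][cc])
    return feats
```

A classifier trained on such feature vectors sees both the local confidence at the base point (offset (0, 0)) and the confidences of its spatial neighborhood, which is the relationship the abstract describes.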
Abstract:
In a computerized system including a camera mounted in a moving vehicle, the camera acquires, consecutively and in real time, image frames including images of an object within the field of view of the camera. The range to the object from the moving vehicle is determined in real time. A dimension, e.g. a width, is measured in the respective images of two or more image frames, thereby producing measurements of the dimension. The measurements are processed to produce a smoothed measurement of the dimension. The dimension is measured subsequently in one or more subsequent frames. The range from the vehicle to the object is calculated in real time based on the smoothed measurement and the subsequent measurements. The processing preferably includes recursively calculating the smoothed dimension using a Kalman filter.
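The recursive smoothing and the range computation can be sketched as below. This is a minimal one-dimensional Kalman filter with a constant-width process model, plus the standard pinhole relation range = f · W / w; the noise parameters, focal length, and object width are assumptions for illustration, not values from the patent.

```python
class ScalarKalman:
    """Recursive 1-D Kalman filter smoothing a nearly constant dimension."""
    def __init__(self, process_var=1e-3, meas_var=1.0):
        self.x = None              # smoothed estimate of the dimension
        self.p = 1.0               # estimate variance
        self.q = process_var
        self.r = meas_var

    def update(self, z):
        if self.x is None:         # initialize from the first measurement
            self.x = z
            return self.x
        self.p += self.q                   # predict: dimension assumed constant
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with new measurement z
        self.p *= (1.0 - k)
        return self.x

def range_from_width(width_px, focal_px, object_width_m):
    """Pinhole model: range = focal length * real width / image width."""
    return focal_px * object_width_m / width_px
```

In practice the smoothed width stabilizes the range estimate against per-frame measurement jitter, which is the benefit the abstract attributes to the Kalman filter.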
Abstract:
An object of the present invention is to provide a method for preventing turbulence-induced accidents that can expand the detection range to about 20 km without increasing the size of the device or its energy consumption, can perform planar distribution monitoring of turbulence when turbulence is detected in the flight direction, and can output a signal for autopilot steering input that decreases fuselage shaking when the turbulence is difficult to avoid, as well as to provide a device having those functions. In the method for preventing turbulence-induced accidents according to the present invention, an optical remote airflow measurement device of a Doppler lidar system using a laser beam routinely enables distant turbulence to be detected, by fixing the laser emission course in the flight direction and taking a long integration time of the reception signal, and enables the planar distribution of the turbulence to be displayed when turbulence is detected, by scanning the laser emission course in a horizontal direction and switching the image display to a two-dimensional display.
Abstract:
An apparatus for providing information about a three-dimensional environment to a user includes: a handle; at least one sensor operatively coupled to the handle; a tactile pad disposed on the handle; a plurality of tactile buttons arrayed on the tactile pad; a plurality of actuators, wherein each actuator is operatively coupled to one of the plurality of tactile buttons to control a height thereof in relation to the tactile pad; and a processor which receives signals from the at least one sensor and controls positioning of the plurality of actuators to represent a physical environment sensed by the at least one sensor.
Abstract:
A robot obstacle detection system including a robot housing which navigates with respect to a surface and a sensor subsystem aimed at the surface for detecting the surface. The sensor subsystem includes an emitter which emits a signal having a field of emission and a photon detector having a field of view which intersects the field of emission at a region. The subsystem detects the presence of an object proximate the mobile robot and determines a value of a signal corresponding to the object. It compares the value to a predetermined value, moves the mobile robot in response to the comparison, and updates the predetermined value upon the occurrence of an event.
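The compare-move-update loop described above can be sketched as follows. This is a schematic illustration, assuming a scalar reflection signal whose strength drops near a cliff edge or dark object; the class and method names, the responses, and the recalibration event are all hypothetical.

```python
class ObstacleDetector:
    """Compares a detector signal to a stored reference value and
    chooses a motion response; the reference can be updated on events."""
    def __init__(self, baseline):
        self.baseline = baseline          # the "predetermined value"

    def step(self, signal_value):
        # a diffuse floor returns a strong reflection into the detector's
        # field of view; a cliff or absorbing object weakens it
        if signal_value < self.baseline:
            return "turn"                 # move the robot away in response
        return "forward"

    def recalibrate(self, new_baseline):
        # update the predetermined value upon an event,
        # e.g. transitioning to a darker floor surface
        self.baseline = new_baseline
```

Updating the reference value at runtime is what lets a single emitter/detector pair work across surfaces with very different reflectivities.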
Abstract:
System and method for preventing collisions between a vehicle and objects in a path of the vehicle includes a laser system arranged to direct at least one laser beam outward therefrom, the beam being in an eye-safe portion of the electromagnetic spectrum; an imaging receiver for receiving at least one laser beam reflected from objects in the path of the laser beam; a processor coupled to the receiver and arranged to receive signals derived from the received laser beam and process the signals to determine a distance between the laser system and the objects from which the laser beam has been reflected; and one or more reactive systems coupled to the processor. The processor controls the reactive systems to indicate the presence of objects at specific distances from the vehicle. This indication may be used to take preventive action to avoid the collision, either manually or automatically.
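The distance determination and the distance-dependent reaction can be sketched as a standard time-of-flight computation. This is a generic illustration, not the patented signal processing; the warning and braking thresholds are invented for the example.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds):
    """Time-of-flight ranging: the beam travels out and back,
    so the one-way distance is half the round-trip path."""
    return C * t_seconds / 2.0

def reaction(distance_m, warn_m=50.0, brake_m=10.0):
    """Map a measured distance to a reactive-system command.
    The thresholds are illustrative assumptions."""
    if distance_m < brake_m:
        return "brake"                 # automatic preventive action
    if distance_m < warn_m:
        return "warn"                  # indicate the object to the driver
    return "none"
```

A 1 µs round trip, for example, corresponds to an object roughly 150 m away, well outside either threshold above.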
Abstract:
The adverse effects of various sources of error present in satellite imaging when determining ground location information are reduced to provide more accurate ground location information for imagery, thereby rendering the information more useful for various entities utilizing the images. The determination of ground location coordinates associated with one or more pixels of an image acquired by an imaging system aboard a satellite or other remote platform includes obtaining a first earth image associated with a first earth view, obtaining a second earth image associated with a second earth view, the second earth image not overlapping the first earth image, and using known location information associated with the first earth image to determine location information associated with the second earth image.