Abstract:
A vehicle, and a system and method of operating the vehicle. The system includes a reasoning engine, an episodic memory, a resolver and a controller. The reasoning engine infers a plurality of possible scenarios based on a current state of an environment of the vehicle. The episodic memory determines a historical likelihood for each of the plurality of possible scenarios. The resolver selects a scenario from the plurality of possible scenarios using the historical likelihoods. The controller operates the vehicle based on the selected scenario.
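The reasoning-engine/episodic-memory/resolver flow above can be sketched as follows. This is a minimal illustration, not the patented implementation: the scenario labels, the frequency-count memory, and the `resolve` selection rule are all assumptions introduced for the example.

```python
# Hypothetical sketch: the reasoning engine proposes scenarios, the
# episodic memory scores each by how often similar episodes occurred,
# and the resolver picks the historically most likely one.
from collections import Counter

class EpisodicMemory:
    """Stores past scenario labels and returns empirical likelihoods."""
    def __init__(self, episodes):
        self.counts = Counter(episodes)
        self.total = sum(self.counts.values())

    def likelihood(self, scenario):
        return self.counts.get(scenario, 0) / self.total if self.total else 0.0

def resolve(possible_scenarios, memory):
    """Select the inferred scenario with the highest historical likelihood."""
    return max(possible_scenarios, key=memory.likelihood)

memory = EpisodicMemory(["cut_in", "cut_in", "hard_brake", "lane_keep"])
selected = resolve(["cut_in", "hard_brake"], memory)  # → "cut_in"
```

A real episodic memory would index on richer state than a label, but the selection step reduces to the same argmax over stored likelihoods.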
Abstract:
A control system of the autonomous vehicle may generate multiple possible behavior control movements based on the driving goal and an assessment of the vehicle environment. The method and system then select the best behavior control from among the multiple possible movements, the selection being based on a quantitative grading of its driving behavior.
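The grade-and-select step above can be sketched with a simple weighted score. The grading criteria (safety, comfort, progress) and their weights are placeholders invented for this example; the abstract does not specify how the quantitative grade is formed.

```python
# Illustrative sketch: grade each candidate behavior control movement
# and select the one with the best overall grade.
def grade(candidate, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of per-criterion scores in [0, 1] (assumed criteria)."""
    ws, wc, wp = weights
    return (ws * candidate["safety"]
            + wc * candidate["comfort"]
            + wp * candidate["progress"])

def select_behavior(candidates):
    """Pick the candidate movement with the highest quantitative grade."""
    return max(candidates, key=grade)

candidates = [
    {"name": "keep_lane",   "safety": 0.9, "comfort": 0.8, "progress": 0.4},
    {"name": "change_left", "safety": 0.6, "comfort": 0.5, "progress": 0.9},
]
best = select_behavior(candidates)  # keep_lane: 0.77 vs change_left: 0.63
```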
Abstract:
A method of constructing a probabilistic representation of the location of an object within a workspace includes obtaining a plurality of 2D images of the workspace, with each respective 2D image being acquired from a camera disposed at a different location within the workspace. A foreground portion is identified within at least two of the plurality of 2D images, and each foreground portion is projected to each of a plurality of parallel spaced planes. An area is identified within each of the plurality of planes where a plurality of projected foreground portions overlap. These identified areas are combined to form a 3D bounding envelope of an object. This bounding envelope is a probabilistic representation of the location of the object within the workspace.
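The plane-by-plane overlap test above can be sketched on a discretized workspace. Real systems project foregrounds through calibrated camera models; here the per-camera, per-plane projections are given directly as sets of grid cells, which is a simplifying assumption for illustration.

```python
# Minimal sketch: cells where foreground projections from at least
# min_overlap cameras coincide, collected per parallel plane, form the
# 3D bounding envelope of the object.
def bounding_envelope(projections_per_camera, min_overlap=2):
    """projections_per_camera: list (one entry per camera) of dicts
    mapping plane index -> set of (x, y) grid cells covered by that
    camera's projected foreground. Returns plane -> overlapping cells."""
    planes = set()
    for cam in projections_per_camera:
        planes.update(cam)
    envelope = {}
    for z in planes:
        counts = {}
        for cam in projections_per_camera:
            for cell in cam.get(z, set()):
                counts[cell] = counts.get(cell, 0) + 1
        cells = {c for c, n in counts.items() if n >= min_overlap}
        if cells:
            envelope[z] = cells
    return envelope

cam_a = {0: {(1, 1), (1, 2)}, 1: {(1, 1)}}
cam_b = {0: {(1, 2), (2, 2)}, 1: {(1, 1), (2, 1)}}
env = bounding_envelope([cam_a, cam_b])  # overlap at (1,2) on plane 0, (1,1) on plane 1
```

Stacking the per-plane overlap areas over the plane heights yields the probabilistic 3D envelope the abstract describes.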
Abstract:
A virtual lane estimation system includes a memory device, a sensor and a computer. The memory device is configured to store a road map that corresponds to a portion of a road ahead of a vehicle. The sensor is configured to observe a plurality of trajectories of a plurality of neighboring vehicles that traverse the portion of the road. The computer is configured to initialize a recursive self-organizing map as a plurality of points arranged as a two-dimensional grid aligned with the road map, train the points in the recursive self-organizing map in response to the trajectories, generate a directed graph that contains one or more virtual lanes through the road map in response to the points trained to the trajectories, and generate a control signal that controls navigation of the vehicle through the portion of the road in response to the virtual lanes in the directed graph.
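The training step, where grid points are pulled toward observed trajectories so the trained grid traces the virtual lanes, can be sketched with a plain self-organizing map update. This omits the recursive (context-carrying) part of the recursive SOM the abstract names, and the learning rate, neighborhood radius, and epoch count are arbitrary illustrative choices.

```python
# Hedged sketch: grid points initialized over the road map migrate
# toward observed trajectory samples via a standard SOM update.
import math

def train_som(grid, trajectories, lr=0.5, radius=1.0, epochs=20):
    """grid: list of mutable [x, y] points; trajectories: (x, y) samples."""
    for _ in range(epochs):
        for px, py in trajectories:
            # Best-matching unit: the grid point nearest the sample.
            bmu = min(range(len(grid)),
                      key=lambda i: (grid[i][0] - px) ** 2 + (grid[i][1] - py) ** 2)
            for i, (gx, gy) in enumerate(grid):
                d = math.hypot(gx - grid[bmu][0], gy - grid[bmu][1])
                h = math.exp(-(d * d) / (2 * radius * radius))  # neighborhood weight
                grid[i][0] += lr * h * (px - gx)
                grid[i][1] += lr * h * (py - gy)
    return grid

grid = [[float(x), 0.0] for x in range(5)]       # initial row of grid points
samples = [(float(x), 1.0) for x in range(5)]    # observed lane centerline at y = 1
trained = train_som(grid, samples)               # points converge toward y = 1
```

Connecting neighboring trained points in travel direction would then give the directed graph of virtual lanes.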
Abstract:
An autonomous vehicle, a cognitive system for operating an autonomous vehicle, and a method of operating an autonomous vehicle. The cognitive system includes one or more hypothesizer modules, a hypothesis resolver, one or more decider modules, and a decision resolver. Data related to an agent is received at the cognitive system. The one or more hypothesizer modules create a plurality of hypotheses for a trajectory of the agent based on the received data. The hypothesis resolver selects a single hypothesis for the trajectory of the agent from the plurality of hypotheses based on a selection criterion. The one or more decider modules create a plurality of decisions for a trajectory of the autonomous vehicle based on the selected hypothesis for the agent. The decision resolver selects a trajectory for the autonomous vehicle from the plurality of decisions. The autonomous vehicle is operated based on the selected trajectory.
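The hypothesize-resolve-decide-resolve pipeline can be sketched end to end. The specific hypotheses, confidence values, decision costs, and selection rules below are invented for illustration; the abstract leaves the selection criterion and decision scoring unspecified.

```python
# Illustrative sketch of the cognitive pipeline for one agent.
def hypothesizers(agent_data):
    """Each hypothesizer proposes (agent trajectory, confidence) pairs."""
    return [("agent_keeps_lane", 0.7), ("agent_cuts_in", 0.3)]

def resolve_hypothesis(hypotheses):
    """Assumed selection criterion: highest confidence wins."""
    return max(hypotheses, key=lambda h: h[1])[0]

def deciders(agent_hypothesis):
    """Each decider proposes (ego trajectory, cost) given the hypothesis."""
    if agent_hypothesis == "agent_cuts_in":
        return [("slow_down", 1.0), ("keep_speed", 5.0)]
    return [("keep_speed", 1.0), ("slow_down", 2.0)]

def resolve_decision(decisions):
    """Decision resolver: pick the lowest-cost ego trajectory."""
    return min(decisions, key=lambda d: d[1])[0]

hypothesis = resolve_hypothesis(hypothesizers({"agent_id": 7}))
trajectory = resolve_decision(deciders(hypothesis))  # ego trajectory to execute
```

The two-stage structure keeps agent prediction (hypotheses) cleanly separated from ego planning (decisions), each with its own resolver.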
Abstract:
In various embodiments, methods, systems, and autonomous vehicles are provided. In one exemplary embodiment, a method is provided that includes obtaining first sensor inputs pertaining to one or more actors in proximity to an autonomous vehicle; obtaining second sensor inputs pertaining to operation of the autonomous vehicle; obtaining, via a processor, first neural network outputs via a first neural network, using the first sensor inputs; and obtaining, via the processor, second neural network outputs via a second neural network, using the first neural network outputs and the second sensor inputs, the second neural network outputs providing one or more recommended actions for controlling the autonomous vehicle.
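The two-stage arrangement can be sketched with toy feed-forward stages: the first maps actor sensor inputs to an intermediate representation, and the second consumes that representation together with ego-vehicle inputs to score candidate actions. The weights, layer sizes, and action set are arbitrary placeholders, not the patented networks.

```python
# Toy sketch of the chained two-network architecture.
import math

def first_network(actor_inputs):
    """Stage 1: compress 3 actor features into a 2-value summary."""
    w = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1]]
    return [math.tanh(sum(wi * x for wi, x in zip(row, actor_inputs)))
            for row in w]

def second_network(stage1_out, ego_inputs):
    """Stage 2: combine stage-1 outputs and ego inputs into action scores."""
    x = stage1_out + ego_inputs
    w = [[0.6, -0.4, 0.2, 0.1],    # score for "brake"
         [-0.3, 0.5, 0.1, 0.4]]    # score for "maintain"
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

actions = ["brake", "maintain"]
scores = second_network(first_network([1.0, 0.5, -0.5]), [0.2, 0.8])
recommended = actions[max(range(len(scores)), key=scores.__getitem__)]
```

The key structural point is that the second network sees both the first network's outputs and the ego-operation inputs, exactly as the embodiment describes.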
Abstract:
A machine vision system for a controllable robotic device proximal to a workspace includes an image acquisition sensor arranged to periodically capture vision signal inputs each including an image of a field of view including the workspace. A controller operatively couples to the robotic device and includes a non-transitory memory component including an executable vision perception routine. The vision perception routine includes a focus loop control routine operative to dynamically track a focus object in the workspace and a background loop control routine operative to monitor a background of the workspace. The focus loop control routine executes asynchronously and in parallel with the background loop control routine to determine a combined resultant including the focus object and the background based upon the periodically captured vision signal inputs. The controller is operative to control the robotic device to manipulate the focus object based upon the focus loop control routine.
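The two asynchronous loops can be sketched with threads: a fast focus loop refreshes the tracked object while a slower background loop refreshes the scene model, and the latest results from each are combined. The loop rates, payloads, and shared-state scheme are invented for illustration.

```python
# Hedged sketch: focus and background loops running asynchronously in
# parallel, with their most recent results combined at the end.
import threading
import time

state = {"focus": None, "background": None}
lock = threading.Lock()
stop = threading.Event()

def focus_loop():
    """Fast loop: update the tracked focus-object estimate each cycle."""
    n = 0
    while not stop.is_set():
        n += 1
        with lock:
            state["focus"] = ("object_pose", n)
        time.sleep(0.005)

def background_loop():
    """Slow loop: refresh the background model less frequently."""
    n = 0
    while not stop.is_set():
        n += 1
        with lock:
            state["background"] = ("scene_model", n)
        time.sleep(0.02)

threads = [threading.Thread(target=focus_loop),
           threading.Thread(target=background_loop)]
for t in threads:
    t.start()
time.sleep(0.1)          # let both loops run for a while
stop.set()
for t in threads:
    t.join()
with lock:
    combined = (state["focus"], state["background"])  # combined resultant
```

Decoupling the loop rates is the point of the design: the focus object gets low-latency updates without paying the cost of full-scene monitoring on every frame.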
Abstract:
A human monitoring system includes a plurality of cameras and a visual processor. The plurality of cameras are disposed about a workspace area, where each camera is configured to capture a video feed that includes a plurality of image frames, and the plurality of image frames are time-synchronized between the respective cameras. The visual processor is configured to receive the plurality of image frames from the plurality of cameras and detect the presence of a human from at least one of the plurality of image frames using pattern matching performed on an input image. The input image to the pattern matching is a sliding window portion of the image frame that is aligned with a rectified coordinate system such that a vertical axis in the workspace area is aligned with a vertical axis of the input image.
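The sliding-window pattern matching can be sketched on a small grid. The frame is assumed to be already rectified, so its columns align with the workspace vertical; the exact-match template below stands in for whatever human-shape classifier the real system uses.

```python
# Minimal sketch: slide a window over a rectified frame and flag
# positions that match an upright (vertically aligned) template.
def sliding_windows(frame, win_h, win_w):
    """Yield (row, col, window) for every window position in the frame."""
    rows, cols = len(frame), len(frame[0])
    for r in range(rows - win_h + 1):
        for c in range(cols - win_w + 1):
            yield r, c, [row[c:c + win_w] for row in frame[r:r + win_h]]

def matches(window, template):
    """Toy pattern match: exact agreement with the template."""
    return window == template

# 1s mark foreground pixels; the 2x1 upright template stands in for a
# vertically aligned human silhouette in the rectified frame.
frame = [[0, 0, 0],
         [0, 1, 0],
         [0, 1, 0]]
template = [[1], [1]]
detections = [(r, c) for r, c, w in sliding_windows(frame, 2, 1)
              if matches(w, template)]  # one hit at (1, 1)
```

Rectifying before matching is what lets a single upright template suffice; without it, the matcher would need rotated variants per camera pose.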
Abstract:
A human monitoring system includes a plurality of cameras and a visual processor. The plurality of cameras are disposed about a workspace area, where each camera is configured to capture a video feed that includes a plurality of image frames, and the plurality of image frames are time-synchronized between the respective cameras. The visual processor is configured to receive the plurality of image frames from the plurality of cameras and determine an integrity score for each respective image frame. The processor may then isolate a foreground section from two or more of the views, determine a principal body axis for each respective foreground section, and determine a location point according to a weighted least squares function amongst the various principal body axes.
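The final localization step can be sketched in closed form. For simplicity each principal body axis is treated as a 2-D line on the floor plane with an integrity-score weight, and the location point minimizes the weighted sum of squared perpendicular distances to all axes; the 2-D reduction and the weight values are assumptions for the example.

```python
# Hedged sketch: weighted least-squares point nearest a set of lines.
# For each line with anchor a and unit direction u, the perpendicular
# penalty matrix is M = w * (I - u u^T); solving (sum M_i) p = sum M_i a_i
# gives the optimal location point p.
def wls_point(lines):
    """lines: list of ((ax, ay), (dx, dy), weight); direction need not
    be unit length. Returns the weighted least-squares (x, y) point."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (ax, ay), (dx, dy), w in lines:
        n = (dx * dx + dy * dy) ** 0.5
        ux, uy = dx / n, dy / n
        m = [[w * (1 - ux * ux), w * (-ux * uy)],
             [w * (-ux * uy), w * (1 - uy * uy)]]
        A[0][0] += m[0][0]; A[0][1] += m[0][1]
        A[1][0] += m[1][0]; A[1][1] += m[1][1]
        b[0] += m[0][0] * ax + m[0][1] * ay
        b[1] += m[1][0] * ax + m[1][1] * ay
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x, y

# Two equally weighted axes crossing at (1, 2): a vertical axis through
# x = 1 and a horizontal axis through y = 2.
point = wls_point([((1.0, 0.0), (0.0, 1.0), 1.0),
                   ((0.0, 2.0), (1.0, 0.0), 1.0)])
```

Weighting each axis by its frame's integrity score, as the abstract describes, simply scales the corresponding `M` term so noisier views pull the solution less.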