Abstract:
Systems and methods for detecting and correcting defective products include capturing at least one image of a product with at least one image sensor to generate an original image of the product. An encoder encodes portions of an image extracted from the original image to generate feature space vectors. A decoder decodes the feature space vectors to reconstruct the portions of the image into reconstructed portions by predicting defect-free structural features in each of the portions according to hidden layers trained to predict defect-free products. The reconstructed portions are merged into a reconstructed image of a defect-free representation of the product. The reconstructed image is communicated to a contrastor to detect anomalies indicating defects in the product.
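The reconstruct-and-compare flow above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the trained encoder/decoder is replaced by a stub that predicts every patch as a defect-free template value, and the patch size and threshold are assumptions.

```python
# Sketch of patch-wise reconstruct-and-compare defect detection.
# Hypothetical stand-in: reconstruct_patch() replaces a trained
# encoder/decoder and simply predicts the defect-free template value.

def split_into_patches(image, size):
    """Yield (row, col, patch) for non-overlapping square patches."""
    for r in range(0, len(image), size):
        for c in range(0, len(image[0]), size):
            yield r, c, [row[c:c + size] for row in image[r:r + size]]

def reconstruct_patch(patch, template=0.0):
    """Stub for the encoder/decoder: predict defect-free structure per pixel."""
    return [[template] * len(row) for row in patch]

def merge_patches(parts, height, width):
    """Merge reconstructed patches back into one defect-free image."""
    out = [[0.0] * width for _ in range(height)]
    for r, c, patch in parts:
        for i, row in enumerate(patch):
            for j, value in enumerate(row):
                out[r + i][c + j] = value
    return out

def contrastor(original, reconstructed, threshold=0.5):
    """Flag pixels where the product deviates from its reconstruction."""
    return [(i, j)
            for i, (row_o, row_r) in enumerate(zip(original, reconstructed))
            for j, (o, v) in enumerate(zip(row_o, row_r))
            if abs(o - v) > threshold]

# 4x4 image of an otherwise defect-free product with one defective pixel.
image = [[0.0] * 4 for _ in range(4)]
image[2][3] = 1.0
parts = [(r, c, reconstruct_patch(p)) for r, c, p in split_into_patches(image, 2)]
reconstruction = merge_patches(parts, 4, 4)
defects = contrastor(image, reconstruction)
```

Because the reconstruction predicts only defect-free structure, any large pixel-wise difference localizes a defect, which is the role the contrastor plays in the abstract.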
Abstract:
Systems and methods are disclosed to assist a driver with a dangerous condition by creating a graph representation where traffic participants and static elements are the vertices and the edges are relations between pairs of vertices; adding attributes to the vertices and edges of the graph based on information obtained on the driving vehicle, the traffic participants and additional information; creating a codebook of dangerous driving situations, each represented as graphs; performing subgraph matching between the graphs in the codebook and the graph representing a current driving situation to select a set of matching graphs from the codebook; determining a distance metric between each selected codebook graph and the matching subgraph of the current driving situation; from codebook graphs with a low distance, determining potential dangers; and generating an alert if one or more of the codebook dangers are imminent.
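The codebook-matching steps above can be sketched as follows. This is a simplified illustration under stated assumptions: graphs are reduced to vertex-type and relation sets, the subgraph test is a naive subset check (a real system would use a proper subgraph-isomorphism algorithm such as VF2), and the distance metric compares a single illustrative attribute (closing speed).

```python
# Sketch of codebook-based danger detection over graph representations.
# Hypothetical data model: a graph is a dict with "vertices" (types),
# "relations" (typed edges), and one attribute, "closing_speed".

def matches(codebook_graph, scene_graph):
    """Naive subgraph test: every codebook vertex type and relation appears."""
    return (codebook_graph["vertices"] <= scene_graph["vertices"]
            and codebook_graph["relations"] <= scene_graph["relations"])

def distance(codebook_graph, scene_graph):
    """Toy attribute distance: absolute gap in closing speed (m/s)."""
    return abs(codebook_graph["closing_speed"] - scene_graph["closing_speed"])

def imminent_dangers(codebook, scene, max_distance=5.0):
    """Names of matching codebook situations within the distance bound."""
    return [g["name"] for g in codebook
            if matches(g, scene) and distance(g, scene) <= max_distance]

codebook = [
    {"name": "tailgating", "vertices": {"ego", "car"},
     "relations": {("ego", "close_behind", "car")}, "closing_speed": 8.0},
    {"name": "pedestrian_crossing", "vertices": {"ego", "pedestrian"},
     "relations": {("pedestrian", "on", "crosswalk")}, "closing_speed": 3.0},
]
scene = {"vertices": {"ego", "car", "truck"},
         "relations": {("ego", "close_behind", "car")}, "closing_speed": 7.0}
```

Matching first prunes the codebook structurally, and only the low-distance survivors trigger an alert, mirroring the two-stage select-then-measure flow in the abstract.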
Abstract:
Methods and systems for detecting and correcting anomalies include predicting normal behavior of a monitored system based on training data that includes only sensor data collected during normal behavior of the monitored system. The predicted normal behavior is compared to recent sensor data to determine that the monitored system is behaving abnormally. A corrective action is performed responsive to the abnormal behavior to correct the abnormal behavior.
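A minimal sketch of this train-on-normal-only approach follows. The model here is an assumed per-sensor Gaussian baseline (mean and standard deviation with a k-sigma test); the abstract does not specify the predictor, so this stands in for whatever learned model of normal behavior is used.

```python
# Sketch of anomaly detection trained only on normal-behavior sensor data.
# Assumption: "predicted normal behavior" is modeled as per-sensor
# (mean, stdev), and a reading beyond k stdevs on any sensor is abnormal.
import statistics

def fit_normal_model(normal_readings):
    """Learn per-sensor (mean, stdev) from normal-operation data only."""
    return [(statistics.mean(s), statistics.stdev(s))
            for s in zip(*normal_readings)]

def is_abnormal(model, reading, k=3.0):
    """Abnormal if any sensor deviates more than k stdevs from normal."""
    return any(abs(x - mu) > k * sigma
               for (mu, sigma), x in zip(model, reading))

def monitor(model, reading, corrective_action):
    """Compare predicted normal behavior with a recent reading; correct if abnormal."""
    if is_abnormal(model, reading):
        corrective_action()
        return True
    return False

# Two-sensor system: training data collected only during normal behavior.
model = fit_normal_model([(1.0, 10.0), (1.1, 9.9), (0.9, 10.1), (1.0, 10.0)])
```

The corrective action is passed in as a callback, so the same monitor can trigger whatever remediation (restart, throttle, alert) suits the monitored system.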
Abstract:
A computer-implemented method and system are provided for driving assistance. The system includes an image capture device configured to capture image data relative to an outward view from a motor vehicle. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different driving scenes of a natural driving environment. The processor is further configured to provide a user-perceptible object detection result to a user of the motor vehicle.
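The inference side of this pipeline can be sketched as below. The trained localization CNN is replaced by a hypothetical stub that projects radar returns (range, azimuth) into map space; in the described system, the network would consume the image and radar pair jointly. The `Detection` type and the warning message format are illustrative assumptions.

```python
# Sketch of (image, radar) -> map-space detections -> user-perceptible result.
# The stub below ignores the image; a trained object-localization CNN would
# use both modalities, as described in the abstract.
from dataclasses import dataclass
import math

@dataclass
class Detection:
    label: str
    x: float  # map-space metres, lateral
    y: float  # map-space metres, longitudinal

def localize_objects(image, radar_returns):
    """Stub for the CNN: convert (range_m, azimuth_rad) returns to map space."""
    return [Detection("object", r * math.sin(az), r * math.cos(az))
            for r, az in radar_returns]

def user_perceptible_result(detections, warn_within=10.0):
    """Render a simple driver-facing message for nearby detections."""
    near = [d for d in detections if math.hypot(d.x, d.y) < warn_within]
    return f"{len(near)} object(s) within {warn_within:.0f} m" if near else "clear"

detections = localize_objects(None, [(5.0, 0.0), (50.0, 0.3)])
```

Keeping detections in real-world map coordinates (rather than pixel coordinates) is what lets the same output drive both the user-facing alert and any downstream planning.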
Abstract:
A computer-implemented method for training a deep neural network to recognize traffic scenes (TSs) from multi-modal sensors and knowledge data is presented. The computer-implemented method includes receiving data from the multi-modal sensors and the knowledge data and extracting feature maps from the multi-modal sensors and the knowledge data by using a traffic participant (TP) extractor to generate a first set of data, using a static objects extractor to generate a second set of data, and using an additional information extractor to generate a third set of data. The computer-implemented method further includes training the deep neural network, with training data, to recognize the TSs from a viewpoint of a vehicle.
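The three-extractor data assembly can be sketched as follows. Each extractor is reduced to a function returning a flat feature list; in the described method they would emit spatial feature maps consumed by the deep network. All field names and feature choices here are illustrative assumptions.

```python
# Sketch of assembling one training example from the three extractors:
# traffic participants, static objects, and additional (knowledge) information.

def traffic_participant_features(sensors):
    """First set of data: features of moving participants (cars, pedestrians)."""
    return [p["speed"] for p in sensors["participants"]]

def static_object_features(sensors):
    """Second set of data: features of static elements (signs, lanes)."""
    return [float(len(sensors["static_objects"]))]

def additional_info_features(knowledge):
    """Third set of data: knowledge-base features (e.g. speed limit)."""
    return [knowledge["speed_limit"]]

def build_training_example(sensors, knowledge, scene_label):
    """Concatenate the three feature sets into one labeled training example."""
    features = (traffic_participant_features(sensors)
                + static_object_features(sensors)
                + additional_info_features(knowledge))
    return features, scene_label

sensors = {"participants": [{"speed": 12.0}, {"speed": 0.0}],
           "static_objects": ["stop_sign"]}
knowledge = {"speed_limit": 50.0}
example = build_training_example(sensors, knowledge, "intersection")
```

Concatenating per-extractor outputs keeps each modality's contribution inspectable before the network fuses them during training.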
Abstract:
A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
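The compare-to-predetermined-scenes step and the LED indicator can be sketched together. The nearest-neighbor comparison, the feature vectors, and the color mapping below are all illustrative assumptions; the abstract specifies only that real-time TSs are compared to predetermined TSs and that LEDs emit different colors per situation.

```python
# Sketch of matching a real-time traffic scene (TS) to predetermined TSs
# and mapping the predicted driving situation to an LED warning color.

def nearest_situation(current_ts, predetermined):
    """Pick the predetermined TS closest to the real-time one (squared distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(predetermined,
               key=lambda name: sq_dist(current_ts, predetermined[name]))

# Assumed situation-to-color mapping for the warning LEDs.
LED_COLORS = {"clear": "green", "caution": "yellow", "danger": "red"}

def led_color(current_ts, predetermined):
    """Map the predicted driving situation to an LED warning color."""
    return LED_COLORS[nearest_situation(current_ts, predetermined)]

# Illustrative TS feature vectors: (closing speed in m/s, pedestrian count).
predetermined = {"clear": [0.0, 0.0], "caution": [5.0, 1.0], "danger": [10.0, 2.0]}
```

A nearest-neighbor match is the simplest realization of "compared to predetermined TSs"; a deployed device could equally threshold a learned similarity score.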
Abstract:
A computer-implemented method and system are provided for video-based anomaly detection. The method includes forming, by a processor, a Deep High-Order Convolutional Neural Network (DHOCNN)-based model having a one-class Support Vector Machine (SVM) as a loss layer of the DHOCNN-based model. An objective of the SVM is configured to perform the video-based anomaly detection. The method further includes generating, by the processor, one or more predictions of an impending anomaly based on the DHOCNN-based model applied to an input image. The method also includes initiating, by the processor, an action to a hardware device to mitigate expected harm to at least one item selected from the group consisting of the hardware device, another hardware device related to the hardware device, and a person related to the hardware device.
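The one-class SVM objective used as the loss layer can be written out concretely. This sketch uses the standard Schölkopf-style formulation; the feature vectors would come from the DHOCNN's last hidden layer, but here they are supplied directly, and the weights `w` and offset `rho` are fixed rather than learned.

```python
# Sketch of a one-class SVM as a loss/decision layer.
# Standard objective: 0.5*||w||^2 - rho + (1/(nu*n)) * sum_i max(0, rho - w.x_i)
# Assumption: x_i are features produced upstream (here, given directly).

def oc_svm_loss(w, rho, features, nu=0.1):
    """One-class SVM objective over a batch of feature vectors."""
    n = len(features)
    reg = 0.5 * sum(wi * wi for wi in w) - rho
    hinge = sum(max(0.0, rho - sum(wi * xi for wi, xi in zip(w, x)))
                for x in features)
    return reg + hinge / (nu * n)

def predicts_anomaly(w, rho, x):
    """Decision layer: a frame scoring w.x below rho lies outside the normal region."""
    return sum(wi * xi for wi, xi in zip(w, x)) < rho

w, rho = [1.0, 0.0], 1.0
loss = oc_svm_loss(w, rho, [[2.0, 0.0], [0.5, 0.0]])
```

Minimizing this objective pushes normal frames above the margin `rho`, so at inference time a low score is exactly the "impending anomaly" signal that triggers the mitigating action.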
Abstract:
A system and method for a motorized land vehicle that detects objects obstructing a driver's view of an active road includes an inertial measurement unit-enabled global positioning system (GPS/IMU) subsystem for obtaining global positioning system (GPS) position and heading data of a land vehicle operated by the driver as the vehicle travels along a road, a street map subsystem for obtaining street map data of the GPS position of the vehicle using the GPS position and heading data as the vehicle travels along the road, and a three-dimensional (3D) object detector subsystem for detecting objects ahead of the vehicle and determining 3D position and 3D size data of each of the detected objects ahead of the vehicle. The street map subsystem merges the street map data, the GPS position and heading data of the vehicle, and the 3D position data and 3D size data of the detected objects to create a real-time two-dimensional (2D) top-view map representation of a traffic scene ahead of the vehicle. The street map subsystem finds active roads ahead of the vehicle in the traffic scene, and finds each active road segment of the active roads ahead of the vehicle that is obstructed by one of the detected objects. A driver alert subsystem notifies the driver of the vehicle of each of the active road segments that is obstructed by one of the detected objects.
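The obstruction check on the 2D top-view map can be sketched as a one-dimensional overlap test. This is a deliberate simplification: active roads are reduced to longitudinal (start, end) segments in metres ahead of the vehicle, and each detected object's 3D position and size are reduced to a longitudinal footprint; segment lengths and the alert wording are assumptions.

```python
# Sketch of finding active road segments whose view is obstructed by a
# detected object on the 2D top-view map. Segments and object footprints
# are (start_m, end_m) intervals along the driving direction.

def obstructed_segments(road_segments, object_footprints):
    """Return segments whose extent overlaps any detected object's footprint."""
    def overlaps(seg, obj):
        return seg[0] < obj[1] and obj[0] < seg[1]
    return [seg for seg in road_segments
            if any(overlaps(seg, obj) for obj in object_footprints)]

def driver_alerts(road_segments, object_footprints):
    """One driver notification per obstructed active road segment."""
    return [f"view of road segment {s[0]}-{s[1]} m is obstructed"
            for s in obstructed_segments(road_segments, object_footprints)]

segments = [(0, 20), (20, 40), (40, 60)]   # active road ahead, in metres
footprints = [(25, 32)]                     # e.g. a parked truck
```

In the full system these intervals would come from merging GPS/IMU pose, street map geometry, and the 3D detector's outputs into the top-view map; the interval test itself is the final per-segment decision the driver alert subsystem acts on.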