Abstract:
Systems and methods for correspondence estimation and flexible ground modeling include communicating two-dimensional (2D) images of an environment to a correspondence estimation module (200), including a first image (102a) and a second image (102b) captured by an image capturing device (100). First features, including geometric features and semantic features, are hierarchically extracted from the first image with a first convolutional neural network (CNN) according to activation map weights (210a), and second features, including geometric features and semantic features, are hierarchically extracted from the second image with a second CNN according to the activation map weights (210b). Correspondences between the first features and the second features are estimated, including hierarchical fusing of geometric correspondences and semantic correspondences (220). A three-dimensional (3D) model of a terrain is estimated using the estimated correspondences belonging to the terrain surface (300). Relative locations of elements and objects in the environment are determined according to the 3D model of the terrain (300). A user is notified of the relative locations (400).
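The shared-weight, two-level matching described above can be sketched in one dimension. This is a minimal stand-in, not the patented CNN: the "geometric" level is a local filter response, the "semantic" level is a pooled version of it, both produced with the same shared weights, and the hierarchical fusion keeps a fine match only when it agrees with the coarse match of its enclosing block. All function names and the 4-pixel pooling factor are assumptions for illustration.

```python
import numpy as np

def extract_features(image, weights):
    # Stand-in for a CNN level hierarchy: a fine "geometric" response
    # (local filtering) and a coarse "semantic" response (4-pixel pooling),
    # both computed from the same shared weights, mirroring the shared
    # activation map weights (210a/210b) in the abstract.
    geometric = np.convolve(image, weights, mode="same")
    semantic = geometric.reshape(-1, 4).mean(axis=1)
    return geometric, semantic

def match(feats_a, feats_b):
    # Nearest-neighbour correspondences by absolute feature difference.
    return [int(np.argmin(np.abs(feats_b - f))) for f in feats_a]

def fuse_correspondences(image_a, image_b, weights):
    geo_a, sem_a = extract_features(image_a, weights)
    geo_b, sem_b = extract_features(image_b, weights)
    coarse = match(sem_a, sem_b)   # semantic correspondences
    fine = match(geo_a, geo_b)     # geometric correspondences
    # Hierarchical fusion: keep a fine correspondence only if it falls
    # inside the coarse correspondence of its enclosing 4-pixel block.
    return [f for i, f in enumerate(fine) if f // 4 == coarse[i // 4]]
```

With identical images, fused correspondences are (nearly) the identity mapping; inconsistent edge matches are filtered out by the coarse level.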
Abstract:
Systems and methods are disclosed for enhancing cybersecurity in a computer system by detecting safeness levels of executables. An installation lineage of an executable is identified in which entities forming the installation lineage include at least an installer of the monitored executable, and a network address from which the executable is retrieved. Each entity of the entities forming the installation lineage is individually analyzed using at least one safeness analysis. Results of the at least one safeness analysis of each entity are inherited by other entities in the lineage of the executable. A backtrace result for the executable is determined based on the inherited safeness evaluation of the executable. A total safeness of the executable, based on at least the backtrace result, is evaluated against a set of thresholds to detect a safeness level of the executable. The safeness level of the executable is output on a display screen.
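The lineage backtrace and threshold evaluation above can be sketched as follows. This is an illustrative model, not the patented analysis: each lineage entity (network address, installer, executable) carries its own safeness score, descendants inherit the worst ancestor score, and the total is mapped to a level by a pair of thresholds. The entity names, scores, and threshold values are fabricated for the example.

```python
def backtrace_safeness(lineage):
    # lineage: list of (entity_name, own_score) ordered from the earliest
    # ancestor (e.g. the network address the executable was retrieved
    # from) to the executable itself. Each entity inherits the minimum
    # safeness seen so far along the lineage.
    worst = 1.0
    inherited = []
    for name, score in lineage:
        worst = min(worst, score)
        inherited.append((name, worst))
    # The backtrace result for the executable is its inherited score.
    return inherited[-1][1]

def safeness_level(total, thresholds=(0.3, 0.7)):
    # Evaluate the total safeness against a set of thresholds to
    # detect a safeness level.
    low, high = thresholds
    if total < low:
        return "unsafe"
    return "unknown" if total < high else "safe"
```

Here a clean executable installed by a dubious installer inherits the installer's low score, which is the point of the backtrace: safeness is bounded by the weakest entity in the installation lineage.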
Abstract:
A system for implementing a wireless communication network is provided. The system includes a plurality of unmanned aerial vehicles (UAVs) forming a wireless multi-hop mesh network constituting a backhaul. A given one of the UAVs includes a radio access network (RAN) agent configured to determine at least one UAV configuration for optimized coverage of one or more user equipment (UE) devices in a terrestrial zone, a haul agent configured to coordinate an optimization of the backhaul based at least in part on the at least one UAV configuration determined by the RAN agent, and a core agent configured to implement a distributed core architecture among the plurality of UAVs. The system further includes a controller configured to control the plurality of UAVs based on information received from at least one of the agents.
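The per-UAV agent split can be sketched as below. The class and method names are assumptions, not the patent's API: the RAN agent derives a UAV configuration for UE coverage (here, hovering at the centroid of the served UEs), and the haul agent coordinates the backhaul around that configuration (here, simply ordering relay hops by distance to it).

```python
class RANAgent:
    def plan_coverage(self, ue_positions):
        # One hypothetical "UAV configuration": hover above the centroid
        # of the user equipment (UE) devices in the terrestrial zone.
        n = len(ue_positions)
        return (sum(x for x, _ in ue_positions) / n,
                sum(y for _, y in ue_positions) / n)

class HaulAgent:
    def optimize_backhaul(self, uav_positions, config):
        # Coordinate the multi-hop mesh around the RAN agent's
        # configuration: order relay UAVs by squared distance to it.
        cx, cy = config
        return sorted(uav_positions,
                      key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
```

A controller would invoke the RAN agent first and feed its configuration to the haul agent, matching the dependency stated in the abstract.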
Abstract:
A system, method, and computer program product are provided for suspicious remittance detection for a set of users. The method includes detecting (410), by a processor, unrealistic user location movements, based on login activities and remittance activities. The method includes detecting (420), by the processor, abnormal user remittance behavior based on account activities and the remittance activities by detecting any users who are silent for a threshold period of time and thereafter remit an amount of money greater than a threshold money amount. The method includes detecting (430), by the processor, abnormal overall user behavior, based on a joint user profile determined across all users from the login activities, the remittance activities, and the account activities. The method includes aggregating (440), by the processor, detection results to generate a final list of suspicious transactions. The method includes performing (450), by the processor, loss preventative actions for each of the suspicious transactions in the final list.
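The silent-then-large rule of detection step (420) can be sketched directly. This is a minimal illustration, assuming a list of transactions sorted by time; the field names, the 90-day silence period, and the amount limit are all illustrative choices, not values from the patent.

```python
from datetime import datetime, timedelta

def detect_silent_then_large(remittances, silence_days=90, amount_limit=10000):
    # remittances: time-ordered list of dicts with 'user', 'time',
    # 'amount'. Flags transactions by users who were silent longer than
    # the threshold period and then remitted more than the threshold
    # money amount.
    last_seen = {}
    suspicious = []
    for tx in remittances:
        user, t = tx["user"], tx["time"]
        prev = last_seen.get(user)
        if (prev is not None
                and t - prev > timedelta(days=silence_days)
                and tx["amount"] > amount_limit):
            suspicious.append(tx)
        last_seen[user] = t
    return suspicious
```

Steps (410) and (430) would contribute their own detection results, and step (440) would aggregate all three lists before the loss-preventative actions of step (450).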
Abstract:
Systems and methods for automatically generating a set of meta-parameters used to train invariant-based anomaly detectors are provided. Data is transformed into a first set of time series data and a second set of time series data. A fitness threshold search is performed on the first set of time series data to automatically generate a fitness threshold, and a time resolution search is performed on the second set of time series data to automatically generate a time resolution. A set of meta-parameters including the fitness threshold and the time resolution is sent to one or more user devices across a network to govern the training of an invariant-based anomaly detector.
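The two searches can be illustrated with simple grid-based stand-ins; the search grids and the scoring rules below are assumptions, not the patented procedures. The fitness-threshold search picks the strictest threshold that enough of the series still satisfies, and the time-resolution search picks the finest bucketing at which the series has no empty buckets.

```python
def search_fitness_threshold(series, grid=(0.80, 0.90, 0.95, 0.99)):
    # Pick the highest threshold that at least half of the fitness
    # values still meet, so learned invariants are neither trivial
    # nor unsatisfiable. (Illustrative acceptance rule.)
    chosen = grid[0]
    for t in grid:
        if sum(v >= t for v in series) >= len(series) / 2:
            chosen = t
    return chosen

def search_time_resolution(timestamps, resolutions=(1, 5, 15, 60)):
    # timestamps: sample times in minutes. Choose the finest resolution
    # at which every bucket between the first and last sample receives
    # at least one observation. (Illustrative density rule.)
    for r in resolutions:
        buckets = {int(t // r) for t in timestamps}
        span = int(max(timestamps) // r) - int(min(timestamps) // r) + 1
        if len(buckets) == span:
            return r
    return resolutions[-1]
```

The resulting pair (fitness threshold, time resolution) is the meta-parameter set that would be shipped to user devices to govern detector training.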
Abstract:
Systems and methods for training semantic segmentation. Embodiments of the present invention include predicting semantic labeling of each pixel in each of at least one training image using a semantic segmentation model. Further included is predicting semantic boundaries at boundary pixels of objects in the at least one training image using a semantic boundary model concurrently with predicting the semantic labeling. Also included is propagating sparse labels to every pixel in the at least one training image using the predicted semantic boundaries. Additionally, the embodiments include optimizing a loss function according to the predicted semantic labeling and the propagated sparse labels to concurrently train the semantic segmentation model and the semantic boundary model to accurately and efficiently generate a learned semantic segmentation model from sparsely annotated training images.
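The label-propagation step can be shown in one dimension. This is only a sketch of the mechanism, not the patented 2-D method: predicted boundaries split the pixel row into segments, and each sparse annotation is propagated to every pixel of its segment (if a segment happens to contain two annotations, the last one wins in this toy version). A label of -1 marks an unlabeled pixel.

```python
def propagate_sparse_labels(sparse, boundaries):
    # sparse: per-pixel labels, -1 where unannotated.
    # boundaries: 1 at pixels where a semantic boundary is predicted.
    # Assign each pixel a segment id by counting boundary crossings.
    seg = [0] * len(sparse)
    for i in range(1, len(sparse)):
        seg[i] = seg[i - 1] + boundaries[i]
    # Anchor each segment on its sparse annotation, then fill densely.
    seg_label = {}
    for s, label in zip(seg, sparse):
        if label != -1:
            seg_label[s] = label
    return [seg_label.get(s, -1) for s in seg]
```

The propagated dense labels then enter the joint loss together with the predicted semantic labeling, so the segmentation and boundary models train each other: better boundaries yield better propagated supervision, and vice versa.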
Abstract:
A baby detection system and a mass transit surveillance system are provided. The baby detection system includes a camera (110) configured to capture an input image of a subject purported to be a baby and presented at an electronic-gate system. The baby detection system further includes a memory (122) storing a deep learning model configured to perform a baby detection task for an electronic-gate application corresponding to the electronic-gate system. The baby detection system also includes a processor (121) configured to apply the deep learning model to the input image to provide a baby detection result of either a presence or an absence of an actual baby in relation to the subject purported to be the baby. The baby detection task is configured to evaluate one or more different distractor modalities corresponding to one or more different physical spoofing materials to prevent baby spoofing for the baby detection task.
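The inference flow can be sketched as below. The deep learning model and the distractor checks are stand-in callables, not the patent's implementation: each distractor check represents one spoofing modality (e.g. a printed photo or a doll made of some physical material), and a positive check forces an "absence" result regardless of the detector score.

```python
def detect_baby(image, model, distractor_checks, score_threshold=0.5):
    # Evaluate the distractor modalities first: if any physical
    # spoofing material is detected, report absence of an actual baby.
    if any(check(image) for check in distractor_checks):
        return "absence"
    # Otherwise report presence/absence from the (stand-in) model score.
    return "presence" if model(image) > score_threshold else "absence"
```

An electronic-gate application would then open or hold the gate based on this result.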
Abstract:
Methods for system failure prediction include clustering log files according to structural log patterns. Feature representations of the log files are determined based on the log clusters. A likelihood of a system failure is determined based on the feature representations using a neural network. An automatic system control action is performed if the likelihood of system failure exceeds a threshold.
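The pipeline can be sketched end to end with simple stand-ins: clustering by structural pattern is approximated by masking digits out of each log line, the feature representation is the normalized histogram over cluster patterns, and a plain scoring callable replaces the neural network. The masking rule, threshold, and control action are illustrative assumptions.

```python
import re
from collections import Counter

def log_pattern(line):
    # Structural log pattern: mask out variable numeric fields so lines
    # from the same template fall into the same cluster.
    return re.sub(r"\d+", "<NUM>", line)

def feature_vector(lines, patterns):
    # Feature representation: fraction of lines in each log cluster.
    counts = Counter(log_pattern(l) for l in lines)
    total = max(len(lines), 1)
    return [counts[p] / total for p in patterns]

def maybe_restart(features, scorer, threshold=0.8):
    # Perform the automatic system control action if the failure
    # likelihood from the (stand-in) scorer exceeds the threshold.
    return "restart" if scorer(features) > threshold else "ok"
```

In the described method a trained neural network plays the role of `scorer`, mapping the cluster-based features to a failure likelihood.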
Abstract:
A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
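The comparison of real-time traffic scenes to predetermined ones can be illustrated as nearest-scene matching; the scene vectors, situation labels, and LED colours below are fabricated for the example, and real scene representations would come from the trained deep neural network.

```python
def predict_situation(realtime_ts, predetermined):
    # predetermined: list of (scene_vector, situation, led_colour).
    # Match the real-time traffic scene to its nearest predetermined
    # scene and report that scene's driving situation and warning colour.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, situation, colour = min(predetermined,
                               key=lambda p: dist(p[0], realtime_ts))
    return situation, colour
```

The returned colour would drive the warning LEDs mentioned above, one colour per driving situation.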
Abstract:
A computer-implemented method for training a deep neural network to recognize traffic scenes (TSs) from multi-modal sensors and knowledge data is presented. The computer-implemented method includes receiving data from the multi-modal sensors and the knowledge data and extracting feature maps from the multi-modal sensors and the knowledge data by using a traffic participant (TP) extractor to generate a first set of data, using a static objects extractor to generate a second set of data, and using an additional information extractor to generate a third set of data. The computer-implemented method further includes training the deep neural network, with training data, to recognize the TSs from a viewpoint of a vehicle.
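The three-extractor assembly of a training sample can be sketched as below. The extractor outputs are fabricated placeholders standing in for real feature maps, and all names and category lists are assumptions for illustration.

```python
def extract_traffic_participants(sensor_frame):
    # Traffic participant (TP) extractor: first set of data.
    return [o for o in sensor_frame if o["kind"] in ("car", "pedestrian")]

def extract_static_objects(knowledge):
    # Static objects extractor: second set of data.
    return [o for o in knowledge if o["kind"] in ("sign", "lane")]

def build_training_sample(sensor_frame, knowledge, extra):
    # Assemble one training sample from the three extractors.
    return {
        "participants": extract_traffic_participants(sensor_frame),
        "static": extract_static_objects(knowledge),
        "extra": extra,  # additional information extractor output
    }
```

Samples assembled this way, paired with scene labels, would form the training data for the deep neural network.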