TEMPORAL-BASED PERCEPTION FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication number: US20240312219A1

    Publication date: 2024-09-19

    Application number: US18185074

    Application date: 2023-03-16

    CPC classification number: G06V20/58 B60W60/001 B60W2420/403

    Abstract: In various examples, temporal-based perception for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that use a machine learning model (MLM) to intrinsically fuse feature maps associated with different sensors and different instances in time. To generate a feature map, image data generated using image sensors (e.g., cameras) located around a vehicle is processed using an MLM that is trained to generate the feature map. The MLM may then fuse the feature maps in order to generate a final feature map associated with a current instance in time. The feature maps associated with the previous instances in time may be preprocessed using one or more layers of the MLM, where the one or more layers are associated with performing temporal transformation before the fusion is performed. The MLM may then use the final feature map to generate one or more outputs.
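    The fusion step described above can be sketched in simplified form. In this NumPy illustration (a hedged sketch, not the patented implementation), `temporal_transform` is a hypothetical stand-in for the MLM's learned temporal-transformation layers, modeled here as an ego-motion-compensating shift, and fusion is a weighted sum over the aligned maps:

```python
import numpy as np

def temporal_transform(prev_feat, ego_motion):
    # Hypothetical stand-in for the learned temporal-transformation layers:
    # shift the previous feature map to compensate for ego-motion
    # (integer grid cells; a real model would learn this alignment).
    dx, dy = ego_motion
    return np.roll(prev_feat, shift=(dy, dx), axis=(0, 1))

def fuse_feature_maps(current_feat, prev_feats, ego_motions, weights=None):
    """Fuse the current feature map with temporally aligned past maps."""
    aligned = [temporal_transform(f, m) for f, m in zip(prev_feats, ego_motions)]
    stack = np.stack([current_feat] + aligned)      # (T, H, W, C)
    if weights is None:
        weights = np.ones(len(stack)) / len(stack)  # simple mean fusion
    return np.tensordot(weights, stack, axes=1)     # weighted sum -> (H, W, C)
```

    In the patented approach the fusion weights and the temporal transformation are learned by the MLM itself rather than fixed as above.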

    SHARPNESS-AWARE MINIMIZATION FOR ROBUSTNESS IN SPARSE NEURAL NETWORKS

    Publication number: US20240127067A1

    Publication date: 2024-04-18

    Application number: US18459083

    Application date: 2023-08-31

    CPC classification number: G06N3/082

    Abstract: Systems and methods are disclosed for improving natural robustness of sparse neural networks. Pruning a dense neural network may improve inference speed and reduce the memory footprint and energy consumption of the resulting sparse neural network while maintaining a desired level of accuracy. In real-world scenarios in which sparse neural networks deployed in autonomous vehicles perform tasks such as object detection and classification for acquired inputs (images), the neural networks need to be robust to new environments, weather conditions, camera effects, etc. Applying sharpness-aware minimization (SAM) optimization during training of the sparse neural network improves performance for out-of-distribution (OOD) images compared with using conventional stochastic gradient descent (SGD) optimization. SAM optimizes a neural network to find a flat minimum: a point that not only has a small loss value but also lies within a region of low loss.
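    The core SAM update can be illustrated with a minimal sketch. The `grad_fn` callback and step structure below are generic assumptions for illustration, not the patent's training pipeline: SAM first ascends to the worst-case point within an L2 ball of radius `rho`, then descends using the gradient evaluated there:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) update.

    Perturb the weights toward the worst-case point within an L2 ball
    of radius rho, then take a descent step using the gradient computed
    at the perturbed weights (so flat minima are preferred).
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent to sharpest nearby point
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp
```

    With a simple quadratic loss f(w) = ||w||^2 (gradient 2w), repeated `sam_step` calls drive the weights toward the minimum while implicitly penalizing sharp regions.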

    ESTIMATING OPTIMAL TRAINING DATA SET SIZES FOR MACHINE LEARNING MODEL SYSTEMS AND APPLICATIONS

    Publication number: US20230376849A1

    Publication date: 2023-11-23

    Application number: US18318212

    Application date: 2023-05-16

    CPC classification number: G06N20/00

    Abstract: In various examples, estimating optimal training data set sizes for machine learning model systems and applications is described. Systems and methods are disclosed that estimate an amount of data to include in a training data set, where the training data set is then used to train one or more machine learning models to reach a target validation performance. To estimate the amount of training data, subsets of an initial training data set may be used to train the machine learning model(s) in order to determine estimates for the minimum amount of training data needed to train the machine learning model(s) to reach the target validation performance. The estimates may then be used to generate one or more functions, such as a cumulative distribution function and/or a probability density function, where the function(s) are then used to estimate the amount of training data needed to train the machine learning model(s).
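    The final step can be sketched as follows. This is a hedged illustration only: the function name, the use of a plain empirical CDF, and the quantile rule are assumptions for the sketch, not the patented method. Given per-trial estimates of the minimum training-set size needed to reach the target validation performance, it builds the empirical CDF and reads off the size at a requested confidence level:

```python
import numpy as np

def estimate_required_size(size_estimates, confidence=0.9):
    """Return the training-set size at the given confidence level.

    size_estimates: per-trial estimates of the minimum number of samples
    needed to hit a target validation score (e.g., from subset training runs).
    The empirical CDF over these estimates is queried at `confidence`.
    """
    est = np.sort(np.asarray(size_estimates, dtype=float))
    cdf = np.arange(1, len(est) + 1) / len(est)   # empirical CDF values
    idx = np.searchsorted(cdf, confidence)        # first size covering `confidence`
    return est[min(idx, len(est) - 1)]
```

    A higher confidence level selects a larger size from the upper tail of the estimates, trading extra labeling cost for a higher probability of reaching the target performance.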

    SCALABLE SEMANTIC IMAGE RETRIEVAL WITH DEEP TEMPLATE MATCHING

    Publication number: US20220147743A1

    Publication date: 2022-05-12

    Application number: US17226584

    Application date: 2021-04-09

    Abstract: Approaches presented herein provide for semantic data matching, as may be useful for selecting data from a large unlabeled dataset to train a neural network. For an object detection use case, such a process can identify images within an unlabeled set even when an object of interest represents a relatively small portion of an image or there are many other objects in the image. A query image can be processed to extract image features or feature maps from only one or more regions of interest in that image, as may correspond to objects of interest. These features are compared with features extracted from images in an unlabeled dataset, with similarity scores being calculated between the features of the region(s) of interest and individual images in the unlabeled set. One or more highest-scored images can be selected as training images showing objects that are semantically similar to the object in the query image.
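    The scoring-and-selection step can be sketched with precomputed feature vectors. This is a simplified illustration under assumptions (cosine similarity as the score, flat feature vectors already extracted), not the patented deep-template-matching pipeline:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve_top_k(query_roi_feat, dataset_feats, k=3):
    """Score each unlabeled image's feature vector against the query's
    region-of-interest feature and return indices of the top-k matches."""
    scores = [cosine_sim(query_roi_feat, f) for f in dataset_feats]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```

    Because scoring uses only the region-of-interest features, a small object of interest in the query image still drives retrieval, rather than being swamped by the rest of the scene.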

    AUGMENTING LEGACY NEURAL NETWORKS FOR FLEXIBLE INFERENCE

    Publication number: US20230325670A1

    Publication date: 2023-10-12

    Application number: US17820780

    Application date: 2022-08-18

    CPC classification number: G06N3/082

    Abstract: A technique for dynamically configuring and executing an augmented neural network in real-time according to performance constraints also maintains the legacy neural network execution path. A neural network model that has been trained for a task is augmented with low-compute “shallow” phases paired with each legacy phase and the legacy phases of the neural network model are held constant (e.g., unchanged) while the shallow phases are trained. During inference, one or more of the shallow phases can be selectively executed in place of the corresponding legacy phase. Compared with the legacy phases, the shallow phases are typically less accurate, but have reduced latency and consume less power. Therefore, processing using one or more of the shallow phases in place of one or more of the legacy phases enables the augmented neural network to dynamically adapt to changes in the execution environment (e.g., processing load or performance requirement).
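    The per-phase selection described above can be sketched minimally. In this hedged illustration (the dispatch structure is an assumption; the patent's runtime would choose phases based on live performance constraints), each phase has a legacy and a shallow variant, and a flag per phase selects which one executes:

```python
def run_augmented_network(x, phases, use_shallow):
    """Execute the network phase by phase.

    phases: list of (legacy_fn, shallow_fn) pairs, one per phase.
    use_shallow: per-phase flags; True runs the low-compute shallow
    variant in place of the corresponding legacy phase.
    """
    for (legacy_fn, shallow_fn), flag in zip(phases, use_shallow):
        x = shallow_fn(x) if flag else legacy_fn(x)
    return x
```

    Because the legacy functions are never modified, setting every flag to False reproduces the original execution path exactly, while enabling flags trades accuracy for latency and power.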
