Device and method for route planning

    Publication Number: US11940287B2

    Publication Date: 2024-03-26

    Application Number: US17763230

    Filing Date: 2019-12-27

    CPC classification number: G01C21/3492; G01C21/3461; G01C21/3614

    Abstract: Provided are a device and a method for route planning. The route planning device (100) may include a data interface (128) coupled to a road and traffic data source (160); a user interface (170) configured to display a map and receive a route planning request from a user, the route planning request including a line of interest on the map; and a processor (110) coupled to the data interface (128) and the user interface (170). The processor (110) may be configured to identify the line of interest in response to the route planning request; acquire, via the data interface (128), road and traffic information associated with the line of interest from the road and traffic data source (160); and calculate, based on the acquired road and traffic information, a navigation route that matches the line of interest and satisfies predefined road and traffic constraints.
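
    The abstract describes a planning flow: identify the user-drawn line of interest, acquire road and traffic data for it, and compute a route that matches the line while satisfying predefined constraints. The Python sketch below is a minimal illustration of that flow under stated assumptions; the names RoutePlanner, RoadSegment, road_data_source, and max_congestion are hypothetical and do not come from the patent.

        # Minimal sketch of the described route planning flow (hypothetical names, not the patented implementation).
        from dataclasses import dataclass
        from typing import Callable, List, Sequence, Tuple

        Point = Tuple[float, float]  # (latitude, longitude) of a point on the user-drawn line

        @dataclass
        class RoadSegment:
            start: Point
            end: Point
            is_drivable: bool   # example road constraint
            congestion: float   # example traffic constraint, 0.0 (free) to 1.0 (jammed)

        class RoutePlanner:
            def __init__(self, road_data_source: Callable[[Sequence[Point]], List[RoadSegment]],
                         max_congestion: float = 0.7):
                self.road_data_source = road_data_source  # stands in for the road and traffic data source
                self.max_congestion = max_congestion      # predefined traffic constraint

            def plan(self, line_of_interest: Sequence[Point]) -> List[RoadSegment]:
                # 1. Acquire road and traffic information associated with the drawn line.
                segments = self.road_data_source(line_of_interest)
                # 2. Keep only segments that satisfy the road/traffic constraints; a real planner
                #    would snap the line to the road graph and search for a best-matching route.
                return [s for s in segments
                        if s.is_drivable and s.congestion <= self.max_congestion]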

    METHODS AND SYSTEMS FOR BUDGETED AND SIMPLIFIED TRAINING OF DEEP NEURAL NETWORKS

    Publication Number: US20200026965A1

    Publication Date: 2020-01-23

    Application Number: US16475078

    Filing Date: 2017-04-07

    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer trains a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester tests the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism by assigning weights to the selected subset to obtain weighted feature maps. The weighted feature maps are stored in the LSTM, and a Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
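
    The second example in the abstract places a local attention block between the CNN and the LSTM: hard attention selects a subset of the CNN's feature maps, and soft attention weights the selected maps. The PyTorch sketch below illustrates that idea; the module name LocalAttention, the averaging-based scoring layer, and the top-k selection rule are assumptions for illustration, not the patented design.

        # Hedged sketch of a local attention block between a CNN and an LSTM (illustrative only).
        import torch
        import torch.nn as nn

        class LocalAttention(nn.Module):
            """Hard attention picks k feature maps; soft attention weights them before the LSTM."""

            def __init__(self, num_maps: int, k: int):
                super().__init__()
                self.k = k                                  # number of feature maps to keep
                self.score = nn.Linear(num_maps, num_maps)  # scores each map from its global average

            def forward(self, feature_maps: torch.Tensor) -> torch.Tensor:
                # feature_maps: (batch, num_maps, H, W), as produced by the CNN
                summary = feature_maps.mean(dim=(2, 3))             # (batch, num_maps)
                scores = self.score(summary)                        # one relevance score per map
                top = scores.topk(self.k, dim=1)                    # hard attention: select k maps
                idx = top.indices.unsqueeze(-1).unsqueeze(-1)       # (batch, k, 1, 1)
                selected = torch.gather(
                    feature_maps, 1, idx.expand(-1, -1, *feature_maps.shape[2:]))
                weights = torch.softmax(top.values, dim=1)          # soft attention over the selected maps
                return selected * weights.unsqueeze(-1).unsqueeze(-1)  # weighted maps for the LSTM

    For instance, LocalAttention(num_maps=64, k=16) applied to a (batch, 64, H, W) CNN output returns a (batch, 16, H, W) tensor of weighted feature maps to be passed to the recurrent stage.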

    METHODS AND SYSTEMS FOR BUDGETED AND SIMPLIFIED TRAINING OF DEEP NEURAL NETWORKS

    Publication Number: US20220222492A1

    Publication Date: 2022-07-14

    Application Number: US17584216

    Filing Date: 2022-01-25

    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer trains a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester tests the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism by assigning weights to the selected subset to obtain weighted feature maps. The weighted feature maps are stored in the LSTM, and a Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
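
    The tail end of the described pipeline stores the weighted feature maps in the LSTM and computes a Q value per action from the recurrent state. The PyTorch sketch below shows one way to wire that up; the class RecurrentQHead, the flattening step, and the single-time-step treatment are illustrative assumptions, not the patented architecture.

        # Hedged sketch of the LSTM + Q-value head stage (illustrative only).
        import torch
        import torch.nn as nn

        class RecurrentQHead(nn.Module):
            """Flattens the weighted feature maps, runs them through an LSTM, and outputs one Q value per action."""

            def __init__(self, k: int, h: int, w: int, hidden: int, num_actions: int):
                super().__init__()
                self.lstm = nn.LSTM(input_size=k * h * w, hidden_size=hidden, batch_first=True)
                self.q = nn.Linear(hidden, num_actions)   # one Q value per possible action

            def forward(self, weighted_maps: torch.Tensor, state=None):
                # weighted_maps: (batch, k, H, W) from the local attention block, treated as one time step
                x = weighted_maps.flatten(start_dim=1).unsqueeze(1)   # (batch, 1, k*H*W)
                out, state = self.lstm(x, state)                      # recurrent memory across frames
                q_values = self.q(out[:, -1])                         # (batch, num_actions)
                return q_values, state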

    Methods and systems for budgeted and simplified training of deep neural networks

    Publication Number: US11263490B2

    Publication Date: 2022-03-01

    Application Number: US16475078

    Filing Date: 2017-04-07

    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer trains a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester tests the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism by assigning weights to the selected subset to obtain weighted feature maps. The weighted feature maps are stored in the LSTM, and a Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
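
    The first example in the abstract trains and tests on sub-images cut from a down-sampled image, which is what keeps the training budget small. The PyTorch sketch below shows one plausible way to derive such sub-images; the helper name derive_sub_images and the particular scale, patch, and stride values are hypothetical choices for illustration, not the patented method.

        # Hedged sketch of deriving sub-images from a down-sampled image (illustrative only).
        import torch
        import torch.nn.functional as F

        def derive_sub_images(image: torch.Tensor, scale: float, patch: int, stride: int) -> torch.Tensor:
            """Down-samples an image and cuts it into overlapping sub-images for budgeted training."""
            # image: (channels, H, W); down-sample first to shrink the training budget
            small = F.interpolate(image.unsqueeze(0), scale_factor=scale,
                                  mode="bilinear", align_corners=False).squeeze(0)
            # unfold extracts patch x patch windows with the given stride along H and W
            patches = small.unfold(1, patch, stride).unfold(2, patch, stride)
            c, nh, nw, ph, pw = patches.shape
            return patches.permute(1, 2, 0, 3, 4).reshape(nh * nw, c, ph, pw)  # (num_sub_images, C, patch, patch)

        # Example: a 3x256x256 image down-sampled by half and cut into 32x32 sub-images with stride 16.
        subs = derive_sub_images(torch.rand(3, 256, 256), scale=0.5, patch=32, stride=16)
        # The trainer would fit the DNN on `subs`; the tester repeats this on a down-sampled testing image.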
