-
Publication Number: US11940287B2
Publication Date: 2024-03-26
Application Number: US17763230
Filing Date: 2019-12-27
Applicant: INTEL CORPORATION, MOBILEYE VISION TECHNOLOGIES LTD.
Inventor: Yuqing Hou, Xiaolong Liu, Ignacio J. Alvarez, Xiangbin Wu
CPC classification number: G01C21/3492, G01C21/3461, G01C21/3614
Abstract: Provided are a device and a method for route planning. The route planning device (100) may include a data interface (128) coupled to a road and traffic data source (160); a user interface (170) configured to display a map and receive a route planning request from a user, the route planning request including a line of interest on the map; and a processor (110) coupled to the data interface (128) and the user interface (170). The processor (110) may be configured to identify the line of interest in response to the route planning request; acquire, via the data interface (128), road and traffic information associated with the line of interest from the road and traffic data source (160); and calculate, based on the acquired road and traffic information, a navigation route that matches or corresponds to the line of interest and meets or satisfies predefined road and traffic constraints.
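The claimed idea (match a user-drawn line of interest to a feasible road route) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the `polyline_distance` shape metric, the `constraints` predicate, and the candidate-route representation are all assumptions chosen for brevity.

```python
import math

def polyline_distance(line, route, samples=50):
    """Average distance between evenly sampled points of two polylines:
    a simple proxy for how closely a route's shape matches the drawn line."""
    def sample(poly, t):
        # walk to fraction t of the polyline's total arc length
        seg_lens = [math.dist(a, b) for a, b in zip(poly, poly[1:])]
        target = t * sum(seg_lens)
        for (a, b), seg in zip(zip(poly, poly[1:]), seg_lens):
            if target <= seg or seg == 0:
                f = 0 if seg == 0 else target / seg
                return (a[0] + f * (b[0] - a[0]), a[1] + f * (b[1] - a[1]))
            target -= seg
        return poly[-1]
    ts = [i / (samples - 1) for i in range(samples)]
    return sum(math.dist(sample(line, t), sample(route, t)) for t in ts) / samples

def plan_route(line_of_interest, candidate_routes, constraints):
    """Among routes satisfying the road/traffic constraints, pick the one
    whose shape is closest to the user's line of interest."""
    feasible = [r for r in candidate_routes if constraints(r)]
    if not feasible:
        return None
    return min(feasible,
               key=lambda r: polyline_distance(line_of_interest, r["shape"]))

# a V-shaped line drawn by the user, and two candidate routes
line = [(0, 0), (5, 5), (10, 0)]
routes = [
    {"shape": [(0, 0), (5, 4), (10, 0)], "max_speed": 50},  # close in shape
    {"shape": [(0, 0), (10, 0)], "max_speed": 80},          # straight shortcut
]
best = plan_route(line, routes, lambda r: r["max_speed"] >= 40)
```

Both candidates satisfy the constraint, so the shape metric decides and the V-shaped route wins.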
-
Publication Number: US11341368B2
Publication Date: 2022-05-24
Application Number: US16475079
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Anbang Yao, Shandong Wang, Wenhua Cheng, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Yiwen Guo, Liu Yang, Yuqing Hou, Zhou Su, Yurong Chen
Abstract: Methods and systems for advanced and augmented training of deep neural networks (DNNs) using synthetic data and innovative generative networks. A method includes training a DNN using synthetic data, training a plurality of DNNs using context data, associating features of the DNNs trained using context data with features of the DNN trained with synthetic data, and generating an augmented DNN using the associated features.
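A toy illustration of the feature-association step: features from several context-trained DNNs are matched against features of the synthetic-data-trained DNN and merged into an augmented feature bank. The cosine-similarity matching and the averaging rule are assumptions made for illustration, not the patent's disclosed method.

```python
import numpy as np

def cosine_matrix(A, B):
    """Pairwise cosine similarity between rows of A and rows of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

rng = np.random.default_rng(0)

# feature vectors from a DNN trained on synthetic data (rows = features)
synthetic_feats = rng.normal(size=(4, 16))
# features from several DNNs trained on different context data
context_feats = [rng.normal(size=(4, 16)) for _ in range(3)]

# associate each context feature with its most similar synthetic feature,
# then average the matched pairs to form the augmented feature bank
augmented = synthetic_feats.copy()
for feats in context_feats:
    sim = cosine_matrix(feats, synthetic_feats)
    match = sim.argmax(axis=1)  # best synthetic match per context feature
    for i, j in enumerate(match):
        augmented[j] = (augmented[j] + feats[i]) / 2
```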
-
Publication Number: US20200226790A1
Publication Date: 2020-07-16
Application Number: US16832094
Filing Date: 2020-03-27
Applicant: Intel Corporation
Inventor: Ignacio Alvarez, Cornelius Buerkle, Maik Sven Fox, Florian Geissler, Ralf Graefe, Yiwen Guo, Yuqing Hou, Fabian Oboril, Daniel Pohl, Alexander Carl Unnervik, Xiangbin Wu
IPC: G06T7/80, G01S13/931, G01S13/86, G01S17/931, G01S7/40, B60R11/04, G01S7/497
Abstract: A sensor calibrator comprising one or more processors configured to receive sensor data representing a calibration pattern detected by a sensor during a period of relative motion between the sensor and the calibration pattern in which the sensor or the calibration pattern moves along a linear path of travel; determine a calibration adjustment from the received sensor data; and send a calibration instruction for calibration of the sensor according to the determined calibration adjustment. Alternatively, a sensor calibration detection device, comprising one or more processors, configured to receive first sensor data detected during movement of a first sensor along a route of travel; determine a difference between the first sensor data and stored second sensor data; and if the difference is outside of a predetermined range, switch from a first operational mode to a second operational mode.
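The second embodiment (compare fresh sensor data against stored data and switch operational modes when the difference leaves a predetermined range) reduces to a simple check. The mean-absolute-difference metric and the mode names below are assumed stand-ins for whatever comparison and modes the patent contemplates.

```python
def check_calibration(first, stored, tolerance):
    """Compare fresh sensor readings against stored reference readings;
    switch to a recalibration mode when the mean absolute difference
    exceeds the predetermined tolerance."""
    diff = sum(abs(a - b) for a, b in zip(first, stored)) / len(first)
    return "normal" if diff <= tolerance else "recalibrate"

# small drift stays in the normal operational mode
mode = check_calibration([1.0, 2.0], [1.0, 2.1], tolerance=0.2)
```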
-
Publication Number: US20200026965A1
Publication Date: 2020-01-23
Application Number: US16475078
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
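The hard-then-soft attention step over CNN feature maps can be sketched in a few lines. Scoring maps by mean activation and selecting the top k are illustrative assumptions, not the claimed mechanism; the weighted maps would then be fed into the LSTM.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_attention(feature_maps, k):
    """Hard attention: keep the k feature maps with the highest mean
    activation. Soft attention: weight the survivors by a softmax over
    those same scores, yielding the weighted feature maps."""
    scores = feature_maps.mean(axis=(1, 2))  # one scalar score per map
    keep = np.argsort(scores)[-k:]           # hard selection of a subset
    weights = softmax(scores[keep])          # soft weighting of that subset
    return feature_maps[keep] * weights[:, None, None]

rng = np.random.default_rng(0)
maps = rng.normal(size=(8, 5, 5))   # 8 CNN feature maps, 5x5 each
weighted = local_attention(maps, k=3)
```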
-
Publication Number: US12217163B2
Publication Date: 2025-02-04
Application Number: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
IPC: G06K9/62, G06F18/21, G06F18/213, G06F18/214, G06N3/044, G06N3/045, G06N3/063, G06N3/08, G06V10/44, G06V10/764, G06V10/82, G06V10/94, G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication Number: US20240086693A1
Publication Date: 2024-03-14
Application Number: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
IPC: G06N3/063, G06F18/21, G06F18/213, G06F18/214, G06N3/044, G06N3/045, G06N3/08, G06V10/44, G06V10/764, G06V10/82, G06V10/94, G06V20/00
CPC classification number: G06N3/063, G06F18/213, G06F18/2148, G06F18/217, G06N3/044, G06N3/045, G06N3/08, G06V10/454, G06V10/764, G06V10/82, G06V10/94, G06V10/955, G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication Number: US11803739B2
Publication Date: 2023-10-31
Application Number: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
IPC: G06K9/62, G06N3/063, G06N3/08, G06V10/94, G06F18/21, G06F18/213, G06F18/214, G06N3/044, G06N3/045, G06V10/764, G06V10/82, G06V10/44, G06V20/00
CPC classification number: G06N3/063, G06F18/213, G06F18/217, G06F18/2148, G06N3/044, G06N3/045, G06N3/08, G06V10/454, G06V10/764, G06V10/82, G06V10/94, G06V10/955, G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication Number: US11798191B2
Publication Date: 2023-10-24
Application Number: US16832094
Filing Date: 2020-03-27
Applicant: Intel Corporation
Inventor: Ignacio Alvarez, Cornelius Buerkle, Maik Sven Fox, Florian Geissler, Ralf Graefe, Yiwen Guo, Yuqing Hou, Fabian Oboril, Daniel Pohl, Alexander Carl Unnervik, Xiangbin Wu
IPC: G06T7/80, G01S13/931, G01S13/86, G01S7/40, G01S7/497, G01S17/931
CPC classification number: G06T7/80, G01S7/40, G01S7/4972, G01S13/865, G01S13/867, G01S13/931, G01S17/931, G06T2207/30236, G06T2207/30248, G06T2207/30252, G06T2207/30261
Abstract: A sensor calibrator comprising one or more processors configured to receive sensor data representing a calibration pattern detected by a sensor during a period of relative motion between the sensor and the calibration pattern in which the sensor or the calibration pattern moves along a linear path of travel; determine a calibration adjustment from the received sensor data; and send a calibration instruction for calibration of the sensor according to the determined calibration adjustment. Alternatively, a sensor calibration detection device, comprising one or more processors, configured to receive first sensor data detected during movement of a first sensor along a route of travel; determine a difference between the first sensor data and stored second sensor data; and if the difference is outside of a predetermined range, switch from a first operational mode to a second operational mode.
-
Publication Number: US20220222492A1
Publication Date: 2022-07-14
Application Number: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication Number: US11263490B2
Publication Date: 2022-03-01
Application Number: US16475078
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.