-
1.
Publication No.: US11521059B2
Publication Date: 2022-12-06
Application No.: US16373939
Filing Date: 2019-04-03
Inventors: Weimeng Zhu, Yu Su, Christian Nunn
Abstract: A device for processing data sequences by means of a convolutional neural network is configured to carry out the following steps: receiving an input sequence comprising a plurality of data items captured over time using a sensor, each of said data items comprising a multi-dimensional representation of a scene, generating an output sequence representing the input sequence processed item-wise by the convolutional neural network, wherein generating the output sequence comprises: generating a grid-generation sequence based on a combination of the input sequence and an intermediate grid-generation sequence representing a past portion of the output sequence or the grid-generation sequence, generating a sampling grid on the basis of the grid-generation sequence, generating an intermediate output sequence by sampling from the past portion of the output sequence according to the sampling grid, and generating the output sequence based on a weighted combination of the intermediate output sequence and the input sequence.
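The recurrent warping loop this abstract describes can be sketched in plain NumPy. This is an illustrative reconstruction, not the claimed implementation: the function names, the single global `offsets` pair, and the fixed blending `weight` are assumptions (in the claims, the sampling grid and the combination weights would be produced per item by the grid-generation part of the network).

```python
import numpy as np

def bilinear_sample(frame, grid_y, grid_x):
    """Sample `frame` at the fractional coordinates given by a sampling grid."""
    h, w = frame.shape
    y0 = np.clip(np.floor(grid_y).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(grid_x).astype(int), 0, w - 2)
    dy = np.clip(grid_y - y0, 0.0, 1.0)
    dx = np.clip(grid_x - x0, 0.0, 1.0)
    top = frame[y0, x0] * (1 - dx) + frame[y0, x0 + 1] * dx
    bot = frame[y0 + 1, x0] * (1 - dx) + frame[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

def recurrent_step(x_t, y_prev, offsets, weight):
    """One item-wise step: warp the past output according to the sampling
    grid, then blend the warped past with the current input item."""
    h, w = x_t.shape
    gy, gx = np.mgrid[0:h, 0:w].astype(float)
    intermediate = bilinear_sample(y_prev, gy + offsets[0], gx + offsets[1])
    return weight * intermediate + (1.0 - weight) * x_t
```

With zero offsets and weight 0 the step passes the current input through unchanged; with weight 1 it simply copies the (identically warped) past output.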
-
2.
Publication No.: US20190325306A1
Publication Date: 2019-10-24
Application No.: US16373939
Filing Date: 2019-04-03
Inventors: Weimeng Zhu, Yu Su, Christian Nunn
IPC Class: G06N3/08
Abstract: A device for processing data sequences by means of a convolutional neural network is configured to carry out the following steps: receiving an input sequence comprising a plurality of data items captured over time using a sensor, each of said data items comprising a multi-dimensional representation of a scene, generating an output sequence representing the input sequence processed item-wise by the convolutional neural network, wherein generating the output sequence comprises: generating a grid-generation sequence based on a combination of the input sequence and an intermediate grid-generation sequence representing a past portion of the output sequence or the grid-generation sequence, generating a sampling grid on the basis of the grid-generation sequence, generating an intermediate output sequence by sampling from the past portion of the output sequence according to the sampling grid, and generating the output sequence based on a weighted combination of the intermediate output sequence and the input sequence.
-
3.
Publication No.: US20230120299A1
Publication Date: 2023-04-20
Application No.: US18047105
Filing Date: 2022-10-17
Inventors: Christian Nunn
Abstract: This disclosure describes systems and techniques for processing radar sensor data. The systems and techniques include acquiring radar sensor data from a radar sensor and processing the radar sensor data by, for example, an artificial neural network to obtain at least one of range radar data or Doppler radar data.
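For context, the classical way to obtain range and Doppler data from raw radar ADC samples is a two-dimensional FFT; a network as described here would learn to replace or approximate that stage. The sketch below is only that conventional baseline, with assumed array conventions (fast time on axis 0, one chirp per column), not the patented neural-network processing.

```python
import numpy as np

def range_doppler_map(adc):
    """Classical baseline: the FFT over fast time (axis 0) yields range
    bins, and the FFT over slow time (axis 1, one column per chirp)
    yields Doppler bins, shifted so zero Doppler sits in the centre."""
    rng_fft = np.fft.fft(adc, axis=0)                          # range FFT
    rd = np.fft.fftshift(np.fft.fft(rng_fft, axis=1), axes=1)  # Doppler FFT
    return np.abs(rd)
```

A single simulated tone at range bin 10 and Doppler bin 3 (with 16 chirps) peaks at row 10, column 3 + 16/2 = 11 after the shift.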
-
4.
Publication No.: US20230037900A1
Publication Date: 2023-02-09
Application No.: US17817466
Filing Date: 2022-08-04
Inventors: Mirko Meuter, Christian Nunn, Jan Siegemund, Jittu Kurian, Alessandro Cennamo, Marco Braun, Dominic Spata
IPC Class: G01S13/931, G01S13/89
Abstract: The present disclosure is directed at systems and methods for determining objects around a vehicle. In aspects, a system includes a sensor unit having at least one radar sensor arranged and configured to obtain radar image data of external surroundings to determine objects around a vehicle. The system further includes a processing unit adapted to process the radar image data to generate a top view image of the external surroundings of the vehicle. The top view image is configured to be displayed on a display unit and useful to indicate a relative position of the vehicle with respect to determined objects.
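A minimal way to picture top-view generation is rasterizing polar radar detections into a vehicle-centred grid. This toy sketch assumes point detections, 1 m cells, and a forward-is-up convention; the patented processing unit works on radar image data and is not specified at this level of detail.

```python
import numpy as np

def top_view(detections, grid_size=64, cell_m=1.0):
    """Rasterize polar detections (range_m, azimuth_rad) into a top-view
    grid with the vehicle at the centre cell and forward pointing up."""
    img = np.zeros((grid_size, grid_size), dtype=np.uint8)
    c = grid_size // 2
    for rng, az in detections:
        x_fwd = rng * np.cos(az)          # metres ahead of the vehicle
        y_lat = rng * np.sin(az)          # lateral offset in metres
        row = c - int(round(x_fwd / cell_m))
        col = c + int(round(y_lat / cell_m))
        if 0 <= row < grid_size and 0 <= col < grid_size:
            img[row, col] = 255           # mark an occupied cell
    return img
```

A detection 10 m straight ahead lands 10 cells above the centre of the 64×64 image.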
-
5.
Publication No.: US11195038B2
Publication Date: 2021-12-07
Application No.: US16374138
Filing Date: 2019-04-03
Inventors: Christian Nunn, Weimeng Zhu, Yu Su
IPC Class: G06K9/00
Abstract: A device for extracting dynamic information comprises a convolutional neural network, wherein the device is configured to receive a sequence of data blocks acquired over time, each of said data blocks comprising a multi-dimensional representation of a scene. The convolutional neural network is configured to receive the sequence as input and to output dynamic information on the scene in response, wherein the convolutional neural network comprises a plurality of modules, and wherein each of said modules is configured to carry out a specific processing task for extracting the dynamic information.
-
6.
Publication No.: US11093762B2
Publication Date: 2021-08-17
Application No.: US16406356
Filing Date: 2019-05-08
Inventors: Jan Siegemund, Christian Nunn
Abstract: A method for validation of an obstacle candidate identified within a sequence of image frames comprises the following steps: A. for a current image frame of the sequence of image frames, determining within the current image frame a region of interest representing the obstacle candidate, dividing the region of interest into sub-regions, and, for each sub-region, determining a Time-To-Contact (TTC) based on at least the current image frame and a preceding or succeeding image frame of the sequence of image frames; B. determining one or more classification features based on the TTCs of the sub-regions determined for the current image frame; and C. classifying the obstacle candidate based on the determined one or more classification features.
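Steps A and B can be illustrated with the standard scale-change approximation: if a matched patch's apparent width grows by factor s over a frame gap dt, then TTC ≈ dt / (s − 1). The helper names and the spread-of-TTCs feature below are assumptions chosen for illustration; the patent does not commit to one particular classification feature.

```python
def subregion_ttc(widths_prev, widths_curr, dt):
    """Per-sub-region TTC from the scale change between two frames:
    s = w_curr / w_prev, TTC ~ dt / (s - 1); non-expanding -> no contact."""
    ttcs = []
    for w_prev, w_curr in zip(widths_prev, widths_curr):
        s = w_curr / w_prev
        ttcs.append(float("inf") if s <= 1.0 else dt / (s - 1.0))
    return ttcs

def ttc_spread(ttcs):
    """One candidate classification feature: the variance of the finite
    sub-region TTCs (an upright obstacle yields similar TTCs across its
    sub-regions, while a flat road patch yields a TTC gradient)."""
    finite = [t for t in ttcs if t != float("inf")]
    if len(finite) < 2:
        return 0.0
    mean = sum(finite) / len(finite)
    return sum((t - mean) ** 2 for t in finite) / len(finite)
```

Two sub-regions that both grow by 10 % over 0.1 s give a consistent TTC of 1 s and a spread near zero, as expected for a rigid upright obstacle.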
-
7.
Publication No.: US10943131B2
Publication Date: 2021-03-09
Application No.: US16409035
Filing Date: 2019-05-10
Inventors: Yu Su, Andre Paus, Kun Zhao, Mirko Meuter, Christian Nunn
Abstract: An image processing method includes: determining a candidate track in an image of a road, wherein the candidate track is modelled as a parameterized line or curve corresponding to a candidate lane marking in the image of a road; dividing the candidate track into a plurality of cells, each cell corresponding to a segment of the candidate track; determining at least one marklet for a plurality of said cells, wherein each marklet of a cell corresponds to a line or curve connecting left and right edges of the candidate lane marking; determining at least one local feature of each of said plurality of cells based on characteristics of said marklets; determining at least one global feature of the candidate track by aggregating the local features of the plurality of cells; and determining if the candidate lane marking represents a lane marking based on the at least one global feature.
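The local-to-global aggregation in this method can be sketched as follows. Using the marklet width as the local feature and mean/spread as the global features is an assumption made for illustration, not the full feature set of the claims.

```python
import numpy as np

def marklet_widths(left_edges, right_edges):
    """Local per-cell feature: the width spanned by each marklet, i.e.
    the distance between the left and right marking edges in a cell."""
    return [r - l for l, r in zip(left_edges, right_edges)]

def global_features(widths):
    """Aggregate the local features of all cells into global features
    of the candidate track."""
    w = np.asarray(widths, dtype=float)
    return {"mean_width": float(w.mean()), "width_std": float(w.std())}
```

A candidate whose mean width matches a plausible marking width and whose width varies little along the track is more likely a real lane marking.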
-
8.
Publication No.: US20220383146A1
Publication Date: 2022-12-01
Application No.: US17804652
Filing Date: 2022-05-31
Inventors: Markus Schoeler, Jan Siegemund, Christian Nunn, Yu Su, Mirko Meuter, Adrian Becker, Peet Cremer
Abstract: A method is provided for training a machine-learning algorithm which relies on primary data captured by at least one primary sensor. Labels are identified based on auxiliary data provided by at least one auxiliary sensor. A care attribute or a no-care attribute is assigned to each label by determining a perception capability of the primary sensor for the label based on the primary data and based on the auxiliary data. Model predictions for the labels are generated via the machine-learning algorithm. A loss function is defined for the model predictions. Negative contributions to the loss function are permitted for all labels. Positive contributions to the loss function are permitted for labels having a care attribute, while positive contributions to the loss function for labels having a no-care attribute are permitted only if a confidence of the model prediction for the respective label is greater than a threshold.
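The permitted-contribution rules can be sketched as a gated per-label loss. The binary cross-entropy form, the names, and the default threshold below are assumptions; the abstract fixes only which contributions are allowed, not the loss family.

```python
import math

def care_aware_loss(preds, targets, care, threshold=0.8):
    """Gated per-label binary cross-entropy: negative (target = 0) terms
    always contribute; a positive (target = 1) term contributes for care
    labels, and for no-care labels only when the model is already
    confident (prediction above `threshold`)."""
    total = 0.0
    for p, t, c in zip(preds, targets, care):
        if t == 1:
            if c or p > threshold:
                total += -math.log(max(p, 1e-12))
        else:
            total += -math.log(max(1.0 - p, 1e-12))
    return total
```

With two uncertain positive predictions of 0.5, only the care label contributes; the no-care label is ignored until the model itself becomes confident about it.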
-
9.
Publication No.: US20220221303A1
Publication Date: 2022-07-14
Application No.: US17647306
Filing Date: 2022-01-06
Inventors: Mirko Meuter, Christian Nunn, Weimeng Zhu, Florian Kaestner, Adrian Becker, Markus Schoeler
Abstract: A computer implemented method for determining a location of an object comprises the following steps carried out by computer hardware components: determining a pre-stored map of a vicinity of the object; acquiring sensor data related to the vicinity of the object; determining an actual map based on the acquired sensor data; carrying out image retrieval based on the pre-stored map and the actual map; carrying out image registration based on the image retrieval; and determining a location of the object based on the image registration.
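The registration step could use any alignment technique; one minimal classical choice is phase correlation for pure translations, sketched below. The function name and the restriction to integer shifts are assumptions for illustration, not details from the patent.

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer translation (dy, dx) such that rolling `mov`
    by (dy, dx) aligns it with `ref`, using the phase of the cross-power
    spectrum."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.abs(np.fft.ifft2(cross))      # sharp peak at the translation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    dy = dy - h if dy > h // 2 else dy      # map to signed shifts
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)
```

In this localization setting, `ref` would be a patch of the pre-stored map and `mov` the actual map built from current sensor data; the recovered shift gives the object's offset within the pre-stored map.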
-
10.
Publication No.: US20190325241A1
Publication Date: 2019-10-24
Application No.: US16374138
Filing Date: 2019-04-03
Inventors: Christian Nunn, Weimeng Zhu, Yu Su
IPC Class: G06K9/00
Abstract: A device for extracting dynamic information comprises a convolutional neural network, wherein the device is configured to receive a sequence of data blocks acquired over time, each of said data blocks comprising a multi-dimensional representation of a scene. The convolutional neural network is configured to receive the sequence as input and to output dynamic information on the scene in response, wherein the convolutional neural network comprises a plurality of modules, and wherein each of said modules is configured to carry out a specific processing task for extracting the dynamic information.