Object detection for distorted images

    Publication Number: US11763575B2

    Publication Date: 2023-09-19

    Application Number: US17678411

    Application Date: 2022-02-23

    Abstract: Techniques including receiving a distorted image from a camera disposed about a vehicle, detecting, in the distorted image, corner points associated with a target object, mapping the corner points to a distortion corrected domain based on one or more camera parameters, interpolating one or more intermediate points to generate lines between the corner points in the distortion corrected domain, mapping the corner points and the lines between the corner points back to a distorted domain based on the camera parameters, locating the target object in the distorted image based on the mapped corner points and lines, and adjusting a direction of travel of the vehicle based on the located target object.
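
    The mapping-and-interpolation step above can be sketched as follows, assuming a simple one-coefficient radial distortion model and pinhole intrinsics (k1, fx, fy, cx, cy); the abstract refers only to generic camera parameters, so the model and every name in this sketch are illustrative rather than the patented method.

        import numpy as np

        def undistort_points(pts, k1, fx, fy, cx, cy):
            # Map pixel points from the distorted domain to the distortion corrected
            # domain using an approximate inverse of a one-coefficient radial model.
            out = []
            for u, v in pts:
                x, y = (u - cx) / fx, (v - cy) / fy      # normalized image coordinates
                r2 = x * x + y * y
                out.append(((x / (1 + k1 * r2)) * fx + cx,
                            (y / (1 + k1 * r2)) * fy + cy))
            return np.asarray(out)

        def distort_points(pts, k1, fx, fy, cx, cy):
            # Map points from the corrected domain back to the distorted domain.
            out = []
            for u, v in pts:
                x, y = (u - cx) / fx, (v - cy) / fy
                r2 = x * x + y * y
                out.append(((x * (1 + k1 * r2)) * fx + cx,
                            (y * (1 + k1 * r2)) * fy + cy))
            return np.asarray(out)

        def box_outline_in_distorted_image(corners, cam, n_interp=16):
            # Undistort the detected corner points, interpolate intermediate points so
            # the box edges become straight lines in the corrected domain, then map the
            # whole outline back to locate the (curved) box in the distorted image.
            und = undistort_points(corners, **cam)
            segments = []
            for a, b in zip(und, np.roll(und, -1, axis=0)):
                t = np.linspace(0.0, 1.0, n_interp, endpoint=False)[:, None]
                segments.append(a + t * (b - a))
            return distort_points(np.vstack(segments), **cam)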

    Video object detection

    Publication Number: US11688078B2

    Publication Date: 2023-06-27

    Application Number: US17093681

    Application Date: 2020-11-10

    CPC classification number: G06T7/20 G06T7/70 G06T2207/10016

    Abstract: A method for video object detection includes detecting an object in a first video frame, and selecting a first interest point and a second interest point of the object. The first interest point is in a first region of interest located at a first corner of a box surrounding the object. The second interest point is in a second region of interest located at a second corner of the box. The second corner is diagonally opposite the first corner. A first optical flow of the first interest point and a second optical flow of the second interest point are determined. A location of the object in a second video frame is estimated by determining, in the second video frame, a location of the first interest point based on the first optical flow and a location of the second interest point based on the second optical flow.
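
    A minimal sketch of this two-corner tracking scheme, using OpenCV's pyramidal Lucas-Kanade routine as one possible way to obtain the per-point optical flow; the abstract does not prescribe a particular flow method, and taking the interest points at the exact box corners is a simplification of the corner regions of interest.

        import cv2
        import numpy as np

        def track_box(prev_gray, next_gray, box):
            # box = (x1, y1, x2, y2): top-left and diagonally opposite bottom-right corners.
            x1, y1, x2, y2 = box
            pts = np.array([[[x1, y1]], [[x2, y2]]], dtype=np.float32)  # two interest points
            # Optical flow of each interest point between the two frames.
            new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
            if not status.ravel().all():
                return box                      # tracking failed; keep the previous location
            (nx1, ny1), (nx2, ny2) = new_pts.reshape(2, 2)
            # The two tracked points imply the object's box location in the new frame.
            return (float(nx1), float(ny1), float(nx2), float(ny2))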

    Estimation of time to collision in a computer vision system

    Publication Number: US11615629B2

    Publication Date: 2023-03-28

    Application Number: US17195915

    Application Date: 2021-03-09

    Abstract: A method for estimating time to collision (TTC) of a detected object in a computer vision system is provided that includes determining a three dimensional (3D) position of a camera in the computer vision system, determining a 3D position of the detected object based on a 2D position of the detected object in an image captured by the camera and an estimated ground plane corresponding to the image, computing a relative 3D position of the camera, a velocity of the relative 3D position, and an acceleration of the relative 3D position based on the 3D position of the camera and the 3D position of the detected object, wherein the relative 3D position of the camera is relative to the 3D position of the detected object, and computing the TTC of the detected object based on the relative 3D position, the velocity, and the acceleration.
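
    The final TTC computation can be sketched by reducing the relative 3D state to its longitudinal (forward) component and solving a constant-acceleration motion equation for the earliest positive crossing time; this reduction and the axis convention are assumptions of the sketch, not details given in the abstract.

        import math

        def time_to_collision(rel_pos, rel_vel, rel_acc):
            # Longitudinal distance, velocity, and acceleration of the camera relative
            # to the detected object (z axis pointing from the camera toward the object).
            d, v, a = rel_pos[2], rel_vel[2], rel_acc[2]
            if abs(a) < 1e-6:
                # Near-constant relative velocity: collision only if the gap is closing.
                return -d / v if v < 0 else math.inf
            # Solve 0.5*a*t**2 + v*t + d = 0 for the earliest positive root.
            disc = v * v - 2.0 * a * d
            if disc < 0:
                return math.inf                 # the relative trajectory never closes the gap
            roots = ((-v - math.sqrt(disc)) / a, (-v + math.sqrt(disc)) / a)
            future = [t for t in roots if t > 0]
            return min(future) if future else math.inf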

    Methods and systems for analyzing images in convolutional neural networks

    Publication Number: US11443505B2

    Publication Date: 2022-09-13

    Application Number: US16898972

    Application Date: 2020-06-11

    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.
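
    A compact sketch of the frequency-domain layer the abstract describes: a 2D FFT per input feature, an on-the-fly FFT per kernel, elementwise products summed over input channels, and one 2D inverse FFT per output feature. The array shapes and the zero-padding of kernels to the feature size (which yields circular convolution) are assumptions of this sketch.

        import numpy as np

        def fft_conv_layer(inputs, kernels):
            # inputs:  (C_in, H, W)        input feature maps
            # kernels: (C_out, C_in, k, k) trained coefficients
            # returns: (C_out, H, W)       output feature maps
            c_in, h, w = inputs.shape
            c_out = kernels.shape[0]
            x_f = np.fft.fft2(inputs)                        # 2D FFT of each input feature
            out = np.empty((c_out, h, w))
            for o in range(c_out):
                k_f = np.fft.fft2(kernels[o], s=(h, w))      # on-the-fly FFTs of the kernels
                acc = np.sum(x_f * k_f, axis=0)              # products summed over input channels
                out[o] = np.real(np.fft.ifft2(acc))          # 2D inverse FFT of the sum
            return out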

    METHODS AND SYSTEMS FOR ANALYZING IMAGES IN CONVOLUTIONAL NEURAL NETWORKS

    Publication Number: US20200302217A1

    Publication Date: 2020-09-24

    Application Number: US16898972

    Application Date: 2020-06-11

    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.

    Methods and systems for analyzing images in convolutional neural networks

    Publication Number: US10713522B2

    Publication Date: 2020-07-14

    Application Number: US16400149

    Application Date: 2019-05-01

    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.

    METHODS AND SYSTEMS FOR ANALYZING IMAGES IN CONVOLUTIONAL NEURAL NETWORKS

    Publication Number: US20190258891A1

    Publication Date: 2019-08-22

    Application Number: US16400149

    Application Date: 2019-05-01

    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.

    Methods and systems for analyzing images in convolutional neural networks

    Publication Number: US10325173B2

    Publication Date: 2019-06-18

    Application Number: US16108237

    Application Date: 2018-08-22

    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.

    Stationary-vehicle structure from motion

    Publication Number: US10108864B2

    Publication Date: 2018-10-23

    Application Number: US15235516

    Application Date: 2016-08-12

    Abstract: A vehicular structure from motion (SfM) system can store a number of image frames acquired from a vehicle-mounted camera in a frame stack according to a frame stack update logic. The SfM system can detect feature points, generate flow tracks, and compute depth values based on the image frames, the depth values to aid control of the vehicle. The frame stack update logic can select a frame to discard from the stack when a new frame is added to the stack, and can be changed from a first in, first out (FIFO) logic to last in, first out (LIFO) logic upon a determination that the vehicle is stationary. An optical flow tracks logic can also be modified based on the determination. The determination can be made based on a dual threshold comparison to insure robust SfM system performance.
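
    The frame stack update logic can be sketched as below: FIFO discard while the vehicle is moving, LIFO discard once it is judged stationary, with a dual-threshold (hysteresis) test on a speed signal standing in for the system's actual stationarity determination; the thresholds and class interface are illustrative.

        from collections import deque

        class FrameStack:
            def __init__(self, size, low_speed=0.2, high_speed=0.5):
                self.frames = deque()
                self.size = size
                self.low, self.high = low_speed, high_speed   # dual thresholds (e.g. in m/s)
                self.stationary = False

            def _update_motion_state(self, speed):
                # Hysteresis: the state flips only when speed crosses the far threshold,
                # so the discard policy does not chatter around a single cutoff.
                if self.stationary and speed > self.high:
                    self.stationary = False
                elif not self.stationary and speed < self.low:
                    self.stationary = True

            def push(self, frame, speed):
                self._update_motion_state(speed)
                if len(self.frames) >= self.size:
                    if self.stationary:
                        self.frames.pop()        # LIFO: discard the most recently stored frame
                    else:
                        self.frames.popleft()    # FIFO: discard the oldest frame
                self.frames.append(frame)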
