DYNAMIC VISION SENSORS FOR FAST MOTION UNDERSTANDING

    Publication Number: US20220138466A1

    Publication Date: 2022-05-05

    Application Number: US17328518

    Application Date: 2021-05-24

    Abstract: An apparatus for motion understanding, includes a memory storing instructions, and at least one processor configured to execute the instructions to obtain, from a dynamic vision sensor, a plurality of events corresponding to an object moving with respect to the dynamic vision sensor, and filter the obtained plurality of events, using a plurality of exponential filters integrating over different time periods, to obtain a plurality of representations of the object moving with respect to the dynamic vision sensor. The at least one processor is further configured to execute the instructions to filter the obtained plurality of representations, using a convolution neural network, to obtain a probability of the object impacting a location on the dynamic vision sensor.
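
    The following is a minimal Python sketch of this kind of pipeline, not the patented implementation: events are accumulated by several exponential (leaky-integrator) filters with different time constants, and the stacked filter outputs are passed through a small convolutional network that scores candidate impact locations. The sensor resolution, time constants, and layer sizes are illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn as nn

    H, W = 64, 64                   # assumed sensor resolution
    TAUS = (5e-3, 20e-3, 80e-3)     # assumed filter time constants (s)

    def update_filters(state, events, dt):
        """Decay each exponential filter, then integrate the new event frame."""
        for i, tau in enumerate(TAUS):
            state[i] *= np.exp(-dt / tau)   # exponential decay over dt seconds
            state[i] += events              # add events from the latest time slice
        return state

    class ImpactNet(nn.Module):
        """Small CNN mapping filtered event maps to per-pixel impact probability."""
        def __init__(self, in_ch=len(TAUS)):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),
            )
        def forward(self, x):
            return torch.sigmoid(self.net(x))   # probability per sensor location

    # Usage with random stand-in event data
    state = np.zeros((len(TAUS), H, W), dtype=np.float32)
    events = (np.random.rand(H, W) < 0.01).astype(np.float32)
    state = update_filters(state, events, dt=1e-3)
    probs = ImpactNet()(torch.from_numpy(state).unsqueeze(0))
    print(probs.shape)   # torch.Size([1, 1, 64, 64])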

    Method and apparatus for estimating touch locations and touch pressures

    Publication Number: US12162136B2

    Publication Date: 2024-12-10

    Application Number: US17553321

    Application Date: 2021-12-16

    Abstract: A tactile sensing system of a robot may include: a plurality of piezoelectric elements disposed at an object, and including a transmission (TX) piezoelectric element and a reception (RX) piezoelectric element; and at least one processor configured to: control the TX piezoelectric element to generate an acoustic wave having a chirp spread spectrum (CSS) at every preset time interval, along a surface of the object; receive, via the RX piezoelectric element, an acoustic wave signal corresponding to the generated acoustic wave; select frequency bands from a plurality of frequency bands of the acoustic wave signal; and estimate a location of a touch input on the surface of the object by inputting the acoustic wave signal of the selected frequency bands into a neural network configured to provide a touch prediction score for each of a plurality of predetermined locations on the surface of the object.
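
    A minimal Python sketch of the general approach (not the claimed system) is shown below: a received chirp-excited signal is split into frequency bands, the most energetic bands are selected, and a small neural network produces a prediction score for each of a fixed set of candidate touch locations. The sampling rate, band counts, band-selection rule, and network sizes are all assumptions.

    import numpy as np
    import torch
    import torch.nn as nn

    FS = 48_000          # assumed sampling rate (Hz)
    N = 4096             # samples per acquisition window
    N_BANDS = 32         # coarse frequency bands
    N_SELECTED = 8       # bands kept for the classifier (assumed)
    N_LOCATIONS = 16     # predetermined touch locations (assumed)

    def band_energies(signal, n_bands=N_BANDS):
        """Split the magnitude spectrum into equal-width bands and sum the energy."""
        spec = np.abs(np.fft.rfft(signal)) ** 2
        bands = np.array_split(spec, n_bands)
        return np.array([b.sum() for b in bands])

    def select_bands(energies, k=N_SELECTED):
        """Keep the k most energetic bands (one possible selection rule)."""
        idx = np.sort(np.argsort(energies)[-k:])
        return idx, energies[idx]

    class TouchLocalizer(nn.Module):
        """MLP giving a prediction score per predetermined location."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_SELECTED, 64), nn.ReLU(),
                nn.Linear(64, N_LOCATIONS),
            )
        def forward(self, x):
            return self.net(x)      # raw scores; apply softmax for probabilities

    # Usage with a synthetic received chirp plus noise
    t = np.arange(N) / FS
    rx = np.sin(2 * np.pi * (1_000 + 5_000 * t) * t) + 0.1 * np.random.randn(N)
    idx, feats = select_bands(band_energies(rx))
    scores = TouchLocalizer()(torch.tensor(feats, dtype=torch.float32))
    print(int(scores.argmax()))     # index of the most likely touch location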

    SONICFINGER: LOW-COST, COMPACT, PROXIMITY AND CONTACT SENSOR FOR REACTIVE POSITIONING

    Publication Number: US20240100705A1

    Publication Date: 2024-03-28

    Application Number: US18239437

    Application Date: 2023-08-29

    CPC classification number: B25J9/1694 B25J15/08 G01B17/00

    Abstract: In some embodiments, an apparatus for performing reactive positioning of a robot gripper includes one or more fingers disposed on an end-effector of the robot, a signal processing circuit, a memory storing instructions, and a processor. Each of the one or more fingers includes a transducer configured to generate vibrational energy based on an input signal, and convert an acoustic reflection of the vibrational energy from an object into a voltage signal. The signal processing circuit is configured to provide the input signal to each transducer, and perform signal processing on the voltage signal of each transducer resulting in reflection data. The processor is configured to execute the instructions to perform pre-touch proximity detection on the reflection data, perform grasp positioning on the reflection data, perform contact detection from the reflection data, and provide, to the robot, results of the pre-touch proximity detection, the grasp positioning, and the contact detection.
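
    The sketch below illustrates the kind of processing such reflection data could undergo; it is not the patented signal chain. The emitted pulse is cross-correlated with each finger's echo to estimate time of flight (pre-touch proximity), the two per-finger distances give a centering offset for grasp positioning, and a distance below a small threshold is treated as contact. The sampling rate, thresholds, and synthetic echoes are assumptions.

    import numpy as np

    FS = 200_000          # assumed ADC sampling rate (Hz)
    C_AIR = 343.0         # speed of sound in air (m/s)
    CONTACT_MM = 2.0      # assumed distance threshold for contact

    def time_of_flight(pulse, echo):
        """Estimate the round-trip delay (s) from the cross-correlation peak."""
        corr = np.correlate(echo, pulse, mode="full")
        lag = corr.argmax() - (len(pulse) - 1)
        return max(lag, 0) / FS

    def proximity_mm(pulse, echo):
        """Pre-touch proximity: one-way distance from the round-trip delay."""
        return 1e3 * C_AIR * time_of_flight(pulse, echo) / 2.0

    def grasp_offset_mm(d_left, d_right):
        """Signed offset that would center the gripper between the two fingers."""
        return (d_left - d_right) / 2.0

    def contact(d_mm):
        """Contact detection: the echo distance collapses below the threshold."""
        return d_mm < CONTACT_MM

    # Usage with a synthetic pulse and delayed echoes for two fingers
    pulse = np.sin(2 * np.pi * 40_000 * np.arange(200) / FS)
    echo_l = np.concatenate([np.zeros(120), 0.5 * pulse, np.zeros(80)])
    echo_r = np.concatenate([np.zeros(200), 0.4 * pulse])
    d_l, d_r = proximity_mm(pulse, echo_l), proximity_mm(pulse, echo_r)
    print(d_l, d_r, grasp_offset_mm(d_l, d_r), contact(min(d_l, d_r)))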

    Acoustic collision detection and localization for robotic devices

    Publication Number: US11714163B2

    Publication Date: 2023-08-01

    Application Number: US17084257

    Application Date: 2020-10-29

    CPC classification number: G01S5/22 B25J19/026

    Abstract: A method of collision localization on a robotic device includes obtaining audio signals from a plurality of acoustic sensors spaced apart along the robotic device; identifying, based on a collision being detected, a strongest audio signal; identifying a primary onset time for an acoustic sensor producing the strongest audio signal, the primary onset time being a time at which waves propagating from the collision reach the acoustic sensor producing the strongest audio signal; generating a virtual onset time set, by shifting a calibration manifold, based on the identified primary onset time, the calibration manifold representing relative onset times from evenly spaced marker locations on the robotic device to the plurality of acoustic sensors; determining scores for the marker locations based on a standard deviation of elements in the virtual onset time set; and estimating a location of the collision based on a highest score of the determined scores.
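
    The sketch below follows one reading of this scheme, with assumed details throughout: measured onset times are shifted by the calibration manifold's relative onset times for every candidate marker location, and because the shifted ("virtual") onset times should all coincide for the correct marker, a low standard deviation yields a high score.

    import numpy as np

    def localize_collision(onsets, manifold):
        """onsets:   (S,) onset time measured at each of S sensors
        manifold: (M, S) relative onset times from M marker locations to the
                  S sensors (calibration data)
        Returns the index of the best-scoring marker and all scores."""
        strongest = np.argmin(onsets)        # earliest onset taken as strongest (assumption)
        primary_onset = onsets[strongest]
        scores = np.empty(len(manifold))
        for m, rel in enumerate(manifold):
            # Virtual onset set: shift measured onsets back by the relative
            # delays this marker predicts, anchored at the primary onset.
            virtual = onsets - (rel - rel[strongest]) - primary_onset
            scores[m] = 1.0 / (np.std(virtual) + 1e-9)   # tight agreement -> high score
        return int(np.argmax(scores)), scores

    # Usage with synthetic data: 4 sensors, 10 candidate marker locations
    rng = np.random.default_rng(0)
    manifold = rng.uniform(0, 2e-3, size=(10, 4))
    true_marker = 3
    onsets = 0.05 + manifold[true_marker] + rng.normal(0, 1e-5, 4)
    best, _ = localize_collision(onsets, manifold)
    print(best)   # likely 3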

    OBJECT MESH BASED ON A DEPTH IMAGE

    Publication Number: US20220277519A1

    Publication Date: 2022-09-01

    Application Number: US17473541

    Application Date: 2021-09-13

    Abstract: A depth image is used to obtain a three dimensional (3D) geometry of an object as an object mesh. The object mesh is obtained using an object shell representation. The object shell representation is based on a series of depth images denoting the entry and exit points on the object surface that camera rays would pass through. Given a set of entry points in the form of a masked depth image of an object, an object shell (an entry image and an exit image) is generated. Since entry and exit images contain neighborhood information given by pixel adjacency, the entry and exit images provide partial meshes of the object which are stitched together in linear time using the contours of the entry and exit images. A complete object mesh is provided in the camera coordinate frame.
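
    A simplified Python sketch of the entry-image half of this idea is given below; the linear-time contour stitching that closes the shell is omitted, and the intrinsics and test geometry are assumptions. Each masked depth pixel is back-projected through an assumed pinhole camera, and pixel adjacency supplies the triangles of a partial mesh.

    import numpy as np

    FX = FY = 500.0          # assumed focal lengths (px)
    CX, CY = 32.0, 32.0      # assumed principal point (image center for 64x64)

    def backproject(depth, mask):
        """Return (H, W, 3) camera-frame points for masked depth pixels."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - CX) * depth / FX
        y = (v - CY) * depth / FY
        pts = np.stack([x, y, depth], axis=-1)
        pts[~mask] = np.nan      # invalid pixels carry no geometry
        return pts

    def grid_triangles(mask):
        """Two triangles per fully valid 2x2 pixel block (pixel adjacency)."""
        h, w = mask.shape
        idx = np.arange(h * w).reshape(h, w)
        tris = []
        for r in range(h - 1):
            for c in range(w - 1):
                if mask[r:r + 2, c:c + 2].all():
                    a, b = idx[r, c], idx[r, c + 1]
                    d, e = idx[r + 1, c], idx[r + 1, c + 1]
                    tris += [(a, b, d), (b, e, d)]
        return np.array(tris)

    # Usage with a synthetic spherical cap as the "entry" depth image
    h = w = 64
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    r2 = (u - CX) ** 2 + (v - CY) ** 2
    mask = r2 < 20 ** 2
    entry = np.where(mask, 0.5 - 1e-3 * np.sqrt(np.maximum(20 ** 2 - r2, 0)), 0.0)
    verts, faces = backproject(entry, mask).reshape(-1, 3), grid_triangles(mask)
    print(verts.shape, faces.shape)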

    METHOD AND APPARATUS FOR THREE-DIMENSIONAL (3D) OBJECT AND SURFACE RECONSTRUCTION

    Publication Number: US20210390776A1

    Publication Date: 2021-12-16

    Application Number: US17177896

    Application Date: 2021-02-17

    Abstract: An apparatus for reconstructing a 3D object, includes a memory storing instructions, and at least one processor configured to execute the instructions to obtain, using a first neural network, mapping function weights of a mapping function of a second neural network, based on an image feature vector corresponding to a 2D image of the 3D object, set the mapping function of the second neural network, using the obtained mapping function weights, and based on sampled points of a canonical sampling domain, obtain, using the second neural network of which the mapping function is set, 3D point coordinates and geodesic lifting coordinates of each of the sampled points in the 3D object corresponding to the 2D image, wherein the 3D point coordinates are first three dimensions of an embedding vector of a respective one of the sampled points, and the geodesic lifting coordinates are remaining dimensions of the embedding vector.
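
    The sketch below shows a hypernetwork-style arrangement in this spirit, with assumed sizes rather than the patented architecture: a first network turns an image feature vector into the weights of a second, single-hidden-layer mapping network, which then sends canonical sample points to embedding vectors whose first three dimensions are read as 3D coordinates and whose remaining dimensions are treated as geodesic lifting coordinates.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    FEAT = 128                 # assumed image feature size
    IN, HID, EMB = 2, 64, 6    # assumed canonical-domain dim, hidden width, embedding dim

    class WeightGenerator(nn.Module):
        """First network: image feature vector -> mapping-function weights."""
        def __init__(self):
            super().__init__()
            n_params = IN * HID + HID + HID * EMB + EMB
            self.fc = nn.Linear(FEAT, n_params)
        def forward(self, feat):
            p = self.fc(feat)
            i = 0
            w1 = p[i:i + IN * HID].view(HID, IN); i += IN * HID
            b1 = p[i:i + HID]; i += HID
            w2 = p[i:i + HID * EMB].view(EMB, HID); i += HID * EMB
            b2 = p[i:i + EMB]
            return w1, b1, w2, b2

    def mapping_function(points, weights):
        """Second network, evaluated with the externally generated weights."""
        w1, b1, w2, b2 = weights
        h = torch.relu(F.linear(points, w1, b1))
        emb = F.linear(h, w2, b2)
        xyz, geodesic = emb[:, :3], emb[:, 3:]   # split the embedding vector
        return xyz, geodesic

    # Usage with a random feature vector and sampled canonical points
    feat = torch.randn(FEAT)
    pts = torch.rand(256, IN)
    xyz, lift = mapping_function(pts, WeightGenerator()(feat))
    print(xyz.shape, lift.shape)    # torch.Size([256, 3]) torch.Size([256, 3])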

    ALIGNING IMAGE DATA AND MAP DATA

    Publication Number: US20250086812A1

    Publication Date: 2025-03-13

    Application Number: US18586111

    Application Date: 2024-02-23

    Abstract: A method of generating a composite map from image data and spatial map data may include: acquiring a spatial map of an environment; acquiring a plurality of images of a portion of the environment; generating a three-dimensional (3D) image of the portion of the environment using the plurality of images; identifying a floor in the 3D image; generating a synthetic spatial map of the portion of the environment based on the floor in the 3D image; determining a location of the portion of the environment within the spatial map by identifying a region of the spatial map that corresponds to the synthetic spatial map; and generating a composite map by associating the 3D image with the location within the spatial map of the portion of the environment.
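
    A minimal Python sketch of the alignment step follows, with illustrative assumptions throughout: a synthetic occupancy map is built from 3D points lying just above an estimated floor height, and its location within the full spatial map is found by a brute-force cross-correlation (dot-product score) over translations; rotation search and the final composite-map association are omitted.

    import numpy as np

    CELL = 0.05   # assumed grid resolution (m per cell)

    def synthetic_map(points, floor_z, size=32):
        """Occupancy grid of points between ~0.1 m and 1.5 m above the floor."""
        above = points[(points[:, 2] > floor_z + 0.1) & (points[:, 2] < floor_z + 1.5)]
        grid = np.zeros((size, size))
        ij = np.clip((above[:, :2] / CELL).astype(int), 0, size - 1)
        grid[ij[:, 1], ij[:, 0]] = 1.0
        return grid

    def locate(synth, spatial):
        """Best top-left placement of the synthetic map inside the spatial map."""
        sh, sw = synth.shape
        best, best_rc = -np.inf, (0, 0)
        for r in range(spatial.shape[0] - sh + 1):
            for c in range(spatial.shape[1] - sw + 1):
                score = (spatial[r:r + sh, c:c + sw] * synth).sum()
                if score > best:
                    best, best_rc = score, (r, c)
        return best_rc

    # Usage: plant a small pattern inside a larger map and recover its location
    rng = np.random.default_rng(1)
    spatial = (rng.random((128, 128)) > 0.95).astype(float)
    pts = rng.random((500, 3)) * [1.5, 1.5, 1.0]      # stand-in 3D image points
    synth = synthetic_map(pts, floor_z=0.0)
    spatial[40:72, 60:92] = np.maximum(spatial[40:72, 60:92], synth)
    print(locate(synth, spatial))                     # likely (40, 60)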
