System and method of capturing three-dimensional human motion capture with LiDAR

    Publication No.: US12270910B2

    Publication Date: 2025-04-08

    Application No.: US17884273

    Application Date: 2022-08-09

    Abstract: Described herein are systems and methods for training machine learning models to generate three-dimensional (3D) motions based on light detection and ranging (LiDAR) point clouds. In various embodiments, a computing system can encode a machine learning model representing an object in a scene. The computing system can train the machine learning model using a dataset comprising synchronous LiDAR point clouds captured by monocular LiDAR sensors and ground-truth three-dimensional motions obtained from IMU devices. The machine learning model can be configured to generate a three-dimensional motion of the object based on an input of a plurality of point cloud frames captured by a monocular LiDAR sensor.
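
    The training setup described in the abstract pairs sequences of LiDAR point-cloud frames with IMU-derived ground-truth motions. A minimal sketch of such a supervised training loop, assuming a PyTorch-style model and hypothetical names (MotionRegressor, synthetic tensor shapes) that are not taken from the patent, might look like this:

```python
# Hypothetical sketch: regress per-frame 3D joint positions from a sequence of
# LiDAR point-cloud frames, supervised by IMU-derived ground-truth motion.
# Model structure and data shapes are illustrative, not from the patent.
import torch
import torch.nn as nn

class MotionRegressor(nn.Module):
    """Encodes each point-cloud frame, then models temporal context with a GRU."""
    def __init__(self, num_joints=24, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_joints * 3)

    def forward(self, clouds):                 # clouds: (B, T, N, 3)
        B, T, N, _ = clouds.shape
        feats = self.point_mlp(clouds)         # per-point features: (B, T, N, F)
        frame_feats = feats.max(dim=2).values  # per-frame max-pool over points
        seq_feats, _ = self.temporal(frame_feats)
        return self.head(seq_feats).view(B, T, -1, 3)

model = MotionRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Synthetic stand-in for synchronized (point cloud, IMU motion) training pairs.
clouds = torch.randn(2, 16, 512, 3)      # 2 sequences, 16 frames, 512 points each
gt_joints = torch.randn(2, 16, 24, 3)    # ground-truth 3D joint positions from IMUs

for step in range(3):
    pred = model(clouds)
    loss = nn.functional.mse_loss(pred, gt_joints)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```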

    MONOCLONAL ANTIBODY TARGETING FZD7, PREPARATION METHOD AND USE THEREOF

    Publication No.: US20250092142A1

    Publication Date: 2025-03-20

    Application No.: US18470595

    Application Date: 2023-09-20

    Abstract: The present disclosure provides a monoclonal antibody targeting FZD7, a preparation method therefor, and use thereof. The monoclonal antibody targeting FZD7 comprises a heavy chain variable region and a light chain variable region. The heavy chain variable region comprises the amino acid sequence of SEQ ID NO: 1 or an amino acid sequence having at least 99% sequence identity to the sequence of SEQ ID NO: 1, and the light chain variable region comprises the amino acid sequence of SEQ ID NO: 2 or an amino acid sequence having at least 99% sequence identity to the sequence of SEQ ID NO: 2. The monoclonal antibody targeting FZD7 obtained in the present disclosure binds to the FZD7 protein both in vitro and in vivo and has clinical development value.
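
    The claims turn on a sequence-identity threshold (at least 99% identity to SEQ ID NO: 1 or 2). Purely as an illustration of how that metric is computed for two pre-aligned sequences of equal length (a simplification that ignores gapped alignment), one could write:

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of identical residues between two pre-aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# A single substitution in a 120-residue variable region keeps identity above 99%.
print(percent_identity("A" * 120, "A" * 119 + "G"))  # ~99.17
```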

    Stream processing-based non-blocking ORB feature extraction accelerator implemented by FPGA

    Publication No.: US12217475B1

    Publication Date: 2025-02-04

    Application No.: US18813094

    Application Date: 2024-08-23

    Abstract: Provided is a stream processing-based non-blocking oriented FAST and rotated BRIEF (ORB) feature extraction accelerator implemented by a field programmable gate array (FPGA), which mainly includes two innovations. First, a stream processing-based non-blocking hardware architecture and a cache management algorithm are provided: the accelerator precisely controls and buffers each column of an rBRIEF descriptor computation window by using the algorithm, allowing it to receive a new input pixel stream while computing a descriptor, thereby achieving non-blocking processing. Second, an efficient hardware sorting design embedded in the accelerator is provided: based on a counting sort algorithm, minimal resources are used to implement rBRIEF sorting on hardware, and the rBRIEF sorting is embedded in the accelerator. The accelerator ensures feature point quality while achieving high-speed feature point extraction, without significantly reducing the accuracy of ORB_SLAM and other algorithms.
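
    The hardware sorting innovation is based on a counting-sort algorithm applied to feature scores. A software sketch of the underlying idea, with the bucket count and score range chosen here as assumptions rather than taken from the patent, quantizes scores into a histogram and then walks the buckets from high to low to find a threshold that retains roughly the top N features:

```python
def count_sort_threshold(scores, keep_n, num_buckets=256, max_score=255):
    """Histogram quantized scores, then walk buckets from high to low to find the
    smallest score threshold that keeps about keep_n features."""
    counts = [0] * num_buckets
    for s in scores:
        b = min(num_buckets - 1, int(s * (num_buckets - 1) / max_score))
        counts[b] += 1
    kept = 0
    for b in range(num_buckets - 1, -1, -1):   # highest-score bucket first
        kept += counts[b]
        if kept >= keep_n:
            return b * max_score / (num_buckets - 1)
    return 0.0

scores = [17, 200, 45, 180, 90, 250, 60, 120]
thr = count_sort_threshold(scores, keep_n=3)
top = [s for s in scores if s >= thr]
print(thr, top)   # keeps roughly the 3 strongest features
```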

    DUAL-SIX-TRANSISTOR (D6T) IN-MEMORY COMPUTING (IMC) ACCELERATOR SUPPORTING ALWAYS-LINEAR DISCHARGE AND REDUCING DIGITAL STEPS

    Publication No.: US20240233815A9

    Publication Date: 2024-07-11

    Application No.: US18377840

    Application Date: 2023-10-09

    CPC classification number: G11C11/419 G11C8/16 G11C11/54

    Abstract: A dual-six-transistor (D6T) in-memory computing (IMC) accelerator supporting always-linear discharge and reducing digital steps is provided. Three effective techniques are proposed in the IMC accelerator: (1) A D6T bitcell can reliably run at 0.4 V and enter a standby mode at 0.26 V, to support parallel processing of dual decoupled ports. (2) An always-linear discharge and convolution mechanism (ALDCM) not only reduces the voltage of a bit line (BL), but also keeps the calculation linear throughout the entire voltage range of the BL. (3) A bypass of a bias voltage time converter (BVTC) reduces digital steps while maintaining high energy efficiency and computing density at a low voltage. Measurement results show that the IMC accelerator achieves an average energy efficiency of 8918 TOPS/W (8b×8b) and an average computing density of 38.6 TOPS/mm2 (8b×8b) in a 55 nm CMOS technology.
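
    The contributions above are circuit-level techniques and do not map directly to software, but the computation such an IMC macro evaluates is an 8b×8b multiply-accumulate. The following is an idealized functional model only, assuming the common bit-serial convention in which activation bit-planes are applied one at a time and partial sums are shifted and accumulated digitally; it does not model the D6T bitcell, the ALDCM, or the BVTC:

```python
import numpy as np

def bit_serial_mac(activations, weights, act_bits=8):
    """Idealized functional model of an 8b x 8b in-memory multiply-accumulate:
    activations are applied one bit-plane at a time and partial sums are
    shifted and accumulated digitally (no circuit behavior is modeled)."""
    acc = np.zeros(weights.shape[1], dtype=np.int64)
    for b in range(act_bits):
        bit_plane = (activations >> b) & 1          # one activation bit per row
        partial = bit_plane @ weights               # column-wise sum (analog in hardware)
        acc += partial.astype(np.int64) << b        # digital shift-and-add
    return acc

acts = np.random.randint(0, 256, size=64)             # 8-bit activations, one per row
wts = np.random.randint(0, 256, size=(64, 16))        # 8-bit weights, 16 columns
assert np.array_equal(bit_serial_mac(acts, wts), acts @ wts)
```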

    DUAL-SIX-TRANSISTOR (D6T) IN-MEMORY COMPUTING (IMC) ACCELERATOR SUPPORTING ALWAYS-LINEAR DISCHARGE AND REDUCING DIGITAL STEPS

    Publication No.: US20240135989A1

    Publication Date: 2024-04-25

    Application No.: US18377840

    Application Date: 2023-10-08

    CPC classification number: G11C11/419 G11C8/16 G11C11/54

    Abstract: A dual-six-transistor (D6T) in-memory computing (IMC) accelerator supporting always-linear discharge and reducing digital steps is provided. Three effective techniques are proposed in the IMC accelerator: (1) A D6T bitcell can reliably run at 0.4 V and enter a standby mode at 0.26 V, to support parallel processing of dual decoupled ports. (2) An always-linear discharge and convolution mechanism (ALDCM) not only reduces the voltage of a bit line (BL), but also keeps the calculation linear throughout the entire voltage range of the BL. (3) A bypass of a bias voltage time converter (BVTC) reduces digital steps while maintaining high energy efficiency and computing density at a low voltage. Measurement results show that the IMC accelerator achieves an average energy efficiency of 8918 TOPS/W (8b×8b) and an average computing density of 38.6 TOPS/mm2 (8b×8b) in a 55 nm CMOS technology.

    Ripple push method for graph cut
    Invention Grant

    Publication No.: US11934459B2

    Publication Date: 2024-03-19

    Application No.: US17799278

    Application Date: 2021-09-22

    CPC classification number: G06F16/9024 G06T7/13 G06T7/162 G06T2207/20072

    Abstract: A ripple push method for a graph cut includes: obtaining an excess flow ef(v) of a current node v; traversing the four edges connecting the current node v in the top, bottom, left, and right directions, and determining whether each of the four edges is a pushable edge; calculating, according to different weight functions, a maximum push value for each of the four edges as ef_w = ef(v)·W, where W denotes a weight function; and traversing the four edges, recording a pushable flow of each of the four edges, and pushing out the calculated flow. The ripple push method explores different push weight functions and significantly improves the actual parallelism of the push-relabel algorithm.
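
    A minimal sketch of the push step described above, on a 4-connected grid: the weight function W is a placeholder, and the residual-capacity/height test used to decide whether an edge is pushable follows the classic push-relabel convention rather than anything specified in the patent.

```python
# Hypothetical sketch of one ripple-push step for a node on a 4-connected grid.
# 'pushable' = positive residual capacity and a downhill height difference of 1
# (classic push-relabel convention); W is a placeholder weight function.
def ripple_push(v, excess, height, capacity, flow, W):
    if excess[v] <= 0:
        return
    r, c = v
    # traverse the four neighbors: top, bottom, left, right
    for u in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if u not in height:
            continue
        residual = capacity.get((v, u), 0) - flow.get((v, u), 0)
        pushable = residual > 0 and height[v] == height[u] + 1
        if not pushable:
            continue
        ef_w = excess[v] * W(v, u)                # maximum push value for this edge
        pushed = min(ef_w, residual, excess[v])   # bounded by residual capacity and remaining excess
        flow[(v, u)] = flow.get((v, u), 0) + pushed
        flow[(u, v)] = flow.get((u, v), 0) - pushed
        excess[v] -= pushed
        excess[u] += pushed

# Tiny usage example on a 1x2 grid with a uniform weight function.
excess = {(0, 0): 5.0, (0, 1): 0.0}
height = {(0, 0): 1, (0, 1): 0}
capacity = {((0, 0), (0, 1)): 3.0}
flow = {}
ripple_push((0, 0), excess, height, capacity, flow, W=lambda v, u: 0.5)
print(excess, flow)   # pushes min(5*0.5, 3, 5) = 2.5 along the right edge
```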
