Low latency data synchronization
    Invention Grant

    Publication Number: US10986044B2

    Publication Date: 2021-04-20

    Application Number: US16145637

    Application Date: 2018-09-28

    Abstract: In some examples, a computing device for processing data streams includes storage to store instructions and a processor to execute the instructions. The processor is to execute the instructions to receive respective data streams provided from a plurality of data producer sensors. The processor is also to execute the instructions to stagger a time of triggering of a first of the plurality of data producer sensors relative to a time of triggering of a second of the plurality of data producer sensors to minimize a concurrency of data frames of the data stream received from the first data producer sensor and data frames of the data stream received from the second of the plurality of data producer sensors. The processor is also to execute the instructions to process the data streams from the plurality of data producer sensors in a time-shared manner. The processor is also to execute the instructions to provide the processed data streams to one or more consumers of the processed data streams.
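
    The staggered-trigger and time-shared processing scheme described in this abstract can be illustrated with a short sketch. This is not code from the patent; the names (Sensor, stagger_offsets, schedule), the equal-frame-period assumption, and the example sensors are illustrative only.

```python
# Minimal sketch of staggering sensor trigger times so frames arrive
# interleaved and can be processed one at a time (time-shared).
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    frame_period_ms: float  # time between frames produced by this sensor

def stagger_offsets(sensors):
    """Spread trigger times across one frame period (assumes equal periods)
    so frames from different sensors do not arrive concurrently."""
    n = len(sensors)
    period = sensors[0].frame_period_ms
    return {s.name: i * period / n for i, s in enumerate(sensors)}

def schedule(sensors, horizon_ms):
    """Build a time-shared processing order: (trigger_time_ms, sensor_name)."""
    offsets = stagger_offsets(sensors)
    events = []
    for s in sensors:
        t = offsets[s.name]
        while t < horizon_ms:
            events.append((t, s.name))
            t += s.frame_period_ms
    return sorted(events)  # processor services one frame at a time, in order

if __name__ == "__main__":
    cams = [Sensor("camera_a", 33.3), Sensor("camera_b", 33.3), Sensor("lidar", 33.3)]
    for t, name in schedule(cams, 100):
        print(f"{t:6.1f} ms -> process frame from {name}")
```

    Offsetting each sensor by a fraction of the shared frame period keeps frame arrivals interleaved, so a single processor can service them sequentially without bursts of concurrent frames.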

    Bi-directional negotiation for dynamic data chunking

    Publication Number: US10915258B2

    Publication Date: 2021-02-09

    Application Number: US15857158

    Application Date: 2017-12-28

    Abstract: Systems and techniques for bi-directional negotiation for dynamic data chunking are described herein. A set of available features for a memory subsystem may be identified. The set of available features may include latency of buffer locations of the memory subsystem. An indication of a first latency requirement of a first data consumer and a second latency requirement of a second data consumer may be obtained. A first buffer location of the memory subsystem for a data stream based on the first latency requirement may be negotiated with the first data consumer. A second buffer location of the memory subsystem for the data stream based on the second latency requirement may be negotiated with the second data consumer. An indication of the first buffer location may be provided to the first data consumer and an indication of the second buffer location may be provided to the second data consumer.
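
    A minimal sketch of the negotiation flow the abstract outlines, under the assumption that buffer locations are advertised with fixed access latencies. MemorySubsystem, negotiate_buffer, and the example latency figures are illustrative, not taken from the patent.

```python
# Illustrative negotiation: each consumer states a latency requirement and is
# assigned a buffer location whose latency satisfies it.
class MemorySubsystem:
    def __init__(self, buffer_latencies_ns):
        # e.g. {"sram": 10, "l3_cache": 40, "dram": 120}
        self.buffer_latencies_ns = buffer_latencies_ns

    def available_features(self):
        """Advertise buffer locations and their access latencies."""
        return dict(self.buffer_latencies_ns)

def negotiate_buffer(subsystem, consumer_name, latency_requirement_ns):
    """Return a buffer location that meets the consumer's latency bound,
    or None if no location qualifies."""
    candidates = [(lat, loc) for loc, lat in subsystem.available_features().items()
                  if lat <= latency_requirement_ns]
    if not candidates:
        return None
    _, location = max(candidates)  # slowest buffer that still meets the bound
    return {"consumer": consumer_name, "buffer_location": location}

if __name__ == "__main__":
    mem = MemorySubsystem({"sram": 10, "l3_cache": 40, "dram": 120})
    print(negotiate_buffer(mem, "display_pipeline", 25))   # -> sram
    print(negotiate_buffer(mem, "analytics_batch", 200))   # -> dram
```

    Picking the slowest buffer that still satisfies each requirement is one possible policy; it leaves the fastest locations free for the most latency-sensitive consumers.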

    BI-DIRECTIONAL NEGOTIATION FOR DYNAMIC DATA CHUNKING

    Publication Number: US20190042123A1

    Publication Date: 2019-02-07

    Application Number: US15857158

    Application Date: 2017-12-28

    Abstract: Systems and techniques for bi-directional negotiation for dynamic data chunking are described herein. A set of available features for a memory subsystem may be identified. The set of available features may include latency of buffer locations of the memory subsystem. An indication of a first latency requirement of a first data consumer and a second latency requirement of a second data consumer may be obtained. A first buffer location of the memory subsystem for a data stream based on the first latency requirement may be negotiated with the first data consumer. A second buffer location of the memory subsystem for the data stream based on the second latency requirement may be negotiated with the second data consumer. An indication of the first buffer location may be provided to the first data consumer and an indication of the second buffer location may be provided to the second data consumer.

    Methods and apparatus to combine frames of overlapping scanning systems

    Publication Number: US11513204B2

    Publication Date: 2022-11-29

    Application Number: US16586431

    Application Date: 2019-09-27

    Abstract: Methods, apparatus, systems and articles of manufacture to combine frames of overlapping scanning systems are disclosed. An example apparatus includes a time delay controller to determine a first time value and a second time value, the first time value different from the second time value; a capture synchronizer to, in response to the first time value corresponding to a first time, capture a first frame from a first scanning system and, in response to the second time value corresponding to a second time, capture a second frame from a second scanning system; and a capture combiner to combine the first frame and the second frame into a third frame, the third frame including data from the first frame and data from the second frame.
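
    A hedged sketch of the capture-and-combine flow described above: two distinct time values trigger captures from two scanning systems, and the resulting frames are merged into a third frame containing data from both. The dictionary-based frame format and function names are assumptions for illustration, not the apparatus itself.

```python
# Illustrative capture and combine of frames from two overlapping scanners.
def capture_frame(scanner, at_time):
    """Stand-in for a scanner capture; returns a frame with a timestamp
    and the scanner's point data."""
    return {"time": at_time, "points": scanner["points"]}

def combine_frames(frame_a, frame_b):
    """Merge point data from two overlapping scans into a third frame."""
    return {
        "time": max(frame_a["time"], frame_b["time"]),
        "points": frame_a["points"] + frame_b["points"],
    }

if __name__ == "__main__":
    scanner_1 = {"points": [(0.0, 1.0), (0.1, 1.1)]}
    scanner_2 = {"points": [(0.05, 1.05)]}
    t1, t2 = 0.0, 0.5   # distinct time values chosen by the delay controller
    frame_1 = capture_frame(scanner_1, t1)
    frame_2 = capture_frame(scanner_2, t2)
    print(combine_frames(frame_1, frame_2))
```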

    Multi-polygon, vertically-separated laser scanning apparatus and methods

    Publication Number: US11163154B2

    Publication Date: 2021-11-02

    Application Number: US16673488

    Application Date: 2019-11-04

    Abstract: Multi-polygon, vertically-separated laser scanning apparatus and methods are disclosed. An example apparatus includes a multi-polygon. The multi-polygon includes a first polygon, a central axis, and a second polygon. The first polygon includes a first plurality of outwardly-facing mirrored facets. The second polygon includes a second plurality of outwardly-facing mirrored facets angularly offset about the central axis relative to the first plurality of outwardly-facing mirrored facets. The second polygon is positioned relative to the first polygon along the central axis. The first and second polygons are rotatable about the central axis.
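
    The angular offset between the two facet sets can be made concrete with a small geometry sketch. The half-pitch offset used here is an assumed example value; the abstract above does not fix a particular offset.

```python
# Geometry sketch only: facet angles for two stacked mirror polygons that
# share a central axis, with the second offset about that axis.
import math

def facet_angles_deg(num_facets, offset_deg=0.0):
    """Angular position of each outward-facing facet about the central axis."""
    pitch = 360.0 / num_facets
    return [(offset_deg + i * pitch) % 360.0 for i in range(num_facets)]

if __name__ == "__main__":
    facets = 6
    first_polygon = facet_angles_deg(facets)                                  # 0, 60, 120, ...
    second_polygon = facet_angles_deg(facets, offset_deg=360.0 / (2 * facets))  # 30, 90, 150, ...
    print("first :", first_polygon)
    print("second:", second_polygon)
```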

    MULTI-POLYGON, VERTICALLY-SEPARATED LASER SCANNING APPARATUS AND METHODS

    Publication Number: US20200064623A1

    Publication Date: 2020-02-27

    Application Number: US16673488

    Application Date: 2019-11-04

    Abstract: Multi-polygon, vertically-separated laser scanning apparatus and methods are disclosed. An example apparatus includes a multi-polygon. The multi-polygon includes a first polygon, a central axis, and a second polygon. The first polygon includes a first plurality of outwardly-facing mirrored facets. The second polygon includes a second plurality of outwardly-facing mirrored facets angularly offset about the central axis relative to the first plurality of outwardly-facing mirrored facets. The second polygon is positioned relative to the first polygon along the central axis. The first and second polygons are rotatable about the central axis.

    Low Latency Data Synchronization
    Invention Application

    Publication Number: US20190044891A1

    Publication Date: 2019-02-07

    Application Number: US16145637

    Application Date: 2018-09-28

    Abstract: In some examples, a computing device for processing data streams includes storage to store instructions and a processor to execute the instructions. The processor is to execute the instructions to receive respective data streams provided from a plurality of data producer sensors. The processor is also to execute the instructions to stagger a time of triggering of a first of the plurality of data producer sensors relative to a time of triggering of a second of the plurality of data producer sensors to minimize a concurrency of data frames of the data stream received from the first data producer sensor and data frames of the data stream received from the second of the plurality of data producer sensors. The processor is also to execute the instructions to process the data streams from the plurality of data producer sensors in a time-shared manner. The processor is also to execute the instructions to provide the processed data streams to one or more consumers of the processed data streams.

    MULTIPLY-ACCUMULATE SHARING CONVOLUTION CHAINING FOR EFFICIENT DEEP LEARNING INFERENCE

    Publication Number: US20230153616A1

    Publication Date: 2023-05-18

    Application Number: US18148057

    Application Date: 2022-12-19

    CPC classification number: G06N3/08 G06F7/523

    Abstract: Systems, apparatuses and methods may provide for technology that chains a plurality of convolution operations together, wherein the plurality of convolution operations include one or more one-dimensional (1D) convolution operations and one or more two-dimensional (2D) convolution operations, streams the plurality of convolution operations to shared multiply-accumulate (MAC) hardware, wherein to stream the plurality of convolution operations to the shared MAC hardware, the technology swaps weight inputs to the shared MAC hardware with activation inputs to the shared MAC hardware based on convolution type, and stores output data associated with the plurality of convolution operations to a local memory. Each of the 2D convolution operations may include a multi-cycle multiplication operation.
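
    A conceptual sketch, assuming NumPy arrays, of routing both 1D and 2D convolutions through one shared multiply-accumulate primitive, with operand order to the MAC swapped by convolution type. The function names, valid-mode convolutions, and local_memory stand-in are illustrative, not the claimed hardware design.

```python
# Chained 1D/2D convolutions streamed through a single shared MAC routine.
import numpy as np

def shared_mac(a, b):
    """Shared multiply-accumulate primitive: sum of elementwise products."""
    return float(np.sum(a * b))

def conv1d(signal, weights):
    """1D convolution (valid mode) built on the shared MAC."""
    k = len(weights)
    return [shared_mac(signal[i:i + k], weights) for i in range(len(signal) - k + 1)]

def conv2d(image, kernel):
    """2D convolution (valid mode) built on the same shared MAC; the operand
    order fed to the MAC is swapped relative to the 1D case."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            out[r, c] = shared_mac(kernel, image[r:r + kh, c:c + kw])  # swapped order
    return out

if __name__ == "__main__":
    local_memory = {}  # stand-in for the local store holding convolution outputs
    local_memory["conv1d_out"] = conv1d(np.arange(8.0), np.array([1.0, 0.0, -1.0]))
    local_memory["conv2d_out"] = conv2d(np.arange(16.0).reshape(4, 4), np.ones((2, 2)))
    print(local_memory)
```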
