COMPRESSION-ENCODING SCHEDULED INPUTS FOR MATRIX COMPUTATIONS

    Publication No.: WO2020106502A1

    Publication Date: 2020-05-28

    Application No.: PCT/US2019/061052

    Filing Date: 2019-11-13

    Abstract: A method of performing matrix computations includes receiving a compression-encoded matrix including a plurality of rows. Each row of the compression-encoded matrix has a plurality of defined element values and, for each such defined element value, a schedule tag indicating a schedule for using the defined element value in a scheduled matrix computation. The method further includes loading the plurality of rows of the compression-encoded matrix into a corresponding plurality of work memory banks, and providing decoded input data to a matrix computation module configured for performing the scheduled matrix computation. For each work memory bank, a next defined element value and a corresponding schedule tag are read. If the schedule tag meets a scheduling condition, the next defined element value is provided to the matrix computation module. Otherwise, a default element value is provided to the matrix computation module.
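
    The per-bank decode step described in the abstract can be illustrated with a short Python sketch. The names (WorkBank, DEFAULT_VALUE) and the simple equality test used as the scheduling condition are assumptions for illustration, not terms from the patent.

```python
# Illustrative sketch: each work memory bank holds (value, schedule_tag) pairs
# for one compression-encoded row; at each step the bank emits its next defined
# value only when the tag matches the current step, otherwise it emits a
# default (e.g. zero) value. All names here are assumptions.
DEFAULT_VALUE = 0.0

class WorkBank:
    def __init__(self, encoded_row):
        # encoded_row: list of (defined_value, schedule_tag) pairs
        self.entries = list(encoded_row)
        self.pos = 0

    def next_input(self, step):
        """Return the value to feed the matrix computation module at this step."""
        if self.pos < len(self.entries):
            value, tag = self.entries[self.pos]
            if tag == step:              # scheduling condition met
                self.pos += 1
                return value
        return DEFAULT_VALUE             # tag not yet due: provide default value

def decode_inputs(banks, num_steps):
    """Yield one decoded input vector (one element per bank) per schedule step."""
    for step in range(num_steps):
        yield [bank.next_input(step) for bank in banks]

# Example: two sparse rows, each encoded as (value, step at which it is needed).
banks = [WorkBank([(3.0, 0), (5.0, 2)]), WorkBank([(7.0, 1)])]
for vec in decode_inputs(banks, num_steps=3):
    print(vec)   # [3.0, 0.0] / [0.0, 7.0] / [5.0, 0.0]
```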

    ADAPTABLE IMAGE SEARCH WITH COMPUTER VISION ASSISTANCE
    Invention Application, Pending (Published)

    Publication No.: WO2015112653A1

    Publication Date: 2015-07-30

    Application No.: PCT/US2015/012331

    Filing Date: 2015-01-22

    Abstract: A computing device having adaptable image search and methods for operating an image recognition program on the computing device are disclosed herein. The image recognition program may receive, from a user, a query and a target image within which a search based on the query is to be performed, using one or more of a plurality of locally stored image recognition models that are determined to be able to perform the search with sufficiently high confidence. The query may comprise text that is typed or converted from speech. The image recognition program performs the search within the target image for a target region of the target image using at least one selected image recognition model stored locally, and returns a search result to the user.

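    A rough Python sketch of the selection-then-search flow described in the abstract: keep only the locally stored models judged able to handle the query with high confidence, run each over the target image, and return the best matching region. The threshold, the estimate_confidence and find_regions calls, and all other names are hypothetical.

```python
# Hypothetical sketch of adaptable image search with locally stored models.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Region:
    box: tuple      # (x_min, y_min, x_max, y_max) within the target image
    score: float    # model's confidence that the region matches the query

class DummyModel:
    """Stand-in for a locally stored image recognition model (illustrative only)."""
    def __init__(self, labels):
        self.labels = set(labels)
    def estimate_confidence(self, query_text):
        return 0.9 if query_text in self.labels else 0.1
    def find_regions(self, query_text, target_image):
        return [Region(box=(10, 10, 50, 50), score=0.9)]   # placeholder detection

def search_image(query_text, target_image, local_models):
    # Select only the models determined to perform this search with
    # sufficiently high confidence.
    capable = [m for m in local_models
               if m.estimate_confidence(query_text) >= CONFIDENCE_THRESHOLD]
    candidates = []
    for model in capable:
        # Each selected model proposes candidate target regions in the image.
        candidates.extend(model.find_regions(query_text, target_image))
    # Return the highest-scoring region (or None) as the search result.
    return max(candidates, key=lambda r: r.score, default=None)

# Example usage with stand-in models:
models = [DummyModel({"cat", "dog"}), DummyModel({"car"})]
print(search_image("cat", target_image=None, local_models=models))
```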

    IMAGE DATA ANNOTATION SYSTEM
    Invention Application

    Publication No.: WO2022140001A1

    Publication Date: 2022-06-30

    Application No.: PCT/US2021/060413

    Filing Date: 2021-11-23

    Abstract: An image data annotation system automatically annotates a physical object within individual image frames of an image sequence with relevant object annotations based on a three-dimensional (3D) model of the physical object. Annotating the individual image frames with object annotations includes updating individual image frames within image input data to generate annotated image data that is suitable for reliably training a deep neural network (DNN) object detection architecture. Exemplary object annotations that the image data annotation system can automatically apply to individual image frames include, inter alia, object pose, image pose, object masks, 3D bounding boxes composited over the physical object, 2D bounding boxes composited over the physical object, and/or depth map information. Annotating the individual image frames may be accomplished by aligning the 3D model of the physical object with a multi-view reconstruction of the physical object that is generated by inputting the image sequence into a Structure-from-Motion and/or Multi-view Stereo pipeline.
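
    The alignment-based annotation step is essentially geometric: once the 3D model is registered to the multi-view reconstruction, each frame's recovered camera pose lets annotations such as 2D bounding boxes be generated by projection. A minimal numpy sketch under an assumed pinhole-camera convention; the variable names are illustrative, not the system's actual interface.

```python
import numpy as np

# Minimal sketch of one annotation step: given a frame's camera pose (recovered
# by the SfM/MVS pipeline) and the aligned 3D model's bounding-box corners,
# project the corners into the frame and derive a 2D bounding box.
def project_points(points_3d, R, t, K):
    """Project Nx3 world points into pixel coordinates with pose (R, t) and intrinsics K."""
    cam = points_3d @ R.T + t            # world -> camera coordinates
    pix = cam @ K.T                      # camera -> homogeneous pixel coordinates
    return pix[:, :2] / pix[:, 2:3]      # perspective divide

def annotate_frame(box_corners_3d, R, t, K):
    """Return the 2D bounding box (x_min, y_min, x_max, y_max) of the projected 3D box."""
    uv = project_points(box_corners_3d, R, t, K)
    return (*uv.min(axis=0), *uv.max(axis=0))
```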

    POWER-EFFICIENT DEEP NEURAL NETWORK MODULE CONFIGURED FOR EXECUTING A LAYER DESCRIPTOR LIST

    Publication No.: WO2018194993A1

    Publication Date: 2018-10-25

    Application No.: PCT/US2018/027834

    Filing Date: 2018-04-16

    Abstract: A deep neural network (DNN) processor is configured to execute descriptors in layer descriptor lists. The descriptors define instructions for performing a pass of a DNN by the DNN processor. Several types of descriptors can be utilized: memory-to-memory move (M2M) descriptors; operation descriptors; host communication descriptors; configuration descriptors; branch descriptors; and synchronization descriptors. A DMA engine uses M2M descriptors to perform multi-dimensional strided DMA operations. Operation descriptors define the type of operation to be performed by neurons in the DNN processor and the activation function to be used by the neurons. M2M descriptors are buffered separately from operation descriptors and can be executed as soon as possible, subject to explicitly set dependencies. As a result, latency can be reduced and, consequently, neurons can complete their processing faster. The DNN module can then be powered down earlier than it otherwise would have been, thereby saving power.
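
    A short Python sketch of the scheduling idea in the abstract: M2M descriptors sit in their own queue and are issued as soon as their explicitly set dependencies are satisfied, ahead of the operation descriptors that consume the moved data. The Descriptor fields and queue model are assumptions, not the hardware's actual descriptor format.

```python
# Illustrative sketch of executing a layer descriptor list with separately
# buffered M2M descriptors. Names and fields are assumptions.
from dataclasses import dataclass, field

@dataclass
class Descriptor:
    id: int
    kind: str                       # "m2m", "operation", "host", "config", "branch", "sync"
    depends_on: set = field(default_factory=set)

def run_layer_list(descriptors):
    m2m_queue = [d for d in descriptors if d.kind == "m2m"]
    op_queue = [d for d in descriptors if d.kind != "m2m"]
    completed = set()
    while m2m_queue or op_queue:
        # Issue every M2M descriptor whose dependencies are met, ahead of the
        # operation descriptors, so data movement overlaps with computation.
        ready = [d for d in m2m_queue if d.depends_on <= completed]
        for d in ready:
            m2m_queue.remove(d)
            print(f"issue M2M {d.id}")          # DMA engine performs the strided move
            completed.add(d.id)
        if op_queue and op_queue[0].depends_on <= completed:
            d = op_queue.pop(0)                 # operation descriptors run in list order
            print(f"execute {d.kind} {d.id}")   # neurons run the layer and its activation
            completed.add(d.id)
        elif not ready:
            break                               # real hardware would wait on a sync event here

# Example: move weights (1), run a layer that needs them (2), move results out (3).
run_layer_list([
    Descriptor(1, "m2m"),
    Descriptor(2, "operation", {1}),
    Descriptor(3, "m2m", {2}),
])
```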

    QUEUE MANAGEMENT FOR DIRECT MEMORY ACCESS
    Invention Application

    Publication No.: WO2018194845A1

    Publication Date: 2018-10-25

    Application No.: PCT/US2018/026352

    Filing Date: 2018-04-06

    Abstract: A direct memory access (DMA) engine may be responsible for enabling and controlling DMA data flow within a computing system. The DMA engine moves blocks of data, associated with descriptors in a plurality of queues, from a source to a destination memory location or address, autonomously and without control by the computing system's processor. Based on analysis of the data blocks linked to the descriptors in the queues, the DMA engine and its associated DMA fragmenter ensure that data blocks linked to descriptors in the queues do not remain idle for an excessive period of time. The DMA fragmenter may divide large data blocks into smaller data blocks to ensure that the processing of large data blocks does not preclude the timely processing of smaller data blocks associated with one or more descriptors in the queues. The stored data blocks may be two-dimensional data blocks.
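
    A brief Python sketch of the fragmentation idea: a large transfer is split into bounded-size fragments that are interleaved round-robin with transfers from other queues, so small transfers are not starved behind one huge block. The fragment size, descriptor tuple, and queue model are illustrative assumptions.

```python
# Illustrative sketch of a DMA fragmenter servicing multiple descriptor queues.
from collections import deque

MAX_FRAGMENT_BYTES = 4096

def fragment(descriptor):
    """Split one transfer descriptor (src, dst, length) into bounded fragments."""
    src, dst, length = descriptor
    for offset in range(0, length, MAX_FRAGMENT_BYTES):
        size = min(MAX_FRAGMENT_BYTES, length - offset)
        yield (src + offset, dst + offset, size)

def service_queues(queues):
    """Round-robin one fragment at a time across the descriptor queues."""
    pending = [deque(f for d in q for f in fragment(d)) for q in queues]
    while any(pending):
        for q in pending:
            if q:
                src, dst, size = q.popleft()
                # The DMA engine would copy `size` bytes from src to dst here.
                print(f"copy {size} bytes: {src:#x} -> {dst:#x}")

# Example: one queue holding a large block and one holding a small block;
# the large block is fragmented so the small transfer is serviced promptly.
service_queues([[(0x1000, 0x8000, 10000)], [(0x2000, 0x9000, 512)]])
```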
