PERSON SEARCH SYSTEM BASED ON MULTIPLE DEEP LEARNING MODELS

    Publication No.: US20200311387A1

    Publication Date: 2020-10-01

    Application No.: US16808983

    Filing Date: 2020-03-04

    Abstract: A computer-implemented method executed by at least one processor for person identification is presented. The method includes employing one or more cameras to receive a video stream including a plurality of frames to extract features therefrom, detecting, via an object detection model, objects within the plurality of frames, detecting, via a key point detection model, persons within the plurality of frames, detecting, via a color detection model, color of clothing worn by the persons, detecting, via a gender and age detection model, an age and a gender of the persons, establishing a spatial connection between the objects and the persons, storing the features in a feature database, each feature associated with a confidence value, and normalizing, via a ranking component, the confidence values of each of the features.
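The abstract combines several detection models whose raw confidence values are not directly comparable, then normalizes them via a ranking component. A minimal sketch of that idea, assuming min-max normalization and an in-memory feature store (the class, field names, and example attributes are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    # One detected attribute (e.g. clothing color) with a raw model
    # confidence; names and values here are illustrative.
    name: str
    value: str
    confidence: float

class FeatureDatabase:
    """Stores features produced by several detection models and
    normalizes their confidence values so that scores from
    heterogeneous models become comparable."""

    def __init__(self):
        self.features = []

    def add(self, name, value, confidence):
        self.features.append(Feature(name, value, confidence))

    def normalize(self):
        # Min-max normalization over all stored confidences: one
        # plausible reading of the abstract's "ranking component".
        lo = min(f.confidence for f in self.features)
        hi = max(f.confidence for f in self.features)
        span = (hi - lo) or 1.0
        for f in self.features:
            f.confidence = (f.confidence - lo) / span

db = FeatureDatabase()
db.add("clothing_color", "red", 0.72)   # from a color detection model
db.add("gender", "female", 0.91)        # from a gender/age detection model
db.add("age_range", "30-40", 0.55)      # from a gender/age detection model
db.normalize()
```

After normalization the strongest feature maps to 1.0 and the weakest to 0.0, so downstream ranking can compare features regardless of which model produced them.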

    93. Automatic communication and optimization of multi-dimensional arrays for many-core coprocessor using static compiler analysis
    Granted invention patent (In force)

    Publication No.: US09535826B2

    Publication Date: 2017-01-03

    Application No.: US14293667

    Filing Date: 2014-06-02

    Abstract: There are provided source-to-source transformation methods for a multi-dimensional array and/or a multi-level pointer for a computer program. A method includes minimizing a number of holes for variable length elements for a given dimension of the array and/or pointer using at least two stride values included in stride buckets. The minimizing step includes modifying memory allocation sites, for the array and/or pointer, to allocate memory based on the stride values. The minimizing step further includes modifying a multi-dimensional memory access, for accessing the array and/or pointer, into a single dimensional memory access using the stride values. The minimizing step also includes inserting an offload pragma for a data transfer of the array and/or pointer as at least one of a single-dimensional array and a single-level pointer. The data transfer is from a central processing unit to a coprocessor over peripheral component interconnect express.

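The stride-bucket idea can be illustrated with a small sketch: each variable-length row is assigned the smallest stride that fits it, which wastes fewer padding "holes" than padding every row to the worst case, and a 2-D access is rewritten into a 1-D access over the flattened buffer. The two-bucket choice and function names are illustrative, assuming the strides cover the longest row:

```python
def bucket_strides(row_lengths, strides):
    """Assign each row the smallest stride (from the stride buckets)
    that fits it, and compute each row's offset in the flat buffer."""
    offsets, total = [], 0
    for n in row_lengths:
        stride = next(s for s in sorted(strides) if s >= n)
        offsets.append(total)
        total += stride
    return offsets, total

def flat_index(offsets, i, j):
    # The source-to-source transform: a[i][j] becomes buf[offsets[i] + j],
    # a single-dimensional access suitable for a flat CPU->coprocessor copy.
    return offsets[i] + j

rows = [3, 7, 2, 8]                               # variable-length rows
offsets, size = bucket_strides(rows, strides=[4, 8])
# rows of length <= 4 get stride 4; longer rows get stride 8,
# so the buffer needs 24 slots instead of 4 * 8 = 32 with one stride
```

With two stride values the flattened buffer holds 24 elements versus 32 under a single worst-case stride, which is the hole-minimization the abstract describes.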

    94. LambdaLib: In-Memory View Management and Query Processing Library for Realizing Portable, Real-Time Big Data Applications
    Invention patent application (Pending, published)

    Publication No.: US20160300157A1

    Publication Date: 2016-10-13

    Application No.: US15089667

    Filing Date: 2016-04-04

    CPC classification number: G06F16/252 G06F16/24568

    Abstract: A big data processing system includes a memory management engine having stream buffers, realtime views and models, and batch views and models, the stream buffers coupleable to one or more stream processing frameworks to process stream data, the batch models coupleable to one or more batch processing frameworks; one or more processing engines including Join, Group, Filter, Aggregate, Project functional units and classifiers; and a client layer engine communicating with one or more big data applications, the client layer engine handling an output layer, an API layer, and a unified query layer.

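The split between batch views and realtime views behind one unified query layer is the classic lambda-architecture pattern. A minimal sketch of that merge, assuming a simple count aggregation (the class and method names are illustrative, not LambdaLib's actual API):

```python
class LambdaStore:
    """Minimal lambda-architecture view manager: a batch view over
    historical data plus a realtime view over recent stream data,
    merged behind a single query interface."""

    def __init__(self):
        self.batch_view = {}     # key -> count rebuilt by the batch layer
        self.realtime_view = {}  # key -> count from stream buffers

    def ingest_stream(self, key):
        # Stream processing path: update the realtime view immediately.
        self.realtime_view[key] = self.realtime_view.get(key, 0) + 1

    def recompute_batch(self, events):
        # Batch processing path: periodically rebuild the batch view
        # over all historical events and reset the realtime view for
        # the window it now covers.
        self.batch_view = {}
        for key in events:
            self.batch_view[key] = self.batch_view.get(key, 0) + 1
        self.realtime_view.clear()

    def query(self, key):
        # Unified query layer: merge both views at read time.
        return self.batch_view.get(key, 0) + self.realtime_view.get(key, 0)

store = LambdaStore()
store.recompute_batch(["a", "b", "a"])  # historical data
store.ingest_stream("a")                # recent stream event
```

A read of `"a"` sees both the batch count and the not-yet-batched stream event, which is what lets such a system answer real-time queries without waiting for the next batch run.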

    95. Simultaneous scheduling of processes and offloading computation on many-core coprocessors
    Granted invention patent (In force)

    Publication No.: US09367357B2

    Publication Date: 2016-06-14

    Application No.: US14261090

    Filing Date: 2014-04-24

    CPC classification number: G06F9/5044 G06F9/4881

    Abstract: Methods and systems for scheduling jobs to manycore nodes in a cluster include selecting a job to run according to the job's wait time and the job's expected execution time; sending job requirements to all nodes in a cluster, where each node includes a manycore processor; determining at each node whether said node has sufficient resources to ever satisfy the job requirements and, if no node has sufficient resources, deleting the job; creating a list of nodes that have sufficient free resources at a present time to satisfy the job requirements; and assigning the job to a node, based on a difference between an expected execution time and associated confidence value for each node and a hypothetical fastest execution time and associated hypothetical maximum confidence value.

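The scheduling flow in the abstract has three stages: delete a job no node could ever satisfy, shortlist nodes with sufficient free resources right now, and pick the node whose expected execution time and confidence are closest to a hypothetical fastest, fully confident run. A sketch under those assumptions (the node/job fields and the additive score are illustrative):

```python
def assign_job(job, nodes):
    """Return the chosen node name, None if the job can never be
    satisfied (delete it), or "wait" if no node is free right now."""
    # Stage 1: can any node *ever* satisfy the requirements?
    if not any(n["total_cores"] >= job["cores"] for n in nodes):
        return None
    # Stage 2: nodes with sufficient free resources at present.
    candidates = [n for n in nodes if n["free_cores"] >= job["cores"]]
    if not candidates:
        return "wait"
    # Stage 3: distance from the hypothetical fastest execution time
    # and the hypothetical maximum confidence; smaller is better.
    best_time = min(n["expected_time"] for n in candidates)
    max_conf = 1.0
    def score(n):
        return (n["expected_time"] - best_time) + (max_conf - n["confidence"])
    return min(candidates, key=score)["name"]

nodes = [
    {"name": "n1", "total_cores": 60, "free_cores": 10,
     "expected_time": 120.0, "confidence": 0.9},
    {"name": "n2", "total_cores": 60, "free_cores": 40,
     "expected_time": 100.0, "confidence": 0.7},
]
```

Here an 8-core job goes to `n2` (fastest expected time dominates its lower confidence), a 100-core job is deleted outright, and a 20-core job also lands on `n2` because `n1` lacks free cores at present.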

    96. Large-Scale, Dynamic Graph Storage and Processing System
    Invention patent application (In force)

    Publication No.: US20160110409A1

    Publication Date: 2016-04-21

    Application No.: US14831809

    Filing Date: 2015-08-20

    Abstract: A method in a graph storage and processing system is provided. The method includes storing, in a scalable, distributed, fault-tolerant, in-memory graph storage device, base graph data representative of graphs, and storing, in a real-time, in memory graph storage device, update graph data representative of graph updates for the graphs with respect to a time threshold. The method further includes sampling the base graph data to generate sampled portions of the graphs and storing the sampled portions, by an in-memory graph sampler. The method additionally includes providing, by a query manager, a query interface between applications and the system. The method also includes forming, by the query manager, graph data representative of a complete graph from at least the base graph data and the update graph data, if any. The method includes processing, by a graph computer, the sampled portions using batch-type computations to generate approximate results for graph-based queries.

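The core split in the abstract is a durable base graph store, a separate store of recent updates, a query manager that merges the two into a complete graph, and a sampler that feeds approximate batch computations. A minimal sketch, assuming an edge-set representation and uniform edge sampling (all names and structures are illustrative):

```python
import random

class GraphStore:
    """Base graph plus realtime updates, with a query manager that
    merges them and an in-memory sampler for approximate queries."""

    def __init__(self):
        self.base_edges = set()    # durable, in-memory base graph
        self.update_edges = set()  # recent updates past the time threshold

    def add_update(self, u, v):
        self.update_edges.add((u, v))

    def complete_graph(self):
        # Query manager: complete graph = base graph + pending updates.
        return self.base_edges | self.update_edges

    def sample_edges(self, fraction, seed=0):
        # In-memory graph sampler: a uniform edge sample of the base
        # graph, used by batch-type computations for approximate results.
        rng = random.Random(seed)
        edges = sorted(self.base_edges)
        k = max(1, int(len(edges) * fraction))
        return rng.sample(edges, k)

g = GraphStore()
g.base_edges = {(1, 2), (2, 3), (3, 4)}
g.add_update(4, 5)
```

Exact queries run against `complete_graph()`, while expensive analytics can trade accuracy for speed by running over `sample_edges()` instead.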

    97. CAPTURING SNAPSHOTS OF OFFLOAD APPLICATIONS ON MANY-CORE COPROCESSORS
    Invention patent application (In force)

    Publication No.: US20150212823A1

    Publication Date: 2015-07-30

    Application No.: US14572261

    Filing Date: 2014-12-16

    Abstract: Methods are provided. A method for swapping-out an offload process from a coprocessor includes issuing a snapify_pause request from a host processor to the coprocessor to initiate a pausing of the offload process executing by the coprocessor and another process executing by the host processor using a plurality of locks. The offload process is previously offloaded from the host processor to the coprocessor. The method further includes issuing a snapify_capture request from the host processor to the coprocessor to initiate a local snapshot capture and saving of the local snapshot capture by the coprocessor. The method also includes issuing a snapify_wait request from the host processor to the coprocessor to wait for the local snapshot capture and the saving of the local snapshot capture to complete by the coprocessor.

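The host drives a three-step protocol: pause the offload process, trigger a local snapshot capture and save, then wait for the coprocessor to finish. A toy sketch of the coprocessor side, assuming the `snapify_*` request names from the abstract; the rest (state dictionary, event-based completion) is illustrative:

```python
import threading

class Coprocessor:
    """Toy stand-in for the coprocessor side of the snapshot protocol:
    pause the offload process, capture and save a local snapshot, and
    signal completion so the host's wait can return."""

    def __init__(self):
        self.paused = False
        self.snapshot = None
        self.done = threading.Event()

    def snapify_pause(self):
        # Host-initiated pause of the running offload process.
        self.paused = True

    def snapify_capture(self, state):
        # Local snapshot capture and save; only valid once paused.
        assert self.paused, "must pause before capturing"
        self.snapshot = dict(state)
        self.done.set()

    def snapify_wait(self, timeout=1.0):
        # Host blocks until the capture and save have completed.
        return self.done.wait(timeout)

offload_state = {"pc": 42, "heap_bytes": 1024}  # illustrative process state
cop = Coprocessor()
cop.snapify_pause()
cop.snapify_capture(offload_state)
```

Ordering matters: capturing before pausing would snapshot a still-running process, which is why the sketch asserts on `paused` and why the host issues the three requests in sequence.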

    98. SYSTEMS AND METHODS FOR SWAPPING PINNED MEMORY BUFFERS
    Invention patent application (In force)

    Publication No.: US20150212733A1

    Publication Date: 2015-07-30

    Application No.: US14603813

    Filing Date: 2015-01-23

    CPC classification number: G06F3/061 G06F3/0656 G06F3/0673 G06F9/50 G06F12/023

    Abstract: Systems and methods for swapping out and in pinned memory regions between main memory and a separate storage location in a system, including establishing an offload buffer in an interposing library; swapping out pinned memory regions by transferring offload buffer data from a coprocessor memory to a host processor memory, unregistering and unmapping a memory region employed by the offload buffer from the interposing library, wherein the interposing library is pre-loaded on the coprocessor, and collects and stores information employed during the swapping out. The pinned memory regions are swapped in by mapping and re-registering the files to the memory region employed by the offload buffer, and transferring data of the offload buffer data from the host memory back to the re-registered memory region.

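The swap-out path transfers the buffer to host memory, then unregisters and unmaps the pinned region; the swap-in path re-maps and re-registers the region, then transfers the data back. A sketch of that state machine, with registration modeled as a simple flag (a real interposing library would call driver pinning/mapping APIs, which are not shown):

```python
class OffloadBuffer:
    """Models one pinned offload buffer being swapped out of
    coprocessor memory to the host and later swapped back in."""

    def __init__(self, data):
        self.device_data = list(data)  # contents in coprocessor memory
        self.host_copy = None          # contents while swapped out
        self.registered = True         # region is pinned (registered)

    def swap_out(self):
        # Transfer coprocessor -> host, then unmap and unregister the
        # memory region the offload buffer occupied.
        self.host_copy = list(self.device_data)
        self.device_data = None
        self.registered = False

    def swap_in(self):
        # Re-map and re-register the region, then transfer the saved
        # data from host memory back into it.
        self.registered = True
        self.device_data = list(self.host_copy)
        self.host_copy = None

buf = OffloadBuffer([1, 2, 3])
buf.swap_out()
buf.swap_in()
```

After a full swap-out/swap-in round trip the buffer is pinned again and its contents are intact, which is the invariant the interposing library has to preserve transparently for the application.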

    100. Semi-automatic restructuring of offloadable tasks for accelerators
    Granted invention patent (In force)

    Publication No.: US08997073B2

    Publication Date: 2015-03-31

    Application No.: US14261897

    Filing Date: 2014-04-25

    CPC classification number: G06F8/452 G06F8/4441 G06F9/5027 G06F2209/509

    Abstract: A computer-implemented method entails identifying code regions in an application from which offloadable tasks can be generated by a compiler for a heterogeneous computing system with processor and accelerator memory, including adding relaxed semantics to a directive-based language in the heterogeneous computing system that allow suggesting, rather than specifying, a parallel code region as an offloadable candidate, and identifying one or more offloadable tasks in the neighborhood of the code region marked by the directive.

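The key relaxation is that a directive only *suggests* a region as offloadable, and the compiler may also pick up tasks in that region's neighborhood. A toy sketch of such an analysis pass over source lines, where the pragma name and the one-line neighborhood window are illustrative, not the patent's actual syntax:

```python
def find_offload_candidates(lines):
    """Return indices of loop lines the compiler would treat as
    offloadable candidates: the loop a suggestion-style pragma marks,
    plus loops in its immediate neighborhood."""
    candidates = set()
    for i, line in enumerate(lines):
        if line.strip() == "#pragma offload_candidate":
            # The suggested region itself (i + 1) plus its neighbor
            # (i + 2): a deliberately tiny "neighborhood" for the sketch.
            for j in (i + 1, i + 2):
                if j < len(lines) and lines[j].lstrip().startswith("for"):
                    candidates.add(j)
    return sorted(candidates)

code = [
    "init();",
    "#pragma offload_candidate",
    "for (i = 0; i < n; i++) a[i] = b[i];",
    "for (j = 0; j < n; j++) c[j] = a[j] * 2;",  # neighborhood loop
    "finish();",
]
```

Because the directive is a hint rather than a command, the compiler remains free to take the marked loop, its neighbor, both, or neither, depending on its own profitability analysis.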
