Method, apparatus and computer programs providing cluster-wide page management
    Granted patent (in force)

    Publication No.: US09323677B2

    Publication date: 2016-04-26

    Application No.: US13967857

    Filing date: 2013-08-15

    Abstract: A data processing system includes a plurality of virtual machines each having associated memory pages; a shared memory page cache that is accessible by each of the plurality of virtual machines; and a global hash map that is accessible by each of the plurality of virtual machines. The data processing system is configured such that, for a particular memory page stored in the shared memory page cache that is associated with two or more of the plurality of virtual machines, there is a single key stored in the global hash map that identifies at least a storage location in the shared memory page cache of the particular memory page. The system can be embodied at least partially in a cloud computing system.

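    A minimal sketch of the idea in this abstract, assuming the single key is derived from the page contents (the patent only requires that one key per shared page be stored in the global hash map); the class and method names below are illustrative, not the patented implementation:

```python
import hashlib

class SharedPageCache:
    """Illustrative shared page cache: stores de-duplicated page contents and a
    global hash map from page key -> storage location in the cache."""

    def __init__(self):
        self._slots = {}       # storage location -> page bytes
        self._global_map = {}  # single key per unique page -> storage location
        self._next_slot = 0

    def put(self, page_bytes: bytes) -> str:
        # One key identifies the page for every VM that references it.
        key = hashlib.sha256(page_bytes).hexdigest()
        if key not in self._global_map:
            self._global_map[key] = self._next_slot
            self._slots[self._next_slot] = page_bytes
            self._next_slot += 1
        return key

    def get(self, key: str) -> bytes:
        return self._slots[self._global_map[key]]

# Two VMs storing the same page end up sharing a single key and cache slot.
cache = SharedPageCache()
key_vm1 = cache.put(b"page contents")
key_vm2 = cache.put(b"page contents")
assert key_vm1 == key_vm2
```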

    Acceleration prediction in hybrid systems

    Publication No.: US09164814B2

    Publication date: 2015-10-20

    Application No.: US14059624

    Filing date: 2013-10-22

    IPC classification: G06F9/54 G06F9/445

    Abstract: Predicting acceleration in a hybrid system may comprise determining a number of cross system calls in a first host-accelerator computer architecture running a workload. Host machine overhead and accelerator overhead in the first host-accelerator computer architecture associated with each of the cross system calls may be determined. Communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload may be determined. An application response time may be predicted for a candidate application to be run in a second host-accelerator computer architecture, based at least on the determined host machine overhead, the accelerator overhead, and the communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload.
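    As an illustrative reading of this abstract, the predicted response time can be thought of as the candidate application's base compute time plus a per-call cost for every cross system call; the additive formula and parameter names below are assumptions made for the sketch, not the claimed prediction model:

```python
def predict_response_time(num_cross_calls, host_overhead_s, accel_overhead_s,
                          comm_delay_s, base_compute_s):
    """Illustrative prediction: each cross-system call contributes host-side
    overhead, accelerator-side overhead, and a communication delay on top of
    the candidate application's base compute time."""
    per_call_cost = host_overhead_s + accel_overhead_s + comm_delay_s
    return base_compute_s + num_cross_calls * per_call_cost

# Example: 1,000 cross-system calls measured on the first architecture.
estimate = predict_response_time(
    num_cross_calls=1_000,
    host_overhead_s=2e-5,
    accel_overhead_s=1e-5,
    comm_delay_s=5e-5,
    base_compute_s=1.2,
)
print(f"predicted response time: {estimate:.3f} s")
```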

    ACCELERATION PREDICTION IN HYBRID SYSTEMS

    Publication No.: US20150254111A1

    Publication date: 2015-09-10

    Application No.: US14718607

    Filing date: 2015-05-21

    IPC classification: G06F9/50 G06F9/54 G06F9/48

    Abstract: Predicting acceleration in a hybrid system may comprise determining a number of cross system calls in a first host-accelerator computer architecture running a workload. Host machine overhead and accelerator overhead in the first host-accelerator computer architecture associated with each of the cross system calls may be determined. Communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload may be determined. An application response time may be predicted for a candidate application to be run in a second host-accelerator computer architecture, based at least on the determined host machine overhead, the accelerator overhead, and the communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload.

    ACCELERATING MULTIPLE QUERY PROCESSING OPERATIONS
    Patent application (pending, published)

    Publication No.: US20150046486A1

    Publication date: 2015-02-12

    Application No.: US14018646

    Filing date: 2013-09-05

    IPC classification: G06F17/30

    CPC classification: G06F17/30442

    Abstract: Embodiments include methods, systems, and computer program products for offloading multiple processing operations to an accelerator. A method includes receiving, by a processing device, a database query from an application. The method also includes performing analysis on the database query and selecting an accelerator template from a plurality of accelerator templates based on the analysis of the database query. The method further includes transmitting an indication of the accelerator template to the accelerator and executing at least a portion of the database query on the accelerator.

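    A hedged sketch of the offload flow described in this abstract: analyze the query text, choose one of several accelerator templates, signal the choice to the accelerator, and run part of the query there. The template catalogue, the keyword-matching analysis, and the `load_template`/`execute` accelerator interface are all hypothetical stand-ins:

```python
# Hypothetical template catalogue; a real system would ship accelerator images
# or configurations rather than keyword lists.
ACCELERATOR_TEMPLATES = {
    "sort": {"operations": ["ORDER BY"]},
    "join": {"operations": ["JOIN"]},
    "aggregate": {"operations": ["GROUP BY", "SUM(", "COUNT("]},
}

def select_template(query: str) -> str:
    """Pick the template whose supported operations best match the query text."""
    upper = query.upper()
    scores = {
        name: sum(op in upper for op in spec["operations"])
        for name, spec in ACCELERATOR_TEMPLATES.items()
    }
    return max(scores, key=scores.get)

def offload(query: str, accelerator) -> object:
    template = select_template(query)    # analysis + template selection
    accelerator.load_template(template)  # transmit template indication (hypothetical call)
    return accelerator.execute(query)    # run part of the query there (hypothetical call)

print(select_template("SELECT dept, COUNT(*) FROM emp GROUP BY dept"))  # aggregate
```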

    ACCELERATING MULTIPLE QUERY PROCESSING OPERATIONS

    Publication No.: US20150046427A1

    Publication date: 2015-02-12

    Application No.: US13961089

    Filing date: 2013-08-07

    IPC classification: G06F17/30

    CPC classification: G06F17/30442

    Abstract: Embodiments include methods, systems, and computer program products for offloading multiple processing operations to an accelerator. A method includes receiving, by a processing device, a database query from an application. The method also includes performing analysis on the database query and selecting an accelerator template from a plurality of accelerator templates based on the analysis of the database query. The method further includes transmitting an indication of the accelerator template to the accelerator and executing at least a portion of the database query on the accelerator.

    Runtime estimation for machine learning tasks

    Publication No.: US11727309B2

    Publication date: 2023-08-15

    Application No.: US17452596

    Filing date: 2021-10-28

    IPC classification: G06N20/00 G06F16/22

    CPC classification: G06N20/00 G06F16/22

    Abstract: Techniques for estimating runtimes of one or more machine learning tasks are provided. For example, one or more embodiments described herein can regard a system that can comprise a memory that stores computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise an extraction component that can extract a parameter from a machine learning task. The parameter can define a performance characteristic of the machine learning task. Also, the computer executable components can comprise a model component that can generate a model based on the parameter. Further, the computer executable components can comprise an estimation component that can generate an estimated runtime of the machine learning task based on the model. The estimated runtime can define a period of time beginning at an initiation of the machine learning task and ending at a completion of the machine learning task.
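    A minimal sketch of the extract-model-estimate pipeline in this abstract, assuming the extracted parameters are numeric task features (for example data size, feature count, epochs) and the model is a least-squares fit over past runs; the feature set and history values are illustrative, not the patented components:

```python
import numpy as np

# Hypothetical history of past tasks: (rows, features, epochs) -> runtime in seconds.
history_params = np.array([
    [10_000, 20, 5],
    [50_000, 20, 5],
    [50_000, 100, 10],
    [200_000, 100, 10],
], dtype=float)
history_runtimes = np.array([12.0, 55.0, 140.0, 520.0])

def fit_runtime_model(params, runtimes):
    """Least-squares model mapping extracted task parameters to runtime."""
    X = np.hstack([params, np.ones((params.shape[0], 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, runtimes, rcond=None)
    return coef

def estimate_runtime(coef, params):
    """Estimated time from task initiation to task completion, in seconds."""
    return float(np.append(params, 1.0) @ coef)

coef = fit_runtime_model(history_params, history_runtimes)
print(f"estimated runtime: {estimate_runtime(coef, [100_000, 50, 5]):.1f} s")
```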

    Performance based switching of a model training process

    Publication No.: US11551145B2

    Publication date: 2023-01-10

    Application No.: US16782713

    Filing date: 2020-02-05

    IPC classification: G06N20/00 G06F17/18 G06F11/34

    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate switching a model training process from a ground truth training phase to an adversarial training phase based on performance of a model trained in the ground truth training phase are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise an analysis component that identifies a performance condition of a model trained in a model training process. The computer executable components can further comprise a trainer component that switches the model training process from a ground truth training process to an adversarial training process based on the identified performance condition.
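    An illustrative sketch of the phase switch described in this abstract, assuming the performance condition is a plateau in a monitored validation loss; the plateau rule, callback interface, and toy loss values are assumptions, not the claimed system:

```python
def train_with_phase_switch(train_ground_truth, train_adversarial,
                            evaluate, max_epochs=50, patience=3, min_delta=1e-3):
    """Run ground-truth training until the monitored metric stops improving
    (the illustrative 'performance condition'), then continue with adversarial
    training for the remaining epochs."""
    best, stale, phase = float("inf"), 0, "ground_truth"
    for epoch in range(max_epochs):
        if phase == "ground_truth":
            train_ground_truth(epoch)
        else:
            train_adversarial(epoch)
        loss = evaluate(epoch)
        if loss < best - min_delta:
            best, stale = loss, 0
        else:
            stale += 1
        # Performance condition: no meaningful improvement for `patience` epochs.
        if phase == "ground_truth" and stale >= patience:
            phase, stale = "adversarial", 0
    return phase

# Toy stand-ins for the real training and evaluation steps.
losses = iter([1.0, 0.6, 0.55, 0.551, 0.552, 0.553, 0.4, 0.3, 0.25, 0.2])
final_phase = train_with_phase_switch(
    train_ground_truth=lambda e: None,
    train_adversarial=lambda e: None,
    evaluate=lambda e: next(losses),
    max_epochs=10,
)
print(final_phase)  # 'adversarial'
```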