1. Event-based dynamic resource provisioning
    Invention grant (in force)

    Publication No.: US08977752B2

    Publication date: 2015-03-10

    Application No.: US12424893

    Filing date: 2009-04-16

    CPC classification number: G06F9/5011 G06F9/5061

    Abstract: Disclosed are a method, a system and a computer program product for automatically allocating and de-allocating resources for jobs executed or processed by one or more supercomputer systems. In one or more embodiments, a supercomputing system can process multiple jobs with respective supercomputing resources. A global resource manager can automatically allocate additional resources to a first job and de-allocate resources from a second job. In one or more embodiments, the global resource manager can provide the de-allocated resources to the first job as additional supercomputing resources. In one or more embodiments, the first job can use the additional supercomputing resources to perform data analysis at a higher resolution, and the additional resources can compensate for an amount of time the higher resolution analysis would take using originally allocated supercomputing resources.

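The global resource manager described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Job` and `GlobalResourceManager` names and the node-count model are assumptions made for the example.

```python
# Illustrative sketch only: resources de-allocated from one job are
# handed to another job as additional supercomputing resources.
class Job:
    def __init__(self, name, nodes):
        self.name = name
        self.nodes = nodes  # number of compute nodes currently held

class GlobalResourceManager:
    def __init__(self, jobs):
        self.jobs = {j.name: j for j in jobs}

    def reallocate(self, donor, recipient, count):
        """Move up to `count` nodes from the donor job to the recipient."""
        src, dst = self.jobs[donor], self.jobs[recipient]
        moved = min(count, src.nodes)
        src.nodes -= moved
        dst.nodes += moved
        return moved

mgr = GlobalResourceManager([Job("analysis", 64), Job("batch", 128)])
moved = mgr.reallocate("batch", "analysis", 32)
print(moved, mgr.jobs["analysis"].nodes, mgr.jobs["batch"].nodes)  # 32 96 96
```

Here the "analysis" job gains 32 nodes taken from the "batch" job, mirroring the abstract's first job receiving de-allocated resources for higher-resolution analysis.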

2. EVENT-BASED DYNAMIC RESOURCE PROVISIONING
    Invention application (in force)

    Publication No.: US20100269119A1

    Publication date: 2010-10-21

    Application No.: US12424893

    Filing date: 2009-04-16

    CPC classification number: G06F9/5011 G06F9/5061

    Abstract: Disclosed are a method, a system and a computer program product for automatically allocating and de-allocating resources for jobs executed or processed by one or more supercomputer systems. In one or more embodiments, a supercomputing system can process multiple jobs with respective supercomputing resources. A global resource manager can automatically allocate additional resources to a first job and de-allocate resources from a second job. In one or more embodiments, the global resource manager can provide the de-allocated resources to the first job as additional supercomputing resources. In one or more embodiments, the first job can use the additional supercomputing resources to perform data analysis at a higher resolution, and the additional resources can compensate for an amount of time the higher resolution analysis would take using originally allocated supercomputing resources.


3. VIRTUAL CONTROLLERS WITH A LARGE DATA CENTER
    Invention application (expired)

    Publication No.: US20100268755A1

    Publication date: 2010-10-21

    Application No.: US12424852

    Filing date: 2009-04-16

    CPC classification number: H04L12/6418

    Abstract: Disclosed are a method, a system and a computer program product for dynamically allocating and/or de-allocating resources and/or partitions that provide I/O and/or active storage access services in a supercomputing system. The supercomputing system can include multiple compute nodes, high performance computing (HPC) switches coupled to the compute nodes, and active non-volatile storage devices coupled to the compute nodes. Each of the compute nodes can be configured to communicate with another compute node through at least one of the HPC switches. In one or more embodiments, each of at least two compute nodes includes a storage controller and is configured to dynamically allocate and de-allocate a storage controller partition to provide storage services to the supercomputing system, and each of at least two compute nodes includes an I/O controller and is configured to dynamically allocate and de-allocate an I/O controller partition to provide I/O services to the supercomputing system.

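The dynamic allocation of storage-controller and I/O-controller partitions on compute nodes can be sketched as below. The `ComputeNode` class and the `"storage"`/`"io"` partition kinds are assumptions for illustration, not names from the patent.

```python
# Illustrative sketch only: compute nodes dynamically allocate or
# de-allocate controller partitions that provide storage or I/O services.
class ComputeNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.partitions = set()  # currently allocated partition kinds

    def allocate_partition(self, kind):
        assert kind in ("storage", "io")
        self.partitions.add(kind)

    def deallocate_partition(self, kind):
        self.partitions.discard(kind)

nodes = [ComputeNode(i) for i in range(4)]
# Per the abstract, at least two nodes provide storage services and at
# least two provide I/O services to the supercomputing system.
for n in nodes[:2]:
    n.allocate_partition("storage")
for n in nodes[2:]:
    n.allocate_partition("io")
storage_nodes = [n.node_id for n in nodes if "storage" in n.partitions]
print(storage_nodes)  # [0, 1]
```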

4. Virtual controllers with a large data center
    Invention grant (expired)

    Publication No.: US08028017B2

    Publication date: 2011-09-27

    Application No.: US12424852

    Filing date: 2009-04-16

    CPC classification number: H04L12/6418

    Abstract: Disclosed are a method, a system and a computer program product for dynamically allocating and/or de-allocating resources and/or partitions that provide I/O and/or active storage access services in a supercomputing system. The supercomputing system can include multiple compute nodes, high performance computing (HPC) switches coupled to the compute nodes, and active non-volatile storage devices coupled to the compute nodes. Each of the compute nodes can be configured to communicate with another compute node through at least one of the HPC switches. In one or more embodiments, each of at least two compute nodes includes a storage controller and is configured to dynamically allocate and de-allocate a storage controller partition to provide storage services to the supercomputing system, and each of at least two compute nodes includes an I/O controller and is configured to dynamically allocate and de-allocate an I/O controller partition to provide I/O services to the supercomputing system.


5. Techniques for dynamically assigning jobs to processors in a cluster based on broadcast information
    Invention grant (in force)

    Publication No.: US08122132B2

    Publication date: 2012-02-21

    Application No.: US12336312

    Filing date: 2008-12-16

    CPC classification number: G06F9/5088

    Abstract: A technique for operating a high performance computing (HPC) cluster having multiple nodes (each of which includes multiple processors) includes periodically broadcasting information, related to processor utilization and network utilization at each of the multiple nodes, from each of the multiple nodes to remaining ones of the multiple nodes. Respective local job tables maintained in each of the multiple nodes are updated based on the broadcast information. One or more threads are then moved from one or more of the multiple processors to a different one of the multiple processors (based on the broadcast information in the respective local job tables).

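The broadcast-and-local-job-table scheme in the abstract can be sketched as follows. The dict-based node representation and the least-loaded-peer selection rule are simplifying assumptions; the patent does not prescribe this exact policy.

```python
# Illustrative sketch only: each node broadcasts its utilization, every
# node records it in a local job table, and an overloaded node picks the
# least-loaded peer as the target for moving a thread.
def broadcast(nodes):
    """Each node sends (id, utilization) to all other nodes' job tables."""
    for sender in nodes:
        for receiver in nodes:
            if receiver is not sender:
                receiver["job_table"][sender["id"]] = sender["util"]

def pick_target(node):
    """Choose the least-utilized peer from this node's local job table."""
    return min(node["job_table"], key=node["job_table"].get)

nodes = [
    {"id": 0, "util": 0.9, "job_table": {}},
    {"id": 1, "util": 0.2, "job_table": {}},
    {"id": 2, "util": 0.7, "job_table": {}},
]
broadcast(nodes)
target = pick_target(nodes[0])  # node 0 is heavily loaded; move a thread
print(target)  # 1
```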

6. Dynamic Runtime Modification of Array Layout for Offset
    Invention application (in force)

    Publication No.: US20100268880A1

    Publication date: 2010-10-21

    Application No.: US12424348

    Filing date: 2009-04-15

    CPC classification number: G06F12/0886 G06F9/30047 G06F9/345

    Abstract: Disclosed are a method, a system and a computer program product for operating a cache system. The cache system can include multiple cache lines, and a first cache line of the multiple cache lines can include multiple cache cells and a bus coupled to the multiple cache cells. In one or more embodiments, the bus can include a switch that is operable to receive a first control signal and, based on the first control signal, to split the bus into first and second portions or aggregate the bus into a whole. When the bus is split, a first cache cell and a second cache cell of the multiple cache cells are coupled to respective first and second portions of the bus. Data from the first and second cache cells can be selected through respective portions of the bus and outputted through a port of the cache system.

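The splittable cache-line bus in the abstract can be modeled with a small state machine. This is an illustrative software simulation under assumed names (`CacheLineBus`, `control`, `read`); real hardware would implement the switch and portions electrically.

```python
# Illustrative sketch only: a control signal either splits the bus into
# two portions (each selecting one cache cell) or aggregates it into a
# single whole carrying one selected cell.
class CacheLineBus:
    def __init__(self, cells):
        self.cells = cells      # data held by the cache cells on this line
        self.split = False      # state set by the bus switch

    def control(self, split_signal):
        """Apply the first control signal to the switch."""
        self.split = split_signal

    def read(self):
        if self.split:
            # Split: first and second portions each select one cell.
            return (self.cells[0], self.cells[1])
        # Aggregated: the whole bus carries a single selected cell.
        return (self.cells[0],)

bus = CacheLineBus(["cell0-data", "cell1-data"])
bus.control(True)
print(bus.read())  # ('cell0-data', 'cell1-data')
```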

7. Method and data processing system for microprocessor communication in a cluster-based multi-processor system
    Invention grant (expired)

    Publication No.: US07818364B2

    Publication date: 2010-10-19

    Application No.: US11952479

    Filing date: 2007-12-07

    Abstract: A processor communication register (PCR) contained within a multiprocessor cluster system provides enhanced processor communication. The PCR stores information that is useful in pipelined or parallel multi-processing. Each processor cluster has exclusive rights to store to a sector within the PCR and has continuous access to read its contents. Each processor cluster updates its exclusive sector within the PCR, instantly allowing all of the other processors within the cluster network to see the change within the PCR data, and bypassing the cache subsystem. Efficiency is enhanced within the processor cluster network by providing processor communications to be immediately networked and transferred into all processors without momentarily restricting access to the information or forcing all the processors to be continually contending for the same cache line, and thereby overwhelming the interconnect and memory system with an endless stream of load, store and invalidate commands.

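The processor communication register (PCR) access pattern in the abstract can be sketched as below. The `PCR` class and sector-per-processor layout are illustrative assumptions; the point is that each processor has exclusive store rights to one sector while all processors can read the whole register, bypassing the cache subsystem.

```python
# Illustrative sketch only: one writable sector per processor cluster,
# with the full register readable by every processor.
class PCR:
    def __init__(self, num_processors):
        self.sectors = [0] * num_processors

    def store(self, proc_id, value):
        """Only the owning processor stores to its exclusive sector."""
        self.sectors[proc_id] = value

    def read_all(self):
        """Any processor reads the full register contents directly."""
        return tuple(self.sectors)

pcr = PCR(4)
pcr.store(2, 0xBEEF)       # processor 2 updates its sector...
print(pcr.read_all())      # ...and all processors see the change
```

Because a store touches only the owner's sector, no two processors ever contend for the same location, which mirrors the abstract's claim of avoiding cache-line contention and invalidate traffic.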

9. Multiprocessor system with retry-less TLBI protocol
    Invention grant (expired)

    Publication No.: US07617378B2

    Publication date: 2009-11-10

    Application No.: US10425402

    Filing date: 2003-04-28

    CPC classification number: G06F12/1027 G06F2212/682 G06F2212/683

    Abstract: A symmetric multiprocessor (SMP) data processing system implements a TLBI protocol that enables multiple TLBI operations from multiple processors to complete without causing delay. Each processor includes a TLBI register associated with the TLB and TLBI logic. The TLBI register includes a sequence of bits utilized to track the completion, at the other processors, of a TLBI issued by the processor. Each bit corresponds to a particular processor across the system, and the particular processor is able to directly set the bit in the register of a master processor once the particular processor completes a TLBI operation initiated from the master processor. The master processor is able to track completion of the TLBI operation by checking the values of each bit within its TLBI register, without requiring multi-issuance of an address-only barrier operation on the system bus.

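The per-processor completion bits in the master's TLBI register can be sketched with a bitmask. The `TLBIRegister` name and methods are assumptions for illustration; the mechanism shown (remote processors set their bit, the master polls its own register instead of reissuing barrier operations) follows the abstract.

```python
# Illustrative sketch only: one completion bit per processor in the
# master's TLBI register; the master's own bit is excluded from the mask.
class TLBIRegister:
    def __init__(self, num_processors, master_id):
        self.bits = 0
        # All processors except the master must report completion.
        self.expected = ((1 << num_processors) - 1) & ~(1 << master_id)

    def complete(self, proc_id):
        """Remote processor proc_id directly sets its bit when done."""
        self.bits |= 1 << proc_id

    def all_done(self):
        """Master checks its register; no retries or extra barriers."""
        return self.bits == self.expected

reg = TLBIRegister(num_processors=4, master_id=0)
for p in (1, 2, 3):
    reg.complete(p)
print(reg.all_done())  # True
```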
