LANGUAGE INTEROPERABLE RUNTIME ADAPTABLE DATA COLLECTIONS

    Publication Number: US20210042323A1

    Publication Date: 2021-02-11

    Application Number: US17067479

    Application Date: 2020-10-09

    Abstract: Adaptive data collections may include various types of data arrays, sets, bags, maps, and other data structures. A simple interface for each adaptive collection may provide access via a unified API to adaptive implementations of the collection. A single adaptive data collection may include multiple, different adaptive implementations. A system configured to implement adaptive data collections may include the ability to adaptively select between various implementations, either manually or automatically, and to map a given workload to differing hardware configurations. Additionally, hardware resource needs of different configurations may be predicted from a small number of workload measurements. Adaptive data collections may provide language interoperability, such as by leveraging runtime compilation to build adaptive data collections and to compile and optimize implementation code and user code together. Adaptive data collections may also be language-independent, such that implementation code may be written once and subsequently used from multiple programming languages.
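
    The abstract above describes a mechanism rather than a concrete API. Below is a minimal C++ sketch of the core idea of one collection interface fronting several interchangeable implementations and switching between them based on observed workload; the class names (AdaptiveIntSet, HashImpl, SortedVecImpl) and the read/write-ratio heuristic are illustrative assumptions, not the patented design.

```cpp
// Illustrative sketch only: one "adaptive" set that fronts two interchangeable
// implementations behind a single interface and migrates between them based on
// an assumed read/write-ratio heuristic.
#include <algorithm>
#include <cstdint>
#include <memory>
#include <unordered_set>
#include <vector>

struct IntSetImpl {                        // unified internal interface
    virtual ~IntSetImpl() = default;
    virtual void add(int64_t v) = 0;
    virtual bool contains(int64_t v) const = 0;
    virtual std::vector<int64_t> snapshot() const = 0;
};

struct HashImpl : IntSetImpl {             // suited to write-heavy workloads
    std::unordered_set<int64_t> s;
    void add(int64_t v) override { s.insert(v); }
    bool contains(int64_t v) const override { return s.count(v) != 0; }
    std::vector<int64_t> snapshot() const override { return {s.begin(), s.end()}; }
};

struct SortedVecImpl : IntSetImpl {        // compact, cache-friendly for read-heavy workloads
    std::vector<int64_t> v;
    void add(int64_t x) override { v.insert(std::lower_bound(v.begin(), v.end(), x), x); }
    bool contains(int64_t x) const override { return std::binary_search(v.begin(), v.end(), x); }
    std::vector<int64_t> snapshot() const override { return v; }
};

class AdaptiveIntSet {                     // the single interface user code sees
    std::unique_ptr<IntSetImpl> impl = std::make_unique<HashImpl>();
    uint64_t reads = 0, writes = 0;
    bool usingHash = true;

    void maybeSwitch() {                   // assumed heuristic: migrate once reads dominate writes
        if (usingHash && reads > 16 * (writes + 1)) {
            auto next = std::make_unique<SortedVecImpl>();
            for (int64_t x : impl->snapshot()) next->add(x);
            impl = std::move(next);
            usingHash = false;
        }
    }

public:
    void add(int64_t v)      { ++writes; impl->add(v); }
    bool contains(int64_t v) { ++reads; maybeSwitch(); return impl->contains(v); }
};

int main() {
    AdaptiveIntSet set;
    for (int64_t i = 0; i < 100; ++i) set.add(i);
    int found = 0;
    for (int64_t i = 0; i < 10000; ++i)    // read-heavy phase triggers the migration
        found += set.contains(i % 200) ? 1 : 0;
    return found > 0 ? 0 : 1;
}
```

    The patented approach additionally maps workloads to differing hardware configurations and predicts resource needs from a small number of measurements, which this sketch does not attempt.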

    Language interoperable runtime adaptable data collections

    Publication Number: US10803087B2

    Publication Date: 2020-10-13

    Application Number: US16165593

    Application Date: 2018-10-19

    Abstract: Adaptive data collections may include various types of data arrays, sets, bags, maps, and other data structures. A simple interface for each adaptive collection may provide access via a unified API to adaptive implementations of the collection. A single adaptive data collection may include multiple, different adaptive implementations. A system configured to implement adaptive data collections may include the ability to adaptively select between various implementations, either manually or automatically, and to map a given workload to differing hardware configurations. Additionally, hardware resource needs of different configurations may be predicted from a small number of workload measurements. Adaptive data collections may provide language interoperability, such as by leveraging runtime compilation to build adaptive data collections and to compile and optimize implementation code and user code together. Adaptive data collections may also be language-independent, such that implementation code may be written once and subsequently used from multiple programming languages.

    COORDINATED GARBAGE COLLECTION IN DISTRIBUTED SYSTEMS

    Publication Number: US20200257573A1

    Publication Date: 2020-08-13

    Application Number: US16864042

    Application Date: 2020-04-30

    Abstract: Fast modern interconnects may be exploited to control when garbage collection is performed on the nodes (e.g., virtual machines, such as JVMs) of a distributed system in which the individual processes communicate with each other and in which the heap memory is not shared. A garbage collection coordination mechanism (a coordinator implemented by a dedicated process on a single node or distributed across the nodes) may obtain or receive state information from each of the nodes and, dependent on that information, apply one of multiple supported garbage collection coordination policies to reduce the impact of garbage collection pauses. For example, if the information indicates that a node is about to collect, the coordinator may trigger a collection on all of the other nodes (e.g., synchronizing collection pauses for batch-mode applications where throughput is important) or may steer requests to other nodes (e.g., for interactive applications where request latencies are important).
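
    As a rough illustration of the two coordination policies named in the abstract (synchronizing collection pauses versus steering requests away from a node that is about to collect), the following C++ sketch shows a coordinator deciding per-node actions from reported heap state; the NodeState fields, the 90% threshold, and all names are assumptions, and communication with the nodes over the interconnect is abstracted away.

```cpp
// Illustrative sketch of the two coordination policies described above: when a
// node reports it is close to a collection, either trigger GC on every node
// (synchronize pauses) or stop routing requests to that node (steer requests).
#include <cstddef>
#include <iostream>
#include <vector>

struct NodeState {                // state reported by each node (e.g., a JVM)
    std::size_t heapUsedMB;
    std::size_t heapLimitMB;
    bool aboutToCollect() const { return heapUsedMB * 10 >= heapLimitMB * 9; }  // assumed 90% threshold
};

enum class Policy { SynchronizePauses, SteerRequests };

class GcCoordinator {
    Policy policy;
public:
    explicit GcCoordinator(Policy p) : policy(p) {}

    // Called with the latest state gathered from all nodes; returns, per node,
    // "act now": trigger GC (SynchronizePauses) or drain requests (SteerRequests).
    std::vector<bool> decide(const std::vector<NodeState>& nodes) const {
        bool anyClose = false;
        for (const auto& n : nodes) anyClose = anyClose || n.aboutToCollect();

        std::vector<bool> action(nodes.size(), false);
        if (!anyClose) return action;
        for (std::size_t i = 0; i < nodes.size(); ++i) {
            if (policy == Policy::SynchronizePauses)
                action[i] = true;                       // collect everywhere together
            else
                action[i] = nodes[i].aboutToCollect();  // route work away from these nodes
        }
        return action;
    }
};

int main() {
    GcCoordinator coord(Policy::SteerRequests);
    std::vector<NodeState> nodes = {{900, 1000}, {200, 1000}, {300, 1000}};
    auto drain = coord.decide(nodes);
    for (std::size_t i = 0; i < drain.size(); ++i)
        std::cout << "node " << i << (drain[i] ? ": drain/steer away\n" : ": keep routing\n");
}
```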

    Persistent multi-word compare-and-swap

    Publication Number: US10678587B2

    Publication Date: 2020-06-09

    Application Number: US16275175

    Application Date: 2019-02-13

    Abstract: A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, the operation is provided with a list of persistent memory locations of words to be updated, the respective expected current values contained in those locations, and the respective new values to write to them. The operation atomically compares the existing contents of the persistent memory locations to the respective expected values and, should they all match, updates the persistent memory locations with the new values and returns a successful status. Should any of the contents of the persistent memory locations not match its respective expected value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
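
    The following C++ sketch illustrates only the interface and semantics described above (compare every listed word with its expected value and, only if all match, install all new values). Atomicity here comes from a global lock, and the persist() placeholder stands in for cache-line write-back and fencing, so this is not the lock-free, failure-atomic algorithm of the patent; all names are assumptions.

```cpp
// Illustrative sketch of the multi-word compare-and-swap *interface and
// semantics* only: compare every location with its expected value and, only if
// all match, store all new values. A global lock provides atomicity here; the
// lock-free, crash-recoverable algorithm is not shown.
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

struct McasEntry {
    uint64_t* addr;       // word in (persistent) memory
    uint64_t  expected;   // value the word must currently hold
    uint64_t  desired;    // value to install if all comparisons succeed
};

static std::mutex g_mcas_lock;            // stand-in for the real synchronization

static void persist(const void* /*addr*/, std::size_t /*len*/) {
    // Placeholder: on real hardware this would write back the cache lines
    // (e.g., CLWB) and fence so the stores reach persistent memory.
}

bool persistent_mcas(std::vector<McasEntry>& entries) {
    std::lock_guard<std::mutex> guard(g_mcas_lock);
    for (const auto& e : entries)
        if (*e.addr != e.expected) return false;      // any mismatch: fail, change nothing
    for (const auto& e : entries) {
        *e.addr = e.desired;                          // all matched: install new values
        persist(e.addr, sizeof(uint64_t));
    }
    return true;
}

int main() {
    uint64_t a = 1, b = 2;
    std::vector<McasEntry> op = {{&a, 1, 10}, {&b, 2, 20}};
    bool ok = persistent_mcas(op);                    // succeeds because a==1 and b==2
    return ok && a == 10 && b == 20 ? 0 : 1;
}
```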

    LANGUAGE INTEROPERABLE RUNTIME ADAPTABLE DATA COLLECTIONS

    Publication Number: US20200125668A1

    Publication Date: 2020-04-23

    Application Number: US16165593

    Application Date: 2018-10-19

    Abstract: Adaptive data collections may include various types of data arrays, sets, bags, maps, and other data structures. A simple interface for each adaptive collection may provide access via a unified API to adaptive implementations of the collection. A single adaptive data collection may include multiple, different adaptive implementations. A system configured to implement adaptive data collections may include the ability to adaptively select between various implementations, either manually or automatically, and to map a given workload to differing hardware configurations. Additionally, hardware resource needs of different configurations may be predicted from a small number of workload measurements. Adaptive data collections may provide language interoperability, such as by leveraging runtime compilation to build adaptive data collections and to compile and optimize implementation code and user code together. Adaptive data collections may also be language-independent, such that implementation code may be written once and subsequently used from multiple programming languages.

    Permuted Memory Access Mapping
    Invention Application

    Publication Number: US20180307617A1

    Publication Date: 2018-10-25

    Application Number: US15493035

    Application Date: 2017-04-20

    Abstract: When performing non-sequential accesses to large data sets, hot spots may be avoided by permuting the memory locations being accessed to more evenly spread those accesses across the memory and across multiple memory channels. A permutation step may be used when accessing data, such as to improve the distribution of those memory accesses within the system. Instead of accessing one memory address, that address may be permuted so that another memory address is accessed. Non-sequential accesses to an array may be modified such that each index to the array is permuted to another index in the array. Collisions between pre- and post-translation addresses may be prevented and one-to-one mappings may be used. Permutation mechanisms may be implemented in software, hardware, or a combination of both, with or without the knowledge of the process performing the memory accesses.
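
    A minimal C++ sketch of one such one-to-one index permutation: multiplying the index by an odd constant modulo a power-of-two table size is a bijection, so pre- and post-translation indices never collide. The particular constant and table size are arbitrary assumptions, not values taken from the patent.

```cpp
// Illustrative sketch: instead of accessing a[i] directly, access a[permute(i)],
// where permute() is a one-to-one mapping over the index space that spreads
// non-sequential accesses across the memory.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

constexpr std::size_t kSize = 1u << 20;            // table size: must be a power of two here
constexpr std::size_t kOddMultiplier = 0x9E3779B1; // arbitrary odd constant (assumption)

inline std::size_t permute(std::size_t i) {
    return (i * kOddMultiplier) & (kSize - 1);     // bijective for an odd multiplier and power-of-two size
}

int main() {
    std::vector<uint32_t> a(kSize, 0);

    // A strided access pattern that would otherwise concentrate on a few memory
    // regions/channels is spread out by indexing through permute().
    for (std::size_t i = 0; i < kSize; i += 4096)
        a[permute(i)] += 1;

    std::cout << "touched " << kSize / 4096 << " permuted slots\n";
}
```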

    Dynamic co-scheduling of hardware contexts for parallel runtime systems on shared machines
    Invention Grant (in force)

    Publication Number: US09542221B2

    Publication Date: 2017-01-10

    Application Number: US14285513

    Application Date: 2014-05-22

    Abstract: Multi-core computers may implement a resource management layer between the operating system and resource-management-enabled parallel runtime systems. The resource management components and runtime systems may collectively implement dynamic co-scheduling of hardware contexts when executing multiple parallel applications, using a spatial scheduling policy that grants high priority to one application per hardware context and a temporal scheduling policy for re-allocating unused hardware contexts. The runtime systems may receive resources on a varying number of hardware contexts as demands of the applications change over time, and the resource management components may co-ordinate to leave one runnable software thread for each hardware context. Periodic check-in operations may be used to determine (at times convenient to the applications) when hardware contexts should be re-allocated. Over-subscription of worker threads may reduce load imbalances between applications. A co-ordination table may store per-hardware-context information about resource demands and allocations.
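
    The following C++ sketch illustrates the co-ordination table and periodic check-in described above: each hardware context records which application holds it under the spatial (high-priority) policy, and at check-in a worker borrowing the context yields it once the owner demands it again. The structure fields, the check_in() routine, and the context count are assumptions, not the patented resource-management interface.

```cpp
// Illustrative sketch of a per-hardware-context co-ordination table and a
// periodic check-in performed at points convenient to the application.
#include <array>
#include <cstddef>

constexpr std::size_t kContexts = 8;      // number of hardware contexts (assumed)

struct ContextSlot {
    int  highPriorityApp = -1;            // application granted this context (spatial policy)
    int  currentApp      = -1;            // application actually running on it
    bool ownerDemandsIt  = false;         // does the high-priority app want it back?
};

using CoordinationTable = std::array<ContextSlot, kContexts>;

// Called periodically by a worker thread of `app` running on context `ctx`.
// Returns true if the worker should keep the context, false if it should
// yield it back (temporal policy re-allocating an unused/borrowed context).
bool check_in(CoordinationTable& table, std::size_t ctx, int app) {
    ContextSlot& slot = table[ctx];
    if (slot.highPriorityApp == app) return true;   // spatial policy: the owner always keeps it
    if (slot.ownerDemandsIt) {                      // owner wants its context back
        slot.currentApp = slot.highPriorityApp;
        return false;
    }
    return true;                                    // owner idle: keep using the spare context
}

int main() {
    CoordinationTable table{};
    table[3].highPriorityApp = 1;
    table[3].currentApp      = 2;
    table[3].ownerDemandsIt  = true;
    return check_in(table, 3, /*app=*/2) ? 1 : 0;   // app 2 must yield context 3, so this returns 0
}
```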

    Systems and Methods for Safely Subscribing to Locks Using Hardware Extensions
    Invention Application (under examination, published)

    Publication Number: US20160011915A1

    Publication Date: 2016-01-14

    Application Number: US14736123

    Application Date: 2015-06-10

    Abstract: Transactional Lock Elision allows hardware transactions to execute unmodified critical sections protected by the same lock concurrently, by subscribing to the lock and verifying that it is available before committing the transaction. A “lazy subscription” optimization, which delays lock subscription, can potentially cause behavior that cannot occur when the critical sections are executed under the lock. Hardware extensions may provide mechanisms to ensure that lazy subscriptions are safe (e.g., that they result in correct behavior). Prior to executing a critical section transactionally, its lock and subscription code may be identified (e.g., by writing their locations to special registers). Prior to committing the transaction, the thread executing the critical section may verify that the correct lock was correctly subscribed to. If not, or if locations identified by the special registers have been modified, the transaction may be aborted. Nested critical sections associated with different lock types may invoke different subscription code.
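
    As a rough C++ illustration of lock subscription in Transactional Lock Elision, the sketch below uses x86 RTM intrinsics with *eager* subscription (the transaction reads the lock word and aborts if it is held). The lazy subscription and the special-register checks described in the abstract are hardware extensions and are only indicated in comments. It assumes a TSX-capable CPU and compilation with -mrtm, and the lock and fallback names are assumptions.

```cpp
// Illustrative sketch of transactional lock elision with eager lock
// subscription: the transaction reads the lock word, so a concurrent lock
// acquisition conflicts with the transaction and aborts it.
#include <immintrin.h>
#include <atomic>
#include <mutex>

static std::atomic<int> lock_word{0};    // 0 = free, 1 = held (simple test lock)
static std::mutex fallback_lock;         // software fallback path

template <typename CriticalSection>
void tle_execute(CriticalSection cs) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // Eager subscription: put the lock in the transaction's read set now.
        // (With lazy subscription, this read would be deferred until just
        // before _xend(), and the patent's hardware extensions would verify
        // the lock location recorded in special registers before commit.)
        if (lock_word.load() != 0)
            _xabort(0xff);               // lock held: do not elide
        cs();                            // run the unmodified critical section
        _xend();                         // commit; concurrent lockers would have aborted us
        return;
    }
    // Fallback: actually take the lock that transactions subscribe to.
    std::lock_guard<std::mutex> g(fallback_lock);
    lock_word.store(1);
    cs();
    lock_word.store(0);
}

int main() {
    int counter = 0;
    tle_execute([&] { ++counter; });     // runs elided or under the lock
    return counter == 1 ? 0 : 1;
}
```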
