1. CONTINUOUS OPERATION DURING RECONFIGURATION PERIODS
    Invention Application (In Force)

    Publication No.: US20140074996A1

    Publication Date: 2014-03-13

    Application No.: US13597273

    Filing Date: 2012-08-29

    IPC Class: H04L12/24

    Abstract: A method for continuously updating a set of replicas. The method comprises storing a plurality of replicas of data in a current configuration of members from a plurality of nodes, receiving a reconfiguration command by a member of said current configuration, selecting at least one estimated configuration from said plurality of nodes, receiving at least one write command by a member of said current configuration, disseminating said at least one write command to each member of said at least one estimated configuration, and validating said at least one estimated configuration. At least one estimated configuration processes at least one of the write commands before the validating is completed.
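
    To make the mechanism concrete, the sketch below (Python, illustrative only; the Replica/ReplicaSet names and the single-coordinator structure are assumptions, not details from the patent) shows writes being disseminated to the current members and to every member of a not-yet-validated estimated configuration, so service continues during the reconfiguration window.

        class Replica:
            """One node holding a copy of the data."""
            def __init__(self, name):
                self.name = name
                self.store = {}

            def apply(self, key, value):
                self.store[key] = value


        class ReplicaSet:
            """Tracks the current configuration and, while reconfiguring,
            an estimated (not yet validated) configuration."""
            def __init__(self, members):
                self.current = list(members)
                self.estimated = None

            def reconfigure(self, candidate_members):
                # Select an estimated configuration; validation happens later.
                self.estimated = list(candidate_members)

            def write(self, key, value):
                # Writes keep flowing during reconfiguration: they reach the
                # current members and every member of the estimated
                # configuration, even before validation completes.
                targets = list(self.current)
                if self.estimated is not None:
                    targets += [m for m in self.estimated if m not in targets]
                for member in targets:
                    member.apply(key, value)

            def validate(self):
                # The estimated configuration becomes current, already holding
                # every write issued during the transition.
                if self.estimated is not None:
                    self.current, self.estimated = self.estimated, None


        a, b, c, d = (Replica(n) for n in "abcd")
        rs = ReplicaSet([a, b, c])
        rs.write("x", 1)            # normal operation
        rs.reconfigure([b, c, d])   # reconfiguration begins
        rs.write("y", 2)            # d processes the write before validation
        rs.validate()
        print(d.store)              # {'y': 2}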

2. Continuous operation during reconfiguration periods
    Granted Patent (In Force)

    Publication No.: US08943178B2

    Publication Date: 2015-01-27

    Application No.: US13597273

    Filing Date: 2012-08-29

    IPC Class: G06F15/177 H04L12/24

    Abstract: A method for continuously updating a set of replicas. The method comprises storing a plurality of replicas of data in a current configuration of members from a plurality of nodes, receiving a reconfiguration command by a member of said current configuration, selecting at least one estimated configuration from said plurality of nodes, receiving at least one write command by a member of said current configuration, disseminating said at least one write command to each member of said at least one estimated configuration, and validating said at least one estimated configuration. At least one estimated configuration processes at least one of the write commands before the validating is completed.

3. PROPAGATING CHANGES IN TOPIC SUBSCRIPTION STATUS OF PROCESSES IN AN OVERLAY NETWORK
    Invention Application (Expired)

    Publication No.: US20120016979A1

    Publication Date: 2012-01-19

    Application No.: US12836591

    Filing Date: 2010-07-15

    IPC Class: G06F15/173

    CPC Class: G06F9/542

    Abstract: A method of updating statuses of processes in a network is provided. The method may include the following steps: connecting N processes on a K-connected overlay network of nodes which is in operative association with a computer network; determining an update of subscription and un-subscription statuses of at least some of the processes; generating update messages reflecting the subscriptions and the un-subscriptions, the update messages being the differences between the previous update and the current update; and propagating the update messages through the K-connected graph, such that at least some of the processes transfer the update to their respective K neighboring nodes, wherein at least one of the connecting, the subscribing, the unsubscribing, the generating, and the propagating is executed by at least one processor.
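
    A minimal sketch of the diff-and-flood pattern this abstract describes (illustrative only; the Process class, the sequence numbering, and the ring topology are assumptions, not the claimed method): each process sends only the difference between its previous and current subscription sets, and each recipient forwards an unseen update to its K overlay neighbours.

        class Process:
            def __init__(self, pid):
                self.pid = pid
                self.neighbors = []      # the K overlay neighbours
                self.topics = set()      # local subscriptions
                self.last_sent = set()   # snapshot used to compute diffs
                self.view = {}           # pid -> topics, as known locally
                self.seen = set()        # (origin, seq) already forwarded
                self.seq = 0

            def publish_update(self):
                # The update message carries only the difference between the
                # previous and the current subscription set, not the full set.
                added = frozenset(self.topics - self.last_sent)
                removed = frozenset(self.last_sent - self.topics)
                self.last_sent = set(self.topics)
                if added or removed:
                    self.seq += 1
                    self.receive((self.pid, self.seq, added, removed))

            def receive(self, msg):
                origin, seq, added, removed = msg
                if (origin, seq) in self.seen:
                    return               # already propagated; stop the flood
                self.seen.add((origin, seq))
                known = self.view.setdefault(origin, set())
                known |= added
                known -= removed
                for n in self.neighbors:
                    n.receive(msg)       # spread over the K-connected overlay


        procs = [Process(i) for i in range(4)]
        for i, p in enumerate(procs):    # a ring: every node has K = 2 neighbours
            p.neighbors = [procs[(i - 1) % 4], procs[(i + 1) % 4]]
        procs[0].topics.add("alerts")
        procs[0].publish_update()
        print(procs[2].view)             # {0: {'alerts'}}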

4. Propagating changes in topic subscription status of processes in an overlay network
    Granted Patent (Expired)

    Publication No.: US08661080B2

    Publication Date: 2014-02-25

    Application No.: US12836591

    Filing Date: 2010-07-15

    IPC Class: G06F15/16 G06F15/173

    CPC Class: G06F9/542

    Abstract: A method of updating statuses of processes in a network is provided. The method may include the following steps: connecting N processes on a K-connected overlay network of nodes which is in operative association with a computer network; determining an update of subscription and un-subscription statuses of at least some of the processes; generating update messages reflecting the subscriptions and the un-subscriptions, the update messages being the differences between the previous update and the current update; and propagating the update messages through the K-connected graph, such that at least some of the processes transfer the update to their respective K neighboring nodes, wherein at least one of the connecting, the subscribing, the unsubscribing, the generating, and the propagating is executed by at least one processor.

5. CONSTRUCTING SCALABLE OVERLAYS FOR PUB-SUB WITH MANY TOPICS: THE GREEDY JOIN-LEAVE ALGORITHM
    Invention Application (Pending, Published)

    Publication No.: US20100027442A1

    Publication Date: 2010-02-04

    Application No.: US12183319

    Filing Date: 2008-07-31

    IPC Class: H04L12/28

    Abstract: A method and system for constructing a single topic-connected overlay network are disclosed. A link contribution array, which stores sets of edges in an order according to contribution values, is provided. A highest contribution index indicates the highest element in the link contribution array. The method includes performing, at every iteration, a Greedy Merge (GM) algorithm for selecting an edge from the highest element in the link contribution array, removing the selected edge from the link contribution array, and adding the selected edge to a set of overlay edges. After the selected edge is added to the set of overlay edges, the contribution values of other edges are updated. The GM algorithm terminates when all elements in the link contribution array become empty. As an output, the GM algorithm generates a single topic-connected overlay network for all topics. Greedy Join (GJ) and Greedy Leave (GL) functions are also disclosed.
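
    The greedy selection can be sketched as follows (a simplification for illustration: instead of a bucketed link contribution array with a highest-contribution index, each edge's contribution is recomputed on demand; the union-find helper and all names are assumptions). An edge's contribution is the number of topics whose subscriber sub-graph it would help connect, and the loop keeps adding the highest-contribution edge until no remaining edge contributes.

        from itertools import combinations


        class DSU:
            """Tiny union-find; one instance per topic tracks that topic's components."""
            def __init__(self):
                self.parent = {}

            def find(self, x):
                self.parent.setdefault(x, x)
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]
                    x = self.parent[x]
                return x

            def union(self, a, b):
                self.parent[self.find(a)] = self.find(b)


        def contribution(edge, interests, components):
            # Number of topics for which this edge joins two separate components.
            u, v = edge
            return sum(1 for t in interests[u] & interests[v]
                       if components[t].find(u) != components[t].find(v))


        def greedy_merge(interests):
            """interests: node -> set of subscribed topics. Returns overlay edges
            that leave every topic's subscriber sub-graph connected."""
            topics = set().union(*interests.values())
            components = {t: DSU() for t in topics}
            candidates = set(combinations(sorted(interests), 2))
            overlay = []
            while candidates:
                best = max(candidates,
                           key=lambda e: contribution(e, interests, components))
                if contribution(best, interests, components) == 0:
                    break                 # every topic sub-graph is connected
                overlay.append(best)
                candidates.discard(best)
                u, v = best
                for t in interests[u] & interests[v]:
                    components[t].union(u, v)
            return overlay


        subs = {"a": {"news", "sports"}, "b": {"news"},
                "c": {"sports"}, "d": {"news", "sports"}}
        print(greedy_merge(subs))         # 3 edges; both topics end up connected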

6. Allocation enforcement in a multi-tenant cache mechanism
    Granted Patent (In Force)

    Publication No.: US09235443B2

    Publication Date: 2016-01-12

    Application No.: US13476016

    Filing Date: 2012-05-21

    Abstract: Systems and methods for cache optimization are provided. The method comprises monitoring the cache access rate for a plurality of cache tenants sharing the same cache mechanism having an amount of data storage space, wherein a first cache tenant having a first cache size is allocated a first cache space within the data storage space, and wherein a second cache tenant having a second cache size is allocated a second cache space within the data storage space. The method further comprises determining cache profiles for at least the first cache tenant and the second cache tenant according to data collected during the monitoring; analyzing the cache profiles for the plurality of cache tenants to determine an expected cache usage model for the cache mechanism; and analyzing the cache usage model and factors related to cache efficiency or performance for the one or more cache tenants to dictate one or more occupancy constraints.
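
    A compact sketch of the monitoring-to-constraints pipeline described above (illustrative only; the SharedCacheMonitor class, the proportional-share policy, and the min/max clamps are assumptions rather than the claimed mechanism):

        from collections import defaultdict


        class SharedCacheMonitor:
            def __init__(self, total_capacity, min_share=0.1, max_share=0.6):
                self.total_capacity = total_capacity
                self.min_share = min_share
                self.max_share = max_share
                self.hits = defaultdict(int)
                self.misses = defaultdict(int)

            def record(self, tenant, hit):
                # Monitoring: count per-tenant accesses to the shared cache.
                counter = self.hits if hit else self.misses
                counter[tenant] += 1

            def profile(self, tenant):
                # Cache profile for one tenant, built from the monitored data.
                accesses = self.hits[tenant] + self.misses[tenant]
                hit_rate = self.hits[tenant] / accesses if accesses else 0.0
                return {"accesses": accesses, "hit_rate": hit_rate}

            def occupancy_constraints(self):
                # Usage model: each tenant's fraction of all observed accesses.
                # Constraint: occupancy proportional to that fraction, clamped.
                tenants = set(self.hits) | set(self.misses)
                total = sum(self.profile(t)["accesses"] for t in tenants) or 1
                constraints = {}
                for t in tenants:
                    share = self.profile(t)["accesses"] / total
                    share = min(max(share, self.min_share), self.max_share)
                    constraints[t] = int(share * self.total_capacity)
                return constraints


        mon = SharedCacheMonitor(total_capacity=1000)
        for _ in range(300):
            mon.record("tenant_a", hit=True)
        for _ in range(100):
            mon.record("tenant_b", hit=False)
        print(mon.occupancy_constraints())   # tenant_a is granted the larger slice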

7. Backoff protocols and methods for distributed mutual exclusion and ordering
    Granted Patent (In Force)

    Publication No.: US07155524B1

    Publication Date: 2006-12-26

    Application No.: US10005508

    Filing Date: 2001-12-04

    CPC Class: G06F9/526

    Abstract: A system for and method of implementing a backoff protocol and a computer network incorporating the system or the method. In one embodiment, the system includes: (1) a client subsystem that generates a request for access to a shared resource and (2) a server subsystem that receives the request, returns a LOCKED indicator upon an expectation that the shared resource is unavailable and otherwise returns a FREE indicator, the client subsystem responding to the LOCKED indicator by waiting before regenerating the request for the access.
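
    The client/server exchange reads almost directly as code; in the minimal sketch below, the exponential backoff with jitter and all names are illustrative assumptions, not details taken from the patent:

        import random
        import time

        LOCKED, FREE = "LOCKED", "FREE"


        class LockServer:
            """Server subsystem: grants the shared resource to one holder at a time."""
            def __init__(self):
                self.holder = None

            def request(self, client_id):
                if self.holder is None:
                    self.holder = client_id
                    return FREE          # resource granted
                return LOCKED            # expected to be unavailable

            def release(self, client_id):
                if self.holder == client_id:
                    self.holder = None


        def acquire(server, client_id, base_delay=0.01, max_attempts=10):
            """Client subsystem: on LOCKED, wait before regenerating the request."""
            delay = base_delay
            for _ in range(max_attempts):
                if server.request(client_id) == FREE:
                    return True
                time.sleep(delay + random.uniform(0, delay))   # back off
                delay *= 2
            return False


        srv = LockServer()
        print(acquire(srv, "c1"))                    # True: granted immediately
        print(acquire(srv, "c2", max_attempts=3))    # False: backs off, then gives up
        srv.release("c1")
        print(acquire(srv, "c2"))                    # True: granted after release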

8. Cache Optimization Via Predictive Cache Size Modification
    Invention Application (In Force)

    Publication No.: US20130138889A1

    Publication Date: 2013-05-30

    Application No.: US13306996

    Filing Date: 2011-11-30

    IPC Class: G06F12/08

    Abstract: Systems and methods for cache optimization, the method comprising monitoring cache access rate for one or more cache tenants in a computing environment, wherein a first cache tenant is allocated a first cache having a first cache size which may be adjusted; determining a cache profile for at least the first cache over one or more time intervals according to data collected during the monitoring; analyzing the cache profile for the first cache to determine an expected cache usage model for the first cache; and analyzing the cache usage model and factors related to cache efficiency for the one or more cache tenants to dictate one or more constraints that define boundaries for the first cache size.
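
    A toy sketch of the profile, usage model, and size-boundary steps in this abstract (the interval sampling, the moving-average model, and the linear bytes-per-access factor are illustrative assumptions, not the claimed method):

        def interval_profile(access_counts, interval_seconds):
            """Per-interval access rates gathered while monitoring the tenant."""
            return [count / interval_seconds for count in access_counts]


        def expected_usage(profile, window=3):
            """A simple usage model: the average access rate over recent intervals."""
            recent = profile[-window:] if profile else [0.0]
            return sum(recent) / len(recent)


        def size_boundaries(expected_rate, bytes_per_access, slack=0.25):
            """Constraints bounding the adjustable cache size around expected need."""
            target = expected_rate * bytes_per_access
            return int(target * (1 - slack)), int(target * (1 + slack))


        # Accesses observed in six consecutive 60-second monitoring intervals.
        counts = [1200, 1500, 1800, 2400, 3000, 3600]
        rate = expected_usage(interval_profile(counts, interval_seconds=60))
        low, high = size_boundaries(rate, bytes_per_access=4096)
        print(f"expected {rate:.0f} accesses/s; size bounds [{low}, {high}] bytes")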

9. Cache optimization via predictive cache size modification
    Granted Patent (In Force)

    Publication No.: US08850122B2

    Publication Date: 2014-09-30

    Application No.: US13306996

    Filing Date: 2011-11-30

    IPC Class: G06F12/08

    Abstract: Systems and methods for cache optimization, the method comprising monitoring cache access rate for one or more cache tenants in a computing environment, wherein a first cache tenant is allocated a first cache having a first cache size which may be adjusted; determining a cache profile for at least the first cache over one or more time intervals according to data collected during the monitoring; analyzing the cache profile for the first cache to determine an expected cache usage model for the first cache; and analyzing the cache usage model and factors related to cache efficiency for the one or more cache tenants to dictate one or more constraints that define boundaries for the first cache size.

10. ALLOCATION ENFORCEMENT IN A MULTI-TENANT CACHE MECHANISM
    Invention Application (In Force)

    Publication No.: US20130138891A1

    Publication Date: 2013-05-30

    Application No.: US13476016

    Filing Date: 2012-05-21

    IPC Class: G06F12/08

    Abstract: Systems and methods for cache optimization are provided. The method comprises monitoring the cache access rate for a plurality of cache tenants sharing the same cache mechanism having an amount of data storage space, wherein a first cache tenant having a first cache size is allocated a first cache space within the data storage space, and wherein a second cache tenant having a second cache size is allocated a second cache space within the data storage space. The method further comprises determining cache profiles for at least the first cache tenant and the second cache tenant according to data collected during the monitoring; analyzing the cache profiles for the plurality of cache tenants to determine an expected cache usage model for the cache mechanism; and analyzing the cache usage model and factors related to cache efficiency or performance for the one or more cache tenants to dictate one or more occupancy constraints.
