Dynamic routing of I/O requests in a multi-tier storage environment
    1.
    Granted Patent
    Dynamic routing of I/O requests in a multi-tier storage environment (In Force)

    Publication No.: US07606934B1

    Publication Date: 2009-10-20

    Application No.: US11077472

    Filing Date: 2005-03-10

    IPC Class: G06F15/173 G06E1/00

    Abstract: A method for routing an incoming service request is described, wherein the service request is routed to the selected storage tier because that tier's predicted state value indicates greater utility than the predicted state value of at least one other storage tier within the storage system. A computer system comprising a multi-tier storage system is also described, the multi-tier storage system having a routing algorithm configured to adaptively tune functions that map variables describing the state of each storage tier into the average latency experienced by incoming service requests associated with that tier.

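The utility-comparison routing rule can be sketched as follows. The linear latency model, the tier fields, and the "utility = negative predicted latency" definition are illustrative assumptions, not the patent's actual predictor:

```python
def predict_latency(tier_state, request_size):
    # Toy linear model: predicted latency grows with queue depth and request size.
    return (tier_state["base_latency"]
            + 0.5 * tier_state["queue_depth"]
            + 0.01 * request_size)

def route_request(tiers, request_size):
    """Route to the tier whose predicted state has the greatest utility,
    taking utility here as negative predicted latency."""
    return max(tiers, key=lambda name: -predict_latency(tiers[name], request_size))

tiers = {
    "ssd":  {"base_latency": 0.2, "queue_depth": 8},
    "disk": {"base_latency": 5.0, "queue_depth": 1},
}
chosen = route_request(tiers, request_size=64)  # "ssd": 4.84 vs. 6.14 predicted
```

Under this toy model a deep queue on the fast tier can still beat an idle slow tier, which is exactly the kind of trade-off a tuned state-to-latency mapping captures.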

    Dynamic data migration in a multi-tier storage system
    2.
    Granted Patent
    Dynamic data migration in a multi-tier storage system (In Force)

    Publication No.: US07539709B1

    Publication Date: 2009-05-26

    Application No.: US11153058

    Filing Date: 2005-06-15

    IPC Class: G06F17/30

    Abstract: A method and apparatus for managing data is described, which includes determining the current state of a storage tier among a plurality of storage tiers within a storage system. Further, using a prediction architecture comprising at least one predetermined variable, a prediction is made of the utilities of the future expected states of at least two of the storage tiers involved in a data operation, wherein a future expected state of a corresponding storage tier is based on conditions expected to occur after the data operation completes. Finally, the data operation is performed if the predicted utility of the future expected states associated with the at least two storage tiers is more beneficial than the utility of the current state.

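A minimal sketch of the "migrate only if the predicted future utility beats the current utility" decision. The quadratic load penalty and the single `load` field are assumptions for illustration:

```python
def migrate_if_beneficial(src, dst, data_size, utility):
    """Perform the migration only if the summed predicted utility of both
    tiers' future states exceeds the utility of their current states."""
    current = utility(src["load"]) + utility(dst["load"])
    predicted = utility(src["load"] - data_size) + utility(dst["load"] + data_size)
    if predicted > current:
        src["load"] -= data_size
        dst["load"] += data_size
        return True
    return False

# Toy concave utility: heavily loaded tiers are penalized quadratically,
# so balancing load across tiers raises total utility.
utility = lambda load: -load ** 2

src = {"load": 90}   # overloaded fast tier
dst = {"load": 10}   # mostly idle capacity tier
moved = migrate_if_beneficial(src, dst, data_size=30, utility=utility)
```

With these numbers the predicted post-migration utility (-5200) beats the current utility (-8200), so the migration runs.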

    ADAPTIVE TRIGGERING OF GARBAGE COLLECTION
    3.
    Patent Application
    ADAPTIVE TRIGGERING OF GARBAGE COLLECTION (In Force)

    Publication No.: US20110107050A1

    Publication Date: 2011-05-05

    Application No.: US12612777

    Filing Date: 2009-11-05

    Applicant: David Vengerov

    Inventor: David Vengerov

    IPC Class: G06F12/02

    CPC Class: G06F12/0269

    Abstract: Methods and apparatus are provided for adaptively triggering garbage collection. During relatively steady or decreasing rates of free-memory allocation, a threshold for triggering garbage collection is dynamically and adaptively determined on the basis of memory drops (i.e., decreases in free memory) observed during garbage collection. If a significant increase in the memory allocation rate is observed (e.g., two consecutive measurements exceeding the mean rate plus two standard deviations), the threshold is modified based on a memory drop previously observed in conjunction with the current allocation rate, or on a memory drop estimated to be possible at that rate.

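The spike test cited in the abstract (two consecutive samples above the mean plus two standard deviations) can be sketched directly; the window length is an assumed parameter:

```python
import statistics

def is_allocation_spike(rates, window=10):
    """Two consecutive allocation-rate samples exceeding mean + 2 * stdev
    of the preceding window signal a significant rate increase, which
    would prompt adjusting the GC-trigger threshold."""
    if len(rates) < window + 2:
        return False
    history = rates[-(window + 2):-2]          # the window before the last two samples
    limit = statistics.mean(history) + 2 * statistics.pstdev(history)
    return rates[-1] > limit and rates[-2] > limit
```

Requiring two consecutive outliers, rather than one, keeps a single noisy measurement from needlessly lowering the trigger threshold.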

    Modeling customer behavior in a multi-choice service environment
    4.
    Patent Application
    Modeling customer behavior in a multi-choice service environment (Pending, Published)

    Publication No.: US20080133320A1

    Publication Date: 2008-06-05

    Application No.: US11607527

    Filing Date: 2006-12-01

    IPC Class: G06Q10/00 G06F17/10

    CPC Class: G06Q30/02

    Abstract: One embodiment of the present invention provides a system that models customer behavior in a multi-choice service environment. The system constructs a probability density function f to represent the probabilities of service-level choices made by customers, wherein the probability density function is a function of the functional variables uθ(d) and p(d); uθ(d) is a utility function for a specific customer type indexed by vector θ; p(d) is a given price curve which specifies the relationship between service levels offered by a service provider and the corresponding prices for those service levels; and uθ(d) and p(d) are both functions of the offered service levels d. The system then obtains a distribution function π(θ) which specifies a probability distribution over the customer types θ. Next, the system obtains a service-level choice distribution for a population of customers as a function of a given price curve, based on the probability density function f and π(θ).

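One plausible concrete instance of a choice density f built from uθ(d) and p(d) is a softmax over net benefit uθ(d) − p(d); the softmax form, the log utility, and the linear price curve are all assumptions here, not the patent's stated model:

```python
import math

def choice_probabilities(service_levels, u, p):
    """Softmax over net benefit u(d) - p(d): service levels with higher
    net benefit to the customer are chosen with higher probability."""
    weights = [math.exp(u(d) - p(d)) for d in service_levels]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical customer type indexed by a scalar theta, and a price curve p(d).
theta = 2.0
u = lambda d: theta * math.log(1 + d)   # diminishing returns in service level d
p = lambda d: 0.5 * d                   # linear price curve
levels = [1, 2, 4, 8]
probs = choice_probabilities(levels, u, p)
```

Integrating such per-type probabilities against a type distribution π(θ) would then give the population-level choice distribution described in the abstract.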

    Cache-aware thread scheduling in multi-threaded systems
    5.
    Granted Patent
    Cache-aware thread scheduling in multi-threaded systems (In Force)

    Publication No.: US08533719B2

    Publication Date: 2013-09-10

    Application No.: US12754143

    Filing Date: 2010-04-05

    IPC Class: G06F9/46 G06F15/00 G06F13/00

    CPC Class: G06F9/5033 Y02D10/22

    Abstract: The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization of a second thread to predict the performance impact that would occur if the second thread were to execute simultaneously in a second processor core that is also associated with the shared cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core.

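The co-scheduling decision can be sketched with a toy contention model; the single `footprint` metric and the overflow formula are illustrative assumptions, not the patent's actual thread characterization:

```python
def predicted_slowdown(t1, t2, cache_size):
    """Toy contention model: when the combined cache footprint of two
    threads overflows the shared cache, both slow down in proportion
    to the overflow."""
    demand = t1["footprint"] + t2["footprint"]
    overflow = max(0, demand - cache_size)
    return overflow / cache_size

def should_coschedule(t1, t2, cache_size, limit=0.25):
    # Run the second thread on the sibling core only if the predicted
    # impact is small enough to still improve overall throughput.
    return predicted_slowdown(t1, t2, cache_size) <= limit
```

Two small-footprint threads fit together without contention, while two large ones would thrash the shared cache and are kept apart.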

    DYNAMIC SCHEDULING OF APPLICATION TASKS IN A DISTRIBUTED TASK BASED SYSTEM
    6.
    Patent Application
    DYNAMIC SCHEDULING OF APPLICATION TASKS IN A DISTRIBUTED TASK BASED SYSTEM (In Force)

    Publication No.: US20090228888A1

    Publication Date: 2009-09-10

    Application No.: US12045064

    Filing Date: 2008-03-10

    IPC Class: G06F9/46

    Abstract: Disclosed herein is a system and method for dynamic scheduling of application tasks in a distributed task-based system. The system and method employ a learning mechanism that observes and predicts overall application task costs across a networked system, taking into account how the states or loads of the applications are likely to change over time. Application task costs are defined in economic terms. The system and method allow continuous optimization of application response times as perceived by application users.

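A minimal sketch of load-aware, cost-minimizing task placement. The exponential-smoothing load forecast and the node fields are assumptions standing in for the patent's learned predictor:

```python
def predicted_cost(node, task_cost, decay=0.9):
    """Predict the economic cost of the task on this node, assuming the
    node's load drifts toward its long-run average while the task runs."""
    future_load = decay * node["load"] + (1 - decay) * node["avg_load"]
    return task_cost * (1 + future_load)

def assign_task(nodes, task_cost):
    # Place the task on the node with the lowest predicted economic cost.
    return min(nodes, key=lambda name: predicted_cost(nodes[name], task_cost))

nodes = {
    "a": {"load": 0.9, "avg_load": 0.9},
    "b": {"load": 0.2, "avg_load": 0.3},
}
```

Because the decision uses the *predicted* load rather than the instantaneous one, a node that is momentarily busy but usually idle can still win the assignment.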

    Method for scheduling jobs using distributed utility-based preemption policies
    7.
    Granted Patent
    Method for scheduling jobs using distributed utility-based preemption policies (In Force)

    Publication No.: US07444316B1

    Publication Date: 2008-10-28

    Application No.: US11045561

    Filing Date: 2005-01-28

    Applicant: David Vengerov

    Inventor: David Vengerov

    IPC Class: G06N7/02

    CPC Class: G06N7/02

    Abstract: One embodiment of the present invention provides a system that assigns jobs to a system containing a number of central processing units (CPUs). During operation, the system captures its current state, which describes the available resources, the characteristics of jobs currently being processed, and the characteristics of jobs waiting to be assigned. The system then uses this state to estimate the long-term benefit of not preempting any jobs currently being processed. If the benefit of preempting one or more jobs exceeds the benefit of not preempting any, the system preempts one or more currently running jobs in favor of a new job.

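The benefit comparison at the heart of the preemption policy can be sketched as below; the revenue-per-remaining-time benefit estimate is a hypothetical stand-in for the patent's long-term benefit function:

```python
def preemption_decision(running_jobs, new_job, benefit):
    """Return the job to preempt in favor of the new job, or None if keeping
    all currently running jobs has the greater estimated benefit."""
    victim = min(running_jobs, key=benefit)   # cheapest job to displace
    if benefit(new_job) > benefit(victim):
        return victim
    return None

# Hypothetical benefit estimate: revenue per unit of remaining runtime.
benefit = lambda job: job["revenue"] / job["remaining_time"]

running = [
    {"name": "batch", "revenue": 10, "remaining_time": 100},
    {"name": "web",   "revenue": 50, "remaining_time": 10},
]
urgent = {"name": "urgent", "revenue": 40, "remaining_time": 5}
victim = preemption_decision(running, urgent, benefit)
```

Here the long-running, low-value batch job is displaced, while a new job whose benefit is below every running job's would leave the system untouched.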

    Selecting basis functions to form a regression model for cache performance
    8.
    Granted Patent
    Selecting basis functions to form a regression model for cache performance (In Force)

    Publication No.: US07346736B1

    Publication Date: 2008-03-18

    Application No.: US11243353

    Filing Date: 2005-10-03

    IPC Class: G06F12/00

    CPC Class: G06F12/0802 G06F2212/601

    Abstract: One embodiment of the present invention provides a system that selects bases to form a regression model for cache performance. During operation, the system receives empirical data for a cache rate. The system also receives derivative constraints for the cache rate. Next, the system obtains candidate bases that satisfy the derivative constraints. For each of these candidate bases, the system: (1) computes the aggregate error E incurred by using the candidate basis over the empirical data; (2) computes an instability measure I of an extrapolation fit for the candidate basis over an extrapolation region; and then (3) computes a selection criterion F for the candidate basis, wherein F is a function of E and I. Finally, the system minimizes the selection criterion F across the candidate bases to select the basis used for the regression model.

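The selection step can be sketched directly. The additive form F = E + weight·I is one possible combination of E and I (the patent only states that F is a function of both), and the candidate tuples are made-up illustrations:

```python
def select_basis(candidates, fit_error, instability, weight=1.0):
    """Minimize the selection criterion F = E + weight * I over the candidate
    bases, where E is aggregate fit error on the empirical data and I measures
    extrapolation instability outside it."""
    return min(candidates, key=lambda b: fit_error(b) + weight * instability(b))

# Hypothetical candidates: (name, fit error E, instability I).
candidates = [("poly3", 0.10, 5.0), ("log", 0.30, 0.2), ("sqrt", 0.25, 0.4)]
best = select_basis(
    candidates,
    fit_error=lambda b: b[1],
    instability=lambda b: b[2],
)
```

The cubic polynomial fits the data best but extrapolates wildly, so the criterion prefers the slightly worse-fitting but stable logarithmic basis.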

    Method and system for maximizing revenue generated from service level agreements
    9.
    Granted Patent
    Method and system for maximizing revenue generated from service level agreements (In Force)

    Publication No.: US08533026B2

    Publication Date: 2013-09-10

    Application No.: US11581939

    Filing Date: 2006-10-17

    IPC Class: G06Q10/00

    CPC Class: G06Q10/06 G06Q10/087

    Abstract: A method for maximizing revenue generated from a plurality of service level agreements (SLAs). The method includes: receiving a first subset of the SLAs for executing a first plurality of jobs, wherein each SLA in the first subset specifies a first maximum requested delay greater than an initial minimum offered delay, and wherein the price of each SLA in the first subset is defined by the maximum requested delay and a price/delay function; calculating a first expected revenue from executing the first subset; and optimizing a second subset of the SLAs by replacing the initial minimum offered delay on the initial price/delay function with a new minimum offered delay based on the expected revenue, wherein each SLA in the second subset specifies a second maximum requested delay greater than the new minimum offered delay.

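One way to make the revenue trade-off concrete: a smaller minimum offered delay admits more SLAs, while the price/delay function rewards tight delays. The reciprocal price curve and the candidate delays below are assumptions, not the patent's functions:

```python
def expected_revenue(slas, price):
    """Revenue under a price/delay function: each SLA pays the price
    attached to its maximum requested delay."""
    return sum(price(s["max_delay"]) for s in slas)

def optimize_min_offered_delay(slas, price, candidates):
    """Choose the minimum offered delay that maximizes revenue from the
    SLAs that remain admissible (requested delay above the minimum)."""
    def revenue_at(d_min):
        return expected_revenue([s for s in slas if s["max_delay"] > d_min], price)
    return max(candidates, key=revenue_at)

price = lambda d: 100.0 / d   # tighter delays command higher prices
slas = [{"max_delay": 2}, {"max_delay": 4}, {"max_delay": 8}]
best_min_delay = optimize_min_offered_delay(slas, price, candidates=[1, 3])
```

With this price curve, raising the minimum offered delay to 3 drops the most lucrative 2-unit SLA, so the smaller minimum wins.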

    Maximizing throughput for a garbage collector
    10.
    Granted Patent
    Maximizing throughput for a garbage collector (In Force)

    Publication No.: US08356061B2

    Publication Date: 2013-01-15

    Application No.: US12144100

    Filing Date: 2008-06-23

    Applicant: David Vengerov

    Inventor: David Vengerov

    IPC Class: G06F17/30

    CPC Class: G06F12/0253 G06F12/0276

    Abstract: Some embodiments of the present invention provide a system that executes a garbage collector in a computing system. During operation, the system obtains a throughput model for the garbage collector and estimates a set of characteristics associated with the garbage collector. Next, the system applies the characteristics to the throughput model to estimate the throughput of the garbage collector. The system then determines a level of performance for the garbage collector based on the estimated throughput. Finally, the system adjusts a tunable parameter of the garbage collector based on the level of performance to increase its throughput.

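The model-then-tune loop can be sketched with heap size as the tunable parameter. The throughput model below (mutator fraction under a fixed per-collection pause) is a deliberately simple assumption, not the patent's model:

```python
def gc_throughput(heap_mb, alloc_rate, pause_cost):
    """Toy throughput model: fraction of time spent in the application
    rather than in garbage collection. A larger heap means fewer
    collections per second at the same per-collection pause cost."""
    collections_per_sec = alloc_rate / heap_mb
    return max(0.0, 1.0 - collections_per_sec * pause_cost)

def tune_heap(heap_mb, alloc_rate, pause_cost, step=1.1, target=0.95):
    """Grow the heap (the tunable parameter) until the estimated throughput
    reaches the target level of performance."""
    while gc_throughput(heap_mb, alloc_rate, pause_cost) < target and heap_mb < 1e6:
        heap_mb *= step
    return heap_mb
```

Starting from an undersized 10 MB heap, the loop grows the heap until the modeled GC overhead falls below 5% of total time.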