Locally made, globally coordinated resource allocation decisions based on information provided by the second-price auction model
    1.
    Granted Invention Patent (Expired)

    Publication No.: US06587865B1

    Publication Date: 2003-07-01

    Application No.: US09157479

    Filing Date: 1998-09-21

    IPC Class: G06F 9/00

    CPC Class: G06F9/4881 G06F9/50

    Abstract: In a computer system, a method and apparatus for scheduling activities' access to a resource with minimal involvement of the kernel of the operating system. More specifically, a “next bid” is maintained, and this parameter identifies the highest bid for the resource by any activity not currently accessing the resource. The accessing activity then compares its bid, which can be time-varying, with the “next bid” to determine whether it should release the resource to another activity. The “next bid” can be accessed without any system calls to the operating system. This allows the activity to determine whether to relinquish control to the system without the necessity of communication between the two. Likewise, the operating system can access the bid of the accessing activity without explicit communication. This allows the system to determine whether to preempt the accessing activity without the necessity of communication between the two.
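
    The abstract above describes the running activity checking a shared “next bid” value with a plain memory read, so no system call is needed to decide whether to yield. The following is a minimal C sketch of that comparison under assumed names (bid_board, next_bid, my_bid, and the decaying-bid formula are all hypothetical); it is not the patented implementation.

        /* Bid table assumed to live in memory shared between the scheduler and
         * the running activity, so the yield decision is a pure memory read. */
        #include <stdatomic.h>
        #include <stdbool.h>

        struct bid_board {
            _Atomic unsigned next_bid;  /* highest bid among activities not running */
        };

        /* The running activity's bid may vary over time; here it simply decays. */
        unsigned my_bid(unsigned base, unsigned elapsed_ticks)
        {
            return base > elapsed_ticks ? base - elapsed_ticks : 0;
        }

        /* No system call: compare my own (time-varying) bid against the next bid. */
        bool should_release(struct bid_board *b, unsigned base, unsigned elapsed_ticks)
        {
            return my_bid(base, elapsed_ticks) <
                   atomic_load_explicit(&b->next_bid, memory_order_acquire);
        }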


Technique for efficiently transferring moderate amounts of data across address space boundary
    2.
    Granted Invention Patent (Expired)

    Publication No.: US06601146B2

    Publication Date: 2003-07-29

    Application No.: US09098061

    Filing Date: 1998-06-16

    IPC Class: G06F 12/00

    CPC Class: H04L29/06 G06F9/544

    Abstract: A method and apparatus for performing efficient interprocess communication (IPC) in a computer system. With this invention, a memory region called the IPC transfer region is shared among all processes of the system to enable more efficient IPC. The unique physical address of the region is mapped into a virtual address from each of the address spaces of the processes of the system. When one of the processes needs to transfer data to another of the processes, the first process stores arguments describing the data in the region using the virtual address in its address space that maps into the unique physical address. When the other or second process needs to receive the data, the second process reads the data from the region using the virtual address in its memory space that maps into the unique physical address. With this invention, in most cases, transfer of control of the IPC transfer region occurs automatically without any kernel intervention.
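
    As a rough illustration of a single transfer region mapped into each process's address space, the sketch below uses POSIX shared memory (shm_open/mmap). The patent's IPC transfer region is established and controlled by the kernel rather than by these calls, and the region name and argument layout here are hypothetical.

        #include <fcntl.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define REGION_NAME "/ipc_transfer_region"  /* hypothetical name */
        #define REGION_SIZE 4096

        struct transfer_args {                       /* hypothetical argument layout */
            size_t len;
            char   payload[REGION_SIZE - sizeof(size_t)];
        };

        /* Each process maps the same region at a virtual address of its own. */
        struct transfer_args *map_region(void)
        {
            int fd = shm_open(REGION_NAME, O_CREAT | O_RDWR, 0600);
            if (fd < 0)
                return NULL;
            if (ftruncate(fd, REGION_SIZE) < 0) { close(fd); return NULL; }
            void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            close(fd);
            return p == MAP_FAILED ? NULL : (struct transfer_args *)p;
        }

        /* Sender stores its arguments through its own mapping; the receiver
         * reads them through its mapping of the same physical region. */
        void send_args(struct transfer_args *r, const char *msg)
        {
            r->len = strlen(msg) + 1;
            memcpy(r->payload, msg, r->len);
        }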


Achieving autonomic behavior in an operating system via a hot-swapping mechanism
    3.
    Granted Invention Patent (Expired)

    Publication No.: US07533377B2

    Publication Date: 2009-05-12

    Application No.: US10673587

    Filing Date: 2003-09-29

    IPC Class: G06F9/44 G06F15/177

    CPC Class: G06F8/656

    Abstract: Systems, especially operating systems, are becoming so complex that maintaining them by hand is becoming nearly impossible. Many corporations have recognized this trend and have begun investing in autonomic technology. Autonomic technology allows a piece of software to monitor, diagnose, and repair itself. This can be used for improved performance, reliability, maintainability, security, etc. Disclosed herein is a mechanism that allows an operating system to hot swap a piece of operating system code while continuing to offer the user the service that the code provides. This can be used, for example, to increase the performance of an application or to fix a detected security hole live without bringing the machine down. Some autonomic ability will be mandatory in next-generation operating systems; without it they will collapse under their own complexity. The invention offers a key component for achieving autonomic computing.
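
    A minimal user-space sketch of the hot-swap indirection is shown below: callers reach the service through a replaceable function pointer, so a new implementation can be installed while the service keeps being offered. Quiescing in-flight calls and transferring state, which a real kernel hot swap must also do, are omitted, and all names are hypothetical.

        #include <stdatomic.h>
        #include <stdio.h>

        typedef int (*page_alloc_fn)(int order);

        int alloc_v1(int order) { return 100 + order; }  /* original code */
        int alloc_v2(int order) { return 200 + order; }  /* patched code  */

        static _Atomic page_alloc_fn alloc_impl = alloc_v1;

        int page_alloc(int order)            /* what the rest of the system calls */
        {
            page_alloc_fn f = atomic_load(&alloc_impl);
            return f(order);
        }

        void hot_swap(page_alloc_fn next)    /* installs the replacement live */
        {
            atomic_store(&alloc_impl, next);
        }

        int main(void)
        {
            printf("%d\n", page_alloc(1));   /* served by v1 */
            hot_swap(alloc_v2);
            printf("%d\n", page_alloc(1));   /* served by v2, with no downtime */
            return 0;
        }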


Cache architecture to enable accurate cache sensitivity
    4.
    Granted Invention Patent (Expired)

    Publication No.: US06243788B1

    Publication Date: 2001-06-05

    Application No.: US09098988

    Filing Date: 1998-06-17

    IPC Class: G06F 12/00

    CPC Class: G06F9/5033

    Abstract: A technique of monitoring the cache footprint of relevant threads on a given processor and its associated cache, thus enabling operating systems to perform better cache-sensitive scheduling. A function of the footprint of a thread in a cache can be used as an indication of the affinity of that thread to that cache's processor. For instance, the larger the number of cachelines already existing in a cache, the smaller the number of cache misses the thread will experience when scheduled on that processor, and hence the greater the affinity of the thread to that processor. Besides a thread's priority and other system-defined parameters, scheduling algorithms can take cache affinity into account when assigning execution of threads to particular processors. This invention describes an apparatus that accurately measures the cache footprint of a thread on a given processor and its associated cache by keeping a state and ownership count of cachelines based on ownership registration and cache usage as determined by a cache monitoring unit.
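
    The sketch below models, in software, the bookkeeping the abstract attributes to a hardware cache monitoring unit: a per-thread count of cache lines currently owned in a processor's cache, which the scheduler can read as an affinity score. The thread-id scheme and table size are hypothetical.

        #define MAX_THREADS 64

        struct cache_monitor {
            unsigned owned_lines[MAX_THREADS];  /* cache footprint per thread */
        };

        /* A line owned by old_tid is replaced by a fill on behalf of new_tid. */
        void line_replaced(struct cache_monitor *m, int old_tid, int new_tid)
        {
            if (old_tid >= 0 && m->owned_lines[old_tid] > 0)
                m->owned_lines[old_tid]--;
            m->owned_lines[new_tid]++;
        }

        /* Larger footprint => fewer expected misses => stronger affinity. */
        unsigned affinity(const struct cache_monitor *m, int tid)
        {
            return m->owned_lines[tid];
        }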


Dynamic update mechanisms in operating systems
    5.
    Granted Invention Patent (Expired)

    Publication No.: US07818736B2

    Publication Date: 2010-10-19

    Application No.: US11227761

    Filing Date: 2005-09-14

    IPC Class: G06F9/44 G06F15/16

    CPC Class: G06F8/67 G06F8/656

    Abstract: To dynamically update an operating system, a new factory object may have one or more new and/or updated object instances. A corresponding old factory object is then located and its version is checked for compatibility. A dynamic update procedure is then executed, which includes (a) changing a factory reference pointer within the operating system from the old factory object to the new factory object. For the case of updated object instances, (b) hot swapping each old object instance for its corresponding updated object instance, and (c) removing the old factory object. This may be performed for multiple updated object instances in the new factory object, preferably each separately. For the case of new object instances, they are created by the new factory and pointers are established to invoke them. A single factory object may include multiple updated objects from a class, and/or new object instances from different classes, and the update may be performed without the need to reboot the operating system.
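
    Below is a plain-C sketch of the factory-pointer swap described above: object instances are created through a factory reference pointer, so an update can retarget that pointer at a new factory and then replace old instances with ones the new factory creates. Version checking is reduced to a single comparison, state transfer between instances is omitted, and all names are hypothetical.

        #include <stdlib.h>

        struct obj {                         /* an updatable object instance */
            int version;
            void (*service)(struct obj *self);
        };

        struct factory {
            int version;
            struct obj *(*create)(void);
        };

        static struct factory *factory_ref;  /* the factory reference pointer */

        /* Step (a): retarget the factory reference after a compatibility check. */
        int dynamic_update(struct factory *new_factory)
        {
            if (factory_ref && new_factory->version < factory_ref->version)
                return -1;                   /* incompatible: refuse the update */
            factory_ref = new_factory;
            return 0;
        }

        /* Steps (b) and (c): swap an old instance for one built by the new factory. */
        struct obj *hot_swap_instance(struct obj *old_instance)
        {
            struct obj *fresh = factory_ref->create();
            free(old_instance);              /* removal of the old, much simplified */
            return fresh;
        }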


Assist thread for injecting cache memory in a microprocessor
    6.
    Granted Invention Patent (In Force)

    Publication No.: US08949837B2

    Publication Date: 2015-02-03

    Application No.: US13434423

    Filing Date: 2012-03-29

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler, configured to schedule the assist thread in conjunction with the corresponding execution thread, executes the assist thread ahead of the execution thread by a determinable threshold, such as a number of main processor cycles or a number of code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower-level cache memory elements.
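
    The run-ahead idea can be sketched as follows: a helper thread walks the same index stream as the main loop but only touches memory (here with the GCC/Clang __builtin_prefetch intrinsic), keeping its lead bounded by a threshold. The patent derives the assist thread from the compiled source code object; the AHEAD distance, data layout, and spin-wait below are hypothetical simplifications.

        #include <pthread.h>
        #include <stdatomic.h>

        #define N     (1 << 20)
        #define AHEAD 64                      /* lead distance, in elements */

        static double data[N];
        static _Atomic long main_pos;         /* progress of the main thread */

        static void *assist_thread(void *arg)
        {
            (void)arg;
            for (long i = 0; i < N; i++) {
                while (i - atomic_load(&main_pos) > AHEAD)
                    ;                         /* stay within the threshold */
                __builtin_prefetch(&data[i], 0, 1);
            }
            return NULL;
        }

        double main_thread_work(void)
        {
            double sum = 0.0;
            for (long i = 0; i < N; i++) {
                sum += data[i];               /* loads largely warmed by the assist */
                atomic_store(&main_pos, i);
            }
            return sum;
        }

        int main(void)
        {
            pthread_t t;
            pthread_create(&t, NULL, assist_thread, NULL);
            double s = main_thread_work();
            pthread_join(t, NULL);
            return s != 0.0;                  /* keep the result observable */
        }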


Optimal interconnect utilization in a data processing network
    7.
    Granted Invention Patent (Expired)

    Publication No.: US07400585B2

    Publication Date: 2008-07-15

    Application No.: US10948414

    Filing Date: 2004-09-23

    IPC Class: H04L12/56

    Abstract: A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each of the links in the network's interconnect. The collected data includes source and destination information indicative of the source and destination of corresponding packets. A heavily used link is then identified from the collected data. Packet data associated with the heavily used link is then analyzed to identify a packet source and packet destination combination that is a significant contributor to the packet traffic on the heavily used link. In response, a process associated with the identified packet source and packet destination combination is migrated, such as to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for interconnect links connected to the switch.
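
    The analysis step can be sketched as follows: per-link byte counters plus per (source, destination) counters, from which the busiest link and its largest contributing flow are picked as the migration candidate. The structures and limits are hypothetical, and the per-switch collection agents are not shown here.

        #include <stddef.h>

        #define MAX_FLOWS 128

        struct flow { int src_node, dst_node; unsigned long bytes; };

        struct link_stats {
            unsigned long total_bytes;
            struct flow   flows[MAX_FLOWS];
            size_t        nflows;
        };

        /* Identify the most heavily used link in the interconnect. */
        size_t busiest_link(const struct link_stats *links, size_t nlinks)
        {
            size_t best = 0;
            for (size_t i = 1; i < nlinks; i++)
                if (links[i].total_bytes > links[best].total_bytes)
                    best = i;
            return best;
        }

        /* Identify the (source, destination) pair contributing most to that link;
         * the process behind this pair is the candidate for migration. */
        const struct flow *top_contributor(const struct link_stats *l)
        {
            const struct flow *best = NULL;
            for (size_t i = 0; i < l->nflows; i++)
                if (!best || l->flows[i].bytes > best->bytes)
                    best = &l->flows[i];
            return best;
        }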


Optimal interconnect utilization in a data processing network
    9.
    Granted Invention Patent (Expired)

    Publication No.: US07821944B2

    Publication Date: 2010-10-26

    Application No.: US12059762

    Filing Date: 2008-03-31

    IPC Class: H04L12/56

    Abstract: A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each of the links in the network's interconnect. The collected data includes source and destination information indicative of the source and destination of corresponding packets. A heavily used link is then identified from the collected data. Packet data associated with the heavily used link is then analyzed to identify a packet source and packet destination combination that is a significant contributor to the packet traffic on the heavily used link. In response, a process associated with the identified packet source and packet destination combination is migrated, such as to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for interconnect links connected to the switch.
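
    This patent shares its abstract with entry 7 above; as a companion to the analysis sketch given there, the fragment below sketches the collection side, an agent on an interconnect switch accumulating per-link counters keyed by packet source and destination. The header fields and table size are hypothetical.

        #include <stddef.h>

        #define MAX_FLOWS 128

        struct pkt_hdr { int src_node, dst_node; size_t len; };

        struct flow_counter { int src_node, dst_node; unsigned long bytes; };

        struct link_agent {
            unsigned long       total_bytes;
            struct flow_counter flows[MAX_FLOWS];
            size_t              nflows;
        };

        /* Called by the switch agent for every packet observed on the link. */
        void account_packet(struct link_agent *a, const struct pkt_hdr *p)
        {
            a->total_bytes += p->len;
            for (size_t i = 0; i < a->nflows; i++) {
                if (a->flows[i].src_node == p->src_node &&
                    a->flows[i].dst_node == p->dst_node) {
                    a->flows[i].bytes += p->len;
                    return;
                }
            }
            if (a->nflows < MAX_FLOWS)       /* first packet of a new flow */
                a->flows[a->nflows++] = (struct flow_counter){
                    p->src_node, p->dst_node, p->len };
        }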


Method and system for memory address translation and pinning
    10.
    Granted Invention Patent (Expired)

    Publication No.: US07636800B2

    Publication Date: 2009-12-22

    Application No.: US11426588

    Filing Date: 2006-06-27

    Abstract: A method and system for memory address translation and pinning are provided. The method includes attaching a memory address space identifier to a direct memory access (DMA) request, where the DMA request is sent by a consumer and uses a virtual address in a given address space. The method further includes looking up the memory address space identifier to find a translation of the virtual address in the given address space used in the DMA request to a physical page frame. Provided that the physical page frame is found, the physical page frame is pinned as long as the DMA request is in progress to prevent an unmapping operation of said virtual address in said given address space, and the DMA request is then completed. The steps of attaching, looking up, and pinning are centrally controlled by a host gateway.
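
    The three centrally controlled steps named in the abstract (attach an address space identifier, look up the translation, pin while the DMA is in flight) can be sketched as below. The tables, identifiers, and pin-count scheme are hypothetical simplifications of what a host gateway would keep.

        #include <stddef.h>

        struct dma_request {
            unsigned      asid;     /* memory address space identifier from the consumer */
            unsigned long vaddr;    /* virtual address in that address space */
        };

        struct translation {
            unsigned      asid;
            unsigned long vaddr;
            unsigned long pframe;      /* physical page frame */
            unsigned      pin_count;   /* >0 while a DMA is in progress: no unmapping */
        };

        /* Host gateway: find the translation and pin the frame for the DMA. */
        struct translation *gateway_begin_dma(struct translation *tbl, size_t n,
                                              const struct dma_request *req)
        {
            for (size_t i = 0; i < n; i++) {
                if (tbl[i].asid == req->asid && tbl[i].vaddr == req->vaddr) {
                    tbl[i].pin_count++;    /* pinned for the duration of the DMA */
                    return &tbl[i];
                }
            }
            return NULL;                   /* no translation found */
        }

        /* Completion of the DMA request: drop the pin so the page may be unmapped. */
        void gateway_end_dma(struct translation *t)
        {
            if (t && t->pin_count > 0)
                t->pin_count--;
        }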
