Low latency thread context caching
    1.
    Granted Patent
    Low latency thread context caching (In Force)

    Publication No.: US09384036B1

    Publication Date: 2016-07-05

    Application No.: US14059218

    Filing Date: 2013-10-21

    Applicant: Google Inc.

    Abstract: A method includes performing one or more operations as requested by a thread executing on a processor, the thread having a thread context; receiving a park request from the thread, the park request received following a request from the thread for a low latency resource, wherein a cache response time is less than or equal to a resource response threshold, allowing the thread context to be stored in and retrieved from a cache in less time than it takes to complete the request for the low latency resource; storing the thread context in the cache; detecting that a resume condition has occurred; retrieving the thread context from the cache; and resuming execution of the thread.

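As a rough software analogue of the park/resume flow the abstract describes, the sketch below stores a parked thread's context in a fast cache and restores it when the resume condition occurs. All names here (`ContextCache`, `park`, `resume`) are illustrative assumptions, not terms from the patent.

```python
# Hypothetical sketch of the park/resume flow; names are illustrative,
# not from the patent.

class ContextCache:
    """A small, fast store for parked thread contexts."""

    def __init__(self, response_time_ns, resource_threshold_ns):
        # Parking only pays off if storing and retrieving a context
        # is faster than waiting out the low latency resource request.
        assert response_time_ns <= resource_threshold_ns
        self._slots = {}

    def park(self, thread_id, context):
        # Store the thread's register/program-counter state.
        self._slots[thread_id] = context

    def resume(self, thread_id):
        # On the resume condition (e.g. the resource responds),
        # retrieve the context so execution can continue.
        return self._slots.pop(thread_id)


cache = ContextCache(response_time_ns=5, resource_threshold_ns=20)
cache.park(thread_id=7, context={"pc": 0x400, "regs": [1, 2, 3]})
# ... the low latency resource request completes (resume condition) ...
ctx = cache.resume(thread_id=7)
```

The threshold check in the constructor mirrors the abstract's condition that the cache response time not exceed the resource response threshold.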

    Shared input/output (I/O) unit
    2.
    Granted Patent
    Shared input/output (I/O) unit (In Force)

    Publication No.: US09218310B2

    Publication Date: 2015-12-22

    Application No.: US13835000

    Filing Date: 2013-03-15

    Applicant: GOOGLE INC.

    CPC classification number: G06F13/4045 G06F13/38 G06F13/382 G06F2213/0026

    Abstract: A system includes a bus, a processor operably coupled to the bus, a memory operably coupled to the bus, a plurality of input/output (I/O) devices operably coupled to the bus, where each of the I/O devices has a set of control registers, and a first shared I/O unit operably coupled to the bus. The first shared I/O unit has a plurality of shared functions and is configured to perform them; the shared functions are not implemented on the I/O devices themselves, and the I/O devices and the processor interact with the first shared I/O unit to use one or more of the shared functions it performs.

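The abstract's structure — devices that keep their own control registers but delegate factored-out functions to one shared unit — can be sketched as a simple object model. The class names and the checksum example below are hypothetical, chosen only to illustrate the delegation pattern.

```python
# Hypothetical sketch of devices delegating to a shared I/O unit;
# names and the checksum function are illustrative, not from the patent.

class SharedIOUnit:
    """Hosts functions factored out of the individual I/O devices."""

    def __init__(self, shared_functions):
        self._functions = shared_functions  # name -> callable

    def invoke(self, name, *args):
        return self._functions[name](*args)


class IODevice:
    """A device with its own control registers but no local copy of
    the shared functions; it calls into the shared unit instead."""

    def __init__(self, shared_unit):
        self.control_registers = [0] * 8
        self._shared = shared_unit

    def checksum(self, payload):
        # The device does not implement checksumming itself.
        return self._shared.invoke("checksum", payload)


unit = SharedIOUnit({"checksum": lambda data: sum(data) & 0xFF})
devices = [IODevice(unit) for _ in range(3)]
results = [d.checksum(b"\x01\x02\x03") for d in devices]
```

Because every device routes through the same `SharedIOUnit`, the function exists once in the system rather than once per device, which is the duplication the abstract is eliminating.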

    EFFICIENT INPUT/OUTPUT (I/O) OPERATIONS
    3.
    Patent Application
    EFFICIENT INPUT/OUTPUT (I/O) OPERATIONS (In Force)

    Publication No.: US20140281107A1

    Publication Date: 2014-09-18

    Application No.: US13835000

    Filing Date: 2013-03-15

    Applicant: GOOGLE INC.

    CPC classification number: G06F13/4045 G06F13/38 G06F13/382 G06F2213/0026

    Abstract: A system includes a bus, a processor operably coupled to the bus, a memory operably coupled to the bus, a plurality of input/output (I/O) devices operably coupled to the bus, where each of the I/O devices has a set of control registers, and a first shared I/O unit operably coupled to the bus. The first shared I/O unit has a plurality of shared functions and is configured to perform them; the shared functions are not implemented on the I/O devices themselves, and the I/O devices and the processor interact with the first shared I/O unit to use one or more of the shared functions it performs.


    Doubling thread resources in a processor
    4.
    Granted Patent

    Publication No.: US09367318B1

    Publication Date: 2016-06-14

    Application No.: US14930893

    Filing Date: 2015-11-03

    Applicant: Google Inc.

    Inventor: James Laudon

    Abstract: Methods and systems are provided for managing thread execution in a processor. Multiple instructions are fetched from fetch queues, each operating on fewer bits than the width of the integer processing pathway used to execute it. The instructions are decoded and divided into groups, then processed simultaneously through the pathway, with one part of the pathway executing one group and another part executing another group. The parts are isolated from one another, so the instruction groups can share the pathway while executing simultaneously and independently.

    Doubling thread resources in a processor
    5.
    Granted Patent
    Doubling thread resources in a processor (In Force)

    Publication No.: US09207944B1

    Publication Date: 2015-12-08

    Application No.: US13839602

    Filing Date: 2013-03-15

    Applicant: Google Inc.

    Inventor: James Laudon

    Abstract: Methods and systems are provided for managing thread execution in a processor. Multiple instructions are fetched from fetch queues, each operating on fewer bits than the width of the integer processing pathway used to execute it. The instructions are decoded and divided into groups, then processed simultaneously through the pathway, with one part of the pathway executing one group and another part executing another group. The parts are isolated from one another, so the instruction groups can share the pathway while executing simultaneously and independently.

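The pathway-splitting idea in these two abstracts — narrower operations sharing one wide integer pathway, with the lanes isolated so results cannot leak between groups — resembles SWAR (SIMD within a register) arithmetic. The sketch below is a generic illustration of that technique, not the patent's circuit: two independent 32-bit additions are carried out in one 64-bit add, with the lane boundary masked so no carry crosses it.

```python
# Two independent 32-bit additions performed in one 64-bit add, with
# the lane boundary isolated so no carry crosses between the two
# "instruction groups". A generic SWAR sketch, not the patent's design.

LANE_HI = 0x8000000080000000  # high bit of each 32-bit lane
MASK64 = 0xFFFFFFFFFFFFFFFF

def paired_add32(x, y):
    """Lane-wise 32-bit addition of two values packed into 64 bits."""
    # Clear the top bit of each lane so the wide add cannot carry
    # across the lane boundary, then XOR the top bits' sum back in.
    low = ((x & ~LANE_HI) + (y & ~LANE_HI)) & MASK64
    return low ^ ((x ^ y) & LANE_HI)

def pack(hi, lo):
    """Pack two 32-bit lane values into one 64-bit word."""
    return ((hi & 0xFFFFFFFF) << 32) | (lo & 0xFFFFFFFF)

# Upper lane: 3 + 5 = 8. Lower lane: 0xFFFFFFFF + 1 wraps to 0
# without leaking a carry into the upper lane.
result = paired_add32(pack(3, 0xFFFFFFFF), pack(5, 1))
```

The masking step is what plays the role of the "isolation" between pathway parts: each lane behaves as if it had its own narrower adder.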
