CONTINUATION ANALYSIS TASKS FOR GPU TASK SCHEDULING

    Publication number: US20180349145A1

    Publication date: 2018-12-06

    Application number: US15607991

    Filing date: 2017-05-30

    CPC classification number: G06F9/505 G06F9/5066 G06F2209/509

    Abstract: Systems, apparatuses, and methods for implementing continuation analysis tasks (CATs) are disclosed. In one embodiment, a system implements hardware acceleration of CATs to manage the dependencies and scheduling of an application composed of multiple tasks. In one embodiment, a continuation packet is referenced directly by a first task. When the first task completes, it enqueues a continuation packet on a first queue; the first task can specify on which queue to place the packet. The agent responsible for the first queue dequeues and executes the continuation packet, which invokes an analysis phase that runs before determining which dependent tasks to enqueue. If the analysis phase determines that a second task is now ready to launch, the second task is enqueued on one of the queues, and the agent responsible for that queue dequeues and executes it.
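The complete-enqueue-analyze-launch flow in the abstract can be sketched in software. This is a minimal illustrative model, not the patented hardware mechanism: the queue names, the dependency-count table, and the `CATScheduler` class are all invented for illustration.

```python
from collections import deque

class CATScheduler:
    """Toy model of continuation packets driving dependent-task launch."""

    def __init__(self, dep_counts):
        self.queues = {}                     # queue name -> packet deque
        self.dep_counts = dict(dep_counts)   # task -> unmet dependency count
        self.executed = []                   # tasks an agent has run

    def queue(self, name):
        return self.queues.setdefault(name, deque())

    def on_complete(self, continuation_queue, dependents):
        # A completing task enqueues a continuation packet on the queue
        # it specifies; the packet names the candidate dependent tasks.
        self.queue(continuation_queue).append(("continuation", dependents))

    def run_agent(self, queue_name):
        # The agent responsible for a queue dequeues and executes packets.
        q = self.queue(queue_name)
        while q:
            kind, payload = q.popleft()
            if kind == "continuation":
                # Analysis phase: decide which dependents are now ready.
                for task in payload:
                    self.dep_counts[task] -= 1
                    if self.dep_counts[task] == 0:
                        self.queue("compute").append(("task", task))
            else:
                self.executed.append(payload)

sched = CATScheduler({"B": 1, "C": 2})
sched.on_complete("analysis", dependents=["B", "C"])  # first task finishes
sched.run_agent("analysis")   # analysis phase finds B ready, enqueues it
sched.run_agent("compute")    # agent dequeues and executes task B
```

The point of the indirection is visible even in the toy: the analysis phase runs as an ordinary queued packet, so deciding *which* tasks to launch is itself schedulable work rather than fixed logic on the completion path.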

    METHOD AND APPARATUS FOR INTER-LANE THREAD MIGRATION

    Publication number: US20170220346A1

    Publication date: 2017-08-03

    Application number: US15010093

    Filing date: 2016-01-29

    CPC classification number: G06F9/3851 G06F9/3887 G06F9/4856

    Abstract: Methods and apparatus are disclosed to migrate a software thread from one wavefront executing on one execution unit to another wavefront executing on another execution unit, where both execution units are associated with a compute unit of a processing device such as, for example, a GPU. The methods and apparatus may execute compiled dynamic thread migration swizzle buffer instructions that, when executed, allow access to a dynamic thread migration swizzle buffer that enables migration of register context information when migrating software threads. The register context information may be located in one or more locations of a register file prior to being stored into the dynamic thread migration swizzle buffer. The methods and apparatus may also return the register context information from the dynamic thread migration swizzle buffer to one or more different register file locations of the register file.
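The store-then-return staging through the swizzle buffer can be sketched as a two-phase copy. The register-file layout (a lane-indexed map of register vectors) and the `lane_map` interface are assumptions for illustration, not the patent's instruction encoding.

```python
def swizzle_migrate(register_file, lane_map):
    """Move threads' register contexts between lanes via a staging buffer.

    register_file: dict mapping lane index -> list of register values
    lane_map: dict mapping source lane -> destination lane
    """
    # Store phase: each migrating thread copies its register context
    # into the swizzle buffer, indexed by its source lane.
    swizzle_buffer = {src: list(register_file[src]) for src in lane_map}
    # Return phase: contexts are written back to different lane
    # positions of the register file.
    for src, dst in lane_map.items():
        register_file[dst] = swizzle_buffer[src]
    return register_file

regs = {0: [10, 11], 1: [20, 21], 2: [30, 31]}   # lane -> registers
swizzle_migrate(regs, {0: 2, 2: 0})              # swap lanes 0 and 2
```

Staging through a buffer (rather than copying lane-to-lane directly) is what makes arbitrary permutations safe: in the swap above, lane 0's context survives even though lane 0 is also a destination.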

    DYNAMIC WAVEFRONT CREATION FOR PROCESSING UNITS USING A HYBRID COMPACTOR
    Invention application (granted)

    Publication number: US20160239302A1

    Publication date: 2016-08-18

    Application number: US14682971

    Filing date: 2015-04-09

    Abstract: A method, a non-transitory computer readable medium, and a processor for repacking dynamic wavefronts during program code execution on a processing unit are presented, where each dynamic wavefront includes multiple threads. If a branch instruction is detected, a determination is made whether all wavefronts following the same control path in the program code have reached a compaction point, which is the branch instruction. If no branch instruction is detected while executing the program code, a determination is made whether all wavefronts following the same control path have reached a reconvergence point, which is the beginning of a program code segment to be executed by both the taken branch and the not-taken branch of a previous branch instruction. If all wavefronts following the same control path have reached the branch instruction or the reconvergence point, the dynamic wavefronts are repacked with all threads that follow the same control path.
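The repacking step itself (once all wavefronts on a path have reached the compaction or reconvergence point) amounts to regrouping threads by control path. A minimal sketch, assuming a fixed wavefront width and a per-thread path label; both are illustrative simplifications of the hardware:

```python
WAVEFRONT_WIDTH = 4   # assumed width; real hardware widths vary

def repack(threads, width=WAVEFRONT_WIDTH):
    """Group threads that follow the same control path into dense
    wavefronts of up to `width` threads each.

    threads: list of (thread_id, control_path_label) pairs
    returns: list of (control_path_label, [thread_ids]) wavefronts
    """
    by_path = {}
    for tid, path in threads:
        by_path.setdefault(path, []).append(tid)
    wavefronts = []
    for path, tids in by_path.items():
        # Pack threads on the same path densely, splitting at the width.
        for i in range(0, len(tids), width):
            wavefronts.append((path, tids[i:i + width]))
    return wavefronts

# Threads 0..7 diverge at a branch: even threads take it, odd do not.
threads = [(t, "taken" if t % 2 == 0 else "not_taken") for t in range(8)]
packed = repack(threads)
```

Without repacking, the eight threads above would occupy two half-empty wavefronts per path; after compaction each path runs as one full wavefront, which is the utilization win the abstract describes.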

    Atomic Execution of Processing-in-Memory Operations

    Publication number: US20240419330A1

    Publication date: 2024-12-19

    Application number: US18211544

    Filing date: 2023-06-19

    Abstract: Scheduling processing-in-memory transactions in systems with multiple memory controllers is described. In accordance with the described techniques, an addressing system segments operations of a transaction into multiple microtransactions, where each microtransaction includes a subset of the transaction operations that are scheduled by a corresponding one of the multiple memory controllers. Each transaction, and its associated microtransactions, is assigned a transaction identifier based on a current counter value maintained at the multiple memory controllers, and the multiple memory controllers schedule execution of microtransactions based on associated transaction identifiers to ensure atomic execution of operations for a transaction without interruption by operations of a different transaction.
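The segment-and-order scheme can be sketched as follows. The address-to-controller mapping (simple modulo), the shared counter, and the tuple layout are all assumptions made for illustration; the patent does not specify them here.

```python
NUM_CONTROLLERS = 2   # assumed controller count

def segment(transaction_ops, txn_id, num_controllers=NUM_CONTROLLERS):
    """Split a transaction's ops into per-controller microtransactions,
    each tagged with the shared transaction identifier.

    transaction_ops: list of (address, op) pairs
    returns: list indexed by controller, of (txn_id, address, op) tuples
    """
    micro = [[] for _ in range(num_controllers)]
    for addr, op in transaction_ops:
        micro[addr % num_controllers].append((txn_id, addr, op))
    return micro

def schedule(controller_queue):
    """Execute microtransactions in transaction-ID order, so one
    transaction's ops are never interleaved with another's."""
    return sorted(controller_queue, key=lambda m: m[0])

counter = 0                                   # shared counter at the controllers
t1 = segment([(0, "add"), (1, "add")], txn_id=counter); counter += 1
t2 = segment([(0, "mul")], txn_id=counter)

queue0 = t2[0] + t1[0]        # microtransactions arrive out of order
ordered = schedule(queue0)    # controller 0 restores transaction order
```

Because every controller sorts by the same transaction identifier, the controllers agree on a global order without communicating, which is what preserves atomicity across the split.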

    Resource Access Control
    Invention publication

    Publication number: US20240220265A1

    Publication date: 2024-07-04

    Application number: US18147103

    Filing date: 2022-12-28

    CPC classification number: G06F9/3836 G06F9/4806 G06F9/5061

    Abstract: Resource access control is described. In accordance with the described techniques, a process (e.g., an application process, a system process, etc.) issues an instruction seeking access to a computation resource (e.g., a processor resource, a memory resource, etc.) to perform a computation task. An execution context for the instruction is checked to determine whether the execution context includes a resource indicator indicating permission to access the computation resource. Alternatively or additionally, the instruction is checked against an access table which identifies processes that are permitted and/or not permitted to access the computation resource.
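The two checks (execution-context indicator, access table) compose naturally as an "either grants" policy. The data shapes below, including the `resource_indicators` key and the example table, are invented for illustration:

```python
# Hypothetical access table: resource -> set of permitted processes.
ACCESS_TABLE = {
    "gpu":  {"render_proc"},
    "dram": {"render_proc", "audio_proc"},
}

def may_access(process, resource, execution_context):
    """Grant access if either check described in the abstract passes."""
    # Check 1: does the instruction's execution context carry a
    # resource indicator granting permission for this resource?
    if resource in execution_context.get("resource_indicators", set()):
        return True
    # Check 2: is the process listed as permitted in the access table?
    return process in ACCESS_TABLE.get(resource, set())

ctx = {"resource_indicators": {"gpu"}}   # context grants GPU access
```

The "alternatively or additionally" wording in the abstract leaves the combination open; an implementation could equally require both checks to pass, which would change the `if` into an `and`.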

    Executing Kernel Workgroups Across Multiple Compute Unit Types

    Publication number: US20240111591A1

    Publication date: 2024-04-04

    Application number: US17957907

    Filing date: 2022-09-30

    CPC classification number: G06F9/5038 G06F9/3009 G06F9/5072

    Abstract: Portions of programs, oftentimes referred to as kernels, are written by programmers to target a particular type of compute unit, such as a central processing unit (CPU) core or a graphics processing unit (GPU) core. When executing a kernel, the kernel is separated into multiple parts referred to as workgroups, and each workgroup is provided to a compute unit for execution. Usage of one type of compute unit is monitored and, in response to the one type of compute unit being idle, one or more workgroups targeting another type of compute unit are executed on the one type of compute unit. For example, usage of CPU cores is monitored, and in response to the CPU cores being idle, one or more workgroups targeting GPU cores are executed on the CPU cores.
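The idle-steal policy in the abstract can be sketched as one scheduling step over two target queues. The queue structure, the single-step `dispatch` function, and the steal-one-workgroup granularity are assumptions for illustration:

```python
from collections import deque

def dispatch(cpu_queue, gpu_queue, cpu_idle):
    """One scheduling step: assign workgroups to compute unit types.

    Workgroups normally run on the type they target, but if the CPU
    cores are idle with no native work, a GPU-targeted workgroup is
    executed on the CPU instead.
    """
    assignments = []
    # Native CPU-targeted work always runs on the CPU.
    while cpu_queue:
        assignments.append(("cpu", cpu_queue.popleft()))
    # Steal: CPU is idle and has nothing of its own to run.
    if cpu_idle and not assignments and gpu_queue:
        assignments.append(("cpu", gpu_queue.popleft()))
    # Remaining GPU-targeted work runs where it was aimed.
    while gpu_queue:
        assignments.append(("gpu", gpu_queue.popleft()))
    return assignments

# No CPU work pending, two GPU workgroups queued, CPU cores idle:
plan = dispatch(deque(), deque(["wg0", "wg1"]), cpu_idle=True)
```

The interesting engineering problem, which the abstract implies but the sketch omits, is that a stolen workgroup must be executable on both unit types, e.g. via dual compilation of the kernel.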

    DYNAMIC KERNEL MEMORY SPACE ALLOCATION
    Invention publication

    Publication number: US20230196502A1

    Publication date: 2023-06-22

    Application number: US18103322

    Filing date: 2023-01-30

    CPC classification number: G06T1/60 G06F9/30098 G06F12/023 G06T1/20 G06F12/02

    Abstract: A processing unit includes one or more processor cores and a set of registers to store configuration information for the processing unit. The processing unit also includes a coprocessor configured to receive a request to modify a memory allocation for a kernel concurrently with the kernel executing on at least one of the processor cores. The coprocessor is configured to modify the memory allocation by modifying the configuration information stored in the set of registers. In some cases, initial configuration information is provided to the set of registers by a different processing unit. The initial configuration information is stored in the set of registers prior to the coprocessor modifying the configuration information.
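The register-mediated resize can be sketched as follows. The register names (`alloc_size_k0`) and the `Coprocessor` interface are invented; the patent describes configuration registers generically.

```python
class Coprocessor:
    """Toy coprocessor that services allocation-resize requests by
    rewriting configuration registers, while the kernel keeps running."""

    def __init__(self, registers):
        self.registers = registers   # shared configuration register set

    def modify_allocation(self, kernel_id, new_size):
        # The allocation change is effected purely by updating the
        # configuration register, not by stopping the kernel.
        self.registers[f"alloc_size_{kernel_id}"] = new_size

# Initial configuration written by a different processing unit
# (e.g., the host) before the coprocessor ever touches it.
regs = {"alloc_size_k0": 4096}
cp = Coprocessor(regs)
cp.modify_allocation("k0", 8192)   # grow kernel k0's allocation
```

Keeping the allocation size in a register that the executing kernel reads on each access is what makes the resize safe to apply concurrently: there is no pointer-rewriting step during which the kernel would observe a torn state.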
