Methods and systems for inter-pipeline data hazard avoidance

    Publication No.: US10817301B2

    Publication Date: 2020-10-27

    Application No.: US16009358

    Filing Date: 2018-06-15

    IPC Classification: G06F9/44 G06F9/38 G06F9/30

    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards wherein inter-pipeline data hazards are identified at compile time. For each identified inter-pipeline data hazard, the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. Then, when a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted (e.g. incremented) to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted (e.g. decremented) to indicate that the hazard related to the primary instruction has been resolved. When a secondary instruction is output by the decoder for execution, the secondary instruction is stalled in a queue associated with the appropriate instruction pipeline if at least one counter associated with the primary instructions on which it depends indicates that there is a hazard related to the primary instruction.
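
    As a rough illustration of the counter mechanism described in this abstract, the C++ sketch below models a pool of hazard counters. It is an assumed model only; the names (HazardCounters, Instruction, kNumCounters) and the pool size are invented for illustration and are not taken from the patent.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <vector>

constexpr std::size_t kNumCounters = 8;   // assumed size of the hazard counter pool

struct Instruction {
    bool is_primary = false;              // identified as a primary instruction at compile time
    int  counter_id = -1;                 // counter linked to this hazard (-1: none)
    std::vector<int> wait_counters;       // counters a secondary instruction waits on
};

class HazardCounters {
public:
    // Decoder outputs a primary instruction: mark its hazard as outstanding.
    void on_primary_issued(const Instruction& i) {
        if (i.is_primary && i.counter_id >= 0) ++counters_[i.counter_id];
    }
    // A pipeline resolves the primary instruction: mark the hazard as resolved.
    void on_primary_resolved(const Instruction& i) {
        if (i.is_primary && i.counter_id >= 0) --counters_[i.counter_id];
    }
    // A secondary instruction stalls in its pipeline queue while any counter
    // it is linked to still indicates an unresolved hazard.
    bool must_stall(const Instruction& i) const {
        for (int c : i.wait_counters)
            if (counters_[static_cast<std::size_t>(c)].load() != 0) return true;
        return false;
    }
private:
    std::array<std::atomic<int>, kNumCounters> counters_{};
};
```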

    SCHEDULING TASKS
    32. Invention Application (Under Examination, Published)

    Publication No.: US20180365058A1

    Publication Date: 2018-12-20

    Application No.: US16011241

    Filing Date: 2018-06-18

    IPC Classification: G06F9/48 G06F9/30 G06F7/575

    Abstract: A method of activating scheduling instructions within a parallel processing unit is described. The method includes checking if an ALU targeted by a decoded instruction is full by checking a value of an ALU work fullness counter stored in the instruction controller and associated with the targeted ALU. If the targeted ALU is not full, the decoded instruction is sent to the targeted ALU for execution and the ALU work fullness counter associated with the targeted ALU is updated. If, however, the targeted ALU is full, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state. When an ALU changes from being full to not being full, the scheduler is triggered to re-activate an oldest scheduled task waiting for the ALU by removing the oldest scheduled task from the non-active state.
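
    The work-fullness check described above can be pictured with a short sketch. The following C++ is a hedged model under assumed names (AluState, issue_or_deactivate) and an assumed capacity threshold; it is not the patented implementation.

```cpp
#include <cstdint>
#include <deque>

struct Task { int id; bool active = true; };

struct AluState {
    std::uint32_t fullness = 0;            // work-fullness counter kept per ALU
    std::uint32_t capacity = 16;           // assumed amount of work the ALU can accept
    std::deque<Task*> waiting;             // de-activated tasks waiting for this ALU
    bool full() const { return fullness >= capacity; }
};

// Instruction controller: called for a decoded instruction in `task` that targets `alu`.
void issue_or_deactivate(AluState& alu, Task& task) {
    if (!alu.full()) {
        ++alu.fullness;                    // send to the ALU and update the counter
        // ... forward the decoded instruction to the ALU here ...
    } else {
        task.active = false;               // scheduler moves the task to a non-active state
        alu.waiting.push_back(&task);
    }
}

// Called when the ALU goes from full to not full: re-activate the oldest
// scheduled task waiting for this ALU.
void on_alu_not_full(AluState& alu) {
    if (!alu.waiting.empty()) {
        alu.waiting.front()->active = true;
        alu.waiting.pop_front();
    }
}
```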

    SCHEDULING TASKS
    33. Invention Application (Under Examination, Published)

    Publication No.: US20180365057A1

    Publication Date: 2018-12-20

    Application No.: US16011093

    Filing Date: 2018-06-18

    Abstract: A method of scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state, and checking, by an instruction controller, if an ALU targeted by the decoded instruction is a primary instruction pipeline. If the targeted ALU is a primary instruction pipeline, a list associated with the primary instruction pipeline is checked to determine whether the scheduled task is already included in the list. If the scheduled task is already included in the list, the decoded instruction is sent to the primary instruction pipeline.
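
    As a minimal sketch of the list check described above, the C++ below models the primary pipeline's list as a set of task IDs. The container, the names, and the omission of handling for tasks not yet in the list are all assumptions for illustration.

```cpp
#include <unordered_set>

// Per primary-pipeline record of scheduled tasks already included in its list.
struct PrimaryPipeline {
    std::unordered_set<int> task_list;     // IDs of tasks already in the list
};

// Instruction controller check: the decoded instruction from `task_id` is sent
// to the primary pipeline only if the scheduled task is already in the list.
bool may_send_to_primary(const PrimaryPipeline& p, int task_id) {
    return p.task_list.count(task_id) != 0;
}
```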

    SCHEDULING TASKS
    34. Invention Application (Under Examination, Published)

    Publication No.: US20180365009A1

    Publication Date: 2018-12-20

    Application No.: US16010935

    Filing Date: 2018-06-18

    IPC Classification: G06F9/30 G06F7/575

    Abstract: A method of activating scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state and checking, by an instruction controller, if a swap flag is set in the decoded instruction. If the swap flag in the decoded instruction is set, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state.
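
    The swap-flag behavior can be summarized in a few lines of C++. This is a minimal sketch under assumed names (DecodedInstruction, ScheduledTask, TaskState); the real scheduler interface is not specified by the abstract.

```cpp
enum class TaskState { Active, NonActive };

struct DecodedInstruction { bool swap_flag = false; };
struct ScheduledTask { TaskState state = TaskState::Active; };

// Instruction controller: if the swap flag is set in the decoded instruction,
// trigger the scheduler to de-activate the scheduled task.
void check_swap_flag(const DecodedInstruction& inst, ScheduledTask& task) {
    if (inst.swap_flag)
        task.state = TaskState::NonActive;   // active -> non-active
}
```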

    Queues for inter-pipeline data hazard avoidance

    Publication No.: US11698790B2

    Publication Date: 2023-07-11

    Application No.: US17523633

    Filing Date: 2021-11-10

    IPC Classification: G06F9/38 G06F9/30

    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards identified at compile time. For each identified inter-pipeline data hazard, the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. When a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted to indicate that the hazard related to the primary instruction has been resolved. When a secondary instruction is output by the decoder for execution, the secondary instruction is stalled in a queue associated with the appropriate instruction pipeline if at least one counter associated with the primary instructions on which it depends indicates that there is a hazard related to the primary instruction.

    Scheduling tasks using swap flags
    38. Granted Patent

    Publication No.: US11531545B2

    Publication Date: 2022-12-20

    Application No.: US17108389

    Filing Date: 2020-12-01

    Abstract: A method of activating scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state and checking, by an instruction controller, if a swap flag is set in the decoded instruction. If the swap flag in the decoded instruction is set, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state.

    Synchronizing scheduling tasks with atomic ALU

    Publication No.: US11500677B2

    Publication Date: 2022-11-15

    Application No.: US17087837

    Filing Date: 2020-11-03

    Abstract: A method of synchronizing a group of scheduled tasks within a parallel processing unit into a known state is described. The method uses a synchronization instruction in a scheduled task which triggers, in response to decoding of the instruction, an instruction decoder to place the scheduled task into a non-active state and forward the decoded synchronization instruction to an atomic ALU for execution. When the atomic ALU executes the decoded synchronization instruction, the atomic ALU performs an operation and check on data assigned to the group ID of the scheduled task and, if the check is passed, all scheduled tasks having the particular group ID are removed from the non-active state.
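
    The group synchronization described above can be sketched as a per-group arrival counter maintained by the atomic ALU. The C++ below is an assumed illustration; the abstract does not specify the exact operation and check performed on the group data, so an increment-and-compare against an assumed group size is used here as a stand-in.

```cpp
#include <unordered_map>
#include <vector>

enum class TaskState { Active, NonActive };

struct ScheduledTask { int group_id; TaskState state = TaskState::Active; };

struct AtomicAlu {
    // Per-group count of tasks that have executed the synchronization instruction.
    std::unordered_map<int, int> arrived;

    // Executes a decoded synchronization instruction for `task`: performs an
    // operation (here, an increment) and a check (all group members arrived)
    // on the data assigned to the task's group ID.
    void execute_sync(ScheduledTask& task, int group_size,
                      std::vector<ScheduledTask>& all_tasks) {
        task.state = TaskState::NonActive;        // decoder has parked the task (modeled here)
        if (++arrived[task.group_id] == group_size) {
            // Check passed: remove every task with this group ID from the non-active state.
            for (auto& t : all_tasks)
                if (t.group_id == task.group_id) t.state = TaskState::Active;
            arrived[task.group_id] = 0;
        }
    }
};
```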

    SCHEDULING TASKS USING WORK FULLNESS COUNTER

    Publication No.: US20220075652A1

    Publication Date: 2022-03-10

    Application No.: US17529004

    Filing Date: 2021-11-17

    Abstract: A method of activating scheduling instructions within a parallel processing unit includes checking if an ALU targeted by a decoded instruction is full by checking a value of an ALU work fullness counter stored in the instruction controller and associated with the targeted ALU. If the targeted ALU is not full, the decoded instruction is sent to the targeted ALU for execution and the ALU work fullness counter associated with the targeted ALU is updated. If, however, the targeted ALU is full, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state. When an ALU changes from being full to not being full, the scheduler is triggered to re-activate an oldest scheduled task waiting for the ALU by removing the oldest scheduled task from the non-active state.