-
Publication Number: US20220391264A1
Publication Date: 2022-12-08
Application Number: US17338377
Filing Date: 2021-06-03
Applicant: NVIDIA CORPORATION
Inventor: Ajay Sudarshan TIRUMALA, Olivier GIROUX, Peter NELSON, Gary M. TAROLLI, Ankita UPRETI
Abstract: Various embodiments include a parallel processing computer system that enables parallel instances of a program to synchronize at disparate addresses in memory. When the parallel program instances need to exchange data, the program instances synchronize based on a mask that identifies the program instances that are synchronizing. As each program instance reaches the point of synchronization, the program instance blocks and waits for all other program instances to reach the point of synchronization. When all program instances have reached the point of synchronization, at least one program instance executes a synchronous operation to exchange data. The program instances then continue execution at respective and disparate return addresses.
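A minimal CUDA sketch of the mechanism this abstract describes, using only the public `__syncwarp()` primitive rather than anything from the patent itself: lanes of one warp diverge into different code paths, yet every lane named in the mask blocks before any lane reads data another lane published, then each continues at its own return point.

```cuda
#include <cuda_runtime.h>

// Hedged sketch: lanes of a single warp diverge into two branches, but every
// lane named in 'mask' synchronizes before any lane reads its partner's data.
__global__ void maskedExchange(int *out)
{
    __shared__ int buf[32];
    unsigned lane = threadIdx.x & 31u;
    unsigned mask = 0xFFFFFFFFu;          // all 32 lanes participate

    buf[lane] = (int)(lane * 10);         // the value this lane wants to share

    if (lane & 1u) {
        __syncwarp(mask);                 // odd lanes block at this point...
        out[lane] = buf[lane ^ 1u];       // ...then read their even partner's value
    } else {
        __syncwarp(mask);                 // ...even lanes block at a different point
        out[lane] = buf[lane ^ 1u];
    }
    // Both branches resume independently once every lane in 'mask' has arrived.
}

int main()
{
    int *d_out = nullptr;
    cudaMalloc(&d_out, 32 * sizeof(int));
    maskedExchange<<<1, 32>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```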
-
Publication Number: US20250068421A1
Publication Date: 2025-02-27
Application Number: US18908678
Filing Date: 2024-10-07
Applicant: NVIDIA Corporation
Inventor: Maciej Piotr TYRLIK, Ajay Sudarshan TIRUMALA, Shirish GADRE, Frank Joseph EATON, Daniel Alan STIFFLER
Abstract: Various techniques for accelerating dynamic programming algorithms are provided. For example, a fused addition and comparison instruction, a three-operand comparison instruction, and a two-operand comparison instruction are used to accelerate a Needleman-Wunsch algorithm that determines an optimized global alignment of subsequences over two entire sequences. In another example, the fused addition and comparison instruction is used in an innermost loop of a Floyd-Warshall algorithm to reduce the number of instructions required to determine shortest paths between pairs of vertices in a graph. In another example, a two-way single instruction multiple data (SIMD) floating point variant of the three-operand comparison instruction is used to reduce the number of instructions required to determine the median of an array of floating point values.
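Purely as a hedged illustration of the Needleman-Wunsch case: the cell recurrence maps onto one fused add-and-compare plus one more comparison. The sketch below assumes the CUDA 12 DPX intrinsics `__viaddmax_s32` (computing max(a+b, c)) and `__vimax3_s32` (maximum of three operands); the unfused equivalents appear in comments, and none of this is taken from the application itself.

```cuda
// Hedged sketch of one Needleman-Wunsch cell update written with fused
// add/compare intrinsics (assumed CUDA 12 DPX: __viaddmax_s32, __vimax3_s32).
__device__ int nwCell(int diag, int up, int left, int substScore, int gapPenalty)
{
    int vertical   = up   + gapPenalty;   // gap candidate from above
    int horizontal = left + gapPenalty;   // gap candidate from the left

    // Fused addition and comparison: max(diag + substScore, vertical).
    int best = __viaddmax_s32(diag, substScore, vertical);
    // Unfused equivalent: int best = max(diag + substScore, vertical);

    // Two-operand comparison folds in the remaining candidate.
    return max(best, horizontal);
    // Three-operand alternative: __vimax3_s32(diag + substScore, vertical, horizontal);
}
```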
-
Publication Number: US20230305844A1
Publication Date: 2023-09-28
Application Number: US17936172
Filing Date: 2022-09-28
Applicant: NVIDIA CORPORATION
Inventor: Maciej Piotr TYRLIK, Ajay Sudarshan TIRUMALA, Shirish GADRE, Frank Joseph EATON, Daniel Alan STIFFLER
CPC classification number: G06F9/30065, G06F9/3887
Abstract: Various techniques for accelerating dynamic programming algorithms are provided. For example, a fused addition and comparison instruction, a three-operand comparison instruction, and a two-operand comparison instruction are used to accelerate a Needleman-Wunsch algorithm that determines an optimized global alignment of subsequences over two entire sequences. In another example, the fused addition and comparison instruction is used in an innermost loop of a Floyd-Warshall algorithm to reduce the number of instructions required to determine shortest paths between pairs of vertices in a graph. In another example, a two-way single instruction multiple data (SIMD) floating point variant of the three-operand comparison instruction is used to reduce the number of instructions required to determine the median of an array of floating point values.
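As a hedged illustration of the Floyd-Warshall case: the innermost update dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]) is exactly one fused addition and minimum. The kernel below assumes the CUDA 12 DPX intrinsic `__viaddmin_s32` (computing min(a+b, c)); the grid layout and host loop are illustrative, not from the application.

```cuda
// Hedged sketch: one k-step of Floyd-Warshall with the inner update expressed
// as a single fused add-and-compare (assumed CUDA 12 DPX: __viaddmin_s32).
__global__ void floydWarshallStep(int *dist, int n, int k)
{
    int i = blockIdx.y;                               // one row per block row
    int j = blockIdx.x * blockDim.x + threadIdx.x;    // one column per thread
    if (i >= n || j >= n) return;

    // min(dist[i][k] + dist[k][j], dist[i][j]) in one instruction.
    dist[i * n + j] = __viaddmin_s32(dist[i * n + k],
                                     dist[k * n + j],
                                     dist[i * n + j]);
    // Unfused equivalent:
    //   dist[i * n + j] = min(dist[i * n + k] + dist[k * n + j], dist[i * n + j]);
}

// Host side, one launch per intermediate vertex k:
//   for (int k = 0; k < n; ++k)
//       floydWarshallStep<<<dim3((n + 255) / 256, n), 256>>>(d_dist, n, k);
```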
-
Publication Number: US20180314522A1
Publication Date: 2018-11-01
Application Number: US15582549
Filing Date: 2017-04-28
Applicant: NVIDIA Corporation
Inventor: Olivier GIROUX, Peter NELSON, Jack CHOQUETTE, Ajay Sudarshan TIRUMALA
Abstract: A streaming multiprocessor (SM) includes a nanosleep (NS) unit configured to cause individual threads executing on the SM to sleep for a programmer-specified interval of time. For a given thread, the NS unit parses a NANOSLEEP instruction and extracts a sleep time. The NS unit then maps the sleep time to a single bit of a timer and causes the thread to sleep. When the timer bit changes, the sleep time expires, and the NS unit awakens the thread. The thread may then continue executing. The SM also includes a nanotrap (NT) unit configured to issue traps using a similar timing mechanism to that described above. For a given thread, the NT unit parses a NANOTRAP instruction and extracts a trap time. The NT unit then maps the trap time to a single bit of a timer. When the timer bit changes, the NT unit issues a trap.
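A hedged sketch of how this per-thread sleep facility surfaces in CUDA code: `__nanosleep()` (available on compute capability 7.0 and later) suspends the calling thread for approximately the requested number of nanoseconds, which makes it a natural back-off primitive in spin loops. The lock encoding and back-off constants below are illustrative assumptions, not from the patent.

```cuda
// Hedged sketch: an exponential back-off spin lock that uses __nanosleep()
// (SM 7.0+) so waiting threads sleep instead of busy-spinning at full rate.
// Lock encoding (0 = free, 1 = held) and back-off bounds are illustrative.
__device__ void acquireLock(int *lock)
{
    unsigned ns = 8;                          // initial sleep of roughly 8 ns
    while (atomicCAS(lock, 0, 1) != 0) {      // lock is currently held
        __nanosleep(ns);                      // sleep instead of spinning
        if (ns < 256) ns *= 2;                // back off, capped at ~256 ns
    }
}

__device__ void releaseLock(int *lock)
{
    __threadfence();                          // make protected writes visible first
    atomicExch(lock, 0);                      // release the lock
}

__global__ void incrementCounter(int *lock, int *counter)
{
    acquireLock(lock);
    *counter += 1;                            // critical section
    releaseLock(lock);
}
```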
-
Publication Number: US20230101085A1
Publication Date: 2023-03-30
Application Number: US17491276
Filing Date: 2021-09-30
Applicant: NVIDIA CORPORATION
Inventor: Maciej Piotr TYRLIK, Ajay Sudarshan TIRUMALA, Shirish GADRE
IPC: G06F9/38
Abstract: Various techniques for accelerating Smith-Waterman sequence alignments are provided. For example, threads in a group of threads are employed to use an interleaved cell layout to store relevant data in registers while computing sub-alignment data for one or more local alignment problems. In another example, specialized instructions that reduce the number of cycles required to compute each sub-alignment score are utilized. In another example, threads are employed to compute sub-alignment data for a subset of columns of one or more local alignment problems while other threads begin computing sub-alignment data based on partial result data received from the preceding threads. After computing a maximum sub-alignment score, a thread stores the maximum sub-alignment score and the corresponding position in global memory.
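A hedged sketch of the Smith-Waterman cell update such specialized instructions accelerate, again assuming the CUDA 12 DPX intrinsics `__viaddmax_s32` and `__vimax3_s32`; the zero clamp that distinguishes local from global alignment is written out explicitly, and the running-maximum bookkeeping is an illustrative simplification.

```cuda
// Hedged sketch of one Smith-Waterman cell (assumed CUDA 12 DPX intrinsics).
// The score is clamped at zero so a local alignment may restart anywhere.
__device__ int swCell(int diag, int up, int left,
                      int substScore, int gapPenalty,
                      int *laneMax)                        // per-thread running maximum
{
    // Fused add-and-compare: max(diag + substScore, up + gapPenalty).
    int best = __viaddmax_s32(diag, substScore, up + gapPenalty);
    // Three-operand comparison folds in the horizontal candidate and the zero floor.
    best = __vimax3_s32(best, left + gapPenalty, 0);

    *laneMax = max(*laneMax, best);                        // track the best score seen
    return best;
}
```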
-
Publication Number: US20230095916A1
Publication Date: 2023-03-30
Application Number: US17491266
Filing Date: 2021-09-30
Applicant: NVIDIA CORPORATION
Inventor: Maciej Piotr TYRLIK, Ajay Sudarshan TIRUMALA, Shirish GADRE
IPC: G06F16/23, G06F16/242, G16B50/30, G16B30/10
Abstract: Various techniques for accelerating Smith-Waterman sequence alignments are provided. For example, threads in a group of threads are employed to use an interleaved cell layout to store relevant data in registers while computing sub-alignment data for one or more local alignment problems. In another example, specialized instructions that reduce the number of cycles required to compute each sub-alignment score are utilized. In another example, threads are employed to compute sub-alignment data for a subset of columns of one or more local alignment problems while other threads begin computing sub-alignment data based on partial result data received from the preceding threads. After computing a maximum sub-alignment score, a thread stores the maximum sub-alignment score and the corresponding position in global memory.
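As a hedged sketch of the final step the abstract mentions, storing the maximum sub-alignment score and its position in global memory: one illustrative option (not taken from the application) is to pack the non-negative score into the high 32 bits and a linearized cell position into the low 32 bits, so a single 64-bit atomicMax keeps the global best score and its location consistent.

```cuda
#include <cuda_runtime.h>

// Hedged sketch: a thread publishes its best local-alignment score together
// with the cell position where it occurred. Packing score (high bits) and
// position (low bits) lets one 64-bit atomicMax update both consistently.
// Smith-Waterman scores are clamped at zero, so the unsigned packing is safe.
__device__ void publishBest(unsigned long long *globalBest,
                            int bestScore, unsigned bestPos)
{
    unsigned long long packed =
        ((unsigned long long)(unsigned)bestScore << 32) | bestPos;
    atomicMax(globalBest, packed);        // score dominates; position breaks ties
}
```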
-
Publication Number: US20210019198A1
Publication Date: 2021-01-21
Application Number: US16513393
Filing Date: 2019-07-16
Applicant: NVIDIA CORPORATION
Inventor: Peter NELSON, Olivier GIROUX, Ajay Sudarshan TIRUMALA
IPC: G06F9/52
Abstract: Techniques are disclosed for reducing the latency associated with performing data reductions in a multithreaded processor. In response to a single instruction associated with a set of threads executing in the multithreaded processor, a warp reduction unit acquires register values stored in source registers, where each register value is associated with a different thread included in the set of threads. The warp reduction unit performs operation(s) on the register values to compute an aggregate value. The warp reduction unit stores the aggregate value in a destination register that is accessible to at least one of the threads in the set of threads. Because the data reduction is performed via a single instruction using hardware specialized for data reductions, the number of cycles required to perform the data reduction is decreased relative to prior-art techniques that are performed via multiple instructions using hardware that is not specialized for data reductions.
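A hedged sketch of how a single-instruction warp reduction appears in CUDA code today: `__reduce_add_sync()` (compute capability 8.0 and later) sums one register value from each lane named in the mask and returns the aggregate to every participating lane. The kernel and test data below are illustrative only.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hedged sketch: each lane contributes one register value; a single warp
// reduction intrinsic (SM 8.0+) produces the aggregate, replacing the usual
// multi-step shuffle loop. Kernel shape and data are illustrative.
__global__ void warpSum(const int *in, int *out)
{
    unsigned mask = 0xFFFFFFFFu;                  // all 32 lanes participate
    int value = in[threadIdx.x];                  // this lane's contribution

    int total = __reduce_add_sync(mask, value);   // aggregate across the warp

    if ((threadIdx.x & 31u) == 0u)                // one lane stores the result
        out[0] = total;
}

int main()
{
    const int n = 32;
    int h_in[n], h_out = 0;
    for (int i = 0; i < n; ++i) h_in[i] = i + 1;  // 1 + 2 + ... + 32 = 528

    int *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_out, sizeof(int));
    cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);

    warpSum<<<1, n>>>(d_in, d_out);
    cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("warp sum = %d\n", h_out);             // expected: 528

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```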
-