Abstract:
Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of workload instructions for a graphics workload from a host complex, determine a first subset of operations in the set of workload instructions that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of workload instructions that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, and assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed.
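To make the dispatch idea concrete, the following Python sketch partitions a workload into scalar-suited and vector-suited subsets and submits each to a separate execution path. The classification rule (data-parallel operations go to the vector complex) and every name below are illustrative assumptions, not the patented implementation.

    # Minimal sketch of splitting a workload between a scalar complex and a
    # vector complex. The "data_parallel" tag is a hypothetical heuristic.

    def partition_workload(instructions):
        scalar_ops, vector_ops = [], []
        for op in instructions:
            # Assumed rule: data-parallel operations suit the vector complex.
            (vector_ops if op.get("data_parallel") else scalar_ops).append(op)
        return scalar_ops, vector_ops

    def execute(ops, unit):
        # Stand-in for submitting operations to a processor complex.
        return ["%s:%s" % (unit, op["name"]) for op in ops]

    workload = [
        {"name": "setup_state", "data_parallel": False},
        {"name": "shade_pixels", "data_parallel": True},
        {"name": "update_counters", "data_parallel": False},
    ]
    scalar_ops, vector_ops = partition_workload(workload)
    outputs = execute(scalar_ops, "scalar") + execute(vector_ops, "vector")
    print(outputs)
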
Abstract:
An apparatus and method for performing BVH compression and decompression concurrently with stores and loads, respectively. For example, one embodiment comprises: bounding volume hierarchy (BVH) construction circuitry to build a BVH based on a set of input primitives, the BVH comprising a plurality of uncompressed coordinates; traversal/intersection circuitry to traverse one or more rays through the BVH and determine intersections with the set of input primitives using the uncompressed coordinates; store with compression circuitry to compress the BVH including the plurality of uncompressed coordinates to generate a compressed BVH with compressed coordinates and to store the compressed BVH to a memory subsystem; and load with decompression circuitry to decompress the BVH including the compressed coordinates to generate a decompressed BVH with the uncompressed coordinates and to load the decompressed BVH with uncompressed coordinates to a cache and/or a set of registers accessible by the traversal/intersection circuitry.
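The sketch below illustrates one way coordinates can be compressed on store and decompressed on load: quantizing each coordinate to a small fixed-point offset within a bounding box. The quantization scheme, bit width, and function names are assumptions chosen to show the store-with-compression / load-with-decompression pairing, not the scheme of the embodiment.

    # Assumed example: coordinates quantized against a parent bounding box on
    # store, and reconstructed on load before traversal/intersection uses them.

    def compress_coords(coords, box_min, box_max, bits=8):
        scale = (1 << bits) - 1
        out = []
        for c, lo, hi in zip(coords, box_min, box_max):
            q = round((c - lo) / (hi - lo) * scale) if hi > lo else 0
            out.append(q)                     # small integer written to memory
        return out

    def decompress_coords(qcoords, box_min, box_max, bits=8):
        scale = (1 << bits) - 1
        return [lo + q / scale * (hi - lo)
                for q, lo, hi in zip(qcoords, box_min, box_max)]

    box_min, box_max = (0.0, 0.0, 0.0), (10.0, 10.0, 10.0)
    uncompressed = (1.25, 7.5, 3.0)
    stored = compress_coords(uncompressed, box_min, box_max)    # store with compression
    loaded = decompress_coords(stored, box_min, box_max)        # load with decompression
    print(stored, loaded)
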
Abstract:
In an embodiment, the present invention is directed to a processor including a decode logic to receive a multi-dimensional loop counter update instruction and to decode the multi-dimensional loop counter update instruction into at least one decoded instruction, and an execution logic to execute the at least one decoded instruction to update at least one loop counter value of a first operand associated with the multi-dimensional loop counter update instruction by a first amount. Methods to collapse loops using such instructions are also disclosed. Other embodiments are described and claimed.
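The Python sketch below mimics the described counter update in software: a single collapsed loop keeps a multi-dimensional counter, bumps the innermost counter by a given amount, and propagates carries using per-dimension trip counts. The carry order and all names here are assumptions for illustration only.

    # Collapsing "for i in range(2): for j in range(3): body(i, j)" into one
    # loop driven by a multi-dimensional counter update.

    def update_counters(counters, trip_counts, amount=1):
        counters = list(counters)
        counters[-1] += amount                      # update innermost counter by "amount"
        for d in range(len(counters) - 1, 0, -1):   # propagate carries outward
            while counters[d] >= trip_counts[d]:
                counters[d] -= trip_counts[d]
                counters[d - 1] += 1
        return counters

    counters, trips = [0, 0], [2, 3]
    for _ in range(trips[0] * trips[1]):
        print(tuple(counters))                      # stands in for body(i, j)
        counters = update_counters(counters, trips)
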
Abstract:
An apparatus is described having instruction execution logic circuitry to execute first, second, third and fourth instructions. Both the first instruction and the second instruction insert a first group of input vector elements into one of multiple first non-overlapping sections of respective first and second resultant vectors. The first group has a first bit width. Each of the multiple first non-overlapping sections has the same bit width as the first group. Both the third instruction and the fourth instruction insert a second group of input vector elements into one of multiple second non-overlapping sections of respective third and fourth resultant vectors. The second group has a second bit width that is larger than the first bit width. Each of the multiple second non-overlapping sections has the same bit width as the second group. The apparatus also includes masking layer circuitry to mask the first and third instructions at a first resultant vector granularity and to mask the second and fourth instructions at a second resultant vector granularity.
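The following Python sketch shows the insert-plus-mask behavior for one group width: a fixed-width group of input elements is written into one non-overlapping section of a destination vector, and a mask controls which result elements are actually updated. Section width, vector length, and the per-element masking granularity are illustrative assumptions.

    # Insert a group into section "section_index" of the result; the mask
    # decides, element by element, whether the insertion takes effect.

    def insert_group(dest, group, section_index, mask=None):
        width = len(group)                    # every section has the group's width
        result = list(dest)
        start = section_index * width
        for i, value in enumerate(group):
            lane = start + i
            if mask is None or mask[lane]:    # assumed resultant-vector masking
                result[lane] = value
        return result

    dest  = [0] * 8
    group = [10, 11, 12, 13]                  # "first group" of input vector elements
    mask  = [1, 1, 0, 1, 1, 1, 1, 1]
    print(insert_group(dest, group, 0, mask)) # -> [10, 11, 0, 13, 0, 0, 0, 0]
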
Abstract:
A processor core including a hardware decode unit to decode vector instructions for decompressing a run-length encoded (RLE) set of source data elements and an execution unit to execute the decoded instructions. The execution unit generates a first mask by comparing the set of source data elements with a set of zeros and then counts the trailing zeros in the mask. A second mask is made based on the count of trailing zeros. The execution unit then copies the set of source data elements to a buffer using the second mask and then reads the number of RLE zeros from the set of source data elements. The buffer is shifted and copied to a result, and the set of source data elements is shifted to the right. If more valid data elements remain in the set of source data elements, this is repeated until all valid data is processed.
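As a scalar reference for the mask-based vector flow above, the Python sketch below decodes one assumed zero-RLE format: nonzero elements are literals, and a zero element is followed by a count of zeros to emit. The encoding and names are assumptions used only to show what the decompression produces.

    # Scalar equivalent of the described RLE decompression (assumed format).

    def rle_decode(src):
        result, i = [], 0
        while i < len(src):
            if src[i] != 0:
                result.append(src[i])        # literal nonzero element
                i += 1
            else:
                zero_count = src[i + 1]      # "number of RLE zeros" read from the source
                result.extend([0] * zero_count)
                i += 2                       # skip the zero marker and its count
        return result

    encoded = [7, 3, 0, 4, 9, 0, 2, 5]
    print(rle_decode(encoded))               # -> [7, 3, 0, 0, 0, 0, 9, 0, 0, 5]
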
Abstract:
A processor comprises a plurality of vector registers, and an execution unit, operatively coupled to the plurality of vector registers, the execution unit comprising a logic circuit implementing a load instruction for loading, into two or more vector registers, two or more data items associated with a data structure stored in a memory, wherein each one of the two or more vector registers is to store a data item associated with a certain position number within the data structure.
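The Python sketch below models the described load in software: structures laid out contiguously in memory are loaded so that each "vector register" collects the data item at one position number across all structures, an AoS-to-SoA style deinterleave. The register width, field count, and names are illustrative assumptions.

    # Each output list stands in for one vector register holding the field at
    # a single position number within every structure.

    def structured_load(memory, fields_per_struct, num_structs):
        registers = [[] for _ in range(fields_per_struct)]
        for s in range(num_structs):
            base = s * fields_per_struct
            for pos in range(fields_per_struct):
                registers[pos].append(memory[base + pos])
        return registers

    # Memory holding {x, y, z} structures laid out contiguously.
    memory = [1, 2, 3,   4, 5, 6,   7, 8, 9]
    xs, ys, zs = structured_load(memory, fields_per_struct=3, num_structs=3)
    print(xs, ys, zs)                        # -> [1, 4, 7] [2, 5, 8] [3, 6, 9]
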
Abstract:
In one embodiment, a processor includes: a fetch logic to fetch instructions, the instructions including a partial reduction instruction; a decode logic to decode the partial reduction instruction and provide the decoded partial reduction instruction to one or more execution units; and the one or more execution units to, responsive to the decoded partial reduction instruction, perform a plurality of N partial reduction operations to generate a result array including N output data elements, where an input array comprises N lanes, and where each of the N partial reduction operations is to reduce a set of input data elements included in a corresponding lane of the N lanes. Other embodiments are described and claimed.
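The Python sketch below shows the lane-wise semantics: the input array is viewed as N lanes and each lane is reduced to one element of the N-element result. Addition as the reduction operation and the lane width are assumptions for illustration.

    # One reduction per lane; the result has exactly N output data elements.

    def partial_reduce(input_array, num_lanes):
        lane_width = len(input_array) // num_lanes
        result = []
        for lane in range(num_lanes):
            elements = input_array[lane * lane_width:(lane + 1) * lane_width]
            result.append(sum(elements))     # assumed reduction: addition
        return result

    data = [1, 2, 3, 4, 5, 6, 7, 8]
    print(partial_reduce(data, num_lanes=4)) # -> [3, 7, 11, 15]
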
Abstract:
A method is described that includes reading a first read mask from a first register. The method also includes reading a first vector operand from a second register or memory location. The method also includes applying the read mask against the first vector operand to produce a set of elements for operation. The method also includes performing an operation on the set of elements. The method also includes creating an output vector by producing multiple instances of the operation's result. The method also includes reading a first write mask from a third register, the first write mask being different than the first read mask. The method also includes applying the write mask against the output vector to create a resultant vector. The method also includes writing the resultant vector to a destination register.
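The Python sketch below walks through that read-mask / operate / broadcast / write-mask sequence end to end. The operation (a sum), the vector width, and the merge-with-destination behavior under the write mask are illustrative assumptions.

    # Read mask selects elements, the operation runs once, the result is
    # broadcast, and a distinct write mask gates the final vector.

    def masked_reduce_broadcast(src, read_mask, write_mask, dest):
        selected = [v for v, m in zip(src, read_mask) if m]    # apply read mask
        value = sum(selected)                                  # operation on selected elements
        output = [value] * len(src)                            # broadcast the result
        return [o if w else d                                  # apply the different write mask
                for o, w, d in zip(output, write_mask, dest)]

    src        = [1, 2, 3, 4]
    read_mask  = [1, 0, 1, 0]
    write_mask = [0, 1, 1, 1]
    dest       = [9, 9, 9, 9]
    print(masked_reduce_broadcast(src, read_mask, write_mask, dest))  # -> [9, 4, 4, 4]
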
Abstract:
Receiving an instruction indicating a source operand and a destination operand. Storing a result in the destination operand in response to the instruction. The result may have: (1) a first range of bits having a first end explicitly specified by the instruction, in which each bit is identical in value to a bit of the source operand in a corresponding position; and (2) a second range of bits that all have a same value regardless of the values of bits of the source operand in corresponding positions. Execution of the instruction may complete without moving the first range of bits of the result relative to the bits of identical value in the corresponding positions of the source operand, regardless of the location of the first range of bits in the result. Execution units to execute such instructions, computer systems having processors to execute such instructions, and machine-readable media storing such an instruction are also disclosed.
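The Python sketch below shows one common instance of this behavior: bits below an explicitly specified end position are copied from the source in place, and all remaining bits are forced to a single value. The end position, operand width, and fill value are assumptions, and the function is hypothetical.

    # Keep the low bit range unchanged in place; force the rest to one value.

    def copy_low_bits(source, end_bit, width=32, fill=0):
        keep_mask = (1 << end_bit) - 1               # first range: bits [0, end_bit)
        upper = ((1 << width) - 1) ^ keep_mask       # second range: all higher bits
        result = source & keep_mask                  # identical bits, not moved
        if fill:
            result |= upper                          # alternatively force the range to ones
        return result

    print(hex(copy_low_bits(0xDEADBEEF, end_bit=12)))  # -> 0xeef
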
Abstract:
Systems, methods, and apparatuses relating to one or more instructions for element aligning of a tile of a matrix operations accelerator are described. In one embodiment, a system includes a matrix operations accelerator circuit comprising a two-dimensional grid of processing elements, a first plurality of registers that represents a first two-dimensional matrix coupled to the two-dimensional grid of processing elements, and a second plurality of registers that represents a second two-dimensional matrix coupled to the two-dimensional grid of processing elements; and a hardware processor core coupled to the matrix operations accelerator circuit and comprising a decoder circuit to decode a single instruction into a decoded instruction, the single instruction including a first field that identifies the first two-dimensional matrix, a second field that identifies the second two-dimensional matrix, and an opcode that indicates an execution circuit of the hardware processor core is to cause the matrix operations accelerator circuit to generate a third two-dimensional matrix from a proper subset of elements of a row or a column of the first two-dimensional matrix and a proper subset of elements of a row or a column of the second two-dimensional matrix and store the third two-dimensional matrix at a destination in the matrix operations accelerator circuit, and the execution circuit of the hardware processor core to execute the decoded instruction according to the opcode.
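The Python sketch below shows one assumed interpretation of the element-aligning step for a single result row: a proper subset of one source row is joined with a proper subset of another at an alignment offset. The offset-based concatenation, row width, and names are assumptions for illustration only.

    # Form a result row from the tail of row_a and the head of row_b.

    def align_rows(row_a, row_b, offset):
        return row_a[offset:] + row_b[:offset]

    row_a = [1, 2, 3, 4]
    row_b = [5, 6, 7, 8]
    print(align_rows(row_a, row_b, offset=2))   # -> [3, 4, 5, 6]
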