Abstract:
One embodiment provides for a compute apparatus to perform machine learning operations, the apparatus comprising a decode unit to decode a single instruction into a decoded instruction, the decoded instruction to perform one or more machine learning operations, wherein the decode unit, based on parameters of the one or more machine learning operations, is to request a scheduler to schedule the one or more machine learning operations to one of an array of programmable compute units or a fixed-function compute unit.
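As a rough illustration of the dispatch decision described above, the following C++ sketch routes a decoded operation to either a programmable compute array or a fixed-function unit based on its parameters; the operation types, the size threshold, and the schedule() interface are hypothetical stand-ins, not the claimed hardware.

    #include <cstddef>
    #include <iostream>

    // Hypothetical descriptor for a decoded machine learning instruction.
    enum class MLOp { Convolution, MatrixMultiply, Activation };

    struct DecodedInstruction {
        MLOp   op;
        size_t elements;   // problem size: one of the "parameters" the decode unit inspects
    };

    enum class Target { ProgrammableArray, FixedFunctionUnit };

    // Stand-in for the requested scheduler: pick a target from the operation
    // parameters (here, a simple made-up size heuristic).
    Target schedule(const DecodedInstruction& inst) {
        // Small or irregular work goes to the programmable compute units;
        // large, regular convolutions/GEMMs go to the fixed-function unit.
        if ((inst.op == MLOp::Convolution || inst.op == MLOp::MatrixMultiply) &&
            inst.elements >= (1u << 20)) {
            return Target::FixedFunctionUnit;
        }
        return Target::ProgrammableArray;
    }

    int main() {
        DecodedInstruction conv{MLOp::Convolution, 1u << 22};
        DecodedInstruction act{MLOp::Activation, 4096};
        std::cout << (schedule(conv) == Target::FixedFunctionUnit) << '\n'; // 1
        std::cout << (schedule(act)  == Target::ProgrammableArray) << '\n'; // 1
        return 0;
    }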
Abstract:
One embodiment provides for a compute apparatus to perform machine learning operations, the compute apparatus comprising instruction decode logic to decode a single instruction including multiple operands into a single decoded instruction, the multiple operands having differing precisions, and a general-purpose graphics compute unit including a first logic unit and a second logic unit, the general-purpose graphics compute unit to execute the single decoded instruction, wherein to execute the single decoded instruction includes to perform a first instruction operation on a first set of operands of the multiple operands at a first precision and to simultaneously perform a second instruction operation on a second set of operands of the multiple operands at a second precision.
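A software-only C++ sketch of the mixed-precision idea above: one "instruction" produces an FP32 result and an INT8 dot-product result from two operand sets. In hardware the two operations would issue simultaneously on separate logic units; this emulation simply computes them back to back, and the chosen precisions and names are illustrative assumptions.

    #include <cstdint>
    #include <iostream>

    // Results of one emulated "dual-precision" instruction.
    struct DualResult {
        float   fp32;     // first instruction operation, first precision (FP32)
        int32_t int8dot;  // second instruction operation, second precision (INT8 accumulate)
    };

    // Stand-in for the single decoded instruction: an FP32 multiply-add on one
    // operand set and an INT8 dot-product-accumulate on another.
    DualResult execute_mixed(float a, float b, float c,
                             int8_t x0, int8_t x1, int8_t y0, int8_t y1, int32_t acc) {
        DualResult r;
        r.fp32    = a * b + c;                                  // "first logic unit"
        r.int8dot = acc + int32_t(x0) * y0 + int32_t(x1) * y1;  // "second logic unit"
        return r;
    }

    int main() {
        DualResult r = execute_mixed(1.5f, 2.0f, 0.5f, 3, -4, 2, 5, 10);
        std::cout << r.fp32 << ' ' << r.int8dot << '\n'; // 3.5 and 10 + 6 - 20 = -4
        return 0;
    }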
Abstract:
Methods and apparatus relating to autonomous vehicle neural network optimization techniques are described. In an embodiment, a difference between a first training dataset to be used for a neural network and a second training dataset to be used for the neural network is detected. The second training dataset is authenticated in response to the detection of the difference. The neural network is used to assist in autonomous vehicle operation or driving. Other embodiments are also disclosed and claimed.
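A minimal C++ sketch of the detect-then-authenticate flow, assuming the datasets can be fingerprinted and that authentication is some signature-style check; fingerprint(), authenticate(), and accept_new_dataset() are hypothetical helpers, and a real system would use a cryptographic hash rather than std::hash.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    using Sample = std::string;   // hypothetical training-sample type

    // Cheap order-sensitive fingerprint of a dataset (illustrative only).
    size_t fingerprint(const std::vector<Sample>& dataset) {
        size_t h = 0;
        for (const Sample& s : dataset) {
            h ^= std::hash<Sample>{}(s) + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
        }
        return h;
    }

    // Stand-in for the authentication step (e.g. verifying a signature over the
    // new dataset); always "passes" in this sketch.
    bool authenticate(const std::vector<Sample>& dataset) {
        (void)dataset;
        return true;
    }

    // Detect a difference between the current and the incoming training dataset
    // and authenticate the incoming one only when a difference is found.
    bool accept_new_dataset(const std::vector<Sample>& current,
                            const std::vector<Sample>& incoming) {
        if (fingerprint(current) == fingerprint(incoming)) {
            return true;               // no difference detected: nothing to re-authenticate
        }
        return authenticate(incoming); // difference detected: authenticate before use
    }

    int main() {
        std::vector<Sample> v1{"img_001", "img_002"};
        std::vector<Sample> v2{"img_001", "img_003"};
        std::cout << accept_new_dataset(v1, v2) << '\n';
        return 0;
    }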
Abstract:
Technologies for generating composable library functions include a first computing device that includes a library compiler configured to compile a composable library and a second computing device that includes an application compiler configured to compose library functions of the composable library based on a plurality of abstractions written at different levels of abstraction. For example, the abstractions may include an algorithm abstraction at a high level, a blocked-algorithm abstraction at a medium level, and a region-based code abstraction at a low level. Other embodiments are described and claimed herein.
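The three abstraction levels can be pictured with a trivial C++ sketch in which a high-level algorithm function composes a blocked-algorithm function that in turn calls a low-level region kernel; the vector-scaling example and all function names are made up for illustration and are not the composable-library API.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Low level: "region-based code" abstraction, a kernel over one block of data.
    void region_scale(std::vector<double>& v, size_t begin, size_t end, double alpha) {
        for (size_t i = begin; i < end; ++i) v[i] *= alpha;
    }

    // Medium level: "blocked-algorithm" abstraction, tiles the range and composes
    // the low-level region kernel over each tile.
    void blocked_scale(std::vector<double>& v, double alpha, size_t block = 4) {
        for (size_t b = 0; b < v.size(); b += block) {
            region_scale(v, b, std::min(b + block, v.size()), alpha);
        }
    }

    // High level: "algorithm" abstraction, the interface an application compiler
    // would compose against; it knows nothing about blocking or regions.
    void scale(std::vector<double>& v, double alpha) { blocked_scale(v, alpha); }

    int main() {
        std::vector<double> v{1, 2, 3, 4, 5, 6};
        scale(v, 2.0);
        for (double x : v) std::cout << x << ' ';  // 2 4 6 8 10 12
        std::cout << '\n';
        return 0;
    }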
Abstract:
An apparatus and method for improving the efficiency with which speculative critical sections are executed within a transactional memory architecture. For example, a method in accordance with one embodiment comprises: waiting to execute a speculative critical section of program code until a lock is freed by a current transaction; responsively executing the speculative critical section to completion upon detecting that the lock has been freed, regardless of whether the lock is held by another transaction during the execution of the speculative critical section; once execution of the speculative critical section is complete, determining whether the lock is taken; and if the lock is not taken, then committing the speculative critical section and, if the lock is taken, then aborting the speculative critical section.
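A hedged C++ sketch of that sequence on top of Intel RTM intrinsics (compile with -mrtm and run on TSX-capable hardware); the lock representation and the fallback path are illustrative assumptions rather than the claimed implementation.

    #include <immintrin.h>   // RTM intrinsics (_xbegin/_xend/_xabort); requires -mrtm and TSX hardware
    #include <atomic>

    std::atomic<int> lock{0};   // illustrative lock: 0 = free, 1 = taken

    template <typename CriticalSection, typename Fallback>
    void speculative_execute(CriticalSection critical, Fallback fallback) {
        // Step 1: wait until the lock has been freed before starting the speculation.
        while (lock.load(std::memory_order_acquire) != 0) { /* spin */ }

        if (_xbegin() == _XBEGIN_STARTED) {
            // Step 2: run the critical section to completion without testing the
            // lock, even if another thread takes it in the meantime.
            critical();
            // Step 3: only now test the lock ("lazy" check at the end).
            if (lock.load(std::memory_order_relaxed) != 0) {
                _xabort(0xFF);   // lock taken at completion: abort the speculation
            }
            _xend();             // lock free: commit the speculative critical section
        } else {
            fallback();          // transaction aborted: e.g. retry or take the lock
        }
    }

    int main() {
        int shared = 0;
        speculative_execute([&] { ++shared; },
                            [&] { ++shared; /* illustrative non-speculative fallback */ });
        return shared == 1 ? 0 : 1;
    }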
Abstract:
Techniques for improved transactional memory management are described. In one embodiment, for example, an apparatus may comprise a processor element, an execution component for execution by the processor element to concurrently execute a software transaction and a hardware transaction according to a transactional memory process, a tracking component for execution by the processor element to activate a global lock to indicate that the software transaction is undergoing execution, and a finalization component for execution by the processor element to commit the software transaction and deactivate the global lock when execution of the software transaction completes, the finalization component to abort the hardware transaction if the global lock is active when execution of the hardware transaction completes. Other embodiments are described and claimed.
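The following single-threaded C++ sketch mimics the roles of the tracking and finalization components with plain atomics; the "hardware transaction" here is only emulated (speculation on private state plus a commit callback), so the names and structure are illustrative assumptions rather than the claimed components.

    #include <atomic>
    #include <functional>

    std::atomic<bool> global_lock{false};   // "tracking component": set while a software transaction runs

    // Software transaction: announce it via the global lock, run it, then commit
    // and deactivate the lock ("finalization component").
    void run_software_transaction(const std::function<void()>& body) {
        global_lock.store(true, std::memory_order_release);   // activate global lock
        body();                                                // software transactional execution
        global_lock.store(false, std::memory_order_release);  // commit, then deactivate
    }

    // Hardware transaction (emulated): run speculatively on private state and only
    // commit if no software transaction was active when it finished.
    bool run_hardware_transaction(const std::function<void()>& speculative_body,
                                  const std::function<void()>& commit) {
        speculative_body();
        if (global_lock.load(std::memory_order_acquire)) {
            return false;   // global lock active at completion: abort the hardware transaction
        }
        commit();
        return true;
    }

    int main() {
        int x = 0, staged = 0;
        run_software_transaction([&] { x += 1; });
        bool ok = run_hardware_transaction([&] { staged = x + 1; }, [&] { x = staged; });
        return (ok && x == 2) ? 0 : 1;
    }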
Abstract:
Systems and methods may provide for identifying an object in a managed runtime environment and determining an age of the object at a software level of the managed runtime environment. Additionally, the object may be selectively allocated in one of a dynamic random access memory (DRAM) or a non-volatile random access memory (NVRAM) based at least in part on the age of the object. In one example, the data type of the object is also determined, wherein the object is selectively allocated further based on the data type.
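A small C++ sketch of the selective-allocation policy, assuming hypothetical DRAM- and NVRAM-backed allocators (both faked with malloc here), a made-up age threshold, and a simple data-type classification.

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    // Illustrative stand-ins for the two memory pools; a real managed runtime would
    // call DRAM- and NVRAM-backed allocators rather than plain malloc.
    void* dram_alloc(std::size_t n)  { return std::malloc(n); }
    void* nvram_alloc(std::size_t n) { return std::malloc(n); }

    enum class DataKind { Mutable, MostlyRead };

    // Selective allocation: young or frequently mutated objects stay in DRAM;
    // old, mostly-read objects are placed in NVRAM (threshold is a made-up value).
    void* allocate_object(std::size_t size, unsigned age_in_gc_cycles, DataKind kind) {
        const unsigned kOldAge = 3;
        if (age_in_gc_cycles >= kOldAge && kind == DataKind::MostlyRead) {
            return nvram_alloc(size);
        }
        return dram_alloc(size);
    }

    int main() {
        void* young = allocate_object(64, 0, DataKind::Mutable);    // goes to "DRAM"
        void* old   = allocate_object(64, 5, DataKind::MostlyRead); // goes to "NVRAM"
        std::printf("%p %p\n", young, old);
        std::free(young);
        std::free(old);
        return 0;
    }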
Abstract:
A method and apparatus to facilitate shared pointers in a heterogeneous platform. In one embodiment of the invention, the heterogeneous or non-homogeneous platform includes, but is not limited to, a central processing core or unit (CPU), a graphics processing core or unit (GPU), a digital signal processor, an interface module, and any other form of processing core. The heterogeneous platform has logic to facilitate sharing of pointers to a location of a memory shared by the CPU and the GPU. By sharing pointers in the heterogeneous platform, data or information sharing between the different cores in the heterogeneous platform can be simplified.
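One way to picture pointer sharing across address spaces is the offset-based C++ emulation below: the "shared pointer" stored in shared memory is an offset from the shared region's base, which each side rebases onto its own mapping. The single-process emulation (where both bases happen to be identical) and all names are illustrative assumptions, not the platform's actual mechanism.

    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Emulated shared memory region with a "CPU-side" and a "GPU-side" mapping.
    struct SharedRegion {
        std::vector<std::uint8_t> storage;                    // stands in for the shared memory
        std::uint8_t* cpu_base() { return storage.data(); }   // CPU-side mapping
        std::uint8_t* gpu_base() { return storage.data(); }   // GPU-side mapping (same in this emulation)
    };

    using shared_ptr_t = std::size_t;   // the shared pointer: an offset into the region

    // Convert a CPU-side pointer into a shareable offset, and back on the GPU side.
    shared_ptr_t to_shared(SharedRegion& r, std::uint8_t* cpu_ptr) {
        return static_cast<shared_ptr_t>(cpu_ptr - r.cpu_base());
    }
    std::uint8_t* on_gpu(SharedRegion& r, shared_ptr_t p) { return r.gpu_base() + p; }

    int main() {
        SharedRegion region{std::vector<std::uint8_t>(256)};
        std::uint8_t* obj = region.cpu_base() + 64;          // "CPU" writes an object
        *obj = 42;
        shared_ptr_t handle = to_shared(region, obj);        // pointer handed to the "GPU"
        std::cout << int(*on_gpu(region, handle)) << '\n';   // "GPU" side reads 42
        return 0;
    }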