Abstract:
A method, system and article of manufacture for balancing a workload on heterogeneous processing devices. The method comprising accessing a memory storage of a processor of one type by a dequeuing entity associated with a processor of a different type, identifying a task from a plurality of tasks within the memory storage that can be processed by the processor of the different type, synchronizing a plurality of dequeuing entities capable of accessing the memory storage, and dequeuing the task from the memory storage.
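By way of illustration, a minimal C++ sketch of the described dequeuing flow, assuming a lock-guarded shared queue; the names Task, ProcessorType, and TaskQueue::steal are illustrative and not taken from the disclosure:

    #include <deque>
    #include <mutex>
    #include <optional>

    enum class ProcessorType { CPU, GPU };

    struct Task {
        int id;
        ProcessorType required;   // which device type can process this task
    };

    class TaskQueue {
    public:
        // Called by a dequeuing entity of a *different* processor type:
        // scan the other device's queue for a compatible task and remove it.
        std::optional<Task> steal(ProcessorType self) {
            std::lock_guard<std::mutex> lock(m_);  // synchronize dequeuing entities
            for (auto it = tasks_.begin(); it != tasks_.end(); ++it) {
                if (it->required == self) {        // task can run on this device
                    Task t = *it;
                    tasks_.erase(it);              // dequeue the task
                    return t;
                }
            }
            return std::nullopt;
        }

        void push(Task t) {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push_back(std::move(t));
        }

    private:
        std::mutex m_;
        std::deque<Task> tasks_;
    };

Here the lock stands in for the synchronization among the plurality of dequeuing entities; a hardware implementation could rely on atomic operations instead.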
Abstract:
Embodiments are described for a method for compiling instruction code for execution in a processor having a number of functional units by determining a thermal constraint of the processor, and defining instruction words comprising both real instructions and one or more no operation (NOP) instructions to be executed by the functional units within a single clock cycle, wherein a number of NOP instructions executed over a number of consecutive clock cycles is configured to prevent exceeding the thermal constraint during execution of the instruction code.
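By way of illustration, a C++ sketch of the padding step, under the simplifying assumption that the thermal constraint reduces to a per-cycle cap on active functional units (the disclosure's window spans multiple consecutive cycles); all names are hypothetical:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Word { int realOps; int nops; };  // one VLIW instruction word

    std::vector<Word> schedule(const std::vector<int>& opsPerCycle,
                               int numUnits, double maxActiveFraction) {
        // Cap the functional units active in any cycle; deferred ops spill
        // into later words, whose remaining slots are filled with NOPs.
        const int cap = std::max(1, static_cast<int>(numUnits * maxActiveFraction));
        std::vector<Word> words;
        int pending = 0;
        for (std::size_t i = 0; i < opsPerCycle.size() || pending > 0; ++i) {
            if (i < opsPerCycle.size()) pending += opsPerCycle[i];
            const int real = std::min(pending, cap);
            pending -= real;
            words.push_back({real, numUnits - real});
        }
        return words;
    }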
Abstract:
The disclosed embodiments provide a system for processing a memory command on a computer system. During operation, a command scheduler executing on a memory controller of the computer system obtains a predicted latency of the memory command based on a memory address to be accessed by the memory command. Next, the command scheduler orders the memory command with other memory commands in a command queue for subsequent processing by a memory resource on the computer system based on the predicted latency of the memory command.
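As an illustrative sketch, the following C++ orders commands by predicted latency using an assumed open-row model (row-buffer hits predicted fast, misses slow); the latency figures and row size are hypothetical:

    #include <cstdint>
    #include <queue>
    #include <vector>

    struct MemCmd {
        uint64_t addr;
        unsigned predictedLatency;  // filled in by the predictor below
    };

    struct ByLatency {
        bool operator()(const MemCmd& a, const MemCmd& b) const {
            return a.predictedLatency > b.predictedLatency;  // min-heap
        }
    };

    class CommandScheduler {
    public:
        void enqueue(uint64_t addr) {
            // Predict latency from the address: open-row hit vs. miss.
            unsigned lat = (row(addr) == openRow_) ? 20 : 50;  // illustrative
            q_.push({addr, lat});
        }
        MemCmd next() {             // lowest predicted latency first;
            MemCmd c = q_.top();    // assumes a non-empty queue
            q_.pop();
            openRow_ = row(c.addr);
            return c;
        }
    private:
        static uint64_t row(uint64_t a) { return a >> 13; }  // assumed 8 KiB rows
        uint64_t openRow_ = ~0ull;
        std::priority_queue<MemCmd, std::vector<MemCmd>, ByLatency> q_;
    };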
Abstract:
Methods, systems, and computer program products are provided for minimizing latency in an implementation where a peripheral device is used as a capture device and a compute device such as a GPU processes the captured data in a computing environment. In embodiments, the peripheral device and GPU are tightly integrated and communicate at a hardware/firmware level. Peripheral device firmware can determine and store compute instructions, specifically for the GPU, in a command queue. The compute instructions in the command queue are understood and consumed by firmware of the GPU. The compute instructions include, but are not limited to, instructions for generating low latency visual feedback for presentation to a display screen and for detecting the presence of gestures to be converted to OS messages that can be utilized by any application.
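A hedged C++ sketch of the command queue shared between peripheral firmware (producer) and GPU firmware (consumer), modeled here as a single-producer/single-consumer ring; ComputeCmd and the ring size are assumptions:

    #include <array>
    #include <atomic>
    #include <cstdint>
    #include <optional>

    struct ComputeCmd { uint32_t opcode; uint64_t payloadAddr; };

    class CommandRing {
    public:
        bool submit(const ComputeCmd& c) {            // peripheral firmware side
            size_t h = head_.load(std::memory_order_relaxed);
            size_t n = (h + 1) % ring_.size();
            if (n == tail_.load(std::memory_order_acquire)) return false;  // full
            ring_[h] = c;
            head_.store(n, std::memory_order_release);
            return true;
        }
        std::optional<ComputeCmd> consume() {         // GPU firmware side
            size_t t = tail_.load(std::memory_order_relaxed);
            if (t == head_.load(std::memory_order_acquire)) return std::nullopt;
            ComputeCmd c = ring_[t];
            tail_.store((t + 1) % ring_.size(), std::memory_order_release);
            return c;
        }
    private:
        std::array<ComputeCmd, 64> ring_{};
        std::atomic<size_t> head_{0}, tail_{0};
    };

The lock-free handoff is what lets the two firmwares communicate without a round trip through the OS or an application, which is the source of the latency savings the abstract describes.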
Abstract:
Techniques are disclosed relating to detecting and minimizing timing problems created by clock domain crossing (CDC) in integrated circuits. In various embodiments, one or more timing parameters are associated with a path that crosses between clock domains in an integrated circuit, where the one or more timing parameters specify a propagation delay for the path. In one embodiment, the timing parameters may be distributed to different design stages using a configuration file. In some embodiments, the one or more parameters may be used in conjunction with an RTL model to simulate propagation of a data signal along the path. In some embodiments, the one or more parameters may be used in conjunction with a netlist to create a physical design for the integrated circuit, where the physical design includes a representation of the path that has the specified propagation delay.
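As a toy illustration of how a specified propagation delay could be checked in simulation, the following C++ tests whether data launched in the source domain settles before the destination capture edge; the timing model and numbers are assumptions, not the disclosed flow:

    #include <cstdio>

    struct CdcPath {
        double launchTimeNs;   // source-domain launch edge
        double captureTimeNs;  // destination-domain capture edge
        double maxDelayNs;     // specified propagation delay for the path
        double setupNs;        // destination flop setup requirement
    };

    // Data must arrive, with setup margin, before the capturing edge.
    bool meetsTiming(const CdcPath& p) {
        double arrival = p.launchTimeNs + p.maxDelayNs;
        return arrival + p.setupNs <= p.captureTimeNs;
    }

    int main() {
        CdcPath p{0.0, 2.5, 1.8, 0.2};  // illustrative values only
        std::printf("path %s timing\n", meetsTiming(p) ? "meets" : "violates");
    }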
Abstract:
A graphics processing unit is disclosed, the graphics processing unit comprising a processor having one or more SIMD processing units, a local data share corresponding to one of the one or more SIMD processing units, the local data share comprising one or more low latency accessible memory regions for each group of threads assigned to one or more execution wavefronts, and a global data share comprising one or more low latency memory regions for each group of threads.
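For illustration only, a host-side C++ model of reserving a low latency region per group of threads within a local data share; the bump allocator and sizes are assumptions:

    #include <cstddef>
    #include <stdexcept>

    class LocalDataShare {
    public:
        explicit LocalDataShare(std::size_t bytes) : size_(bytes) {}
        // Reserve a private region for one group of threads assigned to a
        // wavefront; returns the region's offset within the data share.
        std::size_t allocate(std::size_t bytes) {
            if (next_ + bytes > size_) throw std::runtime_error("LDS exhausted");
            std::size_t off = next_;
            next_ += bytes;
            return off;
        }
    private:
        std::size_t size_;
        std::size_t next_ = 0;
    };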
Abstract:
A communication device includes a data source that generates data for transmission over a bus, and a data encoder that receives and encodes the outgoing data. An encoder system receives outgoing data from the data source and stores the outgoing data in a first queue. An encoder encodes outgoing data with a header type that is based upon a header type indication from a controller and stores the encoded data, which may be a packet or a data word with at least one layered header, in a second queue for transmission. The device is configured to receive, at a payload extractor, a packet protocol change command from the controller, to remove the encoded data, to re-encode the data to create a re-encoded data packet, and to place the re-encoded data packet in the second queue for transmission.
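A simplified C++ model of the re-encode path, assuming headers can be stripped and replaced on queued items; HeaderType and EncoderSystem are hypothetical names:

    #include <deque>
    #include <string>
    #include <utility>

    enum class HeaderType { TypeA, TypeB };

    struct Encoded { HeaderType hdr; std::string payload; };

    class EncoderSystem {
    public:
        void encode(std::string payload) {             // normal transmit path
            txQueue_.push_back({current_, std::move(payload)});
        }
        void onProtocolChange(HeaderType newType) {    // command from controller
            current_ = newType;
            std::deque<Encoded> reencoded;
            while (!txQueue_.empty()) {                // payload extractor:
                Encoded e = std::move(txQueue_.front()); // remove encoded data,
                txQueue_.pop_front();                    // strip old header,
                reencoded.push_back({newType, std::move(e.payload)}); // re-encode
            }
            txQueue_ = std::move(reencoded);           // back into second queue
        }
    private:
        HeaderType current_ = HeaderType::TypeA;
        std::deque<Encoded> txQueue_;                  // the "second queue"
    };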
Abstract:
In one form, scheduling data migration comprises determining whether the data is likely to be used by an input/output (I/O) device, the data being at a location remote to the I/O device; and scheduling the data for migration from the remote location to a location local to the I/O device in response to determining that the data is likely to be used by the I/O device.
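As an illustrative sketch, the following C++ uses a sequential next-page heuristic as a stand-in for determining that data is likely to be used by the I/O device; the heuristic and page granularity are assumptions:

    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    class MigrationScheduler {
    public:
        // On each remote access by the I/O device, predict the next page
        // and schedule it for migration to memory local to the device.
        void onRemoteAccess(uint64_t page) {
            uint64_t predicted = page + 1;             // assumed access pattern
            if (!local_.count(predicted)) {
                migrationQueue_.push_back(predicted);  // migrate before it is used
                local_.insert(predicted);
            }
        }
        std::vector<uint64_t> migrationQueue_;  // pages scheduled for migration
    private:
        std::unordered_set<uint64_t> local_;    // pages already local
    };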
Abstract:
Devices, methods, and systems for distributed gather and scatter operations in a network of memory nodes. A responding memory node includes a memory; a communications interface having circuitry configured to communicate with at least one other memory node; and a controller. The controller includes circuitry configured to receive a request message from a requesting node via the communications interface. The request message indicates a gather or scatter operation, and instructs the responding node to retrieve data elements from a source memory data structure and store the data elements to a destination memory data structure. The controller further includes circuitry configured to transmit a response message to the requesting node via the communications interface. The response message indicates that the data elements have been stored into the destination memory data structure.
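A toy C++ model of the request/response exchange; the message layouts and word-addressed memory map are assumptions for illustration:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    enum class Op { Gather, Scatter };

    struct Request {
        Op op;
        std::vector<uint64_t> scatteredAddrs;  // non-contiguous element addresses
        uint64_t contiguousBase;               // base of the contiguous structure
    };

    struct Response { bool done; };

    class MemoryNode {
    public:
        Response handle(const Request& r) {
            uint64_t c = r.contiguousBase;
            for (uint64_t a : r.scatteredAddrs) {
                if (r.op == Op::Gather) mem_[c++] = mem_[a]; // collect into dest
                else                    mem_[a] = mem_[c++]; // spread from source
            }
            return {true};  // response: elements stored in the destination
        }
    private:
        std::unordered_map<uint64_t, uint64_t> mem_;  // word-addressed toy memory
    };

Performing the copies at the responding node, and only signaling completion back, is what makes the gather/scatter distributed rather than a series of individual remote reads and writes by the requester.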
Abstract:
A memory can be a sum addressed memory (SAM) that receives, for each read access, two address values (e.g. a base address and an offset) having a sum that indicates the entry of the memory to be read (the read entry). A decoder adds the two address values to identify the read entry. Concurrently, a predecode module predecodes the two address values to identify a set of entries (e.g. two different entries) at the memory, whereby the set includes the entry to be read. The predecode module generates a precharge disable signal to terminate precharging at the set of entries, which includes the entry to be read. Because the precharge disable signal is based on predecoded address information, it can be generated without waiting for a full decode of the read entry address.
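A toy C++ model of the predecode step: before the carry from the low-order bits resolves, the read entry index can only be one of two consecutive values, so precharge control can be limited to that pair; the field widths are assumptions:

    #include <cstdint>
    #include <utility>

    constexpr unsigned kLowBits   = 3;   // assumed offset bits below the index
    constexpr unsigned kIndexBits = 6;   // assumed 64-entry memory
    constexpr uint32_t kIndexMask = (1u << kIndexBits) - 1;

    // Candidate read entries: the sum of the two index fields with carry-in 0
    // and with carry-in 1 from the unresolved low-order bits. The true read
    // entry is always one of the two.
    std::pair<uint32_t, uint32_t> predecodeEntries(uint32_t base, uint32_t offset) {
        uint32_t bi = (base >> kLowBits) & kIndexMask;
        uint32_t oi = (offset >> kLowBits) & kIndexMask;
        uint32_t e0 = (bi + oi) & kIndexMask;        // carry-in 0
        uint32_t e1 = (bi + oi + 1) & kIndexMask;    // carry-in 1
        return {e0, e1};
    }

Because this pair is available before the full-width add completes, the precharge disable signal for those two entries can be asserted early, which is the timing advantage the abstract describes.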