Abstract:
An end-to-end record, request, response token-based protocol is used to facilitate processing of client jobs. This allows the client to forward analytical tasks of a job directly to an analytics cluster and to record an indication of that forwarding at a server. The accelerators of the cluster that are to perform the tasks are specified in a token provided by the server to the client.
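Below is a minimal sketch of how such a token flow might look, assuming a server that issues tokens naming the permitted accelerators and a client that forwards tasks and reports back; the class and method names (JobToken, issue_token, record, run_job) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JobToken:
    job_id: str
    accelerators: List[str]          # accelerators designated by the server

class AnalyticsCluster:
    def run(self, task: str, accelerators: List[str]) -> None:
        print(f"running {task!r} on {accelerators}")

class Server:
    def __init__(self) -> None:
        self.records = []
    def issue_token(self, job_id: str) -> JobToken:
        # Response to the client's request: a token naming the accelerators
        # of the cluster that are to perform the job's tasks.
        return JobToken(job_id, accelerators=["fpga-0", "gpu-3"])
    def record(self, job_id: str, task: str) -> None:
        # Indication, recorded at the server, that a task was forwarded directly.
        self.records.append((job_id, task))

class Client:
    def __init__(self, server: Server, cluster: AnalyticsCluster) -> None:
        self.server, self.cluster = server, cluster
    def run_job(self, job_id: str, tasks: List[str]) -> None:
        token = self.server.issue_token(job_id)            # request/response
        for task in tasks:
            self.cluster.run(task, token.accelerators)     # forward directly to the cluster
            self.server.record(job_id, task)               # record the indication at the server

Client(Server(), AnalyticsCluster()).run_job("job-7", ["aggregate", "join"])
```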
摘要:
Methods, systems, and computer program products for reducing latency and increasing throughput of data transmissions along a switch network path. Exemplary embodiments include a method in a network accelerator device having a memory buffer, the method including identifying a data transmission, copying data packets from the data transmission into the memory buffer, and, in response to a missing or corrupt data packet being identified during the data transmission, sending the copied data packet corresponding to the missing or corrupt packet.
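A rough sketch of the buffering idea follows, assuming packets are keyed by sequence number; PacketBuffer and its methods are illustrative names, not the device's actual interface.

```python
from typing import Optional

class PacketBuffer:
    def __init__(self) -> None:
        self.copies = {}                         # sequence number -> copied payload
    def observe(self, seq: int, payload: bytes) -> None:
        self.copies[seq] = payload               # copy packets as they pass through
    def recover(self, seq: int) -> Optional[bytes]:
        # Called when packet `seq` is reported missing or corrupt downstream.
        return self.copies.get(seq)

buf = PacketBuffer()
for seq, payload in enumerate([b"a", b"b", b"c"]):
    buf.observe(seq, payload)
assert buf.recover(1) == b"b"                    # resend the buffered copy
```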
Abstract:
A method of streaming attachment of hardware accelerators to a computing system includes receiving a stream for processing, identifying a stream handler based on the received stream, activating the identified stream handler, and steering the stream to an associated hardware accelerator.
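The dispatch described could be modeled roughly as a lookup from stream type to handler and accelerator; the registry contents and names below are assumptions for illustration.

```python
HANDLERS = {
    "video": ("video_handler", "gpu-0"),
    "crypto": ("crypto_handler", "crypto-engine-1"),
}

def process_stream(stream_type: str, stream_data: bytes) -> str:
    # Identify the stream handler from the received stream's type, then
    # "activate" it and steer the data to its associated hardware accelerator.
    handler, accelerator = HANDLERS[stream_type]
    return f"{handler} activated; {len(stream_data)} bytes steered to {accelerator}"

print(process_stream("video", b"\x00" * 1024))
```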
Abstract:
A method, system, and computer program product for target computer processor unit (CPU) determination during cache injection using I/O hub/chipset resources are provided. The method includes creating a cache injection indirection table on the input/output (I/O) hub or chipset. The cache injection indirection table includes fields for an address or address range, a CPU identifier, and a cache type. In response to receiving an I/O transaction, the hub/chipset reads the address in the address field of the I/O transaction and looks that address up in the cache injection indirection table. If the address is present in the table, the hub/chipset injects the address and data of the I/O transaction into the target cache associated with the CPU identified in the CPU identifier field.
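A simplified model of such an indirection table and its lookup is sketched below; the field layout and the printed "inject" action are assumptions standing in for the hub/chipset hardware.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    addr_lo: int        # start of the address range
    addr_hi: int        # end of the address range
    cpu_id: int         # CPU identifier field
    cache_type: str     # e.g. "L1", "L2", "L3"

table = [Entry(0x1000, 0x1FFF, cpu_id=2, cache_type="L2")]

def on_io_transaction(addr: int, data: bytes) -> str:
    # Read the transaction's address and look it up in the indirection table.
    for e in table:
        if e.addr_lo <= addr <= e.addr_hi:       # address present in the table
            return (f"inject {len(data)} bytes at {hex(addr)} "
                    f"into CPU {e.cpu_id} {e.cache_type}")
    return "no injection: address not in indirection table"

print(on_io_transaction(0x1400, b"\xff" * 64))
```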
Abstract:
A method, system, and computer program product for cache injection using speculation are provided. The method includes creating a cache line indirection table at an input/output (I/O) hub, the table including fields and entries for addresses, processor ID, and cache type, as well as cache level line limit (CLL) fields. The method also includes setting cache line limits in the CLL fields and receiving a stream of contiguous addresses at the table. For each address in the stream, the method includes: looking up the address in the table; if the address is present in the table, injecting the cache line corresponding to the address into the processor complex; and, if the address is not present in the table, searching limit values from the lowest-level cache to the highest-level cache and injecting addresses not present in the table into the cache hierarchy of the processor last injected from the contiguous address stream.
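One way to picture the speculative policy is sketched below, assuming illustrative limit values: table hits inject to the recorded processor, and misses fall back to the processor last injected from the same contiguous stream, at the lowest cache level whose line limit is not yet exhausted.

```python
table = {0x2000: (1, "L1")}                  # address -> (processor ID, cache type)
cll = {"L1": 2, "L2": 8, "L3": 64}           # cache level line limit (CLL) values
used = {"L1": 0, "L2": 0, "L3": 0}

def inject_stream(addresses):
    last_cpu = None
    for addr in addresses:
        if addr in table:                    # hit: inject to the recorded processor
            last_cpu, level = table[addr]
        else:
            # Miss: speculate on the processor last injected from this stream,
            # searching limit values from the lowest to the highest cache level.
            level = next((lv for lv in ("L1", "L2", "L3") if used[lv] < cll[lv]), "L3")
        used[level] += 1
        print(f"inject {hex(addr)} -> CPU {last_cpu}, {level}")

inject_stream([0x2000, 0x2040, 0x2080])      # contiguous address stream
```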
Abstract:
A method, hybrid server system, and computer program product prefetch data. A set of prefetch requests associated with one or more given data sets residing on the server system is received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The prefetched set of data is sent to at least one accelerator system in the set of accelerator systems that is associated with the at least one prefetch request.
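A minimal server-side sketch of this flow follows, assuming each request carries an accelerator id, dataset name, offset, and length; the dataset layout and the send_to_accelerator stub are placeholders, not the patented interface.

```python
datasets = {"weights": list(range(1_000))}   # data sets residing on the server system

def send_to_accelerator(accel_id: str, data) -> None:
    print(f"sent {len(data)} items to accelerator {accel_id}")

def handle_prefetch(requests):
    # Each request: (accelerator id, dataset name, offset, length).
    for accel_id, name, offset, length in requests:
        data = datasets[name][offset:offset + length]   # prefetch from server memory
        send_to_accelerator(accel_id, data)             # return the satisfying data

handle_prefetch([("accel-0", "weights", 0, 128), ("accel-1", "weights", 512, 64)])
```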
Abstract:
A method, system, and computer program product for maintaining reliability in a computer system. In an example embodiment, the method includes performing a first data computation by a first set of processors, the first set of processors having a first computer processor architecture. The method continues by performing a second data computation by a second processor coupled to the first set of processors, the second processor having a second computer processor architecture, the first computer processor architecture being different from the second computer processor architecture. Finally, the method includes dynamically allocating computational resources of the first set of processors and the second processor based on at least one metric while the first set of processors and the second processor are in operation, such that the accuracy and processing speed of the first data computation and the second data computation are optimized.
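As a toy illustration only, the dynamic allocation could be driven by a runtime metric such as an observed error rate; the scaling factor and names below are assumptions, not the patented policy.

```python
def allocate(observed_error_rate: float, total_work: int) -> dict:
    # Higher observed error rate -> route more work to the second processor,
    # trading processing speed for accuracy; the factor of 10 is arbitrary.
    checked_fraction = min(1.0, observed_error_rate * 10)
    to_second = int(total_work * checked_fraction)
    return {"first_set": total_work - to_second, "second_processor": to_second}

print(allocate(observed_error_rate=0.02, total_work=1000))   # {'first_set': 800, 'second_processor': 200}
```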
Abstract:
Accelerators of a computing environment are managed in order to optimize the energy consumption of the accelerators. To facilitate the management, virtual queues are assigned to the accelerators, and a management technique is used to enqueue specific tasks on the queues for execution by the corresponding accelerators. The management technique considers various factors in determining which tasks to place on which virtual queues in order to manage the energy consumption of the accelerators.
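A rough sketch of such queue-based placement follows, using a made-up energy model (a per-task cost plus an idle wake-up penalty) purely for illustration.

```python
queues = {"fpga-0": [], "gpu-1": []}              # one virtual queue per accelerator
energy_per_task = {"fpga-0": 2.0, "gpu-1": 5.0}   # joules per task, illustrative only
wakeup_cost = {"fpga-0": 10.0, "gpu-1": 3.0}      # extra cost of waking an idle accelerator

def enqueue(task: str) -> str:
    def cost(acc: str) -> float:
        idle = len(queues[acc]) == 0              # waking an idle accelerator costs extra
        return energy_per_task[acc] + (wakeup_cost[acc] if idle else 0.0)
    best = min(queues, key=cost)                  # place the task where energy cost is lowest
    queues[best].append(task)
    return best

print(enqueue("task-A"))
```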
Abstract:
A method, accelerator system, and computer program product access data in an out-of-core processing environment. A data access configuration is received from a server system managing a plurality of data sets. Based on the data access configuration, a determination is made that data sets retrieved from the server system are to be stored locally. A request to interact with a given data set is received from a user client. At least a portion of the given data set is retrieved from the server system and stored locally in a memory based on the data access configuration that has been received.
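A minimal sketch of the accelerator-side decision might look as follows, assuming the configuration carries a simple store_locally flag; the fetch stub stands in for the server round trip.

```python
config = {"store_locally": True}                  # data access configuration from the server
local_memory = {}

def fetch_from_server(dataset: str, nbytes: int) -> bytes:
    return b"\x00" * nbytes                       # stand-in for a server round trip

def access(dataset: str, nbytes: int) -> bytes:
    if dataset in local_memory:                   # already stored locally
        return local_memory[dataset]
    portion = fetch_from_server(dataset, nbytes)  # retrieve a portion of the data set
    if config["store_locally"]:                   # honor the received configuration
        local_memory[dataset] = portion
    return portion

access("images", 4096)
assert "images" in local_memory                   # kept locally per the configuration
```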
Abstract:
A method, accelerator system, and computer program product for prefetching data from a server system in an out-of-order processing environment are provided. A plurality of prefetch requests associated with one or more given data sets residing on the server system is received from an application on the server system. Each prefetch request is stored in a prefetch request queue. A score is assigned to each prefetch request. A set of prefetch requests whose scores are above a given threshold is selected from the prefetch request queue. For each prefetch request in the selected set, a set of data that satisfies the request is prefetched from the server system.
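The scoring and selection step could be sketched as below, with a hypothetical heuristic (smaller, sooner-needed requests score higher); the request tuple layout and the threshold value are assumptions for illustration.

```python
def score(request) -> float:
    dataset, offset, length, deadline = request
    return 1.0 / (length * max(deadline, 1))      # hypothetical scoring heuristic

def select_prefetches(requests, threshold: float):
    scored = [(score(r), r) for r in requests]    # assign a score to each queued request
    return [r for s, r in scored if s > threshold]

queue = [("weights", 0, 256, 2), ("weights", 1024, 64, 1), ("logs", 0, 4096, 8)]
print(select_prefetches(queue, threshold=0.001))  # only requests scoring above the threshold
```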