Abstract:
The present invention relates to memory-to-memory communication and storage for hybrid systems. Under the present invention, a data stream is received on a first computing device of a hybrid system. An attempt is made to store the data stream on the first computing device up to a per-stream limit and a total storage limit of the first computing device. It is then determined whether to store at least a portion of the data stream on a second computing device of the hybrid system that is in communication with the first computing device. This decision is based on the per-stream limit and the total storage limit of the first computing device as well as a per-stream limit and a total storage limit of the second computing device. Thereafter, that portion of the data stream and a control signal are communicated to the second computing device for storage.
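As an illustration only, the following Python sketch models the spillover decision this abstract describes. The Device class, the specific limits, and the control-signal tuple are assumptions for the sketch, not the patented implementation.

# Hypothetical sketch of the spillover decision: store on the first device
# up to its per-stream and total limits, then push the remainder (plus a
# control signal) to the second device. All names are illustrative.

class Device:
    def __init__(self, per_stream_limit, total_limit):
        self.per_stream_limit = per_stream_limit
        self.total_limit = total_limit
        self.used = 0
        self.per_stream_used = {}

    def capacity_for(self, stream_id):
        stream_room = self.per_stream_limit - self.per_stream_used.get(stream_id, 0)
        total_room = self.total_limit - self.used
        return max(0, min(stream_room, total_room))

    def store(self, stream_id, size):
        accepted = min(size, self.capacity_for(stream_id))
        self.used += accepted
        self.per_stream_used[stream_id] = self.per_stream_used.get(stream_id, 0) + accepted
        return accepted

def handle_chunk(first, second, stream_id, chunk_size):
    # Try the first device, then decide whether the remainder goes to the peer.
    stored = first.store(stream_id, chunk_size)
    remainder = chunk_size - stored
    if remainder and second.capacity_for(stream_id) > 0:
        # The "control signal" is modeled as a simple tuple sent with the data.
        control = ("STORE_REMOTE", stream_id, remainder)
        second.store(stream_id, remainder)
        return stored, control
    return stored, None

first = Device(per_stream_limit=100, total_limit=250)
second = Device(per_stream_limit=500, total_limit=1000)
print(handle_chunk(first, second, stream_id="s1", chunk_size=180))
# -> (100, ('STORE_REMOTE', 's1', 80)): 100 units kept locally, 80 spilled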
Abstract:
A method and system are disclosed for migrating network resources to improve network utilization, for use in a multi-node network wherein nodes of the network share network resources. The method comprises the steps of identifying a group of nodes that share one of the network resources, and identifying one of the nodes satisfying a specified condition based on at least one defined access latency metric. The shared resource is moved to the identified one of the nodes to reduce overall access latency to access the shared resource by said group of nodes. One embodiment of the invention provides a method and system to synchronize tasks in a distributed computation using network attached devices (NADs). A second embodiment of the invention provides a method and system to reduce lock latency and network traffic by migrating lock managers to coupling facility locations closest to nodes seeking resource access.
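A minimal sketch of the placement step described above, under assumed inputs: pick the node whose hosting of the shared resource (for example, a lock manager) minimizes the access-rate-weighted latency across the sharing group. The latency matrix and access rates are hypothetical values, not data from the patent.

# Hypothetical sketch: choose the node to host a shared resource so that the
# access-rate-weighted latency across the sharing group is minimized.

def best_host(nodes, latency_ms, access_rate):
    """nodes: list of node ids
    latency_ms[a][b]: one-way latency from node a to node b
    access_rate[a]: how often node a accesses the shared resource"""
    def total_cost(candidate):
        return sum(access_rate[n] * latency_ms[n][candidate] for n in nodes)
    return min(nodes, key=total_cost)

# Example with made-up numbers: node "B" sits closest to the heavy users.
nodes = ["A", "B", "C"]
latency_ms = {
    "A": {"A": 0, "B": 2, "C": 5},
    "B": {"A": 2, "B": 0, "C": 3},
    "C": {"A": 5, "B": 3, "C": 0},
}
access_rate = {"A": 10, "B": 50, "C": 40}
print(best_host(nodes, latency_ms, access_rate))  # -> "B"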
Abstract:
The present invention relates to a server-processor hybrid system that comprises (among other things) a set (one or more) of front-end servers (e.g., mainframes) and a set of back-end application-optimized processors. Moreover, implementations of the invention provide a server and processor hybrid system and method for distributing and managing the execution of applications at a fine-grained level via an I/O-connected hybrid system. This approach allows one system to manage and control the system functions, while one or more other systems serve as co-processors.
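Purely as an illustration of the split this abstract describes, the sketch below keeps control functions on a front-end "server" object and hands compute work to back-end "processor" objects over a queue. All class and method names are invented for the sketch.

# Hypothetical sketch: a front-end server owns control functions and
# dispatches application kernels to back-end co-processors over a queue.

import queue

class BackEndProcessor:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()

    def run_pending(self):
        # Execute whatever the front end has queued for this processor.
        results = []
        while not self.inbox.empty():
            fn, args = self.inbox.get()
            results.append((self.name, fn(*args)))
        return results

class FrontEndServer:
    def __init__(self, processors):
        self.processors = processors
        self.next_idx = 0

    def dispatch(self, fn, *args):
        # Control stays here; the kernel itself runs on a back-end processor.
        target = self.processors[self.next_idx % len(self.processors)]
        self.next_idx += 1
        target.inbox.put((fn, args))

backends = [BackEndProcessor("acc0"), BackEndProcessor("acc1")]
front = FrontEndServer(backends)
front.dispatch(sum, [1, 2, 3])
front.dispatch(max, [4, 7, 5])
for b in backends:
    print(b.run_pending())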
Abstract:
An Ethernet adapter system may include a transmitter to insert a payload type identifier sequence in a Generic Framing Procedure (GFP) header to indicate that a network is a converged enhanced Ethernet network. The transmitter may insert idle sequences in a stream of data frames transmitted along a link. The system may include a receiver to recognize a condition and to force a loss-of-synchronization condition on the link that the receiver will convert into a loss-of-light condition. The receiver may scan the transmitted stream of data frames for invalid data frames and introduce a code into the stream of data frames whenever an invalid data frame is detected.
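The sketch below illustrates only the receive-side behavior described in the last two sentences: scanning frames, substituting a marker code for invalid frames, and deciding whether to force a loss-of-synchronization condition. The frame layout, the checksum rule, the ERROR_CODE value, and the threshold are all assumptions.

# Hypothetical sketch of the receiver behavior: scan frames, insert a code
# where an invalid frame is found, and report whether a loss-of-sync
# condition should be forced on the link.

ERROR_CODE = b"\xfe\xfe"  # illustrative marker inserted into the stream

def frame_is_valid(frame):
    # Stand-in validity check: non-empty payload plus a simple checksum byte.
    return len(frame) > 1 and (sum(frame[:-1]) & 0xFF) == frame[-1]

def scan_stream(frames, invalid_threshold=3):
    out = []
    invalid_count = 0
    for frame in frames:
        if frame_is_valid(frame):
            out.append(frame)
        else:
            invalid_count += 1
            out.append(ERROR_CODE)  # introduce a code in place of the bad frame
    force_loss_of_sync = invalid_count >= invalid_threshold
    return out, force_loss_of_sync

good = bytes([1, 2, 3, 6])  # payload 1,2,3 with matching checksum 6
bad = bytes([1, 2, 3, 9])   # checksum mismatch -> invalid
print(scan_stream([good, bad, good]))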
Abstract:
Execution of tasks on accelerator units is managed. The managing includes multi-level grouping of tasks into groups based on defined criteria, including start time of tasks and/or deadline of tasks. The task groups and possibly individual tasks are mapped to accelerator units to be executed. During execution, redistribution of a task group and/or an individual task may occur to optimize a defined energy profile.
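As a rough illustration of the grouping and mapping steps, the sketch below buckets tasks by start time and deadline and places each group on an accelerator round-robin. The bucket widths, the Task fields, and the placement policy are assumptions; the abstract leaves the redistribution and energy-profile logic unspecified.

# Hypothetical sketch of multi-level grouping: bucket tasks by start time,
# then by deadline, and map each group to an accelerator unit.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    start: int      # desired start time
    deadline: int   # latest completion time

def group_tasks(tasks, start_bucket=10, deadline_bucket=50):
    groups = defaultdict(list)
    for t in tasks:
        key = (t.start // start_bucket, t.deadline // deadline_bucket)
        groups[key].append(t)
    return groups

def map_to_accelerators(groups, accelerators):
    # Simple round-robin placement of whole groups onto accelerator units;
    # a later redistribution pass could move groups to meet an energy profile.
    placement = {}
    for i, (key, members) in enumerate(sorted(groups.items())):
        placement[key] = accelerators[i % len(accelerators)]
    return placement

tasks = [Task("a", 1, 40), Task("b", 3, 45), Task("c", 25, 120)]
groups = group_tasks(tasks)
print(map_to_accelerators(groups, ["acc0", "acc1"]))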
Abstract:
A method of streaming attachment of hardware accelerators to a computing system includes receiving a stream for processing, identifying a stream handler based on the received stream, activating the identified stream handler, and steering the stream to an associated hardware accelerator.
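A minimal sketch of the identify-activate-steer sequence, assuming a simple registry keyed by stream type; the handler names and accelerator identifiers are invented for the illustration.

# Hypothetical sketch: pick a stream handler from the stream's declared type,
# activate it, and steer the stream to its associated hardware accelerator.

HANDLERS = {
    "video": ("video_handler", "gpu0"),
    "crypto": ("crypto_handler", "crypto_engine"),
}

def steer(stream):
    handler_name, accelerator = HANDLERS.get(stream["type"], ("default_handler", "cpu"))
    print(f"activating {handler_name}, steering stream to {accelerator}")
    return accelerator

steer({"type": "video", "payload": b"..."})  # -> steered to gpu0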
Abstract:
A system for distributed function execution includes a host in operable communication with an accelerator. The system is configured to perform a method including processing an application by the host and distributing at least a portion of the application to the accelerator for execution. The method also includes instructing the accelerator to create a buffer on the accelerator, instructing the accelerator to execute the portion of the application, wherein the accelerator writes data to the buffer, and instructing the accelerator to transmit the data in the buffer to the host before the application requests the data in the buffer. The accelerator aggregates the data in the buffer before transmitting the data to the host, based upon one or more runtime conditions in the host.
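The sketch below illustrates the push-ahead buffering idea: the accelerator aggregates results in a local buffer and transmits them to the host once a runtime condition is met, before the application asks for them. Here the runtime condition is modeled as a size threshold held by the host; that choice, and all class names, are assumptions.

# Hypothetical sketch of the push-ahead buffer between accelerator and host.

class Host:
    def __init__(self, aggregation_threshold=4):
        self.aggregation_threshold = aggregation_threshold  # runtime condition
        self.received = []

    def receive(self, batch):
        self.received.extend(batch)

class Accelerator:
    def __init__(self, host):
        self.host = host
        self.buffer = []  # buffer created on the accelerator

    def execute(self, work_items):
        for item in work_items:
            self.buffer.append(item * item)  # stand-in computation
            if len(self.buffer) >= self.host.aggregation_threshold:
                self.flush()
        self.flush()

    def flush(self):
        if self.buffer:
            self.host.receive(self.buffer)  # push before the host asks
            self.buffer = []

host = Host()
Accelerator(host).execute(range(10))
print(host.received)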
Abstract:
A redundant power supply configuration for a data center is provided. A method includes receiving instructions to operate power supplies at a high current mode. An individual current is calculated for each of the power supplies so that the individual currents total a high current for the high current mode. The power supplies are operated at the high current mode to provide the high current. In response to operation at the high current mode being complete, the power supplies are operated at a normal mode to provide a normal current.
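As a simple illustration of the calculation step, the sketch below splits the requested mode current evenly across the supplies and switches back to the normal current once the high-current operation completes. The equal split and the example currents are assumptions.

# Hypothetical sketch of the current split across redundant supplies.

def per_supply_current(total_current, num_supplies):
    # Each supply contributes an equal share so the shares total the target.
    return total_current / num_supplies

def run(supplies, high_current, normal_current, high_mode_done):
    mode_current = normal_current if high_mode_done else high_current
    share = per_supply_current(mode_current, len(supplies))
    return {s: share for s in supplies}

supplies = ["psu0", "psu1", "psu2"]
print(run(supplies, high_current=300.0, normal_current=120.0, high_mode_done=False))
# -> each supply asked for 100.0 A until the high-current operation completes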
Abstract:
An improved database system may include a root-server including a computer processor. The system may also include a segment-server including a computer processor, the segment-server to store data based upon the data's frequency of use by a client that is closer to the segment-server than to the root-server and to any other segment-server in the system, the stored data being at least write data. The system may further include a consistency unit to update the root-server based upon the data stored by the segment-server and the client.
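A minimal sketch of the root/segment split described above: a client's write lands on the nearest segment-server, and a consistency step replays what was stored up to the root-server. The distance values, the class names, and the replay policy are assumptions for the illustration.

# Hypothetical sketch of segment-server storage plus root-server consistency.

class SegmentServer:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, value):
        self.store[key] = value
        return (self.name, key, value)

class RootServer:
    def __init__(self):
        self.store = {}

    def apply(self, updates):
        # Consistency unit: replay segment-server writes into the root copy.
        for _, key, value in updates:
            self.store[key] = value

def nearest_segment(client_distances):
    return min(client_distances, key=client_distances.get)

segments = {"seg-east": SegmentServer("seg-east"), "seg-west": SegmentServer("seg-west")}
root = RootServer()
target = segments[nearest_segment({"seg-east": 5, "seg-west": 40})]
update = target.write("user:42", {"cart": ["book"]})
root.apply([update])
print(target.name, root.store)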