Abstract:
A technique includes receiving a user input in an array-oriented database, the user input indicating a database operation, and processing a plurality of chunks of data stored by the database to perform the operation. The processing includes selectively distributing the processing of the plurality of chunks between a first group of at least one central processing unit and a second group of at least one co-processor.
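A minimal sketch of the idea of selectively distributing chunk processing between a CPU group and a co-processor group, assuming a simple size-based heuristic; the worker counts, threshold, and processing functions are hypothetical illustrations, not the patented technique.

```python
from concurrent.futures import ThreadPoolExecutor

CPU_WORKERS = 4            # first group: central processing units
COPROCESSOR_WORKERS = 2    # second group: co-processors (e.g., accelerators)
SIZE_THRESHOLD = 1 << 20   # assumed heuristic: large chunks go to the co-processors

def process_on_cpu(chunk):
    return sum(chunk)              # stand-in for the database operation

def process_on_coprocessor(chunk):
    return sum(chunk)              # stand-in for an offloaded kernel

def process_chunks(chunks):
    cpu_pool = ThreadPoolExecutor(max_workers=CPU_WORKERS)
    cop_pool = ThreadPoolExecutor(max_workers=COPROCESSOR_WORKERS)
    futures = []
    for chunk in chunks:
        # Selectively distribute each chunk based on its estimated size.
        if len(chunk) * 8 >= SIZE_THRESHOLD:
            futures.append(cop_pool.submit(process_on_coprocessor, chunk))
        else:
            futures.append(cpu_pool.submit(process_on_cpu, chunk))
    results = [f.result() for f in futures]
    cpu_pool.shutdown()
    cop_pool.shutdown()
    return results

if __name__ == "__main__":
    print(process_chunks([list(range(10)), list(range(200_000))]))
```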
Abstract:
An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
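A minimal sketch, assuming a RAID-5-style XOR parity scheme: the data node reads the old data itself and forwards the new and old data to the parity node, so the writing device never computes parity. The class and method names are illustrative only.

```python
class ParityNode:
    def __init__(self, size):
        self.parity = bytearray(size)

    def update_parity(self, offset, new_data, old_data):
        # parity' = parity XOR old_data XOR new_data, applied in place.
        for i, (n, o) in enumerate(zip(new_data, old_data)):
            self.parity[offset + i] ^= n ^ o

class DataNode:
    def __init__(self, size, parity_node):
        self.memory = bytearray(size)
        self.parity_node = parity_node

    def write(self, offset, new_data):
        # Read the old data locally; it is not sent back to the device.
        old_data = bytes(self.memory[offset:offset + len(new_data)])
        self.memory[offset:offset + len(new_data)] = new_data
        # Send new and old data to the second node so it can update parity.
        self.parity_node.update_parity(offset, new_data, old_data)

parity = ParityNode(16)
node = DataNode(16, parity)
node.write(0, b"\x0f\xf0")
node.write(0, b"\xff\x00")
print(parity.parity[:2].hex())   # parity reflects the latest data
```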
Abstract:
Examples of the present disclosure include methods, devices, and/or systems. An example method for performing data stream operations can include passing input data through a data stream splitter, dividing the input data into multiple lines of data upon recognizing a delimiter within the input data at the data stream splitter, splitting the multiple lines of data into multiple data streams at the data stream splitter, and performing data stream operations on each of the multiple data streams with a respective one of a plurality of finite state machines (FSMs).
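A minimal sketch of the splitter-plus-FSMs flow, assuming a newline delimiter and round-robin assignment of lines to streams; the toy FSM simply counts runs of digits as a stand-in for a real per-stream operation.

```python
class FSM:
    """Tiny finite state machine: counts runs of digit characters in its stream."""
    def __init__(self):
        self.state = "OUT"
        self.runs = 0

    def feed(self, line):
        for ch in line:
            if ch.isdigit() and self.state == "OUT":
                self.state = "IN"
                self.runs += 1
            elif not ch.isdigit():
                self.state = "OUT"
        self.state = "OUT"  # reset between lines

def split_and_process(input_data, num_streams=4, delimiter="\n"):
    # The splitter divides the input into lines at each delimiter...
    lines = input_data.split(delimiter)
    # ...then splits the lines into multiple data streams (round-robin here),
    # each processed by its own FSM.
    fsms = [FSM() for _ in range(num_streams)]
    for i, line in enumerate(lines):
        fsms[i % num_streams].feed(line)
    return [f.runs for f in fsms]

print(split_and_process("abc 12 x\n34 5\nno digits\n678"))
```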
Abstract:
A system and method are illustrated for comparing a target memory address and a local memory size using a hypervisor module that resides upon a compute blade, the comparison based upon a unit of digital information for the target memory address and an additional unit of digital information for the local memory size. Additionally, the system and method utilize swapping of a local virtual memory page with a remote virtual memory page using a swapping module that resides on the hypervisor module, the swapping based upon the comparing of the target memory address and the local memory size. Further, the system and method are implemented to transmit the local virtual memory page to a memory blade using a transmission module that resides upon the compute blade.
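A minimal sketch, assuming the comparison reduces to "target address >= local memory size" (both in bytes) and modeling the compute blade and memory blade pages as dictionaries. The function names echo the modules in the abstract; everything else is a hypothetical illustration.

```python
PAGE_SIZE = 4096
LOCAL_MEMORY_SIZE = 8 * PAGE_SIZE   # local memory on the compute blade
local_pages = {i: f"local-page-{i}" for i in range(8)}
remote_pages = {i: f"remote-page-{i}" for i in range(8, 16)}  # memory blade

def compare(target_address):
    # Compare the target memory address against the local memory size,
    # both expressed in the same unit of digital information (bytes).
    return target_address >= LOCAL_MEMORY_SIZE

def swap(target_address, victim_page):
    remote_page = target_address // PAGE_SIZE
    # Transmit the local virtual page to the memory blade...
    remote_pages[victim_page] = local_pages.pop(victim_page)
    # ...and bring the remote virtual page into local memory.
    local_pages[remote_page] = remote_pages.pop(remote_page)

def access(target_address, victim_page=0):
    if compare(target_address):          # page lies beyond local memory
        swap(target_address, victim_page)
    return local_pages[target_address // PAGE_SIZE]

print(access(9 * PAGE_SIZE))   # triggers a swap with local page 0
```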
Abstract:
A tier identification (TID) is to indicate a characteristic of a memory region associated with a virtual address in a tiered memory system. A thread may be serviced according to a first path based on the TID indicating a first characteristic. The thread may be serviced according to a second path based on the TID indicating a second characteristic.
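A minimal sketch of TID-based path selection, assuming two tiers (e.g., fast near memory and slower far memory) and a per-page tier table; the table, TID values, and path functions are assumptions for illustration.

```python
FAST_TIER, SLOW_TIER = 0, 1
tier_table = {0x1000: FAST_TIER, 0x2000: SLOW_TIER}  # virtual page -> TID

def first_path(vaddr):
    return f"direct access to fast tier at {hex(vaddr)}"

def second_path(vaddr):
    # e.g., buffer the access or migrate the page before completing it
    return f"buffered access to slow tier at {hex(vaddr)}"

def service_thread(vaddr):
    tid = tier_table[vaddr & ~0xFFF]   # TID of the page containing vaddr
    if tid == FAST_TIER:               # first characteristic
        return first_path(vaddr)
    return second_path(vaddr)          # second characteristic

print(service_thread(0x1040))
print(service_thread(0x2040))
```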
Abstract:
Reconfigurable crossbar networks, and devices, systems and methods, including hardware in the form of logic (e.g., application-specific integrated circuits (ASICs)), and software in the form of machine readable instructions stored on machine readable media (e.g., flash, non-volatile memory, etc.), which implement the same, are provided. An example of a reconfigurable crossbar network includes a crossbar. A plurality of endpoints is coupled to the crossbar. The plurality of endpoints is grouped into regions at design time of the crossbar network. A plurality of regional interconnects are provided. Each regional interconnect connects a group of endpoints within a given region.
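A minimal sketch of routing in a region-grouped crossbar network, assuming traffic between endpoints in the same region stays on the regional interconnect while cross-region traffic goes through the crossbar; the region map and routing rule are assumptions for illustration only.

```python
# Endpoints are grouped into regions at design time.
region_of = {"ep0": "A", "ep1": "A", "ep2": "B", "ep3": "B"}

def route(src, dst):
    if region_of[src] == region_of[dst]:
        return f"{src} -> regional interconnect {region_of[src]} -> {dst}"
    return f"{src} -> crossbar -> {dst}"

print(route("ep0", "ep1"))   # same region: regional interconnect
print(route("ep0", "ep3"))   # different regions: crossbar
```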
Abstract:
A method for communication between computing devices includes identifying the parameters of a data transfer between a source computing device and a target computing device and identifying communication paths between the source computing device and the target computing device, in which at least one of the communication paths is a physical network. A communication path is selected for the data transfer. When a data transfer over the physical network is selected as a communication path, a nonvolatile memory (NVM) unit is removed from the source computing device and placed in a cartridge and the cartridge is programmed with transfer information. The NVM unit and cartridge are transported through the physical network to the target computing device according to the transfer information and the NVM unit is electrically connected to the target computing device.
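A minimal sketch, assuming the selection criterion is estimated transfer time: large transfers are routed as a physically transported NVM cartridge, smaller ones over the data network. The bandwidth, transport time, and helper names are hypothetical.

```python
NETWORK_BANDWIDTH_BPS = 1e9            # assumed 1 Gb/s data network
PHYSICAL_TRANSPORT_SECONDS = 8 * 3600  # assumed courier time for the cartridge

def select_path(transfer_bytes):
    network_seconds = transfer_bytes * 8 / NETWORK_BANDWIDTH_BPS
    if network_seconds <= PHYSICAL_TRANSPORT_SECONDS:
        return "network"
    return "physical"          # remove NVM unit, program cartridge, ship it

def transfer(transfer_bytes, source, target):
    path = select_path(transfer_bytes)
    if path == "physical":
        cartridge = {"source": source, "target": target, "bytes": transfer_bytes}
        return f"cartridge programmed and transported: {cartridge}"
    return f"{transfer_bytes} bytes sent over the data network"

print(transfer(10 * 2**30, "src-host", "dst-host"))    # ~86 s: network
print(transfer(500 * 2**40, "src-host", "dst-host"))   # ~50 days: physical path
```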
Abstract:
A hybrid memory module. The module includes at least two heterogeneous memory devices and a memory buffer in communication with the memory devices to read data from any one of the memory devices and write the data to any other of the memory devices.
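A minimal sketch, assuming the two heterogeneous devices are modeled as byte arrays (e.g., DRAM and NVM) and the memory buffer simply reads from one and writes to the other; the classes here are illustrative, not the module's actual design.

```python
class MemoryBuffer:
    def __init__(self, devices):
        self.devices = devices            # at least two heterogeneous devices

    def read(self, device, offset, length):
        return bytes(self.devices[device][offset:offset + length])

    def write(self, device, offset, data):
        self.devices[device][offset:offset + len(data)] = data

    def move(self, src, dst, offset, length):
        # Read data from any one device and write it to any other device.
        self.write(dst, offset, self.read(src, offset, length))

module = MemoryBuffer({"dram": bytearray(64), "nvm": bytearray(64)})
module.write("dram", 0, b"hot data")
module.move("dram", "nvm", 0, 8)
print(module.read("nvm", 0, 8))
```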
Abstract:
Migrating data may include determining to copy a first data block in a first memory location to a second memory location and determining to copy a second data block in the first memory location to the second memory location based on a migration policy.
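A minimal sketch, assuming a simple "migrate neighbors too" policy: once a first block is chosen for migration, adjacent blocks in the same source location are copied as well. The policy and block layout are assumptions for illustration.

```python
def migrate(source, destination, first_block, policy_window=1):
    to_copy = {first_block}
    # Per the migration policy, also copy nearby blocks from the same location.
    for delta in range(1, policy_window + 1):
        for candidate in (first_block - delta, first_block + delta):
            if candidate in source:
                to_copy.add(candidate)
    for block in sorted(to_copy):
        destination[block] = source[block]
    return sorted(to_copy)

source_memory = {0: "a", 1: "b", 2: "c", 3: "d"}
dest_memory = {}
print(migrate(source_memory, dest_memory, first_block=2))  # copies blocks 1, 2, 3
print(dest_memory)
```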