Abstract:
Log analysis can include transferring compiled log analysis code, executing the log analysis code, and performing a log analysis using the executed log analysis code.
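A minimal sketch of the transfer-then-execute flow, assuming the compiled analysis module arrives as a shared object and exposes an entry point; the module path and the `analyze_log` symbol are hypothetical illustrations, not names from the abstract. It uses the standard POSIX dlopen/dlsym interface to load and run the transferred code.

```c
#include <dlfcn.h>
#include <stdio.h>

/* Hypothetical flow: a compiled log analysis module has already been
 * transferred to this host as ./log_analysis.so; load it, resolve its
 * (assumed) entry point, and run the analysis over a log file. */
typedef int (*analyze_fn)(const char *log_path);

int main(void) {
    void *mod = dlopen("./log_analysis.so", RTLD_NOW);
    if (!mod) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    analyze_fn analyze = (analyze_fn)dlsym(mod, "analyze_log");
    if (!analyze) { fprintf(stderr, "%s\n", dlerror()); dlclose(mod); return 1; }

    int findings = analyze("/var/log/app.log");
    printf("analysis reported %d findings\n", findings);

    dlclose(mod);
    return 0;
}
```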
Abstract:
A detector detects, using an error code, an error in data stored in a memory. The detector determines whether the error is uncorrectable using the error code. In response to determining that the error is uncorrectable, an error handler associated with an application is invoked to handle the error in the data by recovering the data to an application-wide consistent state.
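A small C sketch of the detect-then-delegate flow described above. All names (`register_error_handler`, `check_ecc`, the checkpoint-rollback handler) are assumptions for illustration, and the ECC decode is stubbed; a real detector would recompute the error code over the data and classify the error from the syndrome.

```c
#include <stddef.h>
#include <stdio.h>

typedef enum { ECC_OK, ECC_CORRECTABLE, ECC_UNCORRECTABLE } ecc_status;

/* Application-supplied recovery callback, e.g. roll back to the last
 * checkpoint, restoring an application-wide consistent state. */
typedef void (*error_handler_fn)(void *app_state);

static error_handler_fn g_handler = NULL;
static void *g_app_state = NULL;

void register_error_handler(error_handler_fn fn, void *app_state) {
    g_handler = fn;
    g_app_state = app_state;
}

/* Stub standing in for a real ECC decode. */
static ecc_status check_ecc(const void *data, size_t len) {
    (void)data; (void)len;
    return ECC_UNCORRECTABLE;   /* force the interesting path for the demo */
}

void on_memory_read(void *data, size_t len) {
    switch (check_ecc(data, len)) {
    case ECC_OK:
        break;
    case ECC_CORRECTABLE:
        /* The error code carries enough redundancy to repair in place. */
        break;
    case ECC_UNCORRECTABLE:
        /* The data cannot be repaired: hand off to the application. */
        if (g_handler)
            g_handler(g_app_state);
        break;
    }
}

static void rollback_to_checkpoint(void *state) {
    (void)state;
    puts("uncorrectable error: recovering to a consistent state");
}

int main(void) {
    char buf[64] = {0};
    register_error_handler(rollback_to_checkpoint, NULL);
    on_memory_read(buf, sizeof buf);
    return 0;
}
```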
Abstract:
A technique includes receiving a user input in an array-oriented database, the user input indicating a database operation, and processing a plurality of chunks of data stored by the database to perform the operation. The processing includes selectively distributing the processing of the plurality of chunks between a first group of at least one central processing unit and a second group of at least one co-processor.
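One plausible shape for the selective distribution, sketched in C. The size-based offload heuristic, the threshold value, and the function names are assumptions, not the patented policy; the point is only that a per-chunk decision routes work to either the CPU group or the co-processor group.

```c
#include <stddef.h>
#include <stdio.h>

typedef struct { size_t n_elems; const double *elems; } chunk;

/* Stand-ins for the two execution paths. */
static void process_on_cpu(const chunk *c)         { printf("CPU:    %zu elems\n", c->n_elems); }
static void process_on_coprocessor(const chunk *c) { printf("coproc: %zu elems\n", c->n_elems); }

/* Hypothetical policy: large, regular chunks amortize the co-processor's
 * offload overhead; small chunks stay on the CPU group. */
#define OFFLOAD_THRESHOLD 4096

void distribute(const chunk *chunks, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (chunks[i].n_elems >= OFFLOAD_THRESHOLD)
            process_on_coprocessor(&chunks[i]);
        else
            process_on_cpu(&chunks[i]);
    }
}

int main(void) {
    chunk cs[] = { {100, NULL}, {10000, NULL}, {5000, NULL} };
    distribute(cs, sizeof cs / sizeof cs[0]);
    return 0;
}
```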
Abstract:
According to an example, data for a memcached server is replicated to a memcached replication server. Data operations for the memcached server may be filtered for backing up data to the memcached replication server.
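A sketch of one such filter in C: only state-changing operations need to reach the replication server for backup, while pure reads can be dropped. The operation set and function names are assumptions for illustration, not the memcached wire protocol.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { OP_GET, OP_SET, OP_ADD, OP_REPLACE, OP_DELETE, OP_INCR } mc_op;

/* Only operations that mutate state need to be replayed on the
 * replication server; reads are filtered out. */
static bool should_replicate(mc_op op) {
    switch (op) {
    case OP_SET: case OP_ADD: case OP_REPLACE:
    case OP_DELETE: case OP_INCR:
        return true;
    default:
        return false;
    }
}

static void forward_to_replica(mc_op op, const char *key) {
    printf("replicating op %d key=%s\n", (int)op, key);
}

void handle_op(mc_op op, const char *key) {
    /* ...serve the operation on the primary memcached server first... */
    if (should_replicate(op))
        forward_to_replica(op, key);
}

int main(void) {
    handle_op(OP_GET, "a");   /* filtered out */
    handle_op(OP_SET, "a");   /* forwarded to the replication server */
    return 0;
}
```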
Abstract:
A method of using a buffer within an indexing accelerator during periods of inactivity comprises flushing indexing-specific data located in the buffer, disabling a controller within the indexing accelerator, handing control of the buffer over to a higher-level cache, and selecting one of a number of operating modes for the buffer. An indexing accelerator comprises a controller and a buffer communicatively coupled to the controller, in which, during periods of inactivity, the controller is disabled and an operating mode is chosen from among a number of operating modes under which the buffer will be used.
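The idle transition can be pictured as the C sketch below. The specific operating modes listed (victim cache, prefetch buffer, write buffer, powered off) are assumptions about what a repurposed buffer might do; the abstract only says a mode is selected from a number of modes.

```c
#include <stdio.h>

typedef enum { MODE_VICTIM_CACHE, MODE_PREFETCH_BUFFER,
               MODE_WRITE_BUFFER, MODE_POWERED_OFF } buffer_mode;

typedef struct {
    int controller_enabled;
    buffer_mode mode;
} indexing_accelerator;

static void flush_indexing_data(indexing_accelerator *a) {
    (void)a;
    puts("flushing indexing-specific data from the buffer");
}

/* On inactivity: flush, disable the controller, then hand the buffer to
 * the higher-level cache under the selected operating mode. */
void enter_idle(indexing_accelerator *a, buffer_mode m) {
    flush_indexing_data(a);
    a->controller_enabled = 0;
    a->mode = m;
    printf("buffer handed to higher-level cache, mode=%d\n", (int)m);
}

int main(void) {
    indexing_accelerator acc = {1, MODE_POWERED_OFF};
    enter_idle(&acc, MODE_VICTIM_CACHE);
    return 0;
}
```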
Abstract:
According to an example, an indexing accelerator with memory-level parallelism (MLP) support may include a request decoder to receive indexing requests. The request decoder may include a plurality of configuration registers. A controller may be communicatively coupled to the request decoder to support MLP by assigning an indexing request of the received indexing requests to a configuration register of the plurality of configuration registers. A buffer may be communicatively coupled to the controller to store data related to an indexing operation of the controller for responding to the indexing request.
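A minimal sketch, in C, of how parking each accepted request in its own configuration register yields memory-level parallelism: with several requests resident at once, the controller can keep their memory accesses in flight concurrently. The register count and field layout are assumptions.

```c
#include <stdio.h>

#define NUM_CONFIG_REGS 4

typedef struct { int in_use; int request_id; } config_reg;

typedef struct { config_reg regs[NUM_CONFIG_REGS]; } request_decoder;

/* Each accepted indexing request occupies one configuration register,
 * so up to NUM_CONFIG_REGS requests can be outstanding at once. */
int assign_request(request_decoder *d, int request_id) {
    for (int i = 0; i < NUM_CONFIG_REGS; i++) {
        if (!d->regs[i].in_use) {
            d->regs[i].in_use = 1;
            d->regs[i].request_id = request_id;
            return i;            /* register index = outstanding slot */
        }
    }
    return -1;                   /* all slots busy: caller must wait */
}

int main(void) {
    request_decoder dec = {0};
    for (int id = 100; id < 106; id++)
        printf("request %d -> reg %d\n", id, assign_request(&dec, id));
    return 0;
}
```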
Abstract:
Storing data in persistent hybrid memory includes promoting a memory block from non-volatile memory to a cache based on usage of the memory block according to a promotion policy, tracking modifications to the memory block while it is in the cache, and writing the memory block back into the non-volatile memory after it is modified in the cache, based on a writing policy that keeps the number of modified memory blocks at or below a threshold while maintaining the memory block in the cache.
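A C sketch of the writing policy's key property: a modified block is written back without being evicted, and write backs are triggered only to hold the dirty-block count at or below the threshold. The slot layout, threshold value, and promotion trigger are assumptions; the promotion policy itself is stubbed.

```c
#include <stdio.h>

#define CACHE_SLOTS     8
#define DIRTY_THRESHOLD 3   /* writing policy: at most 3 modified blocks */

typedef struct { int block_id; int dirty; int valid; } cache_line;

static cache_line cache[CACHE_SLOTS];
static int dirty_count;

/* Promotion policy (stubbed): a block judged hot enough by usage is
 * copied from NVM into the cache. */
static void promote_block(int slot, int block_id) {
    cache[slot] = (cache_line){ block_id, 0, 1 };
    printf("promoted block %d from NVM into cache\n", block_id);
}

static void write_back_to_nvm(cache_line *l) {
    printf("wrote block %d back to NVM\n", l->block_id);
    l->dirty = 0;            /* the block stays cached, now clean */
    dirty_count--;
}

/* Keep the count of modified cached blocks at or below the threshold
 * by writing some back, without evicting them. */
static void enforce_writing_policy(void) {
    for (int i = 0; i < CACHE_SLOTS && dirty_count > DIRTY_THRESHOLD; i++)
        if (cache[i].valid && cache[i].dirty)
            write_back_to_nvm(&cache[i]);
}

static void modify_block(int slot) {
    if (!cache[slot].dirty) {
        cache[slot].dirty = 1;
        dirty_count++;
        enforce_writing_policy();
    }
}

int main(void) {
    for (int i = 0; i < 5; i++) promote_block(i, 100 + i);
    for (int i = 0; i < 5; i++) modify_block(i);  /* 4th dirty block triggers a write back */
    return 0;
}
```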
Abstract:
According to an example, a hybrid secure non-volatile main memory (HSNVMM) may include a non-volatile memory (NVM) to store a non-working set of memory data in an encrypted format, and a dynamic random-access memory (DRAM) buffer to store a working set of memory data in a decrypted format. A cryptographic engine may selectively encrypt and decrypt memory pages in the working and non-working sets of memory data. A security controller may control memory data placement and replacement in the NVM and the DRAM buffer based on memory data characteristics that include clean memory pages, dirty memory pages, working set memory pages, and non-working set memory pages. The security controller may further provide incremental encryption and decryption instructions to the cryptographic engine based on the memory data characteristics.
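The placement decision can be sketched as below, in C. The policy shown (working-set pages decrypted into the DRAM buffer, dirty evictions re-encrypted to NVM, clean pages dropped because their encrypted NVM copy is still valid) is one plausible reading of the characteristics listed in the abstract, with hypothetical names throughout; the cryptographic engine is stubbed.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  page_id;
    bool dirty;        /* modified since last encryption/write back */
    bool working_set;  /* currently part of the working set */
} page;

/* Stand-ins for the cryptographic engine; the security controller asks
 * for incremental (per-page) encryption/decryption rather than bulk. */
static void encrypt_page(page *p) { printf("encrypt page %d\n", p->page_id); }
static void decrypt_page(page *p) { printf("decrypt page %d\n", p->page_id); }

/* Placement sketch based on the page's characteristics. */
void place(page *p) {
    if (p->working_set) {
        decrypt_page(p);
        printf("page %d -> DRAM buffer (plaintext)\n", p->page_id);
    } else if (p->dirty) {
        encrypt_page(p);
        printf("page %d -> NVM (ciphertext)\n", p->page_id);
    } else {
        printf("page %d clean: encrypted NVM copy already valid\n", p->page_id);
    }
}

int main(void) {
    page hot   = {1, false, true};
    page dirty = {2, true,  false};
    page clean = {3, false, false};
    place(&hot); place(&dirty); place(&clean);
    return 0;
}
```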
Abstract:
Systems and methods of vertically aggregating tiered servers in a data center are disclosed. An example method includes partitioning a plurality of servers in the data center to form an array of aggregated end points (AEPs). Multiple servers within each AEP are connected by an intra-AEP network fabric, and different AEPs are connected by an inter-AEP network. Each AEP has one or more central hub servers acting as end points on the inter-AEP network. The method includes resolving a target server identification (ID) for a request received at a first AEP. If the target server ID is the central hub server in the first AEP, the request is handled in the first AEP. If the target server ID is another server local to the first AEP, the request is redirected over the intra-AEP fabric. If the target server ID is a server in a second AEP, the request is transferred to the second AEP.
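The three-way routing decision maps directly onto a small C sketch. The dense ID-to-AEP mapping (consecutive server IDs, a fixed number per AEP) is an assumption made only so that the target AEP can be resolved arithmetically.

```c
#include <stdio.h>

#define SERVERS_PER_AEP 16

typedef struct { int aep_id; int hub_server_id; } aep;

typedef enum { HANDLE_LOCALLY, REDIRECT_INTRA_AEP, TRANSFER_INTER_AEP } route;

/* Resolve the target server ID into one of the abstract's three cases:
 * the hub handles its own requests, forwards over the intra-AEP fabric
 * to other local servers, and transfers over the inter-AEP network
 * otherwise. */
route resolve(const aep *self, int target_server_id) {
    int target_aep = target_server_id / SERVERS_PER_AEP;
    if (target_server_id == self->hub_server_id)
        return HANDLE_LOCALLY;
    if (target_aep == self->aep_id)
        return REDIRECT_INTRA_AEP;
    return TRANSFER_INTER_AEP;
}

int main(void) {
    aep first = { .aep_id = 0, .hub_server_id = 0 };
    printf("%d %d %d\n",
           resolve(&first, 0),    /* hub itself: handle locally     */
           resolve(&first, 5),    /* same AEP: intra-AEP fabric     */
           resolve(&first, 40));  /* second AEP: inter-AEP network  */
    return 0;
}
```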