Abstract:
A system, apparatus, device, or method to output different iterations of data entities. The method may include establishing a first data entity and establishing a first state for the first data entity. The method may include establishing a second state for the first data entity. The method may include storing the first data entity, the first state, and the second state at a storage device. The method may include retrieving a first iteration of the first data entity exhibiting at least a portion of the first state. The method may include retrieving a second iteration of the first data entity exhibiting at least a portion of the second state. The method may include outputting the first iteration and the second iteration at an output time.
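The claimed flow could be sketched as follows; this is a minimal illustration, assuming a simple in-memory store, with all class names, state identifiers, and payloads being hypothetical rather than taken from the disclosure.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a store that keeps a data entity together with
# multiple named states and retrieves "iterations" of the entity, each
# exhibiting one stored state.

@dataclass
class DataEntity:
    name: str
    states: dict = field(default_factory=dict)  # state id -> state payload

class EntityStore:
    def __init__(self):
        self._entities = {}

    def establish(self, name):
        # Establish a data entity at the storage device (here, a dict).
        self._entities[name] = DataEntity(name)

    def establish_state(self, name, state_id, payload):
        # Establish and store a state for an existing entity.
        self._entities[name].states[state_id] = payload

    def iteration(self, name, state_id):
        # Retrieve an iteration of the entity exhibiting the given state.
        entity = self._entities[name]
        return {"entity": entity.name, **entity.states[state_id]}

store = EntityStore()
store.establish("avatar")
store.establish_state("avatar", "first", {"age": 30})
store.establish_state("avatar", "second", {"age": 60})
first = store.iteration("avatar", "first")
second = store.iteration("avatar", "second")
print(first, second)  # both iterations output at the same output time
```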
Abstract:
The present disclosure relates to a data storage device and a storage control method based on a log-structured merge tree. The log-structured merge tree comprises a plurality of SST files stored on at least one storage medium. The storage control method comprises: using a first filter to obtain a set of SST files matching a query key; using a second filter to globally sort the matching tags in the SST file set to generate a global tag set; and selecting, according to the global tag set, SST files on which to perform file I/O operations to read key-value pairs. By selecting SST files for file I/O operations according to the global tag set, the storage control method reduces the number of file I/O operations in data reading operations, thereby improving file I/O efficiency.
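The two-filter read path might look like the following sketch. It is illustrative only: the set-membership test stands in for a per-file Bloom-style first filter, and per-file sequence numbers stand in for the "matching tags" that the second filter sorts globally; the file layout is invented for the example.

```python
# Sketch of the described read path: the first filter narrows the candidate
# SST files for a query key; the second filter globally sorts the matching
# tags (here, sequence numbers) so only the top-ranked file is read.

sst_files = {
    "sst1": {"keys": {"a", "b"}, "seq": 5, "data": {"a": 1, "b": 2}},
    "sst2": {"keys": {"b", "c"}, "seq": 9, "data": {"b": 20, "c": 3}},
    "sst3": {"keys": {"d"}, "seq": 7, "data": {"d": 4}},
}

def read(key):
    # First filter: cheap membership test per SST file (Bloom-filter stand-in).
    candidates = [name for name, f in sst_files.items() if key in f["keys"]]
    # Second filter: globally sort matching tags, newest sequence number first.
    global_tag_set = sorted(candidates, key=lambda n: sst_files[n]["seq"],
                            reverse=True)
    # File I/O is performed on the selected files in tag order; the freshest
    # file answers first, avoiding reads of every candidate.
    for name in global_tag_set:
        value = sst_files[name]["data"].get(key)
        if value is not None:
            return value
    return None

print(read("b"))  # 20 -- served from sst2 (seq 9) with a single file read
```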
Abstract:
The technology disclosed relates to streamlined analysis of the security posture of a cloud environment. In particular, the disclosed technology relates to accessing permissions data and access control data for pairs of compute resources and storage resources in the cloud environment, and tracing network communication paths between the pairs of compute resources and storage resources based on the permissions data and the access control data. The technology further relates to accessing sensitivity classification data for objects in the storage resources, qualifying a subset of the pairs of compute resources and storage resources as vulnerable to breach attack based on an evaluation of the permissions data, the access control data, and the sensitivity classification data against a set risk criterion, and generating a representation of propagation of the breach attack along the network communication paths, the representation identifying relationships between the subset of the pairs of compute resources and storage resources.
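The qualification step could be sketched as below. All resource names, the attribute model, and the risk criterion are illustrative assumptions; the disclosure does not specify these data shapes.

```python
# Sketch: qualify (compute, storage) pairs as breach-vulnerable when a
# traced path is reachable, permissions allow access, and the storage
# holds objects meeting the sensitivity risk criterion.

pairs = [
    {"compute": "vm-1", "storage": "bucket-a", "permits_read": True,
     "acl_open": True, "sensitivity": "high"},
    {"compute": "vm-2", "storage": "bucket-b", "permits_read": True,
     "acl_open": False, "sensitivity": "high"},
    {"compute": "vm-3", "storage": "bucket-c", "permits_read": True,
     "acl_open": True, "sensitivity": "low"},
]

def vulnerable(pair, risk_criterion=frozenset({"high"})):
    # Evaluate permissions data, access control data, and sensitivity
    # classification against the set risk criterion.
    return (pair["permits_read"] and pair["acl_open"]
            and pair["sensitivity"] in risk_criterion)

subset = [p for p in pairs if vulnerable(p)]
# Representation of propagation: edges along the network communication paths.
propagation = [(p["compute"], p["storage"]) for p in subset]
print(propagation)  # [('vm-1', 'bucket-a')]
```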
Abstract:
The present invention discloses a heterogeneous computation framework for Association Rule Mining (ARM) using Micron's Automata Processor (AP). This framework is based on the Apriori algorithm. Two automaton designs are proposed to match and count individual itemsets. Several performance improvement strategies are proposed, including minimizing the number of reporting vectors and reducing reconfiguration delays. The experimental results show up to 94× speedups of the proposed AP-accelerated Apriori on six synthetic and real-world datasets, when compared with the Apriori single-core CPU implementation. The proposed AP-accelerated Apriori solution also outperforms the state-of-the-art multicore and GPU implementations of the Equivalence Class Transformation (Eclat) algorithm on big datasets.
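The matching-and-counting step that the AP automata accelerate is, in software, the classic Apriori support count. A minimal sketch follows; the dataset and the support threshold are invented for illustration and bear no relation to the six benchmark datasets.

```python
from itertools import combinations

# Software sketch of the Apriori step the automata implement in hardware:
# match each candidate k-itemset against every transaction, count its
# occurrences, and keep candidates meeting the minimum support.

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"},
                {"b", "c"}, {"a", "b", "c"}]
min_support = 3

def frequent_itemsets(k):
    items = sorted(set().union(*transactions))
    counts = {}
    for candidate in combinations(items, k):
        c = set(candidate)
        # "Matching": the candidate is a subset of the transaction.
        counts[candidate] = sum(1 for t in transactions if c <= t)
    # "Counting" filter: keep itemsets at or above the support threshold.
    return {cand: n for cand, n in counts.items() if n >= min_support}

print(frequent_itemsets(2))  # all three pairs occur in 3 transactions
```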
Abstract:
A system and a method for data dispatch processing in a big data system are provided. The system includes a plurality of computing machines and a database cluster. The method includes disassembling a computing procedure into a plurality of processing elements. The method also includes identifying, in the computing procedure, a database accessing point for accessing a target data node among the data nodes. The method further includes configuring the processing elements to the computing machines according to the database accessing point, and transmitting a data tuple corresponding to the computing procedure according to the processing elements configured to the computing machines and a data transmitting cost between the computing machines. Accordingly, the method effectively improves system performance for transmitting big data.
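The placement idea could be sketched as follows. The machines, processing elements, and cost table are illustrative assumptions; the sketch only shows co-locating elements with the database accessing point to minimize inter-machine transmission cost.

```python
# Sketch: place processing elements on computing machines so that the
# element acting as the database accessing point is co-located with the
# target data node, and other elements follow the cheapest path to it.

machines = ["m1", "m2"]               # m2 hosts the target data node
elements = ["parse", "filter", "db_access"]
db_access_point = "db_access"         # element that reads the target data node
transmit_cost = {("m1", "m1"): 0, ("m2", "m2"): 0,
                 ("m1", "m2"): 5, ("m2", "m1"): 5}

def place():
    # Configure the accessing element onto the data node's machine.
    placement = {db_access_point: "m2"}
    anchor = placement[db_access_point]
    for e in elements:
        if e not in placement:
            # Pick the machine with the cheapest transmission cost to the
            # database accessing point for every remaining element.
            placement[e] = min(machines,
                               key=lambda m: transmit_cost[(m, anchor)])
    return placement

print(place())  # every element lands on m2, so tuples move at zero cost
```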
Abstract:
A database management system (DBMS) manages a database residing in a second storage device whose access speed is lower than that of a first storage device. During the execution of a query, the DBMS dynamically generates two or more tasks that are executable in parallel. The DBMS generates task start information, which is information representing the content of the execution of a task, manages the task start information, and executes the content represented by the task start information through the task. The task start information includes a data address set existing in the second storage device. The DBMS controls movement of data address sets between the first storage device and the second storage device based on the management state of the task start information. In addition, the DBMS selects task start information based on whether or not the data address set exists in the first storage device.
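The selection policy might be sketched like this: tasks whose address sets already reside in the fast device start immediately, while the remaining sets are moved up. The address values and task names are hypothetical.

```python
# Sketch: prefer task start information whose data address set already
# resides in the fast first storage device; prefetch the rest from the
# slower second storage device.

first_storage = {0x10, 0x20}          # fast device: resident data addresses
task_start_info = [
    {"task": "t1", "addrs": {0x10, 0x20}},  # fully resident -> select now
    {"task": "t2", "addrs": {0x30}},        # still on the slow device
]

def select_and_prefetch():
    ready, waiting = [], []
    for info in task_start_info:
        # Select based on whether the data address set exists in the
        # first storage device.
        (ready if info["addrs"] <= first_storage else waiting).append(info)
    for info in waiting:
        # Control movement: bring the missing address sets up for later.
        first_storage.update(info["addrs"])
    return [info["task"] for info in ready]

print(select_and_prefetch())  # ['t1']; t2's addresses are now prefetched
```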
Abstract:
The method includes (A) acquiring storage location information capable of identifying a volume that stores data, together with access type information; (B) acquiring volume management information capable of identifying the storage unit that stores the volume; (C) identifying the volume of the data to be accessed, identifying the storage unit storing the volume, and identifying the storage method of the storage unit; (D) identifying the type of access to the data to be accessed; (E) determining, based on the storage method and the type of access, whether the data needs to be moved to another storage unit with a different storage method; and (F) giving an indication to move the data if it is determined in (E) that the data needs to be moved.
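Steps (E) and (F) could be sketched with a simple policy table. The mapping from access type to preferred storage method is an illustrative assumption, not part of the claim.

```python
# Sketch of steps (E) and (F): decide whether data must move based on the
# current storage method and the identified access type, then indicate the
# move. The preference table is an assumed example policy.

PREFERRED = {"random": "ssd", "sequential": "hdd"}  # access type -> method

def needs_move(current_method, access_type):
    # Step (E): the data should move when the unit's storage method does
    # not match the method preferred for this access type.
    return PREFERRED[access_type] != current_method

def plan(volume, current_method, access_type):
    if needs_move(current_method, access_type):
        # Step (F): give an indication of moving the data.
        return f"move {volume} to {PREFERRED[access_type]}"
    return f"keep {volume} on {current_method}"

print(plan("vol1", "hdd", "random"))      # random access on HDD -> move
print(plan("vol2", "hdd", "sequential"))  # sequential on HDD -> keep
```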
Abstract:
Technologies are described for storing and reporting user activities within a computing environment. For example, bitsets (e.g., compressed and/or uncompressed bitsets) can be used to store activities (e.g., where each activity is a bit in the bitset in chronological order). Separate bitsets can be maintained for followable aspects of the activities (e.g., a separate bitset for each unique followable). Activity streams can be produced from the compressed bitsets (e.g., custom streams reflecting followables designated by users).
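The bitset scheme can be shown with plain Python integers as bitsets, one bit per activity in chronological order and one bitset per followable; the activities and followable names are invented for illustration.

```python
# Sketch: one bit per activity (chronological order); a separate bitset
# per followable; a user's custom stream is the OR of followed bitsets.

activities = ["alice edits page", "bob comments",
              "alice uploads file", "carol edits page"]

followable_bitsets = {          # bit i set -> activities[i] involves it
    "alice": 0b0101,            # activities 0 and 2
    "bob":   0b0010,            # activity 1
    "carol": 0b1000,            # activity 3
}

def stream(followed):
    mask = 0
    for f in followed:
        mask |= followable_bitsets[f]   # combine the followed bitsets
    # Expand the combined bitset back into chronological activities.
    return [a for i, a in enumerate(activities) if mask >> i & 1]

print(stream(["alice", "carol"]))
# ['alice edits page', 'alice uploads file', 'carol edits page']
```

Real deployments would compress these bitsets (the abstract mentions compressed bitsets), but the OR-then-expand structure is the same.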
Abstract:
Query processing systems and methods are disclosed herein. In an example system, query information is received over a network for processing a query. A first processing architecture loads a set of data associated with the query into a shared memory. A second processing architecture accesses the set of data from the shared memory. In one example, the first and second processing architectures and the shared memory are integrated in a hardware chip (e.g., a chiplet containing several processor architectures, such as CPU and a graphics processing unit (GPU)). The query is processed based on the set of data accessed from the shared memory using the second processing architecture to generate a query result. The query result is provided over the network. In this manner, a computing device may execute a query based on different processing systems contained therein.
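The two-architecture split could be sketched as two stages over one shared buffer: a loader stage that stages the query's data, and an executor stage that computes over it without copying. The table contents and query shape are invented; a Python dict stands in for the on-chip shared memory.

```python
# Sketch: a first processing architecture loads query data into a shared
# memory; a second architecture processes the same data in place to
# produce the query result, with no copy between the two stages.

shared_memory = {}  # stand-in for the memory shared by both architectures

def load_stage(query):
    # First architecture (CPU-like): fetch rows relevant to the query
    # and place them in the shared memory.
    table = [("a", 1), ("b", 2), ("a", 3)]
    shared_memory["rows"] = [r for r in table if r[0] == query["key"]]

def execute_stage():
    # Second architecture (GPU-like): operate directly on the shared
    # buffer to generate the query result.
    return sum(v for _, v in shared_memory["rows"])

query = {"key": "a"}
load_stage(query)
print(execute_stage())  # 4: aggregates the two matching rows
```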
Abstract:
An information handling system includes a hardware device having a query processing engine to provide queries into source data and to provide responses to the queries. A processor stores a query to a query address in the memory device, issues a command to the hardware device, the command including the query address and a response address in the memory device, and retrieves a response to the query from the response address. The hardware device retrieves the query from the query address in response to the command, provides the query to the query processing engine, and stores a response to the query from the query processing engine to the response address.
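The command protocol could be sketched as a mailbox exchange: the processor writes the query at a query address, issues a command carrying both addresses, and the device-side engine writes its answer at the response address. The addresses, the dict standing in for the memory device, and the key-value source data are all illustrative.

```python
# Sketch: processor stores a query at a query address, issues a command
# with query and response addresses, and the hardware device's query
# processing engine stores its response at the response address.

memory = {}  # stand-in for the shared memory device

def issue_command(query_addr, response_addr):
    # Device side: retrieve the query from the query address, run the
    # query processing engine over the source data, store the response.
    query = memory[query_addr]
    source_data = {"k1": "v1", "k2": "v2"}
    memory[response_addr] = source_data.get(query)

memory[0x100] = "k2"           # processor stores the query
issue_command(0x100, 0x200)    # command carries both addresses
print(memory[0x200])           # processor retrieves the response: v2
```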