Abstract:
Disclosed are a method and system for transmitting monitoring data, and a periphery device. By the method, a plurality of first files are acquired by segmenting monitoring data acquired from at least one monitoring device, and the plurality of first files are transmitted to a cloud. Since the cloud acquires the monitoring data in the form of the first files, it is convenient to analyze and manage the monitoring data in the different first files, and flexibility is improved. In addition, because the plurality of first files transmitted by the periphery device to the cloud share the same file format, only one parsing engine capable of parsing the first files needs to be installed on the cloud. In this way, the efficiency of file parsing is improved, the complexity is lowered, and the memory occupied by the parsing engine may be reduced.
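A minimal Python sketch of the segmentation step described above, assuming a hypothetical fixed CHUNK_SIZE, a simple header-plus-payload layout, and a caller-supplied send() callable standing in for the upload to the cloud; none of these names come from the abstract. The single shared format illustrates why one parsing engine suffices on the cloud side.

```python
import io
import json

CHUNK_SIZE = 64 * 1024  # hypothetical segment size for each "first file"

def segment_monitoring_data(stream: io.BufferedIOBase, device_id: str):
    """Split a monitoring-data stream into same-format 'first files'."""
    index = 0
    while True:
        payload = stream.read(CHUNK_SIZE)
        if not payload:
            break
        # Every first file shares one format: a small JSON header plus raw payload,
        # so the cloud only needs a single parsing engine.
        header = json.dumps({"device": device_id, "seq": index, "len": len(payload)}).encode()
        yield header + b"\n" + payload
        index += 1

def upload_to_cloud(first_files, send):
    """Transmit each first file to the cloud via a caller-supplied send() callable."""
    for f in first_files:
        send(f)

if __name__ == "__main__":
    data = io.BytesIO(b"\x00" * 200_000)          # stand-in for data from a monitoring device
    sent = []
    upload_to_cloud(segment_monitoring_data(data, "cam-01"), sent.append)
    print(len(sent), "first files transmitted")   # -> 4
```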
Abstract:
A management server in a parallel data system stores a correspondence relationship between a first response time for communication processing and a second response time for data processing, executed by each of a plurality of data servers in relation to first processing by a client node. At a time of execution of second processing by the client node, the management server acquires, based on the first response time and the second response time, a third response time required for communication processing and a fourth response time required for data processing, related to the second processing, in each of the plurality of data servers. The management server determines combinations of the data servers to be used to execute the second processing, based on the third response time and the fourth response time, and selects, among the determined combinations, a combination that satisfies a response time to be satisfied by the second processing and that includes the smallest number of processor cores allocated to the communication processing of a data server.
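The selection step can be pictured with a small sketch. The per-server timing figures, the assumption that the servers in a combination work in parallel, and the fixed combination size are all hypothetical stand-ins for values derived from the stored first and second response times.

```python
from itertools import combinations

# Hypothetical per-server estimates standing in for the third response time
# (communication processing), the fourth response time (data processing),
# and the cores allocated to communication processing.
servers = {
    "ds1": {"comm_ms": 4.0, "data_ms": 9.0, "comm_cores": 2},
    "ds2": {"comm_ms": 6.0, "data_ms": 5.0, "comm_cores": 1},
    "ds3": {"comm_ms": 3.0, "data_ms": 7.0, "comm_cores": 3},
}

def estimated_response(combo):
    # Assume the servers in a combination work in parallel, so the slowest dominates.
    return max(servers[s]["comm_ms"] + servers[s]["data_ms"] for s in combo)

def select_combination(required_ms, size=2):
    feasible = [c for c in combinations(servers, size) if estimated_response(c) <= required_ms]
    # Among feasible combinations, pick the one using the fewest communication cores.
    return min(feasible, key=lambda c: sum(servers[s]["comm_cores"] for s in c), default=None)

print(select_combination(required_ms=13.0))   # -> ('ds1', 'ds2')
```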
Abstract:
A system facilitates access to data in a network and includes a cache that stores instructions. A processor executes the instructions, including caching processing configured to integrate caching into a local cluster file system and to cache local file data in the cache based on fetching file data on demand from a remote cluster file system. The cache is visible to file system clients as a Portable Operating System Interface (POSIX) compliant file system. Applications execute on a multi-node cache cluster using POSIX semantics via a POSIX compliant file system interface. The data cache is consistent for updates both locally and remotely.
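A rough sketch of the fetch-on-demand behaviour, assuming a hypothetical RemoteClusterFS stand-in for the remote cluster file system and a plain local directory as the cache; it does not model the POSIX-level integration or the consistency protocol the abstract describes, only the read-through caching idea.

```python
import os
import tempfile

class RemoteClusterFS:
    """Stand-in for the remote cluster file system (assumed interface)."""
    def __init__(self, files):
        self._files = files
    def fetch(self, path):
        return self._files[path]

class CachingFS:
    """Read-through cache: file data is fetched on demand and kept locally."""
    def __init__(self, remote, cache_dir):
        self.remote = remote
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)
    def _local(self, path):
        return os.path.join(self.cache_dir, path.strip("/").replace("/", "_"))
    def read(self, path):
        local = self._local(path)
        if not os.path.exists(local):          # cache miss: fetch from the remote cluster
            with open(local, "wb") as f:
                f.write(self.remote.fetch(path))
        with open(local, "rb") as f:           # cache hit path: served from local storage
            return f.read()

remote = RemoteClusterFS({"/project/data.bin": b"hello"})
fs = CachingFS(remote, tempfile.mkdtemp())
print(fs.read("/project/data.bin"))   # first call fetches; later calls hit the local cache
```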
Abstract:
Presented herein are methods, non-transitory computer readable media, and devices for integrating a hybrid model of fine-grained locking and data-partitioning, wherein fine-grained locking is added to existing systems that are based on hierarchical data-partitioning in order to increase parallelism with minimal code re-write. Methods for integrating a hybrid model of fine-grained locking and data-partitioning are disclosed which include: creating, by a network storage server, a plurality of domains for execution of processes of the network storage server, the plurality of domains including a domain; creating a hierarchy of storage filesystem subdomains within the domain, wherein each of the subdomains corresponds to one or more types of processes, and wherein at least one of the storage filesystem subdomains maps to a data object that is locked via fine-grained locking; and assigning processes for simultaneous execution by the storage filesystem subdomains within the domain and the at least one subdomain that maps to the data object locked via fine-grained locking.
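A simplified illustration of the hybrid model, with hypothetical Subdomain and DataObject classes that are not taken from the abstract: a subdomain lock serializes the processes mapped to that subdomain, while one data object carries its own fine-grained lock so work on it can run alongside subdomain work.

```python
import threading

class Subdomain:
    """A storage filesystem subdomain; runs at most one of its processes at a time."""
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()   # serializes work assigned to this subdomain

class DataObject:
    """A data object protected by its own fine-grained lock instead of a subdomain."""
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()
        self.value = 0

# A domain containing a small hierarchy of subdomains, plus a data object
# that at least one subdomain maps to and that is locked via fine-grained locking.
domain = {"vol": Subdomain("vol"), "aggr": Subdomain("aggr")}
shared_counter = DataObject("counter")

def vol_process():
    with domain["vol"].lock:           # work confined to the 'vol' subdomain
        pass

def counter_process():
    with shared_counter.lock:          # fine-grained lock on the data object itself
        shared_counter.value += 1

threads = [threading.Thread(target=t) for t in (vol_process, counter_process, counter_process)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_counter.value)            # -> 2
```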
Abstract:
A network system for providing a long haul network connection between endpoint devices is disclosed. The network system includes first and second endpoint devices, first and second exchange servers, a first access point server coupled between the first endpoint device and the first exchange server, a second access point server coupled between the second endpoint device and the second exchange server, a first storage node coupled between the first exchange server and the second exchange server, and a second storage node coupled between the first exchange server and the second exchange server. The first exchange server is configured to convert first packetized traffic into a carrier file and write the carrier file to the second storage node. The second exchange server is configured to read the carrier file from the second storage node and convert the carrier file into second packetized traffic.
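One way to picture the carrier-file conversion, under the assumption that a storage node can be modeled as a simple key/value store and that packets can be bundled as base64-encoded entries in a JSON carrier file; the actual carrier format is not specified in the abstract.

```python
import base64
import json

class StorageNode:
    """Stand-in for a storage node shared by the two exchange servers."""
    def __init__(self):
        self.files = {}
    def write(self, name, data): self.files[name] = data
    def read(self, name): return self.files[name]

def packets_to_carrier_file(packets):
    """First exchange server: bundle packetized traffic into one carrier file."""
    return json.dumps([base64.b64encode(p).decode() for p in packets]).encode()

def carrier_file_to_packets(blob):
    """Second exchange server: recover packetized traffic from the carrier file."""
    return [base64.b64decode(p) for p in json.loads(blob)]

node = StorageNode()
outbound = [b"pkt-1", b"pkt-2", b"pkt-3"]
node.write("flow-42.car", packets_to_carrier_file(outbound))   # written by the first exchange server
inbound = carrier_file_to_packets(node.read("flow-42.car"))    # read back by the second exchange server
assert inbound == outbound
print(inbound)
```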
Abstract:
A computer-implemented method includes: scheduling computing jobs; processing data by executing the computing jobs; arranging the data in a file system; and managing the arranging of the data by monitoring a performance parameter of the file system, extracting information about the scheduling, and tuning one of the arranging and the scheduling based on the performance parameter and the information about the scheduling. An article of manufacture includes a computer-readable medium storing signals representing instructions for a computer program executing the method.
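A toy feedback loop along these lines, with an invented throughput measurement, a stripe_width layout knob, and a max_parallel_jobs scheduling knob; it only illustrates tuning one of the arranging and the scheduling from a monitored performance parameter, not the method's actual parameters.

```python
import random

def monitor_throughput():
    """Stand-in for measuring a file system performance parameter (MB/s)."""
    return random.uniform(50, 150)

def tune(layout, schedule, throughput, target=100.0):
    """Tune either the data arrangement or the job scheduling based on the measurement."""
    if throughput < target:
        if layout["stripe_width"] < 8:
            layout["stripe_width"] *= 2          # re-arrange data across more storage targets
        else:
            schedule["max_parallel_jobs"] -= 1   # or throttle the job scheduler instead
    return layout, schedule

layout = {"stripe_width": 2}
schedule = {"max_parallel_jobs": 16}
for _ in range(5):                               # one tuning step per monitoring interval
    layout, schedule = tune(layout, schedule, monitor_throughput())
print(layout, schedule)
```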
Abstract:
Data storage systems and methods for storing data in computing nodes of a super computer or compute cluster are described herein. The super computer storage may be integrated with or coupled with a primary storage system. In addition to a CPU and memory, non-volatile memory is included with the computing nodes as local storage. A high speed interconnect remote direct memory access (HRI) unit is also included with each computing node. When data bursts occur, data may be stored by a first computing node on the local storage of a second computing node through the HRI units of the computing nodes, bypassing the CPU of the second computing node. Further, the local storage of other computing nodes may be used to store redundant copies of data from the first computing node, making the super computer's data resilient without interfering with the CPUs of the other computing nodes.
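A simulation-style sketch of burst writes to a peer node's local storage, where hri_remote_write is a hypothetical stand-in for the HRI unit: it touches only the peer's storage dictionary and never calls any peer-side compute function, mimicking the CPU bypass. Class and method names are illustrative, not taken from the abstract.

```python
class ComputeNode:
    """Compute node with local non-volatile storage and an HRI-style remote-access unit."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.local_storage = {}          # stand-in for the node's non-volatile memory

    def hri_remote_write(self, peer, key, data):
        # Models a direct write into the peer's local storage that bypasses the peer's CPU:
        # only the peer's storage dictionary is touched, no peer-side function runs.
        peer.local_storage[key] = data

    def burst_write(self, peers, key, data, replicas=2):
        self.local_storage[key] = data                        # primary copy stays local
        for peer in peers[:replicas]:                         # redundant copies for resilience
            self.hri_remote_write(peer, f"{self.name}/{key}", data)

n1, n2, n3 = (ComputeNode(f"node{i}", 1 << 30) for i in (1, 2, 3))
n1.burst_write([n2, n3], "checkpoint-0007", b"\x00" * 4096)
print(sorted(n2.local_storage), sorted(n3.local_storage))     # each peer holds a redundant copy
```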
Abstract:
A system and method for parallel file system traversal using multiple job executors is disclosed. The system includes a pool of job executors, a job queue, and a trigger tracker. An object, representative of a node in the filesystem, is added (i.e., pushed) to the job queue for processing by a job executor. The job queue assigns (i.e., pops) objects to job executors in accordance with a LIFO (Last In First Out) ordering. Then the job executor performs an action such as copy. In one embodiment, the trigger tracker follows the processing of the child nodes of a particular parent node. Thus, the filesystem is traversed by several job executors at the same time.
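A compact sketch of such a traversal using Python's standard LifoQueue and a fixed-size pool of executor threads; the worker count and the tally action are illustrative, error handling is reduced to skipping unreadable directories, and the trigger tracker is not modeled.

```python
import os
import queue
import threading

def traverse(root, action, workers=4):
    """Traverse a filesystem tree with a pool of job executors sharing a LIFO job queue."""
    jobs = queue.LifoQueue()           # the last node pushed is the first one popped
    jobs.put(root)

    def executor():
        while True:
            path = jobs.get()          # blocks until a node is assigned to this executor
            try:
                action(path)                                   # e.g. copy, stat, checksum
                if os.path.isdir(path) and not os.path.islink(path):
                    try:
                        children = os.listdir(path)
                    except OSError:
                        children = []                          # skip unreadable directories
                    for child in children:                     # push child nodes for any executor
                        jobs.put(os.path.join(path, child))
            finally:
                jobs.task_done()

    for _ in range(workers):
        threading.Thread(target=executor, daemon=True).start()
    jobs.join()                        # returns once every queued node has been processed

if __name__ == "__main__":
    count = [0]
    lock = threading.Lock()
    def tally(path):
        with lock:
            count[0] += 1
    traverse(".", tally)
    print(count[0], "filesystem nodes visited")
```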
Abstract:
A computer system splits a data space to partition data between processors or processes. The data space may be split, using a decision tree, into sub-regions which need not be orthogonal to the axes defined by the data space's parameters. The decision tree can have neural networks in each of its non-terminal nodes that are trained on, and are used to partition, training data. Each terminal, or leaf, node can have a hidden layer neural network trained on the training data that reaches that terminal node. The training of the non-terminal nodes' neural networks can be performed on one processor, and the training of the leaf nodes' neural networks can be run on separate processors. Different target values can be used for training the networks of different non-terminal nodes. The non-terminal node networks may be hidden layer neural networks. Each non-terminal node may automatically send a desired ratio of the training records it receives to each of its child nodes, so that the leaf node networks each receive approximately the same number of training records. The system may automatically configure the tree to have a number of leaf nodes equal to the number of separate processors available to train leaf node networks. After the non-terminal and leaf node networks have been trained, the records of a large database can be passed through the tree for classification or for estimation of certain parameter values.
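A stripped-down sketch of routing records through such a tree, where each non-terminal node's "network" is reduced to a single linear unit with hand-picked weights; in the described system these would be trained neural networks, and each leaf's records would be handed to a separate processor for training its own network.

```python
import random

class NonTerminalNode:
    """Routes records to its children using a tiny linear unit (stand-in for a trained NN)."""
    def __init__(self, weights, bias, left, right):
        self.weights, self.bias, self.left, self.right = weights, bias, left, right
    def route(self, record):
        score = sum(w * x for w, x in zip(self.weights, record)) + self.bias
        return self.left if score < 0 else self.right

class LeafNode:
    """Collects the records that reach it; each leaf's records go to a separate processor."""
    def __init__(self, name):
        self.name, self.records = name, []

def partition(tree, records):
    for r in records:
        node = tree
        while isinstance(node, NonTerminalNode):
            node = node.route(r)
        node.records.append(r)

# Depth-2 tree: one non-terminal node over two leaves (weights here are purely illustrative).
leaves = [LeafNode("cpu0"), LeafNode("cpu1")]
tree = NonTerminalNode(weights=[1.0, -1.0], bias=0.0, left=leaves[0], right=leaves[1])

data = [(random.random(), random.random()) for _ in range(1000)]
partition(tree, data)
print({leaf.name: len(leaf.records) for leaf in leaves})   # roughly balanced partitions
```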
Abstract:
Example distributed storage systems, replication managers, and methods provide replication barriers for dependent data transfers between data stores. An object data store may include a barrier object and be configured to identify dependencies between a dependency set of data objects and the barrier object. When replicating data objects to another data store, the dependency set of data objects may be transferred first, delaying the transfer of the barrier object while the dependency set is being transferred.
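A minimal asyncio sketch of the barrier behaviour, with hypothetical replicate_object and replicate_with_barrier helpers: the dependency set is transferred concurrently, and the barrier object is transferred only after all of its dependencies complete.

```python
import asyncio

async def replicate_object(name, destination):
    await asyncio.sleep(0.01)            # stand-in for transferring one data object
    destination.append(name)

async def replicate_with_barrier(dependency_set, barrier_object, destination):
    """Transfer the dependency set first; the barrier object waits until they all finish."""
    await asyncio.gather(*(replicate_object(o, destination) for o in dependency_set))
    await replicate_object(barrier_object, destination)   # delayed until dependencies are done

destination = []
asyncio.run(replicate_with_barrier({"obj-a", "obj-b", "obj-c"}, "manifest-1", destination))
print(destination)          # 'manifest-1' always appears after obj-a, obj-b, obj-c
```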