Abstract:
Disclosed herein are a method for automatically allocating IP addresses for a distributed storage system in a large-scale torus network and an apparatus for the method. The method for automatically allocating IP addresses includes acquiring neighboring-node information from the multiple storage nodes that constitute the system; generating a torus network topology for the multiple storage nodes by combining the neighboring-node information; generating IP address information through which the position of each of the multiple storage nodes in a structure corresponding to the torus network topology can be identified; and allocating IP addresses to the multiple storage nodes by delivering the IP address information to them.
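As a rough illustration of the coordinate-based addressing described above, the sketch below (Python, with a hypothetical mapping from node IDs to (x, y, z) torus coordinates) derives an IPv4 address directly from each node's position so that the position can be read back from the address; it is a sketch under these assumptions, not the patented procedure itself.

from typing import Dict, Tuple

def assign_torus_ips(nodes: Dict[str, Tuple[int, int, int]],
                     prefix: int = 10) -> Dict[str, str]:
    """Map each node's (x, y, z) torus coordinate to an IPv4 address so that
    a node's position in the torus can be identified from its address."""
    ips = {}
    for node_id, (x, y, z) in nodes.items():
        # Each coordinate must fit in one IPv4 octet for this simple encoding.
        if not all(0 <= c <= 255 for c in (x, y, z)):
            raise ValueError(f"coordinate out of range for node {node_id}")
        ips[node_id] = f"{prefix}.{x}.{y}.{z}"
    return ips

if __name__ == "__main__":
    topology = {"n0": (0, 0, 0), "n1": (1, 0, 0), "n2": (0, 1, 0)}
    print(assign_torus_ips(topology))  # {'n0': '10.0.0.0', 'n1': '10.1.0.0', ...}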
Abstract:
Disclosed herein are a method for extending and shrinking a volume of a distributed file system based on a torus network and an apparatus for the same. The method for extending a volume includes searching for a volume neighboring the target volume whose extension is requested; determining a direction in which a data server is to be added, in consideration of whether the neighboring volume is to be extended; searching for multiple candidate data servers in that direction, based on the target volume; and adding any one of the multiple candidate data servers to the target volume in consideration of at least one of a server state and a network communication cost.
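The final selection step can be illustrated with a minimal sketch; the Server fields used here (healthy, hops) are assumed stand-ins for the server state and the network communication cost mentioned above, not the patent's actual data model.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    healthy: bool   # server state
    hops: int       # network communication cost to the target volume

def pick_candidate(candidates: List[Server]) -> Optional[Server]:
    """Pick the healthy candidate with the lowest communication cost."""
    healthy = [s for s in candidates if s.healthy]
    return min(healthy, key=lambda s: s.hops) if healthy else None

if __name__ == "__main__":
    servers = [Server("d1", True, 3), Server("d2", False, 1), Server("d3", True, 2)]
    print(pick_candidate(servers))  # Server(name='d3', healthy=True, hops=2)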
Abstract:
Disclosed herein is a distributed file system using a torus network. The distributed file system includes multiple servers. The location of a master server may be determined so as to shorten the latency of data input/output. The location of the master server may be determined such that the distance between the master server and the node farthest from it is minimized. When the location of the master server is determined, the characteristics of the torus network and the features of a propagation transmission scheme may be taken into consideration.
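A minimal sketch of this minimax placement idea follows; it assumes a regular grid of candidate positions and uses the per-axis wraparound distance of the torus, and it does not model the propagation-transmission details mentioned above.

from itertools import product
from typing import List, Tuple

def torus_distance(a, b, dims):
    # Per-axis ring (wraparound) distance, summed over all axes.
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

def place_master(nodes: List[Tuple[int, ...]], dims: Tuple[int, ...]) -> Tuple[int, ...]:
    """Return the grid position whose farthest node is nearest (minimax placement)."""
    candidates = product(*(range(d) for d in dims))
    return min(candidates, key=lambda c: max(torus_distance(c, n, dims) for n in nodes))

if __name__ == "__main__":
    print(place_master([(0, 0), (1, 2), (3, 3)], dims=(4, 4)))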
Abstract:
Disclosed are a method and apparatus for reading data in a distributed file system in which a client and a server are separated. In the method and apparatus, a prefetching operation is performed to provide a high-performance continuous read function even in the distributed file system, so that the optimized continuous read function of the local file system within the client can be effectively supported when an application program of the client requests continuous reading of a file (or chunk).
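A minimal client-side sketch of such prefetching is given below; fetch_chunk is a hypothetical remote-read call, and the sketch only illustrates sequential-access detection with one-chunk readahead rather than the patented mechanism.

class SequentialPrefetcher:
    def __init__(self, fetch_chunk, chunk_size=4 * 1024 * 1024):
        self.fetch_chunk = fetch_chunk   # remote read: fetch_chunk(file_id, offset, size)
        self.chunk_size = chunk_size
        self.next_expected = {}          # file_id -> offset that would continue the stream
        self.cache = {}                  # (file_id, offset) -> prefetched data

    def read(self, file_id, offset):
        data = self.cache.pop((file_id, offset), None)
        if data is None:
            data = self.fetch_chunk(file_id, offset, self.chunk_size)
        if self.next_expected.get(file_id) == offset:
            # The access pattern is sequential: prefetch the next chunk.
            # (A real client would issue this fetch asynchronously.)
            nxt = offset + self.chunk_size
            self.cache[(file_id, nxt)] = self.fetch_chunk(file_id, nxt, self.chunk_size)
        self.next_expected[file_id] = offset + self.chunk_size
        return data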
Abstract:
There are provided a system and method for providing a virtual desktop service using a cache server. A system for providing a virtual desktop service according to the invention includes a host server configured to provide a virtual desktop service to a client terminal using a virtual machine; a distributed file system configured to store data for the virtual machine; and a cache server that is provided for each host server group having at least one host server and that performs read or write processing of data using physically separate caches when a read or write of the data is requested by a virtual machine in the host server.
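The read/write separation can be sketched as follows; backing_store stands in for the distributed file system, and the two dictionaries stand in for the physically separate read and write caches, so this is an illustrative sketch rather than the claimed design.

class CacheServer:
    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.read_cache = {}    # e.g. a cache device dedicated to reads
        self.write_cache = {}   # e.g. a physically separate device for writes

    def read(self, key):
        if key in self.write_cache:          # newest data may still be staged for writing
            return self.write_cache[key]
        if key not in self.read_cache:
            self.read_cache[key] = self.backing_store[key]
        return self.read_cache[key]

    def write(self, key, value):
        self.write_cache[key] = value        # absorb writes before flushing

    def flush(self):
        for key, value in self.write_cache.items():
            self.backing_store[key] = value  # persist to the distributed file system
        self.write_cache.clear()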
Abstract:
Disclosed herein are a method for machine-learning parallelization using host CPUs of a multi-socket structure and an apparatus therefor. The method, performed by the apparatus for machine-learning parallelization using host CPUs of a multi-socket structure, includes a compile phase, in which a learning model is split at the layer level into pipeline stages and allocated to Non-Uniform Memory Access (NUMA) nodes corresponding to the respective CPU sockets, and a runtime phase, in which the parameters required for learning are initialized and multiple threads, generated in consideration of the policy of each parallelism algorithm, are executed by being allocated to the respective cores included in each NUMA node.
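A minimal sketch of the compile-phase split is shown below; it simply partitions an illustrative list of layer names into contiguous pipeline stages, one per NUMA node, and does not attempt to reproduce the patent's allocation policy.

from typing import List

def split_layers(layers: List[str], num_numa_nodes: int) -> List[List[str]]:
    """Assign a contiguous block of layers to each pipeline stage (NUMA node)."""
    stages: List[List[str]] = [[] for _ in range(num_numa_nodes)]
    per_stage = -(-len(layers) // num_numa_nodes)   # ceiling division
    for i, layer in enumerate(layers):
        stages[i // per_stage].append(layer)
    return stages

if __name__ == "__main__":
    model = ["embed", "conv1", "conv2", "fc1", "fc2", "softmax"]
    print(split_layers(model, 2))  # [['embed', 'conv1', 'conv2'], ['fc1', 'fc2', 'softmax']]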
Abstract:
Disclosed herein is a method for distributed training of an AI model in a channel-sharing network environment. The method includes determining whether data parallel processing is applied, calculating a computation time and a communication time when input data is evenly distributed across multiple computation devices, and unevenly distributing the input data across the multiple computation devices based on the computation time and the communication time.
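The uneven-distribution step can be sketched as a simple balancing loop; the per-sample computation times and per-device communication times used below are assumed inputs for illustration, not the patent's estimation method.

from typing import List

def uneven_split(batch_size: int, compute_time_per_sample: List[float],
                 comm_time: List[float], iters: int = 1000) -> List[int]:
    """Start from an even split and shift samples from the slowest device to the
    fastest until the estimated finish times (compute + communication) balance."""
    n = len(compute_time_per_sample)
    shares = [batch_size // n] * n
    for i in range(batch_size % n):
        shares[i] += 1
    for _ in range(iters):
        est = [s * c + m for s, c, m in zip(shares, compute_time_per_sample, comm_time)]
        slow, fast = est.index(max(est)), est.index(min(est))
        if slow == fast or shares[slow] == 0:
            break
        # Stop when moving one sample no longer lowers the slowest device's time.
        new_slow = (shares[slow] - 1) * compute_time_per_sample[slow] + comm_time[slow]
        new_fast = (shares[fast] + 1) * compute_time_per_sample[fast] + comm_time[fast]
        if max(new_slow, new_fast) >= est[slow]:
            break
        shares[slow] -= 1
        shares[fast] += 1
    return shares

if __name__ == "__main__":
    # Device 0 is twice as fast per sample but has a larger communication overhead.
    print(uneven_split(64, [1.0, 2.0], [10.0, 2.0]))  # e.g. [40, 24]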
Abstract:
Disclosed herein are a storage server and an adaptive prefetching method performed by the storage server in a distributed file system. The adaptive prefetching method includes receiving, by a management request processing unit of the storage server, a stream generation request from a client; sending, by the management request processing unit, a stream identifier and information about an I/O worker, which correspond to the stream generation request, to the client; receiving, by the management request processing unit, a read request from the client; inserting, by the management request processing unit, the read request into the queue of the I/O worker corresponding to the read request; performing, by the I/O worker, adaptive prefetching for the read request using the identifier of the file object in the stream information corresponding to the read request; and transmitting, by the I/O worker, the data read through the adaptive prefetching to the client.
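A minimal sketch of this request flow is given below; all class, field, and method names are illustrative, the worker assignment and readahead policy are deliberately simplistic, and a real server would return the data to the client over the network.

import queue

class StorageServer:
    def __init__(self, num_workers=2):
        self.queues = [queue.Queue() for _ in range(num_workers)]
        self.streams = {}        # stream_id -> {"worker": i, "file_obj": f, "next": 0}
        self.next_stream_id = 0

    def create_stream(self, file_obj):
        """Handle a stream generation request: assign a stream id and an I/O worker."""
        sid = self.next_stream_id
        self.next_stream_id += 1
        worker = sid % len(self.queues)   # simple round-robin worker assignment
        self.streams[sid] = {"worker": worker, "file_obj": file_obj, "next": 0}
        return sid, worker                # sent back to the client

    def read(self, sid, offset, size):
        """Insert a read request into the queue of the worker chosen for this stream."""
        info = self.streams[sid]
        self.queues[info["worker"]].put((sid, offset, size))

    def io_worker(self, idx):
        """Run in a dedicated thread per worker: serve reads with simple readahead."""
        while True:
            sid, offset, size = self.queues[idx].get()
            info = self.streams[sid]
            f = info["file_obj"]
            f.seek(offset)
            data = f.read(size)
            if offset == info["next"]:
                f.read(size)              # sequential pattern: warm the next chunk
            info["next"] = offset + size
            # In a real server, `data` would now be transmitted to the client.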
Abstract:
Disclosed herein is an apparatus for controlling synchronization of metadata, which includes a transaction creation unit for creating a transaction corresponding to a request for an operation on metadata, the request being received from a client; a journal management unit for storing the transaction in a journal; a journal synchronization unit for synchronizing the journal by comparing it with an external journal in a node connected with the apparatus and transmitting and receiving only the inconsistencies between them; and an operation-processing unit for processing the request for the operation or the transaction for the metadata.
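The journal comparison can be sketched as follows, assuming (as an illustrative simplification, not stated above) that each transaction carries a monotonically increasing sequence number so that two journals can exchange only the entries the other side lacks.

from dataclasses import dataclass
from typing import List

@dataclass
class Transaction:
    seq: int
    op: str         # e.g. "create", "rename", "setattr"
    target: str     # metadata object the operation applies to

def missing_entries(local: List[Transaction], remote_seqs: List[int]) -> List[Transaction]:
    """Return only the local transactions the remote journal does not yet have."""
    remote = set(remote_seqs)
    return [t for t in local if t.seq not in remote]

def merge(journal: List[Transaction], incoming: List[Transaction]) -> List[Transaction]:
    """Merge received transactions and keep the journal ordered by sequence number."""
    have = {t.seq for t in journal}
    journal.extend(t for t in incoming if t.seq not in have)
    journal.sort(key=lambda t: t.seq)
    return journal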
Abstract:
Disclosed herein are an apparatus and method for page allocation in a many-to-one virtualization environment. The method may include determining whether a page fault interrupt is caused by page initialization for page allocation, sending an ownership change message to a node having ownership of the corresponding page when the page fault interrupt is determined to be caused by page initialization, and initializing the corresponding page upon receiving an ownership-change-processing-complete message.
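A minimal sketch of the fault-handling decision is shown below; send_ownership_change and wait_for_complete are hypothetical placeholders for the inter-node messages described above, and the Page type is an illustrative stand-in.

from dataclasses import dataclass, field

@dataclass
class Page:
    number: int
    data: bytearray = field(default_factory=lambda: bytearray(4096))

def handle_page_fault(page, fault_is_initialization, owner_node,
                      send_ownership_change, wait_for_complete):
    """Decide how to handle a page fault in the many-to-one virtualization layer."""
    if not fault_is_initialization:
        return False                              # handled by the normal fault path
    # The fault is caused by page initialization for allocation:
    # ask the node that currently owns the page to hand over ownership.
    send_ownership_change(owner_node, page.number)
    wait_for_complete(page.number)                # ownership-change-processing-complete
    page.data[:] = bytes(len(page.data))          # now safe to initialize (zero-fill) locally
    return True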