Abstract:
A resource assignment method, and a recording medium and a distributed processing device applying the same are provided. The resource assignment method includes: when information regarding a plurality of tasks is received from a plurality of first nodes, calculating a size of a resource necessary for each of the received plurality of tasks; and when information regarding an available resource is received from a second node, assigning one of the plurality of tasks to the available resource of the second node, based on the calculated size of the resource necessary for each task.
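The assignment step described above could be sketched as follows. The cost model (resource need proportional to input size) and the best-fit choice of the largest task that fits are illustrative assumptions; the abstract does not specify a policy.

```python
# Hypothetical sketch: estimate each task's resource need, then, when a
# node reports an available resource, assign the largest pending task
# that fits it (a best-fit heuristic, assumed for illustration).

def required_size(task):
    # Assumed cost model: resource need scales with input size.
    return task["input_bytes"] * task.get("scale", 1)

def assign_task(tasks, available):
    """Pick the largest pending task that fits the reported resource."""
    fitting = [t for t in tasks if required_size(t) <= available]
    if not fitting:
        return None          # no task fits; wait for a larger resource
    chosen = max(fitting, key=required_size)
    tasks.remove(chosen)     # task is now assigned, not pending
    return chosen
```

Here `tasks` stands in for the task information received from the first nodes, and `available` for the resource size reported by the second node.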
Abstract:
A method for generating firmware by allowing a developer to freely select functions to be included in firmware installed on a main board of a server, and by building a firmware image, is provided. The method for generating firmware includes: listing functions that are allowed to be included in firmware installed on a main board of a server; receiving selection of at least one of the listed functions from a user; and building a firmware image including the functions selected by the user. Accordingly, since a firmware image is built by a developer freely selecting the functions to be included in firmware installed on a main board of a server, firmware optimized for the requirements of the developer can be generated.
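The list/select/build flow could be sketched as below. The function names, the per-function blobs, and the image format (simple concatenation) are assumptions for illustration, not the disclosed design.

```python
# Illustrative catalogue of functions a firmware image may include.
AVAILABLE_FUNCTIONS = {
    "ipmi": b"\x01",        # remote management (assumed module)
    "watchdog": b"\x02",    # hang recovery (assumed module)
    "sensor_log": b"\x03",  # temperature/voltage logging (assumed)
}

def list_functions():
    """Step 1: list the functions allowed in the firmware."""
    return sorted(AVAILABLE_FUNCTIONS)

def build_image(selected):
    """Steps 2-3: validate the user's selection and build the image."""
    unknown = [f for f in selected if f not in AVAILABLE_FUNCTIONS]
    if unknown:
        raise ValueError(f"unknown functions: {unknown}")
    # Assumed image format: the selected blobs concatenated in order.
    return b"".join(AVAILABLE_FUNCTIONS[f] for f in sorted(selected))
```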
Abstract:
There is provided a dynamic data block caching automation application method for high-speed data access based on computational storage. A query execution method according to an embodiment includes the steps of: synchronizing, by a DBMS, an ECC, which is a cache of the DBMS, and an ICC, which is a cache of a computational storage in which a DB is established; generating an offloading execution code that defines operation information necessary for query computation offloading, based on a query requested by a client; and processing the offloading execution code by using the synchronized ECC and ICC. Accordingly, the load on the CSD, which itself reduces the load of the DBMS, is lowered through snippet offloading reduction and snippet processing reduction, and high-speed query processing is enabled by disk-I/O-optimized data access.
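The synchronization step could be sketched as below. Representing each cache as a dict of versioned entries and resolving conflicts by keeping the newer version is an assumption; the abstract does not describe the synchronization protocol.

```python
# Minimal sketch of keeping the DBMS-side cache (ECC) and the
# CSD-side cache (ICC) agreeing before offloading is processed.
# Entries are assumed to be (version, block) pairs.

def synchronize(ecc, icc):
    """Propagate the newer entry in each direction so both caches agree."""
    merged = {}
    for key in set(ecc) | set(icc):
        a, b = ecc.get(key), icc.get(key)
        # Keep whichever side holds the higher version of the block.
        merged[key] = max((v for v in (a, b) if v is not None),
                          key=lambda v: v[0])
    ecc.update(merged)
    icc.update(merged)
```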
Abstract:
There is provided a query execution method in a DB system in which a plurality of CSDs are used as a storage. According to an embodiment, a query execution method includes: generating snippets for offloading a part of the query computations for a query received from a client to CSDs; scheduling the generated snippets across the CSDs; collecting the results of offloading; and merging the collected results of offloading. Accordingly, by dividing query computations, offloading them, and processing them in parallel, while the DBMS processes the query computations that are inappropriate for offloading, a query request from a client can be executed effectively and rapidly.
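The generate/schedule/offload/merge pipeline could be sketched as below. The snippet shape (a row range plus a filter predicate) and the round-robin scheduler are illustrative assumptions.

```python
# Hedged sketch of the snippet pipeline over multiple CSDs.

def generate_snippets(row_ranges, predicate):
    """Split a scan into offloadable snippets, one per row range."""
    return [{"range": r, "predicate": predicate} for r in row_ranges]

def schedule(snippets, num_csds):
    """Assign snippets to CSDs; round-robin is an assumed policy."""
    plan = {i: [] for i in range(num_csds)}
    for i, s in enumerate(snippets):
        plan[i % num_csds].append(s)
    return plan

def offload(csd_data, snippet):
    """Simulate a CSD executing a snippet near its own data."""
    lo, hi = snippet["range"]
    return [row for row in csd_data[lo:hi] if snippet["predicate"](row)]

def merge(results):
    """Merge the per-CSD partial results into one answer."""
    return sorted(sum(results, []))
```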
Abstract:
There is provided an adaptive temperature control method based on log analysis by a chassis manager in an edge server. The adaptive temperature control method of the edge server system according to an embodiment includes: collecting, by a chassis manager module of the edge server system, work logs of a computing module and a storage module; predicting a future workload from the collected work logs; predicting a future internal temperature of the edge server system, based on the predicted workload and a future temperature; and controlling, by the chassis manager module, the edge server system based on the predicted future internal temperature. Accordingly, the configuration modules of an edge server system may be managed and controlled in a rugged environment, and the temperature of the edge server system may be adaptively controlled by transferring work of the edge server elsewhere or by generating additional work.
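The predict-then-control loop could be sketched as below. The moving-average load forecast, the linear heat model, and the 70-degree throttling threshold are all assumptions; the abstract does not disclose the models.

```python
# Toy sketch of the adaptive temperature control loop.

def predict_load(work_logs, window=3):
    """Forecast future load as a moving average of recent log entries."""
    recent = work_logs[-window:]
    return sum(recent) / len(recent)

def predict_temperature(load, ambient, coeff=0.5):
    """Assumed linear heat model: temperature rises with load."""
    return ambient + coeff * load

def control(work_logs, ambient, limit=70.0):
    """Throttle (e.g. transfer work away) if the forecast exceeds the limit."""
    temp = predict_temperature(predict_load(work_logs), ambient)
    return "throttle" if temp > limit else "normal"
```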
Abstract:
There are provided a method and an apparatus for cloud management, which select optimal resources based on graphics processing unit (GPU) resource analysis in a large-scale container platform environment. According to an embodiment, a GPU bottleneck phenomenon occurring in an application of a large-scale container environment may be reduced by processing partitioned allocation of GPU resources, rather than the existing 1:1 allocation, through real-time GPU data analysis (application of a threshold) and synthetic analysis of GPU performance-degrading factors.
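Partitioned allocation with a utilization threshold could be sketched as below. The percentage-based utilization figures, the 80% threshold, and the least-loaded tie-break are assumptions for illustration.

```python
# Illustrative sketch of partitioned (fractional) GPU allocation:
# instead of pinning one whole GPU per workload (1:1), a workload
# receives a share of whichever GPU still has headroom under a
# utilization threshold. Utilization is in integer percent.

def pick_gpu(gpus, request, threshold=80):
    """gpus: {name: utilization %}; request: requested share in %."""
    candidates = {g: u for g, u in gpus.items()
                  if u + request <= threshold}
    if not candidates:
        return None                      # no GPU has enough headroom
    return min(candidates, key=candidates.get)  # least-loaded GPU

def allocate(gpus, request):
    gpu = pick_gpu(gpus, request)
    if gpu is not None:
        gpus[gpu] += request             # record the partitioned share
    return gpu
```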
Abstract:
A cloud management method and a cloud management device are provided. The cloud management method determines whether a plurality of pods are overloaded, identifies the current resource usage states of a cluster and a node, determines a method of scaling the resources of a specific overloaded pod from among the plurality of pods according to the current resource usage states of the cluster and the node, and scales the resources of the specific pod according to the determined method. Accordingly, scaling that uniformly extends the resources of a node and a pod in a cluster, both horizontally and vertically, can be performed automatically.
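The scaling decision could be sketched as below. The 90% overload threshold, the CPU-only view, and the "vertical if the node has headroom, else horizontal" rule are assumptions standing in for the disclosed criteria.

```python
# Sketch of the scaling decision for an overloaded pod.

def is_overloaded(pod, threshold=0.9):
    """A pod is overloaded when usage exceeds 90% of its limit (assumed)."""
    return pod["cpu_usage"] / pod["cpu_limit"] > threshold

def choose_scaling(pod, node_free_cpu, cluster_free_cpu, step=0.5):
    if not is_overloaded(pod):
        return "none"
    if node_free_cpu >= step:
        return "vertical"     # grow the pod's limit on the same node
    if cluster_free_cpu >= step:
        return "horizontal"   # add a replica elsewhere in the cluster
    return "pending"          # no capacity anywhere yet
```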
Abstract:
An adaptive block cache management method and a DBMS applying the same are provided. A DB system according to an exemplary embodiment of the present disclosure includes: a cache configured to temporarily store DB data; a disk configured to permanently store the DB data; and a processor configured to determine whether to operate the cache according to a state of the DB system. Accordingly, a high-speed cache is adaptively managed according to a current state of a DBMS, such that a DB processing speed can be improved.
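The adaptive on/off decision could be sketched as below. The specific state signals (memory pressure, hit ratio) and the cutoffs are assumptions rather than the disclosed criteria.

```python
# Minimal sketch: the processor enables or bypasses the cache
# depending on the current state of the DB system.

def cache_enabled(state):
    # Bypass the cache when memory is scarce or it stops paying off.
    if state["memory_pressure"] > 0.9:
        return False
    if state["hit_ratio"] < 0.1 and state["requests"] > 1000:
        return False
    return True

def read_block(block_id, cache, disk, state):
    if cache_enabled(state) and block_id in cache:
        return cache[block_id]      # fast path: cache hit
    data = disk[block_id]           # slow path: read from disk
    if cache_enabled(state):
        cache[block_id] = data      # populate only while cache is on
    return data
```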
Abstract:
A cache management method for optimizing read performance in a distributed file system is provided. The cache management method includes: acquiring metadata of a file system; generating a list regarding data blocks based on the metadata; and pre-loading data blocks into a cache with reference to the list. Accordingly, read performance in analyzing big data in a Hadoop distributed file system environment can be optimized in comparison to a related-art method.
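The three steps (acquire metadata, build a block list, pre-load) could be sketched as below. The metadata layout and the offset-order listing are simplifying assumptions.

```python
# Sketch of metadata-driven cache pre-loading for sequential reads.

def block_list(metadata):
    """Order blocks by expected access (file offset order, assumed)."""
    return [b["id"] for b in sorted(metadata, key=lambda b: b["offset"])]

def preload(metadata, disk, cache, capacity):
    """Warm the cache with the first blocks on the list, up to capacity."""
    for block_id in block_list(metadata):
        if len(cache) >= capacity:
            break
        cache[block_id] = disk[block_id]
    return cache
```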
Abstract:
There are provided a method and an apparatus for managing a hybrid cloud that perform consistent resource management for all resources in a heterogeneous cluster environment comprised of an on-premise cloud and a plurality of public clouds. Accordingly, the method and apparatus for hybrid cloud management provide an integration support function between different cluster orchestrations in a heterogeneous cluster environment comprised of an on-premise cloud and a plurality of public clouds, support consistent resource management for all resources, and provide optimal workload deployment, free optimal reconfiguration, migration and restoration, and integrated scaling of all resources.