Abstract:
A cache management method for optimizing read performance in a distributed file system is provided. The cache management method includes: acquiring metadata of a file system; generating a list regarding data blocks based on the metadata; and pre-loading data blocks into a cache with reference to the list. Accordingly, read performance in analyzing big data in a Hadoop distributed file system environment can be optimized in comparison to a related-art method.
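The three steps of the claimed method (acquire metadata, generate a block list, pre-load blocks into a cache) can be sketched as follows. This is an illustrative sketch only; the class and function names (`FileMetadata`, `BlockCache`, `build_block_list`) are assumptions, not names from the patent, and a real HDFS implementation would operate on NameNode metadata and DataNode blocks.

```python
from dataclasses import dataclass, field

@dataclass
class FileMetadata:
    """File-system metadata entry: a path and its ordered data-block IDs."""
    path: str
    block_ids: list

@dataclass
class BlockCache:
    """A bounded cache that accepts pre-loaded blocks until full."""
    capacity: int
    blocks: dict = field(default_factory=dict)

    def preload(self, block_id, data):
        if len(self.blocks) < self.capacity:
            self.blocks[block_id] = data

def build_block_list(metadata_entries):
    """Step 2: generate the list of data blocks from the acquired metadata."""
    return [b for meta in metadata_entries for b in meta.block_ids]

def preload_blocks(cache, block_list, read_block):
    """Step 3: pre-load data blocks into the cache, in list order."""
    for block_id in block_list:
        cache.preload(block_id, read_block(block_id))

# Usage: two files, a cache of two blocks; later reads of a0/a1 hit the cache.
meta = [FileMetadata("/data/a", ["a0", "a1"]), FileMetadata("/data/b", ["b0"])]
cache = BlockCache(capacity=2)
preload_blocks(cache, build_block_list(meta), read_block=lambda b: f"<{b}>")
```

The point of the list in step 2 is that pre-loading order is decided from metadata alone, before any data is read.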
Abstract:
A highly reliable fabric with a multi-layered circuit, and a method of manufacturing the same, are provided. The fabric with the multi-layered circuit includes: a base layer; a first conductive pattern which is formed on the base layer; a second conductive pattern which is formed to intersect with the first conductive pattern at least in part; and an insulating pattern which is formed on an intersection portion, which is a region where the first conductive pattern and the second conductive pattern intersect.
Abstract:
There is provided an intelligent BMC for predicting a fault by interworking with on-device AI. A fault prediction method of a BMC according to an embodiment includes: collecting monitoring data regarding computing modules installed on a main board; calculating a FOFL from the collected monitoring data; and constructing an AI model related to the calculated FOFL and predicting a FOFL from the monitoring data. Accordingly, faults occurring in various patterns may be predicted from monitoring data by interworking with on-device AI.
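The pipeline above (collect monitoring data, derive a FOFL value, predict a FOFL from new monitoring data) can be sketched minimally. The abstract does not expand the acronym FOFL, so it is treated here as an opaque numeric label, and a nearest-neighbor lookup stands in for the on-device AI model; all names and data are illustrative assumptions.

```python
def collect_monitoring(modules):
    """Flatten per-module monitoring readings into a feature vector.
    modules: {name: {"temp": celsius, "volt": volts}}"""
    return ([m["temp"] for m in modules.values()]
            + [m["volt"] for m in modules.values()])

def predict_fofl(history, features):
    """Predict a FOFL for new monitoring features from labeled history.
    history: list of (feature_vector, fofl) pairs. A 1-nearest-neighbor
    lookup is used here as a stand-in for the trained AI model."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda h: sq_dist(h[0], features))[1]

# Usage: a hot, low-FOFL pattern vs. a cool, high-FOFL pattern.
history = [([70.0, 1.1], 2.0), ([45.0, 1.2], 30.0)]
features = collect_monitoring({"cpu0": {"temp": 68.0, "volt": 1.1}})
fofl = predict_fofl(history, features)
```

An actual BMC would train the model on-device over many monitoring samples rather than memorize two patterns, but the interface shape (features in, FOFL out) is the same.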
Abstract:
There is provided an edge server system management and control method for a rugged environment. An edge server management apparatus according to an embodiment of the present disclosure includes: a communication unit configured to communicate with an edge server; and a processor configured to collect environmental information of the edge server through the communication unit, and to control an external environment of the edge server and control resource configuration for an edge service, based on the collected environmental information. Accordingly, it is possible to manage and control the environment-configuration modules of an edge server system (e.g., a fan, a heater), and to operate an edge service by reconfiguring resources of the edge server, even at a severe industrial site.
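The two control paths described (actuating the external environment, and reconfiguring service resources from the same environmental information) can be sketched as below. The temperature thresholds, actuator names, and the priority-based reconfiguration rule are illustrative assumptions, not claims from the patent.

```python
def control_environment(env):
    """Map collected environmental information to actuator commands.
    env: {"temp_c": ambient temperature}. Thresholds are illustrative."""
    if env["temp_c"] > 40:
        return {"fan": "on", "heater": "off"}    # overheating: cool down
    if env["temp_c"] < 5:
        return {"fan": "off", "heater": "on"}    # freezing: warm up
    return {"fan": "off", "heater": "off"}       # nominal range

def reconfigure_resources(env, services):
    """Reconfigure edge-service resources from the same environmental
    information: shed low-priority services when out of nominal range."""
    if env["temp_c"] > 40 or env["temp_c"] < 5:
        return [s for s in services if s["priority"] == "high"]
    return services

# Usage: a hot industrial site forces the fan on and sheds low-priority work.
services = [{"name": "inference", "priority": "high"},
            {"name": "log-upload", "priority": "low"}]
commands = control_environment({"temp_c": 45})
kept = reconfigure_resources({"temp_c": 45}, services)
```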
Abstract:
There are provided a cloud management method and a cloud management apparatus for rapidly scheduling the arrangement of service resources, considering equal distribution of resources, in a large-scale distributed-collaboration container environment. The cloud management method according to an embodiment includes: receiving, by a cloud management apparatus, a resource allocation request for a specific service; monitoring, by the cloud management apparatus, the current status of available resources of a plurality of clusters, and selecting clusters capable of being allocated the requested resource; calculating, by the cloud management apparatus, a suitability score for each of the selected clusters; and selecting, by the cloud management apparatus, based on the respective suitability scores, the cluster most suitable for executing the requested service from among the selected clusters. Accordingly, a model that selects a candidate group of associative clusters according to the characteristics of the required resource, and then finally selects the cluster suitable for that resource, can support equal resource arrangement between the clusters.
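The filter-then-score selection described above can be sketched as follows. The abstract does not define the suitability score, so the formula used here (remaining headroom after allocation, which favors spreading load toward the emptiest cluster) is an assumption for illustration.

```python
def select_cluster(clusters, request):
    """Pick the most suitable cluster for a resource allocation request.
    clusters: {name: {"cpu": free_cpu, "mem": free_mem_gb}}
    request:  {"cpu": needed_cpu, "mem": needed_mem_gb}"""
    # Step 1: candidate group — clusters able to satisfy the request at all.
    candidates = {
        name: free for name, free in clusters.items()
        if free["cpu"] >= request["cpu"] and free["mem"] >= request["mem"]
    }
    if not candidates:
        return None
    # Step 2: suitability score (assumed): headroom left after allocation,
    # so placement tends toward equal distribution across clusters.
    def score(free):
        return (free["cpu"] - request["cpu"]) + (free["mem"] - request["mem"])
    # Step 3: final selection — the highest-scoring candidate.
    return max(candidates, key=lambda name: score(candidates[name]))

# Usage: both clusters qualify; c2 has more headroom, so it is selected.
best = select_cluster(
    {"c1": {"cpu": 4, "mem": 8}, "c2": {"cpu": 16, "mem": 32}},
    {"cpu": 4, "mem": 8},
)
```

Swapping the `score` function changes the placement policy (e.g., negating it would pack clusters tightly instead of spreading load); the candidate-filter step is unchanged either way.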
Abstract:
A method for generating firmware by allowing a developer to freely select the functions to be included in firmware installed on a main board of a server, and by building a firmware image accordingly, is provided. The method for generating firmware includes: listing functions that are allowed to be included in firmware installed on a main board of a server; receiving a selection of at least one of the listed functions from a user; and building a firmware image including the functions selected by the user. Accordingly, since the firmware image is built by the developer freely selecting the functions to be included, firmware optimized for the developer's requirements can be generated.
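The list/select/build flow can be sketched minimally. The function catalog, the manifest-style "image", and the size arithmetic are all illustrative assumptions; a real build would compile and link the selected firmware modules.

```python
# Illustrative catalog of functions allowed in the firmware (assumed names).
AVAILABLE_FUNCTIONS = ["sensor_monitor", "remote_console",
                       "power_control", "event_log"]

def list_functions():
    """Step 1: list the functions allowed to be included in the firmware."""
    return list(AVAILABLE_FUNCTIONS)

def build_firmware_image(selected):
    """Step 3: build an image containing only the user-selected functions.
    Here the 'image' is a manifest dict standing in for a binary build."""
    unknown = [f for f in selected if f not in AVAILABLE_FUNCTIONS]
    if unknown:
        raise ValueError(f"unknown functions: {unknown}")
    # Assumed sizing: a fixed base plus a per-function cost.
    return {"functions": sorted(selected),
            "size_kb": 128 + 64 * len(selected)}

# Usage: the user (step 2) picks two of the listed functions.
image = build_firmware_image(["sensor_monitor", "event_log"])
```

Because unselected functions never enter the build, the resulting image carries only what the developer's requirements actually need.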
Abstract:
A modular PDU for supplying different kinds of power is provided. The PDU includes: a base configured to transmit different kinds of power; and a multi-socket module connected with the base to transmit one kind of power to devices whose plugs are connected to the multi-socket module. Accordingly, dual power supply can be achieved through a single PDU, so that PDU installation cost can be reduced, and, as the number of PDUs is reduced, electrical equipment can be simplified.
Abstract:
There are provided a method and an apparatus for managing a hybrid cloud to perform consistent resource management for all resources in a heterogeneous cluster environment comprised of an on-premise cloud and a plurality of public clouds. Accordingly, the method and apparatus for hybrid cloud management provide an integration support function between different cluster orchestrations in such a heterogeneous cluster environment, support consistent resource management for all resources, and provide optimal workload deployment, free optimal reconfiguration, migration and restoration, and whole-resource integrated scaling.
Abstract:
There is provided a smart power management method for reducing power consumption based on an intelligent BMC. A cooling fan control method by a BMC according to an embodiment includes: collecting monitoring data regarding computing modules; calculating the current CPU power from the collected monitoring data; predicting a future CPU temperature from the collected monitoring data; setting a rotation speed of a cooling fan based on the calculated current CPU power and the predicted future CPU temperature; and controlling the cooling fan at the set rotation speed. Accordingly, the BMC controls the cooling fan effectively and efficiently by interworking with on-device AI, thereby reducing power consumption in a server.
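The control steps above (current power from monitoring data, predicted future temperature, fan speed from both) can be sketched as follows. A simple linear extrapolation stands in for the on-device AI temperature predictor, and the power/temperature thresholds and duty-cycle tiers are illustrative assumptions.

```python
def current_cpu_power(samples):
    """Average CPU power (W) from voltage/current monitoring samples.
    samples: list of (voltage_V, current_A) pairs."""
    return sum(v * a for v, a in samples) / len(samples)

def predict_future_temp(temps, horizon=3):
    """Predict the CPU temperature `horizon` steps ahead by linear
    extrapolation — a stand-in for the on-device AI model."""
    slope = (temps[-1] - temps[0]) / (len(temps) - 1)
    return temps[-1] + slope * horizon

def set_fan_speed(power_w, future_temp_c):
    """Map current power and predicted temperature to a fan duty cycle (%).
    Thresholds are illustrative: spin up before the predicted temperature
    arrives, rather than after the CPU is already hot."""
    if future_temp_c > 80 or power_w > 90:
        return 100
    if future_temp_c > 65:
        return 70
    return 40

# Usage: moderate power, temperature trending up but still in range.
power = current_cpu_power([(1.2, 40.0), (1.2, 42.0)])
temp = predict_future_temp([55.0, 57.0, 59.0])
duty = set_fan_speed(power, temp)
```

The power saving comes from the lowest tier: when neither current power nor predicted temperature justifies it, the fan is kept slow instead of run defensively at full speed.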
Abstract:
There is provided an offloading data interfacing method between a DBMS storage engine and a computational storage device. In a query offloading method according to an embodiment of the disclosure, when a query execution request is received from a client, a DBMS may generate offloading code, which is code for offloading a part of the query computations based on the received query, and may deliver the offloading code to storage in which a DB is established in a CSD. Accordingly, in a DB system using a CSD, a snippet for offloading a part of the query computations may be defined, and the DBMS and the storage are interfaced by using the offloading snippet, so that a guideline for executing a query through interworking between the CSD-based storage and the DBMS is provided.
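The interface described above can be sketched as a two-sided exchange: the DBMS builds a snippet describing the pushed-down computation, and the storage side executes it near the data. The snippet format (a dict with a single filter predicate) and both function names are illustrative assumptions; the patent defines neither.

```python
def make_offload_snippet(table, column, op, value):
    """DBMS side: describe the part of the query computation (here, a
    single filter predicate) to be offloaded to the CSD-based storage."""
    return {"table": table,
            "predicate": {"column": column, "op": op, "value": value}}

def storage_execute(snippet, storage_rows):
    """Storage side: execute the offloaded predicate over local rows, so
    only qualifying rows travel back to the DBMS."""
    ops = {">": lambda a, b: a > b, "=": lambda a, b: a == b}
    pred = snippet["predicate"]
    rows = storage_rows[snippet["table"]]
    return [r for r in rows if ops[pred["op"]](r[pred["column"]], pred["value"])]

# Usage: push `amount > 100` down to the storage; one of two rows returns.
db = {"orders": [{"id": 1, "amount": 50}, {"id": 2, "amount": 500}]}
snippet = make_offload_snippet("orders", "amount", ">", 100)
rows = storage_execute(snippet, db)
```

The benefit mirrors the abstract's point: the predicate runs inside the storage, so the DBMS-to-storage interface carries a small snippet down and only filtered rows back up.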