-
1.
Publication No.: US20230153317A1
Publication Date: 2023-05-18
Application No.: US17985994
Application Date: 2022-11-14
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM
IPC: G06F16/25 , G06F16/2453
CPC classification number: G06F16/252 , G06F16/24542
Abstract: There is provided a method for scheduling offloading snippets based on a large amount of DBMS task computation. A DB scheduling method according to an embodiment of the disclosure includes: determining, by a DBMS, whether to offload a part of query computations upon receiving a query execution request from a client; generating, by the DBMS, an offloading code, which is a code for offloading a part of the query computations, based on the received query when offloading is determined; selecting one of a plurality of storages in which a DB is established; and delivering the offloading code. Accordingly, snippets that are generated simultaneously are scheduled across CSDs, so that resources are utilized evenly, the query execution time is reduced, and the reliability of data processing is enhanced.
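As a rough illustration of the flow in this abstract, the Python sketch below decides whether to offload, generates a snippet, and delivers it to the least-loaded storage. The names (`Storage`, `should_offload`, `generate_snippet`) and the offloading heuristic are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of the offloading decision and snippet dispatch flow
# described in the abstract; names and criteria are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Storage:
    name: str
    pending_snippets: int = 0   # rough load indicator used for scheduling


def should_offload(query: str) -> bool:
    # Assumed heuristic: offload scan/filter-heavy queries to the CSDs.
    return "WHERE" in query.upper()


def generate_snippet(query: str) -> dict:
    # An "offloading code" stand-in: the part of the query pushed down.
    return {"kind": "filter-scan", "query": query}


def schedule(snippet: dict, storages: list[Storage]) -> Storage:
    # Pick the least-loaded storage so resources are used evenly.
    target = min(storages, key=lambda s: s.pending_snippets)
    target.pending_snippets += 1
    return target


if __name__ == "__main__":
    csds = [Storage("csd-0"), Storage("csd-1"), Storage("csd-2")]
    query = "SELECT id FROM t WHERE value > 10"
    if should_offload(query):
        chosen = schedule(generate_snippet(query), csds)
        print(f"snippet delivered to {chosen.name}")
```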
-
2.
Publication No.: US20220078231A1
Publication Date: 2022-03-10
Application No.: US17467804
Application Date: 2021-09-07
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM
Abstract: There is provided a cloud management method and apparatus for performing load balancing so that a service is served by a cluster that is geographically close and has a good current resource status in an associative container environment. The cloud management method according to an embodiment includes: monitoring, by a cloud management apparatus, the current available resource statuses of a plurality of clusters, and selecting a cluster that owns a first service supported by a first cluster whose available resource rate is less than a threshold value; calculating, by the cloud management apparatus, scores regarding the current available resource status and the geographical proximity of each cluster; and performing, by the cloud management apparatus, load balancing of the first service based on a result of calculating the scores. Accordingly, a delay in the response speed of a service required in a distributed environment can be minimized, and a service can be processed in a geographically close cluster through analysis of the geographical proximity between the access location of a user request and the clusters in which services are distributed.
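A minimal sketch of the scoring step described above, assuming equal weights for resource availability and proximity; the `Cluster` fields, the weights, and the threshold are illustrative assumptions rather than values from the disclosure.

```python
# Illustrative scoring for the cluster selection the abstract outlines;
# weights and field names are assumptions, not values from the patent.
from dataclasses import dataclass


@dataclass
class Cluster:
    name: str
    available_resource_rate: float  # 0.0 - 1.0
    distance_km: float              # distance from the request's access location


def score(cluster: Cluster, max_distance_km: float = 1000.0) -> float:
    # Higher available resources and closer location both raise the score.
    resource_score = cluster.available_resource_rate
    proximity_score = max(0.0, 1.0 - cluster.distance_km / max_distance_km)
    return 0.5 * resource_score + 0.5 * proximity_score


def pick_target(clusters: list[Cluster], threshold: float = 0.2) -> Cluster:
    # Only clusters that are not themselves resource-starved are candidates.
    candidates = [c for c in clusters if c.available_resource_rate >= threshold]
    return max(candidates, key=score)


if __name__ == "__main__":
    clusters = [
        Cluster("seoul", 0.6, 30.0),
        Cluster("busan", 0.9, 320.0),
        Cluster("overloaded", 0.1, 5.0),
    ]
    print("rebalance first service to:", pick_target(clusters).name)
```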
-
3.
Publication No.: US20240160612A1
Publication Date: 2024-05-16
Application No.: US18387626
Application Date: 2023-11-07
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM , Ri A CHOI
CPC classification number: G06F16/217 , G06F9/4881
Abstract: There is provided a method for dividing query computations and scheduling them across CSDs in a DB system in which a plurality of CSDs are used as storage. A scheduling method according to an embodiment includes: selecting one of a plurality of scheduling policies; selecting a CSD to which snippets included in a group are delivered according to the selected scheduling policy; and delivering the snippets to the selected CSD, where the scheduling policies are policies for selecting the CSDs to which snippets are delivered based on different criteria. Accordingly, CSDs may be selected randomly according to a user setting or the query execution environment, or an optimal CSD may be selected according to the CSD status or the content of an offload snippet, so that the query execution speed can be enhanced.
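The sketch below illustrates policy-based CSD selection in the spirit of this abstract, assuming three stand-in policies (random, round-robin, least-busy); the policy names and the load bookkeeping are hypothetical.

```python
# A rough sketch of policy-based CSD selection; the three policies shown
# (random, round-robin, least-busy) are illustrative stand-ins.
import itertools
import random
from typing import Callable

CSDS = ["csd-0", "csd-1", "csd-2"]
_rr = itertools.cycle(CSDS)
_load = {name: 0 for name in CSDS}


def random_policy(snippet: dict) -> str:
    return random.choice(CSDS)


def round_robin_policy(snippet: dict) -> str:
    return next(_rr)


def least_busy_policy(snippet: dict) -> str:
    return min(_load, key=_load.get)


POLICIES: dict[str, Callable[[dict], str]] = {
    "random": random_policy,
    "round_robin": round_robin_policy,
    "least_busy": least_busy_policy,
}


def deliver(snippets: list[dict], policy_name: str) -> None:
    # Route each snippet in the group to a CSD chosen by the selected policy.
    policy = POLICIES[policy_name]
    for snippet in snippets:
        target = policy(snippet)
        _load[target] += 1
        print(f"snippet {snippet['id']} -> {target}")


if __name__ == "__main__":
    deliver([{"id": i} for i in range(5)], "least_busy")
```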
-
4.
Publication No.: US20240160487A1
Publication Date: 2024-05-16
Application No.: US18388799
Application Date: 2023-11-10
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM , Ju Hyun KIL
IPC: G06F9/50
CPC classification number: G06F9/5038 , G06F9/5011 , G06F9/5072
Abstract: There is provided a cloud management method and apparatus for scheduling available GPU resources in a large-scale container platform environment. Accordingly, a list of available GPUs may be built from GPU resource metrics collected in a large-scale container operating environment, and an allocable GPU may be selected from the GPU list according to a request, so that GPU resources can be allocated flexibly in response to a user's GPU resource request (resource allocation reflecting the requested resources rather than 1:1 allocation).
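A small Python sketch of building an available-GPU list from collected metrics and selecting an allocable GPU for a request; the `GpuMetric` fields and the tightest-fit criterion are assumptions made for illustration.

```python
# Hypothetical selection of an allocable GPU from collected metrics;
# field names and the fit criterion are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GpuMetric:
    gpu_id: str
    total_memory_mb: int
    used_memory_mb: int

    @property
    def free_memory_mb(self) -> int:
        return self.total_memory_mb - self.used_memory_mb


def allocate(gpus: list[GpuMetric], requested_mb: int) -> Optional[GpuMetric]:
    # Build the list of GPUs that can serve the request, then pick the one
    # with the tightest fit so larger requests still find room later.
    candidates = [g for g in gpus if g.free_memory_mb >= requested_mb]
    if not candidates:
        return None
    return min(candidates, key=lambda g: g.free_memory_mb)


if __name__ == "__main__":
    metrics = [
        GpuMetric("gpu-0", 16384, 12000),
        GpuMetric("gpu-1", 16384, 4000),
        GpuMetric("gpu-2", 24576, 2000),
    ]
    chosen = allocate(metrics, requested_mb=8000)
    print("allocated:", chosen.gpu_id if chosen else "none available")
```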
-
5.
Publication No.: US20240155815A1
Publication Date: 2024-05-09
Application No.: US18387222
Application Date: 2023-11-06
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM , Ki Cheol PARK
IPC: H05K7/20
CPC classification number: H05K7/20836 , H05K7/20727
Abstract: There is provided a method for applying a BMC analytical local fan control model in a rugged environment. A server chassis cooling fan control method according to an embodiment controls the rotation speeds of cooling fans on a zone basis while identifying and managing the temperature distribution of an edge server chassis on a zone basis through BMC data analysis. Accordingly, damage that may be caused by the increased temperature of an edge server in a rugged environment can be minimized, and power consumption for cooling the edge server can be reduced.
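A toy zone-based controller in the spirit of this abstract; the zones, thresholds, and duty-cycle steps are invented for illustration, and the BMC readings are simulated rather than read from real hardware.

```python
# A toy zone-based fan controller echoing the abstract; zone names,
# thresholds, and duty-cycle steps are invented for illustration.
ZONE_THRESHOLDS_C = {"cpu": 70.0, "storage": 55.0, "io": 60.0}


def fan_duty_for_zone(zone: str, temperature_c: float) -> int:
    """Return a fan duty cycle (percent) for one chassis zone."""
    threshold = ZONE_THRESHOLDS_C[zone]
    if temperature_c <= threshold - 10:
        return 30          # quiet, low-power baseline
    if temperature_c <= threshold:
        return 60          # temperature approaching the limit
    return 100             # over the limit: full cooling for this zone only


if __name__ == "__main__":
    # Simulated per-zone temperatures, e.g. parsed from BMC sensor data.
    readings = {"cpu": 72.5, "storage": 48.0, "io": 58.0}
    for zone, temp in readings.items():
        print(f"{zone}: {temp:.1f} C -> fan duty {fan_duty_for_zone(zone, temp)}%")
```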
-
6.
Publication No.: US20240160610A1
Publication Date: 2024-05-16
Application No.: US18387256
Application Date: 2023-11-06
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM
IPC: G06F16/21 , G06F16/2455
CPC classification number: G06F16/21 , G06F16/2455
Abstract: There is provided a query execution method in a DB system in which a plurality of CSDs are used as storage. According to an embodiment, a query execution method includes: generating snippets for offloading, to CSDs, a part of the query computations for a query received from a client; scheduling the generated snippets across the CSDs; collecting the results of offloading; and merging the collected results of offloading. Accordingly, by dividing query computations, offloading them, and processing them in parallel while the DBMS processes the query computations that are unsuitable for offloading, a query request from a client can be executed effectively and rapidly.
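A minimal sketch of the generate / schedule / collect / merge pipeline, with the CSD-side execution simulated by a thread pool; the snippet format and partitioning scheme are assumptions, not the patent's own design.

```python
# Sketch of the generate / schedule / collect / merge pipeline in the
# abstract; the per-CSD "execution" is a simulated placeholder.
from concurrent.futures import ThreadPoolExecutor


def generate_snippets(query: str, num_csds: int) -> list[dict]:
    # Split the offloadable part of the query into one snippet per CSD.
    return [{"query": query, "partition": i} for i in range(num_csds)]


def execute_on_csd(snippet: dict) -> list[int]:
    # Placeholder for the CSD-side computation (e.g. a filtered scan).
    return [snippet["partition"] * 10 + k for k in range(3)]


def run_query(query: str, num_csds: int = 3) -> list[int]:
    snippets = generate_snippets(query, num_csds)
    with ThreadPoolExecutor(max_workers=num_csds) as pool:
        partial_results = list(pool.map(execute_on_csd, snippets))
    # Merge the offloading results; non-offloadable work would stay in the DBMS.
    return sorted(row for part in partial_results for row in part)


if __name__ == "__main__":
    print(run_query("SELECT value FROM t WHERE value > 0"))
```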
-
7.
Publication No.: US20230156976A1
Publication Date: 2023-05-18
Application No.: US17986130
Application Date: 2022-11-14
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM
IPC: H05K7/20
CPC classification number: H05K7/20836 , H05K7/20718
Abstract: There is provided an adaptive temperature control method based on log analysis by a chassis manager in an edge server. The adaptive temperature control method of the edge server system according to an embodiment includes: collecting, by a chassis manager module of the edge server system, work logs of a computing module and a storage module; predicting a future workload from the collected work logs; predicting a future internal temperature of the edge server system based on the future workload and temperature; and controlling, by the chassis manager module, the edge server system based on the predicted future internal temperature. Accordingly, the configuration modules of an edge server system may be managed and controlled in a rugged environment, and the temperature of the edge server system may be adaptively controlled by transferring or additionally generating work of the edge server.
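A small forecasting sketch in the spirit of this abstract, assuming a simple linear trend over logged workload and temperature values; the heat-per-work factor and the 60 C action threshold are illustrative assumptions.

```python
# Minimal forecasting sketch for log-based temperature control; the linear
# trend fit and the per-unit-of-work heat factor are assumptions.
def predict_next(values: list[float]) -> float:
    """Extrapolate one step ahead with a simple least-squares linear trend."""
    n = len(values)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y + slope * (n - mean_x)


if __name__ == "__main__":
    workload_log = [40.0, 45.0, 52.0, 60.0]      # e.g. jobs/minute from work logs
    temperature_log = [48.0, 50.0, 53.0, 57.0]   # internal temperature in C

    future_load = predict_next(workload_log)
    future_temp = predict_next(temperature_log) + 0.05 * (future_load - workload_log[-1])
    action = ("transfer work to another edge server"
              if future_temp > 60 else "keep current plan")
    print(f"predicted load {future_load:.1f}, temp {future_temp:.1f} C -> {action}")
```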
-
8.
Publication No.: US20230155958A1
Publication Date: 2023-05-18
Application No.: US17983770
Application Date: 2022-11-09
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM
IPC: H04L47/783
CPC classification number: H04L47/783
Abstract: There are provided a method and an apparatus for cloud management which select optimal resources based on graphic processing unit (GPU) resource analysis in a large-scale container platform environment. According to an embodiment, the GPU bottleneck phenomenon occurring in applications of a large-scale container environment may be reduced by processing partitioned allocation of GPU resources, rather than the existing 1:1 allocation, through real-time GPU data analysis (application of a threshold) and synthetic analysis of GPU performance-degrading factors.
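The following sketch illustrates partitioned (fractional) GPU allocation gated by a utilization threshold, as opposed to 1:1 assignment; the threshold value and the `GpuStatus` fields are assumptions, not figures from the disclosure.

```python
# Illustrative partitioned (fractional) GPU allocation with a utilization
# threshold, instead of fixed 1:1 GPU-to-container assignment; the
# threshold and metric names are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GpuStatus:
    gpu_id: str
    utilization: float      # 0.0 - 1.0, from real-time metrics
    allocated_share: float  # fraction of the GPU already handed out


UTILIZATION_THRESHOLD = 0.8


def allocate_share(gpus: list[GpuStatus], requested_share: float) -> Optional[str]:
    for gpu in sorted(gpus, key=lambda g: g.allocated_share):
        # Skip GPUs that are already bottlenecked or would be over-committed.
        if gpu.utilization >= UTILIZATION_THRESHOLD:
            continue
        if gpu.allocated_share + requested_share > 1.0:
            continue
        gpu.allocated_share += requested_share
        return gpu.gpu_id
    return None


if __name__ == "__main__":
    pool = [GpuStatus("gpu-0", 0.9, 0.5), GpuStatus("gpu-1", 0.3, 0.25)]
    print(allocate_share(pool, requested_share=0.5))   # expected: gpu-1
```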
-
9.
Publication No.: US20210149745A1
Publication Date: 2021-05-20
Application No.: US17082446
Application Date: 2020-10-28
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM
IPC: G06F9/50
Abstract: A cloud management method and a cloud management device are provided. The cloud management method determines whether a plurality of pods are overloaded, identifies the current resource usage states of the cluster and the node, determines a method of scaling the resources of a specific overloaded pod from among the plurality of pods according to the current resource usage states of the cluster and the node, and scales the resources of the specific pod according to the determined method. Accordingly, scaling that uniformly extends the resources of a node and a pod in a cluster, both horizontally and vertically, can be performed automatically.
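A rough sketch of the scaling decision, assuming the pod, node, and cluster usage states are already known; the headroom thresholds and the decision order are illustrative assumptions.

```python
# Sketch of choosing horizontal or vertical scaling for an overloaded pod
# from cluster/node headroom; thresholds and field names are assumptions.
from dataclasses import dataclass


@dataclass
class UsageState:
    cpu_usage: float         # fraction of the pod's requested CPU actually used
    node_headroom: float     # free CPU fraction on the pod's node
    cluster_headroom: float  # free CPU fraction across the cluster


def scaling_decision(pod: UsageState, overload_threshold: float = 0.9) -> str:
    if pod.cpu_usage < overload_threshold:
        return "no scaling needed"
    if pod.node_headroom > 0.3:
        # Room on the same node: grow the pod's resource limits in place.
        return "vertical: raise pod CPU/memory limits"
    if pod.cluster_headroom > 0.3:
        # No room locally, but the cluster can host more replicas.
        return "horizontal: add pod replicas on other nodes"
    return "escalate: cluster itself needs more nodes"


if __name__ == "__main__":
    print(scaling_decision(UsageState(cpu_usage=0.95, node_headroom=0.1,
                                      cluster_headroom=0.5)))
```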
-
10.
Publication No.: US20170293441A1
Publication Date: 2017-10-12
Application No.: US15465605
Application Date: 2017-03-22
Applicant: Korea Electronics Technology Institute
Inventor: Jae Hoon AN , Young Hwan KIM , Chang Won PARK
IPC: G06F3/06 , G06F12/0846
CPC classification number: G06F3/0619 , G06F3/0665 , G06F3/0689 , G06F12/0848 , G06F12/0868 , G06F12/0871 , G06F17/30312 , G06F2212/282 , G06F2212/312 , G06F2212/502
Abstract: An adaptive block cache management method and a DBMS applying the same are provided. A DB system according to an exemplary embodiment of the present disclosure includes: a cache configured to temporarily store DB data; a disk configured to permanently store the DB data; and a processor configured to determine whether to operate the cache according to a state of the DB system. Accordingly, the high-speed cache is adaptively managed according to the current state of the DBMS, such that the DB processing speed can be improved.
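A toy adaptive block cache that is enabled or disabled from a simple memory-pressure signal, echoing the abstract; the LRU policy and the 0.2 free-memory cutoff are assumptions made for illustration.

```python
# Toy adaptive cache switched on/off from the DB system's state; the
# memory-pressure criterion and LRU eviction are illustrative assumptions.
from collections import OrderedDict


class AdaptiveBlockCache:
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self.enabled = True
        self.blocks: "OrderedDict[int, bytes]" = OrderedDict()

    def update_state(self, free_memory_ratio: float) -> None:
        # Disable caching under memory pressure, re-enable when it eases.
        self.enabled = free_memory_ratio > 0.2
        if not self.enabled:
            self.blocks.clear()

    def get(self, block_id: int, read_from_disk) -> bytes:
        if self.enabled and block_id in self.blocks:
            self.blocks.move_to_end(block_id)          # LRU refresh on a hit
            return self.blocks[block_id]
        data = read_from_disk(block_id)
        if self.enabled:
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)        # evict the oldest block
        return data


if __name__ == "__main__":
    cache = AdaptiveBlockCache(capacity=2)
    cache.update_state(free_memory_ratio=0.5)
    print(cache.get(1, lambda b: b"block-%d" % b))
```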