-
Publication No.: US20200311546A1
Publication Date: 2020-10-01
Application No.: US16830253
Filing Date: 2020-03-25
Inventor: Chang Sik LEE, Sung Back HONG, Seungwoo HONG, Ho Yong RYU
Abstract: A processor partitions a deep neural network having a plurality of exit points and at least one partition point in the branch corresponding to each exit point, for distributed processing across an edge device and a cloud. The processor sets environmental variables and training variables for training, selects an action that moves at least one of the exit point and the partition point from the combination corresponding to the current state, performs the training by accumulating experience data with a reward for the selected action and then moving to the next state, and outputs the combination of the optimal exit point and partition point as the result of the training.
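The abstract describes a reinforcement-learning search over combinations of exit points and partition points. The sketch below is only an illustration of that idea using plain tabular Q-learning; the state/action encoding, the problem sizes, and the reward function (trading a hypothetical accuracy gain against edge-side latency) are assumptions made for this example and are not taken from the patent.

```python
# Illustrative sketch, not the patented implementation: tabular Q-learning over
# (exit point, partition point) states for splitting a multi-exit DNN between
# an edge device and the cloud. All sizes and reward terms are made up.
import random

NUM_EXITS = 3          # assumed number of exit points in the DNN
NUM_PARTITIONS = 5     # assumed partition points per exit branch
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # move the exit or the partition point

def reward(exit_pt, part_pt):
    # Placeholder reward: deeper exits stand in for higher accuracy,
    # later partitions stand in for more edge-side latency. A real system
    # would measure these quantities instead.
    accuracy = (exit_pt + 1) / NUM_EXITS
    edge_latency = (part_pt + 1) / NUM_PARTITIONS
    return accuracy - 0.5 * edge_latency

def step(state, action):
    e = min(max(state[0] + action[0], 0), NUM_EXITS - 1)
    p = min(max(state[1] + action[1], 0), NUM_PARTITIONS - 1)
    return (e, p)

q = {}  # Q-table: (state, action_index) -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.2
state = (0, 0)
for episode in range(5000):
    if random.random() < epsilon:
        a = random.randrange(len(ACTIONS))                      # explore
    else:
        a = max(range(len(ACTIONS)),
                key=lambda i: q.get((state, i), 0.0))           # exploit
    nxt = step(state, ACTIONS[a])
    r = reward(*nxt)
    best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
    old = q.get((state, a), 0.0)
    q[(state, a)] = old + alpha * (r + gamma * best_next - old)
    state = nxt

# The argmax over the learned Q-table plays the role of the "combination of an
# optimal exit point and a partition point" that the abstract says is output.
best_state = max(
    ((e, p) for e in range(NUM_EXITS) for p in range(NUM_PARTITIONS)),
    key=lambda s: max(q.get((s, i), 0.0) for i in range(len(ACTIONS))),
)
print("selected (exit point, partition point):", best_state)
```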
-
Publication No.: US20250029004A1
Publication Date: 2025-01-23
Application No.: US18651555
Filing Date: 2024-04-30
Inventor: Chang Sik LEE, Tae Yeon KIM, Taeheum NA, Seungjae SHIN
IPC: G06N20/00
Abstract: Proposed is a technology that supports updating a machine learning model in a terminal or base station of a mobile communication system. The method, in which a core network supports a machine learning (ML) model update, may include: receiving, by a first network function in the core network, a model update request from a user equipment (UE) or a radio access network (RAN); obtaining, by the first network function, information for the model update based on the received model update request and communication with a second network function; and transferring, by the first network function, to the UE or the RAN a model update response including the information for the model update.
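The claimed flow is a three-step exchange: the UE or RAN sends a model update request to a first network function, that function obtains the needed information via a second network function, and it returns a model update response. The toy Python below simulates only that message flow; the message fields, class names, and the example URL are invented for illustration and do not reflect any standardized interface.

```python
# Illustrative sketch with assumed message shapes: a first core-network function
# handles a model update request from a UE or RAN, consults a second network
# function, and replies with the information for the model update.
from dataclasses import dataclass

@dataclass
class ModelUpdateRequest:
    requester: str        # "UE" or "RAN"
    model_id: str
    current_version: int

@dataclass
class ModelUpdateResponse:
    model_id: str
    new_version: int
    download_url: str     # hypothetical field carrying "information for the model update"

class SecondNetworkFunction:
    """Stand-in for the network function that knows about available model versions."""
    def latest_model_info(self, model_id: str) -> tuple:
        return 7, f"https://model-repo.example/{model_id}/v7"  # placeholder data

class FirstNetworkFunction:
    def __init__(self, second_nf: SecondNetworkFunction):
        self.second_nf = second_nf

    def handle_model_update(self, req: ModelUpdateRequest) -> ModelUpdateResponse:
        # Obtain information for the model update via the second network function.
        version, url = self.second_nf.latest_model_info(req.model_id)
        # Transfer a model update response back to the UE or RAN.
        return ModelUpdateResponse(req.model_id, version, url)

if __name__ == "__main__":
    nf1 = FirstNetworkFunction(SecondNetworkFunction())
    print(nf1.handle_model_update(ModelUpdateRequest("UE", "beam-predictor", 5)))
```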
-
Publication No.: US20240185101A1
Publication Date: 2024-06-06
Application No.: US18224762
Filing Date: 2023-07-21
Inventor: Chang Sik LEE, Hyebin PARK, Seungjae SHIN, Hong Seok JEON
IPC: G06N5/04
CPC classification number: G06N5/04
Abstract: An apparatus and method for split processing of a model are provided. The apparatus for the split processing of the model includes a memory including instructions and a processor electrically connected to the memory and configured to execute the instructions. When the instructions are executed by the processor, the processor may be configured to perform a plurality of operations. The plurality of operations may include obtaining information on a plurality of computing nodes that use at least one layer among a plurality of layers of a model for an artificial intelligence (AI)-based service, obtaining a requirement for the AI-based service, and controlling split processing of the model based on the information and the requirement.
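The operations amount to mapping the model's layers onto the reported computing nodes subject to a service requirement. Below is a minimal, assumption-laden sketch of one possible mapping policy (a greedy fill by per-node compute budget); the node attributes, cost figures, and the greedy rule itself are illustrative choices, not the method defined by the application.

```python
# Illustrative sketch with hypothetical node/requirement fields: greedily assign
# consecutive model layers to computing nodes so that no node exceeds its
# compute budget, mirroring "controlling split processing of the model based on
# the information and the requirement".
from dataclasses import dataclass

@dataclass
class ComputingNode:
    name: str
    compute_budget: float   # e.g. GFLOPs the node can spend per inference (assumed unit)

def split_layers(layer_costs: list, nodes: list) -> dict:
    assignment = {n.name: [] for n in nodes}
    node_iter = iter(nodes)
    node = next(node_iter)
    used = 0.0
    for idx, cost in enumerate(layer_costs):
        if used + cost > node.compute_budget:
            node = next(node_iter)   # spill the remaining layers to the next node
            used = 0.0
        assignment[node.name].append(idx)
        used += cost
    return assignment

if __name__ == "__main__":
    nodes = [ComputingNode("edge-device", 4.0), ComputingNode("edge-server", 20.0)]
    # Expected split: layers 0-1 on the edge device, layers 2-4 on the edge server.
    print(split_layers([1.0, 1.5, 2.0, 3.0, 5.0], nodes))
```

A real controller would also fold the service requirement (for example, an end-to-end latency bound) into the placement decision rather than using compute budgets alone.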
-