Invention Application
- Patent Title: ADAPTIVE DEEP LEARNING INFERENCE APPARATUS AND METHOD IN MOBILE EDGE COMPUTING
- Application No.: US17519352
- Application Date: 2021-11-04
- Publication No.: US20220150129A1
- Publication Date: 2022-05-12
- Inventor: Ryang Soo KIM, Geun Yong KIM, Sung Chang KIM, Hark YOO, Jae In KIM, Chor Won KIM, Hee Do KIM, Byung Hee SON
- Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
- Applicant Address: KR Daejeon
- Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
- Current Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
- Current Assignee Address: KR Daejeon
- Priority: KR10-2020-0147642 (2020-11-06), KR10-2021-0055977 (2021-04-29)
- Main IPC: H04L12/24
- IPC: H04L12/24 ; H04L12/26 ; G06K9/00 ; G06N3/04

Abstract:
Disclosed is an adaptive deep learning inference system that adapts to changing network latency when executing deep learning model inference, thereby guaranteeing end-to-end data processing service latency for a deep learning inference service in a mobile edge computing (MEC) environment. An apparatus and method are provided for a deep learning inference service performed in an MEC environment comprising a terminal device, a wireless access network, and an edge computing server. When at least one terminal device senses data and requests a deep learning inference service, the apparatus and method deliver inference results with deterministic latency, that is, a fixed service latency, by adjusting the latency required to produce the inference result according to changes in the latency of the wireless access network.
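The abstract's core idea, holding end-to-end latency fixed by adapting the inference step to the measured radio-network latency, can be illustrated with a minimal sketch. This is not the patented method itself; all model names, timings, and the `select_model` helper below are hypothetical, shown only to make the latency-budgeting concept concrete: subtract the current network latency from the fixed deadline and pick the largest model variant whose inference time fits the remaining budget.

```python
# Illustrative sketch (not the patented algorithm): choose a model variant
# so that network latency + inference time stays within a fixed deadline.

# Hypothetical (inference_time_ms, name) pairs, ordered smallest to largest.
MODEL_VARIANTS = [
    (5.0, "tiny"),
    (12.0, "small"),
    (30.0, "base"),
    (70.0, "large"),
]

def select_model(deadline_ms: float, network_latency_ms: float) -> str:
    """Return the largest model whose inference fits the remaining budget."""
    budget = deadline_ms - network_latency_ms  # time left for inference
    chosen = None
    for inference_ms, name in MODEL_VARIANTS:
        if inference_ms <= budget:
            chosen = name  # keep upgrading while the budget still allows it
    if chosen is None:
        raise RuntimeError("no model variant fits the remaining latency budget")
    return chosen

print(select_model(100.0, 20.0))  # 80 ms budget -> "large"
print(select_model(100.0, 75.0))  # 25 ms budget -> "small"
```

As the wireless-access latency rises, the sketch degrades to a faster (less accurate) model so the total service latency stays at the fixed deadline, which mirrors the deterministic-latency goal described in the abstract.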
Public/Granted literature
- US12218801B2 "Adaptive deep learning inference apparatus and method in mobile edge computing", Grant Date: 2025-02-04