Abstract:
Disclosed are a network latency measurement method and apparatus for a low-latency immersive service. The network latency measurement method includes transmitting a request message for measuring network latency from a first end node to a second end node, calculating a node latency at each node present between the first end node and the second end node and inserting the calculated node latency into the request message, and measuring, at the second end node, a one-way network latency from the first end node to the second end node based on the node latencies inserted into the request message.
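The flow above can be sketched as follows. This is an illustrative sketch, not the disclosed implementation: the message format, node names, and latency values are assumptions, and the one-way latency is taken here as the sum of the per-node latencies inserted along the path.

```python
# Sketch of the measurement flow: each intermediate node inserts its own
# latency into the request message; the second end node derives the one-way
# latency from the inserted values.

def insert_node_latency(message, node_id, latency_ms):
    """Intermediate node: append this node's measured latency to the message."""
    message["node_latencies"].append((node_id, latency_ms))
    return message

def measure_one_way_latency(message):
    """Second end node: one-way latency as the sum of inserted node latencies."""
    return sum(latency for _, latency in message["node_latencies"])

request = {"src": "endA", "dst": "endB", "node_latencies": []}
for node, latency in [("n1", 1.5), ("n2", 2.0), ("n3", 0.5)]:
    request = insert_node_latency(request, node, latency)

print(measure_one_way_latency(request))  # → 4.0
```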
Abstract:
A trust reality service brokering apparatus located on an edge cloud receives a context rule, analyzes event data of at least one physical entity connected to the edge cloud based on the context rule, and, when the analysis result indicates that an event has occurred, transmits an action command to the physical entity or virtual entity corresponding to the event.
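A minimal sketch of that brokering loop, assuming a rule expressed as a predicate plus an action and a simple mapping from a reporting entity to its paired target entity (all names here are illustrative):

```python
# Evaluate incoming event data against a context rule; on a match, return an
# action command addressed to the entity mapped to the event source.

def broker(event_data, context_rule, entity_map):
    """Return (target_entity, action) if the rule fires, else None."""
    if context_rule["predicate"](event_data):
        target = entity_map[event_data["entity"]]
        return (target, context_rule["action"])
    return None

rule = {"predicate": lambda e: e["temperature"] > 30.0, "action": "cool_down"}
entities = {"sensor-1": "actuator-1"}   # physical entity -> paired entity

command = broker({"entity": "sensor-1", "temperature": 31.2}, rule, entities)
# → ("actuator-1", "cool_down")
```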
Abstract:
In a content network over which a plurality of smart nodes is coupled, a content network management system receives information response messages including pieces of management information base (MIB) information from the smart nodes. Next, the content network management system classifies the pieces of MIB information included in the received response messages into server resource information, topology information, and network resource information, and stores and manages the classified information.
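The classification step can be sketched as below. This is a hedged illustration only: the MIB keys and their category assignments are assumptions standing in for whatever objects the management system actually collects.

```python
# Classify MIB entries from per-node responses into the three categories
# named in the abstract, keyed by the reporting node.

CATEGORY_OF = {
    "cpu_load": "server_resource",
    "mem_free": "server_resource",
    "neighbors": "topology",
    "link_bandwidth": "network_resource",
}

def classify_mib(responses):
    store = {"server_resource": {}, "topology": {}, "network_resource": {}}
    for node, mib in responses.items():
        for key, value in mib.items():
            category = CATEGORY_OF.get(key)
            if category:
                store[category].setdefault(node, {})[key] = value
    return store

store = classify_mib({"node1": {"cpu_load": 0.4, "neighbors": ["node2"]}})
```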
Abstract:
Proposed is a technology that supports updating a machine learning model in a terminal or base station of a mobile communication system. The method by which a core network supports a machine learning (ML) model update may include: receiving, by a first network function in the core network, a model update request from a user equipment (UE) or a radio access network (RAN); obtaining, by the first network function, information for the model update based on the received model update request and communication with a second network function; and transferring, by the first network function, to the UE or the RAN a model update response including the information for the model update.
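The three steps can be sketched as a simple message exchange. The message fields and the lookup performed by the second network function are assumptions for illustration, not the disclosed interfaces.

```python
# NF1 receives a model update request, consults NF2 for the update
# information, and returns a model update response to the UE/RAN.

def second_nf_lookup(model_id):
    """Second network function: resolve the update information for a model."""
    return {"model_id": model_id, "version": 2}

def first_nf_handle(request):
    """First network function: obtain update info via NF2, build the response."""
    info = second_nf_lookup(request["model_id"])
    return {"to": request["from"], "model_update_info": info}

response = first_nf_handle({"from": "UE-17", "model_id": "mobility-predictor"})
```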
Abstract:
A method for verifying reliability of an artificial intelligence (AI) model includes receiving an AI model request; creating, on a digital twin network, a verification twin for evaluating the reliability of the AI model; and verifying the reliability of the AI model based on information collected while the AI model is executed on the digital twin network.
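A minimal sketch of the verification step, assuming the twin supplies inputs with known expected outputs and that reliability is scored against a threshold (the scoring rule and threshold are assumptions):

```python
# Run the model on twin-generated inputs, collect outcomes while it executes,
# and compare the resulting score against a reliability threshold.

def verify_reliability(model, twin_inputs, expected, threshold=0.9):
    """Execute the model on twin inputs and score its reliability."""
    correct = sum(1 for x, y in zip(twin_inputs, expected) if model(x) == y)
    score = correct / len(twin_inputs)
    return {"score": score, "reliable": score >= threshold}

result = verify_reliability(lambda x: x * 2, [1, 2, 3], [2, 4, 6])
# → {"score": 1.0, "reliable": True}
```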
Abstract:
Disclosed are a method, device, and system for providing automated explanations for inference services based on artificial intelligence using a cloud. The method includes: requesting, according to an inference service, an inference response message from an inference container based on an inference request message received from a client; sending the inference request message and the inference response message to an imitation learning container linked with the inference container according to a mirroring setting; and creating interpretation information of the inference container based on the inference request message and the inference response message, and providing the created interpretation information to the client.
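The mirroring step can be sketched as follows. The class and method names are illustrative, and the "interpretation" here is a trivial stand-in (a count of observed pairs) rather than the disclosed explanation technique.

```python
# Every inference request/response pair is copied to an imitation-learning
# component, which later builds interpretation information from the pairs.

class ImitationLearner:
    def __init__(self):
        self.pairs = []

    def mirror(self, request, response):
        self.pairs.append((request, response))

    def interpret(self):
        """Stand-in for creating interpretation information from mirrored pairs."""
        return {"observed_pairs": len(self.pairs)}

def serve(request, infer, learner):
    response = infer(request)
    learner.mirror(request, response)   # mirroring setting in effect
    return response

learner = ImitationLearner()
serve({"x": 3}, lambda r: {"y": r["x"] + 1}, learner)
```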
Abstract:
The present invention relates to a system and a method for service orchestration capable of integrating network resources distributed in a cloud network to provide a service required by a service provider. The method includes: receiving a service profile from a service provider; analyzing the received service profile and generating, as a service specification, information on a virtual function and an application server used to provide the service; setting a service flow to provide the service to a user based on the service specification; and transmitting a service execution control command according to the service flow to at least one micro data centre.
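The pipeline above (profile → specification → flow → control commands) can be sketched as below; the field names and naming conventions are assumptions made for illustration.

```python
# Orchestration sketch: derive a service specification from the profile,
# build a service flow from it, and dispatch commands to micro data centres.

def build_specification(profile):
    service = profile["service"]
    return {"vnf": service + "-vnf", "app_server": service + "-app"}

def build_flow(spec):
    return [spec["vnf"], spec["app_server"]]   # traversal order for the user

def dispatch(flow, data_centres):
    return [{"dc": dc, "start": step} for dc, step in zip(data_centres, flow)]

spec = build_specification({"service": "vr-stream"})
commands = dispatch(build_flow(spec), ["mdc-1", "mdc-2"])
```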
Abstract:
Disclosed is a virtual file system for interworking between a content server and an information-centric network server, the system including: a file system function processing unit configured to process a file operation for a predetermined content requested through a plurality of content service protocols; a cache control unit configured to process the content requested through the file operation by managing a cache in a node; and a protocol matching unit configured to process the content requested through the file operation by interfacing with a plurality of content transfer protocols.
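One possible decomposition of the three units, sketched below purely for illustration: the interfaces and the fetch behavior are assumptions, not the disclosed design.

```python
# Three cooperating units: file-system function processing, cache control,
# and protocol matching (the transfer back end).

class CacheControl:
    def __init__(self):
        self.cache = {}

    def get_or_fetch(self, name, fetch):
        if name not in self.cache:
            self.cache[name] = fetch(name)   # miss: go to a transfer protocol
        return self.cache[name]

class ProtocolMatcher:
    def fetch(self, name):
        return "content:" + name             # stand-in for an HTTP/ICN transfer

class VirtualFileSystem:
    def __init__(self):
        self.cache = CacheControl()
        self.protocols = ProtocolMatcher()

    def read(self, name):
        """File-system function unit: serve a file operation on content."""
        return self.cache.get_or_fetch(name, self.protocols.fetch)

vfs = VirtualFileSystem()
data = vfs.read("video.mp4")   # fetched once, then served from the node cache
```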
Abstract:
Disclosed herein are a method and apparatus for processing an application service. The method includes: registering a user of a visual network service and at least one physical entity supporting the visual network service; mapping at least one virtual entity corresponding to the at least one physical entity; in response to the user entering a visual network service space, displaying the at least one virtual entity on a terminal device of the user; and confirming a user input for the at least one virtual entity and controlling an operation of the at least one physical entity corresponding to the at least one virtual entity.
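The mapping and control steps can be sketched as a bidirectional entity registry; the identifiers and operation name below are hypothetical.

```python
# Map each physical entity to a virtual entity, then route a user input on
# the virtual entity back to a control command for the physical entity.

class ServiceSpace:
    def __init__(self):
        self.virtual_of = {}    # physical id -> virtual id
        self.physical_of = {}   # virtual id -> physical id

    def map_entity(self, physical_id, virtual_id):
        self.virtual_of[physical_id] = virtual_id
        self.physical_of[virtual_id] = physical_id

    def handle_user_input(self, virtual_id, operation):
        """Confirm input on a virtual entity; control the physical one."""
        return (self.physical_of[virtual_id], operation)

space = ServiceSpace()
space.map_entity("lamp-1", "v-lamp-1")
command = space.handle_user_input("v-lamp-1", "turn_on")
# → ("lamp-1", "turn_on")
```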
Abstract:
A method of setting a user-defined virtual network is disclosed. The method includes configuring a virtual network including a controller, at least one network address translation (NAT) device, and at least one edge node, checking an operation type of the at least one edge node, setting a tunnel between edge nodes based on the operation type, and performing data transmission between the edge nodes through the set tunnel.
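The type-check and tunnel-setup steps can be sketched as below. The operation types and the rule mapping them to tunnel modes are assumptions (e.g. relaying the tunnel when a node sits behind a NAT), not the disclosed procedure.

```python
# Check each edge node's operation type and choose a tunnel mode accordingly.

def tunnel_mode(op_type):
    return "relayed" if op_type == "behind_nat" else "direct"

def set_tunnels(edge_nodes):
    """edge_nodes: {node_id: operation_type} -> {node_id: tunnel mode}."""
    return {node: tunnel_mode(op) for node, op in edge_nodes.items()}

tunnels = set_tunnels({"edge1": "public", "edge2": "behind_nat"})
# → {"edge1": "direct", "edge2": "relayed"}
```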