Abstract:
The present invention relates to a method and an apparatus in which a reinforcement learning agent ensures the quality of an initial control operation of an environment: a first action computed by a conventional algorithm is selected during an initial learning stage, and a second action computed from a Q function is selected once the initial learning stage has ended.
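The staged action selection described above can be sketched as follows. This is a minimal illustration, not the patented method: the names `fallback_policy`, `warmup_steps`, and the tabular Q representation are assumptions for the example.

```python
def select_action(state, q_table, fallback_policy, step, warmup_steps):
    """Hypothetical sketch: a first action comes from a conventional
    algorithm during the initial learning stage; afterwards the action
    is chosen greedily from the learned Q function."""
    if step < warmup_steps:
        # Initial learning stage: delegate to the conventional algorithm.
        return fallback_policy(state)
    # Initial stage ended: pick the action maximizing Q(state, action).
    action_values = q_table[state]
    return max(action_values, key=action_values.get)
```

Any stage-termination criterion (step count, loss threshold, coverage of the state space) could replace the simple `warmup_steps` comparison.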
Abstract:
Disclosed is a virtual file system for interworking between a content server and an information-centric network server, the system including: a file system function processing unit configured to process a file operation for a predetermined content requested from a plurality of content service protocols; a cache control unit configured to process the content requested through the file operation by managing a cache in a node; and a protocol matching unit configured to process the content requested through the file operation by interfacing with a plurality of content transfer protocols.
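The interplay of the three units can be sketched as a skeleton class. All interfaces here (the dict-based cache, the `protocols` mapping, the `read` operation) are hypothetical stand-ins for the units named in the abstract.

```python
class VirtualFileSystem:
    """Skeleton sketch (hypothetical interfaces) of the three units:
    file system function processing, cache control, protocol matching."""
    def __init__(self, cache, protocols):
        self.cache = cache          # in-node cache managed by the cache control unit
        self.protocols = protocols  # protocol name -> content transfer handler

    def read(self, content_name, protocol_name):
        """File system function processing unit: serve a file operation."""
        # Cache control unit: try the in-node cache first.
        if content_name in self.cache:
            return self.cache[content_name]
        # Protocol matching unit: fetch via the matching transfer protocol.
        fetch = self.protocols[protocol_name]
        data = fetch(content_name)
        self.cache[content_name] = data
        return data
```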
Abstract:
Provided are a method and an apparatus for managing a cache that stores content. The method includes: determining the popularity of the content based on content requests received for the content during a current time slot; transmitting information about the popularity of the content to a time-to-live (TTL) controller and receiving, from the TTL controller, TTL values for each popularity level determined by the TTL controller based on the popularity information; and managing the content based on the TTL values for each popularity level.
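A minimal sketch of per-popularity-level TTL management follows. The level thresholds, the three-level scheme, and the cache API are assumptions; in the abstract, the TTL values would come from the external TTL controller rather than a constructor argument.

```python
import time

class PopularityTTLCache:
    """Hypothetical sketch: entries expire under the TTL of their popularity level."""
    def __init__(self, ttl_per_level):
        self.ttl_per_level = dict(ttl_per_level)  # level -> TTL in seconds
        self.store = {}                           # name -> (content, level, insert_time)
        self.request_counts = {}                  # current-slot request counters

    def record_request(self, name):
        self.request_counts[name] = self.request_counts.get(name, 0) + 1

    def popularity_level(self, name, thresholds=(10, 3)):
        # Map current-slot request counts to a coarse level (assumed thresholds).
        count = self.request_counts.get(name, 0)
        if count >= thresholds[0]:
            return "high"
        if count >= thresholds[1]:
            return "mid"
        return "low"

    def put(self, name, content, now=None):
        now = time.time() if now is None else now
        self.store[name] = (content, self.popularity_level(name), now)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        item = self.store.get(name)
        if item is None:
            return None
        content, level, inserted = item
        if now - inserted > self.ttl_per_level[level]:
            del self.store[name]  # expired under this level's TTL
            return None
        return content
```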
Abstract:
In a content network over which a plurality of smart nodes is coupled, a content network management system receives response messages including pieces of management information base (MIB) information from the smart nodes. The content network management system then classifies the pieces of MIB information included in the received response messages into server resource information, topology information, and network resource information, and stores and manages each category.
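The three-way classification can be sketched as a simple mapping pass. The MIB entry names below are invented examples; real MIB object identifiers would be used in practice.

```python
# Hypothetical mapping from MIB entry kinds to the three managed categories.
MIB_CATEGORY = {
    "cpu_load": "server_resource",
    "storage_free": "server_resource",
    "neighbor_list": "topology",
    "link_bandwidth": "network_resource",
    "link_delay": "network_resource",
}

def classify_mib(pieces):
    """Group MIB pieces from a response message into the three categories."""
    grouped = {"server_resource": [], "topology": [], "network_resource": []}
    for key, value in pieces.items():
        category = MIB_CATEGORY.get(key)
        if category is not None:
            grouped[category].append((key, value))
    return grouped
```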
Abstract:
The present invention relates to a federated learning method interworking with a mobile core system, the method comprising: querying terminal information of each individual terminal among a plurality of terminals; querying network performance information; selecting participating terminals from among the plurality of terminals on the basis of the terminal information and the network performance information; transmitting respective parameters to the participating terminals and requesting local learning; and integrating the parameters.
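The selection and integration steps can be sketched as below. The selection criteria (battery level, bandwidth) and the unweighted averaging are assumptions standing in for whatever terminal information, network performance metrics, and aggregation rule an actual embodiment would use.

```python
def select_participants(terminals, min_battery=0.5, min_bandwidth_mbps=10.0, k=2):
    """Hypothetical criteria: keep terminals whose queried terminal info
    (battery) and network performance (bandwidth) meet thresholds."""
    eligible = [t for t in terminals
                if t["battery"] >= min_battery
                and t["bandwidth_mbps"] >= min_bandwidth_mbps]
    # Prefer the best-connected terminals.
    eligible.sort(key=lambda t: t["bandwidth_mbps"], reverse=True)
    return eligible[:k]

def aggregate(parameter_sets):
    """Integrate local parameters by element-wise averaging (FedAvg-style sketch)."""
    n = len(parameter_sets)
    length = len(parameter_sets[0])
    return [sum(p[i] for p in parameter_sets) / n for i in range(length)]
```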
Abstract:
In a content network over which a plurality of smart nodes is coupled, each smart node receives advertisement messages broadcast by adjacent smart nodes. The advertisement message may be one of a link state advertisement (LSA) message including link state information about a link (i.e., a network interface), a server state advertisement (SSA) message including server state information about the data storage state and the processing state of the processing unit of the smart node, and a content state advertisement (CSA) message including content state information about content stored in the smart node. Each smart node updates its own database based on the information included in the received advertisement message.
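Routing the three advertisement types into per-type tables can be sketched as follows; the dict-based message format and table names are assumptions for illustration.

```python
class SmartNodeDatabase:
    """Sketch: dispatch LSA/SSA/CSA advertisements into per-type tables."""
    def __init__(self):
        self.links = {}     # link state information (LSA)
        self.servers = {}   # server state information (SSA)
        self.contents = {}  # content state information (CSA)

    def update(self, message):
        kind = message["type"]
        sender = message["sender"]
        if kind == "LSA":
            self.links[sender] = message["link_state"]
        elif kind == "SSA":
            self.servers[sender] = message["server_state"]
        elif kind == "CSA":
            self.contents[sender] = message["content_state"]
        else:
            raise ValueError(f"unknown advertisement type: {kind}")
```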
Abstract:
In order to easily prevent the traffic of a content data server, which provides an arbitrary content, from being overloaded, a content transmission system according to an exemplary embodiment includes: a content data server which provides the arbitrary content to a plurality of terminals when request signals for the arbitrary content are input from the plurality of terminals; a data control server which, while the arbitrary content is being provided, monitors whether the traffic of the content data server is overloaded and, if the traffic is overloaded as a result of the monitoring, generates a traffic distribution request signal; and a node control server which, when the traffic distribution request signal is input from the data control server, controls an arbitrary distribution node satisfying a setting standard among a plurality of distribution nodes to provide the arbitrary content to an arbitrary terminal among the plurality of terminals.
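The monitoring and node-selection roles can be sketched as two functions. The utilization threshold and the free-capacity criterion are invented stand-ins for the "setting standard" of the abstract.

```python
def check_overload(observed_mbps, capacity_mbps, threshold=0.8):
    """Hypothetical rule: traffic counts as overloaded above a utilization threshold."""
    return observed_mbps / capacity_mbps > threshold

def pick_distribution_node(nodes, min_free_capacity_mbps=100.0):
    """Choose a distribution node meeting the set criterion (here: free capacity)."""
    candidates = [n for n in nodes if n["free_mbps"] >= min_free_capacity_mbps]
    if not candidates:
        return None
    return max(candidates, key=lambda n: n["free_mbps"])
```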
Abstract:
An apparatus and method for split processing of a model are provided. The apparatus for the split processing of the model includes a memory including instructions and a processor electrically connected to the memory and configured to execute the instructions. When the instructions are executed by the processor, the processor may be configured to perform a plurality of operations. The plurality of operations may include obtaining information on a plurality of computing nodes that use at least one layer among a plurality of layers of a model for an artificial intelligence (AI)-based service, obtaining a requirement for the AI-based service, and controlling split processing of the model based on the information and the requirement.
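One way such split control could look is sketched below: contiguous layer ranges are assigned to nodes under a per-node capacity requirement. The cost model and greedy fill strategy are assumptions, not the claimed method.

```python
def plan_split(layer_costs, node_capacities):
    """Sketch: assign contiguous layer ranges to computing nodes in order,
    filling each node up to its capacity (hypothetical cost model).
    A single layer larger than a node's capacity is still placed on it."""
    plan = []        # list of (node_index, [layer indices])
    node = 0
    used = 0.0
    assigned = []
    for i, cost in enumerate(layer_costs):
        if used + cost > node_capacities[node] and assigned:
            plan.append((node, assigned))
            node += 1
            used = 0.0
            assigned = []
            if node >= len(node_capacities):
                raise ValueError("not enough node capacity for the model")
        assigned.append(i)
        used += cost
    plan.append((node, assigned))
    return plan
```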
Abstract:
An exemplary embodiment provides a router including: a number calculating unit configured to count an accumulated request number for a plurality of previously requested contents and a request number of a currently input request signal for an arbitrary content; a probability calculating unit configured to calculate a probability value for the arbitrary content based on the accumulated request number and the request number; and a policy determining unit configured to determine, based on the calculated probability value and a set reference probability value, whether to store the arbitrary content that is provided from an arbitrary content server to the arbitrary terminal that transmitted the request signal.
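The three units can be sketched as one small class; the request-share probability model and the comparison rule are assumptions for illustration.

```python
class RequestStats:
    """Hypothetical sketch of the router's units: count requests, derive a
    probability, and cache iff it meets the reference probability."""
    def __init__(self, reference_probability=0.1):
        self.total_requests = 0            # accumulated request number
        self.per_content = {}              # request number per content
        self.reference_probability = reference_probability

    def on_request(self, name):
        """Number calculating unit: update counters on each request signal."""
        self.total_requests += 1
        self.per_content[name] = self.per_content.get(name, 0) + 1

    def probability(self, name):
        """Probability calculating unit: this content's share of all requests."""
        if self.total_requests == 0:
            return 0.0
        return self.per_content.get(name, 0) / self.total_requests

    def should_cache(self, name):
        """Policy determining unit: compare against the reference probability."""
        return self.probability(name) >= self.reference_probability
```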
Abstract:
Disclosed is hop-count-based content caching. The present invention implements hop-count-based content cache placement strategies that efficiently reduce network traffic: a routing node first judges whether to cache a received content chunk by examining the chunk's attributes; the routing node then makes a second caching judgment based on a caching probability of 1/(hop count); and, when the second judgment determines that the chunk should be cached, the node stores the content chunk and its hop count information in the cache memory of the routing node.
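The two-stage decision can be sketched directly. The chunk attribute checked in the primary judgment (`type == "data"`) and the dict-based cache are assumptions; the 1/hop-count probability is from the abstract.

```python
import random

def maybe_cache_chunk(chunk, hop_count, cache, rng=random.random):
    """Sketch: (1) primary judgment on the chunk's attribute, then
    (2) secondary judgment with probability 1/hop_count; on success,
    store the payload together with the hop count information."""
    # Primary judgment: only data chunks are cache candidates (assumed attribute).
    if chunk.get("type") != "data":
        return False
    # Secondary judgment: cache with probability 1 / hop count.
    if rng() < 1.0 / max(hop_count, 1):
        cache[chunk["name"]] = (chunk["payload"], hop_count)
        return True
    return False
```

Chunks that have traveled farther (larger hop count) are cached with lower probability, which spreads copies along the delivery path instead of duplicating them at every hop.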