Abstract:
In a content network over which a plurality of smart nodes is coupled, each smart node receives advertisement messages broadcast by adjacent smart nodes. An advertisement message may be a link state advertisement (LSA) message including link state information indicative of a link (i.e., a network interface), a server state advertisement (SSA) message including server state information indicative of the data storage state and the processing state of the smart node's processing unit, or a content state advertisement (CSA) message including content state information indicative of the content stored in the smart node. Each smart node updates its own database based on the information included in each received advertisement message.
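The three advertisement types and the database update described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the field names and the shape of the per-neighbor database are assumptions.

```python
from dataclasses import dataclass

# Hypothetical message types mirroring the three advertisements in the abstract.
@dataclass
class LSA:                  # link state: network-interface links
    node_id: str
    links: dict             # neighbor_id -> link metric

@dataclass
class SSA:                  # server state: storage and processing state
    node_id: str
    storage_free: int       # free storage, in bytes (assumed unit)
    cpu_load: float         # processing-unit load, 0.0 - 1.0

@dataclass
class CSA:                  # content state: names of stored contents
    node_id: str
    contents: list

class SmartNode:
    """Keeps a database updated from advertisements received from neighbors."""
    def __init__(self):
        self.db = {"link": {}, "server": {}, "content": {}}

    def on_advertisement(self, msg):
        if isinstance(msg, LSA):
            self.db["link"][msg.node_id] = msg.links
        elif isinstance(msg, SSA):
            self.db["server"][msg.node_id] = (msg.storage_free, msg.cpu_load)
        elif isinstance(msg, CSA):
            self.db["content"][msg.node_id] = set(msg.contents)

node = SmartNode()
node.on_advertisement(CSA("n2", ["movie/a", "movie/b"]))
node.on_advertisement(SSA("n1", 100, 0.5))
```

Keeping the three databases separate mirrors the abstract's split between link, server, and content state.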
Abstract:
In order to easily prevent the traffic of a content data server, which provides an arbitrary content, from being overloaded, the content transmission system according to an exemplary embodiment includes: a content data server which provides an arbitrary content to a plurality of terminals when a request signal for the content is input from the terminals; a data control server which, while the content is being provided, monitors whether the traffic of the content data server is overloaded and, if the traffic is overloaded as a result of the monitoring, generates a traffic distribution request signal; and a node control server which, when the traffic distribution request signal is input from the data control server, controls an arbitrary distribution node that satisfies a set criterion among a plurality of distribution nodes to provide the content to an arbitrary terminal among the plurality of terminals.
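The monitoring and distribution flow can be sketched as below. The overload threshold and the selection criterion (lowest current load) are assumptions; the abstract only says the traffic is monitored for overload and that a distribution node "satisfies a set criterion".

```python
# Assumed fraction of link capacity above which traffic counts as overloaded.
OVERLOAD_THRESHOLD = 0.8

def is_overloaded(traffic: float, capacity: float) -> bool:
    """Data control server's check: flag overload against a threshold."""
    return traffic / capacity > OVERLOAD_THRESHOLD

def select_distribution_node(nodes: dict) -> str:
    """Node control server's choice; criterion assumed to be minimum load."""
    # nodes maps node_id -> current load fraction
    return min(nodes, key=nodes.get)

# If overload is detected, a traffic distribution request is raised and a
# distribution node is chosen to serve the content instead.
if is_overloaded(traffic=900.0, capacity=1000.0):
    target = select_distribution_node({"d1": 0.6, "d2": 0.3, "d3": 0.5})
```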
Abstract:
Disclosed is hop-count based content caching. The present invention implements hop-count based content cache placement strategies that efficiently decrease network traffic: the routing node first judges whether to cache a received content chunk by examining an attribute of the chunk; the routing node then makes a second judgment based on a caching probability of 1/hop-count; and the routing node stores the content chunk and the hop-count information in its cache memory when the second judgment determines that the chunk should be cached.
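The two-stage decision can be sketched as below. The 'cacheable' attribute flag is an assumed stand-in for the attribute check, which the abstract does not specify.

```python
import random

def should_cache(chunk_attr: dict, hop_count: int, rng=random.random) -> bool:
    """Two-stage caching decision from the abstract.

    1) Primary judgment from the chunk's attribute (assumed 'cacheable' flag).
    2) Secondary judgment with probability 1/hop_count.
    """
    if not chunk_attr.get("cacheable", False):
        return False
    return rng() < 1.0 / max(hop_count, 1)

cache = {}  # chunk name -> (data, hop_count)

def on_chunk(name, data, attrs, hop_count):
    if should_cache(attrs, hop_count):
        # The hop count is stored alongside the chunk, as in the abstract.
        cache[name] = (data, hop_count)

# A chunk one hop from its source is cached with probability 1/1 = 1.
on_chunk("c1", b"chunk-bytes", {"cacheable": True}, hop_count=1)
```

Chunks that traveled farther are cached less often, which spreads copies toward consumers without filling every cache on the path.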
Abstract:
Disclosed are a method, device, and system for providing automated explanations for artificial-intelligence-based inference services using a cloud. The method includes: requesting an inference response message from an inference container according to an inference service, based on an inference request message received from a client; sending the inference request message and the inference response message to an imitation learning container linked with the inference container according to a mirroring setting; and creating interpretation information of the inference container based on the inference request message and the inference response message, and providing the created interpretation information to the client.
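The mirroring flow above can be sketched as follows. All class and method names are assumptions, and the inference and interpretation bodies are stubs; the point is the request/response pair being mirrored to the imitation-learning side.

```python
class InferenceContainer:
    def infer(self, request):
        # Placeholder inference result standing in for the real model.
        return {"label": "cat", "score": 0.93}

class ImitationContainer:
    """Receives mirrored (request, response) pairs to learn an explainer."""
    def __init__(self):
        self.pairs = []

    def mirror(self, request, response):
        self.pairs.append((request, response))

    def interpret(self, request, response):
        # An imitation model trained on the mirrored pairs would explain
        # the inference; here we return a stub explanation.
        return {"explained_label": response["label"], "seen_pairs": len(self.pairs)}

def handle(client_request, inference, imitation, mirroring_enabled=True):
    response = inference.infer(client_request)
    if mirroring_enabled:                       # the "mirroring setting"
        imitation.mirror(client_request, response)
    interpretation = imitation.interpret(client_request, response)
    return response, interpretation             # both go back to the client

resp, interp = handle({"image": "x.jpg"}, InferenceContainer(), ImitationContainer())
```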
Abstract:
The present invention relates to a method and apparatus by which a reinforcement learning agent ensures the quality of an initial control operation on an environment on the basis of reinforcement learning, wherein a first action calculated by using an algorithm is selected during an initial learning stage, and a second action calculated by using a Q function is selected after the initial learning stage ends.
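The two-phase action selection can be sketched as below. The step-count cutoff and the heuristic are assumptions; the abstract only states that the algorithm-based action is used during the initial stage and the Q-function action afterwards.

```python
# Assumed length of the initial learning stage, in environment steps.
INITIAL_STEPS = 1000

def select_action(step, state, q_table, heuristic, actions):
    """Use the algorithm's action early, the Q-function's action later."""
    if step < INITIAL_STEPS:
        # First action: computed by a conventional algorithm (heuristic),
        # guaranteeing reasonable control before Q-values are trained.
        return heuristic(state)
    # Second action: greedy argmax over the learned Q function.
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))
```

Bootstrapping from a known-good controller avoids the poor-quality random actions an untrained Q function would otherwise produce early on.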
Abstract:
A network infrastructure system implements data sharing and processing by using a network infrastructure to which application terminals or application servers constituting an application domain are connected in a shared manner. The system includes a plurality of network infrastructure nodes that store, process, and share data, wherein each of the plurality of network infrastructure nodes includes a data processing module providing a data transfer function, a data distribution function, a data processing function, and a data sharing function to at least one of the application terminal and the application server.
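A minimal interface sketch of the node's data processing module, exposing the four functions named in the abstract; the method signatures and the in-memory store are assumptions made only for illustration.

```python
class DataProcessingModule:
    """Per-node module with the four functions named in the abstract."""
    def __init__(self):
        self.store = {}                         # named data held by the node

    def share(self, name, data):                # data sharing function
        self.store[name] = data

    def process(self, name, fn):                # data processing function
        self.store[name] = fn(self.store[name])
        return self.store[name]

    def transfer(self, name, dest):             # data transfer function
        return (dest, self.store.get(name))

    def distribute(self, name, data, replicas): # data distribution function
        return [(r, data) for r in replicas]

module = DataProcessingModule()
module.share("k", 2)
```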
Abstract:
Provided are methods of managing and storing distributed files based on an information-centric network (ICN). A method of managing distributed files performed by an ICN node includes receiving a message for requesting provision of data from a first network node, determining whether a name of the requested data is identical to a name of data stored in the ICN node, and adaptively providing the data to the first network node based on a result of the determination. Accordingly, it is possible to reduce the overall network load by preventing duplication of a data access path.
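The adaptive provision step can be sketched as follows: the ICN node serves the data itself when the requested name matches data it stores, and otherwise forwards the request upstream. The forwarding callback is an assumption; the abstract only says provision is adapted to the result of the name comparison.

```python
def handle_request(name, local_store, forward):
    """ICN node's handling of a data-provision request from a network node."""
    if name in local_store:
        # Requested name is identical to a stored name: serve locally,
        # avoiding a duplicated access path to the original source.
        return ("data", local_store[name])
    # Otherwise forward the request toward the data's origin.
    return ("forwarded", forward(name))

store = {"/video/seg1": b"segment-bytes"}
```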
Abstract:
Disclosed is a virtual file system for interworking between a content server and an information-centric network server, the system including: a file system function processing unit configured to process a file operation for a predetermined content requested through a plurality of content service protocols; a cache control unit configured to process the content requested through the file operation by managing a cache in the node; and a protocol matching unit configured to process the content requested through the file operation by interfacing with a plurality of content transfer protocols.
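The three units and their wiring can be sketched as below. The class names, the read-through caching behavior, and the protocol string are assumptions; only the three roles come from the abstract.

```python
class CacheControlUnit:
    """Manages the in-node cache for requested content."""
    def __init__(self):
        self.cache = {}

    def get(self, name):
        return self.cache.get(name)

    def put(self, name, data):
        self.cache[name] = data

class ProtocolMatchingUnit:
    """Interfaces with content transfer protocols to fetch content."""
    def fetch(self, name, protocol):
        # A real unit would speak the selected transfer protocol;
        # here we return a placeholder payload.
        return f"<{name} via {protocol}>"

class FileSystemFunctionUnit:
    """Processes file operations requested through content service protocols."""
    def __init__(self, cache, matcher):
        self.cache, self.matcher = cache, matcher

    def read(self, name, protocol="http"):
        data = self.cache.get(name)         # try the in-node cache first
        if data is None:
            data = self.matcher.fetch(name, protocol)
            self.cache.put(name, data)      # read-through fill (assumed)
        return data

fs = FileSystemFunctionUnit(CacheControlUnit(), ProtocolMatchingUnit())
first = fs.read("a.mp4")
```

A second read of the same name is served from the cache control unit without touching the transfer protocol again.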