Abstract:
An apparatus for processing multi-layer data, the apparatus comprising: a lower layer switch configured to classify, on a frame or packet basis, lower layer data from multi-layer data having different properties for switching processing; and an upper layer switch configured to generate flows of the multi-layer data having different properties based on upper layer information or lower layer information.
Abstract:
Provided are a device and a method for providing network virtualization, in which a method of dynamically mapping processors includes extracting information on a tenant and on a virtual machine (VM) generated by a cloud OS or controller; classifying virtual machine queues (VMQs) and the processors that process the VMQs by tenant; and dynamically mapping the VMQs onto the processors by tenant.
Abstract:
A network function virtualization device includes at least one network function virtual machine and a network function flow switch configured to receive flows and switch them to the at least one network function virtual machine. Also provided is a network functions virtualization method for applying the virtualized network function to the flows.
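The flow-switching step described above can be sketched as a simple rule-based dispatcher. This is an illustrative sketch only; the rule representation and VM names (`firewall-vm`, `dst_port`) are assumptions, not details from the abstract.

```python
# Hypothetical sketch of a network function flow switch: incoming flows are
# matched against rules and forwarded to the network function VM that should
# process them. First matching rule wins.

class NfFlowSwitch:
    def __init__(self):
        self.rules = []  # list of (match_fn, nf_vm) pairs

    def add_rule(self, match_fn, nf_vm):
        """Register a predicate over flow attributes and its target NF VM."""
        self.rules.append((match_fn, nf_vm))

    def switch(self, flow):
        """Return the NF VM for this flow, or None if no function applies."""
        for match, vm in self.rules:
            if match(flow):
                return vm
        return None

sw = NfFlowSwitch()
sw.add_rule(lambda f: f["dst_port"] == 80, "firewall-vm")
result = sw.switch({"dst_port": 80})  # "firewall-vm"
```

A real flow switch would match on full packet headers and support service chains (several NF VMs per flow); the single-target lookup here only illustrates the classification step.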
Abstract:
Disclosed herein are a method and apparatus for data slicing in an information centric network (ICN) system. According to the present disclosure, the method may include, by an IoT terminal device provided in the ICN system, receiving a creation request for slice data from an application, processing registration of the slice data, and processing publication of the slice data, wherein the slice data may include sensor data created by at least one sensor node at every predetermined time unit.
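The register-then-publish flow above can be illustrated with a minimal in-memory sketch. The class, method names, and the ICN-style slice name are all hypothetical; the abstract does not specify an API.

```python
# Minimal sketch (assumed API) of the slice-data lifecycle: an application
# requests creation of slice data, the IoT terminal registers it under a
# name, then publishes sensor readings collected per time unit.

class SliceDataStore:
    def __init__(self):
        self.registry = {}    # slice name -> metadata (e.g. collection period)
        self.published = {}   # slice name -> list of per-unit reading batches

    def register(self, name, period_s):
        """Process registration of a slice created at the application's request."""
        self.registry[name] = {"period_s": period_s}
        self.published[name] = []

    def publish(self, name, sensor_readings):
        """Process publication of one time unit's worth of sensor data."""
        if name not in self.registry:
            raise KeyError(f"slice {name!r} is not registered")
        self.published[name].append(list(sensor_readings))

store = SliceDataStore()
store.register("/icn/room1/temp", period_s=60)
store.publish("/icn/room1/temp", [21.5, 21.7])
```

In an actual ICN deployment the name would be resolved by the network itself and publication would push named data objects toward subscribers; the dictionary here stands in for that fabric.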
Abstract:
An apparatus and a method for tenant-based dynamic processor mapping, which classify virtual machine multi-queues and the processors that process those queues for each tenant to which one or more virtual machines belong, and dynamically map the multi-queues belonging to a tenant onto the multi-processors belonging to that tenant based on the tenant's total network and processor usage. This provides network virtualization that ensures the network traffic processing of virtual machines belonging to the same tenant is not affected by congestion of network traffic belonging to another tenant.
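The per-tenant mapping described above can be sketched as a small load-aware assignment. All names (`Vmq`, `map_by_tenant`, the per-queue `load` field) are illustrative assumptions; the patent does not define this interface, and a real implementation would pin queues to CPU cores via the hypervisor.

```python
# Hypothetical sketch of tenant-based dynamic VMQ-to-processor mapping:
# queues are grouped by tenant, and each tenant's queues are placed only on
# that tenant's own processors, least-loaded processor first.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Vmq:
    vm: str        # owning virtual machine
    tenant: str    # tenant the VM belongs to
    load: float    # recent network/processor usage of this queue

def map_by_tenant(vmqs, processors_by_tenant):
    """Return a {(tenant, vm): processor} mapping that never crosses tenants."""
    mapping = {}
    proc_load = defaultdict(float)
    by_tenant = defaultdict(list)
    for q in vmqs:
        by_tenant[q.tenant].append(q)
    for tenant, queues in by_tenant.items():
        procs = processors_by_tenant[tenant]  # isolation: this tenant's CPUs only
        # place heaviest queues first so load balances across the tenant's CPUs
        for q in sorted(queues, key=lambda q: -q.load):
            target = min(procs, key=lambda p: proc_load[p])
            mapping[(q.tenant, q.vm)] = target
            proc_load[target] += q.load
    return mapping
```

Because each tenant draws only from its own processor pool, a burst of traffic in one tenant's queues cannot consume another tenant's processing capacity, which is the isolation property the abstract claims.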
Abstract:
Provided is a plug-in-based interoperation method for a network device, performed by a computing device that includes a cloud operating system (OS) in a cloud environment. The method includes acquiring control information of a different type of network device that does not support an instruction of the plug-in among the network devices connected to the computing device; receiving an instruction from the cloud OS; converting the received instruction into an instruction for the network device based on the acquired control information; and providing the converted instruction to the network device. The cloud OS can thereby cause the computing device to interoperate with several network devices simultaneously through the plug-in.
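The conversion step can be sketched as a small adapter keyed by the acquired control information. The instruction names and the template-based translation table below are hypothetical; the abstract does not specify how control information is encoded.

```python
# Illustrative adapter sketch of the plug-in conversion step: a generic
# cloud-OS instruction is translated into the native command of a device
# that does not understand the plug-in's instruction set.

class VendorAdapter:
    def __init__(self, control_info):
        # control_info: plug-in instruction -> native command template,
        # acquired from the device (or its documentation) in advance
        self.control_info = control_info

    def convert(self, instruction, **params):
        """Translate a plug-in instruction plus parameters to a native command."""
        template = self.control_info[instruction]
        return template.format(**params)

adapter = VendorAdapter({"create_port": "port add name={name} vlan={vlan}"})
cmd = adapter.convert("create_port", name="p0", vlan=100)
# cmd == "port add name=p0 vlan=100"
```

With one adapter instance per device type, the cloud OS issues a single plug-in instruction and each adapter emits the matching native command, which is how the plug-in can drive several heterogeneous devices at once.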
Abstract:
A communication system for supporting a cloud service, comprising: a communication node connected to a network; an IDC connected to the network; and a converged communication apparatus connected to the network and configured to interact with the IDC to integrally manage a resource stored in the IDC and, upon receipt of a request for the cloud service, to interact with the communication node to transfer the resource to the user or service provider that made the request.
Abstract:
The present invention proposes a data parallel processing device that performs parallel processing on input data by varying the flow ID generation scheme depending on the load of the processors in a multi-processor structure configured as a processor array. The proposed device includes a flow ID generating unit which generates, for input data, a flow ID differentiated in accordance with a buffer status; a data allocating unit which allocates data having the same flow ID to a specific processor; and a data processing unit which sequentially processes the data allocated to each processor, so that parallel processing performance is improved compared with the related art.
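The buffer-dependent flow ID generation above can be sketched as follows. The watermark threshold and the round-robin fallback are assumptions made for illustration; the abstract only states that the generation scheme varies with buffer status.

```python
# Hypothetical sketch of buffer-aware flow ID generation: under normal load,
# data with the same flow key hashes to the same processor (preserving
# per-flow order); when that processor's buffer is full, data is spread
# round-robin to keep the array busy.

import itertools

class FlowIdGenerator:
    def __init__(self, num_procs, high_watermark):
        self.num_procs = num_procs
        self.high_watermark = high_watermark          # buffer depth threshold
        self.rr = itertools.cycle(range(num_procs))   # fallback distributor

    def flow_id(self, flow_key, buffer_depths):
        fid = hash(flow_key) % self.num_procs
        if buffer_depths[fid] >= self.high_watermark:
            return next(self.rr)   # overloaded: rebalance across the array
        return fid                 # normal: same flow -> same processor
```

The trade-off is deliberate: hashing keeps same-flow data on one processor so it can be processed sequentially in order, while the overload path sacrifices that affinity to avoid stalling on a single congested buffer.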