Abstract:
Disclosed herein is a disaggregation computing system. The disaggregation computing system comprises: a local computing device that comprises a local processor, a local memory bus, a local memory, and a local disaggregation controller; a remote computing device that comprises a remote processor, a remote memory bus, a remote memory, and a remote disaggregation controller; and a disaggregation network that connects the local computing device and the remote computing device, wherein the local disaggregation controller and the remote disaggregation controller are configured to: check a response delay for access of the remote memory, and control the access of the remote memory based on the response delay.
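The delay check and access control above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class name, the threshold value, and the "fallback-local" policy are assumptions.

```python
# Hypothetical sketch of delay-based remote-memory access control.
# DELAY_THRESHOLD_NS and the policy names are illustrative assumptions.
DELAY_THRESHOLD_NS = 1_000  # assumed acceptable response delay

class DisaggregationController:
    def __init__(self, threshold_ns=DELAY_THRESHOLD_NS):
        self.threshold_ns = threshold_ns

    def check_response_delay(self, measured_delay_ns):
        """Return True if the measured remote-memory response delay is acceptable."""
        return measured_delay_ns <= self.threshold_ns

    def control_access(self, measured_delay_ns):
        """Choose an access policy based on the measured response delay."""
        if self.check_response_delay(measured_delay_ns):
            return "direct"      # access remote memory over the disaggregation network
        return "fallback-local"  # e.g. serve the access from local memory instead
```

For example, a controller with the default threshold would allow direct access for a 500 ns delay but fall back for a 5 µs delay.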
Abstract:
Disclosed are an information security device that inspects information transmitted between a server providing a social network service (SNS) and a terminal and that selectively allows transmission of the information based on a predetermined security condition; a terminal that exchanges information with the server through the information security device; and a network system including the same. This prevents confidential information from being leaked outside through the social network service while the service is provided through a terminal of an internal network.
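The selective-transmission check can be illustrated with a minimal filter. The specific security condition (a keyword blacklist) is an assumption made for this sketch; the abstract only states that transmission is allowed based on a predetermined condition.

```python
# Illustrative filter: allow an SNS message only if no confidential
# keyword appears. The keyword set is an assumed example policy.
CONFIDENTIAL_KEYWORDS = {"internal-only", "secret-project"}

def allow_transmission(message: str) -> bool:
    """Return True if the message may pass through the security device."""
    lowered = message.lower()
    return not any(kw in lowered for kw in CONFIDENTIAL_KEYWORDS)
```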
Abstract:
According to an embodiment of the present disclosure, a computer-implemented method uses a device for managing priority in a memory disaggregation network, the method comprising: classifying received read requests by priority and storing them in a request queue of a memory module; classifying the received read requests by response path, which indicates an output port of the memory module, and storing them in a response queue of the memory module; and performing scheduling in consideration of the states of the request queue and the response queues.
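The two-way classification and scheduling steps can be sketched as below. The queue counts and the scheduling rule (serve the highest-priority pending request, a simplified stand-in for scheduling over both queue states) are assumptions for illustration.

```python
from collections import deque

# Hypothetical sketch: each read request is placed in a per-priority
# request queue and also tracked in a per-output-port response queue.
NUM_PRIORITIES = 2
NUM_PORTS = 2

request_queues = [deque() for _ in range(NUM_PRIORITIES)]
response_queues = [deque() for _ in range(NUM_PORTS)]

def enqueue(request):
    """Classify a read request by priority and by response path (output port)."""
    request_queues[request["priority"]].append(request)
    response_queues[request["out_port"]].append(request)

def schedule():
    """Serve the highest-priority pending request (0 = highest priority)."""
    for prio in range(NUM_PRIORITIES):
        q = request_queues[prio]
        if q:
            req = q.popleft()
            response_queues[req["out_port"]].remove(req)
            return req
    return None
```

A fuller scheduler would also weigh the occupancy of each response queue when choosing among requests, as the abstract indicates.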
Abstract:
A parallel scheduling apparatus includes an information managing unit that generates first request information for scheduling, a first scheduling unit that performs first scheduling and then generates first matching information on the basis of the first request information, and a second scheduling unit that performs second scheduling on the basis of the first request information and the first matching information. The parallel scheduling improves scheduling performance and lowers implementation complexity, ensures low delay and transmission fairness among virtual output queues (VOQs) at low input traffic, can be applied to all scheduling algorithms that perform existing multiple iterations, and provides efficient scheduling in a packet switch that has a long round-trip time (RTT) or a very short time slot or cell size, such as an optical switch.
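The two-stage structure can be sketched as a pair of matching passes over a request matrix: the first pass produces a partial matching (the "first matching information"), and the second pass matches only the inputs and outputs the first pass left unmatched. The greedy head-of-line rule below is an assumption, not the patented algorithm.

```python
# Illustrative two-stage input-output matching. requests[i] is the list
# of output ports requested by input i (e.g. its non-empty VOQs).

def first_schedule(requests):
    """First matching pass over head-of-line requests only."""
    matching, used = {}, set()
    for inp, outs in enumerate(requests):
        if outs and outs[0] not in used:
            matching[inp] = outs[0]
            used.add(outs[0])
    return matching

def second_schedule(requests, first_matching):
    """Second pass: match inputs/outputs the first pass left unmatched."""
    matching = dict(first_matching)
    used = set(first_matching.values())
    for inp, outs in enumerate(requests):
        if inp in matching:
            continue
        for out in outs:
            if out not in used:
                matching[inp] = out
                used.add(out)
                break
    return matching
```

Because the second unit reuses the first request information rather than issuing new requests, the two passes can proceed without an extra request/grant round trip, which is where the benefit for long-RTT switches comes from.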
Abstract:
A method and apparatus for providing mobility in an Ethernet network. An Ethernet switch receives an Ethernet frame through a port of the switch and transmits the frame to an upper Ethernet switch through a root port of the switch based on whether forwarding information for the destination address of the frame exists in a forwarding table.
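The forwarding decision can be sketched in a few lines: if the destination address has an entry in the forwarding table, use the learned port; otherwise send the frame up through the root port. The table layout and port names are illustrative assumptions.

```python
# Minimal sketch of the forwarding decision described above.
ROOT_PORT = "root"  # assumed name for the port toward the upper switch

def forward(frame_dst, forwarding_table):
    """Return the output port for a frame: the learned port if the
    destination is known, otherwise the root port."""
    return forwarding_table.get(frame_dst, ROOT_PORT)
```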