Abstract:
Provided is a many-core system including: a resource unit including resources needed for execution of an operating system and resources needed for execution of a lightweight kernel; a program constructing unit configured to convert an input program into an application program and to load the application program into the resource unit; a run-time management unit configured to manage a running environment for executing the application program; and a self-organization management unit configured to monitor the application program and the resources in the resource unit, to dynamically adjust the running environment to prevent a risk factor from occurring during execution of the application program, and to cure a risk factor that has occurred.
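The monitor/adjust/cure cycle of the self-organization management unit can be illustrated as a single control step. This is a minimal sketch under assumed names (`Status`, `self_organization_step`, and the callback parameters are all illustrative, not taken from the disclosure):

```python
from dataclasses import dataclass

@dataclass
class Status:
    # Illustrative monitoring result: a risk may be predicted (not yet
    # occurred) or may already have occurred.
    risk_predicted: bool = False
    risk_occurred: bool = False

def self_organization_step(monitor, adjust_runtime, cure):
    """One cycle of the assumed self-organization loop: monitor the
    application and resources, adjust the running environment to prevent
    a predicted risk, and cure a risk that has already occurred."""
    status = monitor()
    if status.risk_predicted:
        adjust_runtime(status)   # pre-emptive adjustment of the running environment
    if status.risk_occurred:
        cure(status)             # recovery from a risk factor that occurred
    return status
```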
Abstract:
The present invention relates to verifying the suitability of a tool and diagnosing a spindle for a machining center on which different tools are mounted. Provided are a machining center spindle diagnosis apparatus and method configured to monitor a change of a tool in the machining center; when the change of the tool is recognized, control the machining center to idle the spindle; acquire sensor data from a sensor installed on the machining center during the idling of the spindle; input the acquired sensor data to a tool verifying model pre-trained by a machine learning technology to verify suitability of the tool; and, when the tool is verified to be suitable, input the acquired sensor data to a spindle diagnosing model pre-trained by the machine learning technology to diagnose an operating state of the spindle.
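The two-stage inference flow described above (verify the tool first, diagnose the spindle only if the tool is suitable) can be sketched as follows. The function name and the stand-in threshold "models" are assumptions for illustration; the actual pre-trained models are not specified here:

```python
def diagnose_cycle(sensor_data, tool_model, spindle_model):
    """Assumed two-stage pipeline: tool_model returns True when the tool
    is suitable; spindle_model returns an operating-state label.
    The spindle is diagnosed only after the tool passes verification."""
    if not tool_model(sensor_data):
        return "unsuitable_tool"
    return spindle_model(sensor_data)
```

In practice `tool_model` and `spindle_model` would wrap the pre-trained machine-learning models; simple callables stand in for them here.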
Abstract:
A method and an apparatus for processing to support scalability in a many-core environment are provided. The processing apparatus includes: a counter unit including a global reference counter, at least one category reference counter configured to access the global reference counter, and at least one local reference counter configured to access the category reference counter; and a processor connected to the counter unit and configured to increase or decrease each reference counter. The at least one category reference counter has a hierarchical structure including at least one layer.
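The local → category → global hierarchy can be sketched as counters that batch updates upward, so that most increments touch only a local counter. The class names and flush thresholds below are illustrative assumptions, not the disclosed design:

```python
class GlobalCounter:
    """Single shared counter at the top of the hierarchy."""
    def __init__(self):
        self.value = 0

class CategoryCounter:
    """Middle layer: batches updates from local counters before
    touching the global counter, reducing contention on it."""
    def __init__(self, global_counter, threshold=4):  # threshold is an assumed tuning knob
        self.global_counter = global_counter
        self.threshold = threshold
        self.pending = 0

    def add(self, delta):
        self.pending += delta
        if abs(self.pending) >= self.threshold:
            self.flush()

    def flush(self):
        self.global_counter.value += self.pending
        self.pending = 0

class LocalCounter:
    """Per-core counter: accumulates reference changes locally and
    forwards them in batches to its category counter."""
    def __init__(self, category, threshold=2):
        self.category = category
        self.threshold = threshold
        self.pending = 0

    def incr(self, delta=1):
        self.pending += delta
        if abs(self.pending) >= self.threshold:
            self.category.add(self.pending)
            self.pending = 0
```

Because category counters can themselves feed other category counters, the middle layer generalizes to the multi-layer hierarchy the abstract describes.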
Abstract:
Disclosed are an apparatus and a method of processing input and output in a multi-kernel system. A method of processing input and output in a multi-kernel system according to the present disclosure includes: setting a shared memory between a first kernel on a main processor and a lightweight kernel on a parallel processor; setting a data transmission and reception channel between the first kernel on the main processor and the lightweight kernel on the parallel processor using the shared memory; providing, on the basis of the data transmission and reception channel, an input/output task that occurs in the lightweight kernel to the first kernel on the main processor; processing, by the first kernel on the main processor, an operation corresponding to the input/output task; and providing a result of the processing to the lightweight kernel.
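The delegation flow above can be simulated with a pair of queues standing in for the shared-memory channel: the lightweight kernel enqueues I/O tasks, and the first kernel services them and enqueues results. All names (`Channel`, `lightweight_kernel_io`, `main_kernel_service`) are illustrative assumptions:

```python
from collections import deque

class Channel:
    """Stand-in for the shared-memory data transmission/reception channel:
    one queue for requests, one for results (structure is assumed)."""
    def __init__(self):
        self.requests = deque()
        self.responses = deque()

def lightweight_kernel_io(channel, op, payload):
    # The lightweight kernel forwards an I/O task over the channel
    # instead of handling it itself.
    channel.requests.append((op, payload))

def main_kernel_service(channel, handlers):
    # The first kernel on the main processor processes each delegated
    # task and returns the result over the same channel.
    while channel.requests:
        op, payload = channel.requests.popleft()
        channel.responses.append(handlers[op](payload))
```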
Abstract:
Provided is a method of scheduling threads in a many-core system. The method includes generating a thread map where a connection relationship between a plurality of threads is represented by a frequency of inter-process communication (IPC) between threads, generating a core map where a connection relationship between a plurality of cores is represented by the number of hops between cores, and respectively allocating the plurality of threads to the plurality of cores defined by the core map, based on a thread allocation policy defining a mapping rule between the thread map and the core map.
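The mapping objective implied by the thread map and core map (heavily communicating threads should land on nearby cores) can be written as an IPC-frequency-times-hop-count cost. The brute-force search below is only an illustration of that objective for small inputs, not the patented allocation policy; all names are assumed:

```python
from itertools import permutations

def schedule(thread_ipc, core_hops, threads, cores):
    """Find the thread-to-core mapping minimizing
    sum(ipc_frequency * hop_distance) over communicating thread pairs.
    thread_ipc: {(t_a, t_b): ipc_frequency}  (the thread map)
    core_hops:  {(c_a, c_b): hops}           (the core map, symmetric)
    Exhaustive search; only feasible for small thread/core counts."""
    best, best_cost = None, float("inf")
    for perm in permutations(cores, len(threads)):
        mapping = dict(zip(threads, perm))
        cost = sum(freq * core_hops[(mapping[a], mapping[b])]
                   for (a, b), freq in thread_ipc.items())
        if cost < best_cost:
            best, best_cost = mapping, cost
    return best, best_cost
```

A real allocation policy would use a heuristic (e.g. greedy placement of the heaviest-communicating pairs) rather than exhaustive search.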
Abstract:
Provided are a method and an apparatus for partitioning or combining massive data, which can efficiently partition and combine data when an operation is executed by being distributed to a plurality of nodes in an environment, such as genome analysis, in which massive data can be partitioned and executed. The method includes: storing meta information on partition or combination of at least one piece of data; when a request for data is detected, acquiring meta information corresponding to the data; partitioning or combining the data based on the meta information; and transmitting the partitioned or combined data in response to the request.
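The role of the meta information can be illustrated with a minimal partition/combine round trip: partitioning records enough metadata to reassemble the data later. The function names and metadata fields below are assumptions for illustration:

```python
def partition(data, chunk_size):
    """Split a byte string into fixed-size chunks and record assumed
    meta information (total length, chunk size, chunk count) needed
    to recombine them."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    meta = {"total": len(data), "chunk_size": chunk_size, "count": len(chunks)}
    return chunks, meta

def combine(chunks, meta):
    """Reassemble the chunks and use the meta information to verify
    that the combined result matches the original data."""
    if len(chunks) != meta["count"]:
        raise ValueError("missing or extra chunks")
    data = b"".join(chunks)
    if len(data) != meta["total"]:
        raise ValueError("combined length does not match meta information")
    return data
```

In a distributed setting each chunk would be processed on a different node, with the meta information kept by the coordinator so the results can be combined in response to a request.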