Abstract:
A method for determining a source and a transmission path to provide content includes receiving a message comprising at least one of channel information of a link between a request device which requests content download and a central management device, available resource amount information of a candidate device, channel information of a link between the candidate device and the central management device, and channel information of a link between the request device and one candidate device. The method also includes determining a source device and the transmission path for providing the content to the request device using an available resource amount of the candidate device, a data rate of the link between the request device and the central management device, a data rate of the link between the request device and the candidate device, and a data rate of the link between the candidate device and the central management device.
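The selection step above can be sketched as follows. The abstract does not specify the exact criterion, so this illustration assumes, hypothetically, that a candidate must have enough free resources to hold the content and that the effective rate via a candidate is its direct link rate to the request device, falling back to the central management device otherwise.

```python
# Hedged sketch of the source/path selection. The keys 'resource',
# 'rate_to_request', and 'rate_to_central' are illustrative
# assumptions, not terms from the abstract.

def select_source(content_size, rate_request_central, candidates):
    """candidates: list of dicts describing each candidate device."""
    best = ("central", rate_request_central)  # default: download from central device
    for i, c in enumerate(candidates):
        if c["resource"] < content_size:
            continue  # not enough available resources to act as a source
        direct = c["rate_to_request"]  # device-to-device link data rate
        if direct > best[1]:
            best = (i, direct)
    return best  # (chosen source, achievable data rate)
```

A candidate is preferred only when its link to the request device outperforms the central path, which matches the abstract's use of both link data rates and the candidate's available resource amount.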
Abstract:
A cloud service system includes at least one user device, a plurality of clouds for providing different cloud services, and a gateway connected between the user device and the clouds. The gateway selects at least one of the clouds according to predefined Service Level Agreement (SLA) information, and stores content provided from the user device to the selected cloud.
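A minimal sketch of the gateway's selection logic, assuming hypothetical SLA fields (availability and maximum latency); the abstract only states that selection follows predefined SLA information.

```python
# Illustrative gateway selection: return the first cloud whose SLA
# satisfies the requirements attached to the content. The field names
# are assumptions for the sketch.

def select_cloud(clouds, required):
    """clouds: dict name -> SLA dict; required: SLA the content needs."""
    for name, sla in clouds.items():
        if (sla["availability"] >= required["availability"]
                and sla["latency_ms"] <= required["latency_ms"]):
            return name  # first cloud meeting the predefined SLA
    return None  # no cloud satisfies the SLA
```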
Abstract:
A method of operating a reconfigurable logic circuit includes: receiving a video sequence; profiling throughput of the video sequence with regard to a parameter constituting the reconfigurable logic circuit to generate a profiling result; initializing the parameter to a maximum value; evaluating throughput of the video sequence with regard to a current parameter value based on the profiling result; decreasing the current parameter value when throughput with regard to the current parameter value is not a maximum value; determining that the current parameter value is an optimal parameter when throughput with regard to the current parameter value is the maximum value; and analyzing the video sequence based on the optimal parameter.
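The descending search above can be sketched as a loop over a profiling result. Here the profile is a stand-in mapping from parameter value to throughput; in the method it would come from profiling the video sequence.

```python
# Sketch of the parameter search: start from the maximum parameter
# value and decrease it until throughput reaches its maximum. The
# profile dict is a hypothetical stand-in for the profiling result.

def find_optimal_parameter(profile, max_value):
    """profile: dict mapping parameter value -> profiled throughput."""
    best_throughput = max(profile.values())
    param = max_value  # initialize the parameter to its maximum value
    while profile[param] != best_throughput:
        param -= 1     # decrease while throughput is not yet maximal
    return param       # largest parameter value achieving max throughput
```

Because the search stops at the first (largest) value reaching maximum throughput, it returns the optimal parameter without scanning the whole range.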
Abstract:
A privacy protection policy is provided in a content sharing system. A method for managing contents in a content sharing system includes receiving a content download request from a first account through a first device; and determining whether to carry out the download by considering at least one of a sharing range of a download-requested content, a content access right of the first account, a content access right of an owner account of the first device, a sharing range of a download folder, and sharing acceptance or rejection of an owner of the content.
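The decision step can be sketched as a predicate over the listed factors. The abstract says "at least one of" these factors is considered; for illustration this sketch assumes all of them must pass.

```python
# Hedged sketch of the download decision. Each argument is the
# boolean outcome of one check named in the abstract; requiring all
# of them is an assumption for this illustration.

def may_download(in_sharing_range, account_has_access,
                 device_owner_has_access, folder_shared, owner_accepts):
    return (in_sharing_range and account_has_access
            and device_owner_has_access and folder_shared
            and owner_accepts)
```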
Abstract:
A task execution method using resources includes receiving an execution request for a first task; analyzing the first task and dividing the first task into a plurality of sub-tasks; identifying a sub-task using a first neural network from among the sub-tasks and dividing the identified sub-task into a plurality of layer tasks corresponding to calculations between layers constituting the first neural network; calculating a deadline time of each of the sub-tasks; scheduling a first sub-task to be scheduled to a first resource group from among the resources; and, when a runtime of the first sub-task exceeds a deadline time of the first sub-task, scheduling a sub-task or a layer task subsequent to the first sub-task to a second resource group.
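The scheduling rule above can be sketched as follows: sub-tasks run on a first resource group until one overruns its deadline, after which subsequent work moves to a second resource group. Runtimes and deadlines are supplied directly here for simplicity; in the method they would be measured and calculated respectively.

```python
# Illustrative sketch of the resource-group switch. The group names
# are placeholders; task analysis and sub-task division are assumed
# to have already produced the ordered (runtime, deadline) pairs.

def schedule(subtasks):
    """subtasks: list of (runtime, deadline) pairs, in execution order."""
    assignments = []
    group = "group1"
    for runtime, deadline in subtasks:
        assignments.append(group)
        if runtime > deadline:
            group = "group2"  # subsequent sub-tasks go to the second group
    return assignments
```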
Abstract:
A task execution method includes receiving an execution request for a first task; analyzing the first task and dividing the first task into sub-tasks, the sub-tasks including a first sub-task and a second sub-task; calculating respective deadline times of the sub-tasks; scheduling the sub-tasks for processing by resources, the resources using neural networks; sequentially executing the sub-tasks using the resources and the neural networks; checking whether respective runtimes of the sub-tasks exceed a corresponding deadline time; and executing the second sub-task, subsequent to executing the first sub-task, using a first pruned neural network generated from the neural networks when the checking indicates that the runtime of the first sub-task exceeds the deadline time of the first sub-task.
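The fallback described above can be sketched in a few lines: when a sub-task overruns its deadline, the next sub-task runs on a pruned (and therefore faster) network. The `prune` step here is a stand-in that merely tags the model; real pruning would remove weights or channels.

```python
# Sketch of the deadline-triggered pruning fallback. Runtimes and
# deadlines are given directly; the string-based "model" is a
# placeholder for an actual neural network.

def run_subtasks(subtasks, model):
    """subtasks: list of (runtime, deadline); returns the model used per task."""
    used = []
    for runtime, deadline in subtasks:
        used.append(model)  # execute this sub-task with the current model
        if runtime > deadline:
            model = model + "_pruned"  # generate a pruned network for what follows
    return used
```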
Abstract:
An electronic device includes a memory configured to store at least one instruction; and at least one processor configured to execute the at least one instruction to: input first data to a first artificial intelligence model including a plurality of convolution blocks sequentially connected with a pooling layer interposed therebetween to obtain a plurality of feature maps that are output by corresponding ones of the plurality of convolution blocks, input the first data and the plurality of feature maps to a second artificial intelligence model including a plurality of local attention blocks sequentially connected to obtain a plurality of attention maps that are output by corresponding ones of the plurality of local attention blocks, output an amplified feature map by amplifying a region corresponding to a last attention map among the plurality of attention maps in a last feature map among the plurality of feature maps, and input the amplified feature map to a classifier to output a classification result for the first data.
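The data flow of the two models can be sketched with toy stand-ins: only the wiring (convolution features, attention maps fed forward, amplification of the attended region in the last feature map, then classification) follows the abstract; the block functions themselves are hypothetical placeholders, not real convolution or attention layers.

```python
import numpy as np

# Toy sketch of the abstract's pipeline. conv_block, attention_block,
# and classify are stand-ins chosen only so the wiring is runnable.

def conv_block(x):
    return x * 0.5                             # stand-in conv + pooling

def attention_block(prev_attention, feat):
    return (feat >= feat.mean()).astype(float)  # stand-in local attention map

def classify(feat):
    return int(feat.sum() > 0)                  # stand-in classifier

def forward(x, num_blocks=3):
    feats = []
    h = x
    for _ in range(num_blocks):                 # first model: conv blocks
        h = conv_block(h)
        feats.append(h)
    a = x
    atts = []
    for feat in feats:                          # second model: attention blocks
        a = attention_block(a, feat)
        atts.append(a)
    # amplify the region indicated by the last attention map
    amplified = feats[-1] * (1.0 + atts[-1])
    return classify(amplified)                  # classification result
```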